
Command Line

Help:

hydrotoolbox --help

about

$ hydrotoolbox about --help
usage: hydrotoolbox about [-h]

Display version number and system information.

options:
  -h, --help  show this help message and exit

baseflow_sep

$ hydrotoolbox baseflow_sep --help
usage: hydrotoolbox baseflow_sep [-h]
                                 {boughton,chapman,chapman_maxwell,eckhardt,ewma,five_day,furey,lyne_hollick,ihacres,ukih,willems,usgs_hysep_fixed,usgs_hysep_local,usgs_hysep_slide} ...

positional arguments:
  {boughton,chapman,chapman_maxwell,eckhardt,ewma,five_day,furey,lyne_hollick,ihacres,ukih,willems,usgs_hysep_fixed,usgs_hysep_local,usgs_hysep_slide}
    boughton            Boughton double-parameter filter [1]_
    chapman             Chapman filter [1]_
    chapman_maxwell     Digital filter (Chapman and Maxwell, 1996)
    eckhardt            Eckhardt filter (Eckhardt, 2005)
    ewma                Exponential Weighted Moving Average (EWMA) filter
                        (Tularam and Ilahee, 2008)
    five_day            Value kept if less than 90 percent of adjacent 5-day
                        blocks.
    furey               Furey digital filter (Furey and Gupta, 2001)
    lyne_hollick        Digital filter [1]_
    ihacres             IHACRES
    ukih                Graphical method developed by UK Institute of
                        Hydrology (UKIH, 1980)
    willems             Digital filter (Willems, 2009)
    usgs_hysep_fixed    USGS HYSEP Fixed interval method.
    usgs_hysep_local    USGS HYSEP Local minimum graphical method (Sloto and
                        Crouse, 1996)
    usgs_hysep_slide    USGS HYSEP sliding interval method

options:
  -h, --help            show this help message and exit

baseflow_sep boughton

$ hydrotoolbox baseflow_sep boughton --help
usage: hydrotoolbox baseflow_sep boughton [-h] [--input_ts INPUT_TS]
                                          [--columns COLUMNS]
                                          [--source_units SOURCE_UNITS]
                                          [--start_date START_DATE]
                                          [--end_date END_DATE]
                                          [--dropna DROPNA] [--clean]
                                          [--round_index ROUND_INDEX]
                                          [--skiprows SKIPROWS]
                                          [--index_type INDEX_TYPE]
                                          [--names NAMES]
                                          [--target_units TARGET_UNITS]
                                          [--print_input]
                                          [--tablefmt TABLEFMT]
                                          [--float_format FLOAT_FORMAT]

::

    bₜ = [k⁄(1+C)] bₜ₋₁ + [C⁄(1+C)] Qₜ

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    k = groundwater recession constant
    C = watershed shape parameter

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names
                        from the first-line header or column numbers.  If using
                        numbers, column number 1 is the first data column.  To pick
                        multiple columns, separate them with commas and no spaces, as
                        used in the `toolbox_utils pick` command.

                        This means you don't have to create a data set with a
                        particular column order; columns can be rearranged as the
                        data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can
                        significantly improve performance by reducing memory and
                        processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead
                        to duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.

baseflow_sep chapman

$ hydrotoolbox baseflow_sep chapman --help
usage: hydrotoolbox baseflow_sep chapman [-h] [-k K] [--input_ts INPUT_TS]
                                         [--columns COLUMNS]
                                         [--source_units SOURCE_UNITS]
                                         [--start_date START_DATE]
                                         [--end_date END_DATE]
                                         [--dropna DROPNA] [--clean]
                                         [--round_index ROUND_INDEX]
                                         [--skiprows SKIPROWS]
                                         [--index_type INDEX_TYPE]
                                         [--names NAMES]
                                         [--target_units TARGET_UNITS]
                                         [--print_input] [--tablefmt TABLEFMT]
                                         [--float_format FLOAT_FORMAT]

::

    bₜ = (3k-1)⁄(3-k) bₜ₋₁ + (1-k)⁄(3-k) (Qₜ + Qₜ₋₁)

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    Qₜ₋₁ = total stream flow for the previous day
    k = groundwater recession constant

options:
  -h, --help            show this help message and exit
  -k K                  [optional, default is None, where k will be calculated from the
                        input data]
                        
                        Groundwater recession constant.  The value of k is between 0 and 1.
                        The number is usually close to 1.
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names
                        from the first-line header or column numbers.  If using
                        numbers, column number 1 is the first data column.  To pick
                        multiple columns, separate them with commas and no spaces, as
                        used in the `toolbox_utils pick` command.

                        This means you don't have to create a data set with a
                        particular column order; columns can be rearranged as the
                        data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can
                        significantly improve performance by reducing memory and
                        processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead
                        to duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
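The Chapman filter equation above can likewise be sketched in Python. This is an illustrative rendering of the recursion, not the hydrotoolbox source; the first-day initialization and the cap at total streamflow are assumed conventions.

```python
def chapman(q, k):
    """Chapman baseflow filter (illustrative sketch).

    q -- sequence of daily total streamflow values
    k -- groundwater recession constant (0 < k < 1)
    """
    b = [q[0]]  # assumed initialization: first day is entirely baseflow
    for t in range(1, len(q)):
        bt = ((3 * k - 1) / (3 - k)) * b[-1] \
             + ((1 - k) / (3 - k)) * (q[t] + q[t - 1])
        b.append(min(bt, q[t]))  # baseflow cannot exceed total streamflow
    return b
```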

baseflow_sep chapman_maxwell

$ hydrotoolbox baseflow_sep chapman_maxwell --help
usage: hydrotoolbox baseflow_sep chapman_maxwell [-h] [-k K]
                                                 [--input_ts INPUT_TS]
                                                 [--columns COLUMNS]
                                                 [--source_units SOURCE_UNITS]
                                                 [--start_date START_DATE]
                                                 [--end_date END_DATE]
                                                 [--dropna DROPNA] [--clean]
                                                 [--round_index ROUND_INDEX]
                                                 [--skiprows SKIPROWS]
                                                 [--index_type INDEX_TYPE]
                                                 [--names NAMES]
                                                 [--target_units TARGET_UNITS]
                                                 [--print_input]
                                                 [--tablefmt TABLEFMT]
                                                 [--float_format FLOAT_FORMAT]

::

    bₜ = k⁄(2-k) bₜ₋₁ + (1-k)⁄(2-k) Qₜ

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    k = groundwater recession constant

options:
  -h, --help            show this help message and exit
  -k K                  [optional, default is None, where k will be calculated from the
                        input data]
                        
                        Groundwater recession constant.  The value of k is between 0 and 1.
                        The number is usually close to 1.
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names
                        from the first-line header or column numbers.  If using
                        numbers, column number 1 is the first data column.  To pick
                        multiple columns, separate them with commas and no spaces, as
                        used in the `toolbox_utils pick` command.

                        This means you don't have to create a data set with a
                        particular column order; columns can be rearranged as the
                        data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can
                        significantly improve performance by reducing memory and
                        processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead
                        to duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
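The Chapman-Maxwell recursion above follows the same pattern and can be sketched in Python. Again this is illustrative only, not the hydrotoolbox source; initialization and the total-flow cap are assumed conventions.

```python
def chapman_maxwell(q, k):
    """Chapman-Maxwell digital baseflow filter (illustrative sketch).

    q -- sequence of daily total streamflow values
    k -- groundwater recession constant (0 < k < 1)
    """
    b = [q[0]]  # assumed initialization: first day is entirely baseflow
    for t in range(1, len(q)):
        bt = (k / (2 - k)) * b[-1] + ((1 - k) / (2 - k)) * q[t]
        b.append(min(bt, q[t]))  # baseflow cannot exceed total streamflow
    return b
```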

baseflow_sep eckhardt

$ hydrotoolbox baseflow_sep eckhardt --help
usage: hydrotoolbox baseflow_sep eckhardt [-h] [--input_ts INPUT_TS]
                                          [--columns COLUMNS] [-k K]
                                          [--bfi_max BFI_MAX]
                                          [--source_units SOURCE_UNITS]
                                          [--start_date START_DATE]
                                          [--end_date END_DATE]
                                          [--dropna DROPNA] [--clean]
                                          [--round_index ROUND_INDEX]
                                          [--skiprows SKIPROWS]
                                          [--index_type INDEX_TYPE]
                                          [--names NAMES]
                                          [--target_units TARGET_UNITS]
                                          [--print_input]
                                          [--tablefmt TABLEFMT]
                                          [--float_format FLOAT_FORMAT]

::

    bₜ = [(1 - BFIₘₐₓ) k bₜ₋₁ + (1 - k) BFIₘₐₓ Qₜ]⁄[1 - (k BFIₘₐₓ)]

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    k = groundwater recession constant
    BFIₘₐₓ = long-term ratio of baseflow to total streamflow
             [values between 0 and 1]

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names
                        from the first-line header or column numbers.  If using
                        numbers, column number 1 is the first data column.  To pick
                        multiple columns, separate them with commas and no spaces, as
                        used in the `toolbox_utils pick` command.

                        This means you don't have to create a data set with a
                        particular column order; columns can be rearranged as the
                        data is read in.
  -k K                  [optional, default is None, where k will be calculated from the
                        input data]
                        
                        Groundwater recession constant.  The value of k is between 0 and 1.
                        The number is usually close to 1.
  --bfi_max BFI_MAX     [optional, default is None, where bfi_max will be calculated from the
                        input data]
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can
                        significantly improve performance by reducing memory and
                        processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead
                        to duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.

baseflow_sep ewma

$ hydrotoolbox baseflow_sep ewma --help
usage: hydrotoolbox baseflow_sep ewma [-h] [--input_ts INPUT_TS]
                                      [--columns COLUMNS]
                                      [--source_units SOURCE_UNITS]
                                      [--start_date START_DATE]
                                      [--end_date END_DATE] [--dropna DROPNA]
                                      [--clean] [--round_index ROUND_INDEX]
                                      [--skiprows SKIPROWS]
                                      [--index_type INDEX_TYPE]
                                      [--names NAMES]
                                      [--target_units TARGET_UNITS]
                                      [--print_input] [--tablefmt TABLEFMT]
                                      [--float_format FLOAT_FORMAT]

Exponential Weighted Moving Average (EWMA) filter (Tularam and Ilahee, 2008)

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This is convenient because the input dataset does not need a
                        particular column order; columns can be rearranged as the data
                        is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting down on memory and processing
                        requirements; however, be cautious about rounding from a small
                        interval to a very coarse one, since that could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates or another epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
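
The EWMA filter is a simple recursive smoother of the streamflow record.  A
minimal sketch of the idea follows; the smoothing parameter ``alpha`` and the
capping of baseflow at total flow are illustrative assumptions, not
hydrotoolbox's exact implementation:

```python
def ewma_baseflow(flow, alpha=0.05):
    """Exponentially weighted moving average of streamflow as baseflow."""
    baseflow = [flow[0]]  # seed the recursion with the first observation
    for q in flow[1:]:
        b = alpha * q + (1.0 - alpha) * baseflow[-1]
        baseflow.append(min(b, q))  # baseflow cannot exceed total flow
    return baseflow

print(ewma_baseflow([10.0, 50.0, 30.0, 20.0, 15.0, 12.0]))
```

A small ``alpha`` gives a heavily smoothed series that tracks the slow
(baseflow) component; a larger ``alpha`` follows the hydrograph more closely.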

baseflow_sep five_day

$ hydrotoolbox baseflow_sep five_day --help
usage: hydrotoolbox baseflow_sep five_day [-h] [--input_ts INPUT_TS]
                                          [--columns COLUMNS]
                                          [--source_units SOURCE_UNITS]
                                          [--start_date START_DATE]
                                          [--end_date END_DATE]
                                          [--dropna DROPNA] [--clean]
                                          [--round_index ROUND_INDEX]
                                          [--skiprows SKIPROWS]
                                          [--index_type INDEX_TYPE]
                                          [--names NAMES]
                                          [--target_units TARGET_UNITS]
                                          [--print_input]
                                          [--tablefmt TABLEFMT]
                                          [--float_format FLOAT_FORMAT]

Value kept if less than 90 percent of adjacent 5-day blocks.

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This is convenient because the input dataset does not need a
                        particular column order; columns can be rearranged as the data
                        is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting down on memory and processing
                        requirements; however, be cautious about rounding from a small
                        interval to a very coarse one, since that could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates or another epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.

baseflow_sep furey

$ hydrotoolbox baseflow_sep furey --help
usage: hydrotoolbox baseflow_sep furey [-h] [-k K] [--c3c1 C3C1]
                                       [--input_ts INPUT_TS]
                                       [--columns COLUMNS]
                                       [--source_units SOURCE_UNITS]
                                       [--start_date START_DATE]
                                       [--end_date END_DATE] [--dropna DROPNA]
                                       [--clean] [--round_index ROUND_INDEX]
                                       [--skiprows SKIPROWS]
                                       [--index_type INDEX_TYPE]
                                       [--names NAMES]
                                       [--target_units TARGET_UNITS]
                                       [--print_input] [--tablefmt TABLEFMT]
                                       [--float_format FLOAT_FORMAT]

This hydrograph separation filter, introduced in 2001, is based on a mass
balance equation for baseflow through a hillside, and its construction is
founded on a physical-statistical theory of low streamflows developed by
Furey and Gupta.

::

    bₜ = k bₜ₋₁ + (1-k) c3c1 (Qₜ₋₁ - bₜ₋₁)

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ₋₁ = total stream flow for the previous day
    k = recession constant [values between 0 and 1]
    c3c1 = ratio of overland flow to groundwater flow, sometimes expressed
           as c3/c1 where c3 is the ratio of groundwater recharge to
           precipitation and c1 is the ratio of overland flow to
           precipitation.

options:
  -h, --help            show this help message and exit
  -k K                  [optional, default is None, where k will be calculated from the
                        input data]
                        
                        Groundwater recession constant.  The value of k is between 0 and 1.
                        The number is usually close to 1.
  --c3c1 C3C1           [optional, default is None.]
                        
                        Value from 0.001 to 10.
                        
                        Ratio of overland flow to groundwater flow.  If set to None will be
                        estimated from the flow data.
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This is convenient because the input dataset does not need a
                        particular column order; columns can be rearranged as the data
                        is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting down on memory and processing
                        requirements; however, be cautious about rounding from a small
                        interval to a very coarse one, since that could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates or another epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
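
The Furey-Gupta recursion shown in the help above can be sketched directly.
The parameter values, the choice of the first observation as the initial
baseflow, and the clamping of baseflow to the range [0, Q] are illustrative
assumptions; hydrotoolbox estimates k and c3c1 from the data when they are
not supplied:

```python
def furey_baseflow(flow, k=0.95, c3c1=0.1):
    """b_t = k*b_{t-1} + (1-k)*c3c1*(Q_{t-1} - b_{t-1})."""
    baseflow = [flow[0]]  # assumed initial condition
    for t in range(1, len(flow)):
        b_prev = baseflow[-1]
        b = k * b_prev + (1.0 - k) * c3c1 * (flow[t - 1] - b_prev)
        baseflow.append(min(max(b, 0.0), flow[t]))  # keep 0 <= b <= Q
    return baseflow
```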

baseflow_sep ihacres

$ hydrotoolbox baseflow_sep ihacres --help
usage: hydrotoolbox baseflow_sep ihacres [-h] [--input_ts INPUT_TS]
                                         [--columns COLUMNS]
                                         [--source_units SOURCE_UNITS]
                                         [--start_date START_DATE]
                                         [--end_date END_DATE]
                                         [--dropna DROPNA] [--clean]
                                         [--round_index ROUND_INDEX]
                                         [--skiprows SKIPROWS]
                                         [--index_type INDEX_TYPE]
                                         [--names NAMES]
                                         [--target_units TARGET_UNITS]
                                         [--print_input] [--tablefmt TABLEFMT]
                                         [--float_format FLOAT_FORMAT]
                                         k C a

Jakeman-Hornberger digital filter [1]_

::

    bₜ = k/(1 + C) bₜ₋₁ + C/(1 + C) (Qₜ + a Qₜ₋₁)

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    Qₜ₋₁ = total stream flow for the previous day

positional arguments:
  k                     k
                        coefficient
  C                     C
                        coefficient
  a                     a
                        coefficient

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This is convenient because the input dataset does not need a
                        particular column order; columns can be rearranged as the data
                        is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting down on memory and processing
                        requirements; however, be cautious about rounding from a small
                        interval to a very coarse one, since that could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates or another epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True', the input columns will be included in the
                        output table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
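
The Jakeman-Hornberger recursion above takes k, C and a as the positional
arguments of this subcommand.  A minimal sketch follows; the parameter
values, the initial baseflow, and the clamping of baseflow to [0, Q] are
illustrative assumptions:

```python
def ihacres_baseflow(flow, k=0.9, C=0.1, a=-0.8):
    """b_t = k/(1+C)*b_{t-1} + C/(1+C)*(Q_t + a*Q_{t-1})."""
    baseflow = [flow[0]]  # assumed initial condition
    for t in range(1, len(flow)):
        b = (k / (1.0 + C)) * baseflow[-1] \
            + (C / (1.0 + C)) * (flow[t] + a * flow[t - 1])
        baseflow.append(min(max(b, 0.0), flow[t]))  # keep 0 <= b <= Q
    return baseflow
```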

baseflow_sep lyne_hollick

$ hydrotoolbox baseflow_sep lyne_hollick --help
usage: hydrotoolbox baseflow_sep lyne_hollick [-h] [--input_ts INPUT_TS]
                                              [--alpha ALPHA]
                                              [--columns COLUMNS]
                                              [--source_units SOURCE_UNITS]
                                              [--start_date START_DATE]
                                              [--end_date END_DATE]
                                              [--dropna DROPNA] [--clean]
                                              [--round_index ROUND_INDEX]
                                              [--skiprows SKIPROWS]
                                              [--index_type INDEX_TYPE]
                                              [--names NAMES]
                                              [--target_units TARGET_UNITS]
                                              [--print_input]
                                              [--tablefmt TABLEFMT]
                                              [--float_format FLOAT_FORMAT]

::

    bₜ = α bₜ₋₁ + (1-α)/2 (Qₜ + Qₜ₋₁)

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    Qₜ₋₁ = total stream flow for the previous day
    α = recession constant [values between 0 and 1]

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --alpha ALPHA         Catchment constant (value between 0 and 1).  Default is 0.925.
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from
                        the first-line header or column numbers.  If using numbers,
                        column number 1 is the first data column.  To pick multiple
                        columns, separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a
                        ':'-delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  Rounding can
                        significantly improve performance since it cuts down on memory
                        and processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead to
                        duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
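
The Lyne-Hollick recursion shown in the help text above is simple to implement. Here is a minimal sketch in plain Python (an illustration only, not the hydrotoolbox implementation; `lyne_hollick` is my own name, and clipping baseflow to total flow is a common convention rather than part of the equation itself):

```python
def lyne_hollick(flow, alpha=0.925):
    """One forward pass of the Lyne-Hollick baseflow recursion:

        b[t] = alpha * b[t-1] + (1 - alpha) / 2 * (Q[t] + Q[t-1])

    with baseflow clipped so it never exceeds the total streamflow.
    """
    baseflow = [flow[0]]  # seed the recursion with the first observation
    for t in range(1, len(flow)):
        b = alpha * baseflow[-1] + (1 - alpha) / 2 * (flow[t] + flow[t - 1])
        baseflow.append(min(b, flow[t]))  # baseflow cannot exceed streamflow
    return baseflow
```

In practice the filter is usually run in several forward and backward passes to smooth the result; only a single forward pass is shown here.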

baseflow_sep ukih

$ hydrotoolbox baseflow_sep ukih --help
usage: hydrotoolbox baseflow_sep ukih [-h] [--input_ts INPUT_TS]
                                      [--columns COLUMNS]
                                      [--source_units SOURCE_UNITS]
                                      [--start_date START_DATE]
                                      [--end_date END_DATE] [--dropna DROPNA]
                                      [--clean] [--round_index ROUND_INDEX]
                                      [--skiprows SKIPROWS]
                                      [--index_type INDEX_TYPE]
                                      [--names NAMES]
                                      [--target_units TARGET_UNITS]
                                      [--print_input] [--tablefmt TABLEFMT]
                                      [--float_format FLOAT_FORMAT]

Graphical method developed by UK Institute of Hydrology (UKIH, 1980)

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from
                        the first-line header or column numbers.  If using numbers,
                        column number 1 is the first data column.  To pick multiple
                        columns, separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a
                        ':'-delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  Rounding can
                        significantly improve performance since it cuts down on memory
                        and processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead to
                        duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
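
The UKIH smoothed-minima procedure cuts the record into non-overlapping 5-day blocks, takes each block's minimum, and marks a minimum as a turning point when 0.9 times its value falls below both neighbouring block minima. A minimal sketch of that turning-point search (the names are mine, and the final step of interpolating baseflow between turning points is omitted):

```python
def ukih_turning_points(flow, block=5, factor=0.9):
    """Find UKIH-style turning points: a block minimum is a turning point
    when factor * minimum is below both neighbouring block minima."""
    # (index, value) of the minimum of each non-overlapping block
    minima = []
    for start in range(0, len(flow) - block + 1, block):
        chunk = flow[start:start + block]
        i = min(range(len(chunk)), key=chunk.__getitem__)
        minima.append((start + i, chunk[i]))
    turning = []
    for (_, m0), (i1, m1), (_, m2) in zip(minima, minima[1:], minima[2:]):
        if factor * m1 < m0 and factor * m1 < m2:
            turning.append((i1, m1))
    return turning
```

Baseflow would then be linearly interpolated between the turning points and clipped to the observed flow; that step is left out of this sketch.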

baseflow_sep usgs_hysep_fixed

$ hydrotoolbox baseflow_sep usgs_hysep_fixed --help
usage: hydrotoolbox baseflow_sep usgs_hysep_fixed [-h] [--num_days NUM_DAYS]
                                                  [--area AREA]
                                                  [--input_ts INPUT_TS]
                                                  [--columns COLUMNS]
                                                  [--source_units SOURCE_UNITS]
                                                  [--start_date START_DATE]
                                                  [--end_date END_DATE]
                                                  [--dropna DROPNA] [--clean]
                                                  [--round_index ROUND_INDEX]
                                                  [--skiprows SKIPROWS]
                                                  [--index_type INDEX_TYPE]
                                                  [--names NAMES]
                                                  [--target_units TARGET_UNITS]
                                                  [--print_input]
                                                  [--tablefmt TABLEFMT]
                                                  [--float_format FLOAT_FORMAT]

Sloto, Ronald A., and Michele Y. Crouse. “HYSEP: A Computer Program for
Streamflow Hydrograph Separation and Analysis.” USGS Numbered Series.
Water-Resources Investigations Report. Geological Survey (U.S.), 1996.
http://pubs.er.usgs.gov/publication/wri964040

options:
  -h, --help            show this help message and exit
  --num_days NUM_DAYS   [optional, default is None, in which case N is set to 5 days]

                        Override the calculation of N days from the area.  This is
                        useful for testing the effect of different N days on the
                        baseflow separation.
  --area AREA           [optional, default is None, in which case N is set to 5 days]
                        
                        Basin area in km^2.
                        
                        The area is used to estimate N days using the following equation:
                        
                        .. math::

                            N = (0.38610216 \cdot A)^{0.2}

                        The equation in the HYSEP report expects the area in square
                        miles; the form above, used in hydrotoolbox, takes square
                        kilometers (0.38610216 converts km^2 to mi^2).
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from
                        the first-line header or column numbers.  If using numbers,
                        column number 1 is the first data column.  To pick multiple
                        columns, separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a
                        ':'-delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  Rounding can
                        significantly improve performance since it cuts down on memory
                        and processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead to
                        duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
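
The N-days rule above is straightforward to compute directly. A sketch of the interval calculation (`hysep_interval` is my own helper, not a hydrotoolbox function; the odd-integer bound of 3 to 11 days follows the 2N* interval rule in the HYSEP report):

```python
def hysep_interval(area_km2=None, num_days=None):
    """Compute the HYSEP separation interval: N = (0.38610216 * A)**0.2 with
    A in km^2 (0.38610216 converts km^2 to mi^2), then take the odd integer
    nearest 2*N, bounded to the range 3..11 days (the 2N* rule)."""
    if num_days is not None:
        n = float(num_days)
    elif area_km2 is not None:
        n = (0.38610216 * area_km2) ** 0.2
    else:
        n = 5.0  # default when neither area nor num_days is given
    interval = int(round(2 * n))
    if interval % 2 == 0:
        interval += 1  # force an odd window so it is centred on a day
    return max(3, min(11, interval))
```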

baseflow_sep usgs_hysep_local

$ hydrotoolbox baseflow_sep usgs_hysep_local --help
usage: hydrotoolbox baseflow_sep usgs_hysep_local [-h] [--num_days NUM_DAYS]
                                                  [--area AREA]
                                                  [--input_ts INPUT_TS]
                                                  [--columns COLUMNS]
                                                  [--source_units SOURCE_UNITS]
                                                  [--start_date START_DATE]
                                                  [--end_date END_DATE]
                                                  [--dropna DROPNA] [--clean]
                                                  [--round_index ROUND_INDEX]
                                                  [--skiprows SKIPROWS]
                                                  [--index_type INDEX_TYPE]
                                                  [--names NAMES]
                                                  [--target_units TARGET_UNITS]
                                                  [--print_input]
                                                  [--tablefmt TABLEFMT]
                                                  [--float_format FLOAT_FORMAT]

USGS HYSEP Local minimum graphical method (Sloto and Crouse, 1996)

options:
  -h, --help            show this help message and exit
  --num_days NUM_DAYS   [optional, default is None, in which case N is set to 5 days]

                        Override the calculation of N days from the area.  This is
                        useful for testing the effect of different N days on the
                        baseflow separation.
  --area AREA           [optional, default is None, in which case N is set to 5 days]
                        
                        Basin area in km^2.
                        
                        The area is used to estimate N days using the following equation:
                        
                        .. math::

                            N = (0.38610216 \cdot A)^{0.2}

                        The equation in the HYSEP report expects the area in square
                        miles; the form above, used in hydrotoolbox, takes square
                        kilometers (0.38610216 converts km^2 to mi^2).
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from
                        the first-line header or column numbers.  If using numbers,
                        column number 1 is the first data column.  To pick multiple
                        columns, separate them by commas with no spaces, as in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If a unit is specified for a column as the second field of a
                        ':'-delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  Rounding can
                        significantly improve performance since it cuts down on memory
                        and processing requirements; however, be cautious about rounding
                        from a small interval to a very coarse one, which could lead to
                        duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.

baseflow_sep usgs_hysep_slide

$ hydrotoolbox baseflow_sep usgs_hysep_slide --help
usage: hydrotoolbox baseflow_sep usgs_hysep_slide [-h] [--num_days NUM_DAYS]
                                                  [--area AREA]
                                                  [--input_ts INPUT_TS]
                                                  [--columns COLUMNS]
                                                  [--source_units SOURCE_UNITS]
                                                  [--start_date START_DATE]
                                                  [--end_date END_DATE]
                                                  [--dropna DROPNA] [--clean]
                                                  [--round_index ROUND_INDEX]
                                                  [--skiprows SKIPROWS]
                                                  [--index_type INDEX_TYPE]
                                                  [--names NAMES]
                                                  [--target_units TARGET_UNITS]
                                                  [--print_input]
                                                  [--tablefmt TABLEFMT]
                                                  [--float_format FLOAT_FORMAT]

The USGS HYSEP sliding interval method described in
`Sloto and Crouse, 1996`

The flow series is filtered with scipy.ndimage.generic_filter1d using the
numpy.nanmin function over a window of size `size`.

Sloto, Ronald A., and Michele Y. Crouse. “HYSEP: A Computer Program for
Streamflow Hydrograph Separation and Analysis.” USGS Numbered Series.
Water-Resources Investigations Report. Geological Survey (U.S.), 1996.
http://pubs.er.usgs.gov/publication/wri964040.
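
The sliding-interval idea described above, a running minimum over a centred window of `size` points, can be mimicked without scipy. This pure-Python stand-in is an illustration only (`sliding_min` is my own name, not a hydrotoolbox or scipy function); like numpy.nanmin, it ignores NaN values in the window:

```python
import math

def sliding_min(flow, size):
    """Replace each point with the minimum over a centred window of `size`
    points, ignoring NaNs (mirroring numpy.nanmin on each window)."""
    half = size // 2
    out = []
    for i in range(len(flow)):
        window = [v for v in flow[max(0, i - half):i + half + 1]
                  if not math.isnan(v)]
        out.append(min(window) if window else float("nan"))
    return out
```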

options:
  -h, --help            show this help message and exit
  --num_days NUM_DAYS   [optional, default is None, where N is then set to 5 days]
                        
                        Override the calculation of N days using the area.  This is useful for
                        testing the effect of different N days on the baseflow separation.
  --area AREA           [optional, default is None, where N is then set to 5 days]
                        
                        Basin area in km^2.
                        
                        The area is used to estimate N days using the following equation:
                        
                        .. math::
                        
                            N = \left(0.38610216 \, A\right)^{0.2}
                        
                        The equation in the HYSEP report expects the area in square miles; the
                        form above, used in hydrotoolbox, takes the area in square kilometers.
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.
                        
                        This means you do not have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance since it cuts down on memory and processing
                        requirements; however, be cautious about rounding from a fine
                        interval to a very coarse one, as this could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
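The sliding-interval mechanics above (N estimated from basin area, then a moving minimum over the flow series) can be sketched as follows. This is an illustrative approximation, not hydrotoolbox's exact implementation: it uses `scipy.ndimage.generic_filter` with `numpy.nanmin`, and the rounding of the window to an odd integer clamped to 3-11 days is a HYSEP convention assumed here, not stated in the help text.

```python
import numpy as np
from scipy.ndimage import generic_filter

def n_days(area_km2):
    # N = (0.38610216 * A)^0.2, with A in km^2
    # (0.38610216 converts km^2 to mi^2; the HYSEP report uses mi^2).
    return (0.38610216 * area_km2) ** 0.2

def sliding_interval(flow, area_km2):
    # Window width 2N* is an odd integer near 2N, clamped to 3..11 days
    # (assumed HYSEP convention).
    n = n_days(area_km2)
    size = min(11, max(3, int(2 * n) | 1))
    # Moving minimum over the window: each point becomes the lowest
    # flow observed within `size` days centered on it.
    return generic_filter(flow, np.nanmin, size=size)

flow = np.array([10.0, 8.0, 12.0, 30.0, 18.0, 9.0, 7.0, 11.0])
base = sliding_interval(flow, area_km2=2.59)  # ~1 mi^2, so size = 3
```

By construction the moving minimum never exceeds the total flow, so no extra clamping step is needed.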

baseflow_sep willems

$ hydrotoolbox baseflow_sep willems --help
usage: hydrotoolbox baseflow_sep willems [-h] [--input_ts INPUT_TS]
                                         [--columns COLUMNS]
                                         [--source_units SOURCE_UNITS]
                                         [--start_date START_DATE]
                                         [--end_date END_DATE]
                                         [--dropna DROPNA] [--clean]
                                         [--round_index ROUND_INDEX]
                                         [--skiprows SKIPROWS]
                                         [--index_type INDEX_TYPE]
                                         [--names NAMES]
                                         [--target_units TARGET_UNITS]
                                         [--print_input] [--tablefmt TABLEFMT]
                                         [--float_format FLOAT_FORMAT]

::

    v = (1 - w) * (1 - k) / (2 * w)
    bₜ = (k - v)/(1 + v) · bₜ₋₁ + v/(1 + v) · (Qₜ + Qₜ₋₁)

    bₜ = baseflow for the current day
    bₜ₋₁ = baseflow for the previous day
    Qₜ = total stream flow for the current day
    Qₜ₋₁ = total stream flow for the previous day

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.
                        
                        This means you do not have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance since it cuts down on memory and processing
                        requirements; however, be cautious about rounding from a fine
                        interval to a very coarse one, as this could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --print_input         [optional, default is False, output format]
                        
                        If set to 'True' will include the input columns in the output
                        table.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
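The Willems recursion above can be transcribed directly. This is a sketch only: the parameter values `k` and `w`, the initialization b₀ = Q₀, and the clamp b ≤ Q are illustrative assumptions, not taken from the help text.

```python
def willems_baseflow(q, k=0.95, w=0.25):
    """Recursive two-parameter digital filter of Willems (2009).

    k and w here are illustrative values; b[0] = q[0] and the clamp
    b[t] <= q[t] are common conventions assumed for this sketch.
    """
    v = (1 - w) * (1 - k) / (2 * w)
    b = [q[0]]
    for t in range(1, len(q)):
        bt = (k - v) / (1 + v) * b[-1] + v / (1 + v) * (q[t] + q[t - 1])
        b.append(min(bt, q[t]))  # baseflow cannot exceed total flow
    return b

# For a constant flow of 10, the filtered baseflow decays from 10
# toward the filter's steady state rather than staying at 10.
flow = [10.0, 10.0, 10.0, 10.0, 10.0]
base = willems_baseflow(flow)
```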

exceedance_time

$ hydrotoolbox exceedance_time --help
usage: hydrotoolbox exceedance_time [-h] [--input_ts INPUT_TS]
                                    [--delays DELAYS]
                                    [--under_over UNDER_OVER]
                                    [--time_units TIME_UNITS]
                                    [--columns COLUMNS]
                                    [--source_units SOURCE_UNITS]
                                    [--start_date START_DATE]
                                    [--end_date END_DATE] [--dropna DROPNA]
                                    [--clean] [--round_index ROUND_INDEX]
                                    [--skiprows SKIPROWS]
                                    [--index_type INDEX_TYPE] [--names NAMES]
                                    [--target_units TARGET_UNITS]
                                    [--tablefmt TABLEFMT]
                                    [--float_format FLOAT_FORMAT]
                                    [thresholds ...]

Calculate the time that a time series exceeds (or is below) a threshold.

positional arguments:
  thresholds

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   [optional though required if using within Python, default is '-'
                        (stdin)]
                        
                        Whether from a file or standard input, data requires a single line
                        header of column names.  The default header is the first line of
                        the input, but this can be changed for CSV files using the
                        'skiprows' option.
                        
                        Most common date formats can be used, but the closer to ISO 8601
                        date/time standard the better.
                        
                        Comma-separated values (CSV) files or tab-separated values (TSV)::
                        
                            File separators will be automatically detected.
                        
                            Columns can be selected by name or index, where the index for
                            data columns starts at 1.
                        
                        Command line examples:
                        
                            +---------------------------------+---------------------------+
                            | Keyword Example                 | Description               |
                            +=================================+===========================+
                            | --input_ts=fn.csv               | read all columns from     |
                            |                                 | 'fn.csv'                  |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,1           | read data columns 2 and 1 |
                            |                                 | from 'fn.csv'             |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,skiprows=2  | read data column 2 from   |
                            |                                 | 'fn.csv', skipping first  |
                            |                                 | 2 rows so header is read  |
                            |                                 | from third row            |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.xlsx,2,Sheet21    | read all data from 2nd    |
                            |                                 | sheet, then all data      |
                            |                                 | from "Sheet21" of         |
                            |                                 | 'fn.xlsx'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.hdf5,Table12,T2   | read all data from table  |
                            |                                 | "Table12" then all data   |
                            |                                 | from table "T2" of        |
                            |                                 | 'fn.hdf5'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.wdm,210,110       | read DSNs 210, then 110   |
                            |                                 | from 'fn.wdm'             |
                            +---------------------------------+---------------------------+
                            | --input_ts='-'                  | read all columns from     |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                            | --input_ts='-' --columns=4,1    | read column 4 and 1 from  |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                        
                        If working with CSV or TSV files you can use redirection rather
                        than `--input_ts=fname.csv`.  The following are identical:
                        
                        From a file:
                        
                            command subcmd --input_ts=fname.csv
                        
                        From standard input (since '--input_ts=-' is the default):
                        
                            command subcmd < fname.csv
                        
                        Can also combine commands by piping:
                        
                            command subcmd < filein.csv | command subcmd1 > fileout.csv
                        
                        Python library examples::
                        
                            You must use the `input_ts=...` option where `input_ts` can be
                            one of a [pandas DataFrame, pandas Series, dict, tuple, list,
                            StringIO, or file name].
  --delays DELAYS       [optional, default 0]
                        
                        List of delays to calculate exceedance for.  This can be an empty
                        list, in which case all delays are 0.  If delays are given, each
                        threshold requires a corresponding delay term.
  --under_over UNDER_OVER
                        [optional, default "over"]
                        
                        Whether to calculate exceedance or under-exceedance.
  --time_units TIME_UNITS
                        [optional, default "day"]
                        
                        Units for the delays and the returned exceedance time.  Can be any
                        of the following strings: "year", "month", "day", "hour", "min", or "sec".
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.
                        
                        This means you do not have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing duplicate
                        index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance since it cuts down on memory and processing
                        requirements; however, be cautious about rounding from a fine
                        interval to a very coarse one, as this could lead to duplicate
                        values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
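The core of the exceedance-time calculation can be sketched with pandas. This is a simplified illustration, not hydrotoolbox's actual implementation: it ignores the `delays` option, handles a single column, and assumes each value persists until the next timestamp (a step-function interpretation).

```python
import pandas as pd

def exceedance_time(series, threshold, under=False):
    """Total days the series is above (or below) `threshold`,
    assuming each value holds until the next timestamp."""
    # Duration each sample "covers": the gap to the next timestamp.
    dt = series.index.to_series().diff().shift(-1)
    dt.iloc[-1] = pd.Timedelta(0)  # the last sample spans no interval
    mask = series < threshold if under else series > threshold
    return dt[mask].sum() / pd.Timedelta(days=1)

idx = pd.date_range("2000-01-01", periods=5, freq="D")
q = pd.Series([1.0, 5.0, 6.0, 2.0, 7.0], index=idx)
```

With daily data, `exceedance_time(q, 4)` counts one day for each interior sample above the threshold; the trailing sample contributes nothing because no interval follows it.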

flow_duration

$ hydrotoolbox flow_duration --help
usage: hydrotoolbox flow_duration [-h] [--input_ts INPUT_TS]
                                  [--exceedance_probabilities EXCEEDANCE_PROBABILITIES]
                                  [--columns COLUMNS]
                                  [--source_units SOURCE_UNITS]
                                  [--start_date START_DATE]
                                  [--end_date END_DATE] [--dropna DROPNA]
                                  [--clean] [--round_index ROUND_INDEX]
                                  [--skiprows SKIPROWS]
                                  [--index_type INDEX_TYPE] [--names NAMES]
                                  [--target_units TARGET_UNITS]
                                  [--tablefmt TABLEFMT]
                                  [--float_format FLOAT_FORMAT]

Flow duration.

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   [optional though required if using within Python, default is '-'
                        (stdin)]
                        
                        Whether from a file or standard input, data requires a single line
                        header of column names.  The default header is the first line of
                        the input, but this can be changed for CSV files using the
                        'skiprows' option.
                        
                        Most common date formats can be used, but the closer to ISO 8601
                        date/time standard the better.
                        
                        Comma-separated values (CSV) files or tab-separated values (TSV)::
                        
                            File separators will be automatically detected.
                        
                            Columns can be selected by name or index, where the index for
                            data columns starts at 1.
                        
                        Command line examples:
                        
                            +---------------------------------+---------------------------+
                            | Keyword Example                 | Description               |
                            +=================================+===========================+
                            | --input_ts=fn.csv               | read all columns from     |
                            |                                 | 'fn.csv'                  |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,1           | read data columns 2 and 1 |
                            |                                 | from 'fn.csv'             |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,skiprows=2  | read data column 2 from   |
                            |                                 | 'fn.csv', skipping first  |
                            |                                 | 2 rows so header is read  |
                            |                                 | from third row            |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.xlsx,2,Sheet21    | read all data from 2nd    |
                            |                                 | sheet, then all data      |
                            |                                 | from "Sheet21" of         |
                            |                                 | 'fn.xlsx'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.hdf5,Table12,T2   | read all data from table  |
                            |                                 | "Table12" then all data   |
                            |                                 | from table "T2" of        |
                            |                                 | 'fn.hdf5'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.wdm,210,110       | read DSNs 210, then 110   |
                            |                                 | from 'fn.wdm'             |
                            +---------------------------------+---------------------------+
                            | --input_ts='-'                  | read all columns from     |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                            | --input_ts='-' --columns=4,1    | read column 4 and 1 from  |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                        
                        If working with CSV or TSV files, you can use redirection
                        rather than `--input_ts=fname.csv`.  The following are
                        identical:
                        
                        From a file:
                        
                            command subcmd --input_ts=fname.csv
                        
                        From standard input (since '--input_ts=-' is the default):
                        
                            command subcmd < fname.csv
                        
                        Can also combine commands by piping:
                        
                            command subcmd < filein.csv | command subcmd1 > fileout.csv
                        
                        Python library examples::
                        
                            You must use the `input_ts=...` keyword, where `input_ts`
                            can be a pandas DataFrame, pandas Series, dict, tuple,
                            list, StringIO, or file name.
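
The `fn.csv,2,skiprows=2` form above can be pictured in plain pandas terms. This is only an illustration of what the option does (the file contents are invented; this is not the hydrotoolbox parser itself):

```python
# Rough pandas equivalent of `--input_ts=fn.csv,2,skiprows=2`:
# skip the first two rows, read the header from the third row,
# then keep the index plus data column 2.
from io import StringIO

import pandas as pd

raw = StringIO(
    "junk line 1\n"
    "junk line 2\n"
    "Datetime,flow,stage\n"
    "2000-01-01,10.0,1.2\n"
    "2000-01-02,12.0,1.3\n"
)
df = pd.read_csv(raw, skiprows=2, index_col=0, parse_dates=True)
# Column 1 ("flow") is the first data column, so column 2 is "stage".
col2 = df.iloc[:, [1]]
```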
  --exceedance_probabilities EXCEEDANCE_PROBABILITIES
                        [optional, default: (99.5, 99, 98, 95, 90, 75, 50, 25, 10, 5, 2, 1,
                        0.5)]
                        Exceedance probabilities
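
As a sketch of what these probabilities mean: the p-percent exceedance value is the flow equaled or exceeded p percent of the time, found by interpolating between the ordered (descending) flows. A minimal plain-Python version on made-up flows (illustrative only, not the hydrotoolbox code):

```python
def exceedance_value(flows, p):
    """p-percent exceedance value, interpolated between the
    ordered (descending) flows."""
    desc = sorted(flows, reverse=True)
    pos = (p / 100.0) * (len(desc) - 1)  # position along the sorted record
    lo = int(pos)
    hi = min(lo + 1, len(desc) - 1)
    return desc[lo] + (pos - lo) * (desc[hi] - desc[lo])

flows = [5.0, 12.0, 3.0, 20.0, 8.0, 15.0, 6.0, 10.0]
print(exceedance_value(flows, 50))  # 9.0: half the record is at or above this flow
```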
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate by commas with no spaces. As used in `toolbox_utils pick`
                        command.
                        
                        This means you don't have to create a data set with a
                        particular column order; you can rearrange columns as the
                        data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index, removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  Can significantly
                        improve performance by cutting down on memory and processing
                        requirements; however, be cautious about rounding a fine
                        interval to a very coarse one, since that could lead to
                        duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
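
A quick, self-contained check of the callable form (invented file contents, shown with pandas directly rather than through hydrotoolbox):

```python
# skiprows as a callable: drop rows 0 and 2 (0-indexed) of a
# hypothetical file whose first and third lines are comments.
from io import StringIO

import pandas as pd

raw = StringIO(
    "# generated by some sensor\n"
    "Datetime,flow\n"
    "# units: cfs\n"
    "2000-01-01,10.0\n"
    "2000-01-02,12.0\n"
)
df = pd.read_csv(raw, skiprows=lambda x: x in [0, 2], index_col=0)
```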
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.

indices

$ hydrotoolbox indices --help
usage: hydrotoolbox indices [-h] [--water_year WATER_YEAR]
                            [--drainage_area DRAINAGE_AREA] [--use_median]
                            [--input_ts INPUT_TS] [--columns COLUMNS]
                            [--source_units SOURCE_UNITS]
                            [--start_date START_DATE] [--end_date END_DATE]
                            [--dropna DROPNA] [--clean]
                            [--round_index ROUND_INDEX] [--skiprows SKIPROWS]
                            [--index_type INDEX_TYPE] [--names NAMES]
                            [--target_units TARGET_UNITS]
                            [--tablefmt TABLEFMT]
                            [--float_format FLOAT_FORMAT]
                            indice_codes
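
Many of the codes in the table below are small aggregations over the daily record. For instance, MA3 (the mean of the per-year coefficients of variation) can be sketched in plain Python on invented data; this is an illustration, not the hydrotoolbox implementation:

```python
from statistics import mean, pstdev

# Hypothetical daily mean flows grouped by calendar year (shortened records).
daily = {
    2000: [10.0, 12.0, 8.0, 10.0],
    2001: [20.0, 22.0, 18.0, 20.0],
}
# Coefficient of variation (percent) for each year, then the mean of those.
annual_cv = [100.0 * pstdev(q) / mean(q) for q in daily.values()]
ma3 = mean(annual_cv)
```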

+------+------------------------------------------------------------------+
| Code | Description                                                      |
+======+==================================================================+
| MA1  | Mean of the daily mean flow values for the entire flow record.   |
|      | cubic feet per second—temporal                                   |
+------+------------------------------------------------------------------+
| MA2  | Median of the daily mean flow values for the entire flow record. |
|      | cubic feet per second—temporal                                   |
+------+------------------------------------------------------------------+
| MA3  | Mean (or median) of the coefficients of variation (standard      |
|      | deviation/mean) for each year.  Compute the coefficient of       |
|      | variation for each year of daily flows. Compute the mean of the  |
|      | annual coefficients of variation. percent—temporal               |
+------+------------------------------------------------------------------+
| MA4  | Standard deviation of the percentiles of the logs of the entire  |
|      | flow record divided by the mean of percentiles of the logs.      |
|      | Compute the log10 of the daily flows for the entire record.      |
|      | Compute the 5th, 10th, 15th, 20th, 25th, 30th, 35th, 40th, 45th, |
|      | 50th, 55th, 60th, 65th, 70th, 75th, 80th, 85th, 90th, and 95th   |
|      | percentiles for the logs of the entire flow record. Percentiles  |
|      | are computed by interpolating between the ordered (ascending)    |
|      | logs of the flow values. Compute the standard deviation and mean |
|      | for the percentile values. Divide the standard deviation by the  |
|      | mean. percent—spatial                                            |
+------+------------------------------------------------------------------+
| MA5  | The skewness of the entire flow record is computed as the mean   |
|      | for the entire flow record (MA1) divided by the median (MA2) for |
|      | the entire flow record. dimensionless—spatial                    |
+------+------------------------------------------------------------------+
| MA6  | Range in daily flows is the ratio of the 10-percent to           |
|      | 90-percent exceedance values for the entire flow record. Compute |
|      | the 5-percent to 95-percent exceedance values for the entire     |
|      | flow record. Exceedance is computed by interpolating between the |
|      | ordered (descending) flow values.  Divide the 10-percent         |
|      | exceedance value by the 90-percent value. dimensionless—spatial  |
+------+------------------------------------------------------------------+
| MA7  | Range in daily flows is computed like MA6, except using the 20   |
|      | percent and 80 percent exceedance values. Divide the 20 percent  |
|      | exceedance value by the 80 percent value. dimensionless—spatial  |
+------+------------------------------------------------------------------+
| MA8  | Range in daily flows is computed like MA6, except using the      |
|      | 25-percent and 75-percent exceedance values. Divide the          |
|      | 25-percent exceedance value by the 75-percent value.             |
|      | dimensionless—spatial                                            |
+------+------------------------------------------------------------------+
| MA9  | Spread in daily flows is the ratio of the difference between the |
|      | 90th and 10th percentile of the logs of the flow data to the log |
|      | of the median of the entire flow record. Compute the log10 of    |
|      | the daily flows for the entire record.  Compute the 5th, 10th,   |
|      | 15th, 20th, 25th, 30th, 35th, 40th, 45th, 50th, 55th, 60th,      |
|      | 65th, 70th, 75th, 80th, 85th, 90th, and 95th percentiles for the |
|      | logs of the entire flow record. Percentiles are computed by      |
|      | interpolating between the ordered (ascending) logs of the flow   |
|      | values.  Compute MA9 as (90th - 10th) / log10(MA2).              |
|      | dimensionless—spatial                                            |
+------+------------------------------------------------------------------+
| MA10 | Spread in daily flows is computed like MA9, except using the     |
|      | 20th and 80th percentiles. dimensionless—spatial                 |
+------+------------------------------------------------------------------+
| MA11 | Spread in daily flows is computed like MA9, except using the     |
|      | 25th and 75th percentiles. dimensionless—spatial                 |
+------+------------------------------------------------------------------+
| MA12 | Means (or medians) of monthly flow values. Compute the means     |
| to   | for each month over the entire flow record. For example, MA12    |
| MA23 | is the mean of all January flow values over the entire record.   |
|      | cubic feet per second—temporal                                   |
+------+------------------------------------------------------------------+
| MA24 | Variability (coefficient of variation) of monthly flow values.   |
| to   | Compute the standard deviation for each month in each year over  |
| MA35 | the entire flow record. Divide the standard deviation by the     |
|      | mean for each month. Average (or take median of) these values    |
|      | for each month across all years. percent—temporal                |
+------+------------------------------------------------------------------+
| MA36 | Variability across monthly flows. Compute the minimum, maximum,  |
|      | and mean flows for each month in the entire flow record.  MA36   |
|      | is the maximum monthly flow minus the minimum monthly flow       |
|      | divided by the median monthly flow. dimensionless-spatial        |
+------+------------------------------------------------------------------+
| MA37 | Variability across monthly flows. Compute the first (25th        |
|      | percentile) and third (75th percentile) quartiles for the        |
|      | monthly means (every month in the flow record). MA37 is the      |
|      | third quartile minus the first quartile divided by the median    |
|      | of the monthly means. dimensionless—spatial                      |
+------+------------------------------------------------------------------+
| MA38 | Variability across monthly flows. Compute the 10th and 90th      |
|      | percentiles for the monthly means (every month in the flow       |
|      | record). MA38 is the 90th percentile minus the 10th percentile   |
|      | divided by the median of the monthly means.                      |
|      | dimensionless—spatial                                            |
+------+------------------------------------------------------------------+
| MA39 | Variability across monthly flows. Compute the standard deviation |
|      | for the monthly means. MA39 is the standard deviation times 100  |
|      | divided by the mean of the monthly means. percent—spatial        |
+------+------------------------------------------------------------------+
| MA40 | Skewness in the monthly flows. MA40 is the mean of the monthly   |
|      | flow means minus the median of the monthly means divided by the  |
|      | median of the monthly means. dimensionless—spatial               |
+------+------------------------------------------------------------------+
| MA41 | Annual runoff. Compute the annual mean daily flows. MA41 is the  |
|      | mean of the annual means divided by the drainage area. cubic     |
|      | feet per second/ square mile—temporal                            |
+------+------------------------------------------------------------------+
| MA42 | Variability across annual flows. MA42 is the maximum annual flow |
|      | minus the minimum annual flow divided by the median annual flow. |
|      | dimensionless-spatial                                            |
+------+------------------------------------------------------------------+
| MA43 | Variability across annual flows. Compute the first (25th         |
|      | percentile) and third (75th percentile) quartiles and the 10th   |
|      | and 90th percentiles for the annual means (every year in the     |
|      | flow record). MA43 is the third quartile minus the first         |
|      | quartile divided by the median of the annual means.              |
|      | dimensionless-spatial                                            |
+------+------------------------------------------------------------------+
| MA44 | Variability across annual flows. Compute the first (25th         |
|      | percentile) and third (75th percentile) quartiles and the 10th   |
|      | and 90th percentiles for the annual means (every year in the     |
|      | flow record). MA44 is the 90th percentile minus the 10th         |
|      | percentile divided by the median of the annual means.            |
|      | dimensionless-spatial                                            |
+------+------------------------------------------------------------------+
| MA45 | Skewness in the annual flows. MA45 is the mean of the annual     |
|      | flow means minus the median of the annual means divided by the   |
|      | median of the annual means. dimensionless-spatial                |
+------+------------------------------------------------------------------+
| ML1  | Mean (or median) of minimum flows for each month across all      |
| to   | years. Compute the minimums for each month over the entire flow  |
| ML12 | record. For example, ML1 is the mean of the minimums of all      |
|      | January flow values over the entire record. cubic feet per       |
|      | second— temporal                                                 |
+------+------------------------------------------------------------------+
| ML13 | Variability (coefficient of variation) across minimum monthly    |
|      | flow values. Compute the mean and standard deviation for the     |
|      | minimum monthly flows over the entire flow record. ML13 is the   |
|      | standard deviation times 100 divided by the mean minimum monthly |
|      | flow for all years. percent—spatial                              |
+------+------------------------------------------------------------------+
| ML14 | Compute the minimum annual flow for each year. ML14 is the mean  |
|      | of the ratios of minimum annual flows to the median flow for     |
|      | each year. dimensionless—temporal                                |
+------+------------------------------------------------------------------+
| ML15 | Low-flow index. ML15 is the mean of the ratios of minimum annual |
|      | flows to the mean flow for each year. dimensionless—temporal     |
+------+------------------------------------------------------------------+
| ML16 | Median of annual minimum flows. ML16 is the median of the ratios |
|      | of minimum annual flows to the median flow for each year.        |
|      | dimensionless— temporal                                          |
+------+------------------------------------------------------------------+
| ML17 | Base flow. Compute the mean annual flows. Compute the minimum of |
|      | a 7-day moving average flows for each year and divide them by    |
|      | the mean annual flow for that year. ML17 is the mean (or         |
|      | median) of those ratios. dimensionless—temporal                  |
+------+------------------------------------------------------------------+
| ML18 | Variability in base flow. Compute the standard deviation for the |
|      | ratios of 7-day moving average flows to mean annual flows for    |
|      | each year. ML18 is the standard deviation times 100 divided by   |
|      | the mean of the ratios. percent—spatial                          |
+------+------------------------------------------------------------------+
| ML19 | Base flow. Compute the ratios of the minimum annual flow to mean |
|      | annual flow for each year. ML19 is the mean (or median) of these |
|      | ratios times 100. dimensionless—temporal                         |
+------+------------------------------------------------------------------+
| ML20 | Base flow. Divide the daily flow record into 5-day blocks. Find  |
|      | the minimum flow for each block. Assign the minimum flow as a    |
|      | base flow for that block if 90 percent of that minimum flow is   |
|      | less than the minimum flows for the blocks on either side.       |
|      | Otherwise, set it to zero. Fill in the zero values using linear  |
|      | interpolation. Compute the total flow for the entire record and  |
|      | the total base flow for the entire record. ML20 is the ratio of  |
|      | total flow to total base flow. dimensionless—spatial             |
+------+------------------------------------------------------------------+
| ML21 | Variability across annual minimum flows. Compute the mean and    |
|      | standard deviation for the annual minimum flows. ML21 is the     |
|      | standard deviation times 100 divided by the mean.                |
|      | percent—spatial                                                  |
+------+------------------------------------------------------------------+
| ML22 | Specific mean annual minimum flow. ML22 is the mean (or median)  |
|      | of the annual minimum flows divided by the drainage area. cubic  |
|      | feet per second/square mile—temporal                             |
+------+------------------------------------------------------------------+
| MH1  | Mean (or median) maximum flows for each month across all years.  |
| to   | Compute the maximums for each month over the entire flow record. |
| MH12 | For example, MH1 is the mean of the maximums of all January flow |
|      | values over the entire record. cubic feet per second—temporal    |
+------+------------------------------------------------------------------+
| MH13 | Variability (coefficient of variation) across maximum monthly    |
|      | flow values. Compute the mean and standard deviation for the     |
|      | maximum monthly flows over the entire flow record. MH13 is the   |
|      | standard deviation times 100 divided by the mean maximum monthly |
|      | flow for all years. percent—spatial                              |
+------+------------------------------------------------------------------+
| MH14 | Median of annual maximum flows. Compute the annual maximum flows |
|      | from monthly maximum flows. Compute the ratio of annual maximum  |
|      | flow to median annual flow for each year. MH14 is the median of  |
|      | these ratios. dimensionless—temporal                             |
+------+------------------------------------------------------------------+
| MH15 | High flow discharge index. Compute the 1-percent exceedance      |
|      | value for the entire data record. MH15 is the 1-percent          |
|      | exceedance value divided by the median flow for the entire       |
|      | record. dimensionless—spatial                                    |
+------+------------------------------------------------------------------+
| MH16 | High flow discharge index. Compute the 10-percent exceedance     |
|      | value for the entire data record. MH16 is the 10-percent         |
|      | exceedance value divided by the median flow for the entire       |
|      | record. dimensionless—spatial                                    |
+------+------------------------------------------------------------------+
| MH17 | High flow discharge index. Compute the 25-percent exceedance     |
|      | value for the entire data record. MH17 is the 25-percent         |
|      | exceedance value divided by the median flow for the entire       |
|      | record. dimensionless—spatial                                    |
+------+------------------------------------------------------------------+
| MH18 | Variability across annual maximum flows. Compute the logs        |
|      | (log10) of the maximum annual flows. Find the standard           |
|      | deviation and mean for these values. MH18 is the standard        |
|      | deviation times 100 divided by the mean. percent—spatial         |
+------+------------------------------------------------------------------+
| MH19 | Skewness in annual maximum flows. dimensionless—spatial          |
+------+------------------------------------------------------------------+
| MH20 | Specific mean annual maximum flow. MH20 is the mean (or median)  |
|      | of the annual maximum flows divided by the drainage area. cubic  |
|      | feet per second/square mile—temporal                             |
+------+------------------------------------------------------------------+
| MH21 | High flow volume index. Compute the average volume for flow      |
|      | events above a threshold equal to the median flow for the entire |
|      | record. MH21 is the average volume divided by the median flow    |
|      | for the entire record. days—temporal                             |
+------+------------------------------------------------------------------+
| MH22 | High flow volume. Compute the average volume for flow events     |
|      | above a threshold equal to three times the median flow for the   |
|      | entire record. MH22 is the average volume divided by the median  |
|      | flow for the entire record. days—temporal                        |
+------+------------------------------------------------------------------+
| MH23 | High flow volume. Compute the average volume for flow events     |
|      | above a threshold equal to seven times the median flow for the   |
|      | entire record. MH23 is the average volume divided by the median  |
|      | flow for the entire record. days—temporal                        |
+------+------------------------------------------------------------------+
| MH24 | High peak flow. Compute the average peak flow value for flow     |
|      | events above a threshold equal to the median flow for the entire |
|      | record. MH24 is the average peak flow divided by the median flow |
|      | for the entire record. dimensionless—temporal                    |
+------+------------------------------------------------------------------+
| MH25 | High peak flow.  Compute the average peak flow value for flow    |
|      | events above a threshold equal to three times the median flow    |
|      | for the entire record.  MH25 is the average peak flow divided by |
|      | the median flow for the entire record. dimensionless—temporal    |
+------+------------------------------------------------------------------+
| MH26 | High peak flow. Compute the average peak flow value for flow     |
|      | events above a threshold equal to seven times the median flow    |
|      | for the entire record. MH26 is the average peak flow divided by  |
|      | the median flow for the entire record. dimensionless—temporal    |
+------+------------------------------------------------------------------+
| MH27 | High peak flow.  Compute the average peak flow value for flow    |
|      | events above a threshold equal to 75th-percentile value for the  |
|      | entire flow record. MH27 is the average peak flow divided by the |
|      | median flow for the entire record. dimensionless—temporal        |
+------+------------------------------------------------------------------+
| FL1  | Low flood pulse count. Compute the average number of flow events |
|      | with flows below a threshold equal to the 25th-percentile value  |
|      | for the entire flow record. FL1 is the average (or median)       |
|      | number of events. number of events/year—temporal                 |
+------+------------------------------------------------------------------+
| FL2  | Variability in low pulse count. Compute the standard deviation   |
|      | in the annual pulse counts for FL1. FL2 is 100 times the         |
|      | standard deviation divided by the mean pulse count.              |
|      | percent—spatial                                                  |
+------+------------------------------------------------------------------+
| FL3  | Frequency of low pulse spells. Compute the average number of     |
|      | flow events with flows below a threshold equal to 5 percent of   |
|      | the mean flow value for the entire flow record. FL3 is the       |
|      | average (or median) number of events. number of                  |
|      | events/year—temporal                                             |
+------+------------------------------------------------------------------+
| FH1  | High flood pulse count. Compute the average number of flow       |
|      | events with flows above a threshold equal to the 75th-percentile |
|      | value for the entire flow record. FH1 is the average (or median) |
|      | number of events. number of events/year—temporal                 |
+------+------------------------------------------------------------------+
| FH2  | Variability in high pulse count. Compute the standard deviation  |
|      | in the annual pulse counts for FH1. FH2 is 100 times the         |
|      | standard deviation divided by the mean pulse count. number of    |
|      | events/year—spatial                                              |
+------+------------------------------------------------------------------+
| FH3  | High flood pulse count. Compute the average number of days per   |
|      | year that the flow is above a threshold equal to three times the |
|      | median flow for the entire record. FH3 is the mean (or median)   |
|      | of the annual number of days for all years. number of            |
|      | days/year—temporal                                               |
+------+------------------------------------------------------------------+
| FH4  | High flood pulse count. Compute the average number of days per   |
|      | year that the flow is above a threshold equal to seven times the |
|      | median flow for the entire record. FH4 is the mean (or median)   |
|      | of the annual number of days for all years. number of            |
|      | days/year—temporal                                               |
+------+------------------------------------------------------------------+
| FH5  | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to the median flow value for the   |
|      | entire flow record. FH5 is the average (or median) number of     |
|      | events. number of events/year—temporal                           |
+------+------------------------------------------------------------------+
| FH6  | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to three times the median flow     |
|      | value for the entire flow record. FH6 is the average (or median) |
|      | number of events. number of events/year—temporal                 |
+------+------------------------------------------------------------------+
| FH7  | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to seven times the median flow     |
|      | value for the entire flow record. FH7 is the average (or median) |
|      | number of events. number of events/year—temporal                 |
+------+------------------------------------------------------------------+
| FH8  | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to 25-percent exceedance value for |
|      | the entire flow record. FH8 is the average (or median) number of |
|      | events. number of events/year—temporal                           |
+------+------------------------------------------------------------------+
| FH9  | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to the 75-percent exceedance value |
|      | for the entire flow record. FH9 is the average (or median)       |
|      | number of events. number of events/year—temporal                 |
+------+------------------------------------------------------------------+
| FH10 | Flood frequency. Compute the average number of flow events with  |
|      | flows above a threshold equal to the median of the annual        |
|      | minima (in cubic feet/second) for the entire flow record. FH10   |
|      | is the average (or median) number of events. number of           |
|      | events/year—temporal                                             |
+------+------------------------------------------------------------------+
| DL1  | Annual minimum daily flow. Compute the minimum 1-day average     |
|      | flow for each year. DL1 is the mean (or median) of these values. |
|      | cubic feet per second—temporal                                   |
+------+------------------------------------------------------------------+
| DL2  | Annual minimum of 3-day moving average flow. Compute the minimum |
|      | of a 3-day moving average flow for each year. DL2 is the mean    |
|      | (or median) of these values. cubic feet per second—temporal      |
+------+------------------------------------------------------------------+
| DL3  | Annual minimum of 7-day moving average flow. Compute the minimum |
|      | of a 7-day moving average flow for each year. DL3 is the mean    |
|      | (or median) of these values. cubic feet per second—temporal      |
+------+------------------------------------------------------------------+
| DL4  | Annual minimum of 30-day moving average flow. Compute the        |
|      | minimum of a 30-day moving average flow for each year. DL4 is    |
|      | the mean (or median) of these values. cubic feet per             |
|      | second—temporal                                                  |
+------+------------------------------------------------------------------+

positional arguments:
  indice_codes          A list of the hydrologic indice codes, stream classifications, and/or
                        flow regime indices to be computed.
                        
                        The hydrologic indice codes are taken as is, but the collected stream
                        classifications are intersected with the flow regime indices.

options:
  -h, --help            show this help message and exit
  --water_year WATER_YEAR
                        [optional, default="YE-SEP"]
                        
                        The water year to use for the calculation.  This uses one of the
                        Pandas "YE-..." anchored annual offset codes.  The default "YE-SEP"
                        code ends the water year at the end of September, so the next water
                        year begins in October.
  --drainage_area DRAINAGE_AREA
                        [optional, default=1]
                        
                        The drainage area to use for the calculations.  This is the drainage
                        area in square miles.
  --use_median          [optional, default=False]
                        
                        If True, use the median instead of the mean for the calculations.
  --input_ts INPUT_TS   [optional though required if using within Python, default is '-'
                        (stdin)]
                        
                        Whether from a file or standard input, data requires a single line
                        header of column names.  The default header is the first line of
                        the input, but this can be changed for CSV files using the
                        'skiprows' option.
                        
                        Most common date formats can be used, but the closer to ISO 8601
                        date/time standard the better.
                        
                        Comma-separated values (CSV) files or tab-separated values (TSV)::
                        
                            File separators will be automatically detected.
                        
                            Columns can be selected by name or index, where the index for
                            data columns starts at 1.
                        
                        Command line examples:
                        
                            +---------------------------------+---------------------------+
                            | Keyword Example                 | Description               |
                            +=================================+===========================+
                            | --input_ts=fn.csv               | read all columns from     |
                            |                                 | 'fn.csv'                  |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,1           | read data columns 2 and 1 |
                            |                                 | from 'fn.csv'             |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,skiprows=2  | read data column 2 from   |
                            |                                 | 'fn.csv', skipping first  |
                            |                                 | 2 rows so header is read  |
                            |                                 | from third row            |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.xlsx,2,Sheet21    | read all data from the    |
                            |                                 | 2nd sheet, then all data  |
                            |                                 | from "Sheet21" of         |
                            |                                 | 'fn.xlsx'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.hdf5,Table12,T2   | read all data from table  |
                            |                                 | "Table12" then all data   |
                            |                                 | from table "T2" of        |
                            |                                 | 'fn.hdf5'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.wdm,210,110       | read DSNs 210, then 110   |
                            |                                 | from 'fn.wdm'             |
                            +---------------------------------+---------------------------+
                            | --input_ts='-'                  | read all columns from     |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                            | --input_ts='-' --columns=4,1    | read column 4 and 1 from  |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                        
                        If working with CSV or TSV files, you can use redirection rather
                        than the `--input_ts=fname.csv` option.  The following are identical:
                        
                        From a file:
                        
                            command subcmd --input_ts=fname.csv
                        
                        From standard input (since '--input_ts=-' is the default):
                        
                            command subcmd < fname.csv
                        
                        Can also combine commands by piping:
                        
                            command subcmd < filein.csv | command subcmd1 > fileout.csv
                        
                        Python library examples::
                        
                            You must use the `input_ts=...` option where `input_ts` can be
                            one of a [pandas DataFrame, pandas Series, dict, tuple, list,
                            StringIO, or file name].
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting memory and processing requirements;
                        however, be cautious about rounding from a small interval to a very
                        coarse one, which could create duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
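
The DL1 through DL4 indices described above are rolling-minimum statistics that are straightforward to sketch with pandas. The example below is an illustrative reimplementation, not hydrotoolbox's own code; the function name `annual_min_rolling` and the calendar-year grouping (instead of the configurable `--water_year`) are assumptions made for the sketch:

```python
import numpy as np
import pandas as pd

def annual_min_rolling(flow: pd.Series, ndays: int, use_median: bool = False) -> float:
    """Mean (or median) across years of the annual minimum n-day moving average.

    Sketch of the DL1 (ndays=1) through DL4 (ndays=30) style indices;
    groups by calendar year rather than a water year for simplicity.
    """
    smoothed = flow.rolling(ndays).mean()                     # n-day moving average
    annual_min = smoothed.groupby(smoothed.index.year).min()  # minimum per year
    return float(annual_min.median() if use_median else annual_min.mean())

# Two years of synthetic daily flow in cubic feet per second.
idx = pd.date_range("2001-01-01", "2002-12-31", freq="D")
flow = pd.Series(100 + 50 * np.sin(np.arange(len(idx)) * 2 * np.pi / 365.25), index=idx)

dl1 = annual_min_rolling(flow, 1)   # annual minimum daily flow, averaged over years
dl3 = annual_min_rolling(flow, 7)   # annual minimum 7-day moving average
```

Passing `use_median=True` mirrors the `--use_median` flag: the median of the annual minima is reported instead of the mean.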

recession

$ hydrotoolbox recession --help
usage: hydrotoolbox recession [-h] [--date DATE] [--ice_period ICE_PERIOD]
                              [--input_ts INPUT_TS] [--columns COLUMNS]
                              [--source_units SOURCE_UNITS]
                              [--start_date START_DATE] [--end_date END_DATE]
                              [--dropna DROPNA] [--clean]
                              [--round_index ROUND_INDEX]
                              [--skiprows SKIPROWS] [--index_type INDEX_TYPE]
                              [--names NAMES] [--target_units TARGET_UNITS]
                              [--tablefmt TABLEFMT]
                              [--float_format FLOAT_FORMAT]

Recession coefficient.

options:
  -h, --help            show this help message and exit
  --date DATE           Date term
  --ice_period ICE_PERIOD
                        Period of ice that changes the discharge relationship
  --input_ts INPUT_TS   Streamflow
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting memory and processing requirements;
                        however, be cautious about rounding from a small interval to a very
                        coarse one, which could create duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
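
During dry-weather recession, streamflow often decays approximately exponentially, so consecutive daily flows are related by Q(t+1) = k * Q(t) with 0 < k < 1. As an illustration of what a recession coefficient means (not necessarily the estimator the `recession` subcommand uses), `k` can be recovered from a strictly receding flow segment by regressing log-flow against time:

```python
import numpy as np

def recession_coefficient(flow) -> float:
    """Estimate k in Q(t+1) = k * Q(t) from a receding flow segment.

    Illustrative only: fits a straight line to log-flow, whose slope is
    log(k).  Real tools often build a master recession curve instead.
    """
    logq = np.log(np.asarray(flow, dtype=float))
    slope = np.polyfit(np.arange(len(logq)), logq, 1)[0]  # slope = log(k)
    return float(np.exp(slope))

# Synthetic recession limb: Q0 = 200 cfs decaying with a true k of 0.9/day.
q = 200.0 * 0.9 ** np.arange(10)
k = recession_coefficient(q)
```

For a perfect exponential segment the fit recovers k exactly; on real data the segment should be screened first, for example by excluding the `--ice_period` described above.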

storm_events

$ hydrotoolbox storm_events --help
usage: hydrotoolbox storm_events [-h] [--input_ts INPUT_TS] [--window WINDOW]
                                 [--min_peak MIN_PEAK] [--columns COLUMNS]
                                 [--source_units SOURCE_UNITS]
                                 [--start_date START_DATE]
                                 [--end_date END_DATE] [--dropna DROPNA]
                                 [--clean] [--round_index ROUND_INDEX]
                                 [--skiprows SKIPROWS]
                                 [--index_type INDEX_TYPE] [--names NAMES]
                                 [--target_units TARGET_UNITS]
                                 [--tablefmt TABLEFMT]
                                 [--float_format FLOAT_FORMAT]
                                 rise_lag fall_lag

Storm events.

positional arguments:
  rise_lag              Sets the number of time-series terms to include from the rising limb of
                        the hydrograph.
  fall_lag              Sets the number of time-series terms to include from the falling limb of
                        the hydrograph.

options:
  -h, --help            show this help message and exit
  --input_ts INPUT_TS   [optional though required if using within Python, default is '-'
                        (stdin)]
                        
                        Whether from a file or standard input, data requires a single line
                        header of column names.  The default header is the first line of
                        the input, but this can be changed for CSV files using the
                        'skiprows' option.
                        
                        Most common date formats can be used, but the closer to ISO 8601
                        date/time standard the better.
                        
                        Comma-separated values (CSV) files or tab-separated values (TSV)::
                        
                            File separators will be automatically detected.
                        
                            Columns can be selected by name or index, where the index for
                            data columns starts at 1.
                        
                        Command line examples:
                        
                            +---------------------------------+---------------------------+
                            | Keyword Example                 | Description               |
                            +=================================+===========================+
                            | --input_ts=fn.csv               | read all columns from     |
                            |                                 | 'fn.csv'                  |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,1           | read data columns 2 and 1 |
                            |                                 | from 'fn.csv'             |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.csv,2,skiprows=2  | read data column 2 from   |
                            |                                 | 'fn.csv', skipping first  |
                            |                                 | 2 rows so header is read  |
                            |                                 | from third row            |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.xlsx,2,Sheet21    | read all data from the    |
                            |                                 | 2nd sheet, then all data  |
                            |                                 | from "Sheet21" of         |
                            |                                 | 'fn.xlsx'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.hdf5,Table12,T2   | read all data from table  |
                            |                                 | "Table12" then all data   |
                            |                                 | from table "T2" of        |
                            |                                 | 'fn.hdf5'                 |
                            +---------------------------------+---------------------------+
                            | --input_ts=fn.wdm,210,110       | read DSNs 210, then 110   |
                            |                                 | from 'fn.wdm'             |
                            +---------------------------------+---------------------------+
                            | --input_ts='-'                  | read all columns from     |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                            | --input_ts='-' --columns=4,1    | read column 4 and 1 from  |
                            |                                 | standard input (stdin)    |
                            +---------------------------------+---------------------------+
                        
                        If working with CSV or TSV files, you can use redirection rather
                        than the `--input_ts=fname.csv` option.  The following are identical:
                        
                        From a file:
                        
                            command subcmd --input_ts=fname.csv
                        
                        From standard input (since '--input_ts=-' is the default):
                        
                            command subcmd < fname.csv
                        
                        Can also combine commands by piping:
                        
                            command subcmd < filein.csv | command subcmd1 > fileout.csv
                        
                        Python library examples::
                        
                            You must use the `input_ts=...` option where `input_ts` can be
                            one of a [pandas DataFrame, pandas Series, dict, tuple, list,
                            StringIO, or file name].
  --window WINDOW       [optional, default=1]
                        
                        Adjacent peaks can not be within `window` time-series terms of each
                        other.
  --min_peak MIN_PEAK   [optional, default=0]
                        
                        All detected storm peaks in the hydrograph must be greater than
                        `min_peak`.
  --columns COLUMNS     [optional, defaults to all columns, input filter]
                        
                        Columns to select out of the input.  Can use column names from the
                        first line header or column numbers.  If using numbers, column
                        number 1 is the first data column.  To pick multiple columns,
                        separate them by commas with no spaces, as used in the
                        `toolbox_utils pick` command.

                        This means you don't have to create a data set with a particular
                        column order; you can rearrange columns as the data is read in.
  --source_units SOURCE_UNITS
                        [optional, default is None, transformation]
                        
                        If unit is specified for the column as the second field of a ':'
                        delimited column name, then the specified units and the
                        'source_units' must match exactly.
                        
                        Any unit string compatible with the 'pint' library can be used.
  --start_date START_DATE
                        [optional, defaults to first date in time-series, input filter]
                        
                        The start_date of the series in ISOdatetime format, or 'None' for
                        beginning.
  --end_date END_DATE   [optional, defaults to last date in time-series, input filter]
                        
                        The end_date of the series in ISOdatetime format, or 'None' for
                        end.
  --dropna DROPNA       [optional, default is 'no', input filter]
                        
                        Set `dropna` to 'any' to have records dropped that have NA value in
                        any column, or 'all' to have records dropped that have NA in all
                        columns. Set to 'no' to not drop any records.  The default is 'no'.
  --clean               [optional, default is False, input filter]
                        
                        The 'clean' command will repair an input index by removing
                        duplicate index values and sorting.
  --round_index ROUND_INDEX
                        [optional, default is None which will do nothing to the index,
                        output format]
                        
                        Round the index to the nearest time point.  This can significantly
                        improve performance by cutting memory and processing requirements;
                        however, be cautious about rounding from a small interval to a very
                        coarse one, which could create duplicate values in the index.
  --skiprows SKIPROWS   [optional, default is None which will infer header from first line,
                        input filter]
                        
                        Line numbers to skip (0-indexed) if a list or number of lines to
                        skip at the start of the file if an integer.
                        
                        If used in Python can be a callable, the callable function will be
                        evaluated against the row indices, returning True if the row should
                        be skipped and False otherwise.  An example of a valid callable
                        argument would be
                        
                        ``lambda x: x in [0, 2]``.
  --index_type INDEX_TYPE
                        [optional, default is 'datetime', output format]
                        
                        Can be either 'number' or 'datetime'.  Use 'number' with index
                        values that are Julian dates, or other epoch reference.
  --names NAMES         [optional, default is None, transformation]
                        
                        If None, the column names are taken from the first row after
                        'skiprows' from the input dataset.
                        
                        MUST include a name for all columns in the input dataset, including
                        the index column.
  --target_units TARGET_UNITS
                        [optional, default is None, transformation]
                        
                        The purpose of this option is to specify target units for unit
                        conversion.  The source units are specified in the header line of
                        the input or using the 'source_units' keyword.
                        
                        The units of the input time-series or values are specified as the
                        second field of a ':' delimited name in the header line of the
                        input or in the 'source_units' keyword.
                        
                        Any unit string compatible with the 'pint' library can be used.
                        
                        This option will also add the 'target_units' string to the
                        column names.
  --tablefmt TABLEFMT   [optional, default is 'csv_nos']
                        
                        The table format.  Can be one of 'csv', 'tsv', 'csv_nos', 'tsv_nos',
                        'plain', 'simple', 'github', 'grid', 'fancy_grid', 'pipe', 'orgtbl',
                        'jira', 'presto', 'psql', 'rst', 'mediawiki', 'moinmoin', 'youtrack',
                        'html', 'latex', 'latex_raw', 'latex_booktabs' and 'textile'.
  --float_format FLOAT_FORMAT
                        [optional, default is 'g']
                        
                        The format for floating point numbers in the output table.
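
The interplay of `rise_lag`, `fall_lag`, `--window`, and `--min_peak` can be pictured with a simplified peak-and-slice model. This sketch is not hydrotoolbox's actual event logic; the greedy spacing rule and the segment bounds are assumptions made for illustration:

```python
import numpy as np

def storm_events(flow, rise_lag, fall_lag, window=1, min_peak=0):
    """Return (peak_index, segment) pairs around detected storm peaks.

    Local maxima above min_peak are kept, peaks within `window` terms of
    a larger peak are discarded, and each surviving peak is sliced with
    rise_lag terms before and fall_lag terms after it.
    """
    flow = np.asarray(flow, dtype=float)
    # Local maxima strictly greater than both neighbors and above min_peak.
    candidates = [i for i in range(1, len(flow) - 1)
                  if flow[i] > flow[i - 1] and flow[i] > flow[i + 1]
                  and flow[i] > min_peak]
    # Greedily enforce the spacing constraint, highest peaks first.
    peaks = []
    for i in sorted(candidates, key=lambda i: flow[i], reverse=True):
        if all(abs(i - j) > window for j in peaks):
            peaks.append(i)
    events = []
    for i in sorted(peaks):
        start = max(i - rise_lag, 0)
        stop = min(i + fall_lag + 1, len(flow))
        events.append((i, flow[start:stop]))
    return events

hydrograph = [1, 2, 8, 3, 2, 1, 1, 6, 2, 1]
events = storm_events(hydrograph, rise_lag=1, fall_lag=2, min_peak=4)
# Peaks at indices 2 (flow 8) and 7 (flow 6); each segment spans one
# rising-limb term and two falling-limb terms around its peak.
```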