
Command Line

Help:

tstoolbox --help

about

$ tstoolbox about --help
usage: tstoolbox about [-h]

Display version number and system information.

options:
  -h, --help  show this help message and exit

accumulate

$ tstoolbox accumulate --help
usage: tstoolbox accumulate [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--statistic STATISTIC] [--round_index ROUND_INDEX] [--skiprows SKIPROWS]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]

Calculate accumulating statistics.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, though required if using within Python; default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
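
      The single-line-header CSV layout that '-' (stdin) expects can be
      sketched with only the standard library; the 'Datetime' and 'flow'
      column names below are illustrative, not required by tstoolbox:

```python
# Sketch of the expected stdin layout: one header line of column
# names, then rows whose first field is an ISO 8601 date/time.
# Column names here are made up for illustration.
import csv
import io
from datetime import datetime

raw = io.StringIO(
    "Datetime,flow\n"
    "2000-01-01,4.5\n"
    "2000-01-02,4.6\n"
)
rows = list(csv.DictReader(raw))
index = [datetime.fromisoformat(r["Datetime"]) for r in rows]
values = [float(r["flow"]) for r in rows]
print(values)  # [4.5, 4.6]
```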

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --statistic STATISTIC
      [optional, default is "sum", transformation]
      One or more of "sum", "max", "min", or "prod".
      Python example::
        statistic=["sum", "max"]

      Command line example::
        --statistic=sum,max

  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can also be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
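
The accumulating statistics selected by --statistic are running reductions
over each column. A minimal standard-library sketch of the 'sum' and 'max'
variants (illustrative only; tstoolbox itself operates on a date-indexed
pandas series):

```python
# Running ("accumulating") statistics over a series of values,
# mirroring what --statistic=sum,max computes column-wise.
from itertools import accumulate
import operator

values = [3.0, 1.0, 4.0, 1.0, 5.0]

running_sum = list(accumulate(values, operator.add))
running_max = list(accumulate(values, max))

print(running_sum)  # [3.0, 4.0, 8.0, 9.0, 14.0]
print(running_max)  # [3.0, 3.0, 4.0, 4.0, 5.0]
```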

add_trend

$ tstoolbox add_trend --help
usage: tstoolbox add_trend [-h] [--start_index START_INDEX]
  [--end_index END_INDEX] [--input_ts INPUT_TS] [--start_date START_DATE]
  [--end_date END_DATE] [--skiprows SKIPROWS] [--columns COLUMNS] [--clean]
  [--dropna DROPNA] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--round_index ROUND_INDEX] [--index_type
  INDEX_TYPE] [--print_input] [--tablefmt TABLEFMT] start_offset end_offset

Adds a linearly interpolated trend to the input data. The trend values start at
[start_index, start_offset] and end at [end_index, end_offset].

positional arguments:
  start_offset          The starting value for the applied trend.  This is the starting
    value for the linear interpolation that will be added to the input data.

  end_offset            The ending value for the applied trend.  This is the ending
    value for the linear interpolation that will be added to the input data.


options:
  -h | --help
      show this help message and exit
  --start_index START_INDEX
      [optional, default is 0, transformation]
      The starting index where start_offset will be initiated. Rows prior to
      start_index will not be affected.
  --end_index END_INDEX
      [optional, default is -1, transformation]
      The ending index where end_offset will be set. Rows after end_index will
      not be affected.
  --input_ts INPUT_TS
      [optional, though required if using within Python; default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can also be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
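
The applied trend is a straight line ramping from start_offset to end_offset,
added point-by-point to the input values. A minimal sketch of that arithmetic,
assuming evenly spaced rows (the real command works on a date-indexed series):

```python
# Add a linearly interpolated trend to a series: the offset ramps
# from start_offset at the first row to end_offset at the last row.
def add_trend(values, start_offset, end_offset):
    n = len(values)
    if n == 1:
        return [values[0] + start_offset]
    step = (end_offset - start_offset) / (n - 1)
    return [v + start_offset + i * step for i, v in enumerate(values)]

print(add_trend([10.0, 10.0, 10.0, 10.0, 10.0], 0.0, 2.0))
# [10.0, 10.5, 11.0, 11.5, 12.0]
```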

aggregate

$ tstoolbox aggregate --help
usage: tstoolbox aggregate [-h] [--input_ts INPUT_TS] [--groupby GROUPBY]
  [--statistic STATISTIC] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--clean] [--agg_interval
  AGG_INTERVAL] [--ninterval NINTERVAL] [--round_index ROUND_INDEX]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT] [--min_count MIN_COUNT]

Take a time series and aggregate to specified frequency.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, though required if using within Python; default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --groupby GROUPBY
      [optional, default is None, transformation]
      The pandas offset code to group the time-series data into. A special code,
      'months_across_years', is also available, which will group the data
      into twelve monthly categories across the entire time-series. The
      groupby keyword also accepts the special option 'all', which will
      aggregate all records.
      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ microseconds  │
      ├───────┼───────────────┤
      │ L     │ milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) support anchoring suffixes, formed by replacing the "x" in the
      "Alias" column below:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
      │       │ QS-DEC   │             │                            │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛

  --statistic STATISTIC
      [optional, defaults to 'mean', transformation]
      Any string in the following table, or a list of them, to calculate on
      each groupby group.
      ┌───────────┬───────────┬─────────────────────────────────────────────┐
      │ statistic │ Allow kwd │ Description                                 │
      ╞═══════════╪═══════════╪═════════════════════════════════════════════╡
      │ count     │           │ Compute count of group, excluding missing   │
      │           │           │ values.                                     │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ nunique   │           │ Return number of unique elements in the     │
      │           │           │ group.                                      │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ first     │ min_count │ Return first value within each group.       │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ last      │ min_count │ Return last value within each group.        │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ max       │ min_count │ Compute max of group values.                │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ mean      │           │ Compute mean of groups, excluding missing   │
      │           │           │ values.                                     │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ median    │           │ Compute median of groups, excluding missing │
      │           │           │ values.                                     │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ min       │ min_count │ Compute min of group values.                │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ ohlc      │           │ Compute open, high, low and close values of │
      │           │           │ a group, excluding missing values.          │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ prod      │ min_count │ Compute prod of group values.               │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ size      │           │ Compute group sizes.                        │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ sem       │           │ Compute standard error of the mean of       │
      │           │           │ groups, excluding missing values.           │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ std       │           │ Compute standard deviation of groups,       │
      │           │           │ excluding missing values.                   │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ sum       │ min_count │ Compute sum of group values.                │
      ├───────────┼───────────┼─────────────────────────────────────────────┤
      │ var       │           │ Compute variance of groups, excluding       │
      │           │           │ missing values.                             │
      ╘═══════════╧═══════════╧═════════════════════════════════════════════╛

      Python example::
        statistic=['mean', 'max', 'first']

      Command line example::
        --statistic=mean,max,first
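
      Taken together, --groupby buckets the series by a time frequency and
      --statistic reduces each bucket. A standard-library sketch of a
      monthly grouping with a 'mean' statistic (illustrative only;
      tstoolbox delegates this to pandas):

```python
# Group date-stamped values into monthly buckets and take the mean
# of each bucket, mimicking --groupby=M --statistic=mean.
from collections import defaultdict
from datetime import date
from statistics import mean

series = [
    (date(2000, 1, 1), 2.0),
    (date(2000, 1, 15), 4.0),
    (date(2000, 2, 1), 10.0),
]

buckets = defaultdict(list)
for stamp, value in series:
    buckets[(stamp.year, stamp.month)].append(value)

monthly_mean = {k: mean(v) for k, v in sorted(buckets.items())}
print(monthly_mean)  # {(2000, 1): 3.0, (2000, 2): 10.0}
```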

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --agg_interval AGG_INTERVAL
      DEPRECATED: Use the 'groupby' option instead.
  --ninterval NINTERVAL
      DEPRECATED: Just prefix the number in front of the 'groupby' pandas offset
      code.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding from a fine
      interval to a very coarse one, which could lead to duplicate values
      in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --min_count MIN_COUNT
      The required number of valid values to perform the operation. If fewer
      than min_count non-NA values are present the result will be NA.
      Default is 0.
      Only available for the following statistic methods: "first", "last",
      "max", "min", "prod", and "sum".

calculate_fdc

$ tstoolbox calculate_fdc --help
usage: tstoolbox calculate_fdc [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--percent_point_function PERCENT_POINT_FUNCTION] [--plotting_position
  PLOTTING_POSITION] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--sort_values SORT_VALUES] [--sort_index SORT_INDEX]
  [--tablefmt TABLEFMT] [--add_index] [--include_ri] [--include_sd]
  [--include_cl] [--ci CI]

DOES NOT return a time-series.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from   │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the
      first line header or column numbers. If using numbers, column number
      1 is the first data column. To pick multiple columns, separate them
      by commas with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --percent_point_function PERCENT_POINT_FUNCTION
      [optional, default is None, transformation]
      The distribution used to shift the plotting position values. Choose from
      'norm', 'lognorm', 'weibull', and None.
  --plotting_position PLOTTING_POSITION
      [optional, default is 'weibull', transformation]
      ┌────────────┬────────┬──────────────────────┬────────────────────┐
      │ Name       │ a      │ Equation (i-a)/(n+1- │ Description        │
      │            │        │ 2*a)                 │                    │
      ╞════════════╪════════╪══════════════════════╪════════════════════╡
      │ weibull    │ 0      │ i/(n+1)              │ mean of sampling   │
      │ (default)  │        │                      │ distribution       │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ filliben   │ 0.3175 │ (i-0.3175)/(n+0.365) │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ yu         │ 0.326  │ (i-0.326)/(n+0.348)  │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ tukey      │ 1/3    │ (i-1/3)/(n+1/3)      │ approx. median of  │
      │            │        │                      │ sampling distribu- │
      │            │        │                      │ tion               │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ blom       │ 0.375  │ (i-0.375)/(n+0.25)   │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ cunnane    │ 2/5    │ (i-2/5)/(n+1/5)      │ subjective         │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ gringorton │ 0.44   │ (i-0.44)/(n+0.12)    │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ hazen      │ 1/2    │ (i-1/2)/n            │ midpoints of n     │
      │            │        │                      │ equal intervals    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ larsen     │ 0.567  │ (i-0.567)/(n-0.134)  │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ gumbel     │ 1      │ (i-1)/(n-1)          │ mode of sampling   │
      │            │        │                      │ distribution       │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ california │ NA     │ i/n                  │                    │
      ╘════════════╧════════╧══════════════════════╧════════════════════╛

      Where 'i' is the sorted rank of the y value, and 'n' is the total number
      of values to be plotted.
      The 'blom' plotting position is also known as the 'Sevruk and Geiger'
      plotting position.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --sort_values SORT_VALUES
      [optional, default is 'ascending', input filter]
      Sort order is either 'ascending' or 'descending'.
  --sort_index SORT_INDEX
      [optional, default is 'ascending', input filter]
      Sort order is either 'ascending' or 'descending'.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --add_index
      [optional, default is False]
      Add a monotonically increasing index.
  --include_ri
      [optional, default is False]
      Include the recurrence interval (sometimes called the return interval).
      This is the inverse of the calculated plotting position defined by
      the equations available with the plotting_position keyword.
  --include_sd
      [optional, default is False]
      Include a standard deviation column for each column in the input. The
      equation used is:
      Sd = (Pc(1 - Pc)/N)**0.5

      where:
      Pc is the cumulative probability
      N is the number of values

  --include_cl
      [optional, default is False]
      Include two columns showing the upper and lower confidence limit for each
      column in the input. The equations used are:
      U = Pc + 2(1 - Pc) t Sd
      L = Pc - 2Pc t Sd

      where:
      Pc is the cumulative probability
      t is the Student's "t" value for number of samples and
          confidence interval as defined with `ci` keyword
      Sd is the standard deviation with the equation above

  --ci CI
      [optional, default is 0.9]
      This is the confidence interval used when the include_cl keyword is
      active. The confidence interval of 0.9 implies an upper limit of
      0.95 and a lower limit of 0.05 since 0.9 = 0.95 - 0.05.
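
As a sketch of the plotting-position equations in the table above, the default Weibull positions i/(n+1) can be computed in a few lines; the flow values are illustrative, and this is not tstoolbox's actual implementation.

```python
# Weibull plotting positions i/(n+1) for a flow-duration curve, as a
# minimal sketch of the default equation above.  Values are illustrative.
values = [12.0, 3.0, 7.0, 9.0, 1.0]

n = len(values)
ranked = sorted(values, reverse=True)  # largest flow gets rank i=1

# Exceedance probability for rank i (1-based); Weibull has a=0: i/(n+1).
weibull = [i / (n + 1) for i in range(1, n + 1)]

# Pair each ranked value with its plotting position.
fdc = list(zip(ranked, weibull))
```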

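The Sd, U, and L equations for include_sd and include_cl work out as follows. The Student's t value used here is an assumed illustrative number for ci=0.9 and a small sample; it is not computed by this snippet.

```python
# Sketch of the include_sd / include_cl equations above.  The Student's
# t value is an assumed illustrative number, not computed here.
n = 10          # number of values
pc = 0.5        # cumulative probability of one point
t = 1.833       # assumed Student's t for ci=0.9, ~9 degrees of freedom

sd = (pc * (1.0 - pc) / n) ** 0.5       # Sd = (Pc(1 - Pc)/N)**0.5
upper = pc + 2.0 * (1.0 - pc) * t * sd  # U = Pc + 2(1 - Pc) t Sd
lower = pc - 2.0 * pc * t * sd          # L = Pc - 2Pc t Sd
```
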
calculate_kde

$ tstoolbox calculate_kde --help
usage: tstoolbox calculate_kde [-h] [--ascending] [--evaluate]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--clean] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--names NAMES] [--tablefmt TABLEFMT]

Returns a time-series or the KDE curve depending on the evaluate keyword.

options:
  -h | --help
      show this help message and exit
  --ascending
      [optional, defaults to True, input filter]
      Sort order.
  --evaluate
      [optional, defaults to False, transformation]
      If True, return a time-series of KDE density values; if False (the
      default), return the KDE curve.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from   │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the
      first line header or column numbers. If using numbers, column number
      1 is the first data column. To pick multiple columns, separate them
      by commas with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
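
To illustrate the curve that calculate_kde estimates, a minimal Gaussian kernel density estimate can be written in pure Python. The fixed bandwidth here is an assumption for illustration; the actual implementation may choose it differently.

```python
# A minimal Gaussian kernel density estimate, to illustrate the curve
# that calculate_kde returns.  Bandwidth and data are illustrative.
import math

def gaussian_kde(data, bandwidth):
    """Return a function that evaluates the KDE of `data` at a point."""
    n = len(data)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))

    def pdf(x):
        return norm * sum(
            math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data
        )

    return pdf

data = [1.0, 2.0, 2.5, 3.0, 7.0]
pdf = gaussian_kde(data, bandwidth=1.0)
density_at_2 = pdf(2.0)
```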

clip

$ tstoolbox clip --help
usage: tstoolbox clip [-h] [--input_ts INPUT_TS] [--start_date START_DATE]
  [--end_date END_DATE] [--columns COLUMNS] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--a_min
  A_MIN] [--a_max A_MAX] [--round_index ROUND_INDEX] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input] [--tablefmt
  TABLEFMT]

Return a time-series with values limited to [a_min, a_max].

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from   │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the
      first line header or column numbers. If using numbers, column number
      1 is the first data column. To pick multiple columns, separate them
      by commas with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --a_min A_MIN
      [optional, defaults to None, transformation]
      All values lower than this will be set to this value. Default is None.
  --a_max A_MAX
      [optional, defaults to None, transformation]
      All values higher than this will be set to this value. Default is None.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding from a fine
      interval to a very coarse one, which could lead to duplicate values
      in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
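
From Python, the clip behavior maps onto the pandas Series.clip method; the data and limits below are illustrative only.

```python
# Sketch of the clip subcommand using the pandas Series.clip method it
# corresponds to.  Data and limits are illustrative.
import pandas as pd

index = pd.date_range("2000-01-01", periods=5, freq="D")
ts = pd.Series([-2.0, 0.5, 3.0, 8.0, 1.0], index=index, name="flow")

# Comparable to `tstoolbox clip --a_min=0 --a_max=5`: values below 0 are
# set to 0, values above 5 are set to 5, everything else passes through.
clipped = ts.clip(lower=0.0, upper=5.0)
```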

convert

$ tstoolbox convert --help
usage: tstoolbox convert [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--factor
  FACTOR] [--offset OFFSET] [--print_input] [--round_index ROUND_INDEX]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--float_format
  FLOAT_FORMAT] [--tablefmt TABLEFMT]

See the 'equation' subcommand for a generalized form of this command.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data      │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
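
      The 'any'/'all'/'no' behavior can be sketched in plain Python (an
      illustration only; tstoolbox itself operates on pandas DataFrames):

      ```python
      # Each record maps column name -> value; None stands in for an NA value.
      records = [
          {"flow": 1.0, "stage": 2.0},
          {"flow": None, "stage": 2.5},
          {"flow": None, "stage": None},
      ]

      def dropna(rows, how):
          """Drop records by NA rule: 'any', 'all', or 'no'."""
          if how == "any":
              return [r for r in rows if not any(v is None for v in r.values())]
          if how == "all":
              return [r for r in rows if not all(v is None for v in r.values())]
          return list(rows)  # 'no': keep everything

      assert len(dropna(records, "any")) == 1  # only the fully populated record
      assert len(dropna(records, "all")) == 2  # drops only the all-NA record
      assert len(dropna(records, "no")) == 3
      ```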
  --clean
      [optional, default is False, input filter]
      The 'clean' option will repair an input index, removing duplicate
      index values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
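
      How a callable is applied to the 0-indexed row numbers can be
      sketched as (plain Python, illustrative only):

      ```python
      # 0-indexed rows of a hypothetical input file.
      lines = ["junk banner", "Datetime,flow", "# comment", "2020-01-01,1.2"]

      skiprows = lambda x: x in [0, 2]  # the callable from the text above

      # Keep only the rows for which the callable returns False.
      kept = [line for i, line in enumerate(lines) if not skiprows(i)]
      assert kept == ["Datetime,flow", "2020-01-01,1.2"]
      ```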
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --factor FACTOR
      [optional, default is 1.0, transformation]
      Factor to multiply the time series values.
  --offset OFFSET
      [optional, default is 0.0, transformation]
      Offset to add to the time series values.
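
      Together, factor and offset form a linear transform. A minimal
      sketch in plain Python (not tstoolbox internals), assuming the
      factor is applied before the offset:

      ```python
      def linear_convert(values, factor=1.0, offset=0.0):
          """Apply new = old * factor + offset to every value."""
          return [v * factor + offset for v in values]

      # e.g. degrees C -> degrees F uses factor=1.8, offset=32.
      assert linear_convert([0.0, 100.0], factor=1.8, offset=32.0) == [32.0, 212.0]
      ```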
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by reducing memory and processing requirements;
      however, be cautious about rounding from a fine interval to a very
      coarse one, since that could create duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If a unit is specified for the column as the second field of a
      ':'-delimited column name, then the specified units and the
      'source_units' must match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

convert_index

$ tstoolbox convert_index --help
usage: tstoolbox convert_index [-h] [--interval INTERVAL] [--epoch EPOCH]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--round_index ROUND_INDEX] [--dropna DROPNA]
  [--clean] [--names NAMES] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT] to

Convert datetime to/from Julian dates from different epochs.

positional arguments:
  to                    One of 'number' or 'datetime'.  If 'number', the source time-series
    should have a datetime index to convert to a number. If 'datetime', source
    data should be a number and the converted index will be datetime.


options:
  -h | --help
      show this help message and exit
  --interval INTERVAL
      [optional, defaults to None, transformation]
      The interval parameter defines the unit time. One of the pandas offset
      codes. The default of None sets the unit time for all defined epochs
      to daily, except 'unix', which defaults to seconds.
      For all defined epochs except 'unix' you can give any unit time
      smaller than daily; 'unix' requires an interval smaller than seconds.
      For an epoch that begins on an arbitrary date, you can use any
      interval equal to or smaller than the frequency of the time-series.
      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ Microseconds  │
      ├───────┼───────────────┤
      │ L     │ Milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) replace the "x" in the "Alias" column to have the following
      anchoring suffixes:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
      │       │ QS-DEC   │             │                            │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛

  --epoch EPOCH
      [optional, defaults to 'julian', transformation]
      Can be one of 'julian', 'reduced', 'modified', 'truncated', 'dublin',
      'cnes', 'ccsds', 'lop', 'lilian', 'rata_die', 'mars_sol_date',
      'unix', or a date and time.
      If supplying a date and time, most formats are recognized, but the
      closer the format is to ISO 8601 the better. You should also verify
      that the date was parsed as expected. If supplying only a date, the
      epoch starts at midnight at the start of that date.
      The 'unix' epoch uses a default interval of seconds, and all other defined
      epochs use a default interval of 'daily'.
      ┌───────────┬────────────────┬────────────────┬─────────────┐
      │ epoch     │ Epoch          │ Calculation    │ Notes       │
      ╞═══════════╪════════════════╪════════════════╪═════════════╡
      │ julian    │ 4713-01-01:12  │ JD             │             │
      │           │ BCE            │                │             │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ reduced   │ 1858-11-16:12  │ JD - 2400000   │ [ 1 ] [ 2 ] │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ modified  │ 1858-11-17:00  │ JD - 2400000.5 │ SAO 1957    │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ truncated │ 1968-05-24:00  │ floor (JD -    │ NASA 1979,  │
      │           │                │ 2440000.5)     │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ dublin    │ 1899-12-31:12  │ JD - 2415020   │ IAU 1955    │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ cnes      │ 1950-01-01:00  │ JD - 2433282.5 │ CNES [ 3 ]  │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ ccsds     │ 1958-01-01:00  │ JD - 2436204.5 │ CCSDS [ 3 ] │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ lop       │ 1992-01-01:00  │ JD - 2448622.5 │ LOP [ 3 ]   │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ lilian    │ 1582-10-15     │ floor (JD -    │ Count of    │
      │           │                │ 2299159.5)     │ days of the │
      │           │                │                │ Gregorian   │
      │           │                │                │ calendar,   │
      │           │                │                │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ rata_die  │ 0001-01-01     │ floor (JD -    │ Count of    │
      │           │ proleptic      │ 1721424.5)     │ days of the │
      │           │ Gregorian      │                │ Common Era, │
      │           │ calendar       │                │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ mars_sol  │ 1873-12-29:12  │ (JD - 2405522) │ Count of    │
      │           │                │ /1.02749       │ Martian     │
      │           │                │                │ days        │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ unix      │ 1970-01-01     │ JD - 2440587.5 │ seconds     │
      │           │ T00:00:00      │                │             │
      ╘═══════════╧════════════════╧════════════════╧═════════════╛

      1. Hopkins, Jeffrey L. (2013). Using Commercial Amateur Astronomical
      Spectrographs, p. 257, Springer Science & Business Media, ISBN
      9783319014425
      2. Palle, Pere L., Esteban, Cesar. (2014). Asteroseismology, p. 185,
      Cambridge University Press, ISBN 9781107470620
      3. Theveny, Pierre-Michel. (10 September 2001). "Date Format" The TPtime
      Handbook. Media Lab.
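
      Most of the daily epochs in the table reduce to subtracting a
      constant from the Julian Date. A minimal stdlib-Python sketch of
      that arithmetic (illustrative only, not tstoolbox code), using the
      fact that the Unix epoch 1970-01-01T00:00:00 UTC falls at JD
      2440587.5:

      ```python
      from datetime import datetime, timezone

      def to_epoch_days(dt, epoch="julian"):
          """Convert an aware datetime to days since the chosen epoch."""
          # Julian Date from the Unix timestamp (86400 seconds per day).
          jd = dt.timestamp() / 86400.0 + 2440587.5
          offsets = {"julian": 0.0, "reduced": 2400000.0,
                     "modified": 2400000.5, "dublin": 2415020.0}
          return jd - offsets[epoch]

      noon_j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
      assert to_epoch_days(noon_j2000) == 2451545.0            # JD of J2000.0
      assert to_epoch_days(noon_j2000, "modified") == 51544.5  # MJD
      ```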
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data      │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by reducing memory and processing requirements;
      however, be cautious about rounding from a fine interval to a very
      coarse one, since that could create duplicate values in the index.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' option will repair an input index, removing duplicate
      index values and sorting.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If a unit is specified for the column as the second field of a
      ':'-delimited column name, then the specified units and the
      'source_units' must match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

convert_index_to_julian

$ tstoolbox convert_index_to_julian --help
usage: tstoolbox convert_index_to_julian [-h] [--input_ts INPUT_TS]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE]
  [--round_index ROUND_INDEX] [--dropna DROPNA] [--clean] [--index_type
  INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--skiprows SKIPROWS]

Will be removed in a future version of tstoolbox.

Use convert_index in place of convert_index_to_julian.

For command line:

tstoolbox convert_index julian ...

For Python:

from tstoolbox import tstoolbox
ndf = tstoolbox.convert_index('julian', ...)

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS --columns COLUMNS --start_date START_DATE --end_date
  END_DATE --round_index ROUND_INDEX --dropna DROPNA --clean --index_type
  INDEX_TYPE --names NAMES --source_units SOURCE_UNITS --target_units
  TARGET_UNITS --skiprows SKIPROWS

converttz

$ tstoolbox converttz --help
usage: tstoolbox converttz [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--clean] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--skiprows
  SKIPROWS] [--tablefmt TABLEFMT] fromtz totz

Convert the time zone of the index.

positional arguments:
  fromtz                The time zone of the original time-series.  'EST' and
    'America/New_York' might in some sense be thought of as the same,
    however 'EST' forces the time index to keep a fixed offset from UTC,
    regardless of daylight saving time, whereas 'America/New_York'
    applies the appropriate daylight saving offsets.

  totz                  The time zone of the converted time-series.  Same note applies
    as for fromtz. Needs to be different from fromtz.
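
The distinction between a fixed-offset zone and a daylight-saving-aware
zone can be illustrated with stdlib Python (zoneinfo, Python >= 3.9;
this is a sketch of the concept, not tstoolbox code):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib in Python >= 3.9

stamp = datetime(2020, 7, 1, 12, tzinfo=timezone.utc)  # a summer instant

# A fixed-offset 'EST' stays at UTC-5 even in July...
est = timezone(timedelta(hours=-5), "EST")
# ...while 'America/New_York' observes daylight saving (UTC-4 in July).
nyc = ZoneInfo("America/New_York")

assert stamp.astimezone(est).hour == 7
assert stamp.astimezone(nyc).hour == 8
```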


options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data      │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces. As used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If a unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list, or the number of lines to
      skip at the start of the file if an integer.
      If used in Python, this can be a callable: the function is evaluated
      against each row index, returning True if the row should be skipped
      and False otherwise. An example of a valid callable argument would
      be lambda x: x in [0, 2].
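      The callable form of skiprows can be sketched with plain pandas, which
      tstoolbox builds on; the inline CSV content below is invented purely
      for illustration:

```python
# Sketch: using a callable 'skiprows' with pandas, as described above.
# The CSV text is invented purely for illustration.
from io import StringIO

import pandas as pd

csv_text = "junk header\nDatetime,flow\n2020-01-01,1.2\n2020-01-02,3.4\n"

# lambda x: x in [0] skips only the first (junk) line, so the real
# header on the next line is used for column names.
df = pd.read_csv(StringIO(csv_text), skiprows=lambda x: x in [0])
print(df.columns.tolist())  # ['Datetime', 'flow']
```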
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
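
The accumulating statistic can be sketched with plain pandas on a
datetime-indexed series; the data below is invented, and statistic='sum'
is assumed for the sketch:

```python
import pandas as pd

# Invented daily series for illustration.
idx = pd.date_range("2020-01-01", periods=4, freq="D")
ts = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx, name="flow")

# Running sum -- roughly what 'tstoolbox accumulate' produces when the
# statistic is 'sum' (assumed here for the sketch).
acc = ts.cumsum()
print(acc.tolist())  # [1.0, 3.0, 6.0, 10.0]
```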

correlation

$ tstoolbox correlation --help
usage: tstoolbox correlation [-h] [--method METHOD] [--input_ts INPUT_TS]
  [--start_date START_DATE] [--end_date END_DATE] [--columns COLUMNS] [--clean]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT]
  [--round_index ROUND_INDEX] [--dropna DROPNA] lags

Develop a correlation between time-series, potentially at lags.

positional arguments:
  lags                  If lags is greater than 0, returns a cross-correlation
    matrix between all time-series and all lags. If an integer, calculates
    and uses all lags up to and including that lag number. If a list,
    calculates each lag in the list. If a string, it must be a
    comma-separated list of integers.
    If lags == 0, returns an auto-correlation for each input time-series.
    Python example:
    lags=[2, 5, 3]

    Command line example:
    --lags='2,5,3'



options:
  -h | --help
      show this help message and exit
  --method METHOD
      [optional, default to "pearson"]
      Method of correlation:
      pearson : standard correlation coefficient
      
      kendall : Kendall Tau correlation coefficient
      
      spearman : Spearman rank correlation

  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read columns 4 and 1 from │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces. As used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If a unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list, or the number of lines to
      skip at the start of the file if an integer.
      If used in Python, this can be a callable: the function is evaluated
      against each row index, returning True if the row should be skipped
      and False otherwise. An example of a valid callable argument would
      be lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.

createts

$ tstoolbox createts --help
usage: tstoolbox createts [-h] [--freq FREQ] [--fillvalue FILLVALUE]
  [--input_ts INPUT_TS] [--index_type INDEX_TYPE] [--start_date START_DATE]
  [--end_date END_DATE] [--tablefmt TABLEFMT]

Create empty time series, optionally fill with a value.

options:
  -h | --help
      show this help message and exit
  --freq FREQ
      [optional, default is None]
      To use this option, start_date and end_date must also be supplied. The freq
      option is the pandas date offset code used to create the index.
      Python example:
      freq='A'

      Command line example:
      --freq='A'

      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ microseconds  │
      ├───────┼───────────────┤
      │ L     │ milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA,
      AS, BAS) support the following anchoring suffixes, which replace the
      "x" in the "Alias" column:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
      │       │ QS-DEC   │             │                            │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛

  --fillvalue FILLVALUE
      [optional, default is None]
      The fill value for the time-series. The default is None, which generates
      the date/time stamps only.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read columns 4 and 1 from │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
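
As a sketch, the createts output corresponds to a pandas date_range wrapped
in a DataFrame; the dates and fill value below are invented for
illustration:

```python
import pandas as pd

# Roughly: tstoolbox createts --start_date 2020-01-01 \
#   --end_date 2020-06-01 --freq MS --fillvalue 0
idx = pd.date_range(start="2020-01-01", end="2020-06-01", freq="MS")
df = pd.DataFrame({"value": 0.0}, index=idx)
print(len(df))  # 6 month-start timestamps, January through June
```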

date_offset

$ tstoolbox date_offset --help
usage: tstoolbox date_offset [-h] [--columns COLUMNS] [--dropna DROPNA]
  [--clean] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--input_ts INPUT_TS] [--start_date START_DATE] [--end_date END_DATE]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--round_index
  ROUND_INDEX] [--tablefmt TABLEFMT] intervals offset

If you want to adjust to a different time-zone, use the "converttz" tstoolbox
command.

positional arguments:
  intervals             Number of intervals of offset to shift the time index.  A positive
    integer moves the index forward, a negative one moves it backward.

  offset                Any of the Pandas offset codes.  This is only the offset code
    and doesn't include a prefixed interval.
    ┌───────┬───────────────┐
    │ Alias │ Description   │
    ╞═══════╪═══════════════╡
    │ N     │ Nanoseconds   │
    ├───────┼───────────────┤
    │ U     │ microseconds  │
    ├───────┼───────────────┤
    │ L     │ milliseconds  │
    ├───────┼───────────────┤
    │ S     │ Secondly      │
    ├───────┼───────────────┤
    │ T     │ Minutely      │
    ├───────┼───────────────┤
    │ H     │ Hourly        │
    ├───────┼───────────────┤
    │ D     │ calendar Day  │
    ├───────┼───────────────┤
    │ W     │ Weekly        │
    ├───────┼───────────────┤
    │ M     │ Month end     │
    ├───────┼───────────────┤
    │ MS    │ Month Start   │
    ├───────┼───────────────┤
    │ Q     │ Quarter end   │
    ├───────┼───────────────┤
    │ QS    │ Quarter Start │
    ├───────┼───────────────┤
    │ A     │ Annual end    │
    ├───────┼───────────────┤
    │ AS    │ Annual Start  │
    ╘═══════╧═══════════════╛

    Business offset codes.
    ┌───────┬────────────────────────────────────┐
    │ Alias │ Description                        │
    ╞═══════╪════════════════════════════════════╡
    │ B     │ Business day                       │
    ├───────┼────────────────────────────────────┤
    │ BM    │ Business Month end                 │
    ├───────┼────────────────────────────────────┤
    │ BMS   │ Business Month Start               │
    ├───────┼────────────────────────────────────┤
    │ BQ    │ Business Quarter end               │
    ├───────┼────────────────────────────────────┤
    │ BQS   │ Business Quarter Start             │
    ├───────┼────────────────────────────────────┤
    │ BA    │ Business Annual end                │
    ├───────┼────────────────────────────────────┤
    │ BAS   │ Business Annual Start              │
    ├───────┼────────────────────────────────────┤
    │ C     │ Custom business day (experimental) │
    ├───────┼────────────────────────────────────┤
    │ CBM   │ Custom Business Month end          │
    ├───────┼────────────────────────────────────┤
    │ CBMS  │ Custom Business Month Start        │
    ╘═══════╧════════════════════════════════════╛

    Weekly has the following anchored frequencies:
    ┌───────┬─────────────┬───────────────────────────────┐
    │ Alias │ Equivalents │ Description                   │
    ╞═══════╪═════════════╪═══════════════════════════════╡
    │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-MON │             │ Weekly frequency (MONdays)    │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-TUE │             │ Weekly frequency (TUEsdays)   │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-WED │             │ Weekly frequency (WEDnesdays) │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-THU │             │ Weekly frequency (THUrsdays)  │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-FRI │             │ Weekly frequency (FRIdays)    │
    ├───────┼─────────────┼───────────────────────────────┤
    │ W-SAT │             │ Weekly frequency (SATurdays)  │
    ╘═══════╧═════════════╧═══════════════════════════════╛

    Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA,
    AS, BAS) support the following anchoring suffixes, which replace the
    "x" in the "Alias" column:
    ┌───────┬──────────┬─────────────┬────────────────────────────┐
    │ Alias │ Examples │ Equivalents │ Description                │
    ╞═══════╪══════════╪═════════════╪════════════════════════════╡
    │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
    │       │ Q-DEC    │             │                            │
    │       │ AS-DEC   │             │                            │
    │       │ QS-DEC   │             │                            │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-JAN │          │             │ year ends end of JANuary   │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-FEB │          │             │ year ends end of FEBruary  │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-MAR │          │             │ year ends end of MARch     │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-APR │          │             │ year ends end of APRil     │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-MAY │          │             │ year ends end of MAY       │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-JUN │          │             │ year ends end of JUNe      │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-JUL │          │             │ year ends end of JULy      │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-AUG │          │             │ year ends end of AUGust    │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-SEP │          │             │ year ends end of SEPtember │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-OCT │          │             │ year ends end of OCTober   │
    ├───────┼──────────┼─────────────┼────────────────────────────┤
    │ x-NOV │          │             │ year ends end of NOVember  │
    ╘═══════╧══════════╧═════════════╧════════════════════════════╛



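    The intervals/offset pair maps onto pandas shift with a freq argument;
    a minimal sketch with invented data:

```python
import pandas as pd

# Invented daily series for illustration.
idx = pd.date_range("2020-01-01", periods=3, freq="D")
ts = pd.Series([1.0, 2.0, 3.0], index=idx)

# Roughly: tstoolbox date_offset 2 D  -- shift the index forward
# 2 days, leaving the values untouched.
shifted = ts.shift(2, freq="D")
print(shifted.index[0].date())  # 2020-01-03
```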
options:
  -h | --help
      show this help message and exit
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of the input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces. As used in the toolbox_utils pick command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list, or the number of lines to
      skip at the start of the file if an integer.
      If used in Python, this can be a callable: the function is evaluated
      against each row index, returning True if the row should be skipped
      and False otherwise. An example of a valid callable argument would
      be lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read data from the 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO
      object, or file name.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
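The "Python library examples" note above says `input_ts` accepts in-memory objects as well as file names. A minimal sketch of two equivalent ways to build such an input with plain pandas (hypothetical data; the actual tstoolbox call would receive either object via `input_ts=`):

```python
from io import StringIO

import pandas as pd

# 1. An in-memory DataFrame with a DatetimeIndex.
df = pd.DataFrame(
    {"flow": [1.0, 2.0, 3.0]},
    index=pd.date_range("2020-01-01", periods=3, freq="D"),
)

# 2. A StringIO wrapping the same CSV text that stdin would carry:
#    a single-line header followed by ISO 8601 dates.
csv_text = "Datetime,flow\n2020-01-01,1.0\n2020-01-02,2.0\n2020-01-03,3.0\n"
parsed = pd.read_csv(StringIO(csv_text), index_col=0, parse_dates=True)

print(parsed["flow"].tolist())  # [1.0, 2.0, 3.0]
```

Both objects carry the same time series; which to use is a matter of whether the data already lives in memory.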

date_slice

$ tstoolbox date_slice --help
usage: tstoolbox date_slice [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt TABLEFMT]

This isn't really useful anymore because "start_date" and "end_date" are
available in all sub-commands.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read data from the 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO
      object, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python, this can be a callable; the callable will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
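In pandas terms, what date_slice does with 'start_date' and 'end_date' is label-based slicing on a DatetimeIndex. A minimal sketch with made-up daily data:

```python
import pandas as pd

# Hypothetical daily series for January 2020.
df = pd.DataFrame(
    {"value": range(10)},
    index=pd.date_range("2020-01-01", periods=10, freq="D"),
)

# Equivalent of --start_date=2020-01-03 --end_date=2020-01-06.
# Label slicing on a DatetimeIndex includes both endpoints.
sliced = df.loc["2020-01-03":"2020-01-06"]

print(len(sliced))  # 4
```

Because both endpoints are included, the slice above returns four rows, January 3 through January 6.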

describe

$ tstoolbox describe --help
usage: tstoolbox describe [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--transpose]
  [--tablefmt TABLEFMT]

Print out statistics for the time-series.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read data from the 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO
      object, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python, this can be a callable; the callable will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --transpose
      [optional, default is False, output format]
      If the 'transpose' option is used, the describe output will be
      transposed.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
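The statistics that describe prints are the familiar pandas summary. A minimal sketch of what the output (and its --transpose option) looks like, using made-up data:

```python
import pandas as pd

# Hypothetical daily series.
df = pd.DataFrame(
    {"flow": [1.0, 2.0, 3.0, 4.0]},
    index=pd.date_range("2020-01-01", periods=4, freq="D"),
)

# count, mean, std, min, quartiles, max per column.
stats = df.describe()

# Roughly what --transpose emits: statistics as columns, series as rows.
transposed = stats.T

print(stats.loc["mean", "flow"])  # 2.5
```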

dtw

$ tstoolbox dtw --help
usage: tstoolbox dtw [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--clean] [--window WINDOW] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--tablefmt TABLEFMT]

Dynamic Time Warping.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read data from the 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO
      object, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python, this can be a callable; the callable will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --window WINDOW
      [optional, default is 10000]
      Window length.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
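The distance computation behind Dynamic Time Warping can be sketched as the classic dynamic program with a Sakoe-Chiba band whose half-width plays the role of the --window option. This is an illustrative textbook sketch, not tstoolbox's actual implementation:

```python
import numpy as np

def dtw_distance(a, b, window=10000):
    """DTW distance between two 1-D sequences, constrained to a
    Sakoe-Chiba band of half-width `window` (cf. --window)."""
    n, m = len(a), len(b)
    w = max(window, abs(n - m))  # band must cover the length difference
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        # Only consider j within the band around i.
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0
```

Identical sequences have zero warping cost; a smaller window speeds up the computation at the price of disallowing large time shifts.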

equation

$ tstoolbox equation --help
usage: tstoolbox equation [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--print_input
  PRINT_INPUT] [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt
  TABLEFMT] [--output_names OUTPUT_NAMES] equation_str

The <equation_str> argument is a string contained in single quotes with 'x',
'x[t]', or 'x1', 'x2', 'x3', ...etc. used as the variable representing the
input. For example, '(1 - x)*sin(x)'.

positional arguments:
  equation_str String contained in single quotes that defines the equation.
    Can have multiple equations separated by an "@" symbol.
    There are four different types of equations that can be used.
    ┌───────────────────────┬───────────┬─────────────────────────┐
    │ Description           │ Variables │ Examples                │
    ╞═══════════════════════╪═══════════╪═════════════════════════╡
    │ Equation applied to   │ x         │ x*0.3+4-x**2            │
    │ all values in the     │           │ sin(x)+pi*x             │
    │ dataset. Returns same │           │                         │
    │ number of columns as  │           │                         │
    │ input.                │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation used time    │ x and t   │ 0.6*max(x[t-1],x[t+1])  │
    │ relative to current   │           │                         │
    │ record. Applies       │           │                         │
    │ equation to each      │           │                         │
    │ column. Returns same  │           │                         │
    │ number of columns as  │           │                         │
    │ input.                │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation uses values  │ x1, x2,   │ x1+x2                   │
    │ from different        │ x3, ...   │                         │
    │ columns. Returns a    │ xN        │                         │
    │ single column.        │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation uses values  │ x1, x2,   │ x1[t-1]+x2+x3[t+1]      │
    │ from different        │ x3, ...   │                         │
    │ columns and values    │ xN, t     │                         │
    │ from different rows.  │           │                         │
    │ Returns a single      │           │                         │
    │ column.               │           │                         │
    ╘═══════════════════════╧═══════════╧═════════════════════════╛

    Mathematical functions in the 'np' (numpy) name space can be used.
    Additional examples:
    'x*4 + 2',
    'x**2 + cos(x)', and
    'tan(x*pi/180)'

    are all valid <equation> strings. The variable 't' is special,
    representing the index (usually time) at which 'x' occurs. This means
    you can do things like:
    'x[t] + max(x[t-1], x[t+1])*0.6'

    to add to the current value 0.6 times the maximum row adjacent value.
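    The equation types above can be sketched with plain numpy/pandas
    operations (an illustration of what the strings mean, not how
    tstoolbox evaluates them internally; the data here is made up):

```python
import numpy as np
import pandas as pd

x = pd.Series([0.0, 1.0, 2.0, 3.0])

# Type 1: equation applied to every value, e.g. 'x*0.3+4-x**2'.
y1 = x * 0.3 + 4 - x**2

# Type 2: time-relative terms; x[t-1] and x[t+1] correspond to
# shifting the series, e.g. '0.6*max(x[t-1],x[t+1])'.
y2 = 0.6 * np.maximum(x.shift(1), x.shift(-1))

# Type 3: combine columns into a single result, e.g. 'x1+x2'.
df = pd.DataFrame({"x1": x, "x2": x * 2})
y3 = df["x1"] + df["x2"]

print(y1.tolist())
```

    Note that the shift-based form leaves NaN at the edges where
    x[t-1] or x[t+1] does not exist.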

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from  │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python this can be a callable; the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --print_input PRINT_INPUT
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --output_names OUTPUT_NAMES
      [optional, output_format]
      The toolbox_utils will change the names of the output columns to include
      some record of the operations used on each column. The output_names
      will override that feature. Must be a list or tuple equal to the
      number of columns in the output data.

ewm_window

$ tstoolbox ewm_window --help
usage: tstoolbox ewm_window [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--statistic
  STATISTIC] [--alpha_com ALPHA_COM] [--alpha_span ALPHA_SPAN]
  [--alpha_halflife ALPHA_HALFLIFE] [--alpha ALPHA] [--min_periods
  MIN_PERIODS] [--adjust] [--ignore_na] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]

Exactly one of alpha_com (center of mass), alpha_span, alpha_halflife, and alpha
must be provided to calculate the 'alpha' term. Allowed values and relationship
between the parameters are specified in the parameter descriptions below; see
the link at the end of this section for a detailed explanation.

When adjust is True (default), weighted averages are calculated using weights
(1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.

When adjust is False, weighted averages are calculated recursively as:
  weighted_average[0] = arg[0]
  weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i]

When ignore_na is False (default), weights are based on absolute positions. For
example, the weights of x and y used in calculating the final weighted average
of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and (1-alpha)**2 and
alpha (if adjust is False).

When ignore_na is True weights are based on relative positions. For example, the
weights of x and y used in calculating the final weighted average of [x, None,
y] are 1-alpha and 1 (if adjust is True), and 1-alpha and alpha (if adjust is
False).

More details can be found at
<http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows>

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from  │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python this can be a callable; the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --statistic STATISTIC
      [optional, defaults to '']
      Statistic applied to each window.
      ┌───────────┬────────────────────┐
      │ statistic │ Description        │
      ╞═══════════╪════════════════════╡
      │ corr      │ correlation        │
      ├───────────┼────────────────────┤
      │ cov       │ covariance         │
      ├───────────┼────────────────────┤
      │ mean      │ mean               │
      ├───────────┼────────────────────┤
      │ std       │ standard deviation │
      ├───────────┼────────────────────┤
      │ var       │ variance           │
      ╘═══════════╧════════════════════╛

  --alpha_com ALPHA_COM
      [optional, defaults to None]
      Specify decay in terms of center of mass:
      alpha = 1/(1+`alpha_com`), for `alpha_com` >= 0

  --alpha_span ALPHA_SPAN
      [optional, defaults to None]
      Specify decay in terms of span:
      alpha = 2/(`alpha_span`+1), for `alpha_span` > 1

  --alpha_halflife ALPHA_HALFLIFE
      [optional, defaults to None]
      Specify decay in terms of half-life:
      alpha = 1-exp(log(0.5)/alpha_halflife), for
      alpha_halflife > 0

  --alpha ALPHA
      [optional, defaults to None]
      Specify smoothing factor alpha directly, 0<alpha<=1
  --min_periods MIN_PERIODS
      [optional, default is 0]
      Minimum number of observations in window required to have a value
      (otherwise result is NA).
  --adjust
      [optional, default is True]
      Divide by decaying adjustment factor in beginning periods to account
      for imbalance in relative weightings (viewing EWMA as a moving
      average).
  --ignore_na
      [optional, default is False] Ignore missing values when calculating
      weights.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
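
The alpha relationships given for --alpha_com, --alpha_span, and --alpha_halflife can be verified in a few lines of Python (the example values here are chosen so each maps to alpha = 0.5):

```python
import math

com, span, halflife = 1.0, 3.0, 1.0

alpha_from_com = 1.0 / (1.0 + com)                              # com >= 0
alpha_from_span = 2.0 / (span + 1.0)                            # span > 1
alpha_from_halflife = 1.0 - math.exp(math.log(0.5) / halflife)  # halflife > 0
```

All three produce alpha = 0.5, which is why exactly one of the four alpha options must be supplied.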

expanding_window

$ tstoolbox expanding_window --help
usage: tstoolbox expanding_window [-h] [--input_ts INPUT_TS]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE] [--dropna
  DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--clean] [--statistic STATISTIC] [--min_periods MIN_PERIODS] [--center]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT]

Calculate an expanding window statistic.
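
A minimal sketch of what an expanding-window statistic produces, using the pandas 'expanding' API that this command builds on: each output row is the statistic over all observations from the start of the series through that row.

```python
import pandas as pd

s = pd.Series([2.0, 4.0, 6.0, 8.0])

# Each row aggregates everything seen so far.
running_mean = s.expanding(min_periods=1).mean()  # 2.0, 3.0, 4.0, 5.0
running_sum = s.expanding(min_periods=1).sum()    # 2.0, 6.0, 12.0, 20.0
```

With min_periods greater than 1, the leading rows that have too few observations come back as NA instead.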

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from  │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if given as a list, or the number of
      lines to skip at the start of the file if given as an integer.
      If used in Python this can be a callable; the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --statistic STATISTIC
      [optional, default is '']
      ┌───────────┬──────────────────────┐
      │ statistic │ Meaning              │
      ╞═══════════╪══════════════════════╡
      │ corr      │ correlation          │
      ├───────────┼──────────────────────┤
      │ count     │ count of real values │
      ├───────────┼──────────────────────┤
      │ cov       │ covariance           │
      ├───────────┼──────────────────────┤
      │ kurt      │ kurtosis             │
      ├───────────┼──────────────────────┤
      │ max       │ maximum              │
      ├───────────┼──────────────────────┤
      │ mean      │ mean                 │
      ├───────────┼──────────────────────┤
      │ median    │ median               │
      ├───────────┼──────────────────────┤
      │ min       │ minimum              │
      ├───────────┼──────────────────────┤
      │ skew      │ skew                 │
      ├───────────┼──────────────────────┤
      │ std       │ standard deviation   │
      ├───────────┼──────────────────────┤
      │ sum       │ sum                  │
      ├───────────┼──────────────────────┤
      │ var       │ variance             │
      ╘═══════════╧══════════════════════╛

  --min_periods MIN_PERIODS
      [optional, default is 1]
      Minimum number of observations in window required to have a value
      (otherwise result is NA).
  --center
      [optional, default is False]
      Set the labels at the center of the window.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

fill

$ tstoolbox fill --help
usage: tstoolbox fill [-h] [--input_ts INPUT_TS] [--method METHOD]
  [--print_input] [--start_date START_DATE] [--end_date END_DATE] [--columns
  COLUMNS] [--clean] [--index_type INDEX_TYPE] [--names NAMES] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--skiprows SKIPROWS]
  [--from_columns FROM_COLUMNS] [--to_columns TO_COLUMNS] [--limit LIMIT]
  [--order ORDER] [--tablefmt TABLEFMT] [--force_freq FORCE_FREQ]

Missing values can occur because of NaN, or because the time series is sparse.
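
A sketch of the main fill methods listed below, expressed as the plain pandas calls they correspond to (an illustration, not the tstoolbox code itself):

```python
import pandas as pd

s = pd.Series([1.0, None, None, 4.0])

forward = s.ffill()                       # method='ffill': last good value
backward = s.bfill()                      # method='bfill': next good value
constant = s.fillna(2.3)                  # method=2.3: fill with a number
linear = s.interpolate(method="linear")   # method='linear': equally spaced
```

For this series, ffill gives 1, 1, 1, 4; bfill gives 1, 4, 4, 4; and linear interpolation fills the gap with 2 and 3.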

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from  │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --method METHOD
      [optional, default is 'ffill']
      String contained in single quotes or a number that defines the method to
      use for filling.
      ┌──────────────────────┬──────────────────────────────────────────────┐
      │ method=              │ fill missing values with...                  │
      ╞══════════════════════╪══════════════════════════════════════════════╡
      │ ffill                │ ...the last good value                       │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ bfill                │ ...the next good value                       │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ 2.3                  │ ...with this number                          │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ linear               │ ...ignore index, values are equally spaced   │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ index                │ ...linear interpolation with datetime index  │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ values               │ ...linear interpolation with numerical index │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ nearest              │ ...nearest good value                        │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ zero                 │ ...zeroth order spline                       │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ slinear              │ ...first order spline                        │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ quadratic            │ ...second order spline                       │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ cubic                │ ...third order spline                        │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ spline order=n       │ ...nth order spline                          │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ polynomial order=n   │ ...nth order polynomial                      │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ barycentric          │ ...barycentric                               │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ mean                 │ ...with mean                                 │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ median               │ ...with median                               │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ max                  │ ...with maximum                              │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ min                  │ ...with minimum                              │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ from                 │ ...with good values from other columns       │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ time                 │ ...time interpolation (daily and finer)      │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ krogh                │ ...krogh algorithm                           │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ piecewise_polynomial │ ...piecewise-polynomial algorithm            │
      │ from_derivatives     │                                              │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ pchip                │ ...pchip algorithm                           │
      ├──────────────────────┼──────────────────────────────────────────────┤
      │ akima                │ ...akima algorithm                           │
      ╘══════════════════════╧══════════════════════════════════════════════╛

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them with
      commas and no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --from_columns FROM_COLUMNS
      [required if method='from', otherwise not used]
      List of column names/numbers from which good values will be taken to fill
      missing values in the to_columns keyword.
  --to_columns TO_COLUMNS
      [required if method='from', otherwise not used]
      List of column names/numbers that missing values will be replaced in from
      good values in the from_columns keyword.
  --limit LIMIT
      [default is None]
      Gaps of missing values greater than this number will not be filled.
  --order ORDER
      [required if method is 'spline' or 'polynomial', otherwise not used,
      default is None]
      The order of the 'spline' or 'polynomial' fit for missing values.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --force_freq FORCE_FREQ
      [optional, output format]
      Force this frequency for the output. Typically you will only want to
      enforce a smaller interval, where toolbox_utils will insert missing
      values as needed. WARNING: you may lose data if not careful with
      this option. In general, letting the algorithm determine the
      frequency should always work, but this option will override it. Use
      pandas offset codes.
      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ Microseconds  │
      ├───────┼───────────────┤
      │ L     │ Milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) replace the "x" in the "Alias" column to have the following
      anchoring suffixes:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
      │       │ QS-DEC   │             │                            │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛
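
The fill methods in the table above are easy to illustrate without tstoolbox at all. As a minimal sketch (pure Python, not the tstoolbox implementation), here is what method='ffill' does conceptually, with None standing in for missing values:

```python
def ffill(values):
    """Propagate the last non-missing value forward (None marks gaps)."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Gaps before the first good value remain missing.
print(ffill([None, 1.0, None, None, 4.0, None]))
# → [None, 1.0, 1.0, 1.0, 4.0, 4.0]
```

The 'bfill' method is the mirror image, propagating the next good value backward.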

filter

$ tstoolbox filter --help
usage: tstoolbox filter [-h] [--butterworth_order BUTTERWORTH_ORDER]
  [--lowpass_cutoff LOWPASS_CUTOFF] [--highpass_cutoff HIGHPASS_CUTOFF]
  [--window_len WINDOW_LEN] [--pad_mode PAD_MODE] [--input_ts INPUT_TS]
  [--start_date START_DATE] [--end_date END_DATE] [--columns COLUMNS]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--clean] [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--float_format FLOAT_FORMAT]
  [--tablefmt TABLEFMT] filter_types filter_pass

Apply different filters to the time-series.

positional arguments:
  filter_types          One or more of
    bartlett, blackman, butterworth, fft, flat, hamming, hanning, kalman,
    lecolazet1, lecolazet2, tide_doodson, tide_fft, tide_usgs
    The "fft" and "butterworth" types are configured by the cutoff frequencies
    lowpass_cutoff and highpass_cutoff, applied according to the process
    defined in filter_pass.
    "fft" is a Fast Fourier Transform filter applied in the frequency domain.
    Doodson filter
    The Doodson X0 filter is a simple filter designed to damp out the main tidal
    frequencies. It takes hourly values, 19 values either side of the
    central one, and computes a weighted average with the following weights:
    (1010010110201102112 0 2112011020110100101)/30.
    In "Data Analysis and Methods in Oceanography":
    "The cosine-Lanczos filter, the transform filter, and the Butterworth filter
    are often preferred to the Godin filter and the earlier Doodson filter
    because of their superior ability to remove tidal period variability
    from oceanic signals."

  filter_pass           OneOf("lowpass", "highpass", "bandpass", "bandstop")
    Indicates what frequencies to block for the "fft" and "butterworth" filters.


options:
  -h | --help
      show this help message and exit
  --butterworth_order BUTTERWORTH_ORDER
      [optional, default is 10]
      The order of the butterworth filter.
  --lowpass_cutoff LOWPASS_CUTOFF
      [optional, default is None, used only if filter is "fft" or "butterworth"
      and required if filter_pass equals "lowpass", "bandpass" or
      "bandstop"]
      The low frequency cutoff when filter_pass equals "lowpass", "bandpass", or
      "bandstop".
  --highpass_cutoff HIGHPASS_CUTOFF
      [optional, default is None, used only if filter is "fft" or "butterworth"
      and required if filter_pass equals "highpass", "bandpass" or
      "bandstop"]
      The high frequency cutoff when filter_pass equals "highpass", "bandpass",
      or "bandstop".
  --window_len WINDOW_LEN
      [optional, default is 3]
      For the "flat", "hanning", "hamming", "bartlett", and "blackman" filters,
      the time-series is padded by one half the window length on each end,
      and window_len sets the length of the convolution kernel.
      For "fft", window_len softens the edges of the filter in the frequency
      domain. The larger the number, the softer the filter edges. A value
      of 1 gives a brick-wall step function, which may introduce spurious
      frequencies into the filtered output.
      For "tide_usgs" and "tide_doodson", window_len is fixed at 33 and 39,
      respectively.
  --pad_mode PAD_MODE
      [optional, default is "reflect"]
      The method used to pad the time-series. Uses some of the methods in
      numpy.pad.
      The pad methods "edge", "maximum", "mean", "median", "minimum", "reflect",
      "symmetric", "wrap" are available because they require no extra
      arguments.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them with
      commas and no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance by cutting down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
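
The Doodson X0 filter described above is simple enough to check by hand. As a minimal sketch (pure Python, not the tstoolbox implementation): the 39 hourly weights sum to 30, so after dividing by 30 the filter has unit gain and leaves a constant series unchanged:

```python
# Doodson X0 weights: 19 values either side of a central zero.
WEIGHTS = [int(c) for c in "1010010110201102112" "0" "2112011020110100101"]

def doodson_x0(hourly_values):
    """Weighted average of 39 hourly values centered on the value of interest."""
    if len(hourly_values) != len(WEIGHTS):
        raise ValueError("need exactly 39 hourly values")
    return sum(w * v for w, v in zip(WEIGHTS, hourly_values)) / 30.0

print(len(WEIGHTS), sum(WEIGHTS))  # 39 30  (weights sum to 30: unit gain)
print(doodson_x0([5.0] * 39))      # 5.0  (a constant series passes unchanged)
```

The tidal-damping behavior comes from where the nonzero weights fall relative to the main tidal periods; this sketch only demonstrates the normalization.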

fit

$ tstoolbox fit --help
usage: tstoolbox fit [-h] [--lowess_frac LOWESS_FRAC] [--input_ts INPUT_TS]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE] [--dropna
  DROPNA] [--clean] [--round_index ROUND_INDEX] [--skiprows SKIPROWS]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT] method

Fit model to data.

positional arguments:
  method                Any of 'lowess', 'linear', or a list of the same. The LOWESS technique
    is for vector data, like time-series, whereas LOESS is a generalized
    technique that can be applied to multi-dimensional data. For working
    with time-series, LOESS and LOWESS are identical.


options:
  -h | --help
      show this help message and exit
  --lowess_frac LOWESS_FRAC
      [optional, default=0.01, range between 0 and 1]
      Fraction of data used for 'method'="lowess".
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them with
      commas and no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; columns can be rearranged as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance by cutting down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified as the second field of a ':' delimited
      column name in the header line of the input, or with the 'source_units'
      keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
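
The accumulate subcommand keeps a running statistic down each selected
column. Conceptually this is a running reduction; the following is a minimal
sketch of the idea using only the Python standard library, not tstoolbox
itself:

```python
from itertools import accumulate

values = [1.0, 3.0, 2.0, 4.0]

# Running sum: each element is the total of everything seen so far.
running_sum = list(accumulate(values))       # [1.0, 4.0, 6.0, 10.0]

# Running maximum: each element is the largest value seen so far.
running_max = list(accumulate(values, max))  # [1.0, 3.0, 3.0, 4.0]

print(running_sum)
print(running_max)
```

tstoolbox applies the same kind of reduction along the datetime index of each
column and emits the result as a new time series.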

forecast

$ tstoolbox forecast --help
usage: tstoolbox forecast [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--round_index ROUND_INDEX] [--skiprows SKIPROWS] [--index_type INDEX_TYPE]
  [--names NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--print_input] [--tablefmt TABLEFMT] [--horizon HORIZON] [--print_cols
  PRINT_COLS]

Machine learning forecast using PyAF (Python Automatic Forecasting).

Uses a machine learning approach (the signal is cut into estimation and
validation parts, respectively 80% and 20% of the signal). A time-series
cross-validation can also be used.

Forecasts a time-series model on a given horizon (the forecast result is also
a pandas data-frame) and provides prediction/confidence intervals for the
forecasts.

Generic training features

  • Signal decomposition as the sum of a trend, periodic and AR component
  • Works as a competition between a comprehensive set of possible signal
  transformations and linear decompositions. For each transformed signal, a
  set of possible trends, periodic components and AR models is generated and
  all the possible combinations are estimated. The best decomposition in
  terms of performance is kept to forecast the signal (the performance is
  computed on a part of the signal that was not used for the estimation).
  • Signal transformation is supported before signal decomposition. Four
  transformations are supported by default. Other transformations are
  available (Box-Cox etc.).
  • All models are estimated using standard procedures and state-of-the-art
  time series modeling. For example, trend regressions and AR/ARX models are
  estimated using scikit-learn linear regression models.
  • Standard performance measures are used (L1, RMSE, MAPE, etc.)

Exogenous Data Support

  • Exogenous data can be provided to improve the forecasts. These are expected
  to be stored in an external data-frame (this data-frame will be merged with
  the training data-frame).
  • Exogenous data are integrated in the modeling process through their past
  values (ARX models).
  • Exogenous variables can be of any type (numeric, string, date, or object).
  • Exogenous variables are dummified for the non-numeric types, and
  standardized for the numeric types.

Hierarchical Forecasting

  • Bottom-Up, Top-Down (using proportions), Middle-Out and Optimal Combinations
  are implemented.
  • The modeling process is customizable and has a huge set of options. The
  default values of these options should, however, be sufficient to produce a
  reasonable-quality model in a limited amount of time (a few minutes).
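
The 80%/20% estimation/validation split described above can be sketched as
follows. This is a plain-Python illustration of the idea, not the PyAF
implementation:

```python
def split_signal(values, train_frac=0.8):
    """Cut a signal into estimation and validation parts.

    PyAF's default is 80% estimation, 20% validation.
    """
    cut = int(len(values) * train_frac)
    return values[:cut], values[cut:]

signal = list(range(100))
estimation, validation = split_signal(signal)
print(len(estimation), len(validation))  # 80 20
```

Models are fit on the estimation part, and the decomposition that performs
best on the held-out validation part is kept.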

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from   │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular column
      order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to drop records that have an NA value in any
      column, or 'all' to drop records that have NA in all columns. Set
      to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified as the second field of a ':' delimited
      column name in the header line of the input, or with the 'source_units'
      keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --horizon HORIZON
      Number of intervals to forecast.
  --print_cols PRINT_COLS
      Identifies what columns to return. One of "all" or "forecast".

gof

$ tstoolbox gof --help
usage: tstoolbox gof [-h] [--obs_col OBS_COL] [--sim_col SIM_COL]
  [--stats STATS] [--replace_nan REPLACE_NAN] [--replace_inf REPLACE_INF]
  [--remove_neg] [--remove_zero] [--start_date START_DATE] [--end_date
  END_DATE] [--round_index ROUND_INDEX] [--clean] [--index_type INDEX_TYPE]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--tablefmt
  TABLEFMT] [--float_format FLOAT_FORMAT] [--kge_sr KGE_SR] [--kge09_salpha
  KGE09_SALPHA] [--kge12_sgamma KGE12_SGAMMA] [--kge_sbeta KGE_SBETA]

The first time series must be the observed, the second the simulated series. You
can only give two time-series.

options:
  -h | --help
      show this help message and exit
  --obs_col OBS_COL
      If an integer, represents the column number of standard input. Can be a
      csv, wdm, hdf or xlsx file following the format specified in 'tstoolbox
      read ...'.
  --sim_col SIM_COL
      If an integer, represents the column number of standard input. Can be a
      csv, wdm, hdf or xlsx file following the format specified in 'tstoolbox
      read ...'.
  --stats STATS
      [optional, Python: list, Command line: comma separated string, default is
      'default']
      Comma separated list of statistical measures.
      You can select two groups of statistical measures.
      ┌────────────┬───────────────────────────────────────┐
      │ stats      │ Description                           │
      ╞════════════╪═══════════════════════════════════════╡
      │ default    │ A subset of common statistic measures │
      ├────────────┼───────────────────────────────────────┤
      │ all        │ All available statistic measures      │
      ╘════════════╧═══════════════════════════════════════╛

      The 'default' set of statistics are:
      ┌─────────────────┬──────────────────────────────────────────────────┐
      │ stats           │ Description                                      │
      ╞═════════════════╪══════════════════════════════════════════════════╡
      │ me              │ Mean error or bias -inf < ME < inf, close to 0   │
      │                 │ is better                                        │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ pc_bias         │ Percent Bias -inf < PC_BIAS < inf, close to 0 is │
      │                 │ better                                           │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ apc_bias        │ Absolute Percent Bias 0 <= APC_BIAS < inf, close │
      │                 │ to 0 is better                                   │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ rmsd            │ Root Mean Square Deviation/Error 0 <= RMSD <     │
      │                 │ inf, smaller is better                           │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ crmsd           │ Centered Root Mean Square Deviation/Error        │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ corrcoef        │ Pearson Correlation coefficient (r) -1 <= r <= 1 │
      │                 │ 1 perfect positive correlation 0 complete        │
      │                 │ randomness -1 perfect negative correlation       │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ coefdet         │ Coefficient of determination (r^2) 0 <= r^2 <= 1 │
      │                 │ 1 perfect correlation 0 complete randomness      │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ murphyss        │ Murphy Skill Score                               │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ nse             │ Nash-Sutcliffe Efficiency -inf < NSE < 1, larger │
      │                 │ is better                                        │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ kge09           │ Kling-Gupta Efficiency, 2009 -inf < KGE09 < 1,   │
      │                 │ larger is better                                 │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ kge12           │ Kling-Gupta Efficiency, 2012 -inf < KGE12 < 1,   │
      │                 │ larger is better                                 │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ index_agreement │ Index of agreement (d) 0 <= d < 1, larger is     │
      │                 │ better                                           │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ brierss         │ Brier Skill Score                                │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ mae             │ Mean Absolute Error 0 <= MAE < inf, smaller is   │
      │                 │ better                                           │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ mean            │ observed mean, simulated mean                    │
      ├─────────────────┼──────────────────────────────────────────────────┤
      │ stdev           │ observed stdev, simulated stdev                  │
      ╘═════════════════╧══════════════════════════════════════════════════╛

      Additional statistics:
      ┌─────────────┬───────────────────────────────────────────────────────┐
      │ stats       │ Description                                           │
      ╞═════════════╪═══════════════════════════════════════════════════════╡
      │ acc         │ Anomaly correlation coefficient (ACC) -1 <= r <= 1 1  │
      │             │ positive correlation of variation in anomalies 0      │
      │             │ complete randomness of variation in anomalies -1      │
      │             │ negative correlation of variation in anomalies        │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ d1          │ Index of agreement (d1) 0 <= d1 < 1, larger is better │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ d1_p        │ Legate-McCabe Index of Agreement 0 <= d1_p < 1,       │
      │             │ larger is better                                      │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ d           │ Index of agreement (d) 0 <= d < 1, larger is better   │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ dmod        │ Modified index of agreement (dmod) 0 <= dmod < 1,     │
      │             │ larger is better                                      │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ drel        │ Relative index of agreement (drel) 0 <= drel < 1,     │
      │             │ larger is better                                      │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ dr          │ Refined index of agreement (dr) -1 <= dr < 1, larger  │
      │             │ is better                                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ ed          │ Euclidean distance in vector space 0 <= ed < inf,     │
      │             │ smaller is better                                     │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ g_mean_diff │ Geometric mean difference                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h1_mahe     │ H1 absolute error                                     │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h1_mhe      │ H1 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h1_rmshe    │ H1 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h2_mahe     │ H2 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h2_mhe      │ H2 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h2_rmshe    │ H2 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h3_mahe     │ H3 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h3_mhe      │ H3 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h3_rmshe    │ H3 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h4_mahe     │ H4 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h4_mhe      │ H4 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h4_rmshe    │ H4 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h5_mahe     │ H5 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h5_mhe      │ H5 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h5_rmshe    │ H5 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h6_mahe     │ H6 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h6_mhe      │ H6 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h6_rmshe    │ H6 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h7_mahe     │ H7 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h7_mhe      │ H7 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h7_rmshe    │ H7 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h8_mahe     │ H8 mean absolute error                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h8_mhe      │ H8 mean error                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h8_rmshe    │ H8 root mean square error                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h10_mahe    │ H10 mean absolute error                               │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h10_mhe     │ H10 mean error                                        │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ h10_rmshe   │ H10 root mean square error                            │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ irmse       │ Inertial root mean square error (IRMSE) 0 <= irmse <  │
      │             │ inf, smaller is better                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ lm_index    │ Legate-McCabe Efficiency Index 0 <= lm_index < 1,     │
      │             │ larger is better                                      │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ maape       │ Mean Arctangent Absolute Percentage Error (MAAPE) 0   │
      │             │ <= maape < pi/2, smaller is better                    │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ male        │ Mean absolute log error 0 <= male < inf, smaller is   │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mapd        │ Mean absolute percentage deviation (MAPD)             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mape        │ Mean absolute percentage error (MAPE) 0 <= mape <     │
      │             │ inf, 0 indicates perfect correlation                  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mase        │ Mean absolute scaled error                            │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mb_r        │ Mielke-Berry R value (MB R) 0 <= mb_r < 1, larger is  │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mdae        │ Median absolute error (MdAE) 0 <= mdae < inf, smaller │
      │             │ is better                                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mde         │ Median error (MdE) -inf < mde < inf, closer to zero   │
      │             │ is better                                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mdse        │ Median squared error (MdSE) 0 <= mdse < inf, closer to│
      │             │ zero is better                                        │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mean_var    │ Mean variance                                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ me          │ Mean error -inf < me < inf, closer to zero is better  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mle         │ Mean log error -inf < mle < inf, closer to zero is    │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ mse         │ Mean squared error 0 <= mse < inf, smaller is better  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ msle        │ Mean squared log error 0 <= msle < inf, smaller is    │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ ned         │ Normalized Euclidian distance in vector space 0 <=    │
      │             │ ned < inf, smaller is better                          │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ nrmse_iqr   │ IQR normalized root mean square error 0 <= nrmse_iqr  │
      │             │ < inf, smaller is better                              │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ nrmse_mean  │ Mean normalized root mean square error 0 <=           │
      │             │ nrmse_mean < inf, smaller is better                   │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ nrmse_range │ Range normalized root mean square error 0 <=          │
      │             │ nrmse_range < inf, smaller is better                  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ nse_mod     │ Modified Nash-Sutcliffe efficiency (NSE mod) -inf <   │
      │             │ nse_mod < 1, larger is better                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ nse_rel     │ Relative Nash-Sutcliffe efficiency (NSE rel) -inf <   │
      │             │ nse_rel < 1, larger is better                         │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ rmse        │ Root mean square error 0 <= rmse < inf, smaller is    │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ rmsle       │ Root mean square log error 0 <= rmsle < inf, smaller  │
      │             │ is better                                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ sa          │ Spectral Angle (SA) -pi/2 <= sa < pi/2, closer to 0   │
      │             │ is better                                             │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ sc          │ Spectral Correlation (SC) -pi/2 <= sc < pi/2, closer  │
      │             │ to 0 is better                                        │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ sga         │ Spectral Gradient Angle (SGA) -pi/2 <= sga < pi/2,    │
      │             │ closer to 0 is better                                 │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ sid         │ Spectral Information Divergence (SID) -pi/2 <= sid <  │
      │             │ pi/2, closer to 0 is better                           │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ smape1      │ Symmetric Mean Absolute Percentage Error (1) (SMAPE1) │
      │             │ 0 <= smape1 < 100, smaller is better                  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ smape2      │ Symmetric Mean Absolute Percentage Error (2) (SMAPE2) │
      │             │ 0 <= smape2 < 100, smaller is better                  │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ spearman_r  │ Spearman rank correlation coefficient -1 <=           │
      │             │ spearman_r <= 1 1 perfect positive correlation 0      │
      │             │ complete randomness -1 perfect negative correlation   │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ ve          │ Volumetric Efficiency (VE) 0 <= ve < 1, larger is     │
      │             │ better                                                │
      ├─────────────┼───────────────────────────────────────────────────────┤
      │ watt_m      │ Watterson's M (M) -1 <= watt_m < 1, larger is better  │
      ╘═════════════╧═══════════════════════════════════════════════════════╛
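
As a concrete illustration of two of the measures above, NSE and KGE09 can be
computed by hand from their standard formulas. This is a hedged plain-Python
sketch, not the tstoolbox implementation:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect match; < 0 is worse
    than simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def kge09(obs, sim):
    """Kling-Gupta Efficiency (2009): 1 is perfect. Combines the
    correlation r, variability ratio alpha, and bias ratio beta."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]
print(nse(obs, obs))    # 1.0 for a perfect simulation
print(kge09(obs, obs))  # 1.0 for a perfect simulation
```

Both measures approach 1 as the simulated series approaches the observed
series, which is why "larger is better" for these statistics.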

  --replace_nan REPLACE_NAN
      If given, indicates which value to replace NaN values with in the two
      arrays. If None, when a NaN value is found at the i-th position in
      the observed OR simulated array, the i-th value of the observed and
      simulated array are removed before the computation.
  --replace_inf REPLACE_INF
      If given, indicates which value to replace Inf values with in the two
      arrays. If None, when an inf value is found at the i-th position in
      the observed OR simulated array, the i-th value of the observed and
      simulated array are removed before the computation.
  --remove_neg
      If True, when a negative value is found at the i-th position in the
      observed OR simulated array, the i-th value of the observed AND
      simulated array are removed before the computation.
  --remove_zero
      If true, when a zero value is found at the i-th position in the observed
      OR simulated array, the i-th value of the observed AND simulated
      array are removed before the computation.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --kge_sr KGE_SR
      [optional, defaults to 1.0]
      Scaling factor for kge09 and kge12 correlation.
  --kge09_salpha KGE09_SALPHA
      [optional, defaults to 1.0]
      Scaling factor for kge09 alpha.
  --kge12_sgamma KGE12_SGAMMA
      [optional, defaults to 1.0]
      Scaling factor for kge12 gamma.
  --kge_sbeta KGE_SBETA
      [optional, defaults to 1.0]
      Scaling factor for kge09 and kge12 beta.
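The pairwise NaN/Inf handling described above for --replace_nan and --replace_inf can be sketched in Python. This is a hypothetical helper written for illustration only, not tstoolbox internals:

```python
import numpy as np

def pairwise_clean(obs, sim, replace_nan=None, replace_inf=None):
    """Mirror the documented behavior: substitute NaN/Inf if a replacement
    value is given, otherwise drop the i-th element from BOTH arrays."""
    obs = np.asarray(obs, dtype=float).copy()
    sim = np.asarray(sim, dtype=float).copy()
    if replace_nan is not None:
        obs[np.isnan(obs)] = replace_nan
        sim[np.isnan(sim)] = replace_nan
    if replace_inf is not None:
        obs[np.isinf(obs)] = replace_inf
        sim[np.isinf(sim)] = replace_inf
    # Drop any position still NaN or Inf in either array.
    keep = np.isfinite(obs) & np.isfinite(sim)
    return obs[keep], sim[keep]

# A NaN in `obs` and an Inf in `sim` each remove that position from both.
obs, sim = pairwise_clean([1.0, np.nan, 3.0], [1.5, 2.0, np.inf])
print(obs, sim)
```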

lag

$ tstoolbox lag --help
usage: tstoolbox lag [-h] [--input_ts INPUT_TS] [--print_input]
  [--start_date START_DATE] [--end_date END_DATE] [--columns COLUMNS] [--clean]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT]
  lags

Create a series of lagged time-series.

positional arguments:
  lags                  If an integer, calculates all lags up to and including the
    lag number. If a list, calculates each lag in the list. If a string, it
    must be a comma-separated list of integers.


options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns when the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
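Lagging shifts each series forward in time by the given number of steps. The equivalent pandas operation is `DataFrame.shift`; the sketch below shows what `lags=2` conceptually produces (the `flow_lagN` column naming is illustrative, not tstoolbox's exact scheme):

```python
import pandas as pd

df = pd.DataFrame(
    {"flow": [1.0, 2.0, 3.0, 4.0]},
    index=pd.date_range("2000-01-01", periods=4, freq="D"),
)

# lags=2 -> keep the original column and add lag-1 and lag-2 copies;
# the first `lag` values of each lagged column are NaN.
for lag in (1, 2):
    df[f"flow_lag{lag}"] = df["flow"].shift(lag)

print(df)
```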

normalization

$ tstoolbox normalization --help
usage: tstoolbox normalization [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--mode MODE]
  [--min_limit MIN_LIMIT] [--max_limit MAX_LIMIT] [--pct_rank_method
  PCT_RANK_METHOD] [--print_input] [--round_index ROUND_INDEX] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT]
  [--tablefmt TABLEFMT] [--with_centering] [--with_scaling] [--quantile_range
  QUANTILE_RANGE]

Scale the time-series.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --mode MODE
      [optional, default is 'minmax']
      minmax
        min_limit + (X-Xmin)/(Xmax-Xmin)*(max_limit-min_limit)

      zscore
        (X-mean(X))/stddev(X)

      pct_rank
        rank(X)*100/N

      maxabs
        Scale by absolute value between -1 and 1.

      normal
        Scale to unit normal.

      robust
        Robust scale to ranked quantile ranges.

  --min_limit MIN_LIMIT
      [optional, defaults to 0, used for mode=minmax]
      Defines the minimum limit of the minmax normalization.
  --max_limit MAX_LIMIT
      [optional, defaults to 1, used for mode=minmax]
      Defines the maximum limit of the minmax normalization.
  --pct_rank_method PCT_RANK_METHOD
      [optional, defaults to 'average']
      Defines how tied ranks are broken. Can be 'average', 'min', 'max',
      'first', 'dense'.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --with_centering
      [optional, defaults to True, used when mode=robust]
      If True, center the data before scaling.
  --with_scaling
      [optional, defaults to True, used when mode=robust]
      If True, scale the data to interquartile range.
  --quantile_range QUANTILE_RANGE
      [optional, defaults to (0.25, 0.75) (q_min, q_max), 0.0 < q_min < q_max <
      100.0]
      Quantile range used to calculate scale.
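The minmax and zscore modes above are direct formulas, so they can be sketched standalone. This is an illustration of the documented equations, not tstoolbox's implementation:

```python
import numpy as np

def minmax(x, min_limit=0.0, max_limit=1.0):
    # min_limit + (X - Xmin) / (Xmax - Xmin) * (max_limit - min_limit)
    x = np.asarray(x, dtype=float)
    return min_limit + (x - x.min()) / (x.max() - x.min()) * (max_limit - min_limit)

def zscore(x):
    # (X - mean(X)) / stddev(X)
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# With the default limits, minmax maps the minimum to 0 and the maximum to 1.
print(minmax([10.0, 20.0, 30.0]))
```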

pca

$ tstoolbox pca --help
usage: tstoolbox pca [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--n_components
  N_COMPONENTS] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--round_index ROUND_INDEX] [--tablefmt TABLEFMT]

Perform a principal component analysis (PCA). Does not return a time-series.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --n_components N_COMPONENTS
      [optional, default is None]
      The columns in the input_ts will be grouped into n_components groups.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
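What n_components controls can be seen in a bare-bones PCA built on the SVD. This is an independent sketch of the technique, and tstoolbox's actual implementation may differ:

```python
import numpy as np

def pca(data, n_components=None):
    """Project the columns of `data` onto their principal components."""
    X = np.asarray(data, dtype=float)
    X = X - X.mean(axis=0)              # center each column
    # SVD of the centered data; rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if n_components is not None:
        Vt = Vt[:n_components]          # keep the leading components
    return X @ Vt.T                     # scores (projected data)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
scores = pca(X, n_components=2)
print(scores.shape)  # (100, 2)
```

The score columns come out mutually orthogonal, which is the defining property of the decomposition.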

pct_change

$ tstoolbox pct_change --help
usage: tstoolbox pct_change [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--periods
  PERIODS] [--fill_method FILL_METHOD] [--limit LIMIT] [--freq FREQ]
  [--print_input] [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt
  TABLEFMT]

Return the percent change between times.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate the names or
      numbers by commas with no spaces. As used in the toolbox_utils pick
      command.
      This means you don't have to create a data set with a particular column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
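
A minimal sketch of the callable semantics described above, using
pandas.read_csv directly (tstoolbox is built on pandas; the CSV content
here is made up for illustration):

```python
# Hypothetical in-memory CSV: lines 0 and 2 are junk, line 1 is the header.
from io import StringIO

import pandas as pd

raw = StringIO(
    "junk line\n"
    "Datetime,flow\n"
    "more junk\n"
    "2000-01-01,4.5\n"
    "2000-01-02,4.6\n"
)

# skiprows as a callable: skip rows 0 and 2 (0-indexed) so the header is
# read from line 1 and only the two data rows remain.
df = pd.read_csv(raw, skiprows=lambda x: x in [0, 2], index_col=0, parse_dates=True)
print(df.shape)  # (2, 1)
```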
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --periods PERIODS
      [optional, default is 1]
      The number of intervals to calculate percent change across.
  --fill_method FILL_METHOD
      [optional, defaults to 'pad']
      Fill method for NA. Defaults to 'pad'.
  --limit LIMIT
      [optional, defaults to None]
      The maximum number of consecutive NA values that will be filled.
  --freq FREQ
      [optional, defaults to None]
      A pandas time offset string to represent the interval.
      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ microseconds  │
      ├───────┼───────────────┤
      │ L     │ milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) accept the following anchoring suffixes, shown in place of "x"
      in the "Alias" column:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
      │       │ QS-DEC   │             │                            │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛
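
These aliases can be checked directly with pandas (tstoolbox is built on
pandas); a short sketch:

```python
import pandas as pd

# 'MS' anchors on month starts; 'W-WED' is weekly anchored on Wednesdays.
ms = pd.date_range("2000-01-01", periods=3, freq="MS")
wed = pd.date_range("2000-01-01", periods=3, freq="W-WED")

print(list(ms.strftime("%Y-%m-%d")))   # ['2000-01-01', '2000-02-01', '2000-03-01']
print(list(wed.strftime("%Y-%m-%d")))  # ['2000-01-05', '2000-01-12', '2000-01-19']
```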

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance by reducing memory and processing requirements; however,
      be cautious about rounding from a small interval to a very coarse
      one, since that could create duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
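
A rough pandas sketch of what accumulate with statistic='sum' computes,
namely a running total of each column (tstoolbox's exact output column
naming may differ):

```python
import pandas as pd

df = pd.DataFrame(
    {"flow": [1.0, 2.0, 3.0, 4.0]},
    index=pd.date_range("2000-01-01", periods=4, freq="D"),
)

# Accumulating sum: each value is the total of everything up to that point.
acc = df.cumsum()
print(acc["flow"].tolist())  # [1.0, 3.0, 6.0, 10.0]
```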

peak_detection

$ tstoolbox peak_detection --help
usage: tstoolbox peak_detection [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--method
  METHOD] [--extrema EXTREMA] [--window WINDOW] [--pad_len PAD_LEN] [--points
  POINTS] [--lock_frequency] [--float_format FLOAT_FORMAT] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--print_input PRINT_INPUT] [--tablefmt TABLEFMT]

Peak and valley detection.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate the names or
      numbers by commas with no spaces. As used in the toolbox_utils pick
      command.
      This means you don't have to create a data set with a particular column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --method METHOD
      [optional, default is 'rel']
      'rel', 'minmax', 'zero_crossing', 'parabola', 'sine' methods are
      available. The different algorithms have different strengths and
      weaknesses.
  --extrema EXTREMA
      [optional, default is 'peak']
      'peak', 'valley', or 'both' to determine what should be returned.
  --window WINDOW
      [optional, default is 24]
      There will not usually be multiple peaks within the window number of
      values. The different methods use this variable in different ways.
      For 'rel' the window keyword specifies how many points on each side
      to require a comparator(n,n+x) = True. For 'minmax' the window
      keyword is the distance to look ahead from a peak candidate to
      determine if it is the actual peak.
      '(sample / period) / f'
      where a good choice for f is between 1.25 and 4.
      For 'zero_crossing' the window keyword is the dimension of the smoothing
      window and should be an odd integer.
  --pad_len PAD_LEN
      [optional, default is 5]
      Used with FFT to pad edges of time-series.
  --points POINTS
      [optional, default is 9]
      For 'parabola' and 'sine' methods. How many points around the peak should
      be used during curve fitting; must be odd.
  --lock_frequency
      [optional, default is False]
      For 'sine' method only. Specifies if the frequency argument of the model
      function should be locked to the value calculated from the raw peaks
      or if optimization process may tinker with it.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance by reducing memory and processing requirements; however,
      be cautious about rounding from a small interval to a very coarse
      one, since that could create duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input PRINT_INPUT
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
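
The 'rel' idea (a point is a peak when it exceeds its `window` neighbors on
each side) can be sketched in plain Python; tstoolbox's actual
implementation differs in detail:

```python
def rel_peaks(y, window=1):
    """Indices i where y[i] is strictly greater than all of its `window`
    neighbors on each side (a simplified 'rel'-style comparator)."""
    peaks = []
    for i in range(window, len(y) - window):
        left_ok = all(y[i] > y[j] for j in range(i - window, i))
        right_ok = all(y[i] > y[j] for j in range(i + 1, i + 1 + window))
        if left_ok and right_ok:
            peaks.append(i)
    return peaks

print(rel_peaks([0, 2, 1, 3, 1, 0, 4, 0], window=1))  # [1, 3, 6]
```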

pick

$ tstoolbox pick --help
usage: tstoolbox pick [-h] [--input_ts INPUT_TS] [--start_date START_DATE]
  [--end_date END_DATE] [--round_index ROUND_INDEX] [--dropna DROPNA]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT] columns

DEPRECATED: Effectively replaced by the "columns" keyword available in all other
functions.

Will be removed in a future version of tstoolbox.

Can use column names or column numbers. If using numbers, column number 1 is the
first data column.

positional arguments:
  columns [optional, defaults to all columns, input filter]
    Columns to select out of input. Can use column names from the first line
    header or column numbers. If using numbers, column number 1 is the first
    data column. To pick multiple columns, separate the names or numbers by
    commas with no spaces. As used in the toolbox_utils pick command.
    This means you don't have to create a data set with a particular column
    order; you can rearrange columns as the data is read in.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance by reducing memory and processing requirements; however,
      be cautious about rounding from a small interval to a very coarse
      one, since that could create duplicate values in the index.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable: the function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
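
Since pick is deprecated in favor of the columns keyword, the selection and
reordering it performs amounts to this pandas operation (illustrative data):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})

# Equivalent of --columns=c,a: select two columns and put them in that order.
picked = df[["c", "a"]]
print(picked.columns.tolist())  # ['c', 'a']
```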

plot

$ tstoolbox plot --help
usage: tstoolbox plot [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--dropna DROPNA] [--index_type INDEX_TYPE] [--names NAMES]
  [--ofilename OFILENAME] [--type TYPE] [--xtitle XTITLE] [--ytitle YTITLE]
  [--title TITLE] [--figsize FIGSIZE] [--legend LEGEND] [--legend_names
  LEGEND_NAMES] [--subplots] [--sharex] [--sharey] [--colors COLORS]
  [--linestyles LINESTYLES] [--markerstyles MARKERSTYLES] [--bar_hatchstyles
  BAR_HATCHSTYLES] [--style STYLE] [--logx] [--logy] [--xaxis XAXIS] [--yaxis
  YAXIS] [--xlim XLIM] [--ylim YLIM] [--secondary_y] [--secondary_x]
  [--mark_right] [--scatter_matrix_diagonal SCATTER_MATRIX_DIAGONAL]
  [--bootstrap_size BOOTSTRAP_SIZE] [--bootstrap_samples BOOTSTRAP_SAMPLES]
  [--xy_match_line XY_MATCH_LINE] [--grid] [--label_rotation LABEL_ROTATION]
  [--label_skip LABEL_SKIP] [--force_freq FORCE_FREQ] [--drawstyle DRAWSTYLE]
  [--por] [--invert_xaxis] [--invert_yaxis] [--round_index ROUND_INDEX]
  [--plotting_position PLOTTING_POSITION] [--prob_plot_sort_values
  PROB_PLOT_SORT_VALUES] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--lag_plot_lag LAG_PLOT_LAG] [--plot_styles PLOT_STYLES]
  [--hlines_y HLINES_Y] [--hlines_xmin HLINES_XMIN] [--hlines_xmax
  HLINES_XMAX] [--hlines_colors HLINES_COLORS] [--hlines_linestyles
  HLINES_LINESTYLES] [--vlines_x VLINES_X] [--vlines_ymin VLINES_YMIN]
  [--vlines_ymax VLINES_YMAX] [--vlines_colors VLINES_COLORS]
  [--vlines_linestyles VLINES_LINESTYLES]

Plot data.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].
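For the Python API, the shape of the "input_ts" data can be sketched with pandas (assumes pandas is installed; the tstoolbox call in the comment is illustrative only):

```python
from io import StringIO

import pandas as pd

# A minimal sketch of preparing "input_ts" data for the Python API:
# the same single-line header and date/time index column that the CLI
# reads from a file or stdin.
csv_text = StringIO(
    "Datetime,flow\n"
    "2020-01-01,1.2\n"
    "2020-01-02,3.4\n"
    "2020-01-03,2.1\n"
)
df = pd.read_csv(csv_text, index_col=0, parse_dates=True)

# 'df' (or the StringIO itself, a dict, list, or file name) can then be
# passed as input_ts=... to a tstoolbox function, for example
# (illustrative call, assuming tstoolbox is installed):
#   from tstoolbox import tstoolbox
#   tstoolbox.plot(input_ts=df, ofilename="plot.png")
print(df.shape)
```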

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate by commas
      with no spaces. As used in the toolbox_utils pick command.
      This solves a big problem: you don't have to create a data set with
      a certain column order; you can rearrange columns when the data is
      read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
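Since tstoolbox reads CSV with pandas, the callable form can be sketched with pandas.read_csv directly (an assumption about the underlying reader; the data here is made up):

```python
from io import StringIO

import pandas as pd

# skiprows as a callable: pandas evaluates it against each 0-indexed
# physical line number and skips lines where it returns True.
data = StringIO(
    "junk line\n"
    "Datetime,value\n"
    "skip me,0\n"
    "2020-01-01,10\n"
    "2020-01-02,20\n"
)
# Skip lines 0 and 2, so the header is read from line 1 and the
# "skip me" row is dropped.
df = pd.read_csv(data, skiprows=lambda x: x in [0, 2],
                 index_col=0, parse_dates=True)
print(df)
```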
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --ofilename OFILENAME
      [optional, defaults to 'plot.png']
      Output filename for the plot. Extension defines the type, for example
      'filename.png' will create a PNG file.
      If used within Python, and ofilename is None will return the Matplotlib
      figure that can then be changed or added to as needed.
  --type TYPE
      [optional, defaults to 'time']
      The plot type. One of 'time', 'xy', 'double_mass', 'boxplot',
      'scatter_matrix', 'lag_plot', 'autocorrelation', 'bootstrap',
      'histogram', 'kde', 'kde_time', 'bar', 'barh', 'bar_stacked',
      'barh_stacked', 'heatmap', 'norm_xaxis', 'norm_yaxis',
      'lognorm_xaxis', 'lognorm_yaxis', 'weibull_xaxis', 'weibull_yaxis',
      'taylor', or 'target'.
      Can be one of the following:
      time
        Standard time series plot.
        Data must be organized as 'index,y1,y2,y3,...,yN'. The 'index' must be a
        date/time and all data columns are plotted. Legend names are
        taken from the column names in the first row unless over-ridden
        by the legend_names keyword.

      xy
        An 'x,y' plot, also known as a scatter plot.
        ${xydata}

      double_mass
        An 'x,y' plot of the cumulative sum of x and y.
        ${xydata}

      boxplot
        Box extends from lower to upper quartile, with line at the median.
        Depending on the statistics, the whiskers represent the range of
        the data or 1.5 times the inter-quartile range (Q3 - Q1).
        ${ydata}

      scatter_matrix
        Plots all columns against each other in a matrix, with the
        diagonal plots being either a histogram or KDE probability
        distribution, depending on the scatter_matrix_diagonal keyword.
        ${ydata}

      lag_plot
        Indicates structure in the data.
        ${yone}

      autocorrelation
        Plot autocorrelation. Only available for a single time-series.
        ${yone}

      bootstrap
        Visually assess aspects of a data set by plotting random selections of
        values. Only available for a single time-series.
        ${yone}

      histogram
        Calculate and create a histogram plot. See 'kde' for a smooth
        representation of a histogram.

      kde
        This plot is an estimate of the probability density function based
        on the data, called kernel density estimation (KDE).
        ${ydata}

      kde_time
        This plot is an estimate of the probability density function based
        on the data, called kernel density estimation (KDE), combined with
        a time-series plot.
        ${ydata}

      bar
        Column plot.

      barh
        A horizontal bar plot.

      bar_stacked
        A stacked column plot.

      barh_stacked
        A horizontal stacked bar plot.

      heatmap
        Create a 2D heatmap of daily data, with day of year on the x-axis
        and year on the y-axis. Only available for a single, daily
        time-series.

      norm_xaxis
        Sort, calculate probabilities, and plot data against an x-axis
        normal distribution.

      norm_yaxis
        Sort, calculate probabilities, and plot data against a y-axis
        normal distribution.

      lognorm_xaxis
        Sort, calculate probabilities, and plot data against an x-axis
        lognormal distribution.

      lognorm_yaxis
        Sort, calculate probabilities, and plot data against a y-axis
        lognormal distribution.

      weibull_xaxis
        Sort, calculate probabilities, and plot data against an x-axis
        Weibull distribution.

      weibull_yaxis
        Sort, calculate probabilities, and plot data against a y-axis
        Weibull distribution.

      taylor
        Creates a Taylor diagram that compares three goodness-of-fit
        statistics on one plot. The three statistics calculated and
        displayed are standard deviation, correlation coefficient, and
        centered root mean square deviation. The data columns have to be
        organized as 'observed,simulated1,simulated2,simulated3,...etc.'

      target
        Creates a target diagram that compares three goodness-of-fit
        statistics on one plot. The three statistics calculated and
        displayed are bias, root mean square deviation, and centered root
        mean square deviation. The data columns have to be organized as
        'observed,simulated1,simulated2,simulated3,...etc.'
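The three statistics named for the 'taylor' and 'target' types can be sketched in plain Python (an illustrative computation, not tstoolbox's internal code; note the identity rmsd**2 == bias**2 + crmsd**2):

```python
import math

def target_stats(observed, simulated):
    """Bias, RMSD, and centered RMSD -- the three goodness-of-fit
    statistics shown on a 'target' diagram (illustrative sketch)."""
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    bias = ms - mo
    rmsd = math.sqrt(
        sum((s - o) ** 2 for o, s in zip(observed, simulated)) / n
    )
    # Centered RMSD removes the means before differencing.
    crmsd = math.sqrt(
        sum(((s - ms) - (o - mo)) ** 2 for o, s in zip(observed, simulated)) / n
    )
    return bias, rmsd, crmsd

obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 2.3, 2.8, 4.4]
bias, rmsd, crmsd = target_stats(obs, sim)
print(bias, rmsd, crmsd)
```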

  --xtitle XTITLE
      [optional, default depends on type]
      Title of x-axis.
  --ytitle YTITLE
      [optional, default depends on type]
      Title of y-axis.
  --title TITLE
      [optional, defaults to '']
      Title of chart.
  --figsize FIGSIZE
      [optional, defaults to '10,6.5']
      The 'width,height' of plot in inches.
  --legend LEGEND
      [optional, defaults to True]
      Whether to display the legend.
  --legend_names LEGEND_NAMES
      [optional, defaults to None]
      The legend would normally use the time-series names associated with
      the input data. The 'legend_names' option allows you to override the
      names in the data set. You must supply a comma-separated list of
      strings, one for each time-series in the data set.
  --subplots
      [optional, defaults to False]
      Make separate subplots for each time series.
  --sharex
      [optional, defaults to True]
      If subplots=True, share the x axis.
  --sharey
      [optional, defaults to False]
      If subplots=True, share the y axis.
  --colors COLORS
      [optional, default is 'auto']
      The default 'auto' will cycle through matplotlib colors in the chosen
      style.
      At the command line, supply a comma-separated list of matplotlib
      color codes; within Python, a list of color code strings.
      Can identify colors in four different ways.
      1. Use 'CN' where N is a number from 0 to 9 to get the Nth color
         from the current style.
      2. Single character code from the table below.
      ┌──────┬─────────┐
      │ Code │ Color   │
      ╞══════╪═════════╡
      │ b    │ blue    │
      ├──────┼─────────┤
      │ g    │ green   │
      ├──────┼─────────┤
      │ r    │ red     │
      ├──────┼─────────┤
      │ c    │ cyan    │
      ├──────┼─────────┤
      │ m    │ magenta │
      ├──────┼─────────┤
      │ y    │ yellow  │
      ├──────┼─────────┤
      │ k    │ black   │
      ╘══════╧═════════╛

      3. Number between 0 and 1 that represents the level of gray, where 0
         is black and 1 is white.
      4. Any of the HTML color names.
      ┌──────────────────┐
      │ HTML Color Names │
      ╞══════════════════╡
      │ red              │
      ├──────────────────┤
      │ burlywood        │
      ├──────────────────┤
      │ chartreuse       │
      ├──────────────────┤
      │ ...etc.          │
      ╘══════════════════╛

      Color reference: <http://matplotlib.org/api/colors_api.html>
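The four color forms above are standard matplotlib color specifications, which can be checked with matplotlib.colors.to_rgb (assumes matplotlib is installed):

```python
from matplotlib.colors import to_rgb

# Each of the four ways to identify a color resolves to an RGB tuple:
print(to_rgb("C0"))         # 1. Nth color of the current style's cycle
print(to_rgb("b"))          # 2. single-character code (blue)
print(to_rgb("0.5"))        # 3. gray level string: '0' black, '1' white
print(to_rgb("burlywood"))  # 4. HTML color name
```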
  --linestyles LINESTYLES
      [optional, defaults to 'auto']
      If 'auto', will iterate through the available matplotlib line types.
      Otherwise supply a comma-separated list on the command line, or a
      list of strings if using the Python API.
      To not display lines, use a space (' ') as the linestyle code.
      Use the separate 'colors', 'linestyles', and 'markerstyles' options
      instead of the 'style' keyword.
      ┌─────────┬──────────────┐
      │ Code    │ Lines        │
      ╞═════════╪══════════════╡
      │ -       │ solid        │
      ├─────────┼──────────────┤
      │ --      │ dashed       │
      ├─────────┼──────────────┤
      │ -.      │ dash_dot     │
      ├─────────┼──────────────┤
      │ :       │ dotted       │
      ├─────────┼──────────────┤
      │ None    │ draw nothing │
      ├─────────┼──────────────┤
      │ ' '     │ draw nothing │
      ├─────────┼──────────────┤
      │ ''      │ draw nothing │
      ╘═════════╧══════════════╛

      Line reference: <http://matplotlib.org/api/artist_api.html>
  --markerstyles MARKERSTYLES
      [optional, defaults to ' ']
      The default ' ' will not plot a marker. If 'auto', will iterate
      through the available matplotlib marker types. Otherwise supply a
      comma-separated list on the command line, or a list of strings if
      using the Python API.
      Use the separate 'colors', 'linestyles', and 'markerstyles' options
      instead of the 'style' keyword.
      ┌───────┬────────────────┐
      │ Code  │ Markers        │
      ╞═══════╪════════════════╡
      │ .     │ point          │
      ├───────┼────────────────┤
      │ o     │ circle         │
      ├───────┼────────────────┤
      │ v     │ triangle down  │
      ├───────┼────────────────┤
      │ ^     │ triangle up    │
      ├───────┼────────────────┤
      │ <     │ triangle left  │
      ├───────┼────────────────┤
      │ >     │ triangle right │
      ├───────┼────────────────┤
      │ 1     │ tri_down       │
      ├───────┼────────────────┤
      │ 2     │ tri_up         │
      ├───────┼────────────────┤
      │ 3     │ tri_left       │
      ├───────┼────────────────┤
      │ 4     │ tri_right      │
      ├───────┼────────────────┤
      │ 8     │ octagon        │
      ├───────┼────────────────┤
      │ s     │ square         │
      ├───────┼────────────────┤
      │ p     │ pentagon       │
      ├───────┼────────────────┤
      │ *     │ star           │
      ├───────┼────────────────┤
      │ h     │ hexagon1       │
      ├───────┼────────────────┤
      │ H     │ hexagon2       │
      ├───────┼────────────────┤
      │ +     │ plus           │
      ├───────┼────────────────┤
      │ x     │ x              │
      ├───────┼────────────────┤
      │ D     │ diamond        │
      ├───────┼────────────────┤
      │ d     │ thin diamond   │
      ├───────┼────────────────┤
      │ _     │ hline          │
      ├───────┼────────────────┤
      │ None  │ nothing        │
      ├───────┼────────────────┤
      │ ' '   │ nothing        │
      ├───────┼────────────────┤
      │ ''    │ nothing        │
      ╘═══════╧════════════════╛

      Marker reference: <http://matplotlib.org/api/markers_api.html>
  --bar_hatchstyles BAR_HATCHSTYLES
      [optional, defaults to "auto"; only used if type is "bar", "barh",
      "bar_stacked", or "barh_stacked"]
      If 'auto', will iterate through the available matplotlib hatch
      types. Otherwise supply a comma-separated list on the command line,
      or a list of strings if using the Python API.
      ┌─────────────────┬───────────────────┐
      │ bar_hatchstyles │ Description       │
      ╞═════════════════╪═══════════════════╡
      │ /               │ diagonal hatching │
      ├─────────────────┼───────────────────┤
      │ \               │ back diagonal     │
      ├─────────────────┼───────────────────┤
      │ |               │ vertical          │
      ├─────────────────┼───────────────────┤
      │ -               │ horizontal        │
      ├─────────────────┼───────────────────┤
      │ +               │ crossed           │
      ├─────────────────┼───────────────────┤
      │ x               │ crossed diagonal  │
      ├─────────────────┼───────────────────┤
      │ o               │ small circle      │
      ├─────────────────┼───────────────────┤
      │ O               │ large circle      │
      ├─────────────────┼───────────────────┤
      │ .               │ dots              │
      ├─────────────────┼───────────────────┤
      │ *               │ stars             │
      ╘═════════════════╧═══════════════════╛

  --style STYLE
      [optional, default is None]
      Still available, but if None, it is replaced by the 'colors',
      'linestyles', and 'markerstyles' options. Currently the 'style'
      option will override the others.
      Comma-separated matplotlib style strings, one per time-series. Just
      combine codes in 'ColorMarkerLine' order; for example, 'r*--' is a
      red dashed line with a star marker.
  --logx
      DEPRECATED: use '--xaxis="log"' instead.
  --logy
      DEPRECATED: use '--yaxis="log"' instead.
  --xaxis XAXIS
      [optional, default is 'arithmetic']
      Defines the type of the xaxis. One of 'arithmetic', 'log'.
  --yaxis YAXIS
      [optional, default is 'arithmetic']
      Defines the type of the yaxis. One of 'arithmetic', 'log'.
  --xlim XLIM
      [optional, default is based on range of x values]
      Comma separated lower and upper limits for the x-axis of the plot. For
      example, '--xlim 1,1000' would limit the plot from 1 to 1000, where
      '--xlim ,1000' would base the lower limit on the data and set the
      upper limit to 1000.
  --ylim YLIM
      [optional, default is based on range of y values]
      Comma separated lower and upper limits for the y-axis of the plot. See
      xlim for examples.
  --secondary_y
      ${secondary_axis}
  --secondary_x
      ${secondary_axis}
  --mark_right
      [optional, default is True]
      When using a secondary_y axis, automatically mark in the legend
      which axis each time-series is plotted against.
  --scatter_matrix_diagonal SCATTER_MATRIX_DIAGONAL
      [optional, defaults to 'kde']
      If plot type is 'scatter_matrix', this specifies the plot along the
      diagonal. One of 'kde' for Kernel Density Estimation or 'hist' for a
      histogram.
  --bootstrap_size BOOTSTRAP_SIZE
      [optional, defaults to 50]
      The size of the random subset for 'bootstrap' plot.
  --bootstrap_samples BOOTSTRAP_SAMPLES
      [optional, defaults to 500]
      The number of random subsets of 'bootstrap_size'.
  --xy_match_line XY_MATCH_LINE
      [optional, default is '']
      Will add a match line where x == y. Set to a line style code.
  --grid
      [optional, default is False]
      Whether to plot grid lines on the major ticks.
  --label_rotation LABEL_ROTATION
      [optional]
      Rotation for major labels for bar plots.
  --label_skip LABEL_SKIP
      [optional]
      Skip for major labels for bar plots.
  --force_freq FORCE_FREQ
      [optional, output format]
      Force this frequency for the output. Typically you will only want to
      enforce a smaller interval where toolbox_utils will insert missing
      values as needed. WARNING: you may lose data if not careful with
      this option. In general, letting the algorithm determine the
      frequency should always work, but this option will override it. Use
      PANDAS offset codes.
  --drawstyle DRAWSTYLE
      [optional, default is 'default']
      'default' connects the points with lines. The steps variants produce
      step-plots. 'steps' is equivalent to 'steps-pre' and is maintained
      for backward-compatibility.
      ACCEPTS:
      ['default' | 'steps' | 'steps-pre' | 'steps-mid' | 'steps-post']
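These values appear to map directly to matplotlib's Line2D drawstyle (an assumption based on the names); a quick headless check:

```python
import matplotlib

matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# 'steps-post' holds each value until the next point -- a step plot.
fig, ax = plt.subplots()
(line,) = ax.plot([0, 1, 2], [0, 1, 0], drawstyle="steps-post")
print(line.get_drawstyle())
plt.close(fig)
```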

  --por
      [optional]
      Plot from first good value to last good value. Strips NANs from beginning
      and end.
  --invert_xaxis
      [optional, default is False]
      Invert the x-axis.
  --invert_yaxis
      [optional, default is False]
      Invert the y-axis.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it can cut down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one. This could lead to duplicate values in
      the index.
  --plotting_position PLOTTING_POSITION
      [optional, default is 'weibull']
      ┌────────────┬────────┬──────────────────────┬────────────────────┐
      │ Name       │ a      │ Equation             │ Description        │
      │            │        │ (i-a)/(n+1-2*a)      │                    │
      ╞════════════╪════════╪══════════════════════╪════════════════════╡
      │ weibull    │ 0      │ i/(n+1)              │ mean of sampling   │
      │ (default)  │        │                      │ distribution       │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ filliben   │ 0.3175 │ (i-0.3175)/(n+0.365) │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ yu         │ 0.326  │ (i-0.326)/(n+0.348)  │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ tukey      │ 1/3    │ (i-1/3)/(n+1/3)      │ approx. median of  │
      │            │        │                      │ sampling distribu- │
      │            │        │                      │ tion               │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ blom       │ 0.375  │ (i-0.375)/(n+0.25)   │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ cunnane    │ 2/5    │ (i-2/5)/(n+1/5)      │ subjective         │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ gringorten │ 0.44   │ (i-0.44)/(n+0.12)    │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ hazen      │ 1/2    │ (i-1/2)/n            │ midpoints of n     │
      │            │        │                      │ equal intervals    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ larsen     │ 0.567  │ (i-0.567)/(n-0.134)  │                    │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ gumbel     │ 1      │ (i-1)/(n-1)          │ mode of sampling   │
      │            │        │                      │ distribution       │
      ├────────────┼────────┼──────────────────────┼────────────────────┤
      │ california │ NA     │ i/n                  │                    │
      ╘════════════╧════════╧══════════════════════╧════════════════════╛

      Where 'i' is the sorted rank of the y value, and 'n' is the total number
      of values to be plotted.
      The 'blom' plotting position is also known as the 'Sevruk and Geiger'.
      Only used for norm_xaxis, norm_yaxis, lognorm_xaxis, lognorm_yaxis,
      weibull_xaxis, and weibull_yaxis.
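The general formula (i-a)/(n+1-2*a) from the table can be sketched directly (illustrative, not tstoolbox's internal code):

```python
def plotting_position(i, n, a):
    """Plotting position (i - a) / (n + 1 - 2*a), where i is the sorted
    rank of the y value and n the total number of values."""
    return (i - a) / (n + 1 - 2 * a)

n = 10
# a=0 gives the default 'weibull' positions i/(n+1);
# a=0.5 gives the 'hazen' midpoints (i-1/2)/n.
weibull = [plotting_position(i, n, a=0.0) for i in range(1, n + 1)]
hazen = [plotting_position(i, n, a=0.5) for i in range(1, n + 1)]
print(weibull[0], hazen[0], hazen[-1])
```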
  --prob_plot_sort_values PROB_PLOT_SORT_VALUES
      [optional, default is 'descending']
      How to sort the values for the probability plots.
      Only used for norm_xaxis, norm_yaxis, lognorm_xaxis, lognorm_yaxis,
      weibull_xaxis, and weibull_yaxis.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --lag_plot_lag LAG_PLOT_LAG
      [optional, defaults to 1]
      The lag used if type "lag_plot" is chosen.
  --plot_styles PLOT_STYLES
      [optional, default is "default"]
      Set the style of the plot. One or more of Matplotlib styles "classic",
      "Solarize_Light2", "bmh", "dark_background", "fast",
      "fivethirtyeight", "ggplot", "grayscale", "seaborn",
      "seaborn-bright", "seaborn-colorblind", "seaborn-dark",
      "seaborn-dark-palette", "seaborn-darkgrid", "seaborn-deep",
      "seaborn-muted", "seaborn-notebook", "seaborn-paper",
      "seaborn-pastel", "seaborn-poster", "seaborn-talk", "seaborn-ticks",
      "seaborn-white", "seaborn-whitegrid", "tableau-colorblind10", and
      SciencePlots styles "science", "grid", "ieee", "scatter", "notebook",
      "high-vis", "bright", "vibrant", "muted", and "retro".
      If multiple styles are given, each overrides some or all of the
      characteristics of the previous.
      Color Blind Appropriate Styles
      The styles "seaborn-colorblind", "tableau-colorblind10", "bright",
      "vibrant", and "muted" are set up to be distinguishable by someone
      with color blindness.
      Black, White, and Gray Styles
      The "ieee" style is appropriate for black, white, and gray; however,
      "ieee" will also change the chart size to fit in a column of the
      "IEEE" journal.
      The "grayscale" style is also useful for photo-copyable black,
      white, and gray.
      Matplotlib styles:
        <https://matplotlib.org/3.3.1/gallery/style_sheets/style_sheets_reference.html>

      SciencePlots styles:
        <https://github.com/garrettj403/SciencePlots>
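Which Matplotlib style names are actually available can be checked locally (the SciencePlots names require the separate scienceplots package, and recent matplotlib releases renamed the seaborn styles to "seaborn-v0_8-*"):

```python
import matplotlib.style

# Styles bundled with matplotlib itself:
print("ggplot" in matplotlib.style.available)
print("grayscale" in matplotlib.style.available)
print(sorted(matplotlib.style.available)[:5])
```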

  --hlines_y HLINES_Y
      [optional, defaults to None]
      Number or list of y values at which to place horizontal lines.
  --hlines_xmin HLINES_XMIN
      [optional, defaults to None]
      List of minimum x values to start the horizontal lines. If a list,
      must be the same length as hlines_y. If a single number, will be
      used as the minimum x value for all horizontal lines. A missing
      value or None will start at the minimum x value for the entire plot.
  --hlines_xmax HLINES_XMAX
      [optional, defaults to None]
      List of maximum x values to end each horizontal line. If a list,
      must be the same length as hlines_y. If a single number, will be the
      maximum x value for all horizontal lines. A missing value or None
      will end at the maximum x value for the entire plot.
  --hlines_colors HLINES_COLORS
      [optional, defaults to None]
      List of colors for the horizontal lines. If a single color, it will
      be used as the color for all horizontal lines. If a list, must be
      the same length as hlines_y. If None, will take from the color
      palette in the current plot style.
  --hlines_linestyles HLINES_LINESTYLES
      [optional, defaults to None]
      List of linestyles for the horizontal lines. If a single linestyle,
      it will be used as the linestyle for all horizontal lines. If a
      list, must be the same length as hlines_y. If None, will take from
      the standard linestyles list.
  --vlines_x VLINES_X
      [optional, defaults to None]
      List of x values at which to place vertical lines.
  --vlines_ymin VLINES_YMIN
      [optional, defaults to None]
      List of minimum y values to start the vertical lines. If a list,
      must be the same length as vlines_x. If a single number, will be
      used as the minimum y value for all vertical lines. A missing value
      or None will start at the minimum y value for the entire plot.
  --vlines_ymax VLINES_YMAX
      [optional, defaults to None]
      List of maximum y values to end each vertical line. If a list, must
      be the same length as vlines_x. If a single number, will be the
      maximum y value for all vertical lines. A missing value or None will
      end at the maximum y value for the entire plot.
  --vlines_colors VLINES_COLORS
      [optional, defaults to None]
      List of colors for the vertical lines. If a single color, it will be
      used as the color for all vertical lines. If a list, must be the
      same length as vlines_x. If None, will take from the color palette
      in the current plot style.
  --vlines_linestyles VLINES_LINESTYLES
      [optional, defaults to None]
      List of linestyles for the vertical lines. If a single linestyle, it
      will be used as the linestyle for all vertical lines. If a list,
      must be the same length as vlines_x. If None, will take from the
      standard linestyles list.

rank

$ tstoolbox rank --help
usage: tstoolbox rank [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--axis AXIS]
  [--method METHOD] [--numeric_only NUMERIC_ONLY] [--na_option NA_OPTION]
  [--ascending] [--pct] [--print_input] [--float_format FLOAT_FORMAT]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--round_index
  ROUND_INDEX] [--tablefmt TABLEFMT]

Equal values are assigned a rank depending on method.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet, then all data  │
        │                                 │ from sheet "Sheet21" of   │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be
      one of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
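      The callable form of skiprows (Python use only) behaves like the
      skiprows parameter of pandas' read_csv; a minimal sketch using pandas
      directly (the column names and data here are illustrative):

      ```python
      from io import StringIO

      import pandas as pd

      # Two junk lines surround the real header; skip file rows 0 and 2.
      raw = StringIO(
          "junk1\nDatetime,flow\njunk2\n2020-01-01,1.0\n2020-01-02,2.0\n"
      )

      # The callable receives each row index and returns True to skip that row.
      df = pd.read_csv(raw, skiprows=lambda x: x in [0, 2])
      print(list(df.columns))  # header comes from the first unskipped line
      ```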
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --axis AXIS
      [optional, default is 0]
      0 or 'index' for rows. 1 or 'columns' for columns. Index to direct
      ranking.
  --method METHOD
      [optional, default is 'average']
      ┌─────────────────┬────────────────────────────────┐
      │ method argument │ Description                    │
      ╞═════════════════╪════════════════════════════════╡
      │ average         │ average rank of group          │
      ├─────────────────┼────────────────────────────────┤
      │ min             │ lowest rank in group           │
      ├─────────────────┼────────────────────────────────┤
      │ max             │ highest rank in group          │
      ├─────────────────┼────────────────────────────────┤
      │ first           │ ranks assigned in order they   │
      │                 │ appear in the array            │
      ├─────────────────┼────────────────────────────────┤
      │ dense           │ like 'min', but rank always    │
      │                 │ increases by 1 between groups  │
      ╘═════════════════╧════════════════════════════════╛
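      These method choices mirror pandas' Series.rank; a quick illustration of
      how the tie handling differs (the data values are made up):

      ```python
      import pandas as pd

      s = pd.Series([10, 20, 20, 30])  # one tied group at value 20

      # 'average' splits the tied ranks; 'dense' never leaves gaps.
      print(s.rank(method="average").tolist())  # [1.0, 2.5, 2.5, 4.0]
      print(s.rank(method="min").tolist())      # [1.0, 2.0, 2.0, 4.0]
      print(s.rank(method="dense").tolist())    # [1.0, 2.0, 2.0, 3.0]
      ```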

  --numeric_only NUMERIC_ONLY
      [optional, default is None]
      Include only float, int, boolean data. Valid only for DataFrame or Panel
      objects.
  --na_option NA_OPTION
      [optional, default is 'keep']
      ┌────────────────────┬────────────────────────────────┐
      │ na_option argument │ Description                    │
      ╞════════════════════╪════════════════════════════════╡
      │ keep               │ leave NA values where they are │
      ├────────────────────┼────────────────────────────────┤
      │ top                │ smallest rank if ascending     │
      ├────────────────────┼────────────────────────────────┤
      │ bottom             │ smallest rank if descending    │
      ╘════════════════════╧════════════════════════════════╛

  --ascending
      [optional, default is True]
      If False, ranks from high (1) to low (N).
  --pct
      [optional, default is False]
      Computes percentage rank of data.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding a fine interval
      to a very coarse one, as this could lead to duplicate values in
      the index.
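      Under the hood this corresponds to rounding a pandas DatetimeIndex; a
      small sketch of rounding and of the duplicate-index hazard (timestamps
      are made up):

      ```python
      import pandas as pd

      idx = pd.DatetimeIndex(["2020-01-01 00:00:10", "2020-01-01 00:01:20"])

      # Rounding to the nearest minute keeps the two stamps distinct...
      print(idx.round("min").tolist())

      # ...but rounding to a coarse interval collapses both stamps to the
      # same value, producing duplicates in the index.
      coarse = idx.round("h")
      print(coarse.duplicated().any())  # True
      ```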
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

read

$ tstoolbox read --help
usage: tstoolbox read [-h] [--force_freq FORCE_FREQ] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--float_format
  FLOAT_FORMAT] [--round_index ROUND_INDEX] [--tablefmt TABLEFMT] [filenames
  ...]

Prints the read in time-series in the tstoolbox standard format.

WARNING: Accepts naive and timezone-aware time-series by converting all to UTC
and removing timezone information.

positional arguments:
  filenames

options:
  -h | --help
      show this help message and exit
  --force_freq FORCE_FREQ
      [optional, output format]
      Force this frequency for the output. Typically you will only want to
      enforce a smaller interval where toolbox_utils will insert missing
      values as needed. WARNING: you may lose data if not careful with
      this option. In general, letting the algorithm determine the
      frequency should always work, but this option will override it. Use
      pandas offset codes.
      ┌───────┬───────────────┐
      │ Alias │ Description   │
      ╞═══════╪═══════════════╡
      │ N     │ Nanoseconds   │
      ├───────┼───────────────┤
      │ U     │ microseconds  │
      ├───────┼───────────────┤
      │ L     │ milliseconds  │
      ├───────┼───────────────┤
      │ S     │ Secondly      │
      ├───────┼───────────────┤
      │ T     │ Minutely      │
      ├───────┼───────────────┤
      │ H     │ Hourly        │
      ├───────┼───────────────┤
      │ D     │ calendar Day  │
      ├───────┼───────────────┤
      │ W     │ Weekly        │
      ├───────┼───────────────┤
      │ M     │ Month end     │
      ├───────┼───────────────┤
      │ MS    │ Month Start   │
      ├───────┼───────────────┤
      │ Q     │ Quarter end   │
      ├───────┼───────────────┤
      │ QS    │ Quarter Start │
      ├───────┼───────────────┤
      │ A     │ Annual end    │
      ├───────┼───────────────┤
      │ AS    │ Annual Start  │
      ╘═══════╧═══════════════╛

      Business offset codes.
      ┌───────┬────────────────────────────────────┐
      │ Alias │ Description                        │
      ╞═══════╪════════════════════════════════════╡
      │ B     │ Business day                       │
      ├───────┼────────────────────────────────────┤
      │ BM    │ Business Month end                 │
      ├───────┼────────────────────────────────────┤
      │ BMS   │ Business Month Start               │
      ├───────┼────────────────────────────────────┤
      │ BQ    │ Business Quarter end               │
      ├───────┼────────────────────────────────────┤
      │ BQS   │ Business Quarter Start             │
      ├───────┼────────────────────────────────────┤
      │ BA    │ Business Annual end                │
      ├───────┼────────────────────────────────────┤
      │ BAS   │ Business Annual Start              │
      ├───────┼────────────────────────────────────┤
      │ C     │ Custom business day (experimental) │
      ├───────┼────────────────────────────────────┤
      │ CBM   │ Custom Business Month end          │
      ├───────┼────────────────────────────────────┤
      │ CBMS  │ Custom Business Month Start        │
      ╘═══════╧════════════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬─────────────┬───────────────────────────────┐
      │ Alias │ Equivalents │ Description                   │
      ╞═══════╪═════════════╪═══════════════════════════════╡
      │ W-SUN │ W           │ Weekly frequency (SUNdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-MON │             │ Weekly frequency (MONdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-TUE │             │ Weekly frequency (TUEsdays)   │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-WED │             │ Weekly frequency (WEDnesdays) │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-THU │             │ Weekly frequency (THUrsdays)  │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-FRI │             │ Weekly frequency (FRIdays)    │
      ├───────┼─────────────┼───────────────────────────────┤
      │ W-SAT │             │ Weekly frequency (SATurdays)  │
      ╘═══════╧═════════════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) replace the "x" in the "Alias" column with the following
      anchoring suffixes:
      ┌───────┬──────────┬─────────────┬────────────────────────────┐
      │ Alias │ Examples │ Equivalents │ Description                │
      ╞═══════╪══════════╪═════════════╪════════════════════════════╡
      │ x-DEC │ A-DEC    │ A Q AS QS   │ year ends end of DECember  │
      │       │ Q-DEC    │             │                            │
      │       │ AS-DEC   │             │                            │
        │       │ QS-DEC   │             │                            │
        ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JAN │          │             │ year ends end of JANuary   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-FEB │          │             │ year ends end of FEBruary  │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAR │          │             │ year ends end of MARch     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-APR │          │             │ year ends end of APRil     │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-MAY │          │             │ year ends end of MAY       │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUN │          │             │ year ends end of JUNe      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-JUL │          │             │ year ends end of JULy      │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-AUG │          │             │ year ends end of AUGust    │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-SEP │          │             │ year ends end of SEPtember │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-OCT │          │             │ year ends end of OCTober   │
      ├───────┼──────────┼─────────────┼────────────────────────────┤
      │ x-NOV │          │             │ year ends end of NOVember  │
      ╘═══════╧══════════╧═════════════╧════════════════════════════╛
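      These aliases are standard pandas offset codes; a short sketch of how
      forcing a finer frequency inserts missing values, shown here with
      pandas' asfreq (the series values are made up):

      ```python
      import pandas as pd

      # A daily series forced to a 12-hourly frequency.
      s = pd.Series(
          [1.0, 2.0],
          index=pd.date_range("2020-01-01", periods=2, freq="D"),
      )

      forced = s.asfreq("12h")  # new in-between stamps are filled with NaN
      print(forced)
      print(int(forced.isna().sum()))  # 1
      ```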

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding a fine interval
      to a very coarse one, as this could lead to duplicate values in
      the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

regression

$ tstoolbox regression --help
usage: tstoolbox regression [-h] [--x_pred_cols X_PRED_COLS]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--clean] [--round_index
  ROUND_INDEX] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--print_input] [--tablefmt TABLEFMT] [--por] method x_train_cols
  y_train_col

If the optional x_pred_cols is given, will return a time-series of the y
predictions. Otherwise returns a dictionary of the equation and statistics
about the regression fit.

positional arguments:
  method                The method of regression.  The chosen method will use x_train_cols as the
    independent data and y_train_col as the dependent data.
    ARD
      Requires lots of memory.
      Fit the weights of a regression model, using an ARD prior. The weights of
      the regression model are assumed to be in Gaussian distributions.
      Also estimate the parameters lambda (precisions of the distributions
      of the weights) and alpha (precision of the distribution of the
      noise). The estimation is done by an iterative procedure (Evidence
      Maximization).

    BayesianRidge
      Fit a Bayesian ridge model. See the Notes section for details on this
      implementation and the optimization of the regularization parameters
      lambda (precision of the weights) and alpha (precision of the
      noise).

    ElasticNetCV
      Elastic Net model with iterative fitting along a regularization path.

    ElasticNet
      Linear regression with combined L1 and L2 priors as regularizer.

    Huber
      Linear regression model that is robust to outliers.
      The Huber Regressor optimizes the squared loss for the samples where
      abs((y - X'w) / sigma) < epsilon and the absolute loss for the
      samples where abs((y - X'w) / sigma) > epsilon, where w and sigma
      are parameters to be optimized. The parameter sigma makes sure that
      if y is scaled up or down by a certain factor, one does not need to
      rescale epsilon to achieve the same robustness. Note that this does
      not take into account the fact that the different features of X may
      be of different scales.
      This makes sure that the loss function is not heavily influenced by the
      outliers while not completely ignoring their effect.

    LarsCV
      Cross-validated Least Angle Regression model.

    Lars
      Least Angle Regression model.

    LassoCV
      Lasso linear model with iterative fitting along a regularization path.

    LassoLarsCV
      Cross-validated Lasso, using the LARS algorithm.

    LassoLarsIC
      Lasso model fit with Lars using BIC or AIC for model selection.

    LassoLars
      Lasso model fit with Least Angle Regression a.k.a. Lars. It is a Linear
      Model trained with an L1 prior as regularizer.

    Lasso
      Linear Model trained with L1 prior as regularizer (aka the Lasso).

    Linear
      LinearRegression fits a linear model with coefficients w = (w1, …, wp) to
      minimize the residual sum of squares between the observed targets in
      the dataset, and the targets predicted by the linear approximation.

    RANSAC
      RANSAC (RANdom SAmple Consensus) algorithm. RANSAC is an iterative
      algorithm for the robust estimation of parameters from a subset of
      inliers from the complete data set.

    RidgeCV
      Ridge regression with built-in cross-validation. By default, it performs
      Generalized Cross-Validation, which is a form of efficient
      Leave-One-Out cross-validation.

    Ridge
      Linear least squares with L2 regularization. Minimizes the objective
      function ||y - Xw||^2_2 + alpha * ||w||^2_2.

    SGD
      Input must be scaled by removing mean and scaling to unit variance. Can
      use 'tstoolbox normalization ...' to scale the input.
      Linear model fitted by minimizing a regularized empirical loss with SGD.
      SGD stands for Stochastic Gradient Descent: the gradient of the loss
      is estimated one sample at a time and the model is updated along
      the way with a decreasing strength schedule (aka learning rate).
      The regularizer is a penalty added to the loss function that shrinks model
      parameters towards the zero vector using either the squared
      euclidean norm L2 or the absolute norm L1 or a combination of both
      (Elastic Net). If the parameter update crosses the 0.0 value because
      of the regularizer, the update is truncated to 0.0 to allow for
      learning sparse models and achieve online feature selection.

    TheilSen
      Theil-Sen Estimator: robust multivariate regression model.
      The algorithm calculates least square solutions on subsets with size
      n_subsamples of the samples in X. Any value of n_subsamples between
      the number of features and samples leads to an estimator with a
      compromise between robustness and efficiency. Since the number of
      least square solutions is “n_samples choose n_subsamples”, it can be
      extremely large and can therefore be limited with max_subpopulation.
      If this limit is reached, the subsets are chosen randomly. In a
      final step, the spatial median (or L1 median) is calculated of all
      least square solutions.


  x_train_cols          List of column names/numbers that hold the x value datasets used to
    train the regression. Perform a multiple regression if method allows by
    giving several x_train_cols. To include the index in the regression use
    column 0 or the index name.

  y_train_col           Column name or number of the y dataset used to train the
    regression.
    The y_train_col cannot be part of x_train_cols or x_pred_cols.
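    The methods above are scikit-learn estimators; conceptually, the
    train/predict split works as sketched below, illustrated with NumPy's
    ordinary least squares rather than tstoolbox itself (the data are made
    up):

    ```python
    import numpy as np

    # Training data: y = 2*x + 1 exactly.
    x_train = np.array([0.0, 1.0, 2.0, 3.0])
    y_train = 2.0 * x_train + 1.0

    # Fit a degree-1 polynomial, analogous to the 'Linear' method here.
    slope, intercept = np.polyfit(x_train, y_train, 1)

    # Predict for new x values, as x_pred_cols would.
    x_pred = np.array([4.0, 5.0])
    y_pred = slope * x_pred + intercept
    print(np.round(y_pred, 6))  # [ 9. 11.]
    ```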


options:
  -h | --help
      show this help message and exit
  --x_pred_cols X_PRED_COLS
      [optional, if supplied will return a time-series of the y prediction based
      on x_pred_cols.]
      List of column names/numbers of x value datasets used to create the y
      prediction. Needs to be the same number of columns as x_train_cols.
      Can be identical columns to x_train_cols.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet and all data from   │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option, where `input_ts` can be
      one of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the toolbox_utils pick command.
      This means you don't have to create a data set with a certain column
      order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate index
      values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding a fine interval
      to a very coarse one, as this could lead to duplicate values in
      the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
  --por
      [optional, default is False]
      The por keyword adjusts the operation of start_date and end_date.
      If "False" (the default), choose the indices in the time-series between
      start_date and end_date. If "True", and start_date or end_date is
      outside of the existing time-series, fill the time-series with
      missing values to include the exterior start_date or end_date.

remove_trend

$ tstoolbox remove_trend --help
usage: tstoolbox remove_trend [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--print_input] [--tablefmt TABLEFMT]

Subtracts a linearly interpolated trend from the data.
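In NumPy terms this amounts to fitting a straight line to the series and
subtracting it; a minimal sketch (the data are made up):

```python
import numpy as np

t = np.arange(10, dtype=float)
signal = np.sin(t)
series = signal + 0.5 * t + 3.0   # signal plus a linear trend

# Fit the linear trend and remove it.
slope, intercept = np.polyfit(t, series, 1)
detrended = series - (slope * t + intercept)

# Refitting the detrended data yields a ~zero slope and intercept.
print(np.round(np.polyfit(t, detrended, 1), 6))
```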

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
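
As a concrete illustration of detrending, the sketch below fits and subtracts a straight line using numpy.polyfit on a small pandas series. It shows the idea only; tstoolbox's internal remove_trend may compute the trend differently, and the sample data here is made up.

```python
# Sketch of the idea behind "remove_trend": fit a straight line to the
# series and subtract it, leaving the detrended residuals.
import numpy as np
import pandas as pd

index = pd.date_range("2000-01-01", periods=5, freq="D")
ts = pd.Series([1.0, 2.1, 2.9, 4.2, 5.0], index=index, name="TS1")

# Least-squares linear fit against the observation number.
x = np.arange(len(ts))
slope, intercept = np.polyfit(x, ts.values, 1)

# Subtract the fitted trend; residuals of an OLS fit sum to ~zero.
detrended = ts - (slope * x + intercept)
print(detrended.round(3))
```

With the OLS approach the residuals sum to zero, so a remaining nonzero mean in your output is a sign the trend was not fully removed.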

replace

$ tstoolbox replace --help
usage: tstoolbox replace [-h] [--round_index ROUND_INDEX]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--names NAMES] [--clean] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]
  from_values to_values

Return a time-series replacing values with others.

positional arguments:
  from_values           All values in this comma separated list are replaced with the
    corresponding value in to_values. Use the string 'None' to represent a
    missing value. If using 'None' as a from_value it might be easier to use
    the "fill" subcommand instead.

  to_values             All values in this comma separated list are the replacement
    values corresponding one-to-one to the items in from_values. Use the string
    'None' to represent a missing value.


options:
  -h | --help
      show this help message and exit
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
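
The effect of replace can be sketched with pandas' Series.replace, which tstoolbox builds on. The values and column name below are made up, and the CLI's 'None' placeholder for missing values is represented here as NaN.

```python
# Sketch of "replace": each value in from_values is mapped one-to-one
# to the corresponding value in to_values.
import pandas as pd

ts = pd.Series([1.2, 9999.0, 1.9, 9999.0], name="TS1")

# A common use: convert a sentinel value (9999.0) to missing (NaN).
from_values = [9999.0]
to_values = [float("nan")]

out = ts.replace(dict(zip(from_values, to_values)))
print(out)
```

This is the in-memory equivalent of `tstoolbox replace 9999.0 None < fn.csv`.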

rolling_window

$ tstoolbox rolling_window --help
usage: tstoolbox rolling_window [-h] [--groupby GROUPBY] [--window WINDOW]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--names NAMES] [--clean] [--span SPAN] [--min_periods
  MIN_PERIODS] [--center] [--win_type WIN_TYPE] [--on ON] [--closed CLOSED]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT] statistic

Calculate a rolling window statistic.

positional arguments:
  statistic             The statistic that will be applied to each
    window.
    ┌──────────┬────────────────────┐
    │ corr     │ correlation        │
    ├──────────┼────────────────────┤
    │ count    │ count of numbers   │
    ├──────────┼────────────────────┤
    │ cov      │ covariance         │
    ├──────────┼────────────────────┤
    │ kurt     │ kurtosis           │
    ├──────────┼────────────────────┤
    │ max      │ maximum            │
    ├──────────┼────────────────────┤
    │ mean     │ mean               │
    ├──────────┼────────────────────┤
    │ median   │ median             │
    ├──────────┼────────────────────┤
    │ min      │ minimum            │
    ├──────────┼────────────────────┤
    │ quantile │ quantile           │
    ├──────────┼────────────────────┤
    │ skew     │ skew               │
    ├──────────┼────────────────────┤
    │ std      │ standard deviation │
    ├──────────┼────────────────────┤
    │ sum      │ sum                │
    ├──────────┼────────────────────┤
    │ var      │ variance           │
    ╘══════════╧════════════════════╛



options:
  -h | --help
      show this help message and exit
  --groupby GROUPBY
      [optional, default is None, transformation]
      The pandas offset code to group the time-series data into. A special code
      is also available to group 'months_across_years' that will group
      into twelve monthly categories across the entire time-series.
  --window WINDOW
      [optional, default = 2]
      Size of the moving window. This is the number of observations used for
      calculating the statistic. Each window will be a fixed size.
      If it is an offset then this will be the time period of each window.
      Each window will be of variable size, based on the observations
      included in the time period. This is only valid for datetimelike
      indexes.
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python can be a callable, the callable function will be
      evaluated against the row indices, returning True if the row should
      be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index by removing duplicate
      index values and sorting.
  --span SPAN
      [optional, default = 2]
      DEPRECATED: Changed to 'window' to be consistent with pandas.
  --min_periods MIN_PERIODS
      [optional, default is None]
      Minimum number of observations in window required to have a value
      (otherwise result is NA). For a window that is specified by an
      offset, this will default to 1.
  --center
      [optional, default is False]
      Set the labels at the center of the window.
  --win_type WIN_TYPE
      [optional, default is None]
      Provide a window type.
      One of:
      boxcar
      triang
      blackman
      hamming
      bartlett
      parzen
      bohman
      blackmanharris
      nuttall
      barthann
      kaiser (needs beta)
      gaussian (needs std)
      general_gaussian (needs power, width)
      slepian (needs width)
      exponential (needs tau), center is set to None.

  --on ON
      [optional, default is None]
      For a DataFrame, the column on which to calculate the rolling window,
      rather than the index.
  --closed CLOSED
      [optional, default is None]
      Make the interval closed on the 'right', 'left', 'both' or 'neither'
      endpoints. For offset-based windows, it defaults to 'right'. For
      fixed windows, defaults to 'both'. Remaining cases not implemented
      for fixed windows.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
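
Under the hood a rolling-window statistic corresponds to pandas' rolling machinery. Below is a minimal sketch with window=2 (the CLI default) and the 'mean' statistic, on made-up data; tstoolbox adds the I/O and column handling around this core.

```python
# Sketch of a rolling-window statistic: a fixed window of 2 observations
# slides along the series and the mean of each window is reported.
import pandas as pd

index = pd.date_range("2000-01-01", periods=4, freq="D")
ts = pd.Series([1.0, 3.0, 5.0, 7.0], index=index, name="TS1")

# min_periods=2 means the first row, with only one observation in its
# window, comes out as NA.
rolled = ts.rolling(window=2, min_periods=2).mean()
print(rolled)
```

Passing an offset string such as "2D" instead of an integer gives the variable-sized, time-based windows described under --window.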

stack

$ tstoolbox stack --help
usage: tstoolbox stack [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT]

The stack command takes the standard table and converts it to a three-column table.

From:

Datetime,TS1,TS2,TS3
2000-01-01 00:00:00,1.2,1018.2,0.0032
2000-01-02 00:00:00,1.8,1453.1,0.0002
2000-01-03 00:00:00,1.9,1683.1,-0.0004

To:

Datetime,Columns,Values
2000-01-01,TS1,1.2
2000-01-02,TS1,1.8
2000-01-03,TS1,1.9
2000-01-01,TS2,1018.2
2000-01-02,TS2,1453.1
2000-01-03,TS2,1683.1
2000-01-01,TS3,0.0032
2000-01-02,TS3,0.0002
2000-01-03,TS3,-0.0004
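
The From/To reshaping shown above is a wide-to-long pivot; a sketch of the same transformation with pandas' melt (not necessarily how stack is implemented internally), using the first two columns of the example:

```python
# Wide table -> long three-column table (Datetime, Columns, Values).
import pandas as pd

df = pd.DataFrame(
    {"TS1": [1.2, 1.8, 1.9], "TS2": [1018.2, 1453.1, 1683.1]},
    index=pd.to_datetime(["2000-01-01", "2000-01-02", "2000-01-03"]),
)
df.index.name = "Datetime"

# melt keeps Datetime as an identifier and stacks each remaining column
# into (Columns, Values) pairs.
stacked = df.reset_index().melt(
    id_vars="Datetime", var_name="Columns", value_name="Values"
)
print(stacked)
```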

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from 2nd    │
        │                                 │ sheet, then all data from │
        │                                 │ "Sheet21" of 'fn.xlsx'    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read column 4 and 1 from  │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than use
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Can also combine commands by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` option where `input_ts` can be
      one of a [pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name].

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, since this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable; the callable function will
      be evaluated against the row indices, returning True if the row
      should be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
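
An accumulating statistic is a running calculation down each column. As an illustration only (tstoolbox itself builds on pandas and is not implemented this way), the core idea for a single column can be sketched with the standard library; the sample values are hypothetical:

```python
# Sketch of an accumulating statistic over one column of values.
from itertools import accumulate

values = [1.2, 1.8, 1.9, 0.0, 2.1]

# Running sum: each element is the total of all inputs so far,
# analogous to an accumulating 'sum' statistic.
running_sum = list(accumulate(values))

# Running maximum, analogous to an accumulating 'max' statistic.
running_max = list(accumulate(values, max))
```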

stdtozrxp

$ tstoolbox stdtozrxp --help
usage: tstoolbox stdtozrxp [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--rexchange REXCHANGE]

Print out data to the screen in a WISKI ZRXP format.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet and all data    │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read columns 4 and 1 from │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Commands can also be combined by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be
      a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable; the callable function will
      be evaluated against the row indices, returning True if the row
      should be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, since this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --rexchange REXCHANGE
      [optional, default is None]
      The REXCHANGE ID to be written into the ZRXP header.
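
For orientation, a ZRXP record is roughly a '#'-prefixed header of keywords joined by '|*|' followed by 'YYYYMMDDHHMMSS value' data rows. The sketch below is an approximation only; the station ID, keyword names, and layout are assumptions, so consult the WISKI ZRXP specification and actual stdtozrxp output for the authoritative format:

```python
# Rough approximation of a ZRXP-style record; keyword names and
# layout here are assumptions, not the authoritative specification.
from datetime import datetime

def to_zrxp_sketch(rexchange, series):
    """series is a list of (datetime, value) pairs."""
    lines = [f"#REXCHANGE{rexchange}|*|RINVAL-777|*|"]
    lines += [f"{stamp:%Y%m%d%H%M%S} {value}" for stamp, value in series]
    return "\n".join(lines)

record = to_zrxp_sketch(
    "ABC123",  # hypothetical REXCHANGE ID
    [(datetime(2000, 1, 1), 1.2), (datetime(2000, 1, 2), 1.8)],
)
```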

tstopickle

$ tstoolbox tstopickle --help
usage: tstoolbox tstopickle [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  filename

Pickle the data to a file. It can be brought back into Python with
'pickle.load' or 'numpy.load'. See also 'tstoolbox read'.

positional arguments:
  filename              The filename to store the pickled data.

options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet and all data    │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read columns 4 and 1 from │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Commands can also be combined by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be
      a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, since this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable; the callable function will
      be evaluated against the row indices, returning True if the row
      should be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
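
Reading the pickled file back in Python is a plain 'pickle.load' call. A minimal standard-library sketch, using a toy dict in place of the pandas object tstopickle actually writes, and an in-memory buffer in place of the 'filename' argument:

```python
import pickle
from datetime import date
from io import BytesIO

# Toy stand-in for the pickled time series.
series = {date(2000, 1, 1): 1.2, date(2000, 1, 2): 1.8}

buffer = BytesIO()              # stands in for the pickle file on disk
pickle.dump(series, buffer)

buffer.seek(0)
restored = pickle.load(buffer)  # round-trips the original object
```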

unstack

$ tstoolbox unstack --help
usage: tstoolbox unstack [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT] column_names

The unstack command takes a stacked table and converts it to a standard
tstoolbox table.

From:

Datetime,Columns,Values
2000-01-01,TS1,1.2
2000-01-02,TS1,1.8
2000-01-03,TS1,1.9
2000-01-01,TS2,1018.2
2000-01-02,TS2,1453.1
2000-01-03,TS2,1683.1
2000-01-01,TS3,0.0032
2000-01-02,TS3,0.0002
2000-01-03,TS3,-0.0004

To:

Datetime,TS1,TS2,TS3
2000-01-01,1.2,1018.2,0.0032
2000-01-02,1.8,1453.1,0.0002
2000-01-03,1.9,1683.1,-0.0004

positional arguments:
  column_names          The column in the table that holds the column names
    of the unstacked data.


options:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional though required if using within Python, default is '-' (stdin)]
      Whether from a file or standard input, data requires a single line header
      of column names. The default header is the first line of the input,
      but this can be changed for CSV files using the 'skiprows' option.
      Most common date formats can be used, but the closer to ISO 8601 date/time
      standard the better.
      Comma-separated values (CSV) files or tab-separated values (TSV):
      File separators will be automatically detected.
      
      Columns can be selected by name or index, where the index for
      data columns starts at 1.

      Command line examples:
        ┌─────────────────────────────────┬───────────────────────────┐
        │ Keyword Example                 │ Description               │
        ╞═════════════════════════════════╪═══════════════════════════╡
        │ --input_ts=fn.csv               │ read all columns from     │
        │                                 │ 'fn.csv'                  │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,1           │ read data columns 2 and 1 │
        │                                 │ from 'fn.csv'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.csv,2,skiprows=2  │ read data column 2 from   │
        │                                 │ 'fn.csv', skipping first  │
        │                                 │ 2 rows so header is read  │
        │                                 │ from third row            │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.xlsx,2,Sheet21    │ read all data from the    │
        │                                 │ 2nd sheet and all data    │
        │                                 │ from "Sheet21" of         │
        │                                 │ 'fn.xlsx'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.hdf5,Table12,T2   │ read all data from table  │
        │                                 │ "Table12" then all data   │
        │                                 │ from table "T2" of        │
        │                                 │ 'fn.hdf5'                 │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts=fn.wdm,210,110       │ read DSNs 210, then 110   │
        │                                 │ from 'fn.wdm'             │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-'                  │ read all columns from     │
        │                                 │ standard input (stdin)    │
        ├─────────────────────────────────┼───────────────────────────┤
        │ --input_ts='-' --columns=4,1    │ read columns 4 and 1 from │
        │                                 │ standard input (stdin)    │
        ╘═════════════════════════════════╧═══════════════════════════╛

      If working with CSV or TSV files you can use redirection rather than
      --input_ts=fname.csv. The following are identical:
      From a file:
        command subcmd --input_ts=fname.csv
      From standard input (since '--input_ts=-' is the default):
        command subcmd < fname.csv
      Commands can also be combined by piping:
        command subcmd < filein.csv | command subcmd1 > fileout.csv
      Python library examples:
      You must use the `input_ts=...` keyword, where `input_ts` can be
      a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the toolbox_utils 'pick' command.
      This means you don't have to create a data set with a particular
      column order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, since this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) if a list or number of lines to skip at
      the start of the file if an integer.
      If used in Python, this can be a callable; the callable function will
      be evaluated against the row indices, returning True if the row
      should be skipped and False otherwise. An example of a valid callable
      argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, transformation]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
      MUST include a name for all columns in the input dataset, including the
      index column.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The purpose of this option is to specify target units for unit conversion.
      The source units are specified in the header line of the input or
      using the 'source_units' keyword.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an input index, removing duplicate
      index values and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
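
The pivot that unstack performs on the 'From' table above can be sketched with the standard library alone (tstoolbox itself uses pandas; this illustration only shows the reshaping, on a shortened version of the example data):

```python
import csv
import io

# Shortened version of the stacked example table.
stacked = """\
Datetime,Columns,Values
2000-01-01,TS1,1.2
2000-01-01,TS2,1018.2
2000-01-02,TS1,1.8
2000-01-02,TS2,1453.1
"""

table = {}   # Datetime -> {column name -> value}
names = []   # output column order, by first appearance
for row in csv.DictReader(io.StringIO(stacked)):
    table.setdefault(row["Datetime"], {})[row["Columns"]] = row["Values"]
    if row["Columns"] not in names:
        names.append(row["Columns"])

# One output row per Datetime, one column per name in 'Columns'.
header = "Datetime," + ",".join(names)
rows = [
    stamp + "," + ",".join(table[stamp][n] for n in names)
    for stamp in sorted(table)
]
unstacked = "\n".join([header, *rows])
```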