Build: https://travis-ci.org/timcera/tstoolbox  |  Coverage: https://coveralls.io/repos/timcera/tstoolbox  |  License: BSD-3-Clause

Command Line

Help:

tstoolbox --help

about

$ tstoolbox about --help
usage: tstoolbox about [-h]

Display version number and system information.

optional arguments:
  -h, --help  show this help message and exit

accumulate

$ tstoolbox accumulate --help
usage: tstoolbox accumulate [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--statistic STATISTIC] [--round_index ROUND_INDEX] [--skiprows SKIPROWS]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]

Calculate accumulating statistics.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox 'pick' command.
      This lets you rearrange columns as the data is read in, instead of
      having to create a data set with the columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --statistic STATISTIC
      [optional, default is 'sum', transformation]
      One of 'sum', 'max', 'min', or 'prod'.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
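
As a sketch of what 'accumulate' computes, here is the same running-statistic
idea in plain Python. This is only an illustration of the '--statistic'
choices; tstoolbox itself operates per column on a pandas DataFrame.

```python
# Running statistics over a series, as in `tstoolbox accumulate`.
from itertools import accumulate
import operator

values = [1.0, 3.0, 2.0, 5.0]

running_sum = list(accumulate(values, operator.add))   # --statistic sum
running_max = list(accumulate(values, max))            # --statistic max
running_min = list(accumulate(values, min))            # --statistic min
running_prod = list(accumulate(values, operator.mul))  # --statistic prod
```

On the command line the equivalent would be, for example,
`tstoolbox accumulate --statistic sum < data.csv`.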

add_trend

$ tstoolbox add_trend --help
usage: tstoolbox add_trend [-h] [--input_ts INPUT_TS]
  [--start_date START_DATE] [--end_date END_DATE] [--skiprows SKIPROWS]
  [--columns COLUMNS] [--clean] [--dropna DROPNA] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--round_index
  ROUND_INDEX] [--index_type INDEX_TYPE] [--print_input] [--tablefmt TABLEFMT]
  start_offset end_offset

Add a trend.

positional arguments:
  start_offset
      The starting value for the applied trend.
  end_offset
      The ending value for the applied trend.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox 'pick' command.
      This lets you rearrange columns as the data is read in, instead of
      having to create a data set with the columns in a particular order.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
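
A minimal sketch of adding such a trend, assuming (as the argument names
suggest) that it runs linearly from start_offset at the first point to
end_offset at the last; the exact interpolation tstoolbox uses internally is
not documented here, so treat this as an illustration only.

```python
# Add a linear trend running from start_offset to end_offset across a
# series of n points.
def linear_trend(start_offset, end_offset, n):
    step = (end_offset - start_offset) / (n - 1)
    return [start_offset + i * step for i in range(n)]

values = [10.0] * 5
trended = [v + t for v, t in zip(values, linear_trend(0.0, 2.0, len(values)))]
```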

aggregate

$ tstoolbox aggregate --help
usage: tstoolbox aggregate [-h] [--input_ts INPUT_TS] [--groupby GROUPBY]
  [--statistic STATISTIC] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--clean] [--agg_interval
  AGG_INTERVAL] [--ninterval NINTERVAL] [--round_index ROUND_INDEX]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT]

Take a time series and aggregate to specified frequency.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --groupby GROUPBY
      [optional, default is None, transformation]
      The pandas offset code to group the time-series data into. A special code
      is also available to group 'months_across_years' that will group
      into twelve categories for each month.
  --statistic STATISTIC
      [optional, defaults to 'mean']
      'mean', 'sem', 'sum', 'std', 'max', 'min', 'median', 'first', 'last' or
      'ohlc' to calculate on each group.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox 'pick' command.
      This lets you rearrange columns as the data is read in, instead of
      having to create a data set with the columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --agg_interval AGG_INTERVAL
      DEPRECATED: Use the 'groupby' option instead.
  --ninterval NINTERVAL
      DEPRECATED: Just prefix the number in front of the 'groupby' pandas offset
      code.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, as this could lead to duplicate
      values in the index.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
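
A pure-Python sketch of the kind of reduction 'aggregate' performs, grouping
a daily series into monthly means. tstoolbox does this with pandas resampling
driven by the '--groupby' offset code (for example 'M' for month-end); the
version below only illustrates the grouping idea.

```python
# Group daily records by month and reduce each group, mirroring
# `tstoolbox aggregate --groupby M --statistic mean`.
from collections import defaultdict
from statistics import mean

records = [("2020-01-01", 1.0), ("2020-01-02", 3.0), ("2020-02-01", 5.0)]

groups = defaultdict(list)
for date, value in records:
    groups[date[:7]].append(value)          # group key is "YYYY-MM"

monthly_mean = {month: mean(vals) for month, vals in sorted(groups.items())}
```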

calculate_fdc

$ tstoolbox calculate_fdc --help
usage: tstoolbox calculate_fdc [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--percent_point_function PERCENT_POINT_FUNCTION] [--plotting_position
  PLOTTING_POSITION] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--sort_values SORT_VALUES] [--sort_index SORT_INDEX]
  [--tablefmt TABLEFMT]

DOES NOT return a time-series.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox 'pick' command.
      This lets you rearrange columns as the data is read in, instead of
      having to create a data set with the columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --percent_point_function PERCENT_POINT_FUNCTION
      [optional, default is None]
      The distribution used to shift the plotting position values. Choose from
      'norm', 'lognorm', 'weibull', and None.
  --plotting_position PLOTTING_POSITION
      [optional, default is 'weibull']
      ┌───────────────┬─────┬─────────────────┬───────────────────────┐
      │ Name          │ a   │ Equation        │ Description           │
      │               │     │ (i-a)/(n+1-2*a) │                       │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ weibull       │ 0   │ i/(n+1)         │ mean of sampling      │
      │ (default)     │     │                 │ distribution          │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ benard and    │ 0.3 │ (i-0.3)/(n+0.4) │ approx. median of     │
      │ bos-levenbach │     │                 │ sampling distribution │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ tukey         │ 1/3 │ (i-1/3)/(n+1/3) │ approx. median of     │
      │               │     │                 │ sampling distribution │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ gumbel        │ 1   │ (i-1)/(n-1)     │ mode of sampling      │
      │               │     │                 │ distribution          │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ hazen         │ 1/2 │ (i-1/2)/n       │ midpoints of n equal  │
      │               │     │                 │ intervals             │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ cunnane       │ 2/5 │ (i-2/5)/(n+1/5) │ subjective            │
      ├───────────────┼─────┼─────────────────┼───────────────────────┤
      │ california    │ NA  │ i/n             │                       │
      └───────────────┴─────┴─────────────────┴───────────────────────┘

      Where 'i' is the sorted rank of the y value, and 'n' is the total number
      of values to be plotted.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --sort_values SORT_VALUES
      [optional, default is 'ascending']
      Sort order is either 'ascending' or 'descending'.
  --sort_index SORT_INDEX
      [optional, default is 'ascending']
      Sort order is either 'ascending' or 'descending'.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
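
The '--plotting_position' choices for calculate_fdc all follow one general
formula, p = (i - a) / (n + 1 - 2*a), where 'i' is the sorted rank and 'n'
the number of values. A short sketch (the function name here is illustrative,
not part of the tstoolbox API):

```python
# General plotting-position formula (i - a) / (n + 1 - 2*a):
# a=0 is Weibull (the default), a=1/2 is Hazen, a=2/5 is Cunnane.
def plotting_positions(n, a=0.0):
    return [(i - a) / (n + 1 - 2 * a) for i in range(1, n + 1)]

weibull = plotting_positions(4)         # i/(n+1)
hazen = plotting_positions(4, a=0.5)    # (i-1/2)/n
```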

calculate_kde

$ tstoolbox calculate_kde --help
usage: tstoolbox calculate_kde [-h] [--ascending] [--evaluate]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--clean] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--names NAMES] [--tablefmt TABLEFMT]

Returns a time-series or the KDE curve depending on the evaluate keyword.

optional arguments:
  -h | --help
      show this help message and exit
  --ascending
      [optional, defaults to True]
      Sort order.
  --evaluate
      [optional, defaults to False]
      If True, return a time-series of KDE density values; if False (the
      default), return the KDE curve.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox 'pick' command.
      This lets you rearrange columns as the data is read in, instead of
      having to create a data set with the columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

clip

$ tstoolbox clip --help
usage: tstoolbox clip [-h] [--input_ts INPUT_TS] [--start_date START_DATE]
  [--end_date END_DATE] [--columns COLUMNS] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--a_min
  A_MIN] [--a_max A_MAX] [--round_index ROUND_INDEX] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input] [--tablefmt
  TABLEFMT]

Return a time-series with values limited to [a_min, a_max].

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISO datetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISO datetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate by commas with
      no spaces. As used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --a_min A_MIN
      [optional, defaults to None]
      All values lower than this will be set to this value. Default is None.
  --a_max A_MAX
      [optional, defaults to None]
      All values higher than this will be set to this value. Default is None.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing requirements.
      However, be cautious about rounding to a very coarse interval from a
      small one, since this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
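
The clip operation bounds each value to the [a_min, a_max] range: values below
a_min are raised to a_min, and values above a_max are lowered to a_max. A
minimal sketch of the same operation using plain pandas (the data, column name,
and bounds here are hypothetical, not part of tstoolbox):

```python
import pandas as pd

# Hypothetical daily series with one value below and one above the bounds.
ts = pd.Series(
    [-5.0, 3.0, 7.0, 42.0],
    index=pd.date_range("2000-01-01", periods=4, freq="D"),
    name="flow",
)

# Equivalent of `tstoolbox clip --a_min=0 --a_max=10` applied to a column:
# -5.0 is raised to 0.0 and 42.0 is lowered to 10.0.
clipped = ts.clip(lower=0.0, upper=10.0)
print(clipped.tolist())  # [0.0, 3.0, 7.0, 10.0]
```

Leaving either bound as None (the default) clips on only one side.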

convert

$ tstoolbox convert --help
usage: tstoolbox convert [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--factor
  FACTOR] [--offset OFFSET] [--print_input] [--round_index ROUND_INDEX]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--float_format
  FLOAT_FORMAT] [--tablefmt TABLEFMT]

See the 'equation' subcommand for a generalized form of this command.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate by commas with
      no spaces. As used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISO datetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISO datetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --factor FACTOR
      [optional, default is 1.0]
      Factor to multiply the time series values.
  --offset OFFSET
      [optional, default is 0.0]
      Offset to add to the time series values.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing requirements.
      However, be cautious about rounding to a very coarse interval from a
      small one, since this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
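
Given the factor and offset options above, convert applies a linear rescale:
each value becomes value * factor + offset. A sketch of the same arithmetic in
plain pandas (the data and the Celsius-to-Fahrenheit choice of factor and
offset are hypothetical):

```python
import pandas as pd

# Hypothetical daily temperatures in degrees Celsius.
ts = pd.Series(
    [0.0, 10.0, 25.0],
    index=pd.date_range("2000-01-01", periods=3, freq="D"),
)

# Equivalent of `tstoolbox convert --factor=1.8 --offset=32`:
# value * factor + offset converts Celsius to Fahrenheit.
converted = ts * 1.8 + 32.0
print(converted.tolist())  # [32.0, 50.0, 77.0]
```

For unit-aware conversions, the source_units/target_units options are usually
preferable to hand-picked factors and offsets.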

convert_index

$ tstoolbox convert_index --help
usage: tstoolbox convert_index [-h] [--interval INTERVAL] [--epoch EPOCH]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--round_index ROUND_INDEX] [--dropna DROPNA]
  [--clean] [--names NAMES] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT] to

Convert datetime to/from Julian dates from different epochs.

positional arguments:
  to                    One of 'number' or 'datetime'.  If 'number', the source time-series
    should have a datetime index to convert to a number. If 'datetime', source
    data should be a number and the converted index will be datetime.


optional arguments:
  -h | --help
      show this help message and exit
  --interval INTERVAL
      [optional, defaults to None]
      The interval parameter defines the unit time. One of the pandas offset
      codes. The default of 'None' will set the unit time for all defined
      epochs to daily, except 'unix', which defaults to seconds.
      You can give any unit time smaller than daily for all defined epochs
      except 'unix', which requires an interval of seconds or smaller. For an
      epoch that begins with an arbitrary date, you can use any interval
      equal to or smaller than the frequency of the time-series.
      ┌───────┬─────────────────────────────┐
      │ Alias │ Description                 │
      ╞═══════╪═════════════════════════════╡
      │ B     │ business day                │
      ├───────┼─────────────────────────────┤
      │ C     │ custom business day         │
      │       │ (experimental)              │
      ├───────┼─────────────────────────────┤
      │ D     │ calendar day                │
      ├───────┼─────────────────────────────┤
      │ W     │ weekly                      │
      ├───────┼─────────────────────────────┤
      │ M     │ month end                   │
      ├───────┼─────────────────────────────┤
      │ BM    │ business month end          │
      ├───────┼─────────────────────────────┤
      │ CBM   │ custom business month end   │
      ├───────┼─────────────────────────────┤
      │ MS    │ month start                 │
      ├───────┼─────────────────────────────┤
      │ BMS   │ business month start        │
      ├───────┼─────────────────────────────┤
      │ CBMS  │ custom business month start │
      ├───────┼─────────────────────────────┤
      │ Q     │ quarter end                 │
      ├───────┼─────────────────────────────┤
      │ BQ    │ business quarter end        │
      ├───────┼─────────────────────────────┤
      │ QS    │ quarter start               │
      ├───────┼─────────────────────────────┤
      │ BQS   │ business quarter start      │
      ├───────┼─────────────────────────────┤
      │ A     │ year end                    │
      ├───────┼─────────────────────────────┤
      │ BA    │ business year end           │
      ├───────┼─────────────────────────────┤
      │ AS    │ year start                  │
      ├───────┼─────────────────────────────┤
      │ BAS   │ business year start         │
      ├───────┼─────────────────────────────┤
      │ H     │ hourly                      │
      ├───────┼─────────────────────────────┤
      │ T     │ minutely                    │
      ├───────┼─────────────────────────────┤
      │ S     │ secondly                    │
      ├───────┼─────────────────────────────┤
      │ L     │ milliseconds                │
      ├───────┼─────────────────────────────┤
      │ U     │ microseconds                │
      ├───────┼─────────────────────────────┤
      │ N     │ nanoseconds                 │
      ╘═══════╧═════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬───────────────────────────────┐
      │ Alias │ Description                   │
      ╞═══════╪═══════════════════════════════╡
      │ W-SUN │ weekly frequency (sundays).   │
      │       │ Same as 'W'.                  │
      ├───────┼───────────────────────────────┤
      │ W-MON │ weekly frequency (mondays)    │
      ├───────┼───────────────────────────────┤
      │ W-TUE │ weekly frequency (tuesdays)   │
      ├───────┼───────────────────────────────┤
      │ W-WED │ weekly frequency (wednesdays) │
      ├───────┼───────────────────────────────┤
      │ W-THU │ weekly frequency (thursdays)  │
      ├───────┼───────────────────────────────┤
      │ W-FRI │ weekly frequency (fridays)    │
      ├───────┼───────────────────────────────┤
      │ W-SAT │ weekly frequency (saturdays)  │
      ╘═══════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) have the following anchoring suffixes:
      ┌───────┬───────────────────────────────┐
      │ Alias │ Description                   │
      ╞═══════╪═══════════════════════════════╡
      │ -DEC  │ year ends in December (same   │
      │       │ as 'Q' and 'A')               │
      ├───────┼───────────────────────────────┤
      │ -JAN  │ year ends in January          │
      ├───────┼───────────────────────────────┤
      │ -FEB  │ year ends in February         │
      ├───────┼───────────────────────────────┤
      │ -MAR  │ year ends in March            │
      ├───────┼───────────────────────────────┤
      │ -APR  │ year ends in April            │
      ├───────┼───────────────────────────────┤
      │ -MAY  │ year ends in May              │
      ├───────┼───────────────────────────────┤
      │ -JUN  │ year ends in June             │
      ├───────┼───────────────────────────────┤
      │ -JUL  │ year ends in July             │
      ├───────┼───────────────────────────────┤
      │ -AUG  │ year ends in August           │
      ├───────┼───────────────────────────────┤
      │ -SEP  │ year ends in September        │
      ├───────┼───────────────────────────────┤
      │ -OCT  │ year ends in October          │
      ├───────┼───────────────────────────────┤
      │ -NOV  │ year ends in November         │
      ╘═══════╧═══════════════════════════════╛

  --epoch EPOCH
      [optional, defaults to 'julian']
      Can be one of 'julian', 'reduced', 'modified', 'truncated', 'dublin',
      'cnes', 'ccsds', 'lop', 'lilian', 'rata_die', 'mars_sol_date',
      'unix', or a date and time.
      If supplying a date and time, most formats are recognized; however, the
      closer the format is to ISO 8601 the better. Also check to make sure
      the date was parsed as expected. If supplying only a date, the
      epoch starts at midnight the morning of that date.
      The 'unix' epoch uses a default interval of seconds, and all other defined
      epochs use a default interval of 'daily'.
      ┌───────────┬────────────────┬────────────────┬─────────────┐
      │ epoch     │ Epoch          │ Calculation    │ Notes       │
      ╞═══════════╪════════════════╪════════════════╪═════════════╡
      │ julian    │ 4713-01-01:12  │ JD             │             │
      │           │ BCE            │                │             │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ reduced   │ 1858-11-16:12  │ JD - 2400000   │ [ 1 ] [ 2 ] │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ modified  │ 1858-11-17:00  │ JD - 2400000.5 │ SAO 1957    │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ truncated │ 1968-05-24:00  │ floor (JD -    │ NASA 1979,  │
      │           │                │ 2440000.5)     │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ dublin    │ 1899-12-31:12  │ JD - 2415020   │ IAU 1955    │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ cnes      │ 1950-01-01:00  │ JD - 2433282.5 │ CNES [ 3 ]  │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ ccsds     │ 1958-01-01:00  │ JD - 2436204.5 │ CCSDS [ 3 ] │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ lop       │ 1992-01-01:00  │ JD - 2448622.5 │ LOP [ 3 ]   │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ lilian    │ 1582-10-15[13] │ floor (JD -    │ Count of    │
      │           │                │ 2299159.5)     │ days of the │
      │           │                │                │ Gregorian   │
      │           │                │                │ calendar,   │
      │           │                │                │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ rata_die  │ 0001-01-01[13] │ floor (JD -    │ Count of    │
      │           │ proleptic      │ 1721424.5)     │ days of the │
      │           │ Gregorian      │                │ Common Era, │
      │           │ calendar       │                │ integer     │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ mars_sol  │ 1873-12-29:12  │ (JD - 2405522) │ Count of    │
      │           │                │ /1.02749       │ Martian     │
      │           │                │                │ days        │
      ├───────────┼────────────────┼────────────────┼─────────────┤
      │ unix      │ 1970-01-01     │ JD - 2440587.5 │ seconds     │
      │           │ T00:00:00      │                │             │
      ╘═══════════╧════════════════╧════════════════╧═════════════╛

      1. Hopkins, Jeffrey L. (2013). Using Commercial Amateur Astronomical
      Spectrographs, p. 257, Springer Science & Business Media, ISBN
      9783319014425
      2. Palle, Pere L., Esteban, Cesar. (2014). Asteroseismology, p. 185,
      Cambridge University Press, ISBN 9781107470620
      3. Theveny, Pierre-Michel. (10 September 2001). "Date Format" The TPtime
      Handbook. Media Lab.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate by commas with
      no spaces. As used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISO datetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISO datetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly improve
      performance since it cuts down on memory and processing requirements.
      However, be cautious about rounding to a very coarse interval from a
      small one, since this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
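
The epoch arithmetic in the table above can be checked with pandas, which
exposes Julian dates directly. This is a sketch of the underlying conversion,
not a call into tstoolbox itself; the example dates are arbitrary:

```python
import pandas as pd

# The Unix epoch 1970-01-01T00:00 corresponds to Julian date 2440587.5,
# matching the 'unix' row of the epoch table.
jd = pd.Timestamp("1970-01-01").to_julian_date()
print(jd)  # 2440587.5

# Converting a datetime to the 'unix' epoch in seconds uses
# (JD - 2440587.5) * 86400, since the other epochs count days
# while 'unix' counts seconds.
unix_seconds = (pd.Timestamp("2000-01-01").to_julian_date() - 2440587.5) * 86400
print(unix_seconds)  # 946684800.0
```

Running `tstoolbox convert_index number --epoch=unix` should produce index
values consistent with this arithmetic.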

convert_index_to_julian

$ tstoolbox convert_index_to_julian --help
usage: tstoolbox convert_index_to_julian [-h] [--epoch EPOCH]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--round_index ROUND_INDEX] [--dropna DROPNA]
  [--clean] [--index_type INDEX_TYPE] [--names NAMES] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--skiprows SKIPROWS]

For command line:

tstoolbox convert_index julian ...

For Python API:

from tstoolbox import tstoolbox
ndf = tstoolbox.convert_index('julian', ...)

optional arguments:
  -h | --help
      show this help message and exit
  --epoch EPOCH
  --input_ts INPUT_TS
  --columns COLUMNS
  --start_date START_DATE
  --end_date END_DATE
  --round_index ROUND_INDEX
  --dropna DROPNA
  --clean
  --index_type INDEX_TYPE
  --names NAMES
  --source_units SOURCE_UNITS
  --target_units TARGET_UNITS
  --skiprows SKIPROWS

converttz

$ tstoolbox converttz --help
usage: tstoolbox converttz [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--clean] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--skiprows
  SKIPROWS] [--tablefmt TABLEFMT] fromtz totz

Convert the time zone of the index.

positional arguments:
  fromtz The time zone of the original time-series.
    The 'EST', 'EDT', and 'America/New_York' could in some sense be thought of
    as the same; however, 'EST' and 'EDT' would force the time index to have
    the same offset from UTC, regardless of daylight saving time, whereas
    'America/New_York' would implement the appropriate daylight saving
    offset.
  totz The time zone of the converted time-series.
    Same note applies as for fromtz.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This solves a big problem: you don't have to create a data set with
      a certain column order, since you can rearrange columns as data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one. This could lead to duplicate values in
      the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
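
As a minimal sketch of what `accumulate` computes, the default 'sum' statistic corresponds to a per-column cumulative sum in pandas (the column name and values below are illustrative, not from tstoolbox):

```python
import pandas as pd

# Hypothetical daily series; with tstoolbox the index and column names
# come from the input CSV header.
idx = pd.date_range("2000-01-01", periods=5, freq="D")
df = pd.DataFrame({"flow": [1.0, 2.0, 3.0, 4.0, 5.0]}, index=idx)

# 'accumulate' with a 'sum' statistic is a per-column cumulative sum.
accumulated = df.cumsum()
print(accumulated["flow"].tolist())  # [1.0, 3.0, 6.0, 10.0, 15.0]
```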

correlation

$ tstoolbox correlation --help
usage: tstoolbox correlation [-h] [--input_ts INPUT_TS] [--print_input]
  [--start_date START_DATE] [--end_date END_DATE] [--columns COLUMNS] [--clean]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT]
  lags

Develop a correlation between time-series and potentially lags.

positional arguments:
  lags                  If an integer, will calculate all lags up to and including the
    lag number. If a list, will calculate each lag in the list. If a string,
    must be a comma-separated list of integers. If lags == 0 then will only
    cross correlate on the input time-series.


optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` keyword where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

      If the result is a time series, a pandas DataFrame is returned.

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This solves a big problem: you don't have to create a data set with
      a certain column order, since you can rearrange columns as data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
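
The idea behind the `lags` argument can be sketched in pandas by correlating a series with a shifted copy of itself (the series below is illustrative; tstoolbox reads its columns from the input time-series):

```python
import pandas as pd

# Illustrative series; a perfectly linear one correlates exactly at any lag.
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# Correlation at a given lag: correlate the series with a lagged copy
# of itself.  NaN rows introduced by the shift are dropped by corr().
lag = 1
r = s.corr(s.shift(lag))
print(round(r, 6))
```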

createts

$ tstoolbox createts --help
usage: tstoolbox createts [-h] [--freq FREQ] [--fillvalue FILLVALUE]
  [--input_ts INPUT_TS] [--index_type INDEX_TYPE] [--start_date START_DATE]
  [--end_date END_DATE] [--tablefmt TABLEFMT]

Create empty time series, optionally fill with a value.

optional arguments:
  -h | --help
      show this help message and exit
  --freq FREQ
      [optional, default is None]
      To use this form --start_date and --end_date must be supplied also. The
      freq option is the pandas date offset code used to create the index.
  --fillvalue FILLVALUE
      [optional, default is None]
      The fill value for the time-series. The default is None, which generates
      the date/time stamps only.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` keyword where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

      If the result is a time series, a pandas DataFrame is returned.

  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
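
Under the hood this kind of index is what pandas builds from a start date, end date, and offset alias; a rough sketch of what `createts` produces (dates and fill value below are illustrative):

```python
import pandas as pd

# --start_date/--end_date/--freq build a date/time index; the freq codes
# are the standard pandas offset aliases ('D' is calendar day).
idx = pd.date_range(start="2020-01-01", end="2020-01-05", freq="D")
df = pd.DataFrame(index=idx)

# --fillvalue would populate a constant column instead of stamps only.
df["value"] = 9.0
print(len(df))  # 5 stamps, inclusive of both endpoints
```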

date_offset

$ tstoolbox date_offset --help
usage: tstoolbox date_offset [-h] [--columns COLUMNS] [--dropna DROPNA]
  [--clean] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--input_ts INPUT_TS] [--start_date START_DATE] [--end_date END_DATE]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--round_index
  ROUND_INDEX] [--tablefmt TABLEFMT] intervals offset

Apply an offset to a time-series.

positional arguments:
  intervals             Number of intervals of offset to shift the time index.  A positive
    integer moves the index forward, a negative integer moves it backward.

  offset Pandas offset.
    ┌───────┬─────────────────────────────┐
    │ Alias │ Description                 │
    ╞═══════╪═════════════════════════════╡
    │ B     │ business day                │
    ├───────┼─────────────────────────────┤
    │ C     │ custom business day         │
    │       │ (experimental)              │
    ├───────┼─────────────────────────────┤
    │ D     │ calendar day                │
    ├───────┼─────────────────────────────┤
    │ W     │ weekly                      │
    ├───────┼─────────────────────────────┤
    │ M     │ month end                   │
    ├───────┼─────────────────────────────┤
    │ BM    │ business month end          │
    ├───────┼─────────────────────────────┤
    │ CBM   │ custom business month end   │
    ├───────┼─────────────────────────────┤
    │ MS    │ month start                 │
    ├───────┼─────────────────────────────┤
    │ BMS   │ business month start        │
    ├───────┼─────────────────────────────┤
    │ CBMS  │ custom business month start │
    ├───────┼─────────────────────────────┤
    │ Q     │ quarter end                 │
    ├───────┼─────────────────────────────┤
    │ BQ    │ business quarter end        │
    ├───────┼─────────────────────────────┤
    │ QS    │ quarter start               │
    ├───────┼─────────────────────────────┤
    │ BQS   │ business quarter start      │
    ├───────┼─────────────────────────────┤
    │ A     │ year end                    │
    ├───────┼─────────────────────────────┤
    │ BA    │ business year end           │
    ├───────┼─────────────────────────────┤
    │ AS    │ year start                  │
    ├───────┼─────────────────────────────┤
    │ BAS   │ business year start         │
    ├───────┼─────────────────────────────┤
    │ H     │ hourly                      │
    ├───────┼─────────────────────────────┤
    │ T     │ minutely                    │
    ├───────┼─────────────────────────────┤
    │ S     │ secondly                    │
    ├───────┼─────────────────────────────┤
    │ L     │ milliseconds                │
    ├───────┼─────────────────────────────┤
    │ U     │ microseconds                │
    ├───────┼─────────────────────────────┤
    │ N     │ nanoseconds                 │
    ╘═══════╧═════════════════════════════╛

    Weekly has the following anchored frequencies:
    ┌───────┬───────────────────────────────┐
    │ Alias │ Description                   │
    ╞═══════╪═══════════════════════════════╡
    │ W-SUN │ weekly frequency (sundays).   │
    │       │ Same as 'W'.                  │
    ├───────┼───────────────────────────────┤
    │ W-MON │ weekly frequency (mondays)    │
    ├───────┼───────────────────────────────┤
    │ W-TUE │ weekly frequency (tuesdays)   │
    ├───────┼───────────────────────────────┤
    │ W-WED │ weekly frequency (wednesdays) │
    ├───────┼───────────────────────────────┤
    │ W-THU │ weekly frequency (thursdays)  │
    ├───────┼───────────────────────────────┤
    │ W-FRI │ weekly frequency (fridays)    │
    ├───────┼───────────────────────────────┤
    │ W-SAT │ weekly frequency (saturdays)  │
    ╘═══════╧═══════════════════════════════╛

    Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
    BAS) have the following anchoring suffixes:
    ┌───────┬───────────────────────────────┐
    │ Alias │ Description                   │
    ╞═══════╪═══════════════════════════════╡
    │ -DEC  │ year ends in December (same   │
    │       │ as 'Q' and 'A')               │
    ├───────┼───────────────────────────────┤
    │ -JAN  │ year ends in January          │
    ├───────┼───────────────────────────────┤
    │ -FEB  │ year ends in February         │
    ├───────┼───────────────────────────────┤
    │ -MAR  │ year ends in March            │
    ├───────┼───────────────────────────────┤
    │ -APR  │ year ends in April            │
    ├───────┼───────────────────────────────┤
    │ -MAY  │ year ends in May              │
    ├───────┼───────────────────────────────┤
    │ -JUN  │ year ends in June             │
    ├───────┼───────────────────────────────┤
    │ -JUL  │ year ends in July             │
    ├───────┼───────────────────────────────┤
    │ -AUG  │ year ends in August           │
    ├───────┼───────────────────────────────┤
    │ -SEP  │ year ends in September        │
    ├───────┼───────────────────────────────┤
    │ -OCT  │ year ends in October          │
    ├───────┼───────────────────────────────┤
    │ -NOV  │ year ends in November         │
    ╘═══════╧═══════════════════════════════╛


optional arguments:
  -h | --help
      show this help message and exit
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This solves a big problem: you don't have to create a data set with
      a certain column order, since you can rearrange columns as data is read in.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` keyword where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

      If the result is a time series, a pandas DataFrame is returned.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one. This could lead to duplicate values in
      the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
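
The effect of `intervals` and `offset` can be sketched with a pandas index shift (values below are illustrative):

```python
import pandas as pd

# A small daily series; with tstoolbox the data comes from the input.
idx = pd.date_range("2020-01-01", periods=3, freq="D")
df = pd.DataFrame({"value": [1.0, 2.0, 3.0]}, index=idx)

# intervals=2, offset='D': move the index forward by two calendar days;
# the values themselves stay attached to their (shifted) stamps.
shifted = df.shift(2, freq="D")
print(shifted.index[0].strftime("%Y-%m-%d"))  # 2020-01-03
```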

date_slice

$ tstoolbox date_slice --help
usage: tstoolbox date_slice [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--clean]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt TABLEFMT]

This isn't really useful anymore because "start_date" and "end_date" are
available in all sub-commands.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` keyword where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This solves a big problem: you don't have to create a data set with
      a certain column order, since you can rearrange columns as data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one. This could lead to duplicate values in
      the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
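
What `date_slice` (and the `start_date`/`end_date` options generally) does is an inclusive label slice on the datetime index; a pandas sketch with illustrative dates:

```python
import pandas as pd

# Ten daily stamps; with tstoolbox these come from the input time-series.
idx = pd.date_range("2020-01-01", periods=10, freq="D")
df = pd.DataFrame({"value": list(range(10))}, index=idx)

# Keep only the rows between start_date and end_date, inclusive.
sliced = df.loc["2020-01-03":"2020-01-05"]
print(len(sliced))  # 3 rows: Jan 3, 4, and 5
```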

describe

$ tstoolbox describe --help
usage: tstoolbox describe [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--transpose]
  [--tablefmt TABLEFMT]

Print out statistics for the time-series.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` keyword where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This solves a big problem: you don't have to create a data set with
      a certain column order, since you can rearrange columns as data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --transpose
      [optional, default is False]
      If 'transpose' option is used, will transpose describe output.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
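
The accumulate subcommand computes running statistics down each column of the
input. The idea can be sketched in plain Python with `itertools.accumulate`;
this is an illustration of the concept only, not tstoolbox's implementation
(tstoolbox operates on pandas DataFrames).

```python
# Sketch of an accumulating ("running") statistic over a column of values.
from itertools import accumulate

values = [1.0, 2.0, 3.0, 4.0]

# Running sum: each output value is the sum of all inputs up to that point.
running_sum = list(accumulate(values))       # [1.0, 3.0, 6.0, 10.0]

# A running maximum works the same way with a different combining function.
running_max = list(accumulate([3.0, 1.0, 4.0], max))
```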

dtw

$ tstoolbox dtw --help
usage: tstoolbox dtw [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--clean] [--window WINDOW] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--tablefmt TABLEFMT]

Dynamic Time Warping.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date
      formats can be used, but the closer to the ISO 8601 date/time
      standard, the better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a fine interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --window WINDOW
      [optional, default is 10000]
      Window length.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
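
Dynamic Time Warping finds the minimum-cost alignment between two series,
allowing one to be locally stretched or compressed against the other. A
minimal pure-Python sketch is below, with a band constraint analogous to the
`--window` option (including its default of 10000); this is illustrative only
and not tstoolbox's internal code.

```python
# Minimal Dynamic Time Warping distance with a Sakoe-Chiba band constraint.
import math

def dtw_distance(a, b, window=10000):
    n, m = len(a), len(b)
    w = max(window, abs(n - m))  # the band must at least cover the diagonal
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical series warp onto each other with zero cost.
print(dtw_distance([1, 2, 3], [1, 2, 3]))  # 0.0
```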

equation

$ tstoolbox equation --help
usage: tstoolbox equation [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--print_input
  PRINT_INPUT] [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt
  TABLEFMT] equation_str

The <equation_str> argument is a string contained in single quotes with 'x' used
as the variable representing the input. For example, '(1 - x)*sin(x)'.

positional arguments:
  equation_str String contained in single quotes that defines the equation.
    There are four different types of equations that can be used.
    ┌───────────────────────┬───────────┬─────────────────────────┐
    │ Description           │ Variables │ Examples                │
    ╞═══════════════════════╪═══════════╪═════════════════════════╡
    │ Equation applied to   │ x         │ x*0.3+4-x**2            │
    │ all values in the     │           │ sin(x)+pi*x             │
    │ dataset. Returns same │           │                         │
    │ number of columns as  │           │                         │
    │ input.                │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation uses time    │ x and t   │ 0.6*max(x[t-1],x[t+1])  │
    │ relative to current   │           │                         │
    │ record. Applies       │           │                         │
    │ equation to each      │           │                         │
    │ column. Returns same  │           │                         │
    │ number of columns as  │           │                         │
    │ input.                │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation uses values  │ x1, x2,   │ x1+x2                   │
    │ from different        │ x3, ...   │                         │
    │ columns. Returns a    │ xN        │                         │
    │ single column.        │           │                         │
    ├───────────────────────┼───────────┼─────────────────────────┤
    │ Equation uses values  │ x1, x2,   │ x1[t-1]+x2+x3[t+1]      │
    │ from different        │ x3, ...   │                         │
    │ columns and values    │ xN, t     │                         │
    │ from different rows.  │           │                         │
    │ Returns a single      │           │                         │
    │ column.               │           │                         │
    ╘═══════════════════════╧═══════════╧═════════════════════════╛

    Mathematical functions in the 'np' (numpy) name space can be used.
    Additional examples:
    'x*4 + 2',
    'x**2 + cos(x)', and
    'tan(x*pi/180)'

    are all valid <equation> strings. The variable 't' is special representing
    the time at which 'x' occurs. This means you can do things like:
    'x[t] + max(x[t-1], x[t+1])*0.6'

    to add to the current value 0.6 times the maximum adjacent value.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date
      formats can be used, but the closer to the ISO 8601 date/time
      standard, the better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --print_input PRINT_INPUT
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. Can significantly improve
      performance since it cuts down on memory and processing requirements;
      however, be cautious about rounding from a fine interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
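
The <equation_str> types described above boil down to evaluating a Python
expression against each value (and, for the t-relative forms, against
neighboring rows). A minimal sketch of the simplest case, applying an
expression in 'x' to every value with a restricted namespace, is below.
This is an illustration of the idea only; tstoolbox parses and evaluates
equations internally (with the numpy namespace available).

```python
# Apply an equation string in 'x' to every value of a series.
import math

def apply_equation(equation_str, values):
    # Expose a small math namespace, similar in spirit to the functions
    # available to <equation_str> (tstoolbox itself uses numpy's namespace).
    namespace = {"sin": math.sin, "cos": math.cos, "tan": math.tan,
                 "pi": math.pi, "max": max, "min": min}
    return [eval(equation_str, namespace, {"x": x}) for x in values]

print(apply_equation("x*4 + 2", [0, 1, 2]))  # [2, 6, 10]
```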

ewm_window

$ tstoolbox ewm_window --help
usage: tstoolbox ewm_window [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--statistic
  STATISTIC] [--alpha_com ALPHA_COM] [--alpha_span ALPHA_SPAN]
  [--alpha_halflife ALPHA_HALFLIFE] [--alpha ALPHA] [--min_periods
  MIN_PERIODS] [--adjust] [--ignore_na] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]

Exactly one of center of mass, span, half-life, and alpha must be provided.
Allowed values and the relationships between the parameters are specified in
the parameter descriptions below; see the link at the end of this section for
a detailed explanation.

When adjust is True (default), weighted averages are calculated using weights
(1-alpha)**(n-1), (1-alpha)**(n-2), . . . , 1-alpha, 1.

When adjust is False, weighted averages are calculated recursively as:
  weighted_average[0] = arg[0]; weighted_average[i] =
  (1-alpha)*weighted_average[i-1] + alpha*arg[i].

When ignore_na is False (default), weights are based on absolute positions. For
example, the weights of x and y used in calculating the final weighted average
of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and (1-alpha)**2 and
alpha (if adjust is False).

When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on
relative positions. For example, the weights of x and y used in calculating the
final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is True),
and 1-alpha and alpha (if adjust is False).

More details can be found at
<http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows>

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date
      formats can be used, but the closer to the ISO 8601 date/time
      standard, the better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --statistic STATISTIC
      [optional, defaults to '']
      Statistic applied to each window.
      ┌──────┬────────────────────┐
      │ corr │ correlation        │
      ├──────┼────────────────────┤
      │ cov  │ covariance         │
      ├──────┼────────────────────┤
      │ mean │ mean               │
      ├──────┼────────────────────┤
      │ std  │ standard deviation │
      ├──────┼────────────────────┤
      │ var  │ variance           │
      ╘══════╧════════════════════╛

  --alpha_com ALPHA_COM
      [optional, defaults to None]
      Specify decay in terms of center of mass, alpha=1/(1+com), for com>=0
  --alpha_span ALPHA_SPAN
      [optional, defaults to None]
      Specify decay in terms of span, alpha=2/(span+1), for span>=1
  --alpha_halflife ALPHA_HALFLIFE
      [optional, defaults to None]
      Specify decay in terms of half-life, alpha=1-exp(log(0.5)/halflife), for
      halflife>0
  --alpha ALPHA
      [optional, defaults to None]
      Specify smoothing factor alpha directly, 0<alpha<=1
  --min_periods MIN_PERIODS
      [optional, default is 0]
      Minimum number of observations in window required to have a value
      (otherwise result is NA).
  --adjust
      [optional, default is True]
      Divide by decaying adjustment factor in beginning periods to account
      for imbalance in relative weightings (viewing EWMA as a moving
      average).
  --ignore_na
      [optional, default is False] Ignore missing values when calculating
      weights.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
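
The adjust behavior described above can be sketched in plain Python. This is
an illustration of the stated formulas only, not pandas' or tstoolbox's
implementation; the smoothing factor alpha may come from any of the
parameterizations above (for example, alpha=2/(span+1) from a span).

```python
# Exponentially weighted mean, for both adjust=True (explicit weights
# (1-alpha)**(n-1), ..., 1-alpha, 1) and adjust=False (recursive form).

def ewm_mean(values, alpha, adjust=True):
    if adjust:
        out = []
        for i in range(len(values)):
            # Weight of the k-th observation at step i is (1-alpha)**(i-k).
            weights = [(1 - alpha) ** (i - k) for k in range(i + 1)]
            num = sum(w * v for w, v in zip(weights, values[: i + 1]))
            out.append(num / sum(weights))
        return out
    # Recursive form: y[0] = x[0]; y[i] = (1-alpha)*y[i-1] + alpha*x[i]
    out = [values[0]]
    for v in values[1:]:
        out.append((1 - alpha) * out[-1] + alpha * v)
    return out

# alpha=1 weights only the current value, so the input passes through.
print(ewm_mean([1.0, 2.0, 3.0], alpha=1.0))  # [1.0, 2.0, 3.0]
```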

expanding_window

$ tstoolbox expanding_window --help
usage: tstoolbox expanding_window [-h] [--input_ts INPUT_TS]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE] [--dropna
  DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--clean] [--statistic STATISTIC] [--min_periods MIN_PERIODS] [--center]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT]

Calculate an expanding window statistic.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date
      formats can be used, but the closer to the ISO 8601 date/time
      standard, the better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be a
      pandas DataFrame, pandas Series, dict, tuple, list, StringIO, or
      file name.

      If the result is a time series, a pandas DataFrame is returned.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first
      line header or column numbers. If using numbers, column number 1 is
      the first data column. To pick multiple columns, separate them by
      commas with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --statistic STATISTIC
      [optional, default is '']
      ┌───────────┬──────────────────────┐
      │ statistic │ Meaning              │
      ╞═══════════╪══════════════════════╡
      │ corr      │ correlation          │
      ├───────────┼──────────────────────┤
      │ count     │ count of real values │
      ├───────────┼──────────────────────┤
      │ cov       │ covariance           │
      ├───────────┼──────────────────────┤
      │ kurt      │ kurtosis             │
      ├───────────┼──────────────────────┤
      │ max       │ maximum              │
      ├───────────┼──────────────────────┤
      │ mean      │ mean                 │
      ├───────────┼──────────────────────┤
      │ median    │ median               │
      ├───────────┼──────────────────────┤
      │ min       │ minimum              │
      ├───────────┼──────────────────────┤
      │ skew      │ skew                 │
      ├───────────┼──────────────────────┤
      │ std       │ standard deviation   │
      ├───────────┼──────────────────────┤
      │ sum       │ sum                  │
      ├───────────┼──────────────────────┤
      │ var       │ variance             │
      ╘═══════════╧══════════════════════╛

  --min_periods MIN_PERIODS
      [optional, default is 1]
      Minimum number of observations in window required to have a value
      (otherwise result is NA).
  --center
      [optional, default is False]
      Set the labels at the center of the window.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

fill

$ tstoolbox fill --help
usage: tstoolbox fill [-h] [--input_ts INPUT_TS] [--method METHOD]
  [--print_input] [--start_date START_DATE] [--end_date END_DATE] [--columns
  COLUMNS] [--clean] [--index_type INDEX_TYPE] [--names NAMES] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--skiprows SKIPROWS]
  [--from_columns FROM_COLUMNS] [--to_columns TO_COLUMNS] [--tablefmt
  TABLEFMT]

Missing values can occur because of NaN, or because the time series is sparse.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --method METHOD
      [optional, default is 'ffill']
      String contained in single quotes or a number that defines the method to
      use for filling.
      ┌───────────┬────────────────────────────────────────┐
      │ method=   │ fill missing values with...            │
      ╞═══════════╪════════════════════════════════════════╡
      │ ffill     │ ...the last good value                 │
      ├───────────┼────────────────────────────────────────┤
      │ bfill     │ ...the next good value                 │
      ├───────────┼────────────────────────────────────────┤
      │ 2.3       │ ...with this number                    │
      ├───────────┼────────────────────────────────────────┤
      │ linear    │ ...with linearly interpolated values   │
      ├───────────┼────────────────────────────────────────┤
      │ nearest   │ ...nearest good value                  │
      ├───────────┼────────────────────────────────────────┤
      │ zero      │ ...zeroth order spline                 │
      ├───────────┼────────────────────────────────────────┤
      │ slinear   │ ...first order spline                  │
      ├───────────┼────────────────────────────────────────┤
      │ quadratic │ ...second order spline                 │
      ├───────────┼────────────────────────────────────────┤
      │ cubic     │ ...third order spline                  │
      ├───────────┼────────────────────────────────────────┤
      │ mean      │ ...with mean                           │
      ├───────────┼────────────────────────────────────────┤
      │ median    │ ...with median                         │
      ├───────────┼────────────────────────────────────────┤
      │ max       │ ...with maximum                        │
      ├───────────┼────────────────────────────────────────┤
      │ min       │ ...with minimum                        │
      ├───────────┼────────────────────────────────────────┤
      │ from      │ ...with good values from other columns │
      ╘═══════════╧════════════════════════════════════════╛

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This is convenient because you don't have to create a data set with a
      certain column order; you can rearrange columns as the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --from_columns FROM_COLUMNS
      [required if method='from', otherwise not used]
      List of column names/numbers from which good values will be taken to fill
      missing values in the to_columns keyword.
  --to_columns TO_COLUMNS
      [required if method='from', otherwise not used]
      List of column names/numbers whose missing values will be replaced with
      good values from the columns in the from_columns keyword.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
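Several of the fill methods correspond to standard pandas operations. As a rough sketch (not tstoolbox's actual code), 'ffill', 'bfill', and 'linear' behave like:

```python
import numpy as np
import pandas as pd

# A gappy daily series with two missing values.
s = pd.Series([1.0, np.nan, np.nan, 4.0],
              index=pd.date_range("2020-01-01", periods=4, freq="D"))

ffilled = s.ffill()                       # carry the last good value forward
bfilled = s.bfill()                       # carry the next good value backward
linear = s.interpolate(method="linear")   # straight line between good values
```

A numeric method like `--method=2.3` presumably maps onto `fillna(2.3)` in the same way, and the spline methods onto scipy interpolation.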

filter

$ tstoolbox filter --help
usage: tstoolbox filter [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean]
  [--print_input] [--cutoff_period CUTOFF_PERIOD] [--window_len WINDOW_LEN]
  [--float_format FLOAT_FORMAT] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--round_index ROUND_INDEX] [--tablefmt TABLEFMT] filter_type

Apply different filters to the time-series.

positional arguments:
  filter_type           One of 'flat', 'hanning', 'hamming', 'bartlett', or
    'blackman' for window smoothing, or 'fft_highpass' and 'fft_lowpass' for
    Fast Fourier Transform filters in the frequency domain.


optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This is convenient because you don't have to create a data set with a
      certain column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --cutoff_period CUTOFF_PERIOD
      [optional, default is None]
      For 'fft_highpass' and 'fft_lowpass'; must be supplied if using either.
      The period, in input time units, that forms the cutoff between low
      frequencies (longer periods) and high frequencies (shorter periods).
      The filter will be smoothed by a 'window_len' running average.
  --window_len WINDOW_LEN
      [optional, default is 5]
      For the windowed types, 'flat', 'hanning', 'hamming', 'bartlett', and
      'blackman' specifies the length of the window.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious when rounding from a fine interval
      to a very coarse one, since that can create duplicate values in the
      index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
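The windowed filter types amount to convolving the series with a normalized window. A minimal numpy sketch (the function name `smooth` and the edge handling are illustrative, not tstoolbox's exact implementation):

```python
import numpy as np

def smooth(x, window_len=5, window="hanning"):
    """Convolve x with a normalized window ('hanning', 'hamming',
    'bartlett', or 'blackman'); 'flat' is a plain moving average."""
    if window == "flat":
        w = np.ones(window_len)
    else:
        w = getattr(np, window)(window_len)  # np.hanning, np.hamming, ...
    return np.convolve(x, w / w.sum(), mode="same")

# Smooth a noisy sine; output length matches the input.
noisy = np.sin(np.linspace(0, 2 * np.pi, 50))
smoothed = smooth(noisy, window_len=5)
```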

gof

$ tstoolbox gof --help
usage: tstoolbox gof [-h] [--input_ts INPUT_TS] [--stats STATS]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE]
  [--round_index ROUND_INDEX] [--clean] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--skiprows SKIPROWS] [--tablefmt TABLEFMT]

The first time series must be the observed, the second the predicted series. You
can only give two time-series.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --stats STATS
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This is convenient because you don't have to create a data set with a
      certain column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious when rounding from a fine interval
      to a very coarse one, since that can create duplicate values in the
      index.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
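Goodness-of-fit statistics compare the observed and predicted columns pointwise. A hand-rolled sketch of a few common ones (which statistics gof actually reports depends on the --stats setting; these formulas are standard, not copied from tstoolbox):

```python
import numpy as np

obs = np.array([1.0, 2.0, 3.0, 4.0])   # observed (first series)
sim = np.array([1.1, 1.9, 3.2, 3.8])   # predicted (second series)

bias = np.mean(sim - obs)                             # mean error
rmse = np.sqrt(np.mean((sim - obs) ** 2))             # root-mean-square error
# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean.
nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```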

lag

$ tstoolbox lag --help
usage: tstoolbox lag [-h] [--input_ts INPUT_TS] [--print_input]
  [--start_date START_DATE] [--end_date END_DATE] [--columns COLUMNS] [--clean]
  [--index_type INDEX_TYPE] [--names NAMES] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--skiprows SKIPROWS] [--tablefmt TABLEFMT]
  lags

Create a series of lagged time-series.

positional arguments:
  lags                  If an integer, will calculate all lags up to and
    including the lag number. If a list, will calculate each lag in the list.
    If a string, must be a comma-separated list of integers.


optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This is convenient because you don't have to create a data set with a
      certain column order; you can rearrange columns as the data is read in.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
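A lagged series simply shifts values later along the index. A minimal pandas sketch of what lags up to 2 produce (the column naming here is illustrative, not tstoolbox's exact output):

```python
import pandas as pd

s = pd.Series([10.0, 20.0, 30.0, 40.0], name="flow")

# Each lag k shifts values k steps later; the earliest k slots become NaN.
lagged = pd.concat({f"flow_lag{k}": s.shift(k) for k in range(1, 3)}, axis=1)
```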

normalization

$ tstoolbox normalization --help
usage: tstoolbox normalization [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--mode MODE]
  [--min_limit MIN_LIMIT] [--max_limit MAX_LIMIT] [--pct_rank_method
  PCT_RANK_METHOD] [--print_input] [--round_index ROUND_INDEX] [--source_units
  SOURCE_UNITS] [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT]
  [--tablefmt TABLEFMT]

Return the normalization of the time series.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This is convenient because you don't have to create a data set with a
      certain column order; you can rearrange columns as the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defauls it 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --mode MODE
      [optional, default is 'minmax']
      minmax
        min_limit + (X-Xmin)/(Xmax-Xmin)*(max_limit-min_limit)

      zscore
        (X-mean(X))/stddev(X)

      pct_rank
        rank(X)*100/N

  --min_limit MIN_LIMIT
      [optional, defaults to 0]
      Defines the minimum limit of the minmax normalization.
  --max_limit MAX_LIMIT
      [optional, defaults to 1]
      Defines the maximum limit of the minmax normalization.
  --pct_rank_method PCT_RANK_METHOD
      [optional, defaults to 'average']
      Defines how tied ranks are broken. Can be 'average', 'min', 'max',
      'first', 'dense'.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious when rounding from a fine interval
      to a very coarse one, since that can create duplicate values in the
      index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
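
The three modes above map directly onto simple pandas operations. The
following is a minimal sketch of the formulas with made-up values, not the
tstoolbox implementation itself:

```python
import pandas as pd

x = pd.Series([2.0, 4.0, 4.0, 6.0, 10.0])  # example data column

# 'minmax': scale into [min_limit, max_limit] (defaults 0 and 1)
min_limit, max_limit = 0.0, 1.0
minmax = min_limit + (x - x.min()) / (x.max() - x.min()) * (max_limit - min_limit)

# 'zscore': center on the mean, scale by the standard deviation
zscore = (x - x.mean()) / x.std()

# 'pct_rank': percentile rank, with ties resolved per pct_rank_method
pct_rank = x.rank(method="average", pct=True) * 100
```

With the series above, `minmax` runs from 0 at the smallest value to 1 at
the largest, and the tied values share a percentile rank of 50.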

pca

$ tstoolbox pca --help
usage: tstoolbox pca [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--n_components
  N_COMPONENTS] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--round_index ROUND_INDEX] [--tablefmt TABLEFMT]

Does not return a time-series.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --n_components N_COMPONENTS
      [optional, default is None]
      The columns in the input_ts will be grouped into n_components groups.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one, since that could lead to duplicate values
      in the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
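
For intuition, the core of the grouping into n_components can be sketched
with plain NumPy via the SVD of the centered data. This is only an
illustration of what a PCA computes, not the tstoolbox implementation, and
the input values are randomly generated:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))      # 100 observations, 3 data columns
n_components = 2

centered = data - data.mean(axis=0)   # PCA operates on centered data
# Rows of `vt` are the principal directions, ordered by explained variance.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:n_components].T  # data projected onto 2 components
```

Each original column contributes to every component; the first component
captures the largest share of the variance, the second the next largest.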

pct_change

$ tstoolbox pct_change --help
usage: tstoolbox pct_change [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--periods
  PERIODS] [--fill_method FILL_METHOD] [--limit LIMIT] [--freq FREQ]
  [--print_input] [--round_index ROUND_INDEX] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--float_format FLOAT_FORMAT] [--tablefmt
  TABLEFMT]

Return the percent change between times.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --periods PERIODS
      [optional, default is 1]
      The number of intervals to calculate percent change across.
  --fill_method FILL_METHOD
      [optional, defaults to 'pad']
      Fill method for NA values.
  --limit LIMIT
      [optional, defaults to None]
      The maximum number of consecutive NA values to fill before stopping.
  --freq FREQ
      [optional, defaults to None]
      A pandas time offset string to represent the interval.
  --print_input
      [optional, default is False, output format]
      If set to 'True', the input columns will be included in the output table.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one, since that could lead to duplicate values
      in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
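
The calculation corresponds closely to pandas' `pct_change`, and the
`periods`, `fill_method`, `limit`, and `freq` options mirror its arguments.
A short sketch with made-up values (an analogue of what the command
computes, not the exact tstoolbox call):

```python
import math

import pandas as pd

ts = pd.Series(
    [10.0, 12.0, 9.0, 9.0],
    index=pd.date_range("2000-01-01", periods=4, freq="D"),
)
# periods=1 compares each value with the previous one:
# change[t] = (x[t] - x[t - periods]) / x[t - periods]
change = ts.pct_change(periods=1)
```

The first `periods` values have no predecessor to compare against, so they
come out as NA.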

peak_detection

$ tstoolbox peak_detection --help
usage: tstoolbox peak_detection [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--method
  METHOD] [--extrema EXTREMA] [--window WINDOW] [--pad_len PAD_LEN] [--points
  POINTS] [--lock_frequency] [--float_format FLOAT_FORMAT] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--print_input PRINT_INPUT] [--tablefmt TABLEFMT]

Peak and valley detection.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --method METHOD
      [optional, default is 'rel']
      'rel', 'minmax', 'zero_crossing', 'parabola', 'sine' methods are
      available. The different algorithms have different strengths and
      weaknesses.
  --extrema EXTREMA
      [optional, default is 'peak']
      'peak', 'valley', or 'both' to determine what should be returned.
  --window WINDOW
      [optional, default is 24]
      There will not usually be multiple peaks within the window number of
      values. The different methods use this variable in different ways.
      For 'rel' the window keyword specifies how many points on each side
      are required for comparator(n, n+x) to be True. For 'minmax' the
      window keyword is the distance to look ahead from a peak candidate
      to determine if it is the actual peak. A reasonable value is
      (sample / period) / f, where f is typically between 1.25 and 4.
      For 'zero_crossing' the window keyword is the dimension of the
      smoothing window and should be an odd integer.
  --pad_len PAD_LEN
      [optional, default is 5]
      Used with FFT to pad edges of time-series.
  --points POINTS
      [optional, default is 9]
      For 'parabola' and 'sine' methods. How many points around the peak
      should be used during curve fitting; must be odd.
  --lock_frequency
      [optional, default is False]
      For 'sine' method only. Specifies whether the frequency argument of
      the model function should be locked to the value calculated from the
      raw peaks or whether the optimization process may adjust it.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one, since that could lead to duplicate values
      in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input PRINT_INPUT
      [optional, default is False, output format]
      If set to 'True', the input columns will be included in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
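
As a rough illustration of the 'rel' idea, a point counts as a peak when it
exceeds its `window` neighbors on each side. The helper below is a
simplified sketch of that comparison, not the tstoolbox algorithm:

```python
import numpy as np

def rel_peaks(values, window=1):
    """Return indices of points larger than all neighbors within `window`."""
    values = np.asarray(values, dtype=float)
    peaks = []
    for i in range(window, len(values) - window):
        left = values[i - window:i]
        right = values[i + 1:i + 1 + window]
        if (values[i] > left).all() and (values[i] > right).all():
            peaks.append(i)
    return peaks

print(rel_peaks([0, 2, 1, 3, 1, 0], window=1))  # peaks at indices 1 and 3
```

Widening `window` demands more separation between peaks, so with
`window=2` only the highest point above survives.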

pick

$ tstoolbox pick --help
usage: tstoolbox pick [-h] [--input_ts INPUT_TS] [--start_date START_DATE]
  [--end_date END_DATE] [--round_index ROUND_INDEX] [--dropna DROPNA]
  [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT] columns

Can use column names or column numbers. If using numbers, column number 1 is the
first data column.

positional arguments:
  columns [optional, defaults to all columns, input filter]
    Columns to select out of input. Can use column names from the first line
    header or column numbers. If using numbers, column number 1 is the first
    data column. To pick multiple columns, separate them by commas with no
    spaces.
    This means you don't have to create a data set with columns in a certain
    order; you can rearrange columns when the data is read in.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance since it cuts down on memory and processing
      requirements; however, be cautious about rounding to a very coarse
      interval from a small one, since that could lead to duplicate values
      in the index.
  --dropna DROPNA
      [optional, defaults to 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
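
In Python terms, pick behaves like column selection on a DataFrame,
accepting either names or 1-based data-column numbers. The `pick` helper
below is a hypothetical sketch of that behavior, not the tstoolbox
function:

```python
import pandas as pd

df = pd.DataFrame(
    {"flow": [1, 2], "stage": [3, 4], "temp": [5, 6]},
    index=pd.date_range("2000-01-01", periods=2, freq="D"),
)

def pick(frame, columns):
    """Select/reorder columns given 'name,name' or 1-based '1,3' style."""
    keys = [
        frame.columns[int(col) - 1] if col.isdigit() else col
        for col in columns.split(",")
    ]
    return frame[keys]

picked = pick(df, "3,flow")  # data column 3 ('temp') first, then 'flow'
```

Note that the selection order is preserved, which is what lets you
rearrange columns as the data is read in.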

plot

$ tstoolbox plot --help
usage: tstoolbox plot [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--clean] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--ofilename OFILENAME]
  [--type TYPE] [--xtitle XTITLE] [--ytitle YTITLE] [--title TITLE] [--figsize
  FIGSIZE] [--legend LEGEND] [--legend_names LEGEND_NAMES] [--subplots]
  [--sharex] [--sharey] [--colors COLORS] [--linestyles LINESTYLES]
  [--markerstyles MARKERSTYLES] [--style STYLE] [--logx] [--logy] [--xaxis
  XAXIS] [--yaxis YAXIS] [--xlim XLIM] [--ylim YLIM] [--secondary_y]
  [--mark_right] [--scatter_matrix_diagonal SCATTER_MATRIX_DIAGONAL]
  [--bootstrap_size BOOTSTRAP_SIZE] [--bootstrap_samples BOOTSTRAP_SAMPLES]
  [--norm_xaxis] [--norm_yaxis] [--lognorm_xaxis] [--lognorm_yaxis]
  [--xy_match_line XY_MATCH_LINE] [--grid] [--label_rotation LABEL_ROTATION]
  [--label_skip LABEL_SKIP] [--force_freq FORCE_FREQ] [--drawstyle DRAWSTYLE]
  [--por] [--invert_xaxis] [--invert_yaxis] [--round_index ROUND_INDEX]
  [--plotting_position PLOTTING_POSITION] [--prob_plot_sort_values
  PROB_PLOT_SORT_VALUES] [--source_units SOURCE_UNITS] [--target_units
  TARGET_UNITS] [--lag_plot_lag LAG_PLOT_LAG]

Plot data.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox pick command.
      This means you don't have to create a data set with columns in a
      certain order; you can rearrange columns when the data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --ofilename OFILENAME
      [optional, defaults to 'plot.png']
      Output filename for the plot. Extension defines the type, for example
      'filename.png' will create a PNG file.
      Within the Python API, if ofilename is None the Matplotlib figure is
      returned so it can be changed or added to as needed.
  --type TYPE
      [optional, defaults to 'time']
      The plot type.
      Can be one of the following:
      time
        Standard time series plot. Time is the index, and plots each column of
        data.

      xy
        An (x,y) plot, also known as a scatter plot. Data must be organized as
        x1,y1,x2,y2,x3,y3,....

      double_mass
        An (x,y) plot of the cumulative sum of x and y. Data must be organized
        as x1,y1,x2,y2,x3,y3,....

      boxplot
        Box extends from lower to upper quartile, with a line at the median.
        Depending on the statistics, the whiskers represent the range of
        the data or 1.5 times the inter-quartile range (Q3 - Q1). Data
        should be organized as y1,y2,y3,....

      scatter_matrix
        Plots all columns against each other in a matrix, with the diagonal
        plots either histogram or KDE probability distribution depending
        on scatter_matrix_diagonal keyword.

      lag_plot
        Indicates structure in the data. Only available for a single
        time-series.

      autocorrelation
        Plot autocorrelation. Only available for a single time-series.

      bootstrap
        Visually assess aspects of a data set by plotting random selections of
        values. Only available for a single time-series.

      histogram
        Calculate and create a histogram plot. See 'kde' for a smooth
        representation of a histogram.

      kde
        An estimate of the probability density function of the data, computed
        by kernel density estimation (KDE). A smooth representation of a
        histogram.

      kde_time
        A kernel density estimate (KDE) of the probability density function
        combined with a time-series plot.

      bar
        Column plot.

      barh
        A horizontal bar plot.

      bar_stacked
        A stacked column plot.

      barh_stacked
        A horizontal stacked bar plot.

      heatmap
        Create a 2D heatmap of daily data, day of year x-axis, and year for
        y-axis. Only available for a single, daily time-series.

      norm_xaxis
        Sort, calculate probabilities, and plot data against an x-axis normal
        distribution.

      norm_yaxis
        Sort, calculate probabilities, and plot data against a y-axis normal
        distribution.

      lognorm_xaxis
        Sort, calculate probabilities, and plot data against an x-axis
        lognormal distribution.

      lognorm_yaxis
        Sort, calculate probabilities, and plot data against a y-axis
        lognormal distribution.

      weibull_xaxis
        Sort, calculate probabilities, and plot data against an x-axis Weibull
        distribution.

      weibull_yaxis
        Sort, calculate probabilities, and plot data against a y-axis Weibull
        distribution.

      taylor
        Creates a Taylor diagram that compares three goodness-of-fit
        statistics on one plot: standard deviation, correlation
        coefficient, and centered root mean square deviation. The data
        columns have to be organized as
        'observed,simulated1,simulated2,simulated3,...etc.'

      target
        Creates a target diagram that compares three goodness-of-fit
        statistics on one plot: bias, root mean square deviation, and
        centered root mean square deviation. The data columns have to be
        organized as 'observed,simulated1,simulated2,simulated3,...etc.'

  --xtitle XTITLE
      [optional, default depends on type]
      Title of x-axis.
  --ytitle YTITLE
      [optional, default depends on type]
      Title of y-axis.
  --title TITLE
      [optional, defaults to '']
      Title of chart.
  --figsize FIGSIZE
      [optional, defaults to '10,6.5']
      The 'width,height' of plot in inches.
  --legend LEGEND
      [optional, defaults to True]
      Whether to display the legend.
  --legend_names LEGEND_NAMES
      [optional, defaults to None]
      The legend normally uses the time-series names from the input data. The
      'legend_names' option overrides those names. Supply a comma-separated
      list with one string per time-series in the data set.
  --subplots
      [optional, defaults to False]
      Make separate subplots for each time series.
  --sharex
      [optional, defaults to True]
      If subplots=True, share the x-axis.
  --sharey
      [optional, defaults to False]
      If subplots=True, share the y-axis.
  --colors COLORS
      [optional, default is 'auto']
      The default 'auto' will cycle through the matplotlib colors. Otherwise,
      supply a comma-separated list of matplotlib color codes on the
      command line, or a list of color code strings in the Python API.
      Use the separate 'colors', 'linestyles', and 'markerstyles' options
      instead of the combined 'style' keyword.
      ┌──────┬─────────┐
      │ Code │ Color   │
      ╞══════╪═════════╡
      │ b    │ blue    │
      ├──────┼─────────┤
      │ g    │ green   │
      ├──────┼─────────┤
      │ r    │ red     │
      ├──────┼─────────┤
      │ c    │ cyan    │
      ├──────┼─────────┤
      │ m    │ magenta │
      ├──────┼─────────┤
      │ y    │ yellow  │
      ├──────┼─────────┤
      │ k    │ black   │
      ╘══════╧═════════╛

      ┌─────────┬───────────┐
      │ Number  │ Color     │
      ╞═════════╪═══════════╡
      │ 0.75    │ 0.75 gray │
      ├─────────┼───────────┤
      │ ...etc. │           │
      ╘═════════╧═══════════╛

      ┌──────────────────┐
      │ HTML Color Names │
      ╞══════════════════╡
      │ red              │
      ├──────────────────┤
      │ burlywood        │
      ├──────────────────┤
      │ chartreuse       │
      ├──────────────────┤
      │ ...etc.          │
      ╘══════════════════╛

      Color reference: <http://matplotlib.org/api/colors_api.html>
  --linestyles LINESTYLES
      [optional, default is 'auto']
      If 'auto', iterates through the available matplotlib line types.
      Otherwise, supply a comma-separated list on the command line, or a
      list of strings in the Python API.
      To display no lines, use a space (' ') as the linestyle code.
      Use the separate 'colors', 'linestyles', and 'markerstyles' options
      instead of the combined 'style' keyword.
      ┌──────┬──────────────┐
      │ Code │ Lines        │
      ╞══════╪══════════════╡
      │ -    │ solid        │
      ├──────┼──────────────┤
      │ --   │ dashed       │
      ├──────┼──────────────┤
      │ -.   │ dash_dot     │
      ├──────┼──────────────┤
      │ :    │ dotted       │
      ├──────┼──────────────┤
      │ None │ draw nothing │
      ├──────┼──────────────┤
      │ ' '  │ draw nothing │
      ├──────┼──────────────┤
      │ ''   │ draw nothing │
      ╘══════╧══════════════╛

      Line reference: <http://matplotlib.org/api/artist_api.html>
  --markerstyles MARKERSTYLES
      [optional, default is ' ']
      The default ' ' plots no marker. If 'auto', iterates through the
      available matplotlib marker types. Otherwise, supply a
      comma-separated list on the command line, or a list of strings in
      the Python API.
      Use the separate 'colors', 'linestyles', and 'markerstyles' options
      instead of the combined 'style' keyword.
      ┌──────┬────────────────┐
      │ Code │ Markers        │
      ╞══════╪════════════════╡
      │ .    │ point          │
      ├──────┼────────────────┤
      │ o    │ circle         │
      ├──────┼────────────────┤
      │ v    │ triangle down  │
      ├──────┼────────────────┤
      │ ^    │ triangle up    │
      ├──────┼────────────────┤
      │ <    │ triangle left  │
      ├──────┼────────────────┤
      │ >    │ triangle right │
      ├──────┼────────────────┤
      │ 1    │ tri_down       │
      ├──────┼────────────────┤
      │ 2    │ tri_up         │
      ├──────┼────────────────┤
      │ 3    │ tri_left       │
      ├──────┼────────────────┤
      │ 4    │ tri_right      │
      ├──────┼────────────────┤
      │ 8    │ octagon        │
      ├──────┼────────────────┤
      │ s    │ square         │
      ├──────┼────────────────┤
      │ p    │ pentagon       │
      ├──────┼────────────────┤
      │ *    │ star           │
      ├──────┼────────────────┤
      │ h    │ hexagon1       │
      ├──────┼────────────────┤
      │ H    │ hexagon2       │
      ├──────┼────────────────┤
      │ +    │ plus           │
      ├──────┼────────────────┤
      │ x    │ x              │
      ├──────┼────────────────┤
      │ D    │ diamond        │
      ├──────┼────────────────┤
      │ d    │ thin diamond   │
      ├──────┼────────────────┤
      │ _    │ hline          │
      ├──────┼────────────────┤
      │ None │ nothing        │
      ├──────┼────────────────┤
      │ ' '  │ nothing        │
      ├──────┼────────────────┤
      │ ''   │ nothing        │
      ╘══════╧════════════════╛

      Marker reference: <http://matplotlib.org/api/markers_api.html>
  --style STYLE
      [optional, default is None]
      Still available, but superseded by the separate 'colors', 'linestyles',
      and 'markerstyles' options. If given, 'style' overrides the others.
      Comma-separated matplotlib style strings, one per time-series. Combine
      codes in 'ColorMarkerLine' order; for example, 'r*--' is a red
      dashed line with a star marker.
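The 'ColorMarkerLine' codes are ordinary matplotlib format strings; a small sketch showing how matplotlib itself parses 'r*--' (non-interactive backend, figure is discarded):

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend; nothing is displayed

import matplotlib.pyplot as plt

# 'r*--' in ColorMarkerLine order: red color, star marker, dashed line.
fig, ax = plt.subplots()
(line,) = ax.plot([1, 2, 3], [1, 4, 9], "r*--")
print(line.get_color(), line.get_marker(), line.get_linestyle())  # r * --
plt.close(fig)
```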
  --logx
      DEPRECATED: use '--xaxis="log"' instead.
  --logy
      DEPRECATED: use '--yaxis="log"' instead.
  --xaxis XAXIS
      [optional, default is 'arithmetic']
      Defines the type of the xaxis. One of 'arithmetic', 'log'.
  --yaxis YAXIS
      [optional, default is 'arithmetic']
      Defines the type of the yaxis. One of 'arithmetic', 'log'.
  --xlim XLIM
      [optional, default is based on range of x values]
      Comma separated lower and upper limits for the x-axis of the plot. For
      example, '--xlim 1,1000' would limit the plot from 1 to 1000, where
      '--xlim ,1000' would base the lower limit on the data and set the
      upper limit to 1000.
  --ylim YLIM
      [optional, default is based on range of y values]
      Comma separated lower and upper limits for the y-axis of the plot. See
      xlim for examples.
  --secondary_y
      [optional, default is False]
      Whether to plot on the secondary y-axis. If a list/tuple, which
      time-series to plot on secondary y-axis.
  --mark_right
      [optional, default is True]
      When using a secondary y-axis, automatically label the legend entries
      with the axis each time-series is plotted on.
  --scatter_matrix_diagonal SCATTER_MATRIX_DIAGONAL
      [optional, defaults to 'kde']
      If plot type is 'scatter_matrix', this specifies the plot along the
      diagonal. One of 'kde' for Kernel Density Estimation or 'hist' for a
      histogram.
  --bootstrap_size BOOTSTRAP_SIZE
      [optional, defaults to 50]
      The size of the random subset for 'bootstrap' plot.
  --bootstrap_samples BOOTSTRAP_SAMPLES
      [optional, defaults to 500]
      The number of random subsets of 'bootstrap_size'.
  --norm_xaxis
      DEPRECATED: use '--type="norm_xaxis"' instead.
  --norm_yaxis
      DEPRECATED: use '--type="norm_yaxis"' instead.
  --lognorm_xaxis
      DEPRECATED: use '--type="lognorm_xaxis"' instead.
  --lognorm_yaxis
      DEPRECATED: use '--type="lognorm_yaxis"' instead.
  --xy_match_line XY_MATCH_LINE
      [optional, default is '']
      Will add a match line where x == y. Set to a line style code.
  --grid
      [optional, default is False]
      Whether to plot grid lines on the major ticks.
  --label_rotation LABEL_ROTATION
      [optional]
      Rotation for major labels for bar plots.
  --label_skip LABEL_SKIP
      [optional]
      Skip for major labels for bar plots.
  --force_freq FORCE_FREQ
      [optional, output format]
      Force this frequency for the files. Typically you will only want to
      enforce a smaller interval, where tstoolbox will insert missing
      values as needed. WARNING: you may lose data if not careful with
      this option. In general, letting the algorithm determine the
      frequency should always work, but this option will override it. Use
      PANDAS offset codes.
  --drawstyle DRAWSTYLE
      [optional, default is 'default']
      'default' connects the points with lines. The steps variants produce
      step-plots. 'steps' is equivalent to 'steps-pre' and is maintained
      for backward-compatibility.
      ACCEPTS:
      ['default' | 'steps' | 'steps-pre' | 'steps-mid' | 'steps-post']

  --por
      [optional]
      Plot from the first good value to the last good value, stripping NaNs
      from the beginning and end.
  --invert_xaxis
      [optional, default is False]
      Invert the x-axis.
  --invert_yaxis
      [optional, default is False]
      Invert the y-axis.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, which could create duplicate values in the index.
  --plotting_position PLOTTING_POSITION
      [optional, default is 'weibull']
      ┌────────────┬─────┬─────────────────┬───────────────────────┐
      │ Name       │ a   │ Equation        │ Description           │
      │            │     │ (i-a)/(n+1-2*a) │                       │
      ╞════════════╪═════╪═════════════════╪═══════════════════════╡
      │ weibull    │ 0   │ i/(n+1)         │ mean of sampling      │
      │ (default)  │     │                 │ distribution          │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ benard and │ 0.3 │ (i-0.3)/(n+0.4) │ approx. median of     │
      │ bos-       │     │                 │ sampling distribution │
      │ levenbach  │     │                 │                       │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ tukey      │ 1/3 │ (i-1/3)/(n+1/3) │ approx. median of     │
      │            │     │                 │ sampling distribution │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ gumbel     │ 1   │ (i-1)/(n-1)     │ mode of sampling      │
      │            │     │                 │ distribution          │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ hazen      │ 1/2 │ (i-1/2)/n       │ midpoints of n equal  │
      │            │     │                 │ intervals             │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ cunnane    │ 2/5 │ (i-2/5)/(n+1/5) │ subjective            │
      ├────────────┼─────┼─────────────────┼───────────────────────┤
      │ california │ NA  │ i/n             │                       │
      ╘════════════╧═════╧═════════════════╧═══════════════════════╛

      Where 'i' is the sorted rank of the y value, and 'n' is the total number
      of values to be plotted.
      Only used for norm_xaxis, norm_yaxis, lognorm_xaxis, lognorm_yaxis,
      weibull_xaxis, and weibull_yaxis.
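Each named formula is an instance of the general plotting-position equation (i-a)/(n+1-2*a), where i is the sorted rank and n the number of values; a small sketch:

```python
import numpy as np

def plotting_position(n, a):
    """Plotting positions (i - a) / (n + 1 - 2*a) for ranks i = 1..n."""
    i = np.arange(1.0, n + 1.0)
    return (i - a) / (n + 1.0 - 2.0 * a)

# Weibull (a=0) reduces to i/(n+1); Hazen (a=1/2) reduces to (i-1/2)/n.
print(plotting_position(4, 0.0))  # [0.2 0.4 0.6 0.8]
print(plotting_position(4, 0.5))  # [0.125 0.375 0.625 0.875]
```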
  --prob_plot_sort_values PROB_PLOT_SORT_VALUES
      [optional, default is 'descending']
      How to sort the values for the probability plots.
      Only used for norm_xaxis, norm_yaxis, lognorm_xaxis, lognorm_yaxis,
      weibull_xaxis, and weibull_yaxis.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --lag_plot_lag LAG_PLOT_LAG
      [optional, default is 1]
      The lag used if type "lag_plot" is chosen.

rank

$ tstoolbox rank --help
usage: tstoolbox rank [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--axis AXIS]
  [--method METHOD] [--numeric_only NUMERIC_ONLY] [--na_option NA_OPTION]
  [--ascending] [--pct] [--print_input] [--float_format FLOAT_FORMAT]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--round_index
  ROUND_INDEX] [--tablefmt TABLEFMT]

Equal values are assigned a rank that is the average of the ranks of those
values.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option, where `input_ts` can be any
      of: a pandas DataFrame, pandas Series, dict, tuple, list,
      StringIO, or file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox pick command.
      This is a major convenience: you don't have to create a data set with
      columns in a particular order, since they can be rearranged as the
      data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
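The 'any'/'all' behavior matches pandas DataFrame.dropna; a quick sketch with a made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, np.nan], "b": [4.0, 5.0, np.nan]})

# 'any' drops a record if any column is NA; 'all' only if every column is NA.
print(len(df.dropna(how="any")))  # 1
print(len(df.dropna(how="all")))  # 2
```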
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --axis AXIS
      [optional, default is 0]
      0 or 'index' for rows. 1 or 'columns' for columns. Index to direct
      ranking.
  --method METHOD
      [optional, default is 'average']
      ┌─────────────────┬────────────────────────────────┐
      │ method argument │ Description                    │
      ╞═════════════════╪════════════════════════════════╡
      │ average         │ average rank of group          │
      ├─────────────────┼────────────────────────────────┤
      │ min             │ lowest rank in group           │
      ├─────────────────┼────────────────────────────────┤
      │ max             │ highest rank in group          │
      ├─────────────────┼────────────────────────────────┤
      │ first           │ ranks assigned in order they   │
      │                 │ appear in the array            │
      ├─────────────────┼────────────────────────────────┤
      │ dense           │ like 'min', but rank always    │
      │                 │ increases by 1 between groups  │
      ╘═════════════════╧════════════════════════════════╛
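These are the pandas rank methods; a short sketch showing how ties differ between them (made-up values):

```python
import pandas as pd

s = pd.Series([7, 3, 3, 1])

# 'average' gives ties the mean of their ranks; 'min' the lowest;
# 'dense' is like 'min' but ranks always increase by 1 between groups.
print(s.rank(method="average").tolist())  # [4.0, 2.5, 2.5, 1.0]
print(s.rank(method="min").tolist())      # [4.0, 2.0, 2.0, 1.0]
print(s.rank(method="dense").tolist())    # [3.0, 2.0, 2.0, 1.0]
```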

  --numeric_only NUMERIC_ONLY
      [optional, default is None]
      Include only float, int, boolean data. Valid only for DataFrame or Panel
      objects.
  --na_option NA_OPTION
      [optional, default is 'keep']
      ┌────────────────────┬────────────────────────────────┐
      │ na_option argument │ Description                    │
      ╞════════════════════╪════════════════════════════════╡
      │ keep               │ leave NA values where they are │
      ├────────────────────┼────────────────────────────────┤
      │ top                │ smallest rank if ascending     │
      ├────────────────────┼────────────────────────────────┤
      │ bottom             │ smallest rank if descending    │
      ╘════════════════════╧════════════════════════════════╛

  --ascending
      [optional, default is True]
      If False, ranks from high (1) to low (N).
  --pct
      [optional, default is False]
      Computes percentage rank of data.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, which could create duplicate values in the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.

read

$ tstoolbox read --help
usage: tstoolbox read [-h] [--force_freq FORCE_FREQ] [--append APPEND]
  [--columns COLUMNS] [--start_date START_DATE] [--end_date END_DATE] [--dropna
  DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES]
  [--clean] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--float_format FLOAT_FORMAT] [--round_index ROUND_INDEX] [--tablefmt
  TABLEFMT] filenames

Prints the time-series read from the given files in the tstoolbox standard
format.

positional arguments:
  filenames             List of comma-delimited filenames to read time series
    from.


optional arguments:
  -h | --help
      show this help message and exit
  --force_freq FORCE_FREQ
      [optional, output format]
      Force this frequency for the files. Typically you will only want to
      enforce a smaller interval, where tstoolbox will insert missing
      values as needed. WARNING: you may lose data if not careful with
      this option. In general, letting the algorithm determine the
      frequency should always work, but this option will override it. Use
      PANDAS offset codes.
      ┌───────┬─────────────────────────────┐
      │ Alias │ Description                 │
      ╞═══════╪═════════════════════════════╡
      │ B     │ business day                │
      ├───────┼─────────────────────────────┤
      │ C     │ custom business day         │
      │       │ (experimental)              │
      ├───────┼─────────────────────────────┤
      │ D     │ calendar day                │
      ├───────┼─────────────────────────────┤
      │ W     │ weekly                      │
      ├───────┼─────────────────────────────┤
      │ M     │ month end                   │
      ├───────┼─────────────────────────────┤
      │ BM    │ business month end          │
      ├───────┼─────────────────────────────┤
      │ CBM   │ custom business month end   │
      ├───────┼─────────────────────────────┤
      │ MS    │ month start                 │
      ├───────┼─────────────────────────────┤
      │ BMS   │ business month start        │
      ├───────┼─────────────────────────────┤
      │ CBMS  │ custom business month start │
      ├───────┼─────────────────────────────┤
      │ Q     │ quarter end                 │
      ├───────┼─────────────────────────────┤
      │ BQ    │ business quarter end        │
      ├───────┼─────────────────────────────┤
      │ QS    │ quarter start               │
      ├───────┼─────────────────────────────┤
      │ BQS   │ business quarter start      │
      ├───────┼─────────────────────────────┤
      │ A     │ year end                    │
      ├───────┼─────────────────────────────┤
      │ BA    │ business year end           │
      ├───────┼─────────────────────────────┤
      │ AS    │ year start                  │
      ├───────┼─────────────────────────────┤
      │ BAS   │ business year start         │
      ├───────┼─────────────────────────────┤
      │ H     │ hourly                      │
      ├───────┼─────────────────────────────┤
      │ T     │ minutely                    │
      ├───────┼─────────────────────────────┤
      │ S     │ secondly                    │
      ├───────┼─────────────────────────────┤
      │ L     │ milliseconds                │
      ├───────┼─────────────────────────────┤
      │ U     │ microseconds                │
      ├───────┼─────────────────────────────┤
      │ N     │ nanoseconds                 │
      ╘═══════╧═════════════════════════════╛

      Weekly has the following anchored frequencies:
      ┌───────┬───────────────────────────────┐
      │ Alias │ Description                   │
      ╞═══════╪═══════════════════════════════╡
      │ W-SUN │ weekly frequency (sundays).   │
      │       │ Same as 'W'.                  │
      ├───────┼───────────────────────────────┤
      │ W-MON │ weekly frequency (mondays)    │
      ├───────┼───────────────────────────────┤
      │ W-TUE │ weekly frequency (tuesdays)   │
      ├───────┼───────────────────────────────┤
      │ W-WED │ weekly frequency (wednesdays) │
      ├───────┼───────────────────────────────┤
      │ W-THU │ weekly frequency (thursdays)  │
      ├───────┼───────────────────────────────┤
      │ W-FRI │ weekly frequency (fridays)    │
      ├───────┼───────────────────────────────┤
      │ W-SAT │ weekly frequency (saturdays)  │
      ╘═══════╧═══════════════════════════════╛

      Quarterly frequencies (Q, BQ, QS, BQS) and annual frequencies (A, BA, AS,
      BAS) have the following anchoring suffixes:
      ┌───────┬───────────────────────────────┐
      │ Alias │ Description                   │
      ╞═══════╪═══════════════════════════════╡
      │ -DEC  │ year ends in December (same   │
      │       │ as 'Q' and 'A')               │
      ├───────┼───────────────────────────────┤
      │ -JAN  │ year ends in January          │
      ├───────┼───────────────────────────────┤
      │ -FEB  │ year ends in February         │
      ├───────┼───────────────────────────────┤
      │ -MAR  │ year ends in March            │
      ├───────┼───────────────────────────────┤
      │ -APR  │ year ends in April            │
      ├───────┼───────────────────────────────┤
      │ -MAY  │ year ends in May              │
      ├───────┼───────────────────────────────┤
      │ -JUN  │ year ends in June             │
      ├───────┼───────────────────────────────┤
      │ -JUL  │ year ends in July             │
      ├───────┼───────────────────────────────┤
      │ -AUG  │ year ends in August           │
      ├───────┼───────────────────────────────┤
      │ -SEP  │ year ends in September        │
      ├───────┼───────────────────────────────┤
      │ -OCT  │ year ends in October          │
      ├───────┼───────────────────────────────┤
      │ -NOV  │ year ends in November         │
      ╘═══════╧═══════════════════════════════╛
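These are the standard pandas offset aliases; a small sketch showing an anchored weekly frequency:

```python
import pandas as pd

# 'W-WED' anchors the weekly frequency on Wednesdays.
idx = pd.date_range("2020-01-01", periods=3, freq="W-WED")
print([d.day_name() for d in idx])  # ['Wednesday', 'Wednesday', 'Wednesday']
```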

  --append APPEND
      [optional, default is 'columns']
      The type of appending to do. With the "combine" option, matching column
      indices will append rows, matching row indices will append columns,
      and matching column/row indices use the value from the first
      dataset. Use "row" or "column" to force an append along that axis.
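A pandas sketch of the "combine"-style merge, where overlapping indices keep the value from the first dataset (made-up data; tstoolbox does the equivalent when appending multiple files):

```python
import pandas as pd

a = pd.DataFrame({"flow": [1.0, 2.0]},
                 index=pd.to_datetime(["2020-01-01", "2020-01-02"]))
b = pd.DataFrame({"flow": [9.0, 3.0]},
                 index=pd.to_datetime(["2020-01-02", "2020-01-03"]))

# Overlapping index 2020-01-02 keeps the value from the first dataset (2.0).
merged = a.combine_first(b)
print(merged["flow"].tolist())  # [1.0, 2.0, 3.0]
```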
  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them with
      commas and no spaces, as in the tstoolbox pick command.
      This is a major convenience: you don't have to create a data set with
      columns in a particular order, since they can be rearranged as the
      data is read in.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --float_format FLOAT_FORMAT
      [optional, output format]
      Format for float numbers.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, since this could create duplicate
      values in the index.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
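
      The year-end anchors in the table above (e.g. '-SEP' for a year ending
      in September) can be illustrated with plain pandas; the series and
      values below are made up, and this is only a sketch of the grouping
      idea, not tstoolbox's internal implementation:

      ```python
      import pandas as pd

      # Hypothetical daily series spanning two "water years" (Oct-Sep).
      idx = pd.date_range("2000-10-01", "2002-09-30", freq="D")
      df = pd.DataFrame({"flow": 1.0}, index=idx)

      # A year "ending in September" labels Oct-Dec with the next year,
      # matching the '-SEP' anchor above.
      wy = idx.year + (idx.month > 9).astype(int)
      annual = df.groupby(wy).sum()
      ```

      Each group then covers October through the following September.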

remove_trend

$ tstoolbox remove_trend --help
usage: tstoolbox remove_trend [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--print_input] [--tablefmt TABLEFMT]

Remove a 'trend'.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the tstoolbox pick command.
      This lets you rearrange columns as the data is read in, so you don't
      have to create the data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, since this could create duplicate
      values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
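
      A common way to remove a linear trend is to fit a first-order
      polynomial and subtract it. The sketch below uses numpy and pandas
      with made-up data, and is not necessarily tstoolbox's exact method:

      ```python
      import numpy as np
      import pandas as pd

      # Hypothetical series: a constant offset plus a linear trend.
      t = np.arange(100, dtype=float)
      s = pd.Series(3.0 + 0.5 * t,
                    index=pd.date_range("2000-01-01", periods=100))

      # Fit slope and intercept, then subtract the fitted line.
      slope, intercept = np.polyfit(t, s.to_numpy(), 1)
      detrended = s - (slope * t + intercept)
      ```

      For purely linear data like this, the detrended result is zero
      everywhere up to floating-point error.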

replace

$ tstoolbox replace --help
usage: tstoolbox replace [-h] [--round_index ROUND_INDEX]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--names NAMES] [--clean] [--source_units SOURCE_UNITS]
  [--target_units TARGET_UNITS] [--print_input] [--tablefmt TABLEFMT]
  from_values to_values

Return a time-series replacing values with others.

positional arguments:
  from_values           All values in this comma separated list are replaced
    with the corresponding value in to_values. Use the string 'None' to
    represent a missing value. If using 'None' as a from_value it might be
    easier to use the "fill" subcommand instead.

  to_values             All values in this comma separated list are the
    replacement values corresponding one-to-one to the items in from_values. Use
    the string 'None' to represent a missing value.


optional arguments:
  -h | --help
      show this help message and exit
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, since this could create duplicate
      values in the index.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the tstoolbox pick command.
      This lets you rearrange columns as the data is read in, so you don't
      have to create the data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
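
      The one-to-one from_values/to_values substitution described above can
      be sketched with pandas' replace; the column name and values here are
      invented for illustration:

      ```python
      import pandas as pd

      df = pd.DataFrame({"TS1": [1.0, 2.0, 3.0, 2.0]},
                        index=pd.date_range("2000-01-01", periods=4))

      # from_values 2,3 map one-to-one onto to_values 20,30.
      out = df.replace([2.0, 3.0], [20.0, 30.0])
      ```

      Every occurrence of a from_value is swapped for its paired to_value;
      other values pass through unchanged.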

rolling_window

$ tstoolbox rolling_window --help
usage: tstoolbox rolling_window [-h] [--groupby GROUPBY] [--window WINDOW]
  [--input_ts INPUT_TS] [--columns COLUMNS] [--start_date START_DATE]
  [--end_date END_DATE] [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type
  INDEX_TYPE] [--names NAMES] [--clean] [--span SPAN] [--min_periods
  MIN_PERIODS] [--center] [--win_type WIN_TYPE] [--on ON] [--closed CLOSED]
  [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--print_input]
  [--tablefmt TABLEFMT] statistic

Calculate a rolling window statistic.

positional arguments:
  statistic             ┌──────────┬────────────────────┐
    │ corr     │ correlation        │
    ├──────────┼────────────────────┤
    │ count    │ count of numbers   │
    ├──────────┼────────────────────┤
    │ cov      │ covariance         │
    ├──────────┼────────────────────┤
    │ kurt     │ kurtosis           │
    ├──────────┼────────────────────┤
    │ max      │ maximum            │
    ├──────────┼────────────────────┤
    │ mean     │ mean               │
    ├──────────┼────────────────────┤
    │ median   │ median             │
    ├──────────┼────────────────────┤
    │ min      │ minimum            │
    ├──────────┼────────────────────┤
    │ quantile │ quantile           │
    ├──────────┼────────────────────┤
    │ skew     │ skew               │
    ├──────────┼────────────────────┤
    │ std      │ standard deviation │
    ├──────────┼────────────────────┤
    │ sum      │ sum                │
    ├──────────┼────────────────────┤
    │ var      │ variance           │
    ╘══════════╧════════════════════╛



optional arguments:
  -h | --help
      show this help message and exit
  --groupby GROUPBY
      [optional, default is None, transformation]
      The pandas offset code to group the time-series data into. A special code
      is also available to group 'months_across_years' that will group
      into twelve categories for each month.
  --window WINDOW
      [optional, default = 2]
      Size of the moving window. This is the number of observations used for
      calculating the statistic. Each window will be a fixed size.
      If it's an offset, then this will be the time period of each window. Each
      window will be of variable size, based on the observations included
      in the time period. This is only valid for datetimelike indexes.
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the tstoolbox pick command.
      This lets you rearrange columns as the data is read in, so you don't
      have to create the data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --span SPAN
      [optional, default = 2]
      DEPRECATED: Changed to 'window' to be consistent with pandas.
  --min_periods MIN_PERIODS
      [optional, default is None]
      Minimum number of observations in window required to have a value
      (otherwise result is NA). For a window that is specified by an
      offset, this will default to 1.
  --center
      [optional, default is False]
      Set the labels at the center of the window.
  --win_type WIN_TYPE
      [optional, default is None]
      Provide a window type. If None, all points are evenly weighted. See the
      notes below for further information.
  --on ON
      [optional, default is None]
      For a DataFrame, column on which to calculate the rolling window, rather
      than the index.
  --closed CLOSED
      [optional, default is None]
      Make the interval closed on the 'right', 'left', 'both' or 'neither'
      endpoints. For offset-based windows, it defaults to 'right'. For
      fixed windows, defaults to 'both'. Remaining cases not implemented
      for fixed windows.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --print_input
      [optional, default is False, output format]
      If set to 'True' will include the input columns in the output table.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
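
      A fixed-size rolling mean with window=2 (the default) can be sketched
      with pandas' rolling; the data below is made up for illustration:

      ```python
      import pandas as pd

      df = pd.DataFrame({"TS1": [1.0, 2.0, 3.0, 4.0]},
                        index=pd.date_range("2000-01-01", periods=4))

      # Each value becomes the mean of itself and the prior observation;
      # the first row has too few observations and is NaN.
      rolled = df.rolling(window=2).mean()
      ```

      Using --min_periods=1 (min_periods in pandas) would fill that first
      row with the single available observation instead of NaN.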

stack

$ tstoolbox stack --help
usage: tstoolbox stack [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT]

The stack command takes the standard table and converts it to a three-column table.

From:

Datetime,TS1,TS2,TS3
2000-01-01 00:00:00,1.2,1018.2,0.0032
2000-01-02 00:00:00,1.8,1453.1,0.0002
2000-01-03 00:00:00,1.9,1683.1,-0.0004

To:

Datetime,Columns,Values
2000-01-01,TS1,1.2
2000-01-02,TS1,1.8
2000-01-03,TS1,1.9
2000-01-01,TS2,1018.2
2000-01-02,TS2,1453.1
2000-01-03,TS2,1683.1
2000-01-01,TS3,0.0032
2000-01-02,TS3,0.0002
2000-01-03,TS3,-0.0004

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way since you don't have to include `--input_ts=-` because
      that is the default:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of a [pandas DataFrame, pandas Series, dict, tuple,
      list, StringIO, or file name].
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces. As used in the tstoolbox pick command.
      This lets you rearrange columns as the data is read in, so you don't
      have to create the data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by cutting down on memory and processing
      requirements; however, be cautious about rounding from a small
      interval to a very coarse one, since this could create duplicate
      values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates, or other epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.
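
      The wide-to-long reshape shown in the From:/To: tables above is
      essentially a pandas melt; this sketch uses two of the example
      columns and is not tstoolbox's internal code:

      ```python
      import pandas as pd

      df = pd.DataFrame(
          {"TS1": [1.2, 1.8, 1.9], "TS2": [1018.2, 1453.1, 1683.1]},
          index=pd.date_range("2000-01-01", periods=3, name="Datetime"),
      )

      # Move column names into a 'Columns' field and data into 'Values',
      # producing one row per (date, column) pair.
      long = df.reset_index().melt(
          id_vars="Datetime", var_name="Columns", value_name="Values"
      )
      ```

      The result has len(columns) * len(rows) records, grouped by source
      column, matching the To: table layout.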

stdtozrxp

$ tstoolbox stdtozrxp --help
usage: tstoolbox stdtozrxp [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--dropna DROPNA] [--skiprows
  SKIPROWS] [--index_type INDEX_TYPE] [--names NAMES] [--clean] [--round_index
  ROUND_INDEX] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS]
  [--rexchange REXCHANGE]

Print out data to the screen in a WISKI ZRXP format.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than use
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way, since `--input_ts=-` is the default and can be omitted:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or a file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This allows you to rearrange columns as the data is read in, rather
      than having to create a data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by reducing memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --rexchange REXCHANGE
      [optional, default is None]
      The REXCHANGE ID to be written into the zrxp header.

tstopickle

$ tstoolbox tstopickle --help
usage: tstoolbox tstopickle [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  filename

Store the time series in a Python pickle file. The pickled data can be brought
back into Python with 'pickle.load' or 'numpy.load'. See also 'tstoolbox
read'.

positional arguments:
  filename The filename to store the pickled data.

optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way, since `--input_ts=-` is the default and can be omitted:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or a file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This allows you to rearrange columns as the data is read in, rather
      than having to create a data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by reducing memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
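
Reading a pickled file back into Python is a plain pickle round-trip. The sketch below uses a stand-in dictionary and a temporary path for illustration; tstopickle itself stores a pandas DataFrame, which `pandas.read_pickle` reads most directly:

```python
# Round-trip a Python object through a pickle file, the same mechanism
# 'tstoolbox tstopickle' uses to store a time series.
import os
import pickle
import tempfile

# A stand-in for the time-series data (illustrative, not tstoolbox's
# actual storage type, which is a pandas DataFrame).
series = {"2000-01-01": 1.2, "2000-01-02": 1.8}

path = os.path.join(tempfile.mkdtemp(), "data.pkl")
with open(path, "wb") as fp:
    pickle.dump(series, fp)

# Bring it back into Python with pickle.load:
with open(path, "rb") as fp:
    restored = pickle.load(fp)
```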

unstack

$ tstoolbox unstack --help
usage: tstoolbox unstack [-h] [--input_ts INPUT_TS] [--columns COLUMNS]
  [--start_date START_DATE] [--end_date END_DATE] [--round_index ROUND_INDEX]
  [--dropna DROPNA] [--skiprows SKIPROWS] [--index_type INDEX_TYPE] [--names
  NAMES] [--source_units SOURCE_UNITS] [--target_units TARGET_UNITS] [--clean]
  [--tablefmt TABLEFMT] column_names

The unstack command takes a stacked table and converts it to a standard
tstoolbox table.

From:

Datetime,Columns,Values
2000-01-01,TS1,1.2
2000-01-02,TS1,1.8
2000-01-03,TS1,1.9
2000-01-01,TS2,1018.2
2000-01-02,TS2,1453.1
2000-01-03,TS2,1683.1
2000-01-01,TS3,0.0032
2000-01-02,TS3,0.0002
2000-01-03,TS3,-0.0004

To:

Datetime,TS1,TS2,TS3
2000-01-01,1.2,1018.2,0.0032
2000-01-02,1.8,1453.1,0.0002
2000-01-03,1.9,1683.1,-0.0004
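
The transformation above is a pivot: each distinct value in the name column becomes its own output column. A minimal pure-Python sketch of the idea follows (the `unstack` helper here is illustrative; tstoolbox itself performs this with a pandas pivot):

```python
# Pivot a stacked table: the field named by 'column_names' supplies the
# output column headers, 'values' supplies the cell contents, and
# 'index' groups the rows.

def unstack(rows, column_names="Columns", values="Values", index="Datetime"):
    """rows: list of dicts with index, column-name, and value keys."""
    table = {}   # index value -> {output column name: value}
    names = []   # output column order, first seen first
    for row in rows:
        table.setdefault(row[index], {})[row[column_names]] = row[values]
        if row[column_names] not in names:
            names.append(row[column_names])
    header = [index] + names
    body = [[key] + [table[key].get(name) for name in names]
            for key in sorted(table)]
    return [header] + body

stacked = [
    {"Datetime": "2000-01-01", "Columns": "TS1", "Values": 1.2},
    {"Datetime": "2000-01-01", "Columns": "TS2", "Values": 1018.2},
    {"Datetime": "2000-01-02", "Columns": "TS1", "Values": 1.8},
    {"Datetime": "2000-01-02", "Columns": "TS2", "Values": 1453.1},
]
result = unstack(stacked)
```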

positional arguments:
  column_names          The column in the table that holds the column names
    of the unstacked data.


optional arguments:
  -h | --help
      show this help message and exit
  --input_ts INPUT_TS
      [optional, required if using Python API, default is '-' (stdin)]
      Whether from a file or standard input, data requires a header of column
      names. The default header is the first line of the input, but this
      can be changed using the 'skiprows' option.
      Most separators will be automatically detected. Most common date formats
      can be used, but the closer to ISO 8601 date/time standard the
      better.
      Command line:
      +-------------------------+------------------------+
      | --input_ts=filename.csv | to read 'filename.csv' |
      +-------------------------+------------------------+
      | --input_ts='-'          | to read from standard  |
      |                         | input (stdin).         |
      +-------------------------+------------------------+
      
      In many cases it is better to use redirection rather than
      `--input_ts=filename.csv`.  The following are identical:
      
      From a file:
      
          command subcmd --input_ts=filename.csv
      
      From standard input:
      
          command subcmd --input_ts=- < filename.csv
      
      The BEST way, since `--input_ts=-` is the default and can be omitted:
      
          command subcmd < filename.csv
      
      Can also combine commands by piping:
      
          command subcmd < filename.csv | command subcmd1 > fileout.csv

      As Python Library:
      You MUST use the `input_ts=...` option where `input_ts` can be one
      of: a pandas DataFrame, pandas Series, dict, tuple, list, StringIO,
      or a file name.
      
      If result is a time series, returns a pandas DataFrame.

  --columns COLUMNS
      [optional, defaults to all columns, input filter]
      Columns to select out of input. Can use column names from the first line
      header or column numbers. If using numbers, column number 1 is the
      first data column. To pick multiple columns, separate them by commas
      with no spaces, as used in the tstoolbox 'pick' command.
      This allows you to rearrange columns as the data is read in, rather
      than having to create a data set with columns in a particular order.
  --start_date START_DATE
      [optional, defaults to first date in time-series, input filter]
      The start_date of the series in ISOdatetime format, or 'None' for
      beginning.
  --end_date END_DATE
      [optional, defaults to last date in time-series, input filter]
      The end_date of the series in ISOdatetime format, or 'None' for end.
  --round_index ROUND_INDEX
      [optional, default is None which will do nothing to the index, output
      format]
      Round the index to the nearest time point. This can significantly
      improve performance by reducing memory and processing requirements;
      however, be cautious about rounding from a small interval to a very
      coarse one, as this could lead to duplicate values in the index.
  --dropna DROPNA
      [optional, default is 'no', input filter]
      Set dropna to 'any' to have records dropped that have NA value in any
      column, or 'all' to have records dropped that have NA in all
      columns. Set to 'no' to not drop any records. The default is 'no'.
  --skiprows SKIPROWS
      [optional, default is None which will infer header from first line, input
      filter]
      Line numbers to skip (0-indexed) or number of lines to skip (int) at the
      start of the file.
      If callable, the callable function will be evaluated against the row
      indices, returning True if the row should be skipped and False
      otherwise. An example of a valid callable argument would be
      lambda x: x in [0, 2].
  --index_type INDEX_TYPE
      [optional, default is 'datetime', output format]
      Can be either 'number' or 'datetime'. Use 'number' with index values that
      are Julian dates or another epoch reference.
  --names NAMES
      [optional, default is None, input filter]
      If None, the column names are taken from the first row after 'skiprows'
      from the input dataset.
  --source_units SOURCE_UNITS
      [optional, default is None, transformation]
      If unit is specified for the column as the second field of a ':' delimited
      column name, then the specified units and the 'source_units' must
      match exactly.
      Any unit string compatible with the 'pint' library can be used.
  --target_units TARGET_UNITS
      [optional, default is None, transformation]
      The main purpose of this option is to convert units from those specified
      in the header line of the input into 'target_units'.
      The units of the input time-series or values are specified as the second
      field of a ':' delimited name in the header line of the input or in
      the 'source_units' keyword.
      Any unit string compatible with the 'pint' library can be used.
      This option will also add the 'target_units' string to the column names.
  --clean
      [optional, default is False, input filter]
      The 'clean' command will repair an index, removing duplicate index values
      and sorting.
  --tablefmt TABLEFMT
      [optional, default is 'csv', output format]
      The table format. Can be one of 'csv', 'tsv', 'plain', 'simple', 'grid',
      'pipe', 'orgtbl', 'rst', 'mediawiki', 'latex', 'latex_raw' and
      'latex_booktabs'.