ltsm.models package

Submodules

ltsm.models.DLinear module

class ltsm.models.DLinear.DLinear(config, **kwargs)[source]

Bases: PreTrainedModel

Decomposition-Linear model: decomposes the input series into trend and seasonal components with a moving-average filter and forecasts each component with a linear layer.

config_class

alias of DLinearConfig

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
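
A minimal usage sketch based on the signatures documented here; the (batch, seq_len, enc_in) input layout and the output shape are assumptions about the implementation, not guarantees.

    import torch
    from ltsm.models.base_config import DLinearConfig
    from ltsm.models.DLinear import DLinear

    # 336 observed steps in, 96 forecast steps out, one channel.
    config = DLinearConfig(seq_len=336, pred_len=96, enc_in=1)
    model = DLinear(config)

    # Assumed input layout: (batch, seq_len, enc_in).
    x = torch.randn(8, config.seq_len, config.enc_in)
    y = model(x)  # expected output shape: (8, pred_len, enc_in)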

class ltsm.models.DLinear.moving_avg(kernel_size, stride)[source]

Bases: Module

Moving average block that highlights the trend of a time series

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.DLinear.series_decomp(kernel_size)[source]

Bases: Module

Series decomposition block

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
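
The decomposition blocks can also be used standalone. A sketch assuming a (batch, length, channels) layout; the return order of series_decomp (residual first, trend second) follows the reference DLinear implementation and is an assumption here.

    import torch
    from ltsm.models.DLinear import moving_avg, series_decomp

    x = torch.randn(8, 336, 1)                 # assumed (batch, length, channels)

    trend_filter = moving_avg(kernel_size=25, stride=1)
    trend = trend_filter(x)                    # smoothed trend estimate

    decomp = series_decomp(kernel_size=25)
    residual, trend = decomp(x)                # assumed return order: (residual, trend)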

ltsm.models.Informer module

class ltsm.models.Informer.Informer(config, **kwargs)[source]

Bases: PreTrainedModel

Informer with ProbSparse attention, achieving O(L log L) complexity

config_class

alias of InformerConfig

forward(x_enc, x_mark_enc, x_dec, x_mark_dec, enc_self_mask=None, dec_self_mask=None, dec_enc_mask=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
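
A forward-pass sketch. The tensor layouts follow the reference Informer convention (values plus time-feature marks for encoder and decoder); the decoder warm-up length and the number of time features are assumptions not fixed by this reference.

    import torch
    from ltsm.models.base_config import InformerConfig
    from ltsm.models.Informer import Informer

    config = InformerConfig(seq_len=336, pred_len=96, enc_in=7, dec_in=7, c_out=7)
    model = Informer(config)

    batch, n_time_feats, label_len = 4, 4, 48  # feature count and warm-up length are assumptions

    x_enc = torch.randn(batch, config.seq_len, config.enc_in)
    x_mark_enc = torch.randn(batch, config.seq_len, n_time_feats)
    x_dec = torch.randn(batch, label_len + config.pred_len, config.dec_in)
    x_mark_dec = torch.randn(batch, label_len + config.pred_len, n_time_feats)

    out = model(x_enc, x_mark_enc, x_dec, x_mark_dec)  # expected: (batch, pred_len, c_out)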

ltsm.models.PatchTST module

class ltsm.models.PatchTST.PatchTST(config, **kwargs)[source]

Bases: PreTrainedModel

config_class

alias of PatchTSTConfig

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
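
A usage sketch mirroring the DLinear example above; the channel-last input layout is an assumption.

    import torch
    from ltsm.models.base_config import PatchTSTConfig
    from ltsm.models.PatchTST import PatchTST

    config = PatchTSTConfig(seq_len=336, pred_len=96, enc_in=7)
    model = PatchTST(config)

    x = torch.randn(4, config.seq_len, config.enc_in)  # assumed (batch, seq_len, enc_in)
    y = model(x)                                       # expected: (4, pred_len, enc_in)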

ltsm.models.base_config module

class ltsm.models.base_config.DLinearConfig(seq_len=336, pred_len=96, individual=0, enc_in=1, **kwargs)[source]

Bases: PretrainedConfig

DLinearConfig is a configuration class for the DLinear model. It contains all the necessary parameters to initialize the model.

class ltsm.models.base_config.InformerConfig(seq_len=336, pred_len=96, enc_in=1, dec_in=7, d_model=1024, n_heads=16, e_layers=2, d_ff=512, dropout=0.2, activation='gelu', output_attention=False, embed_type=0, freq='h', factor=1, distil=True, c_out=862, embed='timeF', **kwargs)[source]

Bases: PretrainedConfig

InformerConfig is a configuration class for the Informer model. It contains all the necessary parameters to initialize the model.

class ltsm.models.base_config.LTSMConfig(seq_len=336, pred_len=96, patch_size=16, pretrain=True, stride=8, prompt_len=133, gpt_layers=3, model_name_or_path='gpt2-medium', d_ff=512, d_model=1024, enc_in=1, dropout=0.2, n_heads=16, prompt_data_path=None, **kwargs)[source]

Bases: PretrainedConfig

LTSMConfig is a configuration class for the LTSM model. It contains all the necessary parameters to initialize the model.

class ltsm.models.base_config.PatchTSTConfig(seq_len=336, pred_len=96, enc_in=1, patch_len=16, stride=8, decomposition=False, max_seq_len=1024, n_layers=3, d_model=128, n_heads=16, d_k=None, d_v=None, d_ff=256, norm='BatchNorm', attn_dropout=0.0, dropout=0.0, act='gelu', key_padding_mask='auto', padding_var=None, attn_mask=None, res_attention=True, pre_norm=False, store_attn=False, pe='zeros', learn_pe=True, fc_dropout=0.0, head_dropout=0, padding_patch=None, pretrain_head=False, head_type='flatten', individual=False, revin=True, affine=True, subtract_last=False, verbose=False, embed='timeF', **kwargs)[source]

Bases: PretrainedConfig

PatchTSTConfig is a configuration class for the PatchTST model. It contains all the necessary parameters to initialize the model.
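
Since every config subclasses transformers.PretrainedConfig, the standard Hugging Face serialization round-trip applies; only the keyword overrides shown below are specific to this package.

    from ltsm.models.base_config import PatchTSTConfig

    # Override only what you need; unspecified fields keep the defaults listed above.
    config = PatchTSTConfig(seq_len=512, pred_len=192, enc_in=7)

    # Inherited from PretrainedConfig: save to / restore from a directory.
    config.save_pretrained("./patchtst_config")
    restored = PatchTSTConfig.from_pretrained("./patchtst_config")
    assert restored.seq_len == 512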

ltsm.models.embed module

class ltsm.models.embed.DataEmbedding(c_in, d_model, embed_type='fixed', freq='h', dropout=0.1)[source]

Bases: Module

forward(x, x_mark)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
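
A sketch of embedding raw values together with their time-feature marks; the four-column mark tensor for freq='h' follows the usual time-features convention and is an assumption here.

    import torch
    from ltsm.models.embed import DataEmbedding

    embed = DataEmbedding(c_in=7, d_model=512, embed_type='timeF', freq='h')

    x = torch.randn(4, 96, 7)       # values: (batch, length, c_in)
    x_mark = torch.randn(4, 96, 4)  # time features; 4 columns for freq='h' is an assumption
    tokens = embed(x, x_mark)       # expected: (batch, length, d_model)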

class ltsm.models.embed.DataEmbedding_wo_pos(c_in, d_model, embed_type='fixed', freq='h', dropout=0.1)[source]

Bases: Module

forward(x, x_mark)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.DataEmbedding_wo_time(c_in, d_model, embed_type='fixed', freq='h', dropout=0.1)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.FixedEmbedding(c_in, d_model)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.PatchEmbedding(d_model, patch_len, stride, dropout)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
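
A sketch assuming the channel-first layout used by PatchTST-style patching; whether the forward pass returns only the embedded patches or additional bookkeeping (such as the variable count) is not specified by this reference.

    import torch
    from ltsm.models.embed import PatchEmbedding

    patcher = PatchEmbedding(d_model=128, patch_len=16, stride=8, dropout=0.1)

    x = torch.randn(4, 7, 336)  # assumed (batch, n_vars, seq_len)
    out = patcher(x)            # embedded patches; exact return structure is an assumption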

class ltsm.models.embed.PositionalEmbedding(d_model, max_len=5000)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.TemporalEmbedding(d_model, embed_type='fixed', freq='h')[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.TimeFeatureEmbedding(d_model, embed_type='timeF', freq='h')[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.embed.TokenEmbedding(c_in, d_model)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

ltsm.models.ltsm_base module

class ltsm.models.ltsm_base.LTSMConfig(**kwargs)[source]

Bases: PretrainedConfig

load(json_file)[source]

Loads configuration attributes from a JSON file.

update(**kwargs)[source]

Updates attributes of this class with the supplied keyword arguments.

Parameters:

**kwargs – Key/value pairs of attributes to update on this configuration.
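
A sketch of the two helpers; the JSON file name is hypothetical, and in-place mutation by load() is an assumption.

    from ltsm.models.ltsm_base import LTSMConfig

    config = LTSMConfig()
    config.load("ltsm_config.json")            # hypothetical path to a JSON config file
    config.update(pred_len=192, dropout=0.1)   # override individual attributes in place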

ltsm.models.ltsm_stat_model module

class ltsm.models.ltsm_stat_model.LTSM(configs, *model_args, **model_kwargs)[source]

Bases: PreTrainedModel

config_class

alias of LTSMConfig

forward(x, return_feature=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

model_prune(configs)[source]
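
A usage sketch for the GPT-2-backed LTSM model. Note that two LTSMConfig classes appear in this package (in ltsm.models.base_config and ltsm.models.ltsm_base); the base_config variant is assumed here, as are the channel-last input layout and the meaning of return_feature, which is inferred from its name.

    import torch
    from ltsm.models.base_config import LTSMConfig
    from ltsm.models.ltsm_stat_model import LTSM

    config = LTSMConfig(seq_len=336, pred_len=96)  # defaults reference gpt2-medium weights
    model = LTSM(config)

    x = torch.randn(2, config.seq_len, 1)          # assumed (batch, seq_len, channels)
    y = model(x)                                   # return_feature=True presumably also
                                                   # exposes intermediate features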

ltsm.models.ltsm_ts_tokenizer module

class ltsm.models.ltsm_ts_tokenizer.LTSM_Tokenizer(configs)[source]

Bases: PreTrainedModel

config_class

alias of LTSMConfig

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

model_prune(configs)[source]

ltsm.models.ltsm_wordprompt module

class ltsm.models.ltsm_wordprompt.LTSM_WordPrompt(configs)[source]

Bases: PreTrainedModel

calcute_lags(x_enc)[source]

config_class

alias of LTSMConfig

forward(x_enc)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

model_prune(configs)[source]

ltsm.models.utils module

class ltsm.models.utils.FlattenHead(n_vars, nf, target_window, head_dropout=0)[source]

Bases: Module

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class ltsm.models.utils.Normalize(num_features, eps=1e-05, affine=False, subtract_last=False, non_norm=False)[source]

Bases: Module

forward(x, mode)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
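
A sketch of RevIN-style usage: normalize the inputs, run the model, then de-normalize the predictions. The 'norm'/'denorm' mode strings follow the RevIN reference implementation and are assumptions here.

    import torch
    from ltsm.models.utils import Normalize

    norm = Normalize(num_features=7, affine=True)

    x = torch.randn(4, 336, 7)      # assumed (batch, length, num_features)
    x_n = norm(x, mode='norm')      # store per-instance statistics and normalize
    # ... run a forecaster on x_n ...
    y_hat = torch.randn(4, 96, 7)   # stand-in for model predictions
    y = norm(y_hat, mode='denorm')  # map predictions back to the original scale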

class ltsm.models.utils.ReprogrammingLayer(d_model, n_heads, d_keys=None, d_llm=None, attention_dropout=0.1)[source]

Bases: Module

forward(target_embedding, source_embedding, value_embedding)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

reprogramming(target_embedding, source_embedding, value_embedding)[source]

ltsm.models.utils.freeze_parameters(model)[source]

Freezes or unfreezes model parameters according to predefined lists of layer names: parameters matching the freeze list become non-trainable, while those matching the keep-trainable list remain trainable.

ltsm.models.utils.print_trainable_parameters(model)[source]

Prints the names of parameters in the model that are trainable.
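
A sketch combining the two helpers with a model from this package; whether the predefined layer-name lists inside freeze_parameters() match DLinear's parameter names is an assumption, the call pattern is the point.

    from ltsm.models.base_config import DLinearConfig
    from ltsm.models.DLinear import DLinear
    from ltsm.models.utils import freeze_parameters, print_trainable_parameters

    model = DLinear(DLinearConfig())
    freeze_parameters(model)           # apply the predefined freeze / keep-trainable lists
    print_trainable_parameters(model)  # verify which parameter names remain trainable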

Module contents

ltsm.models.get_model(config, model_name, local_pretrain=None, hf_hub_model=None)[source]

Factory method to create a model by name.

Parameters:
  • config (PretrainedConfig) – The configuration for the model.

  • model_name (str) – The name of the model to instantiate.

  • local_pretrain (str, optional) – Path to a local pretrained checkpoint; if provided, the model weights are loaded from that path.

  • hf_hub_model (str, optional) – The Hugging Face Hub model name to load weights from.

Returns:

Instantiated model.

Return type:

torch.nn.Module

Raises:

ValueError – If the model name is not found in model_dict.
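
A factory sketch; the exact keys accepted by model_name are defined in the package's model_dict, so the "PatchTST" key used here is an assumption.

    from ltsm.models import get_model
    from ltsm.models.base_config import PatchTSTConfig

    config = PatchTSTConfig(seq_len=336, pred_len=96, enc_in=7)
    model = get_model(config, model_name="PatchTST")  # hypothetical model_dict key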

ltsm.models.register_model(module, module_name)[source]

Registers a PreTrainedModel module into the model dictionary.

Parameters:
  • module – A Python module or class that implements a PreTrainedModel.

  • module_name (str) – The key name for the module in the model dictionary.

Raises:

AssertionError – If a model with the same name is already registered.
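
A registration sketch with a hypothetical model class; the class body is a placeholder, and whether get_model instantiates directly or also loads weights depends on the local_pretrain / hf_hub_model arguments documented above.

    from transformers import PretrainedConfig, PreTrainedModel
    from ltsm.models import get_model, register_model

    class MyForecaster(PreTrainedModel):
        """Hypothetical model used only to illustrate registration."""
        config_class = PretrainedConfig

        def __init__(self, config, **kwargs):
            super().__init__(config)

    register_model(MyForecaster, "my_forecaster")     # AssertionError if the key exists
    model = get_model(PretrainedConfig(), "my_forecaster")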