Hysteresis modeling ideas
Pretraining
We can pretrain our models using either the Jiles-Atherton model or the Preisach model. The Preisach model produces major loops more similar to the MBIs, whereas the Jiles-Atherton model produces simulations with more hysteresis.
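As a concrete starting point, below is a minimal scalar Preisach sketch: a uniform grid of relay hysterons driven by a sinusoidal field. The grid resolution, weighting, and excitation are illustrative choices, not fitted to any measured device; a Jiles-Atherton generator would follow the same pattern with its ODE in place of the hysteron grid.

```python
# Minimal scalar Preisach model: uniform grid of relay hysterons.
# Resolution, weighting, and field scaling are illustrative only.
import numpy as np

def preisach_loop(h, n=50, h_max=1.0):
    """Run a field sequence h through a uniform relay-hysteron grid."""
    alpha, beta = np.meshgrid(np.linspace(-h_max, h_max, n),
                              np.linspace(-h_max, h_max, n), indexing="ij")
    valid = alpha >= beta            # hysterons live on the alpha >= beta half-plane
    state = -np.ones_like(alpha)     # start at negative saturation
    b = np.empty_like(h)
    for t, ht in enumerate(h):
        state[(ht >= alpha) & valid] = 1.0   # relays switch up past their upper threshold
        state[(ht <= beta) & valid] = -1.0   # and down past their lower threshold
        b[t] = state[valid].mean()           # normalized output in [-1, 1]
    return b

h = np.sin(np.linspace(0, 4 * np.pi, 1000))  # two excitation periods
b = preisach_loop(h)                         # traces the major loop
```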
In either case, it would be interesting to pretrain a transformer on field simulations and then transfer it to a measured dataset.
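A hedged sketch of that flow, with a toy LSTM and random tensors standing in for the real architecture and the simulated/measured datasets:

```python
# Pretrain on simulated sequences, then finetune on measured ones.
# The toy model and random tensors are stand-ins, not the project's components.
import torch

model = torch.nn.LSTM(input_size=2, hidden_size=32, batch_first=True)
head = torch.nn.Linear(32, 1)
params = list(model.parameters()) + list(head.parameters())

def fit(x, y, lr, steps):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out, _ = model(x)
        loss = torch.nn.functional.mse_loss(head(out), y)
        loss.backward()
        opt.step()

# Phase 1: pretrain on simulated (H, I) -> B sequences.
fit(torch.randn(8, 100, 2), torch.randn(8, 100, 1), lr=1e-3, steps=200)
# Phase 2: transfer to measured data at a lower learning rate.
fit(torch.randn(8, 100, 2), torch.randn(8, 100, 1), lr=1e-4, steps=50)
```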
Ablation studies
Train EncoderDecoderLSTM, AttentionLSTM, and TransformerLSTM with the same d_model and compare their performance (see the sketch after the note below).
- Unlike TransformerLSTM, EncoderDecoderLSTM and AttentionLSTM do not have fully connected layers before the LSTMs.
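A sketch of the comparison setup; the import path and the assumption that each class accepts a d_model keyword are hypothetical and should be adapted to the actual constructors.

```python
# Instantiate all three architectures with the same d_model so that only the
# architecture varies. Import path and constructor signatures are assumed.
from models import EncoderDecoderLSTM, AttentionLSTM, TransformerLSTM  # hypothetical path

D_MODEL = 128
candidates = {
    "EncoderDecoderLSTM": EncoderDecoderLSTM(d_model=D_MODEL),
    "AttentionLSTM": AttentionLSTM(d_model=D_MODEL),
    "TransformerLSTM": TransformerLSTM(d_model=D_MODEL),
}
for name, m in candidates.items():
    n_params = sum(p.numel() for p in m.parameters())
    print(f"{name}: {n_params / 1e6:.2f}M parameters")
    # ...train each with identical data, optimizer, and schedule,
    # then compare validation loss.
```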
Voltage
Curriculum learning
Teaching the model to use voltage
We use curriculum learning to try to teach the model to learn hysteresis from the induced voltage. Since the excitation is driven mainly by the current I, and knowing the field B in the past effectively hands the model the answer, we can mask B out during the initial phases of learning (a masking sketch follows the stage list below).
We can start by using I, as it is more readily available (voltage measurements only became available in 2025).
1. Learn a model without the known field B in the past, but learn using I and the residual of U (in the past), and I in the future.
2. Exchange I for U with the datasets available in v9.1 and finetune.
3. Unmask B in the past encoder and finetune.
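A minimal sketch of the stage-dependent masking, assuming a (I, U, B) channel layout in the past-encoder input; both the layout and the mask-by-zeroing choice are assumptions, not the project's actual implementation.

```python
# Zero out the B channel of the past input during stages 1-2; unmask at stage 3.
import torch

B_CHANNEL = 2  # assumed index of B in an (I, U, B) past-input layout

def mask_past(past: torch.Tensor, stage: int) -> torch.Tensor:
    """past: (batch, time, channels). Stages 1-2 hide B; stage 3 reveals it."""
    if stage < 3:
        past = past.clone()
        past[..., B_CHANNEL] = 0.0
    return past

past = torch.randn(4, 200, 3)
stage1_input = mask_past(past, stage=1)  # B hidden during early curriculum stages
stage3_input = mask_past(past, stage=3)  # B visible for the final finetuning stage
```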
For stages 2-3, use a low learning rate, or torch.optim.lr_scheduler.OneCycleLR to start with a low learning rate before warming up.
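For reference, a minimal OneCycleLR setup; the schedule starts at max_lr / div_factor, warms up, then anneals. The stand-in model and the numeric values are illustrative.

```python
# OneCycleLR starts low (max_lr / div_factor), warms up, then anneals.
import torch

model = torch.nn.Linear(3, 1)  # stand-in for the real model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
steps_per_epoch, epochs = 100, 5
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt,
    max_lr=1e-3,
    div_factor=25.0,  # initial lr = 1e-3 / 25 = 4e-5
    total_steps=steps_per_epoch * epochs,
)
for _ in range(steps_per_epoch * epochs):
    opt.zero_grad()
    loss = model(torch.randn(8, 3)).pow(2).mean()
    loss.backward()
    opt.step()
    sched.step()  # step the schedule once per batch
```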