Autoregressive predictions
Different models handle autoregressive predictions differently:
- LSTM-based models output a hidden state and a cell state, which are used as the initial state for the next prediction
- Encoder-decoder-style models output the target covariate, which is used for the next encoder call (potentially appended to previous target predictions if the sequence is too short).
It is probably best to separate the two into two dedicated classes that each handle prediction state in their own way.
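The split above could look roughly like the following sketch. The class names and model callables are assumptions for illustration: the LSTM case assumes a callable `model(x, state) -> (y, state)`, the encoder-decoder case a callable `model(window) -> y`.

```python
import numpy as np


class LSTMAutoregressor:
    """Carries the (hidden, cell) state between predict calls."""

    def __init__(self, model, initial_state=None):
        self.model = model          # hypothetical: (x, state) -> (y, state)
        self.state = initial_state

    def predict(self, x):
        # the returned state becomes the initial state of the next call
        y, self.state = self.model(x, self.state)
        return y


class EncoderDecoderAutoregressor:
    """Feeds its own target predictions back into the encoder window."""

    def __init__(self, model, context_len, seed_targets):
        self.model = model          # hypothetical: window -> next target
        self.context_len = context_len
        self.history = list(seed_targets)

    def predict(self):
        # encode the most recent targets (padded by earlier predictions
        # if the measured sequence alone is too short)
        window = np.asarray(self.history[-self.context_len:])
        y = self.model(window)
        self.history.append(y)      # appended for the next encoder call
        return y
```

Keeping the feedback mechanics behind a shared `predict` interface lets the surrounding pipeline stay agnostic of the model family.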
Signal preprocessing
Most input signals are noisy and need to be filtered. We need to apply:
- A low-pass filter for , , and
- A low-amplitude filter for and
- Compute from I (most likely using a first-order Savitzky-Golay filter)
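The three filter steps could be sketched as below. Cutoffs, thresholds, and window sizes are placeholder assumptions, and the function names are hypothetical; the derivative step uses scipy's `savgol_filter` with `deriv=1`.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter


def low_pass(x, cutoff_hz, fs):
    # 4th-order Butterworth low-pass, applied zero-phase via filtfilt
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, x)


def suppress_low_amplitude(x, threshold):
    # zero out samples whose magnitude falls below the threshold
    return np.where(np.abs(x) < threshold, 0.0, x)


def derivative_from_current(i, dt, window=11, polyorder=1):
    # first-order Savitzky-Golay derivative of the current signal I
    return savgol_filter(i, window, polyorder, deriv=1, delta=dt)
```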
If applicable, the time axis must be correctly computed (shifted and scaled to seconds).
If applicable, RDP downsampling must be applied to the input signals before feeding them to the predictors.
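A minimal sketch of both steps, assuming the shift/scale parameters are known; the RDP (Ramer-Douglas-Peucker) routine here is a generic reimplementation returning kept indices, not a reference to an existing library.

```python
import numpy as np


def to_seconds(t_raw, offset, scale):
    # shift + scale a raw time axis to seconds (offset/scale are assumptions)
    return (t_raw - offset) * scale


def rdp(t, x, epsilon):
    # Ramer-Douglas-Peucker downsampling; returns the kept sample indices
    return np.array(sorted(_rdp_indices(t, x, 0, len(t) - 1, epsilon)))


def _rdp_indices(t, x, lo, hi, epsilon):
    if hi <= lo + 1:
        return {lo, hi}
    # perpendicular distance of interior points to the chord (lo, hi)
    p0 = np.array([t[lo], x[lo]])
    chord = np.array([t[hi], x[hi]]) - p0
    pts = np.stack([t[lo + 1:hi], x[lo + 1:hi]], axis=1) - p0
    dist = np.abs(chord[0] * pts[:, 1] - chord[1] * pts[:, 0])
    dist /= np.hypot(chord[0], chord[1])
    k = int(np.argmax(dist))
    if dist[k] > epsilon:
        split = lo + 1 + k
        return (_rdp_indices(t, x, lo, split, epsilon)
                | _rdp_indices(t, x, split, hi, epsilon))
    return {lo, hi}
```

Returning indices (rather than resampled values) keeps the downsampled time axis and signal trivially aligned, which matters for the post-processing step below.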
Prediction post-processing
The predicted signal is returned together with its corresponding time steps for later upsampling to match the correction. If the signal was downsampled for prediction, the time axis must be downsampled with the same method.
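Pairing the prediction with its own time axis could be as simple as the sketch below; the `Prediction` name is an assumption, and linear interpolation stands in for whatever upsampling method ends up matching the correction.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Prediction:
    t: np.ndarray   # (possibly downsampled) time steps of the prediction
    y: np.ndarray   # predicted values on those time steps

    def upsample(self, t_full):
        # linearly interpolate back onto the full-resolution time axis
        return np.interp(t_full, self.t, self.y)
```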
Trimming
Plotting outputs
To plot predicted outputs against ground truth, we must wait for the measured data to come in, which happens at the end of the cycle. This means we need to hold on to each prediction until at least after the measurements arrive.
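One way to hold on to predictions is a small buffer keyed by cycle; the class name and `cycle_id` key are assumptions for illustration, not an existing API.

```python
class PredictionBuffer:
    """Holds predictions per cycle until the measured data arrives."""

    def __init__(self):
        self._pending = {}

    def add_prediction(self, cycle_id, prediction):
        # store the prediction until the matching measurement comes in
        self._pending[cycle_id] = prediction

    def add_measurement(self, cycle_id, measurement):
        # return the matched (prediction, measurement) pair for plotting,
        # or None if no prediction was stored for this cycle
        prediction = self._pending.pop(cycle_id, None)
        if prediction is None:
            return None
        return prediction, measurement
```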