Good results from the Dedicated MD on 2025-07-23: we showed that we can compensate the SFT flat top after setting up injection, but model performance was comparatively poor due to the use of an incorrectly rounded calibration function.
The new calibration function increases the number of points at the SFT flat top (4810 A to 4825 A), enabling interpolation of predicted fields for more robust field predictions.
InterpolatedPredictors are now available in sps-app-hysteresis and show no prediction noise; a sketch of the interpolation idea follows below.
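A minimal sketch of the idea, assuming simple linear interpolation between per-point predictions (the grid values, dummy fields, and helper name are illustrative, not the sps-app-hysteresis API):

```python
import numpy as np

# Denser calibration grid across the SFT flat top (4810 A to 4825 A).
calib_currents = np.linspace(4810.0, 4825.0, 16)
# Stand-in for the model's predicted field at each calibration point.
predicted_fields = 1.0e4 * calib_currents / 4810.0  # dummy values for illustration

def interpolated_field(current: float) -> float:
    """Linearly interpolate between the per-point predictions;
    np.interp clamps outside the calibration range."""
    return float(np.interp(current, calib_currents, predicted_fields))

print(interpolated_field(4817.3))
```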
Dropout was not completely disabled during inference due to a single call to torch.nn.functional.dropout instead of the torch.nn.Dropout module. This was fixed prior to the last dedicated MD.
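A minimal illustration of the failure mode: torch.nn.functional.dropout defaults to training=True and therefore ignores model.eval(), unlike the nn.Dropout module. Module and parameter names here are illustrative:

```python
import torch
from torch import nn
import torch.nn.functional as F

class Head(nn.Module):
    """Illustrative module showing the bug and its fix."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Buggy: F.dropout defaults to training=True, so dropout stays
        # active even after model.eval().
        # x = F.dropout(x, p=self.p)
        # Fixed: tie the functional call to the module's training flag,
        # so model.eval() actually disables dropout during inference.
        return F.dropout(x, p=self.p, training=self.training)
```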
Follow-up analysis of B-Train outputs from the last MD does show an offset of up to 0.7-0.8 G at SFT injection when drift-corrected, which is not visible in the raw data since the measurements are identical. This suggests the model output is correct, since the model is trained on data with drift-corrected injection plateaus.
Updates on field compensation
Finalized eddy current studies, with comprehensive joint fitting and analysis using JAX
Finalized the sps-mbi-eddy-current package with functions and notebooks for the full data pipeline (acquisition + fitting + analysis)
Switched to using $\dot{B}$ instead of $\dot{I}$ for calculating eddy current decay due to significantly differing values at saturation currents (REFERENCE NOTE)
Significantly improved joint fits on both orbit and first turn (!).
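A minimal JAX sketch of the joint-fit idea: fitting a shared decay time constant to both datasets at once. The single-exponential model, plain gradient descent, and all names are assumptions; the sps-mbi-eddy-current pipeline is more elaborate.

```python
import jax
import jax.numpy as jnp

def model(params, t):
    # Shared decay time constant tau; per-dataset amplitudes for the
    # orbit-based and first-turn-based measurements.
    amp_orbit, amp_ft, tau = params
    decay = jnp.exp(-t / tau)
    return amp_orbit * decay, amp_ft * decay

def loss(params, t, y_orbit, y_ft):
    # Joint objective: both datasets constrain the shared tau.
    pred_orbit, pred_ft = model(params, t)
    return jnp.mean((pred_orbit - y_orbit) ** 2) + jnp.mean((pred_ft - y_ft) ** 2)

grad_fn = jax.jit(jax.grad(loss))

def fit(t, y_orbit, y_ft, steps=2000, lr=1e-2):
    params = jnp.array([1.0, 1.0, 0.5])  # crude initial guess
    for _ in range(steps):
        params = params - lr * grad_fn(params, t, y_orbit, y_ft)
    return params
```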
BDOT converter UCAP.SPSBEAM/BDOT publishes the programmed $\dot{B}$, calculated from MBI/IREF and a calibration function using the chain rule $\frac{dB}{dt} = \frac{dB}{dI}\frac{dI}{dt}$, since B is not defined through the entire cycle (only while beam is in), but I is (a sketch follows below).
A complementary converter UCAP.SPSBEAM/BDOT_PLAYED publishes the actual programmed $\dot{B}$ once the cycle has started (since the actually played B depends on whether we are cycling normally, in FULLECO, or in DYNECO). We can subscribe to this to get the $\dot{B}$ history for calculating future eddy current decay.
It uses the Machine Mode detection converter to decide when to publish the function.
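As referenced above, a small numpy sketch of the chain-rule computation, using a spline as a stand-in for the real calibration function (function and variable names are hypothetical):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def programmed_bdot(t, i_ref, calib_i, calib_b):
    """Chain rule dB/dt = (dB/dI)(dI/dt): B is only defined while beam
    is in, but I is defined through the whole cycle, so B-dot is derived
    from the programmed current."""
    calib = CubicSpline(calib_i, calib_b)  # B(I) calibration curve
    db_di = calib(i_ref, nu=1)             # dB/dI evaluated at the programmed I
    di_dt = np.gradient(i_ref, t)          # numerical dI/dt of the current program
    return db_di * di_dt
```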
A B-Train drift-correction converter UCAP.BTRAIN.BMEAS.SP subscribes to B-Train measurements and moves the integration marker “jump” to the start of the cycle with a linear drift correction. sps-app-hysteresis can now subscribe to this device instead of the B-Train for visualization and model input.
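One plausible reading of that correction as a sketch (the actual UCAP converter logic may differ): estimate the discontinuity at the integration marker, assume it accumulated as a linear drift since cycle start, and add the compensating ramp so the measurement becomes continuous at the marker.

```python
import numpy as np

def move_jump_to_cycle_start(t, b_raw, i_marker):
    """Hypothetical linear drift correction: relocate the integrator
    'jump' at sample i_marker to the start of the cycle."""
    jump = b_raw[i_marker] - b_raw[i_marker - 1]        # discontinuity at the marker
    ramp = jump * (t - t[0]) / (t[i_marker] - t[0])     # linear drift model
    b_corr = b_raw.copy()
    b_corr[:i_marker] += ramp[:i_marker]                # close the gap at the marker
    return b_corr
```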
Parameter search underway. The goal is to build a good TransformerLSTM model, keep the n_model parameter fixed, and train EncoderDecoderLSTM and AttentionLSTM with the same network size, then see which performs better on the validation and unseen test sets. This lets us motivate how well attention works for our task, or rather for a given parameter count. We can also compare against the TFT; it is possible the simpler models will outperform it due to their simpler nature.
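A small helper (not part of transformertf; a reasonable way to verify the candidates are size-matched) for comparing the models at roughly equal capacity:

```python
import torch

def trainable_params(model: torch.nn.Module) -> int:
    """Count trainable parameters so EncoderDecoderLSTM, AttentionLSTM,
    and the TFT baseline can be compared at (roughly) equal capacity."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```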
New models use torch's packed sequences (PackedSequence), which mask out padded values when using randomized sequence lengths. This necessitated masking padded values during loss calculation and left-aligning decoder values, introducing a breaking change (with backwards compatibility) in transformertf. It turns out that the TFT's evolution to incredibly good loss was primarily due to learning the zero-padding values.
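A minimal sketch of the two pieces: packing so the LSTM skips padded steps, and masking the same padded steps in the loss so the model cannot score well by fitting zero padding (sizes, dummy targets, and names are illustrative):

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(4, 10, 8)               # batch of 4, padded to length 10
lengths = torch.tensor([10, 7, 5, 3])   # true (randomized) sequence lengths

# Packing: the LSTM never processes the padded time steps.
packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
out_packed, _ = lstm(packed)
out, _ = pad_packed_sequence(out_packed, batch_first=True)  # re-padded with zeros

pred = out[..., :1]
target = torch.randn_like(pred)         # dummy targets, one value per step

# Masking: average the loss only over real time steps; without this the
# model can reach deceptively good loss by predicting the zero padding.
mask = (torch.arange(out.size(1))[None, :] < lengths[:, None]).unsqueeze(-1)
loss = ((pred - target) ** 2 * mask).sum() / mask.sum()
```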
Discussion on whether we learn hysteresis or data
Future MDs
Preparation for the Parallel MD on 2025-08-11 to measure tune and chromaticity with and without MD1, using operational SFT injection settings.