Take derivative of a signal

One must first smooth the signal before calculating the derivative, because differentiation amplifies high-frequency noise.

graphics.cs.cmu.edu/courses/15-463/2004_fall/www/Lectures/filtering2.pdf

Smooth before differentiating? That would be two convolutions: one to smooth, then one to differentiate.

Use a derivative-of-Gaussian filter instead. Differentiation is itself a convolution, and convolution is associative, so d/dt (g * x) = (dg/dt) * x: smoothing and differentiating collapse into a single convolution with the derivative of a Gaussian.
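A minimal sketch of this one-pass approach in PyTorch (the function name, sigma, kernel radius, and replicate edge padding are all illustrative choices, not from the source):

import torch
import torch.nn.functional as F

def gaussian_derivative(x: torch.Tensor, dt: float, sigma: float = 2.0) -> torch.Tensor:
    """Smooth and differentiate a [batch, sequence] signal in one convolution
    with a derivative-of-Gaussian kernel (`sigma` is in samples)."""
    radius = int(4 * sigma)
    t = torch.arange(-radius, radius + 1, dtype=x.dtype)
    g = torch.exp(-t**2 / (2 * sigma**2))
    g = g / g.sum()                 # unit-area Gaussian
    dg = -t / sigma**2 * g          # analytic derivative of the Gaussian
    # conv1d computes cross-correlation, so flip the kernel to get true convolution
    kernel = dg.flip(0).view(1, 1, -1)
    x_pad = F.pad(x.unsqueeze(1), (radius, radius), mode="replicate")
    return F.conv1d(x_pad, kernel).squeeze(1) / dt

Larger sigma means more smoothing (and a laggier derivative); the first and last few values are biased by the replicate padding.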

wiki: Finite difference coefficient — forward finite difference for the first value, central differences for the middle values, backward finite difference for the last value. (This is also the scheme torch.gradient uses; see the snippet below.)
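If a built-in is enough, recent PyTorch versions ship torch.gradient, which implements exactly this scheme (central differences inside, one-sided differences at the edges) and accepts non-uniform coordinates; the tensors below are just example values:

import torch

x = torch.tensor([[0.0, 1.0, 4.0, 9.0, 16.0]])  # x = t^2 sampled at t = 0..4
t = torch.tensor([0.0, 1.0, 2.0, 3.0, 4.0])
(dx_dt,) = torch.gradient(x, spacing=(t,), dim=1, edge_order=2)
print(dx_dt)  # tensor([[0., 2., 4., 6., 8.]]) — exact for a quadratic with edge_order=2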

Warning

Higher-order finite difference gradients are more susceptible to noise amplification, making them less reliable on noisy data. While they offer improved accuracy on smooth data, this advantage is lost in noisy scenarios. Lower-order methods, being less sensitive to noise, are often the better choice for gradient estimation when noise is present.
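A quick sanity check of the amplification (the signal, noise level, and grid are made up for illustration): for a central difference (x[i+1] - x[i-1]) / (2h), additive noise of standard deviation sigma comes out with standard deviation sigma / (h * sqrt(2)), which blows up as h shrinks.

import math
import torch

torch.manual_seed(0)
t = torch.linspace(0.0, 2 * math.pi, 1000)
h = float(t[1] - t[0])
clean = torch.sin(t)
noisy = clean + 0.01 * torch.randn_like(t)
(d_clean,) = torch.gradient(clean, spacing=(t,))
(d_noisy,) = torch.gradient(noisy, spacing=(t,))
print((d_noisy - d_clean).std())    # ≈ 1.1: noise now dominates a derivative of magnitude <= 1
print(0.01 / (h * math.sqrt(2)))    # predicted noise level after differencing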

Finite difference gradient in PyTorch

The following is an implementation of a fourth-order accurate central finite difference gradient, with second-order forward and backward differences at the edges (it assumes sequences of at least five roughly uniformly spaced points):

import torch


def gradient(x: torch.Tensor, time: torch.Tensor) -> torch.Tensor:
    """
    Computes the derivative of `x` with respect to `time` using second-order
    differences at the edges and fourth-order central differences for interior
    points.

    Parameters
    ----------
    x : torch.Tensor
        Input tensor of shape [batch, sequence] representing the values for which
        the derivative is computed.
    time : torch.Tensor
        Monotonically increasing time tensor of shape [batch, sequence] that
        corresponds to the time coordinates for the input `x`.

    Returns
    -------
    torch.Tensor
        Derivative of `x` with respect to `time`, having the same shape as `x`.

    Notes
    -----
    - The first and last points use a second-order accurate one-sided difference.
    - The second and second-last points use a second-order central difference.
    - The interior points (from `i=2` to `i=sequence-3`) use a fourth-order
      central difference.
    - The stencils assume (at least locally) uniform time steps and a sequence
      length of at least 5.

    Examples
    --------
    >>> x = torch.tensor([[1.0, 2.0, 3.0, 4.0, 5.0]])
    >>> time = torch.tensor([[0.0, 1.0, 2.0, 3.0, 4.0]])
    >>> dx_dt = gradient(x, time)
    >>> print(dx_dt)
    tensor([[1., 1., 1., 1., 1.]])
    """
    # Initialize the derivative tensor
    dx_dt = torch.zeros_like(x)
    # Second-order forward difference for the first point
    dt0 = time[:, 1] - time[:, 0]
    dx_dt[:, 0] = (-3 * x[:, 0] + 4 * x[:, 1] - x[:, 2]) / (2 * dt0)
    # Second-order central difference for the second point
    dt1 = time[:, 2] - time[:, 1]
    dx_dt[:, 1] = (x[:, 2] - x[:, 0]) / (2 * dt1)
    # Second-order backward difference for the last point
    dt_end = time[:, -1] - time[:, -2]
    dx_dt[:, -1] = (3 * x[:, -1] - 4 * x[:, -2] + x[:, -3]) / (2 * dt_end)
    # Second-order central difference for the second-last point
    dt2 = time[:, -2] - time[:, -3]
    dx_dt[:, -2] = (x[:, -1] - x[:, -3]) / (2 * dt2)
    # Fourth-order central difference for interior points; the local step h is
    # half the span time[i+1] - time[i-1]
    dt = (time[:, 3:-1] - time[:, 1:-3]) / 2
    dx_dt[:, 2:-2] = (-x[:, 4:] + 8 * x[:, 3:-1] - 8 * x[:, 1:-3] + x[:, :-4]) / (
        12 * dt
    )
    # TODO: find where bdot is calculated in PhyLSTM, or whether it is raw data
    return dx_dt
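A quick usage check on a cubic, which the fourth-order interior stencil differentiates exactly (the tensors are illustrative):

t = torch.linspace(0.0, 1.0, 11).unsqueeze(0)  # [1, 11], uniform step h = 0.1
x = t**3
dx_dt = gradient(x, t)
print(torch.allclose(dx_dt[:, 2:-2], 3 * t[:, 2:-2] ** 2, atol=1e-5))  # True

The edge stencils are only second-order, so the first and last two values carry O(h^2) error on the cubic.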