We’ve recently seen the Libre2 and Libre3 approved for use with AID systems, so I thought it would be an interesting time to look at what happens in one of the available apps that purport to take the Libre’s one-minute data and make it available at five-minute intervals for use with open source AID tools.
Here we have data produced by Diabox. The graph shows the variation of 1 minute data (blue line) and then Diabox’s interpolation to create a 5 minute dataset (orange line). The data has been captured using a combination of Nightscout and xDrip. There’s obviously quite a dramatic difference.
This is only a small dataset, taken from a six hour period, but it allows us to see some of the challenges in using the minute-by-minute data that’s captured in some third party apps.
The minute-by-minute data appears to show a lot of microspikes, while the five-minute data is clearly using some sort of averaging or smoothing to eliminate this jumpiness. What’s not clear from any of the documentation is how this is being done.
What data are we actually looking at?
I asked Bubblan to provide some feedback on the source of both datasets. The feedback was that Diabox itself pulls raw data, but depending on how you display it (Libre2 patched app on xDrip or data from Nightscout) you are likely to see different things.
The five-minute data should simply be the one-minute data sampled at five-minute intervals.
If, however, you feed it through xDrip, using the patched Libre2 app option, xDrip itself smooths the data, which would explain why the five-minute data points don’t match the one-minute ones every five minutes.
If you’ve used the official Libre2 app, in general you don’t see the microspike effect evident in the graph; it appears to be smoothed out by what we’ve come to know in the open source world as the OOP algorithm. From what I can see, straight out of the box, this isn’t what Diabox does.
That’s not to say that Diabox doesn’t offer smoothing. It very much does, with two options.
Savitzky-Golay fits a low-degree polynomial to a sliding window of data points by least squares, and replaces each point with the value of that fitted polynomial. Borrowing an image from Wikipedia, this can be seen in operation:
Noise estimate correction, I assume, looks back over a series of data points to estimate the noise, then applies a correction to the next data point.
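Purely as a sketch of that guess (this is my reading of the name, not Diabox’s actual code), such a correction might estimate recent noise from the spread of point-to-point deltas and clamp any new reading that jumps by more than a multiple of it:

```python
import statistics

def noise_corrected(readings, window=6, max_dev=1.5):
    """Hypothetical 'noise estimate correction': estimate recent noise as
    the standard deviation of deltas over a look-back window, and clamp
    any new reading whose jump exceeds max_dev * noise. The window size
    and threshold are invented for illustration."""
    out = readings[:window]
    for i in range(window, len(readings)):
        recent = out[-window:]
        deltas = [b - a for a, b in zip(recent, recent[1:])]
        noise = statistics.stdev(deltas) if len(deltas) > 1 else 0.0
        prev = out[-1]
        step = readings[i] - prev
        limit = max_dev * noise
        if noise and abs(step) > limit:
            step = limit if step > 0 else -limit  # clamp the jump
        out.append(prev + step)
    return out

# A 20 mg/dL microspike after a stable run gets pulled back sharply:
print(noise_corrected([100, 100, 101, 100, 101, 100, 120]))
```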
For minute-by-minute data, using either of these is probably preferable to the smoothing-free data shown in the initial graph.
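To give a concrete sense of the first option: for a five-point window and a quadratic polynomial, the Savitzky-Golay least-squares fit reduces to a fixed convolution with well-known coefficients. This is the standard textbook filter, not necessarily the window size or order Diabox uses:

```python
# Savitzky-Golay smoothing, 5-point window, quadratic fit.
# For this window/order the least-squares polynomial fit reduces to a
# fixed convolution with coefficients (-3, 12, 17, 12, -3) / 35.
SG_COEFFS = (-3, 12, 17, 12, -3)

def savgol_5pt(values):
    """Smooth a list of readings; the two endpoints on each side are
    left unsmoothed for simplicity."""
    smoothed = list(values)
    for i in range(2, len(values) - 2):
        window = values[i - 2:i + 3]
        smoothed[i] = sum(c * v for c, v in zip(SG_COEFFS, window)) / 35
    return smoothed

# A single microspike is damped rather than passed straight through:
readings = [100, 101, 102, 115, 104, 105, 106]  # 115 is a spike
print(savgol_5pt(readings))
```

A useful property of this filter is that any data already lying on a quadratic curve passes through unchanged, so genuine trends are preserved while isolated jumps are damped.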
Five minute data
The five-minute data is smoothed by xDrip. The exact technique would need to be confirmed via the xDrip documentation. Undertaking basic and weighted averaging over a series of data points didn’t reproduce the values, so I assume this uses a more advanced smoothing technique. It’s certainly not simply taking the raw reading at each five-minute point.
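For anyone wanting to repeat the check, the kind of comparison described above might look like this. The linearly increasing weights are illustrative only; they are not a claim about what xDrip actually does:

```python
def simple_average(window):
    """Plain mean of the readings in the window."""
    return sum(window) / len(window)

def weighted_average(window):
    # Linearly increasing weights: the most recent reading counts most.
    weights = range(1, len(window) + 1)
    return sum(w * v for w, v in zip(weights, window)) / sum(weights)

def candidates_at_5min(one_min, t):
    """Candidate five-minute values at minute t, built from the last
    five one-minute readings, for comparison against the observed
    five-minute series."""
    window = one_min[t - 4:t + 1]
    return {
        "raw": one_min[t],
        "simple": simple_average(window),
        "weighted": weighted_average(window),
    }

# Comparing each candidate against xDrip's actual five-minute output
# shows none of the basic averages match, per the text above.
print(candidates_at_5min([100, 102, 104, 106, 108, 110], 5))
```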
Why do we care?
If you are using an Automated Insulin Delivery system, you should care.
Firstly, knowing what data you’re actually using for anything open source is probably a good idea, and as this shows, it’s not always immediately obvious what that data is. The more applications in the chain, the greater the risk of reduced clarity.
Secondly, imagine an algorithm making a decision based on the microspikes shown in the raw one minute data compared to the five minute smoothed version.
We already know that the majority of commercial systems implement their own smoothing algorithms, and that both AndroidAPS and iAPS have implemented similar smoothing. This helps to avoid erroneous higher readings that could result in overdelivery of insulin.
It’s also beneficial if you’re dosing from an app, for similar reasons.
The takeaway is that if you’re using any tool like this, it’s worth considering enabling the smoothing functionality. It may not give you exactly what the sensor is reading right now, but doing so is much less likely to result in accidentally delivering too much insulin. Which is more important to you?