A close up view of Libre CGM data via Diabox…

We’ve recently seen the Libre 2 and Libre 3 approved for use with AID systems, so I thought it would be an interesting time to look at what happens in one of the available apps that purport to take Libre’s 1 minute data and make it available at 5 minute intervals for use with open source AID tools.

Here we have data produced by Diabox. The graph shows the variation of 1 minute data (blue line) and then Diabox’s interpolation to create a 5 minute dataset (orange line). The data has been captured using a combination of Nightscout and xDrip. There’s obviously quite a dramatic difference.

This is only a small dataset, taken from a six hour period, but it allows us to see some of the challenges in using the minute-by-minute data that’s captured in some third party apps.

The minute-by-minute data appears to show a lot of microspikes while the five minute data is clearly using some sort of averaging or smoothing to eliminate this jumpiness. What’s not clear from any of the documentation is how it is being done.

What data are we actually looking at?

I asked Bubblan to provide some feedback on the source of both datasets. The feedback was that Diabox itself pulls raw data, but depending on how you display it (the Libre 2 patched app option in xDrip, or data from Nightscout) you are likely to see different things.

The five minute data should simply be the one minute data sampled at five minute intervals.

If, however, you feed it through xDrip, using the patched Libre2 app option, xDrip itself smooths the data, which would explain why the five minute data points don’t match the one minute ones every five mins.

If you’ve used the official Libre2 app, in general you don’t see the microspike effect evident in the graph, which is something that seems to be smoothed by what we’ve come to know in the open source world as the OOP algorithm. From what I can see, straight out of the box, this isn’t what Diabox does.

That’s not to say that Diabox doesn’t offer smoothing. It very much does, with two options.

Savitzky-Golay fits a local polynomial to successive sub-sets of the data and uses the fitted value at each point as the smoothed output. Borrowing an image from Wikipedia, this can be seen in operation:

Animation showing smoothing being applied, passing through the data from left to right. The red line represents the local polynomial being used to fit a sub-set of the data. The smoothed values are shown as circles.
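To make the idea concrete, here’s a minimal sketch of Savitzky-Golay smoothing applied to a synthetic glucose trace, using SciPy’s standard implementation. The trace, the noise level, and the window/polynomial parameters are all illustrative assumptions of mine, not Diabox’s actual values.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic 1-minute glucose trace (mg/dL) with added noise to mimic microspikes.
# These values are made up for illustration; they are not real sensor data.
rng = np.random.default_rng(42)
minutes = np.arange(120)
trend = 120 + 30 * np.sin(minutes / 30)            # slow underlying glucose trend
raw = trend + rng.normal(0, 6, size=minutes.size)  # jumpy 1-minute readings

# Savitzky-Golay: fit a low-order polynomial over a sliding window.
# window_length=15 (i.e. 15 minutes) and polyorder=2 are my guesses,
# not parameters confirmed from Diabox.
smoothed = savgol_filter(raw, window_length=15, polyorder=2)

# The smoothed trace sits much closer to the underlying trend than the raw one.
print(np.abs(raw - trend).mean(), np.abs(smoothed - trend).mean())
```

Because the fit is a polynomial rather than a flat average, this approach suppresses the microspikes while following genuine rises and falls more faithfully than a simple moving average would.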

Noise estimate correction, I assume, looks back over a series of data points, estimates the noise, and applies a correction to the next data point.

For minute-by-minute data, using either of these is probably preferable to the smoothing-free data shown in the initial graph.

Five minute data

The five minute data is smoothed by xDrip. The exact technique would need to be confirmed via the xDrip documentation. Undertaking basic and weighted averaging over a series of data points didn’t reproduce the values, so I assume a more advanced smoothing technique is in use. It’s certainly not simply taking the raw reading at each five minute mark.
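One scheme that has been described for xDrip’s patched-app source (see the comment further down pointing at LibreReceiver.java) is a recency-weighted average of the last 25 minutes of one-minute readings. Here’s a minimal sketch of that idea; the linear weighting is my assumption for illustration, and xDrip’s actual weighting may differ.

```python
import numpy as np

def weighted_recent_average(one_minute_values, window=25):
    """Sketch of a recency-weighted average over the last `window` 1-minute
    readings. Linear weights (newest counts most) are an assumption for
    illustration; xDrip's actual scheme lives in LibreReceiver.java."""
    recent = np.asarray(one_minute_values[-window:], dtype=float)
    weights = np.arange(1, len(recent) + 1)   # newest value gets the largest weight
    return float(np.average(recent, weights=weights))

# Usage: a flat trace with one microspike at the end is barely moved by it.
flat = [100.0] * 24 + [130.0]
print(weighted_recent_average(flat))  # ≈ 102.3, nowhere near the 130 spike
```

This would explain both the behaviour in the graph (a single spurious one-minute reading barely shifts the five minute value) and the lag that such smoothing inevitably introduces.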

Why do we care?

If you are using an Automated Insulin Delivery system, you should care.

Firstly, knowing what data you’re actually using for anything open source is probably a good idea, and as this shows, it’s not always immediately obvious what that data is. The more applications in the chain, the greater the risk of reduced clarity.

Secondly, imagine an algorithm making a decision based on the microspikes shown in the raw one minute data compared to the five minute smoothed version.

We already know that the majority of commercial systems implement their own smoothing algorithms, and that both AndroidAPS and iAPS have implemented similar smoothing. This helps to avoid the erroneous higher readings that could result in overdelivery of insulin.

It’s also beneficial if you’re dosing from an app, for similar reasons.

The takeaway is that if you’re using any tool like this, it’s worth considering enabling the smoothing functionality. It may not give you exactly what the sensor is reading right now, but doing so is much less likely to result in accidentally delivering too much insulin. Which is more important to you?

4 Comments

  1. Thanks for this very interesting article.

    Are the microspikes shown in the unsmoothed data a reason why it takes so long for Libre 2 and Libre 3 CGMs to be integrated with pumps like the Omnipod 5, etc., as part of a hybrid closed loop?

    The Libre CGMs have a tendency (when I wear them at least) to read a few mmol/l lower than a fingerstick when glucose is heading down towards the mid 4s, and a few mmol/l higher than a fingerstick when heading above the mid 9s.

    • The reason closed loop integration took so long in the US is an interaction with high doses of vitamin C, which produces inaccurate readings that could mask a low.

      A new Libre 2 and 3 that lasts 15 days is shipping this month in the US. It has a Vitamin C filter over the electrode, but otherwise works the same.

  2. Nice summary. I’ve used the Libre 2 patched app via xDrip for a couple of years as my BG source for AAPS.

    As I understand it, if you choose “Libre (patched app)” as the hardware data source in xDrip, then it applies a weighted average of the last 25 minutes. Not quite sure how to describe it mathematically, but the newest values get the biggest weighting. The code is here:
    https://github.com/NightscoutFoundation/xDrip/blob/master/app/src/main/java/com/eveningoutpost/dexdrip/LibreReceiver.java#L232

    It does really smooth out the fluctuations from the raw values, e.g. with temperature changes, compression lows and start/end of sensor life. However with a good sensor it does make the readings very laggy and thus the loop less responsive. Good to stop the loop being too aggressive at times, but that can mean you get quite a delay on low alarms.

    I’ve been thinking of experimenting with the new smoothing algorithms in AAPS, but I think I’d need to use Juggluco to push data direct to AAPS. However, I’d still like the (perhaps less aggressively smoothed) raw data to drive low alarms in xDrip (rather than have alarms in Juggluco itself). That, or try Dexcom again.

    There are a lot of options with DIY Libre data…

  3. Using Juggluco, I regularly (every two weeks) run two sensors in parallel. One new, one at 14 days+. I occasionally run two sensors for longer periods.
    That gives a whole new view on smoothing, as the first day (particularly with no pre-insertion soak) is so much more erratic.
    Even day two and three can show increased variation across both parallel sensors. And, of course, they are different, but show a similar range of variability, decreasing with sensor (Libre) age.
