Experiments with stepcounts. An n=1 experience

After a little over two weeks of running an AID system that can adjust its sensitivity and aggressiveness using an activity model, it’s time to provide some feedback.

The model I’ve implemented uses steps as a proxy for activity and offers three specific use cases.

  • Ability to determine when you are asleep and disable specific functionality in that case
  • Ability to spot inactivity and adjust sensitivity and basal accordingly
  • Ability to detect activity and adjust sensitivity and basal accordingly

It does all of the above based on stepcounts, as recounted here.
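To make the three behaviours concrete, here is a minimal sketch of how step counts over two rolling windows might be mapped to an activity state. The thresholds, window lengths and state names are illustrative assumptions, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class StepWindow:
    steps_last_5min: int   # most recent short window
    steps_last_60min: int  # longer window for sleep/inactivity detection

def classify(window: StepWindow) -> str:
    """Map recent step counts to an activity state (illustrative thresholds)."""
    if window.steps_last_60min < 50:
        # essentially no movement for an hour: asleep or deeply inactive
        return "asleep-or-inactive"
    if window.steps_last_5min < 20:
        # awake (moved recently) but currently sedentary
        return "inactive"
    if window.steps_last_5min > 300:
        # sustained walking or more
        return "active"
    return "normal"
```

In a real system each state would then drive a different adjustment (disable Boost, raise basal, reduce delivery, and so on), as described below.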

But how well does it work?

Overall Performance

The image below provides Time in Range data over the 16 days of trying this out, for the period between 7am and midnight. I’ve cut the overnight off for two reasons.

  1. Sensor placement over the 16 days has resulted in a lot of compression lows and lost data overnight, causing the overnight data to skew results.
  2. The Boost component of the algorithm was only active between these hours, meaning that inactivity detection was only available in that period.

This means that what we’re looking at here is solely the period when accelerated bolusing and stepcount effects were fully enabled.

Time in range 7am to midnight 08/05 to 23/05

When we look at the ambulatory glucose profile (AGP) for the period, we can see the overnight issues quite clearly, but there’s also an artefact that occurs early evening.

There’s a tendency towards lows between 6pm and 8pm, which is linked to the period when I walk home from work. We’ll discuss this in more detail later on.

Given the above views, what can we take from the three different pieces of stepcount related functionality?

Sleeping

The functionality that detects an extended sleep period and decides whether Boost and extended bolusing start on schedule or are delayed has worked well. It has removed concerns about accelerated bolusing taking place while I’m sleeping in at a weekend, without my having to remember to change settings.

In terms of overall user experience, this is useful functionality in a non-learning model, as it reduces the amount of interaction required around making changes to settings.
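As a sketch, the sleep-in behaviour could be expressed as: only enable Boost once the normal window has started *and* steps have been seen recently. The function name, the 7am start and the 30-minute gap are assumptions for illustration:

```python
from datetime import datetime, timedelta

def boost_enabled(now: datetime, last_step_time: datetime,
                  normal_start_hour: int = 7,
                  wake_steps_gap: timedelta = timedelta(minutes=30)) -> bool:
    """Illustrative: delay Boost until steps indicate the user is awake."""
    if now.hour < normal_start_hour:
        return False  # outside the Boost window anyway
    # No recent steps -> assume still asleep, keep accelerated bolusing off
    return (now - last_step_time) <= wake_steps_gap
```

The appeal is exactly what the text describes: a weekend lie-in keeps `last_step_time` old, so Boost stays off with no manual settings change.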

Inactivity

The inactivity detection also seems to work pretty well, increasing basal rates and decreasing sensitivity whilst I’m sitting in the office at work and really not moving very much. The alternative approach for this, that I’d been using for a while, was location based profile switching, which also works. This approach allows a slightly more granular effect though, and I think it’s reflected in the “middle of the day” AGP.

Having said that, like the activity function, inactivity detection is limited by having no knowledge of what you will do next. It therefore retains “less sensitive” settings that you might want to end earlier if you know you’re about to move from inactive to active.
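The adjustment itself is simple in principle: more basal and less sensitivity while inactive. A minimal sketch, where the 20% scaling factors are assumptions rather than the values actually used:

```python
def adjust_for_inactivity(basal_rate: float, isf: float,
                          inactive: bool) -> tuple[float, float]:
    """Illustrative: scale basal up and ISF down while the user is inactive."""
    if inactive:
        # more insulin delivery, reduced sensitivity factor
        return basal_rate * 1.2, isf * 0.8
    return basal_rate, isf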

Activity

For me, this was the most interesting of the three. There has been a lot of discussion about variation of the effects of AID taking into account step count, heart rate and potentially other factors.

I find that the discussion tends to ignore Insulin On Board (IOB), which is the most important factor in determining whether lows are more or less likely in relation to exercise for me.

I disabled settings that modified profiles pre-emptively, to give the stepcount based model a proper test.

Initial concerns

As stated, I’ve considered IOB to be the determining factor for whether exercise is likely to result in low glucose levels, and as a result have learned that there are ways to remediate this ahead of time, using variable targets, profiles and mealtime bolusing techniques. 

The idea of stepcounts triggering changes only after exercise has started has always felt too late.

This was my concern with this model. I didn’t expect it to prevent the need for pre-emptive behaviour.
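Why “too late”? Even a crude decay model shows that insulin already on board at the start of a walk keeps acting for a large fraction of the duration of insulin action (DIA), regardless of what the algorithm does once steps appear. The linear decay below is a simplifying assumption (real activity curves are not linear), used only to illustrate the point:

```python
def remaining_iob(iob_now: float, minutes_elapsed: float,
                  dia_minutes: float = 300) -> float:
    """Linear-decay IOB estimate (illustrative only; real curves differ)."""
    remaining_fraction = max(0.0, 1 - minutes_elapsed / dia_minutes)
    return iob_now * remaining_fraction
```

With a 5-hour DIA, half of a mealtime bolus is still active 2.5 hours later, which is why reductions triggered at the first steps of a post-dinner walk cannot undo insulin delivered an hour earlier.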

Outcomes

Given what I’ve said in the previous section, the results here aren’t unexpected but do have some points worth further consideration.

The image above shows the 4pm to 8pm period for weekdays, a period where stepcounts increase and the trigger for activity is activated.

During the test period, there was a consistent pattern of drops that needed treating, as the stepcount activity effects were occurring too late to make a difference, due to existing IOB.

Similarly, at the weekends, when I’m generally more active, the lack of pre-emptive change resulted in greater variation in glucose levels than I’d like to have seen.

Having said that, once pre-emptive measures have ended, ongoing activity would continue to be detected and reductions in insulin delivery applied, which is beneficial.

What’s worth mentioning, though, is that there is a clear time-of-day and step count pattern to behaviour, and I think that’s where the real benefit of using activity data could lie.

Activity and machine learning

While taking account of activity in real time made little difference to shorter-term effects on glucose levels where pre-emptive actions are known to work, the data could serve as a useful input into a machine learning model.

If we were trying to create an ML model that predicted future glucose based on a series of inputs, trained against outputs, step count might be something we’d look at, in association with time.

If we looked at time windows, or some sort of rolling “DIA time” window, it may be possible to determine that on specific days, a pattern occurred where predictions based on the standard model are often wrong and changes need to be made now to improve outcomes three hours in the future.

There’s a lot of work to do there, but as an input parameter in something like a LightGBM approach, you could get some good results.
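As a sketch of what that input might look like: pair rolling step-count features (plus a time-of-day index) with glucose a fixed horizon ahead, ready to feed into a gradient-boosted regressor such as LightGBM. Window sizes, the 3-hour horizon and the feature layout are all assumptions for illustration:

```python
def build_features(steps_5min: list[int], glucose: list[float],
                   horizon: int = 36) -> tuple[list[list[float]], list[float]]:
    """Pair rolling step features with glucose `horizon` samples ahead
    (36 x 5 min = 3 hours, matching the 'three hours in the future' idea)."""
    X, y = [], []
    for t in range(12, len(glucose) - horizon):
        X.append([
            steps_5min[t],                # current 5-minute step count
            sum(steps_5min[t - 12:t]),    # steps over the previous hour
            t % 288,                      # time-of-day index (288 5-min bins/day)
        ])
        y.append(glucose[t + horizon])    # glucose three hours later
    return X, y
```

In practice you’d add IOB, carbs and recent insulin delivery as features too; steps and time alone would at best capture the weekday-pattern effects discussed above.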

Overall then?

There’s a lot to like about the use of stepcounts to identify realtime activity, but to maximise effectiveness, it needs to be applied to the appropriate functions.

In this case, sleep detection seems to fit that bill when combined with time, while activity and inactivity were less effective to different degrees, and both required additional tools to get the most out of them.

The main thing this has highlighted is that IOB is critical in managing activity, and that to get the best results, action needs to be taken ahead of activity to minimise IOB as far as possible.

With this in mind, I think steps and heart rate on their own aren’t the solution to managing exercise. They help to reduce risk when activity is spontaneous, but they don’t overcome the challenge of the decay time of today’s insulins.

That requires a little more prescience, or perhaps machine learning and pattern recognition.

That being said, I’m sure there are other scenarios that steps can assist with that I haven’t thought about, and it would be great to see some ideas around those.

2 Comments

  1. Really interesting post.
    I am an alpinist with T1D who struggles to stay in range during my uphill and downhill activities. I am currently studying to start with an AAPS system and looking at whether incorporating other data could fix some recurrent highs and lows during my activities. These are obviously not constant in energy expenditure, given that I alternate uphill and downhill sections of different steepness.
    I fully agree with you that basing adjustments on HR or step count variation could cause trouble in the transition periods, both from inactivity to activity and from low- to high-intensity activity.
    I figured out that, in my case, by correlating altitude variation over time with energy expenditure (from a sports watch, for example) and with ISF, I could tell the algorithm what will happen during the upcoming activity.
    Of course, being able to compare the forecast energy expenditure with the actual values recorded by the watch would let the system better predict the next one.
    I recognise that this approach would still need some prior input from the user, but it might show better results in terms of control.
    I’m writing these lines to find out whether someone has already tried this route, or would be interested in trying to figure out how to make it possible.

    • To be brutally honest, there are few implementations of anything that use additional sensors to adapt algorithm behaviour.

      On the one hand, adapting the code to accept and use the data is one part. The harder part is identifying an algorithm that works beyond n=1 and then testing it. It sounds as though you have a start there, but any systematic implementation would need to take that into account.
