Everybody’s moving – integrating stepcounts into Open Source Automated Insulin Delivery

As I’ve written before, I’ve observed that when I’m not doing a lot of activity, my insulin sensitivity tends to decrease.

Whilst I’ve been managing that with automations that switch profiles according to location and time, I decided that it was time to integrate stepcounts into Boost and see how that might change things.

Taking advantage of work that Edward Robinson and Mathieu Tellier had undertaken, I have added step counts to Boost to experiment with how they might be used.

Steps over the most recent 5, 10, 15, 30 and 60 minute windows.

Between the Boost and AIMI versions of AAPS, we’re now gaining real-world experience of the benefits and shortfalls of the various tools in the Android ecosystem for using steps as a proxy for activity and modifying AID behaviour.

How are you using steps?

Within Boost there are three ways that steps can be used.

  • Inactivity identification
  • Sleeping in
  • Activity identification
Settings for using stepcount based adaptations

Inactivity identification

In the case of inactivity identification, as mentioned earlier, when the step count over an hour falls below a certain level, a modifier is applied that lowers the insulin sensitivity the system uses.

Right now, each individual needs to estimate the number that qualifies as inactive for them, but it provides a way to start identifying when you are being sedentary and make changes to the system.
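To make the logic concrete, here is a minimal sketch of that behaviour. The threshold, the size of the modifier, and all names are illustrative assumptions, not Boost’s actual parameters:

```python
# Hypothetical sketch of inactivity detection: if the step count over the
# last hour is below a user-estimated threshold, return a multiplier that
# treats insulin sensitivity as lower. Numbers are illustrative only.

INACTIVITY_STEP_THRESHOLD = 200       # steps/hour below which "sedentary"
INACTIVITY_SENSITIVITY_FACTOR = 0.85  # treat sensitivity as 15% lower

def sensitivity_modifier(steps_last_hour: int) -> float:
    """Return a multiplier applied to the insulin sensitivity factor (ISF).

    Below the threshold we assume reduced sensitivity, so the loop
    should dose more strongly (a multiplier < 1 on ISF).
    """
    if steps_last_hour < INACTIVITY_STEP_THRESHOLD:
        return INACTIVITY_SENSITIVITY_FACTOR
    return 1.0
```

In practice each user would tune the threshold to their own baseline, as the post notes.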

Sleeping in

Within Boost, accelerated bolusing has a user-defined start and end time. Enabling the system to identify that you might not have woken up yet allows accelerated bolusing to be delayed.

This uses the start time entered by a user, and a number of steps below which the user might be considered to still be in bed.

Whilst this is useful in the context of Boost, it may also provide a means by which time asleep and awake could be identified with limited or no external input, allowing different behaviours at these times.
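A sketch of the sleep-in delay might look like the following. The start time, step threshold, and function names are assumptions for illustration, not Boost’s real settings:

```python
from datetime import time

# Illustrative sketch of the "sleeping in" delay: accelerated bolusing has a
# user-defined start time, but is held back until the step count since that
# time suggests the user is actually up and about.

BOOST_START = time(7, 0)   # user-defined start of accelerated bolusing
WAKE_STEP_THRESHOLD = 100  # steps since BOOST_START implying "awake"

def accelerated_bolusing_enabled(now: time, steps_since_start: int) -> bool:
    """True once the window has opened AND the user appears to be up."""
    if now < BOOST_START:
        return False  # before the user-defined window
    # Below the threshold, assume the user is still in bed and keep waiting.
    return steps_since_start >= WAKE_STEP_THRESHOLD
```

The same pattern could be inverted to detect waking, which is where the foot-to-the-floor idea below comes in.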

I can envisage its use to reduce foot-to-the-floor effects, for example by specifying that any steps above a certain number after a certain time trigger a low temp target, to reduce waking hyperglycaemia.

Activity identification

This is the most difficult of all the settings to qualify using stepcounts alone. Whilst it allows the identification of a user walking somewhere, it doesn’t really help with any other form of exercise, so is limited in its usefulness.

Integration with smartwatches and their additional metrics can help, but these are often not available in real time on Android.

Right now, Boost identifies a number of steps in the last 60 mins above which it considers activity to be underway, and then allows the user to define a percentage decrease that is applied to basal rate and sensitivity. There is also the option of disabling Boost when activity is detected.

By using the last 60 minutes, the changes persist after the exercise has finished, when sensitivity is likely to be higher.
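A minimal sketch of that adjustment, with an assumed step threshold and reduction percentage (both would be user-defined; the names are illustrative, not Boost’s):

```python
# Sketch of activity detection as described above: if steps in the trailing
# 60 minutes exceed a threshold, apply a user-defined percentage reduction
# to the basal rate (the same factor would apply to sensitivity).

ACTIVITY_STEP_THRESHOLD = 3000  # steps in the trailing 60 minutes
ACTIVITY_REDUCTION_PCT = 30     # user-defined percentage decrease

def adjust_for_activity(steps_last_60min: int, basal: float) -> float:
    """Return the basal rate after any activity reduction."""
    if steps_last_60min > ACTIVITY_STEP_THRESHOLD:
        return basal * (100 - ACTIVITY_REDUCTION_PCT) / 100
    return basal
```

Because the trailing 60-minute window only empties gradually, the reduction naturally persists for a while after the walking stops.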

Whilst this may be helpful, it takes no account of existing insulin on board, which plays a large part in exercise hypoglycaemia. It might be useful to have a rising step count trigger a reminder to eat something when it coincides with higher IOB.
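That IOB-aware reminder is only a suggestion in the text, not something Boost implements; a hypothetical sketch, with entirely assumed thresholds, might be:

```python
# Hypothetical sketch of the suggested IOB-aware reminder: if activity-level
# steps coincide with high insulin on board, prompt the user to consider
# eating something. Both thresholds are illustrative assumptions.

ACTIVITY_STEP_THRESHOLD = 3000  # steps in the trailing 60 minutes
HIGH_IOB_THRESHOLD = 2.0        # units of insulin on board

def exercise_carb_reminder(steps_last_60min: int, iob: float) -> bool:
    """True when a 'consider eating something' notification should fire."""
    return (steps_last_60min > ACTIVITY_STEP_THRESHOLD
            and iob > HIGH_IOB_THRESHOLD)
```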

It also doesn’t really help with activities that don’t significantly change step counts; things like the gyroscope, accelerometers and external heart rate monitors may be useful there.

AIMI does something slightly different with activity, using steps in the last 30 mins as the trigger, and then effectively reducing profile by 60% and setting a higher target.

Does it work?

From previous experience, I know that inactivity adaptation has made a difference for me in the past, but I’ve not been able to test it for long enough to confirm whether, in this form, it does.

The sleep-in function has worked very well, and I can see it being the precursor to alternative ways of handling day and night in AID systems.

Activity I’m less sure about. I’ve very publicly stated that I think it’s more complex than steps, and I still think this is true. As we start to integrate additional sensors, I think it will become more beneficial, but right now, for me specifically, IOB matters too much for adjustment post exercise start to be significant.

Here we are right on the edge of what we can do with AID systems. There’s very little experience of automatically adapting for exercise and how it changes what we do, but there are a huge number of possibilities here, and the learning would be valuable to both open source and commercial systems.

What it does highlight is the benefit of open source. With open code, we can continue to push the boundaries of what can be done…

1 Comment

  1. Absolutely right… quite interesting thoughts. To continue to push the boundaries is really of great need. I didn’t test the step integration so far, but as you mentioned, I think it is just a first approach to integrating more sensor data. Thank you for working on these ideas…
