Data and user interventions. Multiple challenges for future artificial pancreas systems.

If there’s one thing I’ve learned from my experiences with Fiasp over the past month or so, it’s how important access to my data is. Without it, determining what was going wrong and how to fix it would have been very hard.

In particular, it was extremely useful to be able to review how much insulin I was taking alongside my carb consumption, and to construct a model that let me determine what had changed, by how much, and which components of insulin delivery were affected.

This allowed me to see that whilst there was little difference in basal, my meal-related bolusing needed to increase. A lot. NightScout presents this clearly and obviously, and of course, if I choose to, I can download the data in JSON form from the database it sits on, then run jq to filter and format it for Excel, which makes pretty graphs and scenario analysis very straightforward. Likewise, it can be extracted as a CSV for direct import into Excel.
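As a minimal sketch of that workflow, the snippet below pulls recent treatment records from a Nightscout instance's REST API and flattens them to CSV. The URL is a placeholder for your own site, and the fields shown (created_at, eventType, carbs, insulin) are the usual Nightscout treatment fields, though your records may carry more:

```python
# A sketch of pulling treatments out of Nightscout for use in Excel.
# NIGHTSCOUT_URL is a placeholder -- point it at your own instance.
import csv

import requests

NIGHTSCOUT_URL = "https://my-nightscout.example.com"  # placeholder

# Fetch the most recent treatment records (boluses, carbs, temp basals...).
resp = requests.get(
    f"{NIGHTSCOUT_URL}/api/v1/treatments.json",
    params={"count": 1000},
    timeout=30,
)
resp.raise_for_status()
treatments = resp.json()

# Flatten the fields of interest into a CSV that Excel opens directly.
with open("treatments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "eventType", "carbs", "insulin"])
    for t in treatments:
        writer.writerow(
            [t.get("created_at"), t.get("eventType"), t.get("carbs"), t.get("insulin")]
        )
```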

The key aspect of this is that I have a relatively simple way to access my data. This makes life much easier when trying to spot patterns, such as how my insulin needs might have changed, and what adjustments I might need to make.
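One simple pattern check, for example, is to compute the observed grams-per-unit ratio for each logged meal and average it by day. This is an illustrative sketch over the treatments.csv file produced above, and it assumes carbs and insulin are logged on the same treatment record, as Nightscout meal boluses usually are:

```python
# Illustrative pattern check: average grams-per-unit per day from the
# treatments.csv produced above. Assumes carbs and insulin appear on the
# same treatment record, as Nightscout meal boluses usually do.
import csv
from collections import defaultdict

daily_ratios = defaultdict(list)

with open("treatments.csv") as f:
    for row in csv.DictReader(f):
        carbs, insulin = row["carbs"], row["insulin"]
        if carbs and insulin and float(insulin) > 0:
            day = row["created_at"][:10]  # ISO date prefix, e.g. "2017-09-14"
            daily_ratios[day].append(float(carbs) / float(insulin))

# A falling g/U ratio day on day means more insulin per gram of carbs.
for day in sorted(daily_ratios):
    ratios = daily_ratios[day]
    print(day, round(sum(ratios) / len(ratios), 1), "g/U")
```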

The most useful of NightScout's reports were things like the Percentile Chart and the Distribution view, which enabled me to see how variation was taking over (views below as a quick reminder):

While these don't seem like very much, their real benefit is that in order to create the graphs, I've only had to enter the carb details into the system; everything else has been pulled together by NightScout.
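For a rough idea of what the Percentile Chart computes, the sketch below groups glucose entries by hour of day and prints percentile bands. The URL is again a placeholder; the endpoint and the dateString/sgv fields are standard Nightscout, but timestamps are typically UTC, so a real report would convert to local time first:

```python
# Rough equivalent of the Percentile Chart: glucose percentiles by hour
# of day. URL is a placeholder; dateString/sgv are standard Nightscout
# fields, but check your own instance, and note the timestamps are
# usually UTC rather than local time.
import statistics  # statistics.quantiles needs Python 3.8+
from collections import defaultdict

import requests

NIGHTSCOUT_URL = "https://my-nightscout.example.com"  # placeholder

entries = requests.get(
    f"{NIGHTSCOUT_URL}/api/v1/entries/sgv.json",
    params={"count": 10000},
    timeout=30,
).json()

by_hour = defaultdict(list)
for e in entries:
    hour = int(e["dateString"][11:13])  # hour from the ISO timestamp
    by_hour[hour].append(e["sgv"])

for hour in sorted(by_hour):
    q = statistics.quantiles(by_hour[hour], n=10)  # deciles: 9 cut points
    # q[0] ~ 10th percentile, q[4] = median, q[8] ~ 90th percentile
    print(f"{hour:02d}:00  10th={q[0]:.0f}  median={q[4]:.0f}  90th={q[8]:.0f}")
```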

If I were using Libre, I'd have to overlay the Libre data with the pump data, and if I were using a Medtronic system then, based on my previous experience of CareLink, I'd have reports, but access to the underlying data would be more or less non-existent. And let's be fair, access to CareLink isn't exactly easy, given the multitude of Java issues there always seem to be.

But what's the point I'm making? It's that while Hybrid Closed Loops with Self Healing Algorithms are amazing, they have limitations in how fast they can adjust to big changes. Taking this Fiasp issue as an example and looking at both Autotune and Autosens, you can see that Autotune doesn't pick it up, as the issue isn't really with the basal insulin, and therefore neither does Autosens. In the meantime, the Carb Ratio calculation running in Autotune is still bounded by safety limits (for good reasons), so it doesn't adjust as fast as we might like. That's not to say it wouldn't get there in the end, but the mechanism it works under in normal circumstances has a level of safety net on it.
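To make the bounded-adjustment point concrete, here's a toy illustration rather than the actual Autotune code, and the 20% per-run cap is an assumed figure for illustration, not OpenAPS's real limit. If each run may only move a setting a capped fraction towards its new target, a big shift takes several runs to land:

```python
# Toy illustration, not the actual Autotune code: a setting that may only
# move a capped fraction per run takes several runs to absorb a big shift.
# The 20% cap is an assumed figure for illustration only.
def capped_adjust(current, target, max_fraction=0.2):
    """Move current towards target by at most max_fraction of current."""
    cap = current * max_fraction
    step = target - current
    return current + max(-cap, min(cap, step))

ratio = 10.0   # g/U carb ratio before the insulin change
target = 6.0   # g/U actually needed afterwards
run = 0
while abs(ratio - target) > 0.1:
    ratio = capped_adjust(ratio, target)
    run += 1
    print(f"run {run}: {ratio:.2f} g/U")
# Prints three runs (8.00, 6.40, 6.00): even a large miss closes only
# gradually, which is the safety net described above.
```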

The same would be true for any commercial system, and as we've seen from the feedback on the 670G, it can take quite a long time to adjust to meet the needs of some users. If I had been trying to use Fiasp on that platform, I'd have found it really tough, as it takes no notice of user-entered settings when in auto mode. So as the changes occurred, it would have bumped into the built-in safeguards and left me with high glucose levels, probably for quite a long period of time.

Data access is one key component

So what's my point? It's that as end users of a system, we need access to a reporting structure that gives us good, timely and useful information in an easy-to-read form. That's the first point. This is what helps us to identify that there may be issues.

But that's not all. Once we can see that level of information, a fair few of us will want to dig down into the data and look further into what's going on. And then we need access to the data ourselves. In a world where we are expected to manage our own conditions, it's imperative that we can pull data out of the repository a company is using to build reports for us, so that we can try to understand it for ourselves.

The challenge, then, is how we get there. I've not seen a 670G dataset or tools, but I assume it uses CareLink. Fortunately, that includes an option to download the data in a file format that Excel can open, so I'd have been able to apply similar techniques to those I used on the data I had from OpenAPS.
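Something along these lines, for instance, though this is only a sketch: CareLink CSV exports vary by pump and locale, often carry preamble rows before the data, and the column names below are examples of the kind of fields you'd map rather than a guaranteed schema:

```python
# Sketch only: CareLink CSV exports vary by pump and locale and usually
# carry preamble rows before the header, so the column names here are
# examples of the kind of fields you'd map, not a guaranteed schema.
import csv
from collections import defaultdict

daily_bolus = defaultdict(float)
daily_carbs = defaultdict(float)

with open("carelink_export.csv") as f:
    # If your export has preamble rows, skip them before this point.
    for row in csv.DictReader(f):
        day = row.get("Date", "")
        bolus = row.get("Bolus Volume Delivered (U)")  # assumed column name
        carbs = row.get("BWZ Carb Input (grams)")      # assumed column name
        if bolus:
            daily_bolus[day] += float(bolus)
        if carbs:
            daily_carbs[day] += float(carbs)

# Daily totals side by side make a rising insulin-per-carb trend obvious.
for day in sorted(daily_bolus):
    print(day, f"bolus={daily_bolus[day]:.1f}U", f"carbs={daily_carbs[day]:.0f}g")
```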

What are Bigfoot and BetaBionics planning? The only reports I've seen from iLet testing don't look all that impressive. Will users be able to access all the data and run it through Excel, or some other software or filtering techniques? Given the background of the key players in both of these companies, I'd hope there are appropriate tools for data access and review, alongside good reporting.

The key point here, though, is that the data is ours, and access to it is imperative. Whatever the commercial companies are planning, it had better include open access, otherwise some aspects of living with T1D and a closed loop could become extremely difficult.

End user adjustment may not always be required, but there are times when it should be possible

The second key point is that, with the best will in the world, things don't always work out, and a policy of ignoring user-entered settings may not always be the right one when changes are as severe as those many of us have seen with the 670G.

Whilst I've benefited from running with OpenAPS, if I were using the 670G I'd have been in all sorts of trouble with high glucose levels as auto mode tried and failed to handle it, then gave up after I'd been too high for too long.

Whilst I could see some of this in the data that I had, on the 670G I'd also have had a drop-out of data and then little further information to work with, and I'd probably have spent a couple of weeks tweaking things while the 670G learned all over again where I was.

I wouldn’t want to have gone through the Fiasp experience with that going on.

What are you saying?

There are two conclusions that I’ve drawn from my recent experiences:

  1. Access to data is far more important than I had realised, and I hope that the forthcoming APS solutions recognise and support access to, and availability of, user data.
  2. There are occasions where, however good an algorithm is, it will get stuck on the safety limits that need to be imposed on it. In these circumstances, some form of expert-user override is not a bad thing. Unless, of course, we aren't expecting expert users on many of these systems.

And therein lies the challenge: how will future APS manufacturers balance these points with the needs of the regulator and the need to ensure safety for the "Normal" user? That's a tough one to answer!
