@openAPS – more steps forward… Does it ever stand still? Not if #wearenotwaiting

When I wrote about using OpenAPS last time around, I had a list of features that I was having a few issues with. Since then I’ve made yet more changes and am now running a more physically consolidated, if functionally different, set-up. Prior to this, I had used the new set-up scripts to build a quickly installed Dev branch featuring Advanced Meal Assist and Sensitivity adjustment.

Both are great features, but they quickly highlighted and amplified my existing concerns, and added something new to solve.

Taking a step back, the list of my concerns with what I had was:

  1. Poor connectivity between the 640G and the Pi, making glucose readings erratic
  2. Overlap of Bluetooth signals causing timeouts between the CNL and the Transmitter, leaving me with missing data
  3. The Carelink USB stick.

And the new?

  • Exceeding Azure quotas on the free option.

Updating the OpenAPS install and Raspberry Pi hardware

The first two of these were easily rectified. I've dropped Medtronic glucose monitoring and switched to the Dexcom G5. It's all linked to my iPhone, so my OpenAPS rig is pulling all of its data from the cloud, but the reality is that I'm rarely without an internet connection, so it works okay. Offline operation is less important than I had thought.
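
For anyone curious what "pulling from the cloud" actually means here: the rig just reads recent glucose entries from Nightscout's REST API. A minimal sketch (the site URL is a placeholder for your own Nightscout instance, not my actual set-up):

    # Fetch the most recent sensor glucose (SGV) entry from a Nightscout site.
    # Replace the URL with your own instance; this is purely illustrative.
    curl -s "https://your-nightscout-site.example.com/api/v1/entries/sgv.json?count=1" \
      | python -m json.tool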

Having made the change, I noticed throughout the logs that there was a bit of an issue with the Carelink stick. I was seeing a lot of failures to connect to the pump and this was really affecting the efficiency of the system. Dana had suggested it might be a dying stick, and I didn’t really want to find out by it doing just that. In addition, the range wasn’t great with the Carelink. Just about eight feet.

Time, then, to replace the radio connectivity to the pump with something offering greater range and, hopefully, a little more reliability. This would be Slice of Radio time, available from modmypi.co.uk.

I’d already bought a Slice of Radio, so it was time to add it to the Pi and see where that took me. The process really had two parts: get the tools to flash the firmware and do it, then update the OpenAPS install with “mmeowlink”.

I’d already had an issue building the suggested tool for the CC-Debugger on Linux, so I had to dig around and find another way. Fortunately, Texas Instruments do provide a Windows tool for flashing these chips, and it is designed to work with the type of file that needs to be flashed to the Slice of Radio, so I was able to use that instead. It made the process very straightforward.

Then I needed to set up mmeowlink.

The earlier-linked Wiki provides a very straightforward approach to building what you need, with a few minor gotchas. If you are not in the US, there are a couple of things you need to do in order to get the system running effectively.

  1. There’s a script that needs to be run before you try the mmtune function – that’s now been updated in the wiki.
  2. Once you’ve got mmeowlink on your Pi, you need to re-set up the system for WorldWide frequencies. You do this by changing the region from US to WW on line 14 of the ~/src/mmeowlink/mmeowlink/mmtune.py file, then re-running ~/src/mmeowlink/setup.py using `sudo python setup.py develop` (see the sketch below). The Slice of Radio then adheres to non-US frequencies, which worked nicely.
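
For anyone doing the same, the WW change boils down to a quick edit and a re-install, something like this (the exact line you edit may move between mmeowlink versions, so treat it as a sketch rather than gospel):

    # Change the region from US to WW in mmtune.py (line 14 at the time of
    # writing), then reinstall mmeowlink so the change takes effect.
    nano ~/src/mmeowlink/mmeowlink/mmtune.py
    cd ~/src/mmeowlink
    sudo python setup.py develop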

Having completed these steps, the Pi connected to the pump without the Carelink stick. The script and mmtune need to be in the error-handling part of the pump-loop code in case of lost connectivity.
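
The shape of that error handling is simple: if the pump can't be reached, retune the radio and try again. The alias names below are illustrative rather than a copy of my openaps.ini, so adjust them to whatever your own set-up defines:

    # Sketch of the retry logic: if the loop fails to reach the pump,
    # re-run mmtune to find the best frequency, then try the loop again.
    # "pump-loop" and "mmtune" stand in for whatever aliases you've defined.
    openaps pump-loop || {
      echo "Pump unreachable - retuning radio" >&2
      openaps mmtune
      openaps pump-loop
    }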

Having run through all of this, with the help of Scott Leibrand via Gitter, I ran a trial loop. And it worked, pretty much flawlessly. The next step? Flip out the “old” Carelink loop and replace it with the mmeowlink loop in cron. That’s a dead easy change to make.
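
For illustration, the swapped cron entry ends up looking something like this (the directory and alias name are placeholders for your own set-up, not my exact crontab line):

    # Illustrative crontab entry: run the mmeowlink-based loop every five
    # minutes, appending output to a log file.
    */5 * * * * cd /home/pi/openaps && openaps mmeowlink-loop >> /var/log/openaps/loop.log 2>&1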

The result? My most stable loop yet. It’s run with almost no errors, and it handles the errors it does hit far better than the Carelink loop did. No more USB port issues or missing bytes. A much better set-up all round, in fact, and a significantly reduced footprint. The APS box now looks like this with its battery:

[Image: aps-reduced]

Previously, it was all wires and bits sticking out:

[Image: img_0030]

So much more portable!

The OpenAPS rig is therefore, more or less, complete.

Fixing the capacity constraints of Azure – by migrating to Heroku

Completion brought my attention to the issue with Azure. The new set-up built via the set-up script runs more frequently than the old one, meaning that it polls Nightscout and posts to it more often. The Azure Free (pay-as-you-go) option only allows 60 minutes of CPU time per day, running from midnight UTC to midnight UTC. With 24-hour looping, I’m hitting this limit at around 7pm UTC. I’d limited my loop to run every five minutes, but that wasn’t fixing it: at that rate, each five-minute cycle is still burning roughly 15 seconds of CPU. Upgrading to a Basic package is £35 per month and provides far more resource than I need.

So I have built Nightscout on Heroku instead. It also has constraints on the basic, free package, but the “Hobby” package is $7 per month on a pay-as-you-go basis. As a result, the Heroku capacity model looks significantly better if you’re running OpenAPS and reliant on cloud-based data.

If you’ve built on Azure, then the set-up on Heroku is relatively similar and straightforward. The major difference is that the Bridge component for Dexcom Share is a separate Heroku app. That can be found here.

Once again, there is a slight difference for EU users that’s not entirely clear in the documentation. You’ll need to add an unlisted config variable, DEXCOM_SERVER, and its value needs to be EU. The platform then happily feeds the Dexcom data to your Nightscout site. Because Autosens requires 288 entries, it’s worth letting the Azure and Heroku sites run in parallel, then swapping over to the Heroku site after 24 hours (or whenever your Azure site locks you out again!).
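
If you prefer the command line to the Heroku dashboard's settings page, adding that variable to the bridge app looks something like this (the app name is a placeholder for your own):

    # Set the Dexcom server region for the Share bridge app on Heroku.
    # "my-share-bridge" is a placeholder for your own app's name.
    heroku config:set DEXCOM_SERVER=EU --app my-share-bridge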

That’s the last week of work for me on my OpenAPS set-up. Aside from building a secondary Pi with a clone of the disk image on it, the next steps are likely to be building Loop to run on my iPhone, and then trying to make oref0 the core algorithm within LoopKit. The Advanced Meal Assist functions are good enough to make that worthwhile.

I’ve got a feeling that’s going to take me a while though!
