Imagine, for a moment, that you’re walking down the street when, all of a sudden, an alert appears in the corner of your vision: “I think you may need 10g of carbs; your glucose levels are falling a little more quickly than you’d like”. So you have a couple of jelly babies.
A little later, you arrive at a cafe for lunch with a friend. The club sandwich appears on your plate, and as you glance at it, another alert appears in the corner of your vision: “Inject yourself, your dose is ready”. You do. About an hour later, a similar alert appears, suggesting an additional bolus to cover the protein and fat content of your meal.
It sounds far-fetched, and yet the technology to achieve a lot of this is starting to become available.
For further details, read on…
Whilst sitting in the audience at the Diabetes UK Cymru Type 1 Technology conference, watching @Nerdabetic (or Kamil, as we like to call him) discuss Foodvisor, I thought that maybe, just maybe, the future isn’t so far away.
Firstly, if you want to look at food and see what its nutritional value might be, try http://www.foodvisor.io. It takes a photo of food, then works out what it is and the portion size, before advising you on nutrition data and glycaemic index.
While it’s not yet perfect, accuracy should improve as user corrections are fed back into the database, and even now, what’s there is not too bad.
But that’s only food. What about giving you that advice on insulin? That’s where we move to https://quintech.io/.
Quin is a start-up with a very interesting aim:
We use science, engineering and design to help people who take insulin make the best possible decisions.
We are in awe of what millions of experts who take insulin are doing to keep themselves going everyday.
Along the way, we turn their knowledge into new science to help others who take insulin and advance research into insulin-treated diabetes.
The devices we carry with us every day are a powerful platform for creating, managing and formalising self-care experience.
By combining them with large measures of empathy and ingenuity, we can create new insights that will inspire new approaches to treatment and research. Together.
Reading that, you might think it’s just another app. However, they capture user data in their app using all sorts of sensors, and are able to make recommendations as to what to bolus, and when. Signing up is currently under NDA, but they have a CE mark, are undertaking ongoing research and development, and expect to begin clinical trials in early 2020. They put user-centricity at the core of their design and apply machine learning to the data they collect, meaning they can provide personalised advice about how you live your life with diabetes: what you should do, and when you should do it.
I’ve previously talked about the Pendiq 2.0, which has a two-way Bluetooth connection that allows dose information to be sent to the pen, so you just press go.
Now imagine something like Quin identifying your location, detecting that you’re sitting down, hearing you order food, and recommending a small pre-bolus; then, once it hears the waiter deliver the meal, checking the food content via Foodvisor and sending the dosing advice to your pen. Somehow it sounds far-fetched, and yet….
We’ve discussed Foodvisor identifying your meal, but how would you capture that image? And how would the software tell you that you need carbs, or that your pen is ready? That’s where you’d probably need “smartglasses”: something like these from Vuzix, or Focals, by North.
While this type of technology is still at an early stage, and both these solutions are a little clunky (as were the Google Glass products), they’d provide an interesting model for interaction. Of course, you could also utilise something like Amazon’s Echo Buds to talk to the system, and have it talk to you, instead of visual cues via smartglasses…
Could this work with artificial pancreas systems?
There’s no reason why not. In fact, it’s probably easier to get there with those. Using GPS and some form of machine learning (like Quin), you could identify by time and location whether you may be about to eat, and invoke “eating soon” mode. With either the microphones or the glasses’ camera supplying meal data via Foodvisor, the meal content could be sent to your APS without you having to enter anything manually. Combined with OpenAPS’s UAM and SMB features, and a faster-acting insulin like Fiasp, you’d effectively have an even more capable artificial pancreas: one that knew your location, recognised your food and acted accordingly.
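To make that a little more concrete, here’s a minimal sketch of what such an integration loop might look like. Everything in it is hypothetical: the names (`MealEstimate`, `should_enter_eating_soon`, `carb_entry_for_aps`) are invented, none of it corresponds to a real Quin, Foodvisor or OpenAPS API, and the protein/fat conversion factors are purely illustrative, not clinical guidance.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names and numbers are invented for
# illustration and do not reflect any real Quin, Foodvisor or OpenAPS API.

@dataclass
class MealEstimate:
    """What an image-recognition service might return for a plate of food."""
    carbs_g: float
    protein_g: float
    fat_g: float

def should_enter_eating_soon(location: str, hour: int,
                             known_meal_spots: dict) -> bool:
    """Guess from location and time of day whether a meal is imminent,
    i.e. whether the APS should invoke 'eating soon' mode."""
    usual_hours = known_meal_spots.get(location)
    return usual_hours is not None and hour in usual_hours

def carb_entry_for_aps(meal: MealEstimate) -> dict:
    """Translate a recognised meal into a carb entry an APS could consume.
    Protein and fat are converted to 'carb equivalents' at rough factors
    of 50% and 10% — an illustrative assumption, not clinical guidance."""
    extended = 0.5 * meal.protein_g + 0.1 * meal.fat_g
    return {"carbs": round(meal.carbs_g), "extended_carbs": round(extended)}

# Example: lunchtime at a known cafe, club sandwich recognised on the plate.
spots = {"cafe": {12, 13}, "home": {7, 8, 18, 19}}
print(should_enter_eating_soon("cafe", 12, spots))
club_sandwich = MealEstimate(carbs_g=45, protein_g=30, fat_g=25)
print(carb_entry_for_aps(club_sandwich))
```

The point of the sketch is simply that each step — location trigger, meal estimate, carb entry — is a small, well-defined hand-off, which is exactly the kind of glue code the WeAreNotWaiting community already writes.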
I’m sure there are developers in the WeAreNotWaiting world who could write the code to do a lot of this integration already.
Do you think this could really happen?
All I’ve done is take a number of things that are available right now, either in development or in the wild, and consider what the art of the possible might be. While what’s described here might take a little while, I think we’ll see some of these tools built and used by people with diabetes doing the work themselves, and I’m fairly sure we’ll see some of this within a couple of years. Given a cure is always at least five years away, I think I’d put my money on this!