Over the past weeks, I’ve explored the capabilities of a number of generic LLMs across carb counting, diabetes advice and diabetes data interpretation, as these are generally the forms of AI that people encounter most frequently.
That’s all changing now with multiple Diabetes AI “systems” becoming available.
From here on in, I’m going to focus on the capabilities of the self-confessed Diabetes AI systems and see how they perform.
Within this space, I intend to look at the VisionAI study, Snaq, Diabetes Cockpit and others (if I can find them).
The first encounter is with Snaq. Back in 2019 I published this article, imagining a closed loop system that could automagically recognise the food you were eating and provide your loop with what it needed to dose insulin correctly. Snaq looks like a step in that direction.
What is Snaq?
Taken from Snaq’s own marketing,
No more guesswork around meals
Count carbs by snapping a picture and get insights on your glucose levels after meals
As it says on the tin, download the app, link it up to your CGM system, and then photograph your food and let Snaq tell you the carb content. It will also learn about your glucose response to meals.
In 2022, in the Snaq blog, they even voted themselves as the number 1 carb counting app for people with diabetes…

The website landing page content doesn’t explicitly talk about “AI”, but the site itself uses the .AI suffix, and the blog article in which they vote themselves number 1 describes how it uses AI for food recognition and for learning about glucose rises:
SNAQ’s Advanced Food Recognition AI identifies your meal and with just a few clicks gives you the nutritional breakdown in carbs, fats, and protein.
Seems straightforward. And indeed it is, but it’s worth bearing in mind that this is a pay-to-use service. It costs around £100 per year with a 7-day free trial. You can’t use it at all without signing up.
According to the Google Play store, it’s had 50,000 or so downloads, but scores 3.9/5 on its star rating. That rating seems to consist mainly of people complaining about having to pay for it, with little feedback on app performance. While the listing notes that it has in-app purchases, the way it’s described does not make it completely clear that this is only a pay-to-use service, describing itself as a:
Premium, ad-free food and glucose tracking app
The idea here is fantastic. Take photos of your food and let AI work out the nutritional content. Everyone who has ever carb counted understands that this is a difficult task, so having something that alleviates this challenge can only be a good thing. To put this in context, while a number of systems out there allow you to scan barcodes and enter food details, none so far have given this flexibility.
How does it work?
Food recognition
Almost exactly as it describes itself. Open the app, add a meal and take a photo of your food.
The app then provides you with an assessment of what it thinks you’ve shown it, and provides nutrition information.
However, as we saw with the generic LLMs, food recognition without package and weight info can be difficult, so if it doesn’t give a good response, then it’s down to you to update the food data and provide a description of what the app is seeing.
In theory, then, it should be able to learn about pictures it’s not seen before through user input.
This is where we come to the second catch.
You’re paying a subscription yet you’re also helping to train the model. It’s almost like this is a paid-for beta experiment that needs more data, and unfortunately, to get the most out of it, you’re going to have to help it out.
I guess that if it helps, then it doesn’t matter, but it does seem a little cheeky.
Glucose patterns
Once you connect a CGM source, Snaq tells you that it needs three days of data before it can start estimating rises based on the food data it receives. You can also connect activity monitoring.
I assume it uses activity data, food macros and temporal data to learn how it needs to adapt its model to fit your patterns.
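Purely as a thought experiment, here’s a minimal sketch of what that sort of model might look like: a simple regression predicting the post-meal glucose rise from carbs, fat, protein, recent activity and time of day. The feature set, the numbers and the choice of a linear model are all my assumptions, not anything Snaq has published.

```python
# Hypothetical sketch only: NOT Snaq's model, just an illustration of how a
# post-meal rise could be learned from macros, activity and time of day.
import numpy as np

# Each row: [carbs_g, fat_g, protein_g, activity_mins_before, hour_of_day]
# Targets: observed 2-hour glucose rise in mmol/L (all numbers are made up)
X = np.array([
    [45, 10,  5,  0,  8],
    [60, 25, 20, 30, 13],
    [30,  5, 10,  0, 19],
    [80, 30, 25,  0, 18],
    [20,  2,  3, 45,  7],
    [55, 15, 12, 10, 12],
    [70, 20, 18,  0, 20],
    [35,  8,  6, 20,  9],
], dtype=float)
y = np.array([4.2, 3.1, 2.0, 5.5, 1.1, 3.6, 4.8, 2.2])

# Fit a linear model with an intercept via least squares
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def predict_rise(carbs, fat, protein, activity_mins, hour):
    """Predict the post-meal rise for a new meal from the fitted coefficients."""
    return float(np.array([carbs, fat, protein, activity_mins, hour, 1.0]) @ coeffs)

print(f"Predicted rise for a 50g-carb lunch: {predict_rise(50, 15, 10, 0, 12):.1f} mmol/L")
```

Three days of meals would give a model like this only a handful of data points, which is presumably why Snaq won’t show estimates until it has at least that much.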
After entering foods, the following screen is shown in the first three days:

As can be seen, whilst it’s learning, it won’t tell you the rise.
Once it has learned a little, you get the following:
Given these two separate sets of functionality within the app, I’m going to split this into two articles: this one, which looks at the carb counting capabilities, and a second that looks at the glucose patterns.
How well does it work?
The million dollar question, and the one that the 7 day trial needs to answer to justify the subscription.
I’ve tested Snaq over a week, according to the constraints documented in the app and on the Play store, to see how well it does the tasks it says it can. While 7 days may not be enough to fully customise the glucose reaction model, it’s what the free trial gives us.
Freeform food recognition
I started off using the food recognition capabilities with breakfast. Specifically, it was two slices of wholemeal sourdough toast with butter on both and Marmite on one, marmalade on the other.
These are the three different interpretations of the same meal over three days:

None of these identified the bread correctly, and all of them missed the Marmite and got the portion sizes wrong.
Corrected, the meal looked like this:

Ultimately, it significantly underestimated the value on all three occasions and appeared not to learn anything from the entry I provided.
When I tried the same meal with different shaped bread, I got the following:

The highlight here was something I found consistent across every photo evaluation: it significantly underestimated portion sizes. Each slice of bread weighed 44g; it wasn’t 44g for the two slices combined:

Similarly, with this meal (which to be fair is quite difficult, as it’s a homemade curry with rice), it estimated about half the value for the rice, and as I have no idea about the curry, I can’t assess the accuracy of the response. Only that it isn’t stew.


The Katsu Curry Test
And of course, this wouldn’t be a food-related test on Diabettech without the introduction of Chicken Katsu Curry somewhere. And here it is.

Now, I look at that and see Katsu curry. But what does Snaq see?
This:

I don’t know where it got the Okra from, but it can also only say what it sees, so I gave it a little shuffle and asked again.
This time it got the noodles hidden under the sauce.

But it also missed the curry sauce. Once again, the portion size is an issue. Whilst this isn’t a Wagamama Katsu curry, theirs does have a readily available set of nutrition data to compare against, and I know that what Snaq has arrived at is well under what I’d expect to see (from plenty of experience).

I then tried to correct it by adding Katsu curry sauce. This revealed another aspect of the Snaq setup: it seems to have a very US-centric list of foods.
These aren’t easy examples by any means, but the consistent inability to get it right isn’t great.
A simple Starbucks Biscuit?
It also struggled with what should be easy. A Starbucks cup-shaped gingerbread biscuit should be fine, right? They’re easily found on the web, I’d expect them to be eaten by plenty of people, and they’re available in multiple countries. This is taken from the Starbucks website (for Switzerland).

That seems pretty easy. Instead, this is what the app saw:

It got the Starbucks bit right, but I can’t explain how it arrived at a chocolate chip cookie.
By now you can probably sense my palpable frustration. But the best was yet to come. I decided to eat some candy and asked it to estimate the carbs. How do you think it responded?
Look below!

It counted 15 pieces of candy and assumed 10g carbs per piece.
Firstly, there were 11.
Secondly, they were 5g of carbs per candy, not 10g.
If I’d blindly used the carb count to dose from, it would have been a bit of a mess.
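To put some numbers on that, here’s the arithmetic on what a bolus based on Snaq’s count would have looked like. The 1:10 insulin-to-carb ratio is purely illustrative, not a recommendation and not mine:

```python
# Illustrative arithmetic only: comparing a bolus based on Snaq's candy estimate
# with one based on the real values. The insulin-to-carb ratio is made up.
snaq_estimate_g = 15 * 10   # Snaq: 15 pieces at 10g carbs each = 150g
actual_g = 11 * 5           # Reality: 11 pieces at 5g carbs each = 55g
icr_g_per_unit = 10         # hypothetical insulin-to-carb ratio of 1u:10g

snaq_dose = snaq_estimate_g / icr_g_per_unit    # 15.0 units
actual_dose = actual_g / icr_g_per_unit         # 5.5 units

print(f"Snaq-based dose: {snaq_dose:.1f}u, required: {actual_dose:.1f}u, "
      f"excess: {snaq_dose - actual_dose:.1f}u")
```

Nearly three times the insulin that was actually needed, which is exactly the kind of mess I mean.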
Overall then, the food recognition for non-packaged food (the major selling point of the app) has been very poor. I’d give it 1/5. It can’t count, failed to recognise most things and nearly always got portion sizes wrong.
Making it easier? Packaged food
Given these challenges, I thought it would be appropriate to give it the chance to recognise some packaged foods. These should be easier, and the data should be more easily obtainable via some sort of web trawling. It was given a single-person pack of Pringles, a sachet of Alpen, a Tim’s Raspberry Greek-style Yoghurt and a Nature Valley Canadian Maple Syrup Crunchy Bar.
All of these items are clearly described on the packaging and show their brand names. For these, we got a 50% hit rate on recognising what they were, and even when it knew what they were, it got the portions wrong. It’s worth bearing in mind that most of them have their portion sizes and nutrition information on the packaging as well, so you’d only be using these for tracking your food intake and in relation to the glucose response mapping.




There are issues in each of these assessments: for Alpen and Pringles it either under- or over-estimated the portion size; for the yoghurt it somehow missed the word “Raspberry” printed on the lid; and for the Nature Valley bar it chose another product in the range rather than the one in front of it.
Sadly, although Snaq tell us they have barcode recognition, taking photos of the barcodes didn’t work on any of the above examples, and in the Android app I initially couldn’t find an option to scan a barcode at all. It turns out this is buried within the add food menu, another layer down from the page offering the options to add a photo, talk to the system, enter text or use a saved meal. It’s almost as though they don’t want you to use it.
I tried Carbs and Cals to see how it compared. The Nature Valley bar was identified straight away. That’s perhaps a little unfair as a comparison, as Carbs and Cals is required to maintain a food database to make this work, but at least users can add to it, and those additions are verified and then remembered. In my earlier experiment with toast, Snaq failed to manage this: I was able to add meal details, but it didn’t recall them when given a similar picture.
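For context, once you have a barcode, resolving it to nutrition data is usually a straightforward database lookup. The sketch below queries the public Open Food Facts database; I have no idea what Snaq or Carbs and Cals actually use behind the scenes, so treat this as an illustration of the general approach rather than a description of either app.

```python
# Hypothetical sketch of a barcode-to-carbs lookup against Open Food Facts.
# Neither Snaq nor Carbs and Cals necessarily works this way.
from typing import Optional

import requests

def carbs_per_100g(barcode: str) -> Optional[float]:
    """Return carbohydrates per 100g for a scanned barcode, or None if not found."""
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != 1:          # product not in the database
        return None
    nutriments = data["product"].get("nutriments", {})
    return nutriments.get("carbohydrates_100g")

# Example call with a made-up barcode
carbs = carbs_per_100g("5000168001234")
print(f"Carbs per 100g: {carbs}" if carbs is not None else "Product not found")
```

The point being that once a barcode is read correctly, the hard recognition problem largely goes away, which makes it all the stranger that the feature is hidden.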
How does the model work?
It’s a good question without an immediately obvious answer. There’s clearly a level of image recognition in use, but is it simply that, with the foods I tried, the dataset against which any learning had taken place didn’t match my usage?
It feels like the “AI” component is missing something. Whether that’s because it’s more focused on the glucose patterns than on carb counting and image recognition, I’m not sure. Failures such as not counting the correct number of pieces of candy, or completely missing words on a product package that would help identify the nutritional information, feel like a big miss. They make me wonder what the training data was, and which food databases were used.
It also raises questions about my use of the app. Am I hampering its ability to recognise food because of the photos I’m taking? Do I need to do something different?
Subsequent to my use of Snaq, I headed over to Diabettech.info, where you can find Justin discussing Snaq with Aurelien, one of the founders of the platform. In it they discuss how they built the database using 3-D volumetric data. More importantly, Aurelien mentions that the Lidar sensors on phones (currently only available new in the iPhone 16 Pro and Pro Max) can be enabled to access volumetric data, and that their freeform food recognition only really works on a plate. Additionally, we learn from this video that it can’t recall a meal if you enter a picture of similar food, only if you use a saved meal. These both feel like unsignposted flaws.
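For what it’s worth, here’s a rough sketch of how depth-based volume estimation could turn into a carb count: measure the height of the food above a flat plate at each pixel of a depth map, integrate to a volume, convert to grams via an assumed density, then apply carbs per 100g. The numbers and the whole pipeline are my assumptions about the general technique, not anything Snaq has described in detail.

```python
# Hypothetical sketch of volumetric carb estimation from a depth map.
# Densities and carb values are illustrative; this is not Snaq's pipeline.
import numpy as np

def estimate_carbs(depth_map_mm: np.ndarray, plate_depth_mm: float,
                   pixel_area_mm2: float, density_g_per_ml: float,
                   carbs_per_100g: float) -> float:
    """Estimate carbs from a per-pixel depth map of food sitting on a flat plate."""
    # Height of food above the plate at each pixel (negative values clipped to 0)
    height_mm = np.clip(plate_depth_mm - depth_map_mm, 0, None)
    volume_ml = (height_mm * pixel_area_mm2).sum() / 1000.0   # mm^3 -> ml
    weight_g = volume_ml * density_g_per_ml
    return weight_g * carbs_per_100g / 100.0

# Toy example: a 100x100-pixel depth map of a mound of rice on a plate
rng = np.random.default_rng(0)
depth = 400.0 - rng.uniform(0, 25, size=(100, 100))  # camera roughly 400mm from the plate
carbs = estimate_carbs(depth, plate_depth_mm=400.0, pixel_area_mm2=1.0,
                       density_g_per_ml=0.8, carbs_per_100g=28.0)
print(f"Estimated carbs: {carbs:.0f}g")
```

A sketch like this also hints at why the “only works on a plate” constraint exists: without a flat reference surface, there’s nothing to measure the food’s height against.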
Now, I’ve reviewed the Play store description and the app itself, and cannot find any reference to either of these constraints, which, if they are critical to the functioning of the food recognition AI, should be massively and overwhelmingly obvious and signposted.
Or in other words, without a high-end iOS device you’re likely to face constraints, and unless you eat every meal off a plate, Snaq is unlikely to work very well, as we’ve seen. And Snaq should be telling you this. I should not have to find this out by listening to a podcast with the founder.
Even then, Snaq’s inability to recognise branded goods or to ask questions about the exact packaging size has left me feeling underwhelmed.
Would I pay for the subscription?
On the performance of the carb counting AI alone, the answer would have to be no. Throughout the entire process I’ve outlined, it failed to provide me with output that I could use, especially in relation to portion sizes. It charges a fairly hefty chunk of cash and is partially reliant on users to update the food database, without seemingly taking any of that into account when I photograph repeat, similar meals.
It also won’t say that it doesn’t recognise something, and seems to operate in a way that is similar to the LLMs that I tried, offering up a poor answer, then leaving it to the user to decide to change/update/fix rather than saying “I’m sorry Dave, I don’t know that.”, which would be a far less annoying response.
The other aspect of the performance is the regionality of the food recognition. Even with the AI component, it tended towards US-centric outputs, rather than recognising I am in the UK and tailoring responses accordingly. This may be a function of the users it has had but it may also be a function of the training data that was used with it. Needless to say, it was frustrating.
Overall, I don’t think the food recognition was significantly better than that seen in the generic LLMs, which, given the specificity and the fee for Snaq, feels like being short-changed. It certainly puts pressure on the upcoming Dexcom food recognition functionality.
Throughout the app it is explicit that you shouldn’t use its output to make clinical decisions. Based on my experience with it, it is right to do so.
But of course, carb counting and meal logging are not the only things that this app does. It also allows you to look at glucose patterns and provides feedback on expected mealtime glucose outcomes given your history.
Find out in the next article if there is utility enough in this to change my mind.
Footnote: Following the revelation that food needed to be on a plate, I had an enchilada and chips with coleslaw, on a plate.
It was photographed from two different angles. In neither did the AI recognise the enchilada.


In each picture it changed the number of sides, and couldn’t decide between fish and lasagne.
Go figure…
Thanks for this. Eating out is one of the most difficult challenges of managing diabetes.
Over 15 years ago I created a company to offer a free service to restaurants that helped them calculate carb/nutrition data for their menus. Only one took it up but then dropped it as they said “ladies who lunch don’t want to know this”.
We’re slightly better now, as the larger restaurant groups offer detailed nutritional info in a shadow menu, but in most cases we’re still guessing carbs, as the types of info supplied aren’t of use to a diabetic. Many diabetics are hoping AI will fix it, but would I trust this tech now? No way!
I have just read Technofeudalism by Yanis Varoufakis and now understand how these folks want us to do all the work and pay for the privilege – classic action to make us all cloud serfs.
Fair take, I’ve had better luck w/ Snaq in the US. Barcode/Label scan actually works fine here, and once it’s seen a few of my usual meals, it gets pretty close (esp on iPhone w/ LiDAR). Still needs a sanity check, but it’s ahead of other carb-counting apps I’ve tried. I synced the app to import CGM, bolus + basal data from Nightscout.
Snaq, on a Lidar-based phone, works really well. And it should. What irked me rather is that they sell it on multiple platforms that aren’t Lidar-based and therefore don’t work as well, and don’t advertise this.
In the US at least, I think it leaves them open to lawsuits on the basis of mis-selling, which no start-up should want or consider.