Sinocare’s iCan i3 CGM. And another one…

In the words of DJ Khaled, here’s another CGM. The next of the recent CGMs to come to market in Europe, although much less heavily promoted than SiBionics’ recent offerings, is the Sinocare iCan i3 CGM. Which is a bit of a mouthful.

An introduction to the Sinocare iCan i3

According to Sinocare, this is a Gen 3 CGM that uses Direct Electron Transfer to generate a signal, rather than using a mediator. In theory, this should result in a more stable signal over the sensor’s life by reducing enzyme degradation, but that’s a whole other article.

Sinocare’s claims regarding the Gen 3 sensor and MARD

As ever with a new sensor, it has a headline MARD figure in the 8% to 9% range. But that’s expected, as no-one these days would launch a sensor claiming anything else. The user guide states that this number came from a study with a mix of 60 adult T1 and T2 participants.

Performance data from the Sinocare user guide

Let’s just say that the indications are that this isn’t really a representative MARD value.
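
For anyone wondering where a headline MARD number comes from, it’s simply the mean of the absolute percentage differences between paired sensor and reference readings. Here’s a minimal sketch, using invented values rather than anything from Sinocare’s study:

```python
# Minimal sketch of how a MARD figure is calculated.
# The paired values below are invented for illustration; they are not Sinocare's data.

def mard(sensor_values, reference_values):
    """Mean Absolute Relative Difference, expressed as a percentage."""
    relative_diffs = [
        abs(sensor - reference) / reference
        for sensor, reference in zip(sensor_values, reference_values)
    ]
    return 100 * sum(relative_diffs) / len(relative_diffs)

# Hypothetical paired readings in mg/dL (CGM vs reference)
cgm = [102, 145, 180, 88, 250]
ref = [110, 150, 170, 95, 230]

print(f"MARD: {mard(cgm, ref):.1f}%")  # ~6.5% on this toy data
```

The number is therefore entirely driven by which paired readings go into it, which is why the make-up of the study population matters so much.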

To add to that, we have another case where the study data is hidden away somewhere, so we only have the user manual to go on. As you’ll note, this states that the study was adults only; however, the website marketing suggests family use and following. Naughty, naughty…

Image from Sinocare website suggesting sharing family glucose data

Finally, the user guide states that this is a non-adjunctive device, which means that you are officially able to use it to make dosing decisions. If you want to.

Section 1.2 of iCan i3 user guide stating that it can be used to replace fingerpricks

Unboxing and application

But what we’re more interested in here is applying this new sensor.

Why? Because it’s the most complicated process I’ve come across for getting a sensor connected up.

Here are a few videos showing the unboxing, set-up and insertion.

It’s worth noting that Sinocare themselves seem to realise this is a little bit complex, so they won’t allow you to insert the first sensor without watching all the videos they provide in-app to understand what to do. And even then, I had to refer to the paper instructions…

Unboxing video

That was the easy bit. The next two videos cover the sensor set-up and start. As you’ll see, even for an experienced CGM user, this is not the most straightforward sensor to set up and apply. With the videos interjecting constantly, it’s also not all that fast!

Sensor set-up and application

Finally, enabling the sensor. As quick and easy as any other!

Starting the sensor

Truth be told, that’s one of the most annoying and difficult sensor applications I’ve ever done, and I’d be surprised if anyone thought it was a straightforward process.

Next steps

The sensor lasts for 15 days, so as usual, I’ll be collecting sensor data and fingerprick readings to see how well the two compare.
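
In practice, that kind of comparison boils down to matching each fingerprick to the nearest sensor reading in time and then summarising how far apart the pairs sit. A rough sketch of the idea (illustrative only, and not the exact tooling or data behind these write-ups):

```python
# Rough sketch of comparing CGM readings with fingerpricks: pair each fingerprick
# with the nearest-in-time CGM reading, then summarise the differences.
# Illustrative only - not the exact method or data behind the write-ups here.
from datetime import datetime, timedelta

def match_pairs(cgm, fingerpricks, tolerance=timedelta(minutes=5)):
    """cgm and fingerpricks are lists of (timestamp, mg/dL) tuples."""
    pairs = []
    for fp_time, fp_value in fingerpricks:
        nearest_time, nearest_value = min(cgm, key=lambda r: abs(r[0] - fp_time))
        if abs(nearest_time - fp_time) <= tolerance:
            pairs.append((nearest_value, fp_value))
    return pairs

def summarise(pairs):
    diffs = [abs(sensor - fp) / fp for sensor, fp in pairs]
    mard = 100 * sum(diffs) / len(diffs)
    within_20pct = 100 * sum(d <= 0.20 for d in diffs) / len(diffs)
    return mard, within_20pct

cgm = [(datetime(2024, 1, 1, 8, 0), 104), (datetime(2024, 1, 1, 8, 5), 112)]
fingerpricks = [(datetime(2024, 1, 1, 8, 3), 121)]
mard, within_20pct = summarise(match_pairs(cgm, fingerpricks))
print(f"MARD {mard:.1f}%, {within_20pct:.0f}% of pairs within 20%")
```

A tighter matching window gives fewer pairs but a fairer comparison, since glucose can move a fair way in a few minutes.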

Comments

  1. I haven’t seen a follow-up to this video yet, but the real question is what the glucose patterns look like. Regarding your comment about MARD, there’s a good article that explains that “MARD” is probably not a good way to evaluate the quality of a CGM, because MARD compares two separate, single samples of fluid, and glucose is highly volatile in fluids. Moreover, glucose is not evenly distributed throughout the body. In short, these factors mean there’s a limit on how low MARD can realistically go, simply due to noise. What you really need is systemic glucose values, and that relies on a different kind of analysis of glucose patterns from one reading to the next.

    All of this is explained and demonstrated in this article: https://danheller.substack.com/p/the-dexcom-g7-vs-g6-which-is-better

    Ideally, to measure the performance of any CGM, you want to compare how well you manage your T1D using the data it presents to you. Of course, this relies on how often you actually look at your CGM, and then how good you are at making in-the-moment decisions on the patterns you see. If you do neither of these, it raises the question of how much a CGM’s “accuracy” even matters.

    • Hi Dan, if you follow Diabettech, you know we’re well aware that MARD as a measure is nearly always used as a marketing tool, and that the studies undertaken to generate a MARD number are often of poor quality, with populations and configurations designed around producing a result rather than a number that might reflect user experience (https://www.diabettech.com/cgm/lies-damned-lies-and-statistics-the-art-of-the-cgm-accuracy-study/).

      These tests compare interstitial glucose sensors to capillary blood. They are intended to give an indication of the difference between making decisions based on each of these, which is why I tend to wear a reference sensor in comparison (usually either Libre2/3 or Dexcom G6/One). They are also intended to encourage users to exhibit curiosity about manufacturers’ numbers. If I’ve tried a number of sensors and seen values that are vastly different from the marketing numbers, should I consider that sensor a worthwhile option?

      If a sensor produces values that are miles from capillary blood, do I even want to consider using it to make dosing decisions and using the data to manage diabetes? In the case of some of these sensors, no, not at all.

  2. There’s a lot to unpack here. Starting with MARD, it’s required by the FDA in order to get devices approved, which eventually trickles down to the marketing strategies you described. But to your point about comparing interstitial glucose sensors to capillary blood, the MARD values change considerably at different glucose levels AND at different rates of change (ROC) of glucose levels. All CGMs will report the best MARD values for the most stable glucose patterns around 100 mg/dL, but as you get above 200, 250, 300 and beyond, MARD values degrade rapidly. The Dexcom G7’s MARD goes way above 30% at high glucose levels and high ROC.

    So do BGMs for that matter.

    The key thing that the article I cited points out is that “systemic” glucose levels cannot be measured by single fluidic samples, unless that systemic level is low (~100 mg/dL) and very stable. As you go outside of that range, glucose itself is too volatile and unstable to get any sense of SYSTEMIC glucose levels accurately, regardless of the device.

    So, when you say “If a sensor produces values that are miles from capillary blood, do I even want to consider using it to make dosing decisions,” the reality is that most T1Ds have glucose levels and rates of change that make it even less possible to measure “systemic” glucose levels at all, and certainly not rates of change. Those are the only factors that should be used when making any kind of dosing decision (insulin or carbs).

    To assess systemic levels, you need a series of consecutive readings over at least 30 minutes, which will also show the rate of change. Even then, the decisions you make must also take into account what you will be doing in the next 30-90 minutes that can affect glucose levels.

    I really think it’s worth reading the article I linked to.

    • I’ve read your article, Dan. I agree with the points you raised on the G7 in terms of the data that it produces, but as a DIY closed loop user, I don’t necessarily agree with your conclusions – one of the reasons that recent G6 and G7 data is less smoothed is that AID manufacturers asked for the smoothing to be removed. They deploy their own smoothing algorithms to give less jumpy readings and effectively make the readings systemically more useful.

      But back to your points. Yes, at various glucose levels, MARD gets way out. But on that basis, in a relatively normal range the new sensors would be expected to be reasonably well aligned with those of the more established brands. The issue here is that they are not, but produce marketing that tries to suggest that they are, based on studies that contain large proportions of people for whom glycaemic variation is very low.

      What we’re looking at here is whether the publicised MARD values come anywhere near traditionally more “accurate” sensors or blood. And the patterns that the iCan is producing right now suggest otherwise.
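
      As a rough illustration of that range effect (a sketch with invented pairs, not the pipeline used for these comparisons), bucketing paired readings by the reference glucose range shows how a single headline MARD can hide very different behaviour at different levels:

```python
# Sketch: bucket paired (sensor, reference) readings by reference glucose range
# and compute MARD per bucket. The example pairs are invented for illustration.

def mard(pairs):
    diffs = [abs(sensor - reference) / reference for sensor, reference in pairs]
    return 100 * sum(diffs) / len(diffs) if diffs else float("nan")

def mard_by_range(pairs, bands=((0, 70), (70, 180), (180, 400))):
    """pairs: list of (sensor, reference) values in mg/dL."""
    return {
        band: mard([p for p in pairs if band[0] <= p[1] < band[1]])
        for band in bands
    }

example_pairs = [(62, 55), (110, 104), (150, 162), (275, 320)]
for (low, high), value in mard_by_range(example_pairs).items():
    print(f"{low}-{high} mg/dL: MARD {value:.1f}%")
```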

      • Thanks for the reply and for reading the article. But I don’t think you’re seeing the most important point: that glucose is too volatile for any single measurement–or even a small set of readings–to give useful information for making in-the-moment decisions. Even BGM values are pointless–one needs to see trends. The paradox of greater accuracy is that the volatility introduces too much noise into the data.
        To that point, the G6 data isn’t “smoothed,” per se. The algorithm was developed by a physicist whose expertise was in molecular movements in fluidic mediums, and he understood how glucose volatility would give varying values depending on certain systemic conditions–namely, inferred systemic glucose levels and rates of change. There’s quite a bit of analysis going on… it’s not as simple as using math to smooth out individual readings.
        Regarding AID manufacturers, I had proposed to Dexcom that such manufacturers simply get the same “raw” data that the G7 produces, but then use the G6 algorithm to render the kind of data that humans need to see. (I would also argue that automated pumps should as well, but that’s their business.)
        As for insulin pumps–especially closed loop systems–I’ve been conducting research in this area as well. I recently published the second of a three-part series, which you can read here:
        https://danheller.substack.com/p/benefits-and-risks-of-closed-loop-insulin-pumps
        Part three is being wrapped up soon, and it will cover all the technical hurdles that pump manufacturers face in improving the systems beyond where they are now.

        • Given that, without CGM, the majority of people use single-point BGM readings to make decisions, and that even with it, a majority look at the last data point and decide what to do without necessarily taking the trends into account, comparing individual data points isn’t an irrelevant approach (let’s face it, even the MARD tests that are done in a lab for “accuracy” studies are reported on this basis, even if the reference samples are venous and taken every fifteen minutes).

          Prior to CGM, when I made dosing decisions, I used to take two fingerprick readings 5-10 mins apart to try and determine both direction and rough level, which was far more useful for decision making, so I get your point.

          It still doesn’t change the point that in making a decision, trends aren’t much use if the base of the trend is out by a significant amount (if you’re treating a rising trend and you think the base value is valid within the trend, but the real value is 20% higher, for example).

          With regard to the G6, I assume you’re referring to the onboard transmitter algorithm rather than the retrospective smoothing?

          • Correct – the onboard algorithm, not the retrospective smoothing.

            To your other points about how you (or anyone) makes dosing decisions, I regret that this is a much more complex and difficult topic to truly tackle, because T1D is just plain complex and taxing to manage. All our discussion about CGM accuracy is–as my own article concedes–somewhat moot, because most people just can’t, don’t, or don’t know how to engage with the tech in ways that serve them best.

            But I don’t want to let the tech companies off the hook. There still is true science at play, and they’re not moving us in the right direction. On a related note, I just came across a deeply technical article called “Limits to the Evaluation of the Accuracy of Continuous Glucose Monitoring Systems by Clinical Trials” — https://www.mdpi.com/2079-6374/8/2/50

            The authors state: “We present a general picture of the topic as well as tools which allow to correct or at least to estimate the uncertainty of measures of CGM system performance.” In short, they provide a more detailed explanation of my assertion that glucose among and between individual fluidic samples is subject to so much variability (due to fluidic characteristics that cause stochasticity–that is, randomness) that glucose readings are just inherently too volatile to achieve a MARD meaningfully better than what we have today. (Hinting that it’s the wrong type of measurement to apply to CGMs.)

            This is highly related to your comment about insulin pumps, and especially closed-loop systems. I just updated the article that I sent you before with new data (in addition to the collection I already had) showing that A1c values for T1Ds haven’t really changed over the past ten years, despite the introduction of pumps and CLS.

            As they say about data and automated systems, “garbage in, garbage out.” If the data you’re inputting isn’t good, the results you get out will be similarly poor.

            For reference, my article on pumps is https://danheller.substack.com/p/benefits-and-risks-of-closed-loop-insulin-pumps

          • I’m familiar with the article you provided on the limits of evaluating the accuracy of CGM systems. It was one of the first places I went when the new sensors started to proliferate in Europe. A couple of the authors are deeply involved in ongoing work to standardise clinical trial methods and outputs to enable better comparison of sensors.

            I took a different interpretation of the paper and the various relationships, namely that small sample sizes generate far more noise in terms of glucose variability, and that larger sample sizes are required, with a statistically significant number of points at high and low glucose levels. But yes, the key takeaway is that MARD is not a very good measure and is easily manipulated.

            I’m less familiar with the US data and much more familiar with the UK, so I defer to your view on that, but within the UK dataset there was a consistent split in HbA1c levels up until very recently, when CGM became available to everyone. That has seen the proportion of people with HbA1c < 7.5% change from around 29% to around 37%, which is a step in the right direction. To put that in context, CGMs are available to everyone with type 1, whereas pumps have only reached around 10% of the population. One of the things I'll publish in the next update on SiBionics is the TIR data compared to Dexcom. Hint: it isn’t great.

  3. Hello,

    Thank you for this article, it was quite useful.

    I’m new to the world of CGMs and still doing my research on what to get as a first-time buyer. IMO that set-up did not seem that complex. However, I’m more interested in your results and comparison to the other CGMs you use. Did you do a follow-up write-up and I’m just being blind? If not, could you provide your conclusion on the device and its rating against other devices, please?

    • I haven’t yet provided feedback on it, because the results were bad enough that it needs testing with another sensor. That’s a data gathering process that takes a bit of time (and finger damage).

      The application process for this system is significantly more complicated than any of the others out there, most of which don’t require a complicated build process and are simply press down and push a button.
