Medtrum Nano CGM… A tale of two sensors

Late in 2023, Medtrum gave me two of their Nano sensors to try out. Historically, their CGM products have left me underwhelmed at best, as I found them very unreliable.

More recently, they introduced the Nano, which vies for the title of “smallest CGM currently available”.

It has a very small form factor. Its first release, earlier in 2023, was apparently not that great, and an update took place early in the second half of 2023, after which I received these two.

Details of the Medtrum Nano

Aside from being a tiny sensor, the Nano can be used in the Medtrum Easypatch app to enable low glucose suspend functionality with the Medtrum pump systems, or standalone with the Easysense app. It wouldn’t work in Easysense for me, but this may have been because I had both apps installed. It also has an online reporting system.

It can be used either with or without factory calibration. Without it, the app prompts for a calibration after warm-up; with it, you enter a code.

It also has an option to be used as a seven- or 14-day sensor. I imagine most users plump for 14.

Easypatch app – CGM settings

These two different settings produced noticeably different results for me, but we’ll get into that later.

The overall “too long; didn’t read” summary is that I don’t think Medtrum have moved on very much. I wouldn’t want to use this sensor, factory calibrated or otherwise, in either a low glucose suspend based system or a hybrid closed loop, because its performance in my use was terrible.

We’ll go into the details of this assessment next.

Overall performance across two sensors

Time in range data

The image below shows the time in range that the Nano measured across both sensors.

Medtrum Easyview Time in Range across both sensors

That doesn’t look all that good, considering that throughout the period, I was using a hybrid closed loop.

Let’s put that into context with Dexcom data and a consensus error grid.

The data was collected across two noncontiguous periods, so I’ve had to revert to Excel to create the appropriate candlestick.

Dexcom TIR data candlestick

Here we can see a rather different TIR dataset, with far less time below range than the Medtrum detected. What’s perhaps interesting is that the time above range didn’t increase by the same amount as was lost from below range compared to the Medtrum; instead, the time in range saw the increase.
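The TIR figures above come straight from the respective apps, but the metric itself is simple: the percentage of sensor readings falling inside the target band, typically 3.9–10.0 mmol/L. A minimal sketch, using made-up readings rather than the data from this test:

```python
def time_in_range(readings_mmol, low=3.9, high=10.0):
    """Percentage of readings inside the target band (inclusive)."""
    in_range = sum(low <= r <= high for r in readings_mmol)
    return 100.0 * in_range / len(readings_mmol)

# Invented example values, not my actual data.
readings = [4.2, 5.8, 11.3, 7.1, 3.5, 6.4, 9.9, 12.0]
print(round(time_in_range(readings), 1))  # 5 of 8 readings in range -> 62.5
```

Because the metric is just a proportion of readings, any systematic low bias in a sensor pushes time directly from the “in range” bucket into the “below range” one, which is exactly the pattern seen here.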

Consensus Error Grid

When we look at the consensus error grid for the Medtrum, this appears to make a little more sense.

Medtrum Nano Consensus Error Grid

The consensus error grid shows quite a wide dispersion of points with a clear bias on the lower side compared to blood. What’s also clear is that at higher fingerprick sample levels, the Nano frequently produced numbers higher than the blood tests, which would perhaps explain the discrepancy in the time in range graphs.

MARD values vs fingerpricks

There is always discussion about the value of including this data in any evaluation; however, given the performance of this sensor, it’s useful as a point of interest.

Table of Nano MARD data vs fingerpricks

Over the two sensors, it’s clear that there are more days with a negative bias (the sensor generally reads lower than the fingerpricks) than with a positive bias; however, the slightly scary numbers come from the MARD calculations.

Bear in mind that these shouldn’t be considered a measure of accuracy, rather a measure of proximity to fingerprick testing, as the alternative to using a sensor to dose insulin. On some days, the MARD (typically across 8 or 9 tests) was more than 20% different from my blood tests, which would make a significant difference to the decisions one might make when dosing.

The multiple Dexcom sensors used over the same period produced values of 6.1% when comparing to the first sensor and 9.5% when comparing to the second. The differences might reflect the sensors or the glucose variability having an impact, but both values are significantly lower than those from the two Medtrum sensors.
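For anyone wanting to reproduce the MARDf numbers in the table, the calculation is just the mean of the absolute differences between paired sensor and fingerprick readings, expressed relative to the fingerprick value. A minimal sketch with invented paired readings (not the data from this test):

```python
def mard(sensor, reference):
    """Mean absolute relative difference, as a percentage of the reference."""
    diffs = [abs(s - r) / r for s, r in zip(sensor, reference)]
    return 100.0 * sum(diffs) / len(diffs)

# Invented example pairs, not my actual data.
sensor_vals = [5.1, 6.8, 9.2, 4.0]
fingerpricks = [6.0, 7.0, 8.0, 5.5]
print(round(mard(sensor_vals, fingerpricks), 1))  # -> 15.0
```

Note that with only 8 or 9 paired tests per day, a single outlying pair can move the daily figure noticeably, which is one reason MARDf is best read as a rough proximity measure rather than a formal accuracy claim.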

A tale of two sensors?

As I mentioned in the introduction, this test was run across both the sensors provided by Medtrum.

Sensor 1 was used without factory calibration.
Sensor 2 was used with it.

This resulted in significantly different MARD vs fingerprick values, which, given similar TIR data, is somewhat concerning.

When using factory calibration, the MARDf was 22%; without it, 15.1%. This suggests that, as mentioned in the previous section, either there was a significant difference in glucose variability over the two periods, or the factory calibration’s performance is questionable. Obviously two sensors aren’t really enough to draw conclusions, but as (n=1)^2, it’s a little disconcerting.


One thing that has improved compared to my previous tests of the Medtrum CGM systems is the reliability. Both of these sensors lasted a lot longer than I’d experienced before, with one giving data for the entire 14 days and the other lasting 13. That’s a significant improvement on previous experiences.

The new sensor profile is also great. It really is tiny.

That’s about where the positives end though.

As I’ve shown here, when I used the Medtrum sensors, they produced a lot of useless data. Comparing the output here with my first encounter back in 2018, while the form factor is vastly better, the results aren’t (the one sensor back then produced a MARDf of 23.3%).

As I mentioned right at the start, I wouldn’t want to use these sensors as part of a low glucose suspend system, let alone as part of a hybrid closed loop. I’m not even sure I’d be willing to participate in a clinical trial for a product that used them. They just aren’t good enough. 

1 Comment

  1. Thanks for the disappointing report… I hoped so much for these sensors to be good! 😏
    The design of the pump is nice, the design of the sensors too, but useless if it doesn’t perform well! 😪
