Beta Bionics’ iLet and the MAUDE Red Flag: The Severe Hypo Signal You Can’t Ignore

Earlier in January, Beta Bionics’ share price took a very visible hit. The immediate catalyst was a “numbers” story: the company pre-announced Q4 revenue and disclosed that new patient starts (a key leading indicator for future pump+consumables revenue) came in below Street expectations, prompting at least one prominent downgrade and a sharp move in the stock (company pre-announcement / newswire via Yahoo Finance; Drug Delivery Business; Investing.com).

That narrative might be true. But in parallel, a different story was bubbling up in patient channels: posts about insulin overdosing, “the algorithm chased me into the floor,” and—on the flip side—periods of stubborn hyperglycaemia after corrections, suspends, or after the system “learned” the wrong thing.

If adoption is slower than expected, is it just commercial friction, training bandwidth, and the usual channel dynamics… or is there something in the lived experience that’s quietly suppressing enthusiasm?

There’s generally no smoke without fire, so I decided to dive in. I started with anecdotal signal across the web (especially Reddit, plus what’s visible from public Facebook posts). Then I pulled the FDA’s MAUDE database via openFDA to see what falls out when you apply the same lens across iLet and other US AID systems.

The anecdotal pattern (what users are reporting)

On Reddit, the iLet conversation often bifurcates. Some users describe a genuine reduction in cognitive load and better outcomes. But a recurring critical theme is perceived over-delivery around small snacks/meal announcements (“no true snack mode”), followed by volatility: lows → defensive suspension → rebound highs → more corrections (Examples: Longer-term iLet pump issues; “Buyer beware” experience report; Week 2 from a Tandem user; My experience with the new iLet).

Public Facebook groups are harder to use because many of the discussions occur inside private groups dedicated to users of the system. Publicly visible posts are fragmented and non-representative, so I’ve treated that stream as “signal for where to look,” not evidence. Reddit, at least, leaves a trail you can audit.

It’s worth bearing in mind that these links are examples to make the discussion auditable, not a representative sample or an estimate of event rates.


What MAUDE is (and what manufacturers are required to report)

Before we go any further: MAUDE isn’t a “study,” and it isn’t a curated registry. It’s the FDA’s public repository of Medical Device Reports (MDRs)—reports of suspected device-associated deaths, serious injuries, and certain malfunctions. It exists because the FDA requires mandatory reporters (manufacturers, importers, and device user facilities such as hospitals) to submit MDRs when specific reporting triggers are met.

Manufacturers, in particular, must file when they become aware that their device may have caused or contributed to a death or serious injury, or when a malfunction would be likely to cause or contribute to a death or serious injury if it were to recur. Voluntary reports can also be submitted by patients, clinicians, and others.

In other words: MAUDE is a post-market surveillance intake, not an adjudicated verdict—useful for identifying patterns, but noisy by design (FDA: MDR overview; FDA: MDR regulations (21 CFR Part 803); FDA: MAUDE database limitations).

MAUDE is not incidence — but it can still be a powerful signal

MAUDE is not a curated clinical dataset. Reports are not adjudicated. And reporting behaviour varies wildly by manufacturer and over time. That means MAUDE is best treated as a signal-detection tool, not a scoreboard.

And here’s the part that trips up almost every “MAUDE comparison” you’ll see online: malfunction volume is not severity. The FDA’s MDR system explicitly includes malfunctions (not just injuries/deaths), and the agency receives millions of reports. Raw MDR totals often reflect a mix of true clinical harms and compliance-driven reporting volume (FDA: Medical Device Reporting overview; FDA: MAUDE database limitations).

This is not just theory. You can find MDR narratives where a “malfunction” is documented but the report explicitly states there was no adverse impact to blood glucose—useful for QA signal, but not comparable to a clinical injury (Example: MAUDE MDR example).

Peer-reviewed work has also highlighted substantial limitations and variability in MAUDE reporting (under-reporting, missing data fields, contributor differences), which is exactly why denominator choice and query design matter (Mishali et al., 2025 (MAUDE reporting trends); Mishali et al., 2025 (variation among reporting sources); BMJ, 2025 (late adverse event reporting)).

How did we get the data?

Working with ChatGPT to understand the data tagging within MAUDE, we created a Python query to interrogate the database. The approach is summarised in the methodology notes below, with a condensed sketch of the query logic after them; the full scripts can be made available to anyone who wishes to reproduce the analysis.

Data source: MAUDE via the openFDA device/event endpoint (text-based MDR narratives).

Why narratives: structured “problem code” fields are often sparsely populated for modern AID MDRs, so the usable signal is mostly in free-text narrative fields.

Why not raw MDR totals: we deliberately avoid comparing total MDR volume because malfunction reporting is heavily manufacturer-dependent and not a proxy for severity.

Denominator: we focus on injury + death (not total MDRs) to reduce distortion from manufacturer differences in malfunction reporting volume.

UltraTier1 hypoglycaemia filter: we used high-specificity escalation markers only: seizure, loss of consciousness/unconscious, EMS/ambulance/paramedic/911. We intentionally excluded softer terms like “unresponsive” and “glucagon” because they can be mentioned without reflecting the index event.

Mechanism tags (not causality): within UltraTier1 reports we counted overlap with three narrative buckets: algorithm-related language, delivery interruption/occlusion language, and DKA/ketone language. We then classified UltraTier1 reports into algorithm-only, delivery-failure-only, DKA-only, mixed (≥2 overlaps), or unclassified.

DKA cross-check: to test whether iLet is simply “generally different” (rather than hypo-skewed), we also ran a separate UltraTier1 DKA lens and compared it across systems using the same injury/death denominator.

Anecdotal web feedback: Reddit/Facebook content in this article is treated as illustrative examples of themes reported by users, not as a systematic review, representative sample, or estimate of incidence.
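
In condensed form, the pipeline looks like the sketch below: pull every injury/death MDR per system from the openFDA device/event endpoint, concatenate the free-text narratives, and apply the UltraTier1 regex. Treat it as a minimal sketch rather than the production script: the brand-name search strings and the exact regex phrasing are illustrative assumptions (real MAUDE brand_name values vary, and our scripts match more variants), while the openFDA calls and pagination pattern are standard.

```python
import re
import time
import requests

BASE = "https://api.fda.gov/device/event.json"

# Brand-name queries are illustrative assumptions; brand_name values in
# MAUDE vary, and the production scripts match more variants than shown.
SYSTEMS = {
    "Beta Bionics iLet": 'device.brand_name:"ilet"',
    "Insulet Omnipod 5": 'device.brand_name:"omnipod 5"',
    "Medtronic MiniMed 670G/770G/780G": (
        '(device.brand_name:"minimed 670g" OR '
        'device.brand_name:"minimed 770g" OR '
        'device.brand_name:"minimed 780g")'
    ),
    "Tandem t:slim X2 Control-IQ": 'device.brand_name:"t:slim x2"',
}

# UltraTier1 hypo filter: high-specificity escalation markers only.
# Softer terms ("unresponsive", "glucagon") are deliberately excluded.
ULTRATIER1_HYPO = re.compile(
    r"\b(seizure|loss of consciousness|unconscious|ambulance|paramedic|ems|911)\b",
    re.IGNORECASE,
)

def fetch_injury_death(brand_query: str, page_size: int = 100) -> list[dict]:
    """Page through all Injury/Death MDRs matching one brand query."""
    search = f'{brand_query} AND (event_type:"Injury" OR event_type:"Death")'
    reports, skip = [], 0
    while True:
        resp = requests.get(
            BASE, params={"search": search, "limit": page_size, "skip": skip}
        )
        if resp.status_code == 404:  # openFDA returns 404 when nothing matches
            break
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        reports.extend(batch)
        if len(batch) < page_size:
            break
        skip += page_size
        time.sleep(0.5)  # stay well inside unauthenticated rate limits
    return reports

def narrative(report: dict) -> str:
    """Concatenate every free-text block in an MDR (event description,
    manufacturer narrative, etc.) into a single searchable string."""
    return " ".join(t.get("text", "") for t in report.get("mdr_text", []))

if __name__ == "__main__":
    for name, query in SYSTEMS.items():
        reports = fetch_injury_death(query)
        hits = sum(1 for r in reports if ULTRATIER1_HYPO.search(narrative(r)))
        denom = len(reports)
        pct = 100 * hits / denom if denom else 0.0
        print(f"{name}: {hits}/{denom} injury+death reports = {pct:.1f}%")
```

Because the injury/death pull is the shared denominator, the same fetched reports feed both the hypo lens here and the DKA lens later.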


The headline result — UltraTier1 hypo inside injury/death reports

This is the metric that matters if you care about hypoglycaemia risk and you want to avoid “malfunction inflation.” It asks: when MAUDE categorises a report as injury or death, how often does the narrative include UltraTier1 hypoglycaemia escalation markers?

Figure 1. UltraTier1 hypoglycaemia (seizure / loss of consciousness / EMS escalation markers) as a percentage of MAUDE injury + death reports, by AID system (openFDA MAUDE extract through 2026-01-20).

System | UltraTier1 Hypo (% of injury+death) | UltraTier1 Hypo reports (count) | Injury+death reports (count)
Beta Bionics iLet | 37.9 | 29 | 76
Insulet Omnipod 5 | 12.9 | 76 | 589
Medtronic MiniMed 670G/770G/780G | 5.1 | 260 | 5065
Tandem t:slim X2 Control-IQ | 1.9 | 29 | 1508
Table 1. Data shown in Figure 1. Percentages rounded to 1 decimal.

The separation is stark. In this dataset, iLet sits at 37.9% of injury/death reports containing UltraTier1 markers; Omnipod 5 is at 12.9%, Medtronic at 5.1%, and Tandem at 1.9%.

Two caveats: iLet’s post-launch timeframe is shorter than that of older systems, and smaller denominators can make proportions “lumpy”—which is exactly why this should be treated as a signal, not an incidence estimate.

MAUDE cannot tell you incidence, but it can tell you something very specific: when harm is reported to MAUDE for iLet, the narrative is far more likely to include strong escalation language consistent with an ultra-severe hypoglycaemic story. That is the signal. Everything else is interpretation.

Sanity check: the other end of the severity spectrum (DKA)

At this point, the obvious counterargument is “that’s just MAUDE weirdness.” And sometimes it is. So we ran a second, independent check at the other end of the severity spectrum: DKA.

Why does this matter? Because the dominant “severe endpoint” signature differs by device ecosystem. DKA narratives often align with delivery interruption (occlusion/dislodgement), prolonged under-delivery, and delayed detection of rising glucose/ketones. Severe hypoglycaemia narratives, by contrast, often align with over-delivery, aggressive corrections, or unstable controller–human interactions. If iLet is simply “generally noisier” in MAUDE, you’d expect it to show a similarly extreme pattern for DKA. If it’s genuinely skewed, the DKA chart should look different.
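
Mechanically, the DKA lens is the same pipeline with one substitution: the narrative filter. A minimal sketch, reusing the helpers from the extraction code above (the term list is again an illustrative assumption, not the production pattern):

```python
# UltraTier1 DKA lens: identical injury/death denominator, different filter.
ULTRATIER1_DKA = re.compile(
    r"\b(dka|diabetic ketoacidosis|ketoacidosis)\b", re.IGNORECASE
)

for name, query in SYSTEMS.items():
    reports = fetch_injury_death(query)  # same denominator as the hypo lens
    hits = sum(1 for r in reports if ULTRATIER1_DKA.search(narrative(r)))
    pct = 100 * hits / len(reports) if reports else 0.0
    print(f"{name}: {pct:.1f}% of injury+death reports")
```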

Figure 2. UltraTier1 DKA language as a percentage of MAUDE injury + death reports, by AID system (same time window and same denominator approach as Figure 1).

System | UltraTier1 DKA (% of injury+death) | UltraTier1 DKA reports (count) | Injury+death reports (count)
Beta Bionics iLet | 6.6 | 5 | 76
Insulet Omnipod 5 | 23.5 | 138 | 589
Medtronic MiniMed 670G/770G/780G | 8.6 | 438 | 5065
Tandem t:slim X2 Control-IQ | 1.8 | 27 | 1508
Table 2. Data shown in Figure 2. Percentages rounded to 1 decimal.

This matters because the DKA pattern does not mirror the hypoglycaemia pattern. Under this DKA lens, iLet does not separate in the same way—while Omnipod 5 is the system that stands out on DKA language in serious reports.

Anchoring it with numbers: Omnipod 5 is 23.5%, versus Medtronic 8.6%, iLet 6.6%, and Tandem 1.8% under the same injury/death-conditioned framework.

The cleanest view: Hypo vs DKA, side-by-side, same denominator

The grouped chart below puts both severe endpoints on one frame: the same injury/death denominator, but two different UltraTier1 narrative lenses. This is the closest thing MAUDE can give you to a “signature comparison” without pretending it’s incidence.

Figure 3. Grouped comparison of UltraTier1 hypoglycaemia vs UltraTier1 DKA as % of MAUDE injury + death reports (same denominator; different severe endpoints).

System | UltraTier1 Hypo (% of injury+death) | UltraTier1 DKA (% of injury+death)
Beta Bionics iLet | 37.9 | 6.6
Insulet Omnipod 5 | 12.9 | 23.5
Medtronic MiniMed 670G/770G/780G | 5.1 | 8.6
Tandem t:slim X2 Control-IQ | 1.9 | 1.8
Table 3. Data shown in Figure 3. Percentages rounded to 1 decimal.

This is where the story tightens: iLet looks hypo-skewed, while Omnipod 5 looks DKA-skewed under the same injury/death denominator. Tandem sits low on both in this framing. That doesn’t make this a “winner” chart. It makes it a failure-mode signature chart.
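
For anyone who wants to re-draw Figure 3 from Table 3, a minimal matplotlib sketch (values hard-coded from the table above):

```python
import matplotlib.pyplot as plt
import numpy as np

systems = ["iLet", "Omnipod 5", "MiniMed 670/770/780G", "Control-IQ"]
hypo = [37.9, 12.9, 5.1, 1.9]  # Table 3: UltraTier1 hypo, % of injury+death
dka = [6.6, 23.5, 8.6, 1.8]    # Table 3: UltraTier1 DKA, % of injury+death

x = np.arange(len(systems))
width = 0.38
fig, ax = plt.subplots(figsize=(8, 4.5))
ax.bar(x - width / 2, hypo, width, label="UltraTier1 hypo")
ax.bar(x + width / 2, dka, width, label="UltraTier1 DKA")
ax.set_xticks(x)
ax.set_xticklabels(systems, rotation=15, ha="right")
ax.set_ylabel("% of MAUDE injury+death reports")
ax.legend()
plt.tight_layout()
plt.show()
```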

DKA risk isn’t hypothetical — and Omnipod’s MAUDE “severe signature” skews that way

If we’re going to be intellectually honest, we can’t frame iLet as the only system with a distinctive severe-event signature. The DKA cross-check exists for a reason: it tests whether one product is “just noisy” versus “skewed toward a specific failure mode.” And when you run that lens, Omnipod doesn’t get to hide behind the iLet headline.

In the injury/death–conditioned DKA chart (Figure 2) and the head-to-head comparison (Figure 3), Omnipod 5 shows a materially higher proportion of injury/death reports containing UltraTier1 DKA language than the other ecosystems in the same framework. That doesn’t mean “Omnipod causes DKA.” It means that when serious harm is reported to MAUDE in connection with Omnipod 5, the narrative is relatively more likely to include DKA / ketoacidosis framing with escalation cues.

Mechanistically, that’s not a shocking place for a patch-pump ecosystem to land. DKA is the classic “delivery failure” endpoint: prolonged under-delivery, occlusion-equivalent failures, cannula issues, and adhesion/dislodgement all compromise insulin delivery, and without a long-acting basal backstop, glucose can deteriorate fast. It also raises a question we haven’t investigated here: whether certain demographics are more prone to these failure modes, and whether those same demographics are also more likely to choose Omnipod over alternatives.

Put bluntly: iLet looks hypo-skewed; Omnipod looks DKA-skewed. Neither is a “gotcha.” It’s a signature. One ecosystem concentrates severe narratives around over-delivery/instability (or the perception of it), while another concentrates severe narratives around delivery interruption and the absence of a basal backstop.

There’s a second, uncomfortable point here for anyone trying to use MAUDE as a scoreboard: severity-conditioning doesn’t magically fix the reporting biases. “Injury” and “Death” are still MAUDE categories, not adjudicated clinical endpoints. But by running the DKA cross-check, we at least demonstrate that the severe signal is not simply “one company is loud.” Different ecosystems light up in different ways.

If Insulet (or any patch-pump maker) wanted to make this boring, the path is the same as for Beta Bionics: publish denominator-based surveillance. Show DKA admissions per patient-year, stratified by time-on-pod and suspected delivery failure modes, so we can separate “patch-pump physics” from specific preventable failure patterns. Until then, the MAUDE signature is what it is: a warning light, not a calibrated meter.

There is a practical implication (without pretending this is medical advice): if you’re on a patch pump, treat “unexplained sustained highs” as a delivery problem until proven otherwise. Check ketones earlier than you think you need to, swap the pod sooner than you want to, and don’t let an algorithm reassure you when insulin might not be reaching your body. It’s also a selling point for integrating Abbott’s dual CGM/CKM (continuous ketone monitoring) sensor.

What kind of stories are these? Are they multifactorial, or attributable to a single issue?

Once you see iLet separate on the UltraTier1 metric, the obvious question is: what’s driving it? MAUDE can’t adjudicate causality, but it can tell you what kind of story is being written when those escalation markers appear.

To get beyond “it’s severe” and into “what’s being described,” we decomposed UltraTier1 reports using three narrative tags: algorithm-related language, delivery interruption/occlusion language, and DKA/ketone language. Then we classified UltraTier1 reports into “algorithm-only,” “delivery-failure-only,” “DKA-only,” “mixed,” or “unclassified.”
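
In code terms, the decomposition is a second pass over the UltraTier1 narratives with three tag regexes. A minimal sketch; the term lists below are illustrative assumptions rather than the production patterns:

```python
import re

# Mechanism tags (narrative language, not causal adjudication).
# Term lists are illustrative assumptions.
TAGS = {
    "algorithm": re.compile(
        r"\b(algorithm|auto ?mode|closed[- ]?loop|dosing decision)\b", re.I
    ),
    "delivery-failure": re.compile(
        r"\b(occlusion|occluded|kink\w*|dislodg\w*|cannula|no insulin delivery)\b", re.I
    ),
    "dka": re.compile(r"\b(dka|ketoacidosis|ketones?)\b", re.I),
}

def classify(text: str) -> str:
    """Bucket an UltraTier1 narrative: <tag>-only, mixed (>=2 tags), or unclassified."""
    present = [name for name, rx in TAGS.items() if rx.search(text)]
    if len(present) >= 2:
        return "mixed"
    if len(present) == 1:
        return f"{present[0]}-only"
    return "unclassified"
```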

Figure 4. Composition of UltraTier1 hypoglycaemia reports (as % of UltraTier1) by narrative mechanism tags: Algorithm-only, Delivery-failure-only, DKA-only, Mixed (≥2 tags present), and Unclassified (UltraTier1 present but none of the three tags matched). Text-tagging is based on MDR narrative language and is not causal adjudication.

System | Algorithm-only | Delivery-failure-only | DKA-only | Mixed (≥2 tags) | Unclassified
Beta Bionics iLet | 5.0 | 11.2 | 13.3 | 19.5 | 51.0
Insulet Omnipod 5 | 1.8 | 34.1 | 8.1 | 15.8 | 40.1
Medtronic MiniMed 670G/770G/780G | 50.8 | 3.2 | 2.6 | 32.1 | 11.2
Tandem t:slim X2 Control-IQ | 5.6 | 9.9 | 6.6 | 2.8 | 75.0
Table 4. Data shown in Figure 4. All columns are % of UltraTier1 reports, rounded to 1 decimal.

The patterns are meaningfully different by system. Omnipod 5’s UltraTier1 narratives skew strongly toward delivery-failure-only language, which is broadly consistent with patch-pump failure modes dominating the severe-event narratives. Medtronic’s UltraTier1 narratives are dominated by algorithm-tagged language, which likely reflects how often Auto Mode behaviour is documented in MDR narratives rather than establishing blame.

iLet’s UltraTier1 narratives show a larger mixed component, with substantial DKA/ketone language and delivery-interruption language appearing in the same narratives. That doesn’t mean iLet “causes DKA and ultra-severe hypoglycaemia at the same time.” More plausibly, it suggests that many of the worst narratives are described as multi-factor, unstable sequences—exactly the kind of “instability spiral” patients describe when they talk about over-delivery → defensive suspension → rebound chaos.


What this could mean mechanistically (and what it doesn’t prove)

Let’s start by being careful. A narrative containing “ambulance” does not tell you whether the ambulance was called because of the device, because of user behaviour, or because of an unrelated chain of events. This is not incidence and it is not causal adjudication.

But when you restrict to MAUDE’s highest-signal slice (injury/death) and apply an UltraTier1 definition that is intentionally hard to trigger, you are left with a small set of plausible explanations for a strong outlier pattern.

One is the simplest: a genuine difference in ultra-severe hypoglycaemia risk (at least in the population and timeframe represented by MAUDE reporting). Another is a reporting artefact specific to iLet: narrative style, categorisation habits, or the way severe stories are written and submitted. A third is the one that matters commercially: early user experience problems that erode trust, even if they become manageable later.

This is where iLet’s philosophy becomes relevant. iLet’s promise is simplification: no ratios, no ISF, fewer knobs, less tuning. For the right person, that’s liberation. For the wrong person—or the right person during the wrong phase—it can feel like being strapped into an algorithm you cannot meaningfully steer.

It’s also worth acknowledging iLet’s research lineage. Beta Bionics has long positioned iLet as a platform that could support both insulin-only and bihormonal control approaches. That matters because it influences how people interpret behaviour at the bedside: users will sometimes attribute a “style” of control to assumptions baked into the design philosophy, even when the shipping product is insulin-only.

And when the perceived failure mode is “it overdosed me,” the emotional response isn’t mild frustration. It’s fear. Fear suppresses adoption faster than any competitor feature list.

Why might iLet be showing this signal?

Everything below is deliberately framed as hypothesis, not conclusion. MAUDE narratives are messy, reporters vary, and none of this proves causality. But when a system shows this level of separation on a severity-conditioned signal, you’re allowed to ask: what mechanisms could plausibly generate the pattern?

1) Early “learning” overshoot + insulin physics. iLet’s value proposition is that it adapts dosing without the traditional user-tuned parameters (basal rates, ISF, I:C ratios). That simplification is powerful, but it also changes the failure mode. If the algorithm commits aggressively early—especially around meal announcements or correction behaviour—and then adapts in the wrong direction, the system can overshoot into lows. Suspension can reduce future insulin, but it can’t retract what’s already on board. In practice, that can create the classic loop spiral: low → suspend → rebound high → corrective dosing → renewed volatility. A subset of users report exactly this lived pattern.

2) Minimal user controls can turn edge cases into “you’re along for the ride.” In mature AID ecosystems, users and clinicians have a toolkit for safety and comfort: temporary targets, activity modes, sensitivity adjustments, correction limits, basal profile shaping, manual mode escape hatches, and sometimes highly granular control of bolus behaviour. iLet’s philosophy is the opposite: fewer knobs, fewer ways to “tune” the controller. That simplification is the product — but it also changes what happens when you hit an edge case. If someone is trending low because of exercise, alcohol, delayed gastric emptying, a sensor artefact, or just a badly-timed meal announcement, the inability to apply targeted damping (e.g., a temporary higher target, a more conservative correction stance, or a deliberate mode switch) can make the experience feel binary: trust the black box or fight it with carbs and manual corrections. And when users start fighting the controller, volatility is exactly what you get.

In short: removing knobs reduces burden on good days — but it can increase helplessness on bad days.

3) Correction culture: when the user fights the controller. A surprising number of severe narratives in MAUDE across all AID systems are not “the algorithm did X” but “the person did X because the algorithm did Y.” If a user loses trust and starts layering manual corrections on top of adaptive control, you can get a feedback loop where the algorithm adapts to behaviour that is itself reactive. That can temporarily look like “overdosing” when the real issue is controller + human oscillating in the same direction. iLet’s “hands-off” design goal may actually make this worse for certain personalities: when people can’t see or adjust the usual knobs, some compensate by correcting more, not less.

4) Conservative-to-aggressive transitions during onboarding. Many AID systems have a “ramp” problem: early-phase conservatism (to avoid lows) can produce persistent highs, which leads to user frustration and corrective behaviour. If the system then adapts to that behaviour quickly, you can swing from high bias to low bias. If iLet’s adaptive pacing is faster in some circumstances, the onboarding period could concentrate volatility into the first weeks—exactly the timeframe where churn, social media posts, and MAUDE reporting likelihood may be highest.

5) CGM mismatch and physiology surprises. Severe events aren’t always “algorithm failure.” If CGM readings are wrong (compression, lag, sensor error) or if physiology changes abruptly (exercise, alcohol, gastroparesis, illness), any closed-loop controller can be misled. What differs between systems is how much slack exists: temp targets, manual mode escape hatches, and user-driven tuning can sometimes dampen these scenarios. A system optimised to remove user tuning can, in edge cases, feel less forgiving—even if the root cause isn’t the controller.

6) Reporting and categorisation artefacts. Finally, the boring explanation: iLet may simply be over-represented in MAUDE injury/death narratives due to reporting behaviours. A smaller user base can generate “lumpy” signals, and early-market products often have a disproportionate share of onboarding-related escalations and heightened vigilance. If iLet users (or educators) are more likely to mention EMS involvement explicitly, that alone can inflate a text-based UltraTier1 filter without implying higher incidence. This is precisely why the right next step is denominator-based, post-market surveillance per patient-year.

7) “Phantom glucagon” (a what-if that would be easy to falsify). A recurring community suspicion with iLet is a kind of “phantom glucagon” hypothesis: if the control philosophy and safety assumptions were originally optimised for a future bihormonal world (where glucagon can actively rescue lows), could an insulin-only implementation ever behave as if there’s a rescue channel available—i.e., tolerate or even induce deeper lows on the assumption that recovery can be actively accelerated? To be clear: there is no public evidence that the commercial insulin-only iLet is literally running a bihormonal algorithm with glucagon removed. But as a “what if,” it’s a useful framing question because it’s testable. If something like this were true at a behavioural level, you’d expect to see patterns such as: (a) earlier/more aggressive insulin commitments, (b) rescue that relies heavily on suspension rather than proactive avoidance, and (c) user reports of lows that feel “engineered” rather than incidental.

The productive way to handle this is not conspiracy. It’s transparency: publish controller guard-rail logic at a high level (what conditions cap meal dosing and corrections; how rapidly adaptation can change; what low-avoidance constraints exist), and publish post-market severe hypo rates per patient-year stratified by time-on-system. If the “phantom glucagon” intuition is wrong (it probably is), the data will make it boring very quickly.

None of these hypotheses require malice, incompetence, or a “bad algorithm.” They do, however, share one implication: if a subset of users is hitting a volatility pattern that feels like over-delivery plus rebound chaos, you’ll see it in anecdotes, you’ll see it in churn, and you may see it echoed—imperfectly—in MAUDE.

Why this matters for uptake

This is where the January share price story and the MAUDE story collide. People don’t churn AID systems because the algorithm isn’t perfect. They churn because the system is unpredictably scary or predictably annoying. AID adoption is a trust economy: users will tolerate quirks if they feel in control; they will not tolerate repeated events they interpret as the system trying to kill them.

If the early iLet experience is mixed and a subset of users repeatedly run into “over-delivery” dynamics, you get the commercial pattern investors hate: early adopters sign up because the promise is compelling; some have a rough start; they post about it publicly; clinicians become cautious; uptake softens.

What would settle this properly

If Beta Bionics wants to rebut this convincingly, the path is straightforward: publish denominator-based post-market safety surveillance that answers the exact questions MAUDE can’t.

Here’s what would actually settle this: severe hypoglycaemia rescue events per patient-year, stratified by time-on-system (first weeks vs steady state), by phenotype proxies (baseline A1c, total daily dose bands, prior severe hypo), and by onboarding model (education cadence and follow-up).

Likewise for Insulet: DKA admissions per patient-year, stratified by time-on-pod and suspected delivery failure modes, so we can separate “patch-pump physics” from specific preventable failure patterns.
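
To make “denominator-based” concrete: the core rate is trivial to compute once exposure time is known. A sketch with invented numbers purely for illustration:

```python
def events_per_100_patient_years(n_events: int, total_days_on_system: float) -> float:
    """Denominator-based rate: events per 100 patient-years of exposure."""
    patient_years = total_days_on_system / 365.25
    return 100 * n_events / patient_years

# Hypothetical cohort: 12 severe-hypo rescue events over 45,000 on-system days
# works out to roughly 9.7 events per 100 patient-years. Stratify by
# time-on-system, phenotype bands, and onboarding model to answer the
# questions MAUDE can't.
print(round(events_per_100_patient_years(12, 45_000), 1))
```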

If the explanation is “early learning hump,” the curve should show it. If the explanation is “reporting artefact,” denominator-based rates should be reassuring. If the explanation is “real risk for a subset,” that will show too—and then the response isn’t marketing; it’s engineering: guard rails, adaptive pacing, and user-facing safety controls.


What do we take from this?

Two things can be true at the same time: iLet is an ambitious attempt to simplify AID by removing the settings burden—and insulin pharmacodynamics punishes brave automation when it commits early and learns fast. You can’t unsend insulin.

As David Kliff (Diabetic Investor) has said on multiple occasions:

“These systems are 90% autopilot, but the patient still has to be the pilot for that critical 10%.”

As we said earlier in this article: removing knobs reduces burden on good days — but it can increase helplessness on bad days.

This analysis is not a verdict. It’s a signal. And the signal is that serious MAUDE narratives are not uniform across AID ecosystems: iLet’s injury/death reports are disproportionately rich in UltraTier1 hypoglycaemia escalation markers, while Omnipod’s serious narratives skew more toward DKA language under the same denominator lens. That’s not a scoreboard. It’s a set of failure-mode signatures worth interrogating further.

First, don’t use the MAUDE data to panic. Use it to ask better questions.

If Beta Bionics can publish high-quality, denominator-based post-market surveillance, this entire discussion becomes unnecessary.

But if they can’t?

The market’s January question — “why isn’t this taking off?” — may have a quieter, more human answer: trust.

Sources & links (for readers who want more)

Market / share price catalyst: Beta Bionics preliminary Q4 announcement (newswire via Yahoo Finance); Analyst downgrade coverage (Drug Delivery Business); Downgrade + new patient starts context (Investing.com).

Examples of Reddit discussion (anecdotal): Longer-term iLet issues; “Buyer beware”; Week 2 from a Tandem user; My experience with the new iLet.

Why MAUDE is “noisy” (and why denominator choice matters): FDA MDR overview; FDA MDR regulations (21 CFR Part 803); FDA MAUDE limitations; Mishali et al., 2025 (MAUDE reporting trends); Mishali et al., 2025 (variation among reporting sources); BMJ, 2025 (late manufacturer reporting); Example “no adverse impact to BG” malfunction MDR.
