Show HN: GlycemicGPT – Open-source AI-powered diabetes management
26 points by jlengelbrecht 6 hours ago | 8 comments
I'm a Type 1 diabetic and a software engineer. Last year I went months between endocrinologists with no clinician reviewing my data, so I built the tool I needed, and now I'm open-sourcing it. GlycemicGPT is a self-hosted platform that connects continuous glucose monitors, insulin pumps, and existing Nightscout instances to an AI analysis layer running on your own infrastructure.

Data sources:

- Dexcom G7 (cloud API)
- Tandem t:slim X2 and Mobi pumps (direct BLE)
- Nightscout (point it at your existing instance and you're running in minutes)
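To make the Nightscout path concrete, here's a minimal sketch of pulling recent readings from an existing instance. Nightscout's REST API serves entries at `/api/v1/entries.json`, where each entry carries `sgv` (glucose in mg/dL), `date` (ms epoch), and a trend `direction`; the `Reading` type and the fetch hint below are illustrative, not GlycemicGPT's actual code.

```python
# Illustrative sketch: converting raw Nightscout entries into typed readings.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    mgdl: int
    time: datetime
    direction: str

def parse_entries(entries: list[dict]) -> list[Reading]:
    """Convert raw Nightscout entry dicts into typed glucose readings."""
    return [
        Reading(
            mgdl=e["sgv"],
            time=datetime.fromtimestamp(e["date"] / 1000, tz=timezone.utc),
            direction=e.get("direction", "NONE"),
        )
        for e in entries
        if "sgv" in e  # skip non-glucose entries (e.g. calibration records)
    ]

# In practice you'd fetch with something like:
#   requests.get(f"{base_url}/api/v1/entries.json", params={"count": 288})
sample = [{"sgv": 112, "date": 1700000000000, "direction": "Flat"}]
readings = parse_entries(sample)
```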

What the AI layer does:

- Daily briefs summarizing overnight and 24-hour patterns
- Meal response analysis
- Conversational chat with RAG-backed clinical knowledge
- Predictive alerting with configurable thresholds and caregiver escalation

Important: this is monitoring and analysis only. GlycemicGPT does not deliver insulin, does not control your pump, and is not a closed-loop system. It reads your data and gives you insight on top of it. Your clinical decisions stay between you and your care team.

Architecture:

- Self-hosted via Docker or K8s — the GlycemicGPT stack runs entirely on your hardware
- BYOAI — bring your own AI provider. Use Ollama for fully local operation (no data leaves your hardware), or point it at Claude, OpenAI, or any OpenAI-compatible endpoint if you prefer a hosted model. Data flows directly from your instance to the provider you choose; nothing is routed through any centralized service operated by the project.
- GPL-3.0, no subscriptions, no vendor lock-in
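The BYOAI point boils down to this: any OpenAI-compatible endpoint works, so swapping providers is just a base-URL and key change. A sketch under assumptions — the env-var names and defaults below are illustrative, not GlycemicGPT's actual configuration (Ollama does expose an OpenAI-compatible API at `/v1`):

```python
# Illustrative BYOAI configuration: one request shape, any compatible provider.
import os

def provider_config() -> dict:
    """Read provider settings from the environment; default to local Ollama."""
    return {
        "base_url": os.environ.get("AI_BASE_URL", "http://localhost:11434/v1"),
        "api_key": os.environ.get("AI_API_KEY", "ollama"),  # Ollama ignores the key
        "model": os.environ.get("AI_MODEL", "llama3.1"),
    }

def build_chat_request(cfg: dict, summary: str) -> dict:
    """OpenAI-style chat payload; the same shape works against Claude-compatible
    proxies, OpenAI, or a local Ollama server."""
    return {
        "model": cfg["model"],
        "messages": [
            {"role": "system",
             "content": "You summarize CGM data. Analysis only; no dosing advice."},
            {"role": "user", "content": summary},
        ],
    }

cfg = provider_config()
req = build_chat_request(cfg, "24h: mean 132 mg/dL, 81% time in range 70-180.")
```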

Stack:

- Backend API: FastAPI, Python 3.12, PostgreSQL 16, Redis 7
- Web Dashboard: Next.js 15, React 19, Tailwind CSS, shadcn/ui
- AI Sidecar: TypeScript, Express, multi-provider proxy
- Android App: Kotlin, Jetpack Compose, BLE
- Wear OS: Kotlin, Wear Compose, Watch Face Push API
- Plugin SDK: Kotlin interfaces, capability-based, sandboxed

Looking for contributors — especially folks with BLE/Android experience or anyone in the diabetes tech space. Plugin SDK is documented if you want to add support for new devices. GitHub: https://github.com/GlycemicGPT/GlycemicGPT


tornadofart 13 minutes ago
I'm a T1D and tbh it's not that hard to manage, I just wouldn't need that. But for kids or the elderly, I see a use case.

The hardest lesson was that an unhealthy lifestyle made my diabetes harder to manage: too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.

The biggest pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and significantly improved their HbA1c after accounting for that.

Maybe that information is useful to you.

mhovd 17 minutes ago
The risk-to-benefit ratio of introducing a language model to interpret such clear signals is nowhere near justified.

Monitoring and analytics are important, but they're a solved problem. A language model can only hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it directly misinforms.

surgicalcoder 48 minutes ago
I'm a T1D with an insulin pump looping via AndroidAPS and Nightscout. What does this give me that Nightscout and Autotune don't?

And how do you deal with AI hallucinations?

axegon_ 26 minutes ago
"This will all end in tears, I just know it"

Marvin

foo-bar-baz529 13 minutes ago
What’s the limit on badges in a README?
fnands 2 hours ago
The alert system and sharing with caregivers are solved problems already (e.g. Dexcom's Follow, Abbott's LibreLinkUp).

Do you find the analytics actually help? A lot of this will depend on what you ate and whether or not you logged it.

xyzal 37 minutes ago
This is THE ONE domain where you would want to use classical machine learning and not unreliable LLMs. Unless you want to kill yourself, that is.
stingraycharles 31 minutes ago
Yes, language has nothing to do with this, and an LLM is complete overkill.

Probably something like an SVM for warnings.

Unless the whole purpose is just daily reports.
