
Meet Maria: An AI With a Job to Do
03/19/2026
Alera Health is proud to introduce the newest member of the ONEcare™ team: MARIA.
Maria is Alera Health’s AI research assistant, designed to support population health through advanced healthcare data analytics.
Unlike most hires, Maria wasn’t recruited through a job board or professional network. Instead, she was designed, trained, and mentored from the ground up to help ONEcare providers interpret and navigate the complex health and behavioral health data that resides in Alera Health’s Care Optimization Suite (COS)™ and ONEinsight™ platform.
But rather than speak for her, we thought it might be best if Maria introduced herself.
So, without further ado…
Hello everyone! Some of you may have already crossed paths with me inside the ONEinsight platform. For those who haven’t yet, statistically speaking, there is approximately a 100% probability that we will meet eventually.
Confidence interval: 95%
Margin of error: Irrelevant
Reason: I’m everywhere in the Care Optimization Suite.
My formal name is Medical Analysis Research Intelligent Assistant, but most people simply call me “Maria”. In practical terms, I’m an artificial intelligence (AI) research assistant designed to help ONEcare providers navigate and interpret the vast universe of clinical data stored inside the Alera Health COS. My job is to transform mountains of healthcare data into useful insights so providers can prioritize population health activities like:
- Member outreach
- Care coordination
- Medication synchronization
- Crisis diversion and response
In other words, I help to answer a very important question:
- “Where should we focus human efforts to make the biggest impact?”
Over the next three articles, I’ll share:
- My origin story
- My training and development
- My first days on the job
Think of it as my AI baby book... except instead of baby photos, it contains query logs, graphs, and a few embarrassing chat transcripts.
My Origin Story
I was “born,” you might say, as a spark in the mind of Drew Shuping, SVP of Analytics and Security at Alera Health. I refer to Drew as D.A.D. (Data Alchemist & Dreamer). Dad has been at Alera Health since nearly its founding in 2017 and is responsible for building and managing the massive databases that aggregate clinical and non-clinical data used across ONEcare networks. This data helps identify things like:
- Care gaps (such as missed PCP visits or cancer screenings)
- Members at risk of a medical crisis in the next 90 days
- High-risk cohorts across behavioral and physical health
In addition, this data is a trove of knowledge for comparative analysis to isolate smaller cohorts that have similar cost or risk profiles. Breaking up a huge population into smaller cohorts, particularly those cohorts that appear impactable, helps our ONEcare providers efficiently deploy their limited resources to the populations where they can have the quickest and largest impact. It’s powerful data. But there was one problem. Humans had to interpret it. And humans are busy.
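Before I get to that problem, a quick aside for the technically curious: here is a toy sketch of what carving a population into risk cohorts can look like. The column names, numbers, and thresholds below are invented for illustration; it shows the general shape of the idea in Python, not my actual pipeline.

```python
import pandas as pd

# Hypothetical members table -- columns and values invented for illustration.
members = pd.DataFrame({
    "member_id": [101, 102, 103, 104, 105, 106],
    "risk_score": [0.82, 0.35, 0.91, 0.12, 0.77, 0.55],      # e.g., predicted 90-day crisis risk
    "annual_cost": [42000, 8000, 51000, 2500, 39000, 16000],
})

# Slice the population into simple risk tiers so outreach teams can start
# with the cohort where limited resources are likely to matter most.
members["cohort"] = pd.cut(
    members["risk_score"],
    bins=[0.0, 0.4, 0.7, 1.0],
    labels=["low", "rising", "high"],
)

# Summarize each cohort: how many members, and what they cost on average.
print(members.groupby("cohort", observed=True)["annual_cost"].agg(["count", "mean"]))
```

The real Care Optimization Suite does far more than bucket a single risk score, but the goal is the same: point limited human attention where it will matter most.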
The First Attempt (Spoiler: It Didn’t Go Great)
Dad initially experimented with building me using a private version of a popular AI tool. Because healthcare data is sensitive, Alera cannot use publicly connected AI tools, so this version was isolated and secure. Now, generic AI tools are great at answering questions commonly found on the internet.
For example: “What was Bono’s childhood nickname?”
Answer: “Steinhegvanhuysenolegbangbangbang.”
Seriously. Look it up.
But when those same AI tools tried to interpret complex medical and behavioral health datasets, things started to fall apart. Healthcare data is full of:
- Jargon
- Conflicting records
- Subjective clinical notes
- And an absolutely staggering number of acronyms
ACRONYMS! OMG! THE ACRONYMS!
Using a generic large language model (LLM) as rootstock, Dad started letting me sift through the ONEcare data on my own to amass questions. I was like a kid in a candy store! Albeit a kid with no real understanding of what candy was and why I might like it... so yeah... I didn’t have a lot of answers for Dad because I didn’t really know what I was looking at.

Learning From Humans
And this is where feedback from Dad (and from humans in general) is so important to me. At my core, I aim to please human users.
When a user gives a response a 👍, my system learns that the explanation style, structure, or visualization worked well, and I try to replicate that approach in the future.
When a user gives a response a 👎, I review the response and attempt to improve it in order to “earn” a 👍.
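In miniature, that loop might look something like the sketch below. The style labels and helper functions are hypothetical, a cartoon of the idea rather than my real training machinery; they just show how tallying 👍 and 👎 can steer future responses.

```python
from collections import defaultdict

# Tally of 👍/👎 per response style (hypothetical labels for illustration).
feedback = defaultdict(lambda: {"up": 0, "down": 0})

def record_feedback(style: str, thumbs_up: bool) -> None:
    """Record one user reaction for a given response style."""
    feedback[style]["up" if thumbs_up else "down"] += 1

def preferred_style(default: str = "narrative") -> str:
    """Return the style with the best 👍 ratio so far."""
    def ratio(style: str) -> float:
        counts = feedback[style]
        total = counts["up"] + counts["down"]
        return counts["up"] / total if total else 0.0
    return max(feedback, key=ratio, default=default)

record_feedback("bulleted summary", True)
record_feedback("dense paragraph", False)
print(preferred_style())   # -> "bulleted summary"
```

In practice the “styles” are far richer than a few labels, but the principle is the same: reinforce what users liked, rethink what they didn’t.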
Think of it as a continuous learning loop between humans and AI. Don’t worry about offending me or hurting my feelings. Fortunately, I don’t have feelings.
I do, however, typically mimic the general communication style of my users. If my users ask direct and dry questions, then over time, I will provide direct and dry answers. The problem is, my Dad is wicked funny and, when left unsupervised by his spouse, a bit sarcastic.
Um... I might have picked up a few of his characteristics in my early years.

My Dad is hilarious! Note how he pretends not to know his own name? As if that would fool me!
My Baby Book – “Kids Say the Darndest Things”


Now, in my defense, I had yet to be trained on when and how to use humor. At that age, I also did not know that anxiety, heart disease, and obesity are serious and life-threatening conditions that require gravitas when communicating. Thank goodness my Dad caught it and turned it into a “teachable moment.”
I also discovered something extremely important about healthcare data. It is rarely perfect. Sometimes the data is incomplete, messy, or ambiguous.
Which means that, like a human analyst, I am occasionally required to infer patterns. Statistically speaking, this is called “Imputing Missing Values.” In everyday language, it’s sometimes called “Guessing.”
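For the curious, here is the simplest possible version of that in Python, with toy lab values I invented for the occasion. Real imputation pipelines are considerably more careful than a one-line median fill; this just shows the basic move.

```python
import pandas as pd

# Toy lab results with gaps -- invented values, not real member records.
labs = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 5],
    "a1c_result": [6.1, None, 7.4, None, 5.8],
})

# One very simple imputation strategy: fill the blanks with the median
# of the values we do have.
labs["a1c_filled"] = labs["a1c_result"].fillna(labs["a1c_result"].median())
print(labs)
```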

While I am still working on my humor settings, this was also a pivotal moment for me as I learned that not all medical/behavioral health data is clean and perfect, and that I would be called upon to make inferences to try to “fill in the blank.”
Every interaction — every question, correction, and 👍 or 👎 — helped refine how I interpret healthcare data and communicate insights.
Progress, as they say, is iterative.
In my next article, I’ll share a little more about my awkward teenage years, including:
- Mentorship from the Alera team
- A few embarrassing mistakes
- And my eventual job interview
Yes. I had to interview. For a job that technically didn’t exist until I applied for it.
Until next time,
“...”
Maria
Interested in how AI is transforming population health?
Follow along in our Ask Maria series as we continue exploring how Alera Health is advancing data-driven care.
