The Algorithm Will See You Now
Efficiency is AI-first psychiatry’s fatal flaw
AI really can make everything easier. One of the notable standouts from the latest Y Combinator batch is Legion Health, a startup using AI to prescribe psychiatric medication.
The YC launch post for Legion Health opens with a claim: “the first mental health company ever to have approval to let AI provide actual medical care.”
Their core thesis: psychiatric medication renewals are too slow, too expensive, and too dependent on clinician time that does not exist. The solution is to automate the clinician out of the renewal loop. It is written with the confidence of a founding team that has cracked a distribution problem.
Their solution is simple.
All they need is a “stable” patient on one of fifteen medications. And $19 a month.
AI handles the rest.
The claim is accurate. It is indeed much easier.
What if that process wasn’t supposed to be easy? What if it was designed not to be simple in the first place?
How Legion Health Got Here
Legion was founded by three Princeton roommates: Yash Patel, a former Medicare/Medicaid policy analyst at the Congressional Budget Office; Danny Wilson, who leads AI and engineering; and Arthur MacWaters, who runs product and operations.
They entered Y Combinator’s Summer 2021 batch with a B2B marketplace model. Legion’s first product connected mental health clinicians to healthcare organizations that needed clinical capacity on demand. Hospitals, digital health companies, value-based care providers. By early 2022, they had 800 clinicians across all 50 states and eight enterprise customers. The product had no prescribing authority. It was staffing infrastructure.
When large language models matured, the founders pivoted. In late 2024, they raised a $6.3 million seed round and relaunched as a full-stack, direct-to-consumer telepsychiatry clinic. Legion hired its own clinicians, contracted with insurers, and began seeing patients directly in Texas, focused on medication management. The AI ran the operational layer: scheduling, intake, documentation, risk-scoring, clinical summaries. The clinicians made the prescribing decisions. The founders watched how psychiatric care actually functions and used that data to identify which workflows could be automated.
In early 2026, they received regulatory authorization from Utah to take the next step: autonomous AI renewal of fifteen psychiatric maintenance medications for patients the system classifies as stable, with no physician reviewing individual decisions.
The founders described their approach as the Tesla model. Start with a world-class clinic staffed by real providers, instrument everything in real conditions, automate step by step. A play to encode domain expertise by having models learn from real practitioners.
Legion Health has not published outcome data from its supervised Texas clinic. The only real-world evidence base for removing physician oversight from individual prescribing decisions is internal and undisclosed.
The question is whether the clinical experience they accumulated as operators of a supervised clinic is the same credential that qualifies them to remove the clinician from the loop.
It is not. And the reason it is not reveals everything about how this product was built.
Who Built This, and What They Cannot See
When asked why AI is central to the product, Patel’s answer was not about clinical outcomes. It was about economics:
“The AI is not just a cool thing to do; it’s really meaningfully creating a margin profile.”
That framing is honest. It is also the framing of someone whose primary credential is understanding how healthcare systems pay for care, not what happens inside the clinical relationship when that care is delivered.
Legion’s founding team has no clinical experience. No one is a psychiatrist, a nurse practitioner, or a pharmacist. The company has a medical advisory board with credentialed psychiatrists and is hiring a collaborating physician. That clinical scaffolding is real. But running a supervised clinic as an operator is not the same as having built it as a clinician. The product reflects what the founding team understood when they made the core architectural choices: autonomous AI in the prescribing loop, clinical advisors consulted around those choices rather than driving them.
From Patel’s vantage point, the provider in the renewal loop looks like a cost problem. Clinician time is scarce, expensive, and poorly distributed. The renewal appointment consumes that time for a process that, on paper, looks routine. The logical move is to automate it.
The problem is that psychiatric medication renewal is not routine. It looks routine from the outside. From inside the clinical relationship, it is one of the last places where a clinician can detect that the patient in front of them is less stable than the form suggests.
That is the category error at the center of this product. Legion looked at the clinician in the renewal loop and classified that role as operational. Everything that follows flows from that single misclassification: the product architecture, the pricing, the regulatory strategy, the expansion plan.
Not All Friction Is the Same
It’s no secret that American healthcare is fundamentally broken. Much of that friction is poorly designed infrastructure. But some of it is a speed limit in a school zone. Both keep you from getting to work as fast as you’d like. Treating them as one problem is a critical mistake.
Prior authorization delays, billing overhead, scheduling systems that have not been modernized, documentation that consumes clinician time without adding clinical value. That friction is worth removing. The AMA estimates the prior authorization burden alone consumes hours of physician time per week. That is the friction Legion Health should be targeting.
The provider reviewing a medication renewal is something different. A psychiatrist or nurse practitioner on a fifteen-minute telehealth renewal is not performing an administrative function. They are observing things no symptom questionnaire captures. Is the patient’s affect flat, elevated, incongruent with what they are saying? Are they speaking differently than last time, faster or slower, more agitated or more withdrawn? These are the signals that clinical training builds over years. They are not in the text box.
And the text box is all the AI has.
A patient renewing an SSRI for depression may have recently started a blood pressure medication their cardiologist prescribed. They may be taking a supplement that creates a serotonin interaction risk. They do not mention these things because they do not know they are relevant. Serotonin syndrome can be fatal and is triggered by exactly these kinds of undisclosed interactions. A clinician who knows the patient’s chart looks for this. The renewal checklist does not.
The pilot’s eligibility criteria are meant to address some of this. Only patients with no psychiatric hospitalization and no medication changes within the past year qualify as “stable.” That guardrail sounds meaningful until you consider what it actually screens for. It screens out the recently hospitalized and the recently adjusted. It does not screen for the patient whose stability is about to break down, which is the patient the clinical relationship exists to detect. A patient can meet every stability criterion on intake and be in crisis two weeks later. That gap is not a scheduling inefficiency. It is the reason the renewal visit exists.
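To make the guardrail concrete, here is a minimal sketch of the screening logic as publicly described. The function, field names, and medication list are hypothetical illustrations, not Legion’s code.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical reconstruction of the pilot's "stable" screen as publicly
# described: no psychiatric hospitalization and no medication change in
# the past year, and the drug is on the approved list of fifteen.
APPROVED_MEDS = {"sertraline", "escitalopram", "bupropion"}  # illustrative subset

@dataclass
class PatientSnapshot:
    medication: str
    last_hospitalization: date | None  # most recent psychiatric admission, if any
    last_med_change: date | None       # most recent dose or drug change, if any

def is_stable(p: PatientSnapshot, today: date) -> bool:
    cutoff = today - timedelta(days=365)
    return (
        p.medication in APPROVED_MEDS
        and (p.last_hospitalization is None or p.last_hospitalization < cutoff)
        and (p.last_med_change is None or p.last_med_change < cutoff)
    )
```

Every input is historical. Nothing in this check models where the patient is heading, because that information is not in the data it reads.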
From the policy side, the renewal visit and the prior authorization delay look like the same category of problem. They are both friction. They both consume time. They both cost money. The distinction between them, that one is waste and the other is the safety layer, is only visible from inside the clinical relationship.
Legion automated both because from where the founders stood, they could not see the line between them.
The Access Gap in Mental Healthcare Is Real
More than 122 million Americans live in areas where mental health providers are in short supply. Wait times stretch for weeks. The system is failing the patients who need it most.
But the access gap is not a product problem. It is a reimbursement and enforcement problem that has been running for seventeen years.
Paid out of pocket, a psychiatrist’s initial consultation runs $550. Medicare reimburses $216. Medicaid reimburses $177. Private insurers pay an average of 13 to 14 percent below Medicare rates for behavioral health.
As of 2024, only 18% of psychiatrists listed in Medicaid provider directories accept new patients. The Mental Health Parity and Addiction Equity Act was passed in 2008. It requires that mental health coverage be no more restrictive than coverage for other medical conditions, but has no enforcement mandate.
So there is still no incentive to fix the access gap. Payers are structured to manage medical loss ratios. PBMs are structured to maximize rebates from drug manufacturers, an incentive that frequently points away from the medications that produce the best clinical outcomes.
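To make the incentive concrete, here is the arithmetic on the figures above; the 13.5 percent discount is just the midpoint of the quoted private-payer range.

```python
cash_rate = 550                    # out-of-pocket initial consultation
medicare = 216                     # Medicare reimbursement for the same visit
medicaid = 177                     # Medicaid reimbursement
private = medicare * (1 - 0.135)   # midpoint of "13 to 14 percent below Medicare"

for payer, rate in [("Medicare", medicare), ("Medicaid", medicaid), ("Private", private)]:
    print(f"{payer}: ${rate:.0f} ({rate / cash_rate:.0%} of the cash rate)")
# Medicare: $216 (39% of the cash rate)
# Medicaid: $177 (32% of the cash rate)
# Private: $187 (34% of the cash rate)
```

A psychiatrist who goes in-network forfeits roughly two thirds of their cash rate on every visit. Staying out of network is the rational choice, and the directory numbers above show the profession making it.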
I spent time at the corporate venture arm of one of the largest payers and PBMs in the country. That position gave me a direct view of the incentive misalignment that the healthcare system runs on. Not as a policy abstraction. As financial decisions made in actual budget meetings.
Legion Health is not solving this. A $19 monthly subscription removes the provider from the loop without addressing why the provider is scarce. The reimbursement problem remains. The parity enforcement failure remains. What changes is that the patient now has an AI chatbot instead of a clinician, at a price point the broken system made possible precisely because, on its own terms, the human was too expensive.
The product does not begin with the most underserved patients, the most complex cases, or the least connected communities. It begins with the easiest patients to serve: already diagnosed, already prescribed, already stable enough to qualify, already capable of navigating a digital intake. That is not where the system is most broken. It is where automation is easiest to deploy.
Regulatory Arbitrage Isn’t Healthcare
Utah was not chosen because it has the worst access problem. It was chosen because it has the most permissive regulatory environment for this kind of pilot.
Doctronic, the state’s other AI prescribing experiment, is already in talks with regulators in Arizona, Texas, and Wyoming about comparable sandbox frameworks. Congress introduced the Healthy Technology Act of 2025 to formally allow AI to qualify as a practitioner eligible to prescribe drugs, timed as the FDA was cutting its AI regulatory staff.
The Utah oversight agreement requires monthly reporting of accepted and denied renewal counts, physician concordance rates, and adverse health outcomes. What it does not specify is any performance threshold that would trigger suspension.
The company’s claimed 99.2% accuracy rate is self-reported, based on internal simulation, and measures agreement with clinician decisions rather than patient outcomes. No publicly documented metric defines when the pilot has failed. The pilot runs for one year, but the agreement does not specify who evaluates the outcome, on what criteria, or what finding leads to shutdown rather than renewal.
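A rough base-rate sketch shows why concordance is the wrong metric. The renewal volume and hidden-risk rate below are invented for illustration; only the 99.2 percent figure is the company’s.

```python
# Illustrative only: the renewal volume and hidden-risk rate below are
# assumptions, not Legion's numbers. Only the 99.2% concordance figure
# comes from the company, and it is self-reported and simulation-based.
renewals_per_month = 10_000   # assumed pilot volume
concordance = 0.992           # agreement with clinician decisions
hidden_risk_rate = 1 / 500    # assumed share of renewals hiding an
                              # interaction or early deterioration

disagreements = renewals_per_month * (1 - concordance)
hidden_risks = renewals_per_month * hidden_risk_rate

print(f"Disagreements with clinicians: {disagreements:.0f}/month")  # 80
print(f"Renewals hiding real risk:     {hidden_risks:.0f}/month")   # 20
```

Concordance counts the 80 disagreements. It says nothing about the 20 renewals where the AI and the clinician agree because both are reading the same incomplete text box. Agreement with a clinician who never saw the patient is not an outcome.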
The expansion map is a regulatory map. It is not a clinical-needs map. The strategy is to establish precedent in permissive jurisdictions, generate proprietary performance data, and expand into additional states before the federal regulatory conversation catches up. That is not a strategy critique. It is a category observation.
Companies that choose where to operate based on which regulators will ask the fewest questions are not doing healthcare. They are doing policy entrepreneurship. The two look similar from the outside, but they produce very different outcomes.
Who Holds the Bag
In traditional medicine, the accountability structure is legible. The clinician who prescribes holds professional liability. They can be sued for malpractice. Their license can be revoked. The threat of those consequences shapes clinical behavior. It forces judgment to remain attached to a human being.
Legion’s model breaks that chain. In the Utah pilot, no individual physician reviews an individual renewal decision. The AI makes the decision. The company can be sued, but professional accountability becomes diffuse. Financial risk may transfer through insurance; insurance does not replicate the accountability structure of a licensed clinician whose career depends on getting it right. A physician who repeatedly makes poor prescribing decisions loses their license. An AI that repeatedly makes poor prescribing decisions generates a dataset.
Legion Health’s product works by reclassifying clinical judgment as operational friction and then removing it. That reclassification is what makes autonomous prescribing look efficient. It is also what makes it unsafe.
The strongest version of Legion’s argument is that some care is better than no care. For the patients who genuinely have no access to a psychiatric provider, an imperfect system may be better than nothing. That argument has force. But it only works if the product is actually designed for those patients. Legion’s eligibility criteria select for the opposite: patients who are already stable, already diagnosed, already prescribed, already navigating the system well enough to find a digital intake and fill out a questionnaire.
The patients with no prior diagnosis, no established medication, no clinical relationship, and no digital fluency do not qualify. The product reaches the patients who are easiest to serve and calls that solving access.
The broken incentive structure created the access problem. Removing the provider does not fix the incentive structure. It removes the safety layer on top of it and calls the result innovation.