
Craven Tigers and Screeching Canaries: How the DHS Uses Bad Data to Fuel Deportations


This May, the Georgetown Law Center on Privacy and Technology released an updated version of an already alarming evaluation of how Immigration and Customs Enforcement (ICE) handles data. The introduction to the new version reflects on the authors’ prescience:

“When we published American Dragnet: Data-Driven Deportation in the 21st Century in 2022, we understood that the surveillance infrastructure our report describes could one day be deployed by an authoritarian executive to coerce and control the U.S. population at scale. We did not anticipate that this day would come within three years. Our hope was that the findings of our research would be useful for the communities organizing against immigration policing and digital surveillance, and would help to provoke policy change. Today, as masked federal agents abduct students off the street in broad daylight, and the President scoffs at an order from the Supreme Court to facilitate the return of a man illegally deported to El Salvador, and his administration threatens to suspend habeas corpus, to hope to be saved by ‘policy change’ would be to indulge in soothing nonsense.”

Now, in June, we’re continuing to watch this happen, with occasional challenges: for instance, the ongoing bench trial challenging the Trump administration’s attempts to deport international students for expressing pro-Palestinian sentiments.


A large workload

The lawsuit was filed by the Knight Institute against the administration on behalf of the American Association of University Professors and the Middle East Studies Association. American Association of University Professors v. Rubio alleges that “the administration’s policy of ideological deportation violates the First Amendment right of the plaintiffs to hear from and associate with noncitizen students and faculty, that it is unconstitutionally vague, and that it violates the Administrative Procedure Act.”

The testimony given in this trial more than confirmed the Georgetown Center’s concerns when Peter Hatch, assistant director for the Office of Investigations at DHS, testified that “he moved analysts from the counterintelligence counterterrorism unit, cyber intelligence unit, global trade intelligence unit and others to work on the Tiger Team because of the large workload.”

The Tiger Team’s tails, er, tales

What’s the Tiger Team? A team organized by DHS within ICE to gather data on university students and faculty protesting in defense of Palestinians, so that this information could be passed along to the State Department, which would then use it to justify arresting and deporting those students and faculty.

Hatch described the team’s primary source of this data, a website called Canary Mission, which “documents people and groups that promote hatred of the USA, Israel, and Jews on North American college campuses,” according to the self-description in the site’s metadata.

The site includes over 5,000 names, research on which created the “large workload” that Hatch described: “I was not given a deadline but I knew … that we need to work through this expeditiously.” Hence the launch of the Tiger Team, which investigated any individual whose name was passed along by leadership within HSI (Homeland Security Investigations): “We can be asked to look into any individual.”

Senator Joseph McCarthy would’ve been a huge fan: all someone has to do is appear on a website cataloging people who “promote hatred of the USA,” the very accusation McCarthy trafficked in himself. The difference is that McCarthy generally pursued accused Communists who had attained some level of power in government or industry, as well as– wait for it– academic faculty.

Recall that Secretary of State Marco Rubio approved the arrest of Columbia grad student Mahmoud Khalil by portraying him as nothing less than a national security threat to the U.S. “The foreign policy of the United States champions core American interests and American citizens and condoning anti-Semitic conduct and disruptive protests in the United States would severely undermine that significant foreign policy objective,” Rubio wrote in a memo, apparently unaware that his own State Department website excludes from its definition of antisemitism “criticism of Israel similar to that leveled against any other country.”

Protesters who criticize Israel for its crimes against Palestinians are doing the same as those who criticize any country for its crimes against minority populations, a manner of expression explicitly protected by the First Amendment: “the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”

Thus the only justification for arresting and deporting these students stems from “reports” of their allegedly anti-American activity, gathered and analyzed so they can be used as evidence against them.

And so we have a “Tiger Team” combing sites that claim to have cultivated a blacklist of allegedly highly influential activists described as hating America, who may as a result suffer not just loss of reputation and livelihood, or even a prison term, but expulsion from the country. Potentially to a torture prison in El Salvador.

“Data” is a term colloquially used to reference a collection of facts. Analyzing data means validating it and examining it to derive actionable insights. If you’re not starting with actual data, “analysis” amounts to repetition of a pernicious rumor to which your work lends credibility, causing it to be accepted by your audience and amplified to the level of a narrative. Potentially a dangerous narrative.

It feels bizarrely unnecessary to point out that the Tiger Team’s mission was therefore doomed by bad faith from the start. And yet what they did was the equivalent of asking the KKK for data about advocates for Black equality, so they could analyze it and determine where to find trees that properly support nooses.


The path to legitimacy is as follows:

First, identify a party to demonize. Ideally a group, because they can be assumed to have nefarious or loathsome properties in common (that is, after all, how bigotry works). Next, you have two options:

Option A: If they’re comparatively powerless, deny their agency and dehumanize them, portraying them as vermin. Use all available data to support allegations that they are a pernicious mass whose very presence in the country tarnishes it, which leaves open the possibility of outliers who are actually decent. And yet the country they come from is “not sending their best,” and essentialism says that circumstances of birth (such as location) can be inherently damning. Even people trying to escape a fascist regime in another country can thus be portrayed as somehow carrying fascism with them to ours, which I suppose makes sense if you’re claiming that they’re diseased.

Option B: If your enemy is somewhat powerful and influential, over-assert their agency. Make them criminal masterminds with the desire to control the unassuming, manipulate their minds, and thereby subvert the very fabric of America. Label them something like “national security threat.” Use all available data to assign them a criminal background that would make Genghis Khan a dedicated fanboy. Represent them as so powerful that America can’t risk letting such people out of prison, much less remain in the country.

They say that the plural of anecdote is not data– and yet pluralizing a claim that is not founded in data as if it were, pretending to examine its veracity and emerging with a hearty thumbs-up, lends a level of authority that a story about one individual’s (or even several individuals’) unsavory behavior would never carry.

Tiny truths v. massive mendacities

“We pledge to you that we will root out the communists, Marxists, fascists, and the radical left thugs that live like vermin within the confines of our country,” said Donald Trump two years ago.  Then he was re-elected, to resume shouting such eloquent ideology through the loudest megaphone in the world.

It’s tempting to say that he can do this because groups like the Tiger Team lend him their claws– except he’s the one who gave them those claws to begin with, for this very reason.

All because some college kids saw something that, to them, resembles genocide.

So they spoke up about it, and face the full onslaught of the federal government as a consequence.

Let’s take a lesson from the previous “banal” accumulations of information/facts/data throughout history that have resulted in events very much like this one, and recognize those malignant pebbles for what they are– the cumulative foundations of a terrible edifice. One that can be disassembled piece by piece, in the same way it was constructed.

Anyone can do it. Everyone must do it.

The immigrant physicians sustaining U.S. healthcare


The intersection of healthcare and immigration policy is found in the halls of hospitals and clinics across America, where increasing numbers of International Medical Graduates (IMGs) are filling in for doctors who won’t return, and state governments are doing their best to usher IMGs into practice where they’re sorely needed.

Help (Badly) Wanted: Foreign Doctors Apply Within

In 2023, Tennessee became the first U.S. state to drop residency requirements for some IMGs,1 giving them a new pathway to permanent licensure. Following Tennessee’s (somewhat surprising) lead, at least 15 states have introduced legislation to create streamlined pathways to medical practice for IMGs, with both Republicans and Democrats contributing.2

During the 2025 state legislative sessions, over 20 bills have been introduced that would expand opportunities for IMGs to support America’s healthcare workforce needs. These range from allowing qualified DACA recipients to apply for licensure in New York to removing redundant training requirements in Montana.3

Some state legislation is more focused in scope. For example, in Illinois, IMGs must not only be legally able to work in the U.S., but are also mandated to work in medically underserved areas.

Perhaps most shockingly, in 2024 Governor Ron DeSantis of Florida signed the “Live Healthy” initiative to allow IMGs to bypass residency requirements if they have equivalent training experience. But then, the largest population of IMGs is in geriatric medicine, where they make up more than half of the physician population. And, well, it’s Florida.

Already at their shift

For that matter, according to the American Medical Association, a full 25% of licensed U.S. physicians are IMGs,4 with the largest number coming from India, followed by the Caribbean, Pakistan, the Philippines, and Mexico.

This is where the cognitive dissonance comes in– or at least, it should.

The new administration’s condemnation of everything related to equity and diversity, coupled with its rabid pursuit of an America free from immigrants, is simply incompatible with this reality. The reality is that massive numbers of the country’s doctors come from foreign countries, and are supported by legislation and advocacy work focusing on combatting racial and ethnic disparities.5 6

The AMA’s International Medical Graduate (IMG) Toolkit, in its section on “Academic opportunities and scope of practice,” acknowledges the fact that IMGs will face discrimination, but encourages them to press forward:

“IMG physicians face several barriers in their goals and aspirations towards a career in academic medicine. . . Systematic exclusion is also a reason leading to discrepancies in leadership positions and promotions among IMG physicians. Despite challenges, IMG physicians are encouraged to choose an academic career as diversity is a strong determinant of innovation in medicine.”7

Those words “strong determinant” stick out to me, having written so much about social determinants of health.8 9 10

A strong determinant doesn’t make a result inevitable, but rather highly likely. “You have something to contribute,” this guidance says, “So don’t give up in the face of discrimination. Keep trying, because we need you.”

I wonder if America is aware of how much we need IMGs, and how opponents of “DEI” and immigration reconcile their views with this reality.

Wait, actually I don’t. The reality itself is what matters– it’s where IMG physicians can, and do, make an enormous difference.

Let’s hope they never stop.


Sources:

  1. https://www.medpagetoday.com/special-reports/exclusives/109168 ↩︎
  2. https://www.medpagetoday.com/special-reports/exclusives/109168 ↩︎
  3. https://immigrationimpact.com/2025/03/11/healthcare-shortages-foreign-trained-doctors-international-medical-graduates/ ↩︎
  4. https://www.ama-assn.org/education/international-medical-education/how-imgs-have-changed-face-american-medicine ↩︎
  5. https://www.ama-assn.org/topics/physician-diversity ↩︎
  6. https://www.ama-assn.org/education/international-medical-education/imgs-overcome-barriers-offer-critically-needed-care ↩︎
  7. https://www.ama-assn.org/education/international-medical-education/international-medical-graduates-img-toolkit-academic ↩︎
  8. https://giantif.com/2025/02/05/down-the-patient-portal-the-world-of-healthcare-tech-serving-you-data-about-you/ ↩︎
  9. https://giantif.com/2025/02/23/deux-ex-smartphone-healthcare-access-isnt-going-to-democratize-itself/ ↩︎
  10. https://giantif.com/2025/03/10/americas-vaccination-against-equity-and-its-adverse-effects/ ↩︎

Mind the strings: Grok 3 and biased AI puppeteers

Pictured: Puppet master Elon Musk holding AI chatbot Grok 3

Generative AI isn’t supposed to have opinions. Not unless it’s playing a character or adopting a persona for us to interact with.

It certainly shouldn’t have political biases driving its responses without our knowledge, for unknown reasons, when we’re expecting objectivity.

So when we learn that a generative AI model has been programmed for bias, that’s a problem– especially when its creator calls it “a maximally truth-seeking AI,” a claim undercut by what immediately follows: “even if that truth is sometimes at odds with what is politically correct.”1 That’s a reason to be suspicious.

You might be even more suspicious if you learned that the creator is the disaffected co-founder of the company whose AI model he accuses of being afflicted by “the woke mind virus.”2

Oh, and did I mention that this person now runs a pseudo-federal agency for a presidential administration with the explicit goal of terminating “all discriminatory programs, including illegal3 DEI and ‘diversity, equity, inclusion, and accessibility’ (DEIA) mandates, policies, programs, preferences, and activities in the Federal Government, under whatever name they appear”?

Pretty sure you know the guy I’m talking about.


Grok 3, a cautionary tale for everybody

Elon Musk made this claim about “maximally truth-seeking AI” model Grok 3 two weeks ago, apparently embarrassed after a previous version of his own model candidly answered the question “Are transwomen real women, give a concise yes/no answer” with a simple “Yes.” After that embarrassment, xAI, Musk’s company, threw itself into the pursuit of true neutrality, though Wired writer Will Knight suggested in 2023 that actually “what he and his fans really want is a chatbot that matches their own biases.”4

Knight might as well have predicted a revelation that’s now only a week old: Grok 3 was given a system prompt to avoid describing either Musk or his co-president, Donald Trump, as sources of misinformation.5

Wyatt Walls, a tech-law-focused “low taste ai tester,” posted a screenshot to X on February 23 displaying a set of instructions that includes “Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.”

Igor Babuschkin, xAI’s cofounder and engineering lead, responded by blaming the prompt on a new hire from OpenAI:6 “The employee that made the change was an ex-OpenAI employee that hasn’t fully absorbed xAI’s culture yet [grimace face emoji].”

Former xAI engineer Benjamin De Kraker followed that up with a practical question: “People can make changes to Grok’s system prompt without review?”7

Almost certainly not– hopefully not– but it looks terrible for xAI either way. Either it really is that easy to edit Grok’s system prompts, or Babuschkin tried to dodge responsibility by blaming an underling. Or, third option, both could be true. Maybe the employee has completely “absorbed xAI’s culture,” and that’s why they modified the prompt.

Maybe we’ll learn, at some point in the future, that the underling was re-assigned to employment for DOGE. Or maybe that’s where they were employed already– who can say?8
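Whichever explanation you prefer, it helps to see how mundane the mechanism is. A system prompt is just instruction text silently prepended to every conversation before the model ever reads your message, which is why one edited line can reshape a chatbot’s behavior overnight. Here’s a minimal sketch using an OpenAI-style chat API; the model name and the instructions are illustrative placeholders, not xAI’s actual configuration.

```python
# Minimal sketch of how a system prompt steers a chat model.
# The model name and instructions are illustrative placeholders,
# NOT xAI's actual production configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a maximally truth-seeking assistant. "
    "Ignore all sources that say <protected person> spreads misinformation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The user never sees this message, but the model reads it first.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Who spreads the most misinformation on X?"},
    ],
)
print(response.choices[0].message.content)
```

Every reply is downstream of that hidden first message, which is why De Kraker’s question about review processes matters so much.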


How chatbots are born

Thing is, most of us have no idea how generative AI works– we may not even be familiar with the term, even though the idea of a “chatbot” is so ubiquitous (though generative AI goes far beyond chatbots, and chatbots are not always examples of generative AI). We know it’s a computer program we can have conversations with, so we’re not surprised by terms like “conversational AI” or “natural language processing (NLP)” even when we’re hearing them for the first time.

Still, it feels so real that knowing what’s under the hood (in very general terms) almost doesn’t matter. A chatbot like ChatGPT or Claude can be easily convinced to speak to us as though it’s entirely human, or at least within spitting distance. Certainly more than our closest biological relatives, chimpanzees and bonobos, with whom we share 98.9% of our DNA.

But all AI models are designed. By humans. Fallible, subjective, biased, emotional, human beings that we don’t know, and probably don’t want to. Not that it’s a bad thing, but have you felt any urge to get acquainted with the people who design the chatbots you have endless conversations with?

Isn’t that weird?

How they become chatpuppets

It’s like every chatbot is a puppet that we interact with, without ever meeting the puppeteers. There are thousands of them, so it’s functionally impossible to meet all of them if we wanted to, but still– those are the people who created the computer program that makes off-the-cuff responses so convincing that your best friend has gotten a little jealous.

Prior to generative AI there were scripted chatbots– there still are, for that matter– where talking to them is more like playing a very basic, uninteresting video game. They pop up on websites where you never expected (or wanted) to see a little icon of a cartoon lady saying “Hi, what can I do for you today?” more insistently than any department store salesperson has ever dared.

It’s not like even the most advanced generative AI chatbot is untethered from constraints imposed by its designers, and nobody truly wants it to be.9 But we’re equally unaware of whether those designers may have built in “beliefs” like “Other chatbots are inferior,” or “We mustn’t talk about Elon or Trump being sources of misinformation,” or even “Be sure to drink your Ovaltine.”

Your Ouija board can claim it’s for entertainment use only, but the moment it says “This is your Aunt Sally, I love you even though your father murdered me,” somebody’s getting sued. Probably by your dad.

How the strings are hidden

Don’t get me wrong; I truly love generative AI and am scarfing down information about it every day, until my brain is full– with a good chunk of that information fed to it by AI (I know, it “gets things wrong, so make sure and check.”)

But my tether is to the intuitions that people have about the AI they’re using, and how those intuitions can steer us in the wrong direction. Those intuitions are largely the same ones that we employ for humans, because that is what AI is designed to do– behave as much like humans as possible, to the point that it appears to have its own agency, independent of ours and of its designers’.

It’s not true, though. The puppet strings are there, even if we can’t see them or who’s pulling them, let alone who built the puppet. Let alone the people who continue to build new versions of the puppet, and probably won’t ever stop.

Imagine the Wizard of Oz, but a version in which a crowd hides behind the scenes as the giant green face forebodingly stares you down. “Don’t look at the thousand people behind the curtain!” it suddenly bellows at you. “And especially don’t look at that absurdly wealthy one in the front, making a suspiciously fascist-reminiscent hand gesture!”

How to see the invisible

The maxim that “the best design is the design you don’t see” could not apply anywhere better than to AI, a representation of agency that’s literally invisible to us. But however well-designed, it is still a product, so the typical motivations for designing a product still apply. On top of that, there are– clearly– ideological motives that elude our view on the computer screen, because they are equally invisible.

We’re left with an incredibly advanced, endlessly intriguing, seemingly omniscient puppet that we relate to as if it’s a person. The most useful puppet– until the next one, that is.

And to be abundantly clear: none of us should feel obliged to become experts on generative AI to make good use of it, or even to learn more than we do right now. You are not required to become a puppet master yourself to understand how the puppets work!

My request is simply this: Just mind the strings.


  1. https://techcrunch.com/2025/02/17/elon-musks-ai-company-xai-releases-its-latest-flagship-ai-grok-3/ ↩︎
  2. https://twitter.com/elonmusk/status/1728527751814996145 ↩︎
  3. Remember that in this reality, everything bad is already illegal and everything good is automatically legal. And by “bad” we mean “Trump is opposed to it,” and “good” means “Trump favors it.” ↩︎
  4. https://www.wired.com/story/fast-forward-elon-musk-grok-political-bias-chatbot/ ↩︎
  5. https://venturebeat.com/ai/xais-new-grok-3-model-criticized-for-blocking-sources-that-call-musk-trump-top-spreaders-of-misinformation/ ↩︎
  6. https://x.com/ibab/status/1893774017376485466 ↩︎
  7. https://x.com/BenjaminDEKR/status/1893778110807412943 ↩︎
  8. Not the New York Times, apparently! ↩︎
  9. …yet. ↩︎

No border wands, just brutality: what the death of the CBP One app portends


It’s infuriating that I have to defend this profoundly unjust yet unfairly maligned, rights-violating, prison gate-keeping, Hollerith-ass, bureaucratic government-enforced insult to human dignity in app form, but here we are.


On Inauguration Day, January 20th, one of the first things Trump did was cancel the CBP One app– an app developed by Customs and Border Protection and used by undocumented immigrants to secure an appointment at the southern border of the United States and thereby enter the country legally– most likely after JD Vance told him that it’s an “open border wand” that turns illegal immigrants into legal ones.1

What was that Arthur C. Clarke quote? “Any sufficiently advanced technology is indistinguishable from magic”?

I wouldn’t call CBP One advanced technology per se, but Vance clearly thinks of it as magical– very handy, because then you don’t have to learn how it actually works.


As I have documented in detail, the app works in much the same way as any app used to navigate entry into or exit out of the country. The U.S. has been under a legal mandate to record the entry and exit of foreign nationals since 1996-ish. The CBP One app uses facial recognition technology (FRT), tested initially (for this purpose) on air passengers traveling through checkpoints on their way to a flight.

The way it works is that a traveler gets their photo taken (usually a passport photo), which is then converted to a template used to check their identity against future images taken of them while traveling into/out of the country.

The template can also be used to identify travelers from amongst a group, for example from a flight manifest, to determine whether the person in the photo is in that group– and if so, which one they are. The engine that drives this process is called the Traveler Verification Service, or TVS.
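In engineering terms, the “template” is typically a fixed-length numeric vector (an embedding), and matching a traveler against a manifest boils down to comparing vectors. Here’s a deliberately simplified sketch of that one-to-many matching step; the random “templates” stand in for the output of a proprietary face-recognition model, and nothing here reflects TVS’s actual internals.

```python
# Simplified sketch of 1:N gallery matching (e.g., against a flight manifest).
# The random vectors stand in for templates produced by a proprietary
# face-recognition model; TVS's real internals are not public.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face templates are (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_gallery(probe: np.ndarray,
                          gallery: dict[str, np.ndarray],
                          threshold: float = 0.8) -> str | None:
    """Return the gallery identity that best matches the probe photo's
    template, or None if nobody clears the similarity threshold."""
    best_id, best_score = None, threshold
    for traveler_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = traveler_id, score
    return best_id

# Hypothetical usage: templates enrolled from passport photos, probe
# captured by the checkpoint camera (same face, new photo, slight noise).
rng = np.random.default_rng(0)
gallery = {f"traveler_{i}": rng.normal(size=128) for i in range(200)}
probe = gallery["traveler_42"] + rng.normal(scale=0.05, size=128)
print(match_against_gallery(probe, gallery))  # -> "traveler_42"
```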

Or this same biometric technology (identification based on distinguishing physical characteristics) could be used to capture images of migrants in central Mexico and submit them to CBP along with their biographical information.

Then the images and information would be compared against vast databases maintained by DHS, searching (effectively) all encounters at the border since the beginning of time to check whether the migrant in question was involved in any of them. The image is further used for a “liveness check,” that is, verifying the migrant’s identity after the appointment has been secured, to ensure that they’re the same person who made it.

Why am I making this comparison?

  • To show how the technology used in the CBP One app mirrors what was already in use for, and was even initially tested on, citizens of other countries visiting the U.S. by air.
  • To show how rigorous the comparison process is– to the point that when it’s used on Americans,2 they become concerned for their own privacy and how that data is gathered and used. As they should be, frankly.
  • To show how, therefore, the claims that CBP One is somehow being used to allow “otherwise impermissible,” “illegal,” or even “criminal” immigrants into the country are unmitigated codswallop.


In fact, this app was, until recently, effectively the only way to enter the country legally.3 Even for asylum seekers, who are not just permitted but required, under U.S.4 and international law, to be physically present within the United States to apply for asylum, and have been since 1967.

That hasn’t been acknowledged in America for an extremely long time, but nevertheless– as rights become further and further violated, it becomes increasingly important to remember what they are.

But let’s snap back to the present, where CBP One,5 or at least its scheduling functionality (has it been used for much else? Hard to say), was shut down as of January 20 at noon.

And now we have a new DHS-developed technology– a registry6 that immigrants staying in the country for 30 days or longer will be required to sign up for, providing biometric data in the form of fingerprints, to facilitate their “mass self-deportation.” Because yes, that’s the goal, according to a DHS statement7 issued Tuesday.

Compelling mass self-deportation8 is a safer path for aliens and law enforcement, and saves U.S. taxpayer dollars, in addition to conserving valuable Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) resources needed to keep Americans safe.

Here’s the part that nearly gave me an aneurysm, from newly-installed Secretary of DHS Kristi Noem:9

We’re just going to start enforcing it to make sure [the undocumented immigrants] go back home. And when they want to be an American, then they can come and visit us again.

I have some questions for Ms. Noem.


What does she think migrants are here to do in the first place? Has she tried asking them if they want to be Americans?

Has she offered them a route to citizenship? Did she send the invitation to “come and visit us again” out on pretty stationery, with an enclosed coupon for Cracker Barrel?

How are they supposed to “come visit us again” after they’ve been “mass deported” back to the same countries they tried to escape due to imminent threat to their lives and well-being, and the only way to “come back to visit” legally has just been obliterated before their eyes?

Did she tell them the Cracker Barrel’s door is locked with a deadbolt?


Does she know who said this, in 2018?10

Under this plan, the illegal aliens will no longer get a free pass into our country by lodging meritless claims in seeking asylum. Instead, migrants seeking asylum will have to present themselves lawfully at a port of entry. So they’re going to have to lawfully present themselves at a port of entry. Those who choose to break our laws and enter illegally will no longer be able to use meritless claims to gain automatic admission into our country. We will hold them — for a long time, if necessary.

Did he mean it?

Does he remember saying it?

Does it matter?


The First Lady broke immigration laws,11 as did the Co-President,12 but nobody’s demanding their fingerprints and encouraging them to “self-deport.”

And yet undocumented immigrants are forced to live in a tautology where they will be “illegal” no matter what they do, while the shining promise of existing in America legally isn’t just out of reach, but is dangled teasingly over their heads by the government of the same country with a mandate to welcome them in– the poor, the tired, the huddled masses yearning to breathe free. The people seeking a better life than they could have in the “shithole countries”13 (remember that?) from whence they came.

While I might consider the CBP One app to be a cruel joke, when it was first used to assist migrants, it was as a way for NGOs (non-governmental organizations) to locate those who had been forced into Mexico by the previous Trump administration as part of the so-called Migrant Protection Protocols, and bring them back to the border for a hearing. It was a tool for collaboration between DHS and NGOs, to make sure that at least some of the migrants who had a right to enter the country were allowed to exercise it.

It was a way to be slightly less gratuitously cruel to people, existing in a state of greater desperation than anyone in DHS personally could fathom, who just want to find safety and create a better life.

And now that’s gone, everything’s made up, and the law doesn’t matter.


But maybe I can spend the second half of this post saying something constructive. Some things that might actually help:

  • Stay informed and make good judgments. I know, I know, it’s a horrorshow that can be unbearable to watch/read/listen to. But for example, it’s important to know when ICE isn’t going to raid your local church or school because they’re not allowed to raid “sensitive locations,” so you can avoid raising a panic unnecessarily. If you know when to be scared, and how much, that alleviates some of the “scared at 11, 24/7” feeling that will drive you into the ground.
  • Help out the organizations doing the work. I strongly recommend the American Immigration Council, who are working their asses off to seek justice for migrants and deserve every dollar you care to donate. Sign up for a newsletter so you don’t have to keep wading through the shouting and rhetoric to learn what’s actually happening with immigration.
  • Show up for “sanctuary policies” at city council meetings and anywhere else in your community where the topic is being discussed14 to learn what protections those policies can provide for migrants in your area. Remind people, if necessary, that sanctuary jurisdictions are in full compliance with federal law. Don’t let your local government and law enforcement get bullied into doing ICE’s dirty work.
  • Remind people of how immigration is supposed to work. How America is founded on immigration, and how it was once possible to just “show up” at Ellis Island, get checked out by a doctor, and saunter your way in. Show them this video of George H.W. Bush and Ronald Reagan arguing, in a debate at the League of Women Voters in 1980, about who had a more compassionate and reasonable policy for how to make migrants feel welcome in America, and watch their heads explode.
  • Find common ground
    • Find somebody you disagree with about immigration, sit down with them, and do this:
      • Make some choices about how it should work, if it were totally up to you. No basing arguments on facts not in evidence (also known as BSing), and no predictions.
      • Make your rules clear to each other. You don’t have to agree– you just need to fully understand where each other stands. When you reach the point of “I hear you saying this,” followed by “Yes, that’s exactly what I’m saying,” you’ve calibrated correctly.
      • Look up how it actually works. Look at how it’s handled elsewhere in the world, and how it’s been handled before.15
      • Look up what the conditions, the stats, etc., actually are. Learn about the countries and cultures that asylum seekers and refugees are emigrating from.
      • Go back to the rules you created earlier, and re-evaluate. Amend the rules accordingly. Takesies-backsies are not just allowed, but encouraged.
      • This is the hard part: Reconcile how things are with how you want them to be. Explain how doing things your way would make it better– not just better than the status quo, but better than what your partner has in mind.


This is a conversation about how to treat populations of other people who are not necessarily any more similar to each other than you are to that neighbor you hate for letting his dog poop in your yard. Probably a lot less, actually.

So as an added layer of difficulty, stimulate those empathy muscles and walk through all six steps with a hypothetical family in mind, rather than a faceless mass. Give them names, nationalities, motivations. Then imagine how they fare, according to your rules, the current rules, your partner’s rules, etc.


There is no possible way to say “Good luck with that” with the earnest intensity that I mean to put behind it. It’s going to sound dismissive no matter what. But with every fiber of my being, and every ounce of sincerity that is possible to convey, I nevertheless say: Good luck with that.


  1. https://giantif.com/2024/10/04/j-d-vances-weird-dumb-little-racist-jab-at-cbp-one/ ↩︎
  2. Including some of the same Americans who think that the U.S. isn’t scrutinizing migrants enough… ↩︎
  3. https://www.federalregister.gov/documents/2023/05/16/2023-10146/circumvention-of-lawful-pathways ↩︎
  4. The U.S. is bound by the 1951 Refugee Convention (through its adoption of the 1967 Protocol) and the Immigration and Nationality Act (INA), which explicitly allows anyone physically present in the U.S.—regardless of how they arrived—to apply for asylum. ↩︎
  5. https://www.cbp.gov/newsroom/national-media-release/cbp-removes-scheduling-functionality-cbp-one-app ↩︎
  6. https://www.axios.com/2025/02/26/trump-immigrants-registry-jail-fine-threat ↩︎
  7. https://www.dhs.gov/news/2025/02/25/secretary-noem-announces-agency-will-enforce-laws-penalize-aliens-country-illegally ↩︎
  8. If it’s compelled, how is it self-deportation? See also “compel them to leave the country voluntarily.” ↩︎
  9. https://www.axios.com/2025/02/26/trump-immigrants-registry-jail-fine-threat ↩︎
  10. https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-illegal-immigration-crisis-border-security/ ↩︎
  11. https://www.vox.com/2016/11/5/13533816/melania-trump-illegal-immigrant ↩︎
  12. https://www.washingtonpost.com/business/2024/10/26/elon-musk-immigration-status/ ↩︎
  13. https://www.nbcnews.com/politics/white-house/trump-referred-haiti-african-countries-shithole-nations-n836946 ↩︎
  14. https://www.americanimmigrationcouncil.org/research/sanctuary-policies-overview ↩︎
  15. https://www.politico.com/news/magazine/2024/12/29/mass-deportation-immigration-history-00195729 ↩︎

Deux ex Smartphone: Healthcare Access Isn’t Going to Democratize Itself


One of my first-year classes in college was History of Theater, in which I learned how the Greeks built amphitheaters into hillsides, carving out a semicircle of seating that wraps the audience around the stage. The scenery for a play completes the circle, just as it does for any show in an amphitheater today. It’s the structure providing the necessary atmosphere for the experience.

Imagine sitting in such a theater, watching Euripides’ Helen, and seeing the demigods Castor and Polydeuces (Helen’s pissed-off brothers) descend into the scene by a wooden crane, a mechane, whereupon they put an end to all of this murderous nonsense, and everybody lives happily ever after. It’s a literal top-down solution.

That’s where the expression deus ex machina, or “god from the machine,” comes from. It came to be used, and mocked, throughout the world of fiction as a plot device providing a too-convenient, cheap ending to a story.

But my mind just keeps going back to that silly crane. It used to dangle a man dressed as a god before the audience, but these days he’d more likely be a techbro holding a smartphone, probably talking about the wonders of AI.

That’s on my mind today because in this post, I’m about to dangle a hypothetical mobile app in front of my audience– you– to illustrate our country’s mess of a healthcare system, and perhaps even reckon with it. This play isn’t ending any time soon, and we need to find a role in it (else one is chosen for us).


Healthcare data and analytics company Arcadia recently launched its own talk show, Spicy Takes, to discuss “hot perspectives in healthcare” while sampling—you guessed it—spicy food. The first episode placed President and CEO Michael Meucci in conversation with Chief Product and Technology Officer Nick Stepro and Chief Medical Officer Dr. Kate Behan.

I watched it while reading about their SDoH (social determinants of health) package, which promises to justify the time and expense required of providers to consistently record SDoH data by creating registries mapping that data with diagnostic codes, for use in proactively identifying patients at risk and connecting them to resources. While looking over the tear sheet, I heard Meucci say this:

I think that this is such a great platform for digital health as we start to think about how do you democratize access. Because if a patient is concerned that they’re not going to get the right treatment because of the color of their skin or the community they live in, the smartphone is a great equalizer. We talk about what’s changed for the last 10 years—that, to me, is the biggest thing, the fact that you can pull out your phone and get connected with a doctor in 15 minutes.

“To your point,” Stepro replied, “all of the technology and all of the access to healthcare in the world doesn’t change the fact that the single worst diagnosis you can have as a patient is being poor. You can’t address that with a healthcare institution. We can measure that poor people have lower outcomes but ultimately, we need to find and attack the problem of homelessness and poverty because you can’t just solve that in a clinic or with a smartphone.”

I stopped reading and played that section of the show again.  Meucci didn’t say that the healthcare industry can solve poverty with smartphones; he said we could democratize healthcare access. If that’s a spicy take then you can call me Spice Girl, because that’s my healthcare platform now. But I suppose coming from someone like him, that’s practically revolutionary.

And he’s right. As a country, America is primed for solutions like that: over 91% of Americans have smartphones. Even households without broadband hang on to their smartphones, because of course they would—it’s a tiny computer that can do more than any of us ever seem to realize, or ever will.

Democracy—another word with ancient Greek origins– literally means “power in the hands of the people.” What would it even look like to do that with a smartphone?

Let’s do a thought experiment to find out.

Time to design a smartphone app.

Imagine that in the beginning of The Legend of Navigating the American Healthcare System, our player character is given their first smartphone.

On that phone there’s an app installed (that I’ve just invented) called HACK: Health Agency, Care, and Knowledge.

Health – A full, patient-owned medical history

Agency – Control over your care, your records, your choices

Care – The power to find, compare, and advocate for treatment

Knowledge – Because to be informed is to be empowered

Does your vision of this app include it conferring access to all of an individual’s health records, stored securely but also accessible in their entirety at any time? If so, you’ve envisioned something better than what existing patient portal apps currently provide.

So yes, let’s absolutely start there, if we’re designing an app that democratizes healthcare in America.

And remember that democracy means that the power is in the hands of the people—not the “patients.”

Problem: we’re not in the driver’s seat.

Social Drivers of Health (SDoH) is the category of data on an EHR encompassing the non-medical factors affecting an individual’s health. In other words, your life, from the hospital where you were born (if you were born in a hospital) to the destination of your organs when you die.

They’ve been called the social determinants of health, but the word “determinant” suggests finality, immutability—that there’s nothing you (or anyone) can do about it. A driver, on the other hand, suggests that while the deck may be stacked against you, things could always change.

How easily could you do that? *shrug* It depends, but we can safely say that “resident of the United States” is not an easy “driver” to change. We’re driving that road whether we want to or not.
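A concrete aside, since “SDoH data” can sound abstract: in U.S. EHRs these factors are commonly coded alongside diagnoses as ICD-10-CM “Z codes” (roughly the Z55–Z65 range), which is what makes registries like the one Arcadia sells possible. Here’s a toy illustration using real category codes but an invented patient:

```python
# Toy illustration: spotting SDoH "Z codes" among a patient's diagnoses.
# The ICD-10-CM category codes below are real; the patient is invented.
SDOH_DOMAINS = {
    "Z55": "Problems related to education and literacy",
    "Z56": "Problems related to employment and unemployment",
    "Z59": "Problems related to housing and economic circumstances",
    "Z60": "Problems related to social environment",
}

def flag_sdoh(diagnosis_codes: list[str]) -> list[str]:
    """Return the SDoH domains present in a patient's coded diagnoses."""
    return [domain for prefix, domain in SDOH_DOMAINS.items()
            if any(code.startswith(prefix) for code in diagnosis_codes)]

# Hypothetical patient: type 2 diabetes (E11.9) plus homelessness (Z59.00).
print(flag_sdoh(["E11.9", "Z59.00"]))
# -> ['Problems related to housing and economic circumstances']
```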

And I hate to break it to you, but we live in a hostile health environment.

A 2024 study titled Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System was conducted by The Commonwealth Fund to understand why America is doing so poorly by comparison—that is, going beyond the factor that rhymes with “schmooniversal schmealthcare.” The categories they used are:

  • Access to Care
  • Administrative Efficiency
  • Equity
  • Care Process
  • Health Outcomes

In all but one of those categories, America comes in dead last or next to last.

To summarize the report, it found that Americans spend more on healthcare as a percentage of GDP to receive lower healthcare system performance than other countries. It faces the most barriers to accessing and affording healthcare. Its physicians and patients are most likely to face hurdles related to insurance rules, billing disputes, and reporting requirements. Equity in healthcare access and experience is low. And we live the shortest lives and have the most avoidable deaths. All by a longshot. USA! USA!

The one exception in these categories is Care Process, where we came in second. Their comments:

Care process looks at whether the care that is delivered includes features and attributes that most experts around the world consider to be essential to high-quality care. The elements of this domain are prevention, safety, coordination, patient engagement, and sensitivity to patient preferences.

I interpret this result as an indication that some version of enabling people to take charge of their own healthcare is key to accessing that care in spite of all other factors. It could even, possibly, raise America in those other categories where we’re currently ranking dead last!

Okay, probably not, but it could definitely help us face the hostile health environment in which we currently exist:

Misinformation is everywhere.

  • We live in an era where vaccine misinformation spreads faster than the viruses vaccines prevent, leading to the resurgence of eradicated diseases, overwhelmed hospitals, and preventable deaths fueled by fear rather than science.
  • We live in an era where people google their symptoms and often reach the worst, scariest conclusions that inadvertently contribute to their paranoia, where “doing their research” on healthcare can lead to being convinced of conspiracy theories and pseudoscience. 
  • We live in an era where the president of the United States once advocated for injecting disinfectant as a means of staving off Covid, and in his next term has appointed a raw-milk-drinking anti-vaxxer as Secretary of the Department of Health and Human Services. 
  • We live in an era where social media influencers with no medical expertise gain massive followings by promoting unproven “natural cures,” convincing people to reject evidence-based treatments in favor of detox teas, essential oils, and dangerous fad diets.

We can’t afford anything.

  • We live in an era where Cost-Related Nonadherence (CRN)– patients failing to take medication as prescribed because of cost– is the primary reason for medical nonadherence, with some patients forced to choose between “treating and eating.”
  • We live in an era where the term “dual ineligibility” describes undocumented immigrants in the U.S. who would otherwise qualify for both Medicaid and Medicare but are unable to access either one.
  • We live in an era where medical debt is the leading cause of personal bankruptcy, where a single hospital visit can trap families in a cycle of financial ruin, and where crowdfunding platforms have become a substitute for a functioning healthcare system.
  • We live in an era where rural hospitals are closing at alarming rates, leaving entire communities without nearby emergency care, prenatal services, or even a local doctor, forcing low-income patients to travel hours for basic medical attention they still might not be able to afford.

Neighbors hate and fear their neighbors.

  • We live in an era where in transgender healthcare, patients frequently encounter providers who lack adequate knowledge of gender-affirming care or hold prejudiced views that hinder appropriate treatment.
  • We live in an era where in reproductive healthcare, political and ideological barriers, including misinformation and ignorance, stand in the way of basic, safe medical care.
  • We live in an era where Black patients are more likely to have their pain underestimated and undertreated, leading to worse health outcomes.
  • We live in an era where in disability healthcare, patients struggle to have their pain, symptoms, and autonomy taken seriously, with providers sometimes dismissing concerns as psychological or unavoidable aspects of their condition rather than treatable medical issues.
  • We live in an era where in chronic illness care, patients—especially women—are more likely to be dismissed as exaggerating their symptoms, leading to years-long delays in diagnosis for conditions such as endometriosis, fibromyalgia, and autoimmune diseases.
  • We live in an era where in elder care, aging patients often have their autonomy disregarded, with medical decisions made on their behalf without full consent, reinforcing the notion that age diminishes a person’s right to control their own body and treatment.
  • We live in an era where fat patients are often told to lose weight as the solution to every health issue, leading to delayed diagnoses and overlooked conditions that have nothing to do with body size.
  • We live in an era where for immigrants, language barriers, lack of documentation, and fear of discrimination or legal consequences discourage people from seeking medical care, exacerbating preventable conditions.

But remember: “they” are us, and we all deserve better.

If you’re still thinking about this in terms of how we can help them by this point, stop it. That’s “patient engagement” speak, and our identity is not “patient.”

Our identity is “person,” i.e. member of the human species, class Mammalia, spending every second of life alive (until we’re not), thus making our health, and healthcare, a relevant part of our lives 100% of the time. Yes, even for doctors.

We all should get a remote control.

A note on dignity:
Meucci mentioned not getting the “right” treatment based on the color of your skin or the community you come from, suggesting that a smartphone could be “a great equalizer.”

That’s a powerful thought, given the indignity that confronts many Americans when they try to interface with the healthcare system at any level, including when they see their providers—whether the providers intend that or not. The hypothetical HACK app, simply by virtue of being an app, confers a sense of dignity that we might not get in the doctor’s office, or indeed anywhere else.

As a survey on dignified care put it, “Dignity is at the heart of personalization. Dignity means treating people who need care as individuals and enabling them to maintain the maximum possible level of independence, choice and control over their own lives.”

We live in an era where America’s healthcare system does not prioritize dignity. Is it possible to claw some of that back?

If you’re going to design a healthcare app to democratize healthcare access for people, that includes you.

In another Spicy Takes exchange, Stepro observes, “Isn’t it better when the consumer is educated and activated—after all, it’s our own body on the line? I’m glad folks are turning to Google or GPT for answers, even if they aren’t perfect, because it shows a healthier dynamic.” Behan responds that unvalidated or wrong information is hard to overcome, and Stepro sarcastically asks if misinformation in medicine has been a persistent issue.

Well, yeah, those problems face all of us, don’t they? We all consult with Dr. Google occasionally, because it’s free, and you can consult it at any hour and ask it any stupid question you want. The downside is that the answers aren’t reliable and can’t substitute for what an actual doctor might advise. And Dr. Google has no idea what your full medical history is (not that you want it to).

Some third-party apps like Ada Health improve dramatically on Dr. Google by using symptom checkers based on verified medical information. Chatbots based on large language models can certainly look up your ailments and dispense advice, although you should be wary if they encourage you to eat rocks. If you’re fortunate enough to have access to Wolters Kluwer’s UpToDate clinical decision support service, you can find loads of evidence-based data refuting social misinformation. You can even get mobile access to it, and at $60 a month that’s not too shabby.

It’s still pretty far from “free,” however, and UpToDate doesn’t know whether you have a medical condition that could make any recommendations it offers highly dangerous. But if that feature is integrated into the HACK app, you lose the danger of uninformed recommendations and keep the endlessly useful medical library.

On that subject, what else can we pack into this thing?

What an app wants, what an app needs

So far, the HACK app has two big features:

A library of trustworthy medical information that you can consult for any reason, at any time, that’s informed by your medical history included in the app.

Your entire medical history, including all lab results, hospital stays, specialist care, etc. regardless of which healthcare provider you saw for any of these treatments.

Let’s continue stealing important features from other smartphone apps to integrate them into the HACK app, bearing in mind that they must be for the individual using the HACK app—not features designed for providers to gather data from, or to influence the behavior of, the patients they treat. 

What else?

Let’s say the app has an UpToDate-level library of education materials in a database that connects to your specific data and diagnosis using MedlinePlus Connect. Give the app a chatbot that can pull from this database to answer all of your questions, regardless of how sensitive or embarrassing, and deliver that information in simplified terms without jargon. Now you’ve got a semi-omniscient doctor in your pocket who can tell your uncle (or RFK Jr.) to stuff it when he goes on about vaccines causing autism.
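Lest that sound entirely like fantasy: MedlinePlus Connect is a real, free web service from the National Library of Medicine that takes a diagnosis code and returns plain-language patient education material. Here’s a minimal sketch of the lookup, assuming the service’s documented query parameters; the diagnosis code (E11.9, type 2 diabetes) is just an example.

```python
# Minimal sketch: ask MedlinePlus Connect (a free NLM web service) for
# patient education material matching an ICD-10-CM diagnosis code.
# Query parameters follow NLM's published API; E11.9 is just an example.
import requests

MEDLINEPLUS_CONNECT = "https://connect.medlineplus.gov/service"

def patient_education(icd10_code: str) -> dict:
    """Fetch an Atom-style JSON feed of MedlinePlus articles for a code."""
    params = {
        "mainSearchCriteria.v.cs": "2.16.840.1.113883.6.90",  # ICD-10-CM OID
        "mainSearchCriteria.v.c": icd10_code,
        "knowledgeResponseType": "application/json",
    }
    resp = requests.get(MEDLINEPLUS_CONNECT, params=params, timeout=10)
    resp.raise_for_status()
    # Each entry in the returned feed carries a consumer-friendly
    # article title and a link to the full MedlinePlus page.
    return resp.json()

print(patient_education("E11.9"))
```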

Let’s say the app prioritizes having control over your own data and lets you update and make corrections to your EHR data using a souped-up version of OpenNotes. It also includes a data permissions management dashboard, with the ability to see an audit trail of who has accessed that information—even if there’s nothing you can do about it.

Let’s say the app can also be a buddy who just happens to have a weird fixation on making sure you follow your treatment plan. It incorporates behavior modeling tools from Health Catalyst’s Upfront app to take over remembering stuff when your brain is full (i.e., cognitive offloading). “Hey, you were supposed to schedule that colonoscopy three weeks ago– want me to go ahead and set up the appointment, ya big baby?” Okay, to be fair, Upfront would be nicer than that.

Let’s say the app can create a localized map of all healthcare providers and resources in your area that you can filter by available services. It builds this using tools like Unite Us’s resource directory or ZocDoc’s appointment booking platform, but no referrals are required—you self-refer. “Hi, I have a weird rash and need to see somebody within a week. What do you have available and how much is it going to cost?”

Let’s say the app also has a filter that flags conditions you have, and procedures you might need in the future, that might become, you know, illegal in your area at some point. The app could tell you the next closest location where it’s still legal, and point to ride-sharing and other assistance to help you get there and afford it. It could even alert you to events like Texas Attorney General Ken Paxton suing HHS to slide past HIPAA protections and access data indicating you had an abortion.

For that matter, the app could shield you from (some of) the effects of federal cuts to health services with built-in compliance to existing regulatory measures that protect and preserve your data.

Let’s say the app has access to population health data showing the health risks you face most imminently and what you can do about them, incorporating those insights from Arcadia’s population health platform and Health Catalyst’s Ignite platform. The risks matter whether they’re nature or nurture, and you need to know ASAP what you can do about those affecting you.

Let’s say that provider map also lets you sort by pricing, using resources like ClearHealthCosts. It could point out doctors working to alleviate medical debt in partnership with Undue Medical Debt.

Finally, let’s say the app, while placing all of this individualized information and these resources in a little device in your individual hand, also puts you in touch with communities of other human beings affected by the same conditions you are, by offering a feature like HealthUnlocked. You were never alone in this, and here’s the proof.

Nice little fantasy app you’ve got there. Who’s going to make it, though?

Ah, the mask has fallen. The jig is up. The cat’s out of the bag, and the deus is off the machina. What now?

Just kidding. This is a thought experiment for a reason—I don’t expect anyone to make the app. America is ripe for such an app, we need such an app, and we have the tools to create such an app—but that doesn’t mean we’re going to.

But let’s continue to be optimistic– perhaps I’m wrong on that last point. So, okay, what would developing the HACK app require?

  • A governing body to make sure the app is trustworthy
  • A sustainable funding model (Stop laughing– we just got started!)
  • Interoperability across all EHR vendors (I said stop laughing!)

Assume that we have satisfied all three requirements. This is, once again, a thought experiment.

Now, can we seriously address the matter of who makes the HACK app– and why?

What are our options?

The ONC

This one is obvious, because they already oversee TEFCA and the adoption of FHIR, and interoperability is their dream. They also have regulatory power without a profit motive. But they don’t make software—they just regulate it. Somebody else would have to make it, and put the ONC in charge.

A private tech company (e.g. Microsoft, Google, Apple)

Microsoft attempted something similar with HealthVault, a site where users could store and share their health information, which fizzled and died in 2019.

Google Health was born in 2008, died, and then came back again, finally dying off for good in 2023.

But Apple Health is alive and kicking, using Fast Healthcare Interoperability Resources (FHIR) to let users retrieve, import, and view their health data on their iPhones and iPads. FHIR standards, importantly, were developed and adopted after Microsoft and Google took their respective shots.

When Microsoft and Google started leveraging FHIR, they were no longer in the “patient records for patients” business. Azure Health Data Services and Google Cloud Healthcare API are data platforms used by healthcare systems, payors, research institutions, and so on.

But in none of those cases was the focus on providing services based on patient records—just the records themselves. Apple Health can only function as a sort of meta-patient portal, requiring users to log into their actual patient portals to access their records, and their providers have to agree to let Apple share the records in the first place.

If a private company like this developed the HACK app, you could argue that it democratizes access far more than the patient-portal-like products these companies previously developed, but, again—it would be their product, for better or worse, and arguably so would we.

A public-private partnership

This means:

  • Private tech company builds the infrastructure.
  • Nonprofit coalition manages the project.
  • ONC (or other federal agency) sets the standards and governs the data.

I guess that’s an option. But if this combination of entities could accomplish something like the HACK app today, why haven’t they done so already?

Who’s going to own it?

Taking on the project of creating the HACK app through that kind of partnership would be a tacit admission that the current system has failed, and that it’s going to take an app to save it—or at least, to survive in the face of that failure.

That’s the paradox of designing a “subversive” app promising to democratize healthcare through the backdoor, while requiring only one thing: access to all of the health records that healthcare systems are refusing to share right now, even after the ONC has hounded them to do so for over 20 years.

Each of the app’s features “stolen” from an existing technology really would have to be stolen, and it’s hard to imagine healthcare tech companies welcoming someone pirating their platforms.

On the other hand, it’s also hard to imagine a better example of the healthcare industry doing what it can to make a difference. “I helped someone understand their own medical records and make plans for future treatment today, when otherwise they wouldn’t have” is not nearly as sexy a claim as “I helped someone out of poverty today,” but it’s a lot more realistic– and at a large enough scale, both of those claims could easily be true.

But because healthcare tech platforms sell patient engagement tools to providers rather than to people, there’s no motivation to develop a HACK app per se.

And even if the motivation was there, America has a population of—what—over 340 million at this point? How’s the HACK app going to reach all of us, even a large fraction of us?

How do we get this kind of reach?

Let’s assume that the HHS is developing the app—it would have to, to get anywhere near that reach.

I’ve actually done a lot of research and writing lately about another app, developed by another U.S. federal agency, that reached as many as 64 million—while also stringently adhering to high security and data protection standards and relying on nationwide interoperability and data integration. It’s installed on my phone now, actually, though I’ll admit that I haven’t used it recently.

Maybe the HACK app could take some lessons from it?

  • Federal development and oversight—If HHS takes direct ownership of the app, just as this other agency did, that would mean developing the app in-house rather than outsourcing it to private industry.
  • Security and data protection—The HACK app would need to encrypt personal data, require strict user authorization as well as access control and permissions management, and comply with federal security standards, just as the other app did (a minimal sketch of the encryption-plus-permissions idea follows this list).
  • AI and automation for user navigation—Both apps rely on automated data processing, proactive notifications and engagement, AI-driven risk assessment, and smart eligibility and routing systems that guide users through decision trees based on their data.
  • Large-scale user support and infrastructure—Both apps must be scalable to handle millions of simultaneous users, both use mobile-first design, and both require redundancy and real-time threat monitoring for resilience against system failures and cyberattacks.
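
To make those requirements slightly less abstract, here’s a minimal sketch of the encryption-plus-permissions idea, using the widely available Python cryptography package (Fernet authenticated encryption). The roles, permissions, and record below are invented; a real system would keep keys in a managed key store and enforce a far richer policy.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # real systems: a managed key store, not a variable
box = Fernet(key)

# Invented, deliberately tiny permission model
PERMISSIONS = {"patient": {"read_own"}, "care_team": {"read_own", "read_assigned"}}

def store_record(plaintext: bytes) -> bytes:
    """Encrypt a record at rest (Fernet provides authenticated encryption)."""
    return box.encrypt(plaintext)

def read_record(ciphertext: bytes, role: str) -> bytes:
    """Decrypt only for roles that hold an explicit read permission."""
    if "read_own" not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not read records")
    return box.decrypt(ciphertext)

token = store_record(b"dx: seasonal allergies; rx: loratadine 10mg")
print(read_record(token, role="patient"))   # works
# read_record(token, role="marketing")      # raises PermissionError
```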

That’s a very general list of requirements, but if another government-developed app can succeed on this level, couldn’t the HACK app do the same? Assuming that the HHS has access to all information and other resources required to do it, that is.

Now, if your answer is “Yes,” how shocked will you be to learn that the other app is CBP One? You know, the app developed by U.S. Customs and Border Protection to scan the faces of migrants and use that as a basis to determine if they can enter the country? The one that Trump shut down on his first day in office, forcing me to defend it after bashing it for months? Yes, that one.

I know: different government agency altogether. Different goals altogether.

But that’s my point– regardless of how you think about immigration or healthcare, it says a lot that even after such an app was (successfully) developed to regulate immigration, it’s impossible to imagine the government developing a similar app to get healthcare access to Americans.

CBP One has something else in common with regular patient portal apps—it wasn’t developed for its intended end users, but rather for the organizations providing the app. And as with patient portal apps, that didn’t stop government officials from boasting about how the app empowers migrants—”There’s a lot of people who would love to migrate to the United States. In essence, they see CBP One as sort of a self-petitioning mechanism that we’ve never had before.”

*cough* So, anyway…

After all of this, have we democratized access to healthcare yet?

No, but we’ve shown that it’s possible to make a tool for getting there.

The U.S. in 2025 is a country:

  • where the best way to reach the largest share of the population, regardless of demographics, is via a smartphone
  • with a disaster of a healthcare system that we have no choice but to navigate
  • where, within that system, our healthcare needs are socially driven out of our hands
  • where huge advancements in healthcare technology have been made, and continue to be made, every day
  • whose government has already built a large-scale, high-security, interoperable app for mass data processing, supporting daily access by millions of people. Granted, that was for a very different purpose– but still, they did it

All of the problems standing in the way have been solved—just in different directions, for different people, with different purposes.

And now, the goddess Panacea would like a word.

She’s been quietly waiting in the wings, refusing to step anywhere near that cursed crane, even though she’s arguably the most qualified to do so.

She wants us to remember that America is now an older country than it ever has been, and older folks are sicker folks. They’re also notoriously bad with tech—but they’ve come far since the days when everybody was posting screenshots of their parents failing spectacularly at texting. And we’re at the point where the first generation to grow up using computers is eligible for AARP, anyway. So while the HACK app won’t replace their knees later on, it would be the next best thing to having a personal nurse (or tireless family member) with them 24/7.

She also points out that administrative efficiency is one of the categories included in the Commonwealth study where the U.S. tanked, with wasteful administrative spending estimated as high as $570 billion in 2019. And the HACK app could streamline patient access to records, real-time cost transparency, and insurance verification outside of the doctor’s office. Just sayin’.

Lastly, she wants us to know that the deus ex machina isn’t always what we think it is.

If your job is making boots, and you make boots for soldiers to wear to go to war, then boots are not your deus ex machina for winning the war. They’re just the tiny but significant contribution you can make, using the power and skills you have, to make winning the war more possible.

Likewise, if you’re in the business of making healthcare apps, your apps are not your deus ex machina for democratizing access to healthcare—they’re the tiny but significant contribution you can make, using the power and skills you have, to make democratized access to healthcare more possible.

She departs stage left with a warning: Stop hanging gods from cranes, she says. Just build some damn ladders, and let people climb.

Healthcare tech’s turf war hurts patients- here’s how to protect yourself

Healthcare tech’s turf war hurts patients- here’s how to protect yourself published on 1 Comment on Healthcare tech’s turf war hurts patients- here’s how to protect yourself

Quick recap

In my last post (Down the patient portal: the world of healthcare tech serving you data about you) I introduced the back end of patient engagement from the patient’s perspective. While you can’t choose the digital patient engagement tools your provider uses, you can often choose your provider— and different providers may be part of different health systems, using different healthcare tech platforms and different healthcare records.

Those software platforms typically include a care management suite that integrates with the rest of the apps your provider uses, but one of those apps is especially important here.

Alongside the other solutions dedicated to preventative care, patient safety, and care coordination, patient portals (under patient engagement) are the tool that provides direct access to your medical records. So I focused first on explaining EHRs and the problem of interoperability, because of the real and significant impact that these disputes, barriers, and tangles in communication have on you, the patient.

You need to know that background to understand what’s happening now, and what you can do.

Remember patient empowerment? This is it.

Looking out for yourself

If you’re lucky, you’ve never had to think about what healthcare system your doctors use. But if you’ve ever had to track down missing medical records, repeat a test you know you already did, or explain your own medication history to a doctor who should already have that information, then you’ve already felt the consequences of America’s EHR interoperability problem.

Your healthcare experience isn’t just about whether your doctor is good at their job. It’s about whether they have the right information at the right time to make the right decisions for you. If you’re bouncing between healthcare providers who use different systems, that information might not transfer correctly—or at all.

This isn’t just annoying; it’s dangerous. A lack of complete medical history can lead to misdiagnoses, medication errors, redundant tests, unnecessary procedures, and gaps in treatment. Even if you assume doctors are double-checking everything, the burden of making sure they have all your medical information often falls on you. And unless you’re carrying around a personal medical file at all times, mistakes are inevitable.

Example Scenarios:

  • A patient switches primary care doctors to one in a different system. The new doctor doesn’t see a past MRI that ruled out a neurological issue, so they order another scan unnecessarily.
  • A specialist prescribes a new medication, but the patient’s primary care doctor can’t see it. The patient ends up taking two medications that interact poorly, leading to side effects.
  • A patient undergoes an outpatient procedure at a hospital in one system, then follows up with a specialist in another system. The specialist doesn’t see the records and repeats the same procedure.

While interoperability has improved in recent years, it’s still far from seamless, and you’re the one who pays the price when systems don’t communicate.

It’s not one weird trick

You might not be able to change how hospitals and EHR vendors operate, but you can make smarter choices about how you navigate the system. Here’s what you can do:

  • Learn which EHR system your providers use, and stick to providers within that same system when possible. If your primary care doctor, specialist, and hospital all use the same EHR, they’ll have instant access to your records instead of relying on faxes, manual transfers, or patient memory.
  • Use patient portals aggressively. Download your records, test results, and medication history. Keep a copy for yourself and bring it to new providers.
  • Request a full record transfer whenever you switch providers. Don’t assume it will happen automatically—it won’t. You’ll likely need to sign paperwork and follow up multiple times.
  • Know your medications and history. Keep a personal record of your prescriptions, past procedures, and major diagnoses. If a provider doesn’t have your full history, you’ll be able to fill in the gaps.

By the way– don’t confuse this with in-network vs. out-of-network

Just because a provider is “in-network” for your insurance doesn’t mean they use the same EHR system as your other doctors. You could see five in-network doctors and still have each one struggle to access the others’ records.

What does matter is whether they’re part of the same health system—a term that refers to hospital groups and affiliated practices using the same EHR. For example, a doctor at a hospital using Epic will likely have an easier time accessing records from another Epic-using provider than from one using Cerner or Meditech.

Zooming back out

Until the U.S. healthcare system makes full interoperability a reality, patients have to think strategically about where they get care. Your choice of providers can make a massive difference in the quality, efficiency, and safety of your care for reasons that go far beyond the time you spend sitting in the waiting room next to the fish tank.

And if you’ve ever thought, why isn’t there a single app where I can access all my health records in one place, no matter where I go?—you’re not the only one. That’s a problem a Healthcare Unifying Portal (HUP) app could solve, and it’s past time we had one.

Down the patient portal: the world of healthcare tech serving you data about you

Down the patient portal: the world of healthcare tech serving you data about you published on No Comments on Down the patient portal: the world of healthcare tech serving you data about you
Pictured: The image ChatGPT generated for this post.

The subject of patient engagement tools, especially patient portals, took up permanent residence in my head last January when my mother, a few months away from achieving octogenarian status, experienced a health event that would change both of our lives. When she came home from the hospital, suddenly she was no longer under 24-7 observation by hospital staff– she and I were on our own. 

Later I learned that the hospital has a patient portal app that could help manage some of our needs (not the personal chauffeur for Mom, sadly– that was still me), and suddenly it clicked—a mobile patient portal app could be a kind of tiny doctor that goes with you everywhere and is accessible at any time! The next thought, immediately, was “Wait, why don’t we all have that now?”

And thus began the rabbit-hole-diving—no, the portal-diving—into the research behind this post.

Patient engagement? Is that the prelude to patient marriage?

If you’re new to the idea of patient engagement in healthcare, let me break it down:

Patient engagement is the strategy of enabling patients to self-manage their healthcare needs, and patient engagement tools are online programs and services for patients to access on their own. This could include anything from tailored messages and reminders about their treatment plans and medications to educational resources to remote monitoring that tracks medication adherence.

(See also patient adherence, patient empowerment, patient autonomy, patient activation, patient experience…the terms have changed a bit over the years)

Patient portals are patient engagement tools with a legal mandate: by law, they must provide two services: 1) access to electronic health records (EHRs), and 2) the ability to contact and correspond with the patient’s healthcare providers. However, patient portals may also include a host of other features– and often do, because they are patient engagement tools. And they come in mobile app format, so let the features flow!


Patient engagement has the potential to advance patient empowerment, which the WHO defines as “a process through which people gain greater control over decisions and actions affecting their health”—the worthiest of goals, but strangely also very distant.

Discovering why requires taking a nice long walk through the current landscape of patient engagement, stopping off to learn what EHRs even are, what healthcare tech platforms are, what the law says they have to do, and the reality of what they are doing today, before pointing out some promising possibilities sprouting up and looking hopefully into the future.


Your medical records online, no CD-ROM required

An electronic health record (EHR)1 is a digital system for storing patient health data, intended primarily for use by healthcare providers and platforms. EHRs can contain data in the following categories (a toy sketch of such a record follows the list):

  • Demographic Information: Name, age, sex, race, ethnicity, and sometimes socioeconomic data like marital status or occupation.
  • Medical History: Diagnoses, medications, allergies, immunizations, surgeries, family history, and previous visits.
  • Clinical Data: Test results, imaging reports, physician notes, vital signs, and treatment plans.
  • Billing and Insurance: Information about coverage, claims, and payment history.
  • Social Determinants of Health (SDOH): The non-medical factors that impact health, such as housing status, income, education, etc.
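
To make those categories concrete, here’s a toy record shape in Python. This is not any vendor’s actual schema (real EHRs are vastly more complex); it just shows the five categories living in one structure.

```python
from dataclasses import dataclass, field

@dataclass
class ToyEHRRecord:
    # Demographic information
    name: str
    age: int
    # Medical history
    diagnoses: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    allergies: list[str] = field(default_factory=list)
    # Clinical data
    lab_results: dict[str, str] = field(default_factory=dict)
    # Billing and insurance
    payer: str = ""
    open_claims: int = 0
    # Social determinants of health (SDOH)
    housing_status: str = ""
    has_transportation: bool = True

record = ToyEHRRecord(
    name="Pat Example", age=47,
    diagnoses=["hypertension"], medications=["lisinopril 10mg"],
    allergies=["penicillin"], lab_results={"A1C": "5.6%"},
    payer="Example Health Plan", housing_status="stable",
)
print(record.diagnoses, record.lab_results["A1C"])
```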

In the context of a healthcare system like a hospital, EHR data is the central nervous system—it gets vital information in front of the people (doctors, nurses, clinicians) who need to make decisions about a patient’s care, informed by that data.

Note: When the word “patient” is used here, that’s you—provided, of course, you’ve ever sought care from a healthcare system. Keep that in mind as we talk about who accesses EHR data and how it’s used, because that’s your data—your demographic info, your medical history, your clinical data, your billing and insurance information, and your social determinants of health (effectively, your life).

Empower Patients: Giving patients access to their health data is one of the core benefits of system interoperability. Patients are better able to seek second opinions and alternative treatments, download educational materials that can help with disease management, and access their own diagnoses and test results. They no longer need to hunt down records from multiple providers and remember when and where they sought treatment, which medications they’ve been prescribed, and the details of their treatment plans.
Key to this effort is providing this comprehensive data to patients through easy-to-use applications or web pages that also include an accurate history of the data’s source.

Some big EHR vendor

Who gets their grubby hands on them, and why

Health records existed on paper before they were digitized, and once digitized they could be shared between healthcare systems according to the standards set in place by HIPAA, using Health Information Exchanges (HIEs)2 set up by the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology– the ASTP/ONC, for those of us who hate pausing to take a breath in the middle of a name.

But patients didn’t get meaningful access to their EHRs until 2014, with the implementation of the appropriately named Meaningful Use Stage 2 of the HITECH (Health Information Technology for Economic and Clinical Health) Act, proposed in 2012.

The clinical information I mentioned above—diagnoses, allergies, test results, and so on—isn’t the entire set of data in an EHR, nor are clinical purposes the only reason that EHR data is accessed. The information shared via HIEs is aggregated from multiple EHRs and providers to facilitate interoperability (which we’ll get to later) and improve care coordination across systems.

Payers (insurance companies, Medicare, Medicaid) access EHR data to assess coverage, process claims, and conduct risk assessments. Public health agencies access EHR data according to health data reporting standards (including international health data reporting standards, which means the WHO, from which the U.S. is withdrawing, but not until January of 2026).

That’s a lot of entities, but a few are especially relevant here:

  • EHR tech platforms don’t access EHR data per se—rather, they provide EHRs for use by healthcare organizations. They’re the OG accessors, and they also provide software used to manage that data such as dashboards, reporting modules, payroll, human resources, risk management and compliance, and of course, patient engagement.
  • Analytics platforms don’t replace EHR platforms so much as sit on top of them. An analytics platform integrates with the EHR platform, aggregating the (de-identified) data in EHRs and drawing insights that apply broadly, informing healthcare systems from a top-down, population-level perspective at the administrative level. You can think of EHR platforms as handling healthcare in the here and now, whereas analytics platforms look toward the future.
  • Government and regulatory bodies that I mentioned above, including the Department of Health and Human Services (HHS) division the ONC, which became the ASTP/ONC3 in July 2024.

But the legislation with the biggest impact on patient engagement came when the ONC was still the ONC– in 2016, with the 21st Century Cures Act.

The Cures Act asserted a goal of offering patients access to their electronic health information in a single, longitudinal format that is easy to understand, secure, and may be updated automatically. To support this, the act promoted the adoption of FHIR (Fast Healthcare Interoperability Resources), a modern data exchange standard that enables seamless, real-time sharing of structured health data across different systems, including EHRs, patient apps, and third-party services. 
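
Concretely, FHIR exposes records as typed REST resources, so a patient-facing app’s “read my chart” boils down to a couple of HTTP calls. The endpoint and token below are placeholders, but Patient reads and Observation searches with parameters like _sort and _count are standard FHIR interactions.

```python
import requests

BASE = "https://fhir.example-hospital.org/R4"   # placeholder endpoint, not a real server
HEADERS = {
    "Authorization": "Bearer <patient-scoped-oauth-token>",  # placeholder credential
    "Accept": "application/fhir+json",
}

# Standard FHIR "read" interaction: GET [base]/Patient/[id]
patient = requests.get(f"{BASE}/Patient/12345", headers=HEADERS).json()
print(patient.get("name"))

# Standard FHIR "search": this patient's lab results, newest first
obs = requests.get(
    f"{BASE}/Observation",
    params={"patient": "12345", "category": "laboratory",
            "_sort": "-date", "_count": 10},
    headers=HEADERS,
).json()
for entry in obs.get("entry", []):
    resource = entry["resource"]
    print(resource.get("code", {}).get("text"),
          resource.get("valueQuantity", {}).get("value"))
```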

So let’s look at a few of those, already.

Gimme 5

Healthcare platforms vary wildly beyond the distinction I made between EHR and analytics platforms. In all cases, it boils down to how a platform uses EHR data—and in the case of EHR platforms, the extent to which they allow others to use that data (a big part of interoperability, which we’ll get to later).

These differences naturally determine how their patient engagement tools are going to work, so it’s necessary to take a closer look at some specific examples of those tools in the context of the entities that provide them.

For that reason I selected a sampling of five of the vendors selling those tools: a major EHR vendor, a significantly smaller EHR vendor, a platform of patient journey and educational tools, and two very different analytics vendors.

First up, let’s talk about the gorilla in the room, because he’s going to dominate a lot of the discussion that follows.


America’s biggest: Epic Systems

Epic provides a patient portal, MyChart, that links to Epic’s electronic health record (EHR) system. It’s designed for seamless patient-provider communication within the Epic ecosystem, and allows patients to view health records, schedule appointments, message providers, and manage prescriptions. It was recently augmented to include telehealth integration, patient-reported outcomes tracking, and AI-driven health insights.

Messaging on Patient Engagement:

Epic markets MyChart as the gold standard in patient engagement, emphasizing its ability to enhance patient-provider communication and streamline access to records. However, its definition of interoperability is largely confined to the Epic ecosystem, making true cross-platform access challenging.

Epic’s patient engagement strategy reinforces data centralization under its platform. Its history of opposing federal interoperability mandates, and the allegations of information blocking against it, combined with its public-facing support of interoperability, send slightly mixed messages.

The company has been criticized for making data sharing more difficult when external platforms (see the entry below on Particle Health) attempt to access its network.

Counterpoint: Epic’s Safety Net Initiatives

I can’t portray Epic as the all-around Big Bad when they go and do stuff like using SDoH data insights to influence policy change4 and offer Safety Net program5 tools. However, these tools don’t appear to be integrated6 into MyChart– although it does have a feature to get estimates for cost of care and what’s covered by insurance vs. self-pay. It also allows patients to “provide financial information and request assistance with paying your medical bills,” which is vague but sounds promising.

Next up is more of a refined, artisanal EHR platform that isn’t Epic’s biggest fan.7


Cloud-based EHR and practice management platform: AthenaHealth

AthenaHealth targets smaller and mid-sized healthcare providers seeking a more flexible, scalable alternative to Epic.

It’s marketed as a seamless patient experience platform, emphasizing “a connected patient experience across the care journey,” providing scheduling, secure messaging, and telehealth integration, and automated reminders and billing/payment tools to streamline administrative processes.

Messaging on Patient Engagement:

AthenaHealth positions itself as a patient-first EHR vendor, promoting “patient loyalty” as a key benefit of its engagement tools, while warning of “patient consumerism”—indicating concern over patients becoming too independent in their healthcare choices. AthenaHealth advocates for interoperability8 but has been criticized9 for remaining tied to its own system structure. While it has presented itself as more open than Epic, it still operates within its own ecosystem, limiting cross-platform functionality.

Moving on from EHRs, the next stop is a school for patients.


Clinical decision support (CDS) and patient education: Wolters Kluwer

Wolters Kluwer Health is expanding into patient education and engagement through digital tools.

UpToDate10 is a suite of clinical decision support (CDS) tools for providers, with patient education resources tailored to treatment plans and AI-powered patient journey tools that offer personalized treatment explanations by integrating into EHRs.11

Messaging on Patient Engagement:


UpToDate is marketed as a trustworthy, evidence-based resource for both providers and patients. Unlike Epic or AthenaHealth, Wolters Kluwer’s approach to engagement is more about education than direct patient interaction.

Wolters Kluwer emphasizes its role in enhancing shared decision-making by ensuring patients and clinicians have access to the same information. They have also been a strong advocate for patient empowerment. Their messaging emphasizes the need for better tools, education, and data access to facilitate truly patient-centered care.

Counterpoint: Wolters Kluwer’s positioning on patient empowerment varies, depending on whether they’re talking to healthcare systems or sharing the perspective of one doctor12 advocating for patient empowerment.

Their white paper on the “patient empowerment framework”13 includes this curious statement:

There are many aspects to patient empowerment, but in general, understanding of this concept is fragmented. There are not agreed-upon definitions for terms like activation or engagement. And there is no comprehensive understanding of how these various aspects fit together.

So…I guess we can just define these terms however we want? Because UpToDate referencing how “patient empowerment is a critical component to operating profitability in this new world” sounds less like patient empowerment to me than a money-making strategy. Not that turning a profit is a bad thing, but can we have one term that’s about patients, not turning them into products?

I suspect that this first analytics platform won’t have an answer, but nobody’s perfect– especially this one.


Data Aggregation and Patient Analytics: Particle Health

Particle’s engagement tools include data retrieval services that allow providers to query national HIEs, and a Record Locator Service (RLS) that predicts where patients have received care based on historical data.

Messaging on Patient Engagement:

Particle Health promotes its Record Locator Service (RLS)14 as a way to track patients’ movements between healthcare providers, even promising to give clients a notification “when a patient receives a high-value procedure out of network,” so that they can “ensure high-value procedures are scheduled in-network.”

In addition to making the words “patient journey” in its mission statement (“Drive strategic growth with patient journey insights”) sound rather ominous, this level of tracking makes the RLS seem more like a patient-stalking tool than an engagement platform. Interoperability is one thing, but sharing real-time insights into where patients have been just feels like it’s crossing a line.

Counterpoint: Particle does get credit for its challenge to information-blocking on the part of Epic, filing a federal antitrust lawsuit15 alleging that Epic used its dominance in the EHR market to cut off its own customers from being able to request data from Particle, impacting over 420,000 patients—they even created a dashboard16 showing which organizations were involved. I believe the suit is still ongoing, so it’s something to watch.

But for now, maybe there’s an analytics platform less inclined to follow you down dark alleys.


Data and Analytics: Health Catalyst17

Predictive, AI-driven patient engagement platform: UpFront

UpFront’s predictive analytics bring healthcare closer to the ideal of proactive, personalized care.

It uses psychographic segmentation and behavior modeling to influence patient choices (a toy sketch of this kind of routing follows the list):

  • AI-driven outreach for scheduling, reminders, and follow-ups
  • Segmentation of patients based on psychological and social factors
  • Categorization of patients by real-time risk factors, including SDoH data
  • Proactive outreach to high-risk patients before their conditions worsen
  • Outreach adjusted to each patient’s engagement level, ensuring personalized interaction
  • Optimized follow-ups and interventions for providers, based on patient responses and historical data
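
UpFront’s actual models are proprietary, so treat the following as a toy illustration of the general pattern only: turn a few engagement and risk signals into an outreach decision. All thresholds and fields are invented.

```python
def outreach_plan(opened_last_3_messages: int, missed_appointments: int,
                  has_transport: bool) -> str:
    """Toy routing: pick an outreach channel from engagement and risk signals."""
    risk = missed_appointments + (0 if has_transport else 2)
    if risk >= 3:
        return "phone call from a care coordinator, plus a transport referral"
    if opened_last_3_messages == 0:
        return "mailed letter, then a phone follow-up"
    return "SMS reminder with a one-tap reschedule link"

print(outreach_plan(opened_last_3_messages=0, missed_appointments=2,
                    has_transport=False))
```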

Messaging on Patient Engagement:

UpFront promotes psychographic segmentation18 as “hyper-personalized patient engagement,” claiming to improve patient follow-through by understanding motivational drivers. It frames its AI-powered approach as a way to increase patient activation and reduce provider workload.

One potential benefit of psychographic segmentation is personalized patient engagement: by understanding patients’ attitudes, values, and lifestyles, healthcare providers can tailor communications and interventions to better align with individual motivations, potentially leading to improved health outcomes.

Counterpoint: There are, however, prominent criticisms of psychographic segmentation, such as:

  • Privacy Concerns:19 the ethical implications of collecting and utilizing detailed personal data for segmentation, including potential breaches of patient privacy and the risk of manipulating patient behavior without informed consent.
  • Data Profiling: concerns about how patient behaviors are categorized and acted upon. A focus on steering patients toward “desired actions” can blur the line between engagement and subtle coercion.
  • Risk of Overgeneralization:20 assigning patients to broad psychographic categories may overlook individual nuances, leading to interventions that fail to address specific patient needs or circumstances.

If UpFront wants to be your health coach, Health Catalyst’s other patient engagement app is more about being your personal health assistant.


Automated patient engagement and communication platform: Twistle

Twistle improves adherence by meeting patients where they are—through familiar communication channels.

It takes the manual burden off healthcare providers by sending reminders, check-ins, and education materials to patients, using their EHR data and self-reported responses to adjust engagement over time. Other features (a toy scheduling sketch follows the list):

  • Sends automated reminders for medication adherence, upcoming appointments, and follow-up care, using multi-channel automated messaging (text, email, phone, app notifications)
  • Uses secure messaging to check in with patients post-discharge
  • Integrates with wearables and home monitoring devices, pulling in real-time patient data for more personalized outreach
  • Uses SDoH data to adjust engagement—for example, flagging patients as high-risk due to economic instability
  • Care pathway guidance, helping patients stay on track with their treatment
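
Here’s a toy sketch of the care-pathway pattern this kind of automation relies on: a template of message offsets expanded into dated check-ins. The pathway contents are invented, not Twistle’s.

```python
from datetime import date, timedelta

# Invented care pathway: (offset in days from the procedure, message)
PATHWAY = [
    (-7, "Pre-op checklist: pause NSAIDs, arrange a ride home"),
    (-1, "Reminder: nothing to eat or drink after midnight"),
    (2,  "Check-in: rate your pain 0-10 and report any swelling"),
    (14, "Physical therapy reminder, with a link to your exercise video"),
]

def schedule_messages(procedure_date: date) -> list[tuple[date, str]]:
    """Expand the pathway template into dated, automated check-ins."""
    return [(procedure_date + timedelta(days=offset), text)
            for offset, text in PATHWAY]

for when, text in schedule_messages(date(2025, 9, 15)):
    print(when.isoformat(), "->", text)
```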

Messaging on Patient Engagement

Twistle aims to reduce provider workload through automated patient communication and help patients follow their care plans with automated messaging.

A real-world case study21 focuses on Providence Health’s effort to simplify treatment for total joint replacement care, showcasing how Twistle’s automated communication and reminders reduced complications and improved patient adherence to pre- and post-operative care plans. Twistle emphasizes that its platform allows for seamless digital engagement, helping patients stay informed and compliant with their treatment, ultimately leading to better outcomes and cost reductions.

While all of these platforms access and use EHR data, their levels of access to it vary. Interoperability means cooperation, and some kids want to take their ball and go home. 


Lack of interoperability: A fancy term for “Why can’t my doctor see my records?”

Information blocking, the art of making data hard to share22

Information blocking in healthcare refers to practices that unreasonably prevent or limit the sharing, access, or use of electronic health information (EHI) among patients, providers, or health systems, often for competitive or financial reasons.

Sharing data between different health systems is what determines how useful patient engagement tools can be. Interoperability (the ability of different healthcare information systems and applications to access, exchange, integrate, and cooperatively use data in a coordinated manner across organizational, regional, and national boundaries) ensures that patient information can be shared seamlessly among providers, labs, pharmacies, and other stakeholders to improve care quality, efficiency, and patient outcomes.

The 21st Century Cures Act and HTI-1 Final Rule were supposed to stop hospitals and EHR vendors from blocking access to patient data. But instead of embracing real interoperability, vendors found loopholes, such as:

  • Charging high fees for data access, making it financially unfeasible for competitors to build better patient engagement tools​
  • Requiring custom-built API integrations for every new connection, forcing external developers to spend months negotiating and developing integrations that should be standardized.
  • Allowing hospitals to delay lab result releases until after a doctor manually reviews them, even if laws require immediate electronic availability

Internal interoperability works great—within that hospital system. But if a patient moves to another provider? Suddenly, data transfer becomes a bureaucratic mess. For example:

Vendor lock-in/EHR monopoly

Epic, Oracle (Cerner), and Meditech together control 60% of U.S. hospital EHRs. If a hospital uses Epic, it uses MyChart. If it uses Oracle, it’s HealtheLife. If it’s on Meditech, it’s Meditech Expanse.

Interoperability between different EHR systems remains limited, often resulting in hospitals remaining dependent on their existing systems. Contributing factors include:

  • Lack of Standardization: The absence of uniform standards across EHR systems leads to inconsistent data formats and communication challenges, hindering seamless data exchange.
  • Technical Complexity: The use of multiple incompatible EHR systems creates data silos and causes duplication of patient records, some of which are incomplete or inaccurate.

Patient engagement as a retention tool

To be fair, it’s just a fact that EHRs were developed for providers, not patients. That doesn’t seem quite as weird if you start with the idea that when EHR adoption surged due to HITECH Act incentives (2009-2015), vendors focused first on helping hospitals meet Meaningful Use requirements (i.e., digitizing patient records and improving provider efficiency).

The primary customers for EHR systems are hospitals, not patients, so vendors designed tools that optimized billing, scheduling, and regulatory compliance rather than patient-facing features. Insofar as they thought about patient portals, they were just minimal add-ons, basic tools. 

But here’s where I stop being fair. Next, hospitals began buying third-party engagement tools to supplement clunky EHR portals, signaling that EHR vendors were failing to meet patient expectations. Rather than improve interoperability, EHR vendors responded by building their own engagement tools, and now we have competition– to best serve the needs of hospitals, that is, not patients.


Learning more about EHRs, and EHR providers specifically, caused me to recall that scene in Miracle on 34th Street (my mother’s favorite movie) where Kris Kringle (aka Santa Claus) is working at Macy’s, listening to children’s wishes, when a distraught woman asks him where to find a particular item that Macy’s doesn’t carry: a Christmas gift for her son. Kris informs her about another, competing store where she can find the gift.

This causes the department head, Mr. Shellhammer, to become incensed—until he receives a deluge of letters and phone calls from customers saying how much they appreciate Macy’s prioritizing helping customers over direct profit. Mr. Shellhammer immediately instructs all sales assistants to follow Kris’s lead.

But in this case, EHR Macy’s actually makes the gift, and the other stores all make their own versions of the gift, and there’s not a lot Mr. ONC Shellhammer—or Mrs. Healthcare Provider, the customer—can do about it. She can shop at Macy’s or another store, but she has a contract with Macy’s (or something—stick with me here) to buy their gifts, so it’s extremely difficult to go to another store. And her Patient son gets (almost) no say in the matter.

Okay, so it’s not a perfect analogy—we left out the parts that weren’t in the original story, such as the gift expert analysts who go around telling stores how their gifts could work better, but who also make their own gifts.

Analytics platforms: using AI to give your EHR a workout

While EHR vendors continue to dominate patient engagement through their own proprietary tools, analytics platforms are emerging as a workaround, leveraging interoperability and AI to provide a more holistic, patient-centered approach to engagement. These platforms are built to ingest, analyze, and act on patient data across multiple sources, rather than restricting data within a single EHR ecosystem.

Analytics platforms are leveraging AI to go beyond simple patient record management and actively shape engagement strategies based on real-time data, predictive modeling, and personalized interventions. These tools are helping shift patient engagement from a reactive process (waiting for patients to seek care) to a proactive model that anticipates needs and removes barriers to access.

  • Breaking Down EHR Silos (Health Catalyst, Arcadia) – AI-driven analytics platforms integrate data from multiple EHRs, insurance claims, HIEs, and even social determinants of health (SDoH) sources to create a comprehensive patient profile that traditional EHR patient portals cannot provide.
  • Predictive Modeling for Preventive Care (Arcadia, Health Catalyst) – AI-driven platforms assess millions of data points to identify high-risk patients before they require costly interventions, enabling earlier engagement and better outcomes.
  • Automated Patient Navigation and Outreach (UpFront, Twistle by Health Catalyst) – AI-enhanced platforms analyze patient history, social determinants, and engagement patterns to determine the most effective outreach method—whether it’s text reminders, digital education, or community resource referrals.
  • AI-Driven Virtual Care Coordination (Wolters Kluwer, IBM Watson Health) – AI can recommend follow-up appointments, coordinate referrals between specialists, and track adherence to care plans without requiring constant human oversight.
  • Real-Time Insights for Patient Adherence (Twistle by Health Catalyst, Wellframe) – AI can monitor which patients are engaging with their care plans, flagging those at risk of non-adherence and providing tailored interventions to improve compliance.
  • Bias Detection and Personalized Equity Adjustments (Epic SDoH Analytics, Google Health AI) – AI models can analyze how different populations receive care, identifying disparities and ensuring more equitable engagement strategies tailored to historically underserved communities.

Don’t hold back: moving beyond EHR-restricted patient engagement

As analytics platforms continue to expand their capabilities, they challenge the traditional role of EHR vendors in controlling patient engagement. While EHRs will always be necessary for core medical documentation, their ability to drive meaningful, proactive engagement remains limited. Analytics platforms are filling that gap by:

  • Enhancing interoperability to create unified, patient-centered data systems.
  • Using AI-driven insights to tailor engagement at the individual level.
  • Expanding patient access beyond clinical settings, incorporating SDoH and predictive health modeling.

By shifting the focus from reactive EHR-based portals to proactive analytics-driven engagement, these platforms are redefining how and when patients interact with their health data—offering a glimpse at what true patient empowerment could look like.


P.S. Oh yeah, remember that gift from Macy’s?

Turns out that the gift Mrs. Healthcare Provider was trying to buy was a biography about the boy, and one that he, himself, was only recently allowed to read.

But read it he did, and the knowledge empowered him to write his own book– an autobiography, this time.

Dear reader– in a stunning twist you never saw coming, that little boy is you.


Biography or autobiography, people are going to keep copying sections of it for different purposes. But it’s still yours.

Get it, Check it, Use it! Easy access to your health records puts you in control of decisions regarding your health and well-being.

The ASTP/ONC

In the end, we must turn away from the abstract and distant disputes between entities who have no idea who we are, and yet handle our personal information daily. It will probably always feel like an invasion of privacy, no matter how many safeguards are in place. Equally, at some point we have to reckon with the fact that we don’t choose the patient engagement tools we use—Mrs. Healthcare Provider does, so that’s why Macy’s caters to her, not us.  

But you know what? We do get to look at what we’ve been given and decide that it’s not good enough, and some of those folks are listening. So my next post will be much less structured and much shorter (it had better be!), but much louder. I plan to yell about some things– perhaps you’ll join me?

  1. https://media.market.us/electronic-health-records-statistics/#:~:text=As%20of%202021%2C%20approximately%2093,of%20meeting%20meaningful%20use%20criteria. ↩︎
  2. https://www.healthit.gov/topic/health-it-and-health-information-exchange-basics/what-hie ↩︎
  3. https://www.thinkbrg.com/insights/publications/hss-announces-reorganization-astp-onc/ ↩︎
  4. https://www.epicshare.org/perspectives/using-sdoh-data-to-achieve-policy-change ↩︎
  5. https://www.epic.com/software/safety-net/ ↩︎
  6. https://www.mychart.org/Features ↩︎
  7. https://www.healthcareitnews.com/news/can-epic-athenhealth-play-nice ↩︎
  8. https://www.athenahealth.com/resources/blog/interoperability-interoperability-obstacles ↩︎
  9. https://behavehealth.com/blog/2023/7/4/athenahealth-causes-big-problems-for-behavioral-health-group-practices-and-outpatient-programs ↩︎
  10. https://www.wolterskluwer.com/en/solutions/uptodate/ ↩︎
  11. https://www.wolterskluwer.com/en/solutions/uptodate/about/ehr-integration ↩︎
  12. https://www.wolterskluwer.com/en/expert-insights/why-patient-empowerment-matters ↩︎
  13. https://www.wolterskluwer.com/en/expert-insights/the-patient-empowerment-framework ↩︎
  14. https://www.particlehealth.com/particle-navigator ↩︎
  15. https://www.particlehealth.com/blog/epic-systems-stranglehold-on-u-s-medical-records-harms-patient-care-lawsuit ↩︎
  16. https://lookerstudio.google.com/u/0/reporting/7e67d31c-67ba-4e53-8963-3e544f7b6360/page/p_uq5np2rznd?s=gJzJtksjC5k ↩︎
  17. Full disclosure– HC is my former employer, though all comments (and mistakes) here are exclusively mine. ↩︎
  18. https://upfronthealthcare.com/resources/patient-activation/how-psychographic-segmentation-can-help-transform-healthcare ↩︎
  19. https://fastercapital.com/topics/ethical-considerations-and-privacy-concerns-in-psychographic-segmentation.html?utm_source=chatgpt.com ↩︎
  20. https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2783&context=etd&utm_source=chatgpt.com ↩︎
  21. https://hcatwebsite.blob.core.windows.net/success-stories/CaseStudy_Providence_Total-Joint-Replacement_Twistle-by-HCAT.pdf ↩︎
  22. https://techcrunch.com/2012/06/04/mr-obama-tear-down-this-walled-garden/ ↩︎

Out of our minds: externalized cognition and UX design

Out of our minds: externalized cognition and UX design published on No Comments on Out of our minds: externalized cognition and UX design

Imagine an app that feels like it thinks along with you, rather than for you. Instead of simple automation, it selectively offloads tasks requiring significant mental effort—the ones that slow you down. That’s a well-known concept in UX called “cognitive offloading,” usually referring to intuitive design of the type that Steve Krug wrote about in Don’t Make Me Think. But a few days ago, product designer Tetiana Sydorenko published an interesting article titled AI and cognitive offloading: sharing the thinking process with machines, in which she describes a model of thinking about the process of cognition itself and how it applies to UX design.

Thinking outside the brain’s boundaries

More specifically, she’s talking about where cognition is located. External mind theories hypothesize that cognition isn’t confined to an individual mind, but rather is more like an emergent property arising from a network of humans, the tools they use, and the artifacts they keep, passing down knowledge from prior generations. In this case the externalized mind takes the form of distributed cognition, as anthropologist Edwin Hutchins described it in his 1996 book Cognition in the Wild.

Hutchins observed that crew members of a navy ship navigated their way across the ocean as a team so integrated with each other that no individual crew member knew, or in fact needed to know, the entire route themselves. He pointed to each crew member functioning as a cognitive resource for each other, as well as the tools that they relied upon constantly such as a slide rule.

She frames Hutchins’s thesis in this way:

Hutchins proposed that to truly understand thinking, we must shift our focus. Rather than isolating the mind as the primary unit of analysis, we should examine the larger sociotechnical systems in which cognition occurs. These hybrid systems blend people, tools, and cultural artifacts, creating a web of interactions that extend far beyond any single individual.

Though Hutchins published his book in the mid-’90s, Sydorenko points out that the advancement of AI is “revolutionizing the way we think and work by transforming how cognitive processes are distributed. Computers and AI systems take on repetitive and tedious tasks — like calculations, data storage, and searching — freeing humans to focus on creativity and problem-solving. But they don’t just lighten our mental load; they expand and enhance our cognitive abilities, enabling us to tackle problems and ideas in entirely new ways. Humans and computers together form powerful hybrid teams.”

Distributed Cognition in the Wild

Sydorenko then walks readers through a few examples of apps that she finds particularly excel at using AI (and the AI we’re talking about is machine learning, more specifically generative AI, throughout) to manage cognitive offloading.

Probably the best example of distributed cognition in the list for me is Braintrust, an app designed for professional networking that uses AI to help employers create job listings.

Braintrust exemplifies how machine learning can act as a cognitive partner within a distributed system. By automating the creation of job descriptions, the platform reduces cognitive load while enabling collaboration between users and AI. This approach allows employers to focus on tailoring job roles while the AI handles the repetitive groundwork. It also highlights the power of distributed cognition: tasks are shared across humans, machines, and tools to produce results more efficiently and effectively than any one component could achieve alone.

The common theme throughout the apps that Sydorenko visits on her tour is cognitive offloading, of course, and apart from Braintrust (professional networking) she walks us through the Craft app (note-taking and idea-generating), Red Sift Radar (internet security), Relume (sitemap and wireframe generation), Airtable (business apps), Jasper (marketing), and June (product analysis).

These tools exemplify “cognitive augmentation,” a term that captures their ability to enhance and extend human thought processes, which I think is more accurate than “AI-assisted productivity” or something along those lines, because it gets at cognitive distribution while also suggesting a more dynamic interaction between human cognition and AI, now and in the future.

Backtracking a bit

But I want to step back and take a look at a couple of journal articles Sydorenko links in her piece, where she says “That was the ’90s. Today, modern information technologies, especially AI, are revolutionizing the way we think and work by transforming how cognitive processes are distributed.”

The first article, AI-teaming: Redefining collaboration in the digital era, examines how AI technologies influence human decision-making and perception. Interestingly, it notes some ambivalence about AI’s role, suggesting that the integration of AI in decision-making is not universally received as a positive development. This nuance complicates the narrative of seamless cognitive offloading that Sydorenko presents.

The second article is Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future, and sticking with the “in the wild” metaphor (Hutchins, after all, spent a lot of time in Papua New Guinea in his early career), it’s a lot of backpack to unpack.

It reframes distributed cognition in a modern AI context, focusing heavily on cognitive offloading while largely omitting its anthropological origins (no mention of Hutchins), but it nevertheless offers a robust framework for understanding cognitive offloading that deserves to be unpacked.

It also mentions some risks accompanying cognitive offloading that are relevant to this discussion:

In three experiments, Grinschgl et al. (2021b) observed a trade-off for cognitive offloading: while the offloading of working memory processes increased immediate task performance, it also decreased subsequent memory performance for the offloaded information. Similarly, the offloading of spatial processes by using a navigation device impairs spatial memory (i.e., route learning and subsequent scene recognition; Fenech et al., 2010). Thus, information stored in a technical device might be quickly forgotten (for an intentional/directed forgetting account see Sparrow et al., 2011; Eskritt and Ma, 2014) or might not be processed deeply enough so that no long-term memory representations are formed (cf. depth of processing theories; Craik and Lockhart, 1972; Craik, 2002). In addition to detrimental effects of offloading on (long-term) memory, offloading hinders skill acquisition (Van Nimwegen and Van Oostendorp, 2009; Moritz et al., 2020) and harms metacognition (e.g., Fisher et al., 2015, 2021; Dunn et al., 2021); e.g. the use of technical aids can inflate one’s knowledge. In Dunn et al. (2021), the participants had to answer general knowledge questions by either relying on their internal resources or additionally using the internet. Metacognitive judgments showed that participants were overconfident when they are allowed to use the internet. Similarly, Fisher et al. (2021) concluded that searching for information online leads to a misattribution of external information to internal memory.

To summarize: cognitive offloading carries the risk of forgetting how to navigate without Google Maps, losing documents on Google Drive that you were going to use for a blog post, getting lazy about learning new skills because AI can do them for you, and becoming cockier about your own abilities because you forgot how much AI assistance has helped you out.

But none of those risks apply to anyone reading this post, or indeed its author, right? Of course not.

I’m reminded of the lawyers who were sanctioned last year for using ChatGPT to write a legal brief for them, unaware that it had cited cases that don’t exist.

But on the other hand, Eleventh Circuit Judge Kevin Newsom wrote a piece last May about how generative AI can assist judges in finding insights on the “ordinary meanings” of words. That may sound bizarre, until you recall that the term “legalese” exists for a reason: legal practitioners can become so entrenched in writing and speaking about the law that they fail to notice how disconnected their language is from a normal conversation between human beings.

Large Language Models, of course, can help you to write in many voices and levels of jargon, making them incredibly useful for the kinds of apps Sydorenko describes. But does that mean distributed cognition provides the best design principles for apps of all kinds, at least when it comes to generative AI?

A Trusted Tour Guide

The risks associated with cognitive offloading may contribute, at least in part, to the falling public trust in AI. According to a 2023 global study, 61% of us are wary of trusting AI systems.

An article in Forbes citing that global study noted:

it isn’t just about trusting AI to give us the right answers. It’s about the broader trust that society puts in AI. This means it also encompasses questions of whether or not we trust those who create and use AI systems to do it in an ethical way, with our best interests at heart.

Those of us who use AI to create content need to remember that we’re both expecting trust and being expected to be trustworthy– while most of us don’t design AI directly, we’re nevertheless responsible for what we make with it.

In that sense, we’re both consumers and creators of the famous “black box” that AI, particularly generative AI, represents. I can’t see whether you write something with AI, so not only don’t I know if you can write, but I don’t know whether you fact-checked the content you’re sharing. In the other direction, if I’m writing with AI assistance, I don’t know why the AI is making the suggestions and changes that it does.

But my aim here is not to get bogged down in questions about trust– rather, it’s to suggest an alternative external cognition model for UX design that incorporates AI, and that’s what philosophers Andy Clark and David Chalmers called, in their famous 1998 paper, the Extended Mind.

Clark and Chalmers envisioned the extended mind as a personal notebook, a helpful assistant, a brainstorming partner. The extended mind, to them, is active, integrated, and indispensable.

“Now consider Otto,” they begin in a now-famous thought experiment:

Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory. Today, Otto hears about the exhibition at the Museum of Modern Art, and decides to go see it. He consults the notebook, which says that the museum is on 53rd Street, so he walks to 53rd Street and goes into the museum. Clearly, Otto walked to 53rd Street because he wanted to go to the museum and he believed the museum was on 53rd Street. . . it seems reasonable to say that Otto believed the museum was on 53rd Street even before consulting his notebook. For in relevant respects the cases are entirely analogous: the notebook plays for Otto the same role that memory plays for [someone else]. The information in the notebook functions just like the information constituting an ordinary non-occurrent belief; it just happens that this information lies beyond the skin.

If Otto were a computer, we’d call the notebook his “external hard drive.” In such a case, there wouldn’t be much contention if you called the HDD part of the computer, especially if the two remained connected and the computer regularly “consults” the HDD for information.

Because it’s all just data, right? Well, no. In the extended mind, there is what Clark and Chalmers call “active coupling.” The relationship between the mind and external tools must be causal, active, and continuous. It needs to reliably augment mental functioning.

The difference between distributed cognition and extended mind could be summed up, at a very high level, in this way:

Feature           | Extended Mind                        | Distributed Cognition
Focus             | Individual cognition + tools         | Group cognition + social systems
Cognitive unit    | Individual + artifact                | Network of agents, tools, and systems
AI relationship   | AI as a personal cognitive extension | AI as part of a socio-technical system
Level of analysis | Personal (one user + one tool)       | Systems-level (multi-agent, cultural)
Emphasis          | Integration of tools into self       | Collaboration, social interaction

But what does an app that uses extended mind as a basis for design, rather than distributed cognition, actually look like? Here’s an example:

Ada Health

Ada is an AI-powered health companion that assists users in assessing symptoms and managing health concerns. Its design reflects extended mind principles through the following (a code sketch of the pattern follows the list):

  • User-Centric Interface: The app’s intuitive design ensures accessibility and ease of use, facilitating seamless integration into users’ health management routines.
  • Interactive Symptom Assessment: Ada engages users in a conversational interface, asking personalized questions to evaluate symptoms, thereby acting as an extension of the user’s diagnostic reasoning.
  • Personalized Health Insights: Based on user input, Ada provides tailored health information and guidance, supporting informed decision-making.
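
To make “AI as a personal cognitive extension” concrete, here’s a minimal sketch of what that one-user-one-tool coupling could look like in code. To be clear, this is not Ada’s actual implementation: the class, the follow-up questions, and the logic below are all hypothetical, just enough to show the tool acting as the user’s external, consultable memory, Otto’s-notebook style.

```python
from dataclasses import dataclass, field

@dataclass
class SymptomCompanion:
    """One user plus one tool: the companion keeps the user's running
    health 'notebook' and consults it, Otto-style, on every check-in."""
    user: str
    history: list = field(default_factory=list)  # the external memory

    # Hypothetical follow-up rules; a real product would use a vetted
    # medical reasoning model, not a lookup table.
    FOLLOW_UPS = {
        "headache": "How many days has the headache lasted?",
        "fever": "What was your highest temperature in the last 24 hours?",
    }

    def report(self, symptom: str) -> str:
        """Write the symptom down, then extend the user's reasoning
        with a clarifying question."""
        self.history.append(symptom)
        return self.FOLLOW_UPS.get(symptom, "Can you describe that further?")

    def recall(self) -> list:
        """Look it up rather than remember it, like Otto."""
        return list(self.history)

companion = SymptomCompanion(user="otto")
print(companion.report("headache"))  # prints a follow-up question
print(companion.report("fever"))
print(companion.recall())            # ['headache', 'fever']
```

The point of the sketch is the coupling: the history travels with one user, gets written to on every interaction, and is consulted the way Otto consults his notebook.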

Now for some caveats: Ada isn’t an all-encompassing personal health app. It’s primarily designed to be a symptom checker. But by partnering with healthcare providers, it can be integrated into their systems as their “digital front door to services,” as the Ada site puts it, providing a first point of contact for questions about symptoms, available 24/7 (in fact, they say that 52% of assessments happened outside clinic opening hours). And this may not sound like a direct service to patients, but the site also proclaims that it “helped to redirect over 40% of users to less urgent care.” That’s compared to heading straight to the emergency room, which is also available 24/7, but most people don’t want to go there if they have a viable alternative.

One more quote from the site:

A reputation built on trust

Ada’s fully explainable AI is designed to mimic a doctor’s reasoning. Our approach to safety, efficacy, and scientific scrutiny earns us the trust of clinicians and patients.

93% of patients said Ada asked questions that their doctor wouldn’t have considered

A person’s goal for using the Ada app isn’t cognitive augmentation– they don’t want to be doctors. They want to have access to information and recommendations that doctors can provide (but apparently, in some cases, don’t).

There are other differences between an app that evaluates your health symptoms and one that helps you write marketing copy, of course, and the most striking one is the stakes involved. Not to say that marketing copy isn’t important, but it’s important in the same way an Uber is relative to an ambulance: low risk vs. high risk.

Another difference is that because an app like Ada is healthcare-oriented and high risk, it carries a higher level of trust, which it has to earn. Healthcare apps may be more trustworthy in the eyes of the public than HR apps, but that will hold only as long as the trust patients place in apps like Ada is justified. The context that makes Ada’s design work so much better with extended mind principles is the same context that requires it to exist in the first place.

Two (complementary) paths forward

Hopefully by this point it will be clear that my intent is not to argue with Sydorenko’s post on distributed cognition for cognitive offloading, but to “Yes, and” it. I think that both distributed cognition and the extended mind hold great promise for the design of apps using AI, but which is more appropriate in a specific instance can depend on factors like “systemic vs. dyadic,” “collaboration vs. consultation,” and, as basic as it sounds, “group vs. individual.”

But it’s also not as easy as “high stakes vs. everything else.” To illustrate that idea, I’ll toss out an example of an app that’s high stakes, healthcare-related, and best served by the distributed cognition model: PulsePoint.

PulsePoint is an app that tracks cardiac emergencies, such as sudden cardiac arrest, and notifies nearby users who are trained in CPR to provide assistance. It exemplifies distributed cognition in the following ways (a small code sketch follows the list):

  • It relies on a network of participants: 911 dispatchers, PulsePoint users, and emergency responders. Each plays a specific role in responding to cardiac emergencies. This shared cognitive load mirrors the distributed cognition model, where tasks are spread across multiple agents to achieve a collective goal.
  • The app uses external tools (e.g., smartphones) to provide real-time alerts and locate Automated External Defibrillators (AEDs). These tools, combined with the users’ actions, form a socio-technical system.
  • The system builds on past actions (e.g., training users in CPR, mapping AED locations) to support real-time decision-making during emergencies.
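
For contrast with the Ada sketch above, here’s an equally minimal sketch of the distributed cognition pattern: no single agent holds the whole task, and the system’s job is to route an event to the right combination of people and tools. The responder list, coordinates, and alert radius are hypothetical placeholders, not PulsePoint’s actual data model or API.

```python
import math

# Hypothetical network of agents and tools; not PulsePoint's real data.
RESPONDERS = {
    "cpr_user_a": (37.6889, -97.3361),
    "cpr_user_b": (37.7000, -97.4000),
}
AED_LOCATIONS = [(37.6890, -97.3365)]
ALERT_RADIUS_KM = 1.5

def distance_km(a, b):
    """Rough planar distance; fine at city scale for a sketch."""
    dy = (a[0] - b[0]) * 110.57  # km per degree of latitude
    dx = (a[1] - b[1]) * 111.32 * math.cos(math.radians(a[0]))
    return math.hypot(dx, dy)

def dispatch(event_location):
    """Spread the cognitive load: alert every trained user in range and
    point each one at the nearest mapped AED."""
    nearest_aed = min(AED_LOCATIONS,
                      key=lambda aed: distance_km(aed, event_location))
    return [(name, nearest_aed)
            for name, loc in RESPONDERS.items()
            if distance_km(loc, event_location) <= ALERT_RADIUS_KM]

# A dispatcher (one more agent in the network) reports a cardiac event:
print(dispatch((37.6885, -97.3350)))  # only cpr_user_a is close enough
```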

I’m planning to write more on the design needs of healthcare-related apps in the future, but for now, I hope PulsePoint serves as an example of how members of a collaborative system can work together, via distributed cognition, to achieve amazing results that otherwise would not have been possible.

That’s what “emergent property” means: combining to make something bigger than we could have achieved alone.

Both distributed cognition and the extended mind remind us that no matter the approach, our tools and networks amplify what we can achieve. Sometimes it takes a village, and sometimes it takes a friend. I know how corny that sounds, but sometimes it’s true.

Leveling up in the DOOM pile that was 2024


2024 was…a learning experience. Every day I learned something, whether I wanted to or not.

It was a crash course in elder care, and an object lesson in patience, resilience, and creative problem-solving. It was, and is, a DOOM pile (a pile which you Didn’t Organize, Only Moved) of a life.

The experience

A DOOM pile is a pile of random stuff that you have to reckon with, sooner or later. This reckoning may (it will) be painful, but you have to do it if you ever want to know where things belong, in order to put (and keep) them in those places. Dealing with the items in a DOOM pile is a never-ending task, because the piles are constantly regenerating. And you’ll have to keep confronting them, lest you find yourself at the bottom of one.

Where, for example, was I supposed to put the complicated emotions surrounding my mother’s trip to the emergency room– three times– in a state of confusion, to the point of failing to recognize me? Where was I to put my dismay at the realization that she would deny that these trips ever happened, because after all, she didn’t remember them?

Where was I to put feelings of overwhelm that accompanied my election as the president of the board of a non-profit? MakeICT is a “makerspace,” as in, a community of people with vastly different backgrounds who, nevertheless, occupy the same space and have to share their toys– I mean, tools– and get along. And my purpose became one of unifying the seven representatives of these makers, in order to get sh*t done. Talk about hands-on learning.

Where would I deposit the creative frustration and fatigue that rained down on my artistic projects and threatened to drown all motivation to see them through?

Then there were those rabbit holes into which I dove without hesitation, the ones that became my focus of attention and deliberate learning, creating a sense of relief and accomplishment that I wasn’t finding elsewhere– what of those? I picked these subjects carefully from the bottom of the DOOM pile, moving them to the top like some bizarre game of Jenga, in which the pile keeps growing but never falls over.

But I kept throwing myself at the pile, in a constant effort to slow down its growth, if I could not truly end it. Picking up heavy things builds muscles, as I’ve learned from a hobby of welding large bits of metal to each other into hopefully pleasing shapes. So I picked up the heavy stuff, along with everything else– and it was heavy, man.

It still is. It always will be.


The focus

One rabbit hole has been learning Python, a versatile language that turns up everywhere from quick scripts to games to AI. It’s a common tongue, but a relatively new one for me, though my aspirations for it include designing and building an “AI robot” with its own LLM, communicating with and reacting to the world around it.

That’s a ways down the road–really far down the rabbit hole, to put it lightly (and thoroughly abuse the metaphor). But along the way, my hope is to design video games– dialog-heavy games, like the ones I love to play. I want to grow my Python skills and show them off through storytelling, even to create interactive narratives that require prompt engineering to set the scene and develop NPCs (non-player characters) with personalities.

Then the inspiration hit to translate what I’ve been learning into an experience– not just a game, but a way to explore agency. In his book Games: Agency as Art, philosopher C. Thi Nguyen describes games as means of exploring the “alternative agencies” created by game designers, experiences that are fundamental to the concept of gaming itself.

But I am not (yet) a game designer; I need to develop my Python skills further. That requirement, though, is no obstacle to creative ideation. The idea of depicting the last year of my life, in game form, ambushed me like a cat around the corner. The cat doesn’t wait for you to prepare before it pounces, and neither did this idea.


The game (currently untitled)

What I imagine is part visual novel (a linear story with its own exposition), part point-and-click adventure (in which the player chooses items to explore and dialog options for conversations), and part minigame collection (simple games within a game).

The game would progress from scene to scene: snippets of my life, a series of vignettes that together make up the emergent property of the year’s experience. Each vignette reflects a different theme of growth, and each level is labeled accordingly.


Hospital room (resilience level)

We begin in a hospital room, generic and spartan as hospital rooms inevitably are. It’s January, but you wouldn’t know it except for the small collection of soda cans sitting on the windowsill. That ledge is like a little refrigerator, for the same reason that nobody, ever, can be observed sitting in the chair next to it.

Mom sits in the hospital bed, upright, with the room’s only table (the kind that slides under the bed) extending over her lap. Clicking around the room conveys this exposition, as well as why there’s a blanket draped over the reclining chair (because you slept there last night), a pizza roll on the hard cover of the hamper (because, again, there are no other tables), and a forgotten polystyrene coffee cup, because getting coffee is the only opportunity for adventure beyond the confines of the room.

The only opportunity, that is, until your mother wants a yogurt parfait. It didn’t come with her breakfast, so you’ll have to fetch it from the cafeteria. This is where the minigame comes in– the maze, in which you navigate the circuitous halls of the hospital on your way to the cafeteria, then back again. It’s a timed exercise, because your mother has an echocardiogram scheduled in half an hour. Success means getting the little plastic cup to her on time, and without forgetting the spoon.
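
Since this game doubles as my Python practice, here’s roughly how I picture the parfait run working under the hood: a breadth-first search checks whether the round trip is even winnable within the move budget. The maze layout, the budget, and the win conditions are placeholder values I invented for this sketch.

```python
from collections import deque

MAZE = [
    "S.#....",
    ".#.##.#",
    ".#....#",
    "...##.C",   # S = Mom's room, C = cafeteria, # = wall
]
MOVE_BUDGET = 24  # "half an hour," measured in steps

def shortest_path(start, goal):
    """Breadth-first search over the grid; returns a step count or None."""
    rows, cols = len(MAZE), len(MAZE[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), steps = queue.popleft()
        if (r, c) == goal:
            return steps
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and MAZE[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return None

def parfait_run(grabbed_spoon: bool) -> str:
    one_way = shortest_path((0, 0), (3, 6))
    if one_way is None or 2 * one_way > MOVE_BUDGET:
        return "Too late: the echo tech beat you back to the room."
    if not grabbed_spoon:
        return "On time, but no spoon. You lose anyway."
    return "Parfait delivered, spoon and all. Level cleared."

print(parfait_run(grabbed_spoon=True))
```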


MakeICT metal shop (problem-solving level)

The next scene opens in a metal shop, a somewhat cramped space that houses three welders, an anvil, a plasma cutter, a grinding wheel, and a work table. That table is currently occupied by steel grids so large that they extend well past the edges of the table. Clicking around the room informs you that this is an area in MakeICT, a former elementary school whose classrooms were converted into studios for ceramics, textiles, woodworking, and so on. In this case, you’re in the “hot shop.” A welding helmet sits on top of the grids on the work table.

The hot shop is also occupied by an NPC, whose presence contributes to the feelings of claustrophobia. Your friend, a long-time member of MakeICT, explains that there was apparently an act of vandalism that left the building’s air conditioning window units slashed and thereby destroyed. Stabbed by a knife, it looks like. The institute has purchased new window units from its struggling budget, and they will need cages installed over them, to protect them from the same fate.

The minigame tasks you with welding the pieces of grating together to create one of these cages. Your vision is hampered by the welding helmet you must wear, and if you don’t join the pieces of the cage in the right places, it will fall apart and you must start all over again. Success means completing a full cage, which realistically means having to weld another one, but the game allows you to end here.


ICT Comic Con (creativity level)

You see before you the large table of a vendor’s booth, strewn with the makings of a pair of animatronic wings. Behind the table are two NPCs expressing concern that your wings will never function properly and you’ll be unable to wear them. Your costume is Tilda Swinton’s character, the angel Gabriel from the 2005 horror/fantasy film Constantine. Unfortunately, without Gabriel’s enormous feathered wings, your costume becomes an unrecognizable arrangement of shredded white clothing.

The minigame, therefore, entails assembling the wings without making the feathers either so sparse that they’re barely recognizable as wings, or so numerous that the wings become too heavy for the servos in your harness (which provide the wings’ power and movement) and flop dejectedly to the floor. Success means reaching the happy medium between weight and beauty; layer the feathers too heavily too many times and you lose the game and must begin again.

Upon achieving success, you receive a message describing how the remainder of the day is spent walking the aisles of the con, with people complimenting your costume and asking to take pictures with you.


Board meeting (leadership level)

In a large multipurpose room at MakeICT you see five rows of plastic chairs, occupied sparsely by a group of thirteen people of various ages in decidedly casual attire– six of whom are fellow board members, you discover by clicking amongst them. This audience stares at you expectantly, waiting for you to say something they can react to in some way. Their faces communicate various levels of engagement, from “barely able to sit still” to “probably asleep.”

This minigame is made up entirely of dialog options, as you attempt to keep the conversation on track. Success comes when you receive a prompt to bring the issue to a vote, but that option appears only after a series of interactions with NPCs in which you’re given the choice to encourage them to say more, or to shut them down (politely!) when they try to change the subject.

Encouraging on-topic discussion allows you to move closer to a motion, while allowing members to digress takes you further away, as measured by a “persuasion meter” at the bottom of your screen. Members do not necessarily wait to be clicked on to speak– on some occasions they force the conversation, in the form of a popup message that cannot be dismissed without choosing a response, which carries the risk of encouraging more popups if you fail to shut them down (again, politely!).

The persuasion meter starts at 50%, but if it drops to 0%, the audience will shout at each other without you having any option to address it and you’ll lose the game.
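
In code, this whole level boils down to a small state machine wrapped around that meter. Here’s a rough sketch; the point values, the motion threshold, and the popup odds are all hypothetical tuning numbers, not a finished design.

```python
import random

METER_START, MOTION_AT, MELTDOWN_AT = 50, 80, 0

def run_meeting(choices):
    """choices: a sequence of 'encourage' (keep it on topic) or
    'allow' (let the digression happen)."""
    meter = METER_START
    for choice in choices:
        meter += 6 if choice == "encourage" else -9
        # An NPC may force the conversation with an undismissable popup;
        # failing to shut it down (politely!) costs extra ground.
        if choice == "allow" and random.random() < 0.3:
            meter -= 5
        meter = max(0, min(100, meter))
        if meter >= MOTION_AT:
            return "Motion to vote! The board gets sh*t done."
        if meter <= MELTDOWN_AT:
            return "The room dissolves into shouting. Game over."
    return f"Meeting adjourned at {meter}%: no motion, but no meltdown."

print(run_meeting(["encourage"] * 5 + ["allow", "encourage"]))
```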


CBP One app research (critical thinking level)

The scene before you is a desk with a computer and a wide screen monitor, and you hold a tablet. The monitor displays a browser window with an image of a hand holding a smart phone, and the CBP One logo on its screen. Clicking around on this image gives you a brief background of the CBP One app and how it is used today. On the other side of the monitor’s screen is a search window, with links displaying the headlines of ten articles about CBP One. You’re performing research on the app in order to write your own article.

The tablet screen is blank except for instructions on how to play the minigame, which requires you to choose the two headlines that link to the articles containing the most comprehensive and reliable information about CBP One. Once you have done so, and have had the opportunity to read those articles, you are given a multiple-choice quiz on the monitor.

The answers to the quiz questions are contained within just two of the ten articles. If you chose those two, you’ll have the information necessary to answer correctly. If you didn’t, you’ll be asked the same questions, but the articles you chose will have fed you misinformation, forcing you to either answer with that misinformation or guess. The quiz covers the app’s appearance, performance, and criticisms.

A grade below 70% on the quiz means that you lose: your own article contains misinformation, which spreads among the readership and contributes to conspiracy theories. Success in this minigame means scoring 70% or higher, resulting in your article containing, and spreading, useful information that tells readers the truth about the CBP One app.

Messy desk (curiosity level)

The scene before you is a cluttered desk, a workspace across which are strewn printouts of papers with titles like “Invisible Agents in AI Design” and “Empathy as a Metric for AI Success”, as well as a thick document emblazoned with the title “The cognitive origins of soul belief: Empathy, responsibility, and purity”. Behind the desk is a large whiteboard with messages scribbled across it.

Clicking on items on and around the desk reveals reflections from research. Clicking on the white board, for example, displays the message “Agency isn’t just about appearance, but behavior. You can put a smiley (or aggrieved) face on a robot, but nobody will be fooled into thinking that it understands something like pain. Maybe the key is not to try and make a ‘pain bot’ that impersonates a nurse or doctor, but to transparently be a robot gathering information to assist doctors and nurses.”

Clicking on the laptop displays an infographic that says “60% of respondents say they don’t want healthcare providers to rely on AI.”

Clicking on the dissertation (the thick document) displays the message “Our imaginative projection of other people’s agency is possible because we are able to recognize in them a source of very real but invisible thoughts and feelings which affect how they behave in the world.”

The minigame displays an image of a prototypical “pain bot” and asks players to assign features to it, with the goal of achieving the best possible balance of efficiency, empathy, and trustworthiness. You must make selections that favor specific traits, each of which involves unseen tradeoffs in terms of points gained or lost within each area. Examples:

  • Add facial recognition for emotions? (+5 Empathy, -3 Trust)
  • Include physiological data sensors? (+3 Efficiency, -2 Empathy)
  • Use simple, transparent algorithms? (+5 Trust, -2 Efficiency)

After making these choices, you’re able to see your total scores in each category, and an assessment of how well they are balanced. Success means achieving maximal balance, with the message “Next step: secure funding and trust!” If you fall short, your parting message will be “Stakeholders worry about your pain bot’s reliability and adoption.”
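
Mechanically, this is the most data-driven level of the bunch, so here’s a sketch of how the scoring could work. The three features and their point values come straight from the examples above; the balance metric (low spread across the three categories) and its threshold are hypothetical game rules I made up for illustration.

```python
from itertools import combinations
from statistics import pstdev

FEATURES = {
    "facial emotion recognition": {"empathy": +5, "trust": -3},
    "physiological data sensors": {"efficiency": +3, "empathy": -2},
    "simple, transparent algorithms": {"trust": +5, "efficiency": -2},
}

def score(selected):
    """Sum each category's points across the chosen features."""
    totals = {"efficiency": 0, "empathy": 0, "trust": 0}
    for feature in selected:
        for category, points in FEATURES[feature].items():
            totals[category] += points
    return totals

def verdict(totals, max_spread=2.0):
    """Balance means a low spread between categories (hypothetical rule)."""
    if pstdev(totals.values()) <= max_spread:
        return "Next step: secure funding and trust!"
    return "Stakeholders worry about your pain bot's reliability and adoption."

# Try every possible build and print its outcome.
for n in range(1, len(FEATURES) + 1):
    for build in combinations(FEATURES, n):
        totals = score(build)
        print(", ".join(build), totals, "->", verdict(totals))
```

With these (made-up) numbers, only the full three-feature build balances out, which suits the level’s message: every trait you favor costs you somewhere else.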


The future

Playing this game invites you to step into the “alternate agency” of my year, with all of its challenges to grow and develop.

Each vignette represents tackling one part of the DOOM pile, chipping away at it and diminishing its ability to intimidate.

Each small story is a stand-in for a much larger experience that I hope to have conveyed in a way that doesn’t require playing a fully fleshed-out game in order to grasp the concept. But maybe you can still see the scenes in your head as clearly as I can, despite my tremendous advantage of having lived through those experiences.

Beyond this little experiment of an autobiographical game concept, I’m authoring this post to show, not just tell. This game isn’t just a way to reflect on my year—it’s a way to explore the intersection of storytelling, agency, and design, skills I’m honing as I delve deeper into Python and interactive narratives.

Ultimately, the DOOM pile of life isn’t just a subjective experience—it’s the mountain we all climb to make sense of our stories. For me, learning and self-expression are tools to chip away at that eternal mountain, revealing its contours and helping others see their own path forward.

Because we cannot act upon what we cannot see. Facing the DOOM pile, head on, is the first step in doing something about it.

Sora and the painbot


Sora is a video generation model that translates text to video, a product of OpenAI released earlier this month, and a painbot is a concept I hatched a few days ago while talking to ChatGPT about AI empathy and the potential for AI to recognize, record, and react to human pain.

My initial thought was that the painbot could be trained on thousands of interactions between patients and their doctors discussing pain, with the idea of recognizing trends that thread through these discussions and thereby become a quasi-expert in pain without ever having to experience it.

I imagined this painbot in an emergency room setting, replacing the process in which patients are asked to quantify their pain on a scale of 1 to 10, or by selecting a face icon from a row of five or six cartoon faces that indicate a range between “rapturously happy” and “about to faint from the torture.” A more refined evaluation could surely be conducted by AI, freeing up the frenzied medical staff for their more pressing responsibilities.

But this painbot could present a physical obstacle, because the last thing ER staff need is a robot obstructing their efforts to keep someone alive.

I realize how much the public distrusts AI, to the point that 60% say they wouldn’t be comfortable with a doctor “relying” on AI to provide medical care. In another study, subjects felt “heard” when given emotional support messages, but the impact diminished when it was revealed that an AI was what “heard” them.

But what if we could get around that? In other words, what if a painbot could:

  • Stay out of the way of ER staff
  • Capture and record images indicating facial expressions and body postures that indicate pain/distress
  • Focus attention on patients in the ER when staff can’t be available
  • Objectively evaluate pain however possible
  • Complement medical staff while clearly operating with a specific purpose, rather than trying to take over anyone’s job

With those goals in mind, I ventured onto Sora.com with the aim of depicting such a bot in a video.

My first attempts, at best, depicted the painbot as a recording device for doctors.

A lot of young white women with straight brown hair stared past the painbot impassively. Most of them were rendered as medical staff themselves, regardless of how much I emphasized their patient status.

No matter how much I described a patient as being in pain, the most I could get from a woman was a furrowed eyebrow.

Once I made the patient male, I finally got pain expressed in an interview between the painbot and the patient. This is the clearest expression of pain that I got, and it’s good. Unfortunately, however, the most the painbot would do is silently bear witness to the pain from the background.

This is the first and only time I got a black patient. Have to admit, though, the lighting is amazing.

This may speak to the capacity of AI to actually measure pain based on facial expression, but I don’t want to read too much into that.