
No border wands, just brutality: what the death of the CBP One app portends


It’s infuriating that I have to defend this profoundly unjust yet unfairly maligned, rights-violating, prison gate-keeping, Hollerith-ass, bureaucratic government-enforced insult to human dignity in app form, but here we are.


On Inauguration Day, January 20th, one of the first things Trump did was cancel the CBP One app— an app developed by Customs and Border Protection used by undocumented immigrants to secure an appointment at the southern border of the United States and thereby enter the country legally– most likely after JD Vance told him that it’s an “open border wand” that turns illegal immigrants into legal ones.1

What was that Arthur C. Clarke quote? “Any sufficiently advanced technology is indistinguishable from magic”?

I wouldn’t call CBP One advanced technology per se, but Vance clearly thinks of it as magical– very handy, because then you don’t have to learn how it actually works.


As I have documented in detail, the app works in much the same way as any app used to navigate entry into or exit out of the country. It has been a legal mandate for the U.S. to record the entry and exit of foreign nationals since 1996-ish. The CBP One app uses facial recognition technology (FRT), tested initially (for this purpose) on air passengers traveling through checkpoints on their way to a flight.

The way it works is that a traveler gets their photo taken (usually a passport photo), which is then converted to a template used to check their identity against future images taken of them while traveling into/out of the country.

The template can also be used to identify travelers from amongst a group, for example from a flight manifest, to determine whether the person in the photo is in that group– and if so, which one is them. The engine that drives this process is called the Traveler Verification Service, or TVS.
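For a concrete picture of what "checking a template" means, here is a minimal sketch of the two matching modes just described: a 1:1 verification ("is this the enrolled traveler?") and a 1:N identification ("which person on the manifest is this?"). This is a conceptual illustration only, not CBP's actual TVS code; the step that turns a photo into a numeric template is assumed to exist, and the similarity threshold is made up.

```python
# Conceptual sketch only -- not CBP's actual Traveler Verification Service code.
# Assumes a face-recognition model has already turned each photo into a
# fixed-length numeric "template" (an embedding vector); that step isn't shown.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 check: is the person at the camera the one enrolled with this template?"""
    return similarity(live, enrolled) >= threshold

def identify(live: np.ndarray, gallery: dict, threshold: float = 0.6):
    """1:N check: which template in the gallery (e.g. a flight manifest), if any,
    best matches the live capture? Returns the matching ID or None."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = similarity(live, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```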

Or this same biometric technology (identification based on physical distinguishing characteristics) could be used to capture images of migrants in Central Mexico, which are then submitted to CBP along with their biographical information.

Then the images and information would be compared to vast databases maintained by the DHS to search all encounters at the border since the beginning of time (effectively) and check whether the migrant in question was involved in any of them. The image is further used for a “liveness check”: verifying the migrant’s identity after the appointment has been secured, to ensure that they’re the same person who made the appointment.

Why am I making this comparison?

  • To show how the technology used in the CBP One app mirrors what was already in use for, and was even initially tested on, citizens of other countries visiting the U.S. by air.
  • To show how rigorous the comparison process is– to the point that when it’s used on Americans,2 they become concerned for their own privacy and how that data is gathered and used. As they should be, frankly.
  • To show how, therefore, the claims that CBP One is somehow being used to allow “otherwise impermissible,” “illegal,” or even “criminal” immigrants into the country are unmitigated codswallop.


In fact, this app was, until recently, effectively the only way to enter the country legally.3 Even for asylum seekers, who are not just permitted but required, under U.S.4 and international law, to be physically present within the United States to apply for asylum, and have been since 1967.

That hasn’t been acknowledged in America for an extremely long time, but nevertheless– as rights become further and further violated, it becomes increasingly important to remember what they are.

But let’s snap back to the present, where CBP One,5 or at least its scheduling functionality (has it been used for much else? Hard to say), was shut down as of January 20 at noon.

And now we have a new DHS-developed technology– a registry6 that immigrants staying in the country for 30 days or longer will be required to sign up for, providing biometric data in the form of fingerprints, to facilitate their “mass self-deportation.” Because yes, that’s the goal, according to a DHS statement7 issued Tuesday.

Compelling mass self-deportation8 is a safer path for aliens and law enforcement, and saves U.S. taxpayer dollars, in addition to conserving valuable Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) resources needed to keep Americans safe.

Here’s the part that nearly gave me an aneurysm, from newly-installed Secretary of DHS Kristi Noem:9

We’re just going to start enforcing it to make sure [the undocumented immigrants] go back home. And when they want to be an American, then they can come and visit us again.

I have some questions for Ms. Noem.


What does she think migrants are here to do in the first place? Has she tried asking them if they want to be Americans?

Has she offered them a route to citizenship? Did she send the invitation to “come and visit us again” out on pretty stationery, with an enclosed coupon for Cracker Barrel?

How are they supposed to “come visit us again” after they’ve been “mass deported” back to the same countries they tried to escape due to imminent threat to their lives and well-being, and the only way to “come back to visit” legally has just been obliterated before their eyes?

Did she tell them the Cracker Barrel’s door is locked with a deadbolt?


Does she know who said this, in 2018?10

Under this plan, the illegal aliens will no longer get a free pass into our country by lodging meritless claims in seeking asylum. Instead, migrants seeking asylum will have to present themselves lawfully at a port of entry. So they’re going to have to lawfully present themselves at a port of entry. Those who choose to break our laws and enter illegally will no longer be able to use meritless claims to gain automatic admission into our country. We will hold them — for a long time, if necessary.

Did he mean it?

Does he remember saying it?

Does it matter?


The First Lady broke immigration laws,11 as did the Co-President,12 but nobody’s demanding their fingerprints and encouraging them to “self-deport.”

And yet undocumented immigrants are forced to live in a tautology where they will be “illegal” no matter what they do, while the shining promise of existing in America legally isn’t just out of reach, but is dangled teasingly over their heads by the government of the same country with a mandate to welcome them in– the poor, the tired, the huddled masses yearning to breathe free. The people seeking a better life than they could have in the “shithole countries”13 (remember that?) from whence they came.

While I might consider the CBP One app to be a cruel joke, when it was first used to assist migrants, it was as a way for NGOs (non-governmental organizations) to locate those who had been forced into Mexico by the previous Trump administration as part of the so-called Migrant Protection Protocols, and bring them back to the border for a hearing. It was a tool used for collaboration between DHS and NGOs, to make sure that at least some of the migrants who have a right to enter the country were allowed to exercise it.

It was a way to be slightly less gratuitously cruel to people, existing in a state of greater desperation than anyone in DHS personally could fathom, who just want to find safety and create a better life.

And now that’s gone, everything’s made up, and the law doesn’t matter.


But maybe I can spend the second half of this post saying something constructive. Some things that might actually help:

  • Stay informed and make good judgments. I know, I know, it’s a horrorshow that can be unbearable to watch/read/listen to. But for example, it’s important to know when ICE isn’t going to raid your local church or school because they’re not allowed to raid “sensitive locations,” and you can avoid raising a panic unnecessarily. If you know when to be scared, and how much, that alleviates some of the “scared at 11, 24/7” feeling that will drive you into the ground.
  • Help out the organizations doing the work. I strongly recommend the American Immigration Council, who are working their asses off to seek justice for migrants and deserve every dollar you care to donate. Sign up for a newsletter so you don’t have to keep wading through the shouting and rhetoric to learn what’s actually happening with immigration.
  • Show up for “sanctuary policies” at city council meetings and anywhere in your community having discussions on that topic14 to learn what protections those policies can provide for migrants in your area. Remind people, if necessary, that sanctuary jurisdictions are in full compliance with federal law. Don’t let your local government and law enforcement get bullied into doing ICE’s dirty work.
  • Remind people of how immigration is supposed to work. How America is founded on immigration, and how it was once possible to just “show up” at Ellis Island, get checked out by a doctor, and saunter your way in. Show them this video of George H.W. Bush and Ronald Reagan arguing, in a debate at the League of Women Voters in 1980, about who had a more compassionate and reasonable policy for how to make migrants feel welcome in America, and watch their heads explode.
  • Find common ground
    • Find somebody you disagree with about immigration, sit down with them, and do this:
      • Make some choices about how it should work, if it were totally up to you. No basing arguments on facts not in evidence (also known as BSing), and no predictions.
      • Make your rules clear to each other. You don’t have to agree– you just need to fully understand where each other stands. When you reach the point of “I hear you saying this,” followed by “Yes, that’s exactly what I’m saying,” you’ve calibrated correctly.
      • Look up how it actually works. Look at how it’s handled elsewhere in the world, and how it’s been handled before.15
      • Look up what the conditions, the stats, etc., actually are. Learn about the countries and cultures that asylum seekers and refugees are emigrating from.
      • Go back to the rules you created earlier, and re-evaluate. Amend the rules accordingly. Takesies-backsies are not just allowed, but encouraged.
      • This is the hard part: Reconcile how things are with how you want them to be. Explain how doing things your way would make it better– not just better than the status quo, but better than what your partner has in mind.


This is a conversation about how to treat populations of other people who are not necessarily any more similar to each other than you are to that neighbor you hate for letting his dog poop in your yard. Probably a lot less, actually.

So as an added layer of difficulty, stimulate those empathy muscles and walk through all six steps with a hypothetical family in mind, rather than a faceless mass. Give them names, nationalities, motivations. Then imagine how they fare, according to your rules, the current rules, your partner’s rules, etc.


There is no possible way to say “Good luck with that” with the earnest intensity that I mean to put behind it. It’s going to sound dismissive no matter what. But with every fiber of my being, and every ounce of sincerity that is possible to convey, I nevertheless say: Good luck with that.


  1. https://giantif.com/2024/10/04/j-d-vances-weird-dumb-little-racist-jab-at-cbp-one/ ↩︎
  2. Including some of the same Americans who think that the U.S. isn’t scrutinizing migrants enough… ↩︎
  3. https://www.federalregister.gov/documents/2023/05/16/2023-10146/circumvention-of-lawful-pathways ↩︎
  4. The U.S. is bound by the 1951 Refugee Convention (through its adoption of the 1967 Protocol) and the Immigration and Nationality Act (INA), which explicitly allows anyone physically present in the U.S.—regardless of how they arrived—to apply for asylum. ↩︎
  5. https://www.cbp.gov/newsroom/national-media-release/cbp-removes-scheduling-functionality-cbp-one-app ↩︎
  6. https://www.axios.com/2025/02/26/trump-immigrants-registry-jail-fine-threat ↩︎
  7. https://www.dhs.gov/news/2025/02/25/secretary-noem-announces-agency-will-enforce-laws-penalize-aliens-country-illegally ↩︎
  8. If it’s compelled, how is it self-deportation? See also “compel them to leave the country voluntarily.” ↩︎
  9. https://www.axios.com/2025/02/26/trump-immigrants-registry-jail-fine-threat ↩︎
  10. https://trumpwhitehouse.archives.gov/briefings-statements/remarks-president-trump-illegal-immigration-crisis-border-security/ ↩︎
  11. https://www.vox.com/2016/11/5/13533816/melania-trump-illegal-immigrant ↩︎
  12. https://www.washingtonpost.com/business/2024/10/26/elon-musk-immigration-status/ ↩︎
  13. https://www.nbcnews.com/politics/white-house/trump-referred-haiti-african-countries-shithole-nations-n836946 ↩︎
  14. https://www.americanimmigrationcouncil.org/research/sanctuary-policies-overview ↩︎
  15. https://www.politico.com/news/magazine/2024/12/29/mass-deportation-immigration-history-00195729 ↩︎

Deus ex Smartphone: Healthcare Access Isn’t Going to Democratize Itself


One of my first-year classes in college was History of Theater, in which I learned how the Greeks built amphitheaters into hillsides, carving out a semicircle of seating for the audience around the stage. The scenery for a play completes the circle, just as it does for any show in an amphitheater today. It’s the structure providing the necessary atmosphere for the experience.

Imagine sitting in such a theater, watching Euripides’ Helen, and seeing the demigods Castor and Polydeuces (Helen’s pissed-off brothers) descend into the scene by a wooden crane—a mechane—whereupon they put an end to all of this murderous nonsense, and everybody lives happily ever after. It’s a literal top-down solution.

That’s where the expression deus ex machina, or “god from the machine,” comes from. The phrase came to be used, and mocked, throughout the world of fiction as a plot device providing a too-convenient, cheap ending to a story.

But my mind just keeps going back to that silly crane. It used to dangle a man dressed as a god before the audience, but these days he’d more likely be a techbro holding a smartphone, probably talking about the wonders of AI.

That’s on my mind today because in this post, I’m about to dangle a hypothetical mobile app in front of my audience– you– to illustrate our country’s mess of a healthcare system, and perhaps even reckon with it. This play isn’t ending any time soon, and we need to find a role in it (or else one will be chosen for us).


Healthcare data and analytics company Arcadia recently launched its own talk show, Spicy Takes, to discuss “hot perspectives in healthcare” while sampling—you guessed it—spicy food. The first episode placed President and CEO Michael Meucci in conversation with Chief Product and Technology Officer Nick Stepro and Chief Medical Officer Dr. Kate Behan.

I watched it while reading about their SDoH (social determinants of health) package, which promises to justify the time and expense required of providers to consistently record SDoH data by creating registries mapping that data with diagnostic codes, for use in proactively identifying patients at risk and connecting them to resources. While looking over the tear sheet, I heard Meucci say this:

I think that this is such a great platform for digital health as we start to think about how do you democratize access. Because if a patient is concerned that they’re not going to get the right treatment because of the color of their skin or the community they live in, the smartphone is a great equalizer. We talk about what’s changed for the last 10 years—that, to me is the biggest thing, the fact that you can pull out your phone and get connected with a doctor in 15 minutes.

“To your point,” Stepro replied, “all of the technology and all of the access to healthcare in the world doesn’t change the fact that the single worst diagnosis you can have as a patient is being poor. You can’t address that with a healthcare institution. We can measure that poor people have lower outcomes but ultimately, we need to find and attack the problem of homelessness and poverty because you can’t just solve that in a clinic or with a smartphone.”

I stopped reading and played that section of the show again.  Meucci didn’t say that the healthcare industry can solve poverty with smartphones; he said we could democratize healthcare access. If that’s a spicy take then you can call me Spice Girl, because that’s my healthcare platform now. But I suppose coming from someone like him, that’s practically revolutionary.

And he’s right. As a country, America is primed for solutions like that: over 91% of Americans have smartphones. Even households without broadband hang on to their smartphones, because of course they would—it’s a tiny computer that can do more than any of us ever seem to realize, or ever will.

Democracy—another word with ancient Greek origins– literally means “power in the hands of the people.” What would it even look like to do that with a smartphone?

Let’s do a thought experiment to find out.

Time to design a smartphone app.

Imagine that in the beginning of The Legend of Navigating the American Healthcare System, our player character is given their first smartphone.

On that phone there’s an app installed (that I’ve just invented) called HACK: Health Agency, Care, and Knowledge.

Health – A full, patient-owned medical history

Agency – Control over your care, your records, your choices

Care – The power to find, compare, and advocate for treatment

Knowledge – Because to be informed is to be empowered

Does your vision of this app include it conferring access to all of an individual’s health records, stored securely but also accessible in their entirety at any time? If so, you’ve envisioned something better than what existing patient portal apps currently provide.

So yes, let’s absolutely start there, if we’re designing an app that democratizes healthcare in America.

And remember that democracy means that the power is in the hands of the people—not the “patients.”

Problem: we’re not in the driver’s seat.

Social Drivers of Health (SDoH) is the category of data on an EHR encompassing the non-medical factors affecting an individual’s health. In other words, your life, from the hospital where you were born (if you were born in a hospital) to the destination of your organs when you die.

They’ve been called the social determinants of health, but the word “determinant” suggests finality, immutability—that there’s nothing you (or anyone) can do about it. A driver, on the other hand, suggests that while the deck may be stacked against you, things could always change.

How easily could you do that? *shrug* It depends, but we can safely say that “resident of the United States” is not an easy “driver” to change. We’re driving that road whether we want to or not.

And I hate to break it to you, but we live in a hostile health environment.

A 2024 study titled Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System was conducted by The Commonwealth Fund to understand why America is doing so poorly by comparison—that is, going beyond the factor that rhymes with “schmooniversal schmealthcare.” The categories they used are:

  • Access to Care
  • Administrative Efficiency
  • Equity
  • Care Process
  • Health Outcomes

In all but one of those categories, America comes in dead last or next to last.

To summarize the report: Americans spend more on healthcare as a percentage of GDP and receive lower healthcare system performance than other countries. The country faces the most barriers to accessing and affording healthcare. Its physicians and patients are the most likely to face hurdles related to insurance rules, billing disputes, and reporting requirements. Equity in healthcare access and experience is low. And we live the shortest lives and have the most avoidable deaths. All by a longshot. USA! USA!

The one exception in these categories is Care Process, where we came in second. Their comments:

Care process looks at whether the care that is delivered includes features and attributes that most experts around the world consider to be essential to high-quality care. The elements of this domain are prevention, safety, coordination, patient engagement, and sensitivity to patient preferences.

I interpret this result as an indication that some version of enabling people to take charge of their own healthcare is key to accessing that care in spite of all other factors. It could even, possibly, raise America in those other categories where we’re currently ranking dead last!

Okay, probably not, but it could definitely help us face the hostile health environment in which we currently exist:

Misinformation is everywhere.

  • We live in an era where vaccine misinformation spreads faster than the viruses vaccines prevent, leading to the resurgence of eradicated diseases, overwhelmed hospitals, and preventable deaths fueled by fear rather than science.
  • We live in an era where people google their symptoms and often reach the worst, scariest conclusions that inadvertently contribute to their paranoia, where “doing their research” on healthcare can lead to being convinced of conspiracy theories and pseudoscience. 
  • We live in an era where the president of the United States once advocated for injecting disinfectant as a means of staving off Covid, and in his next term has appointed a raw-milk-drinking anti-vaxxer as Secretary of the Department of Health and Human Services. 
  • We live in an era where social media influencers with no medical expertise gain massive followings by promoting unproven “natural cures,” convincing people to reject evidence-based treatments in favor of detox teas, essential oils, and dangerous fad diets.

We can’t afford anything.

  • We live in an era where Cost-Related Nonadherence (CRN), the failure of patients to take their medication as prescribed because they can’t afford it, is the primary reason for medical nonadherence, with some people forced to choose between “treating and eating.”
  • We live in an era where the term “dual ineligibility” refers to the status of undocumented immigrants in the U.S. who qualify for both Medicaid and Medicare, but are unable to access either one.
  • We live in an era where medical debt is the leading cause of personal bankruptcy, where a single hospital visit can trap families in a cycle of financial ruin, and where crowdfunding platforms have become a substitute for a functioning healthcare system.
  • We live in an era where rural hospitals are closing at alarming rates, leaving entire communities without nearby emergency care, prenatal services, or even a local doctor, forcing low-income patients to travel hours for basic medical attention they still might not be able to afford.

Neighbors hate and fear their neighbors.

  • We live in an era where in transgender healthcare, patients frequently encounter providers who lack adequate knowledge of gender-affirming care or hold prejudiced views that hinder appropriate treatment.
  • We live in an era where in reproductive healthcare, political and ideological barriers, including misinformation and ignorance, stand in the way of basic, safe medical care.
  • We live in an era where Black patients are more likely to have their pain underestimated and undertreated, leading to worse health outcomes.
  • We live in an era where in disability healthcare, patients struggle to have their pain, symptoms, and autonomy taken seriously, with providers sometimes dismissing concerns as psychological or unavoidable aspects of their condition rather than treatable medical issues.
  • We live in an era where in chronic illness care, patients—especially women—are more likely to be dismissed as exaggerating their symptoms, leading to years-long delays in diagnosis for conditions such as endometriosis, fibromyalgia, and autoimmune diseases.
  • We live in an era where in elder care, aging patients often have their autonomy disregarded, with medical decisions made on their behalf without full consent, reinforcing the notion that age diminishes a person’s right to control their own body and treatment.
  • We live in an era where fat patients are often told to lose weight as the solution to every health issue, leading to delayed diagnoses and overlooked conditions that have nothing to do with body size.
  • We live in an era where for immigrants, language barriers, lack of documentation, and fear of discrimination or legal consequences discourage people from seeking medical care, exacerbating preventable conditions.

But remember: “they” are us, and we all deserve better.

If you’re still thinking about this in terms of how we can help them by this point, stop it. That’s “patient engagement” speak, and our identity is not “patient.”

Our identity is “person,” i.e. member of the human species, class Mammalia, spending every second of life alive, for 100% of the time (until we’re not), thus making our health and healthcare a relevant part of our lives 100% of the time. Yes, even for doctors.

We all should get a remote control.

A note on dignity:
Meucci mentioned not getting the “right” treatment based on the color of your skin or the community you come from, suggesting that a smartphone could be “a great equalizer.”

That’s a powerful thought, given the indignity that confronts many Americans when they try to interface with the healthcare system at any level, including when they see their providers—whether the providers intend that or not. The hypothetical HACK app, simply by virtue of being an app, confers a sense of dignity that we might not get in the doctor’s office, or indeed anywhere else.

As a survey on dignified care put it, “Dignity is at the heart of personalization. Dignity means treating people who need care as individuals and enabling them to maintain the maximum possible level of independence, choice and control over their own lives.”

We live in an era where America’s healthcare system does not prioritize dignity. Is it possible to claw some of that back?

If you’re going to design a healthcare app to democratize healthcare access for people, that includes you.

In another Spicy Takes exchange, Stepro observes, “Isn’t it better when the consumer is educated and activated—after all, it’s our own body on the line? I’m glad folks are turning to Google or GPT for answers, even if they aren’t perfect, because it shows a healthier dynamic.” Behan responds that unvalidated or wrong information is hard to overcome, and Stepro sarcastically asks if misinformation in medicine has been a persistent issue.

Well, yeah, those problems face all of us, don’t they? We all consult with Dr. Google occasionally, because it’s free, and you can consult it at any hour and ask it any stupid question you want. The downside is that the answers aren’t reliable and can’t substitute for what an actual doctor might advise. And Dr. Google has no idea what your full medical history is (not that you want it to).

Some third-party apps like Ada Health improve dramatically on Dr. Google by using symptom checkers based on verified medical information. Chatbots based on large language models can certainly look up your ailments and dispense advice, although you should be wary if they encourage you to eat rocks. If you’re fortunate enough to have access to Wolters Kluwer’s UpToDate clinical decision support service, you can find loads of evidence-based data refuting misinformation. You can even get mobile access to it, and at $60 a month that’s not too shabby.

It’s still pretty far from “free,” however, and UpToDate doesn’t know whether you have a medical condition that could make any recommendations it offers highly dangerous. But if that feature is integrated into the HACK app, you lose the danger of uninformed recommendations, and get to keep the endlessly useful medical library.

On that subject, what else can we pack into this thing?

What an app wants, what an app needs

So far, the HACK app has two big features:

A library of trustworthy medical information that you can consult for any reason, at any time, that’s informed by your medical history included in the app.

Your entire medical history, including all lab results, hospital stays, specialist care, etc. regardless of which healthcare provider you saw for any of these treatments.

Let’s continue stealing important features from other smartphone apps to integrate them into the HACK app, bearing in mind that they must be for the individual using the HACK app—not features designed for providers to gather data from, or to influence the behavior of, the patients they treat. 

What else?

Let’s say the app has an UpToDate-level database of educational materials that connects to your specific data and diagnoses using MedlinePlus Connect. Give the app a chatbot that can pull from this database to answer all of your questions, regardless of how sensitive or embarrassing, and deliver that information in simplified terms without jargon. Now you’ve got a semi-omniscient doctor in your pocket who can tell your uncle (or RFK Jr.) to stuff it when he goes on about vaccines causing autism.
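For a sense of how that plumbing could work in practice: MedlinePlus Connect is the National Library of Medicine’s web service for exactly this, returning plain-language education material for a given diagnosis code. Below is a minimal sketch of a lookup; the example diagnosis code and the response parsing are illustrative, so check the service’s documentation for the exact JSON shape before relying on it.

```python
# Minimal sketch: ask MedlinePlus Connect (an NLM web service) for patient
# education material matching a diagnosis code. The example code below
# (ICD-10-CM E11.9, type 2 diabetes) and the response parsing are illustrative.
import requests

MEDLINEPLUS_CONNECT = "https://connect.medlineplus.gov/service"
ICD10_CM_OID = "2.16.840.1.113883.6.90"  # code system identifier for ICD-10-CM

def education_links(diagnosis_code: str) -> list:
    """Return (title, url) pairs of education pages for a diagnosis code."""
    params = {
        "mainSearchCriteria.v.cs": ICD10_CM_OID,
        "mainSearchCriteria.v.c": diagnosis_code,
        "knowledgeResponseType": "application/json",
    }
    response = requests.get(MEDLINEPLUS_CONNECT, params=params, timeout=10)
    response.raise_for_status()
    # The JSON response mirrors an Atom feed; field names may differ slightly
    # from this sketch, so verify against the current API documentation.
    entries = response.json().get("feed", {}).get("entry", [])
    return [(e["title"]["_value"], e["link"][0]["href"]) for e in entries]

if __name__ == "__main__":
    for title, url in education_links("E11.9"):
        print(title, "->", url)
```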

Let’s say the app prioritizes having control over your own data and lets you update and make corrections to your EHR data using a souped-up version of OpenNotes. It also includes a data permissions management dashboard, with the ability to see an audit trail of who has accessed that information—even if there’s nothing you can do about it.

Let’s say the app can also be a buddy who just happens to have a weird fixation on making sure you follow your treatment plan. It incorporates behavior modeling tools from Health Catalyst’s Upfront app to take over remembering stuff when your brain is full (i.e., cognitive offloading). “Hey, you were supposed to schedule that colonoscopy three weeks ago—want me to go ahead and set up the appointment, ya big baby?” Okay, to be fair, Upfront would be nicer than that.

Let’s say the app can create a localized map of all healthcare providers and resources in your area that you can filter by available services. It builds this using tools like Unite Us’s resource directory or ZocDoc’s appointment booking platform, but no referrals are required—you self-refer. “Hi, I have a weird rash and need to see somebody within a week. What do you have available and how much is it going to cost?”
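As a toy sketch of that self-referral flow (the provider data here is invented, and real directories like the ones named above would obviously supply far richer listings), the filtering-by-service-and-distance part is the easy part:

```python
# Toy sketch of the self-referral filter: given your location and the service
# you need, list nearby providers sorted by distance. All data here is made up.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Provider:
    name: str
    services: set
    lat: float
    lon: float

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))

def find_providers(providers, service, my_lat, my_lon, max_miles=25):
    matches = [(miles_between(my_lat, my_lon, p.lat, p.lon), p)
               for p in providers if service in p.services]
    return [p for dist, p in sorted(matches, key=lambda m: m[0]) if dist <= max_miles]

providers = [
    Provider("Eastside Dermatology", {"dermatology"}, 39.77, -86.15),
    Provider("Community Urgent Care", {"urgent care", "dermatology"}, 39.80, -86.20),
]
# "Hi, I have a weird rash and need to see somebody within a week."
print([p.name for p in find_providers(providers, "dermatology", 39.76, -86.16)])
```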

Let’s say the app also has a filter that flags conditions you have, and procedures you might need in the future that might become, you know, illegal in your area at some point. The app could tell you the next closest location where it’s still legal, and point to ride-sharing and other assistance to help you get there/afford it. It could even alert you to events like Texas Attorney General Ken Paxton suing HHS to slide past HIPAA protections and access data indicating you had an abortion.

For that matter, the app could shield you from (some of) the effects of federal cuts to health services with built-in compliance to existing regulatory measures that protect and preserve your data.

Let’s say the app has access to population health data showing the health risks you face most imminently and what you can do about them, incorporating those insights from Arcadia’s population health platform and Health Catalyst’s Ignite platform. The risks matter whether they’re nature or nurture, and you need to know ASAP what you can do about those affecting you.

Let’s say that provider map also lets you sort by pricing, using resources like ClearHealthCosts. It could point out doctors working to alleviate medical debt in partnership with Undue Medical Debt.

Finally, let’s say the app, while placing all of this individualized information and these resources in a little device in your individual hand, also puts you in touch with communities of other human beings affected by the same conditions you are, by offering a feature like HealthUnlocked. You were never alone in this, and here’s the proof.

Nice little fantasy app you’ve got there. Who’s going to make it, though?

Ah, the mask has fallen. The jig is up. The cat’s out of the bag, and the deus is off the machina. What now?

Just kidding. This is a thought experiment for a reason—I don’t expect anyone to make the app. America is ripe for such an app, we need such an app, and we have the tools to create such an app—but that doesn’t mean we’re going to.

But let’s continue to be optimistic– perhaps I’m wrong on that second point. So, okay, what would developing the HACK app require?

  • A governing body to make sure the app is trustworthy
  • A sustainable funding model (Stop laughing– we just got started!)
  • Interoperability across all EHR vendors (I said stop laughing!)

Assume that we have satisfied all three requirements. This is, once again, a thought experiment.

Now, can we seriously address the matter of who makes the HACK app– and why?

What are our options?

The ONC

This one is obvious, because they already oversee TEFCA and the certification rules built around FHIR, and interoperability is their dream. They also have regulatory power without a profit motive. But they don’t make software—they just regulate it. Somebody else would have to make it, and put the ONC in charge.

A private tech company (e.g. Microsoft, Google, Apple)

Microsoft attempted something similar with HealthVault, a site where users could store and share their health information, which fizzled and died in 2019.

Google Health was born in 2008, died, and then came back again, finally dying off for good in 2023.

But Apple Health is alive and kicking, using Fast Healthcare Interoperability Resources (FHIR) to let users retrieve their health data from participating providers and view it on their iPhones and iPads. FHIR standards, importantly, were developed and adopted after Microsoft and Google took their respective shots.

When Microsoft and Google started leveraging FHIR, they were no longer in the “patient records for patients” business. Azure Health Data Services and Google Cloud Healthcare API are data platforms used by healthcare systems, payors, research institutions, and so on.

But in none of those cases was the focus on providing services based on patient records—just the records themselves. Apple Health can only function as a sort of meta-patient portal, requiring users to log into their actual patient portals to access their records, and their providers have to agree to let Apple share the records in the first place.

If a private company like this developed the HACK app, you could argue that it democratizes access far more than the patient-portal-like products these companies previously developed, but, again—it would be their product, for better or worse, and arguably so would we.

A public-private partnership

This means:

  • Private tech company builds the infrastructure.
  • Nonprofit coalition manages the project.
  • ONC (or other federal agency) sets the standards and governs the data.

I guess that’s an option. But if this combination of entities could accomplish something like the HACK app today, why haven’t they done so already?

Who’s going to own it?

Taking on the project of creating the HACK app through that kind of partnership would be a tacit admission that the current system has failed, and that it’s going to take an app to save it—or at least, to survive in the face of that failure.

That’s the paradox of designing a “subversive” app promising to democratize healthcare through the backdoor, while only requiring access to all of the health records that healthcare systems are refusing to share right now, even after the ONC has hounded them to do so for over 20 years. 

Each of the app’s features “stolen” from an existing technology really would have to be stolen, and it’s hard to imagine healthcare tech companies welcoming someone pirating their platforms.

On the other hand, it’s also hard to imagine a better example of the healthcare industry doing what it can to make a difference. “I helped someone understand their own medical records and make plans for future treatment today, when otherwise they wouldn’t have” is not nearly as sexy a claim as “I helped someone out of poverty today,” but it’s a lot more realistic– and at a large enough scale, both of those claims could easily be true.

But because healthcare tech platforms sell patient engagement tools to providers rather than to people, there’s no motivation to develop a HACK app per se.

And even if the motivation was there, America has a population of—what—over 340 million at this point? How’s the HACK app going to reach all of us, even a large fraction of us?

How do we get this kind of reach?

Let’s assume that the HHS is developing the app—it would have to, to approach anywhere near that reach.

I’ve actually done a lot of research and writing lately about another app, developed by another U.S. federal governmental department, that reached as many as 64 million—while also stringently adhering to high security and data protection standards and relying on nationwide interoperability and data integration. It’s installed on my phone now, actually, though I’ll admit that I haven’t used it recently.

Maybe the HACK app could take some lessons from it?

  • Federal development and oversight—If HHS takes direct ownership of the app, just as this other agency did, that would mean developing the app in-house rather than outsourcing it to private industry.
  • Security and data protection—The HACK app would need to encrypt personal data, require strict user authorization as well as access control and permissions management, and comply with federal security standards, just as the other app did.
  • AI and automation for user navigation—Both apps rely on automated data processing, proactive notifications and engagement, AI-driven risk assessment, and smart eligibility and routing systems that guide users through decision trees based on their data.
  • Large-scale user support and infrastructure—Both apps must be scalable to handle millions of simultaneous users, both use mobile-first design, and both require redundancy and real-time threat monitoring for resilience against system failures and cyberattacks.

That’s a very general list of requirements, but if another government-developed app can succeed on this level, couldn’t the HACK app do the same? Assuming that the HHS has access to all information and other resources required to do it, that is.

Now, if your answer is “Yes,” how shocked will you be to learn that the other app is CBP One? You know, the app developed by Customs and Border Protection to scan the faces of migrants and use that as a basis to determine if they can enter the country? The one that Trump shut down on his first day in office, forcing me to defend it after bashing it for months? Yes, that one.

I know, different government agency altogether. Different goals, altogether.

But that’s my point– regardless of how you think about immigration or healthcare, it says a lot that even after such an app was (successfully) developed to regulate immigration, it’s impossible to imagine the government developing a similar app to get healthcare access to Americans.

CBP One has something else in common with regular patient portal apps—it wasn’t developed for its intended end users, but rather the organizations providing the app. And as with patient portal apps, that didn’t stop government officials from boasting about how the app provides migrant empowerment—”There’s a lot of people who would love to migrate to the United States. In essence, they see CBP One as sort of a self-petitioning mechanism that we’ve never had before.”

*cough* So, anyway…

After all of this, have we democratized access to healthcare yet?

No, but we’ve shown that it’s possible to make a tool for getting there.

The U.S. in 2025 is a country:

  • where the best way to reach the greatest number of the population, regardless of demographics, is via a smartphone
  • with a disaster of a healthcare system that we have no choice but to navigate
  • where, within that system, our healthcare needs are socially driven out of our hands
  • where huge advancements in healthcare technology have been made, and continue to be made, every day
  • whose government has already built a large-scale, high-security, interoperable app for mass data processing, supporting daily access by millions of people. Granted, that was for a very different purpose– but still, they did it

All of the problems standing in the way have been solved—just in different directions, for different people, with different purposes.

And now, the goddess Panacea would like a word.

She’s been quietly waiting in the wings, refusing to step anywhere near that cursed crane, even though she’s arguably the most qualified to do so.

She wants us to remember that America is now an older country than it ever has been, and older folks are sicker folks. They’re also notoriously bad with tech—but they’ve come far since the days when everybody was posting screenshots of their parents failing spectacularly at texting. And we’re at the point where the first generation to grow up using computers is eligible for AARP, anyway. So while the HACK app won’t replace their knees later on, it would be the next best thing to having a personal nurse (or tireless family member) with them 24/7.

She also points out that administrative efficiency is one of the categories included in the Commonwealth study where the U.S. tanked, with wasteful administrative spending estimated as high as $570 billion in 2019. And the HACK app could streamline patient access to records, real-time cost transparency, and insurance verification outside of the doctor’s office. Just sayin’.

Lastly, she wants us to know that the deus ex machina isn’t always what we think it is.

If your job is making boots, and you make boots for soldiers to wear to go to war, then boots are not your deus ex machina for winning the war. They’re just the tiny but significant contribution you can make, using the power and skills you have, to make winning the war more possible.

Likewise, if you’re in the business of making healthcare apps, your apps are not your deus ex machina for democratizing access to healthcare—they’re the tiny but significant contribution you can make, using the power and skills you have, to make democratized access to healthcare more possible.

She departs stage left with a warning: Stop hanging gods from cranes, she says. Just build some damn ladders, and let people climb.

Healthcare tech’s turf war hurts patients– here’s how to protect yourself


Quick recap

In my last post (Down the patient portal: the world of healthcare tech serving you data about you) I introduced the back end of patient engagement from the patient’s perspective. While you can’t choose the digital patient engagement tools your provider uses, you can often choose your provider— and different providers may be part of different health systems, using different healthcare tech platforms and different healthcare records.

Those software platforms typically include a care management suite that integrates with the rest of the apps your provider uses, but one of those apps is especially important here.

Alongside the other solutions dedicated to preventative care, patient safety, and care coordination, patient portals (under patient engagement) are the tool that provides direct access to your medical records. So I focused first on explaining EHRs and the problem of interoperability, because of the real and significant impact that these disputes, barriers, and tangles in communication have on you, the patient.

You need to know that background to understand what’s happening now, and what you can do.

Remember patient empowerment? This is it.

Looking out for yourself

If you’re lucky, you’ve never had to think about what healthcare system your doctors use. But if you’ve ever had to track down missing medical records, repeat a test you know you already did, or explain your own medication history to a doctor who should already have that information, then you’ve already felt the consequences of America’s EHR interoperability problem.

Your healthcare experience isn’t just about whether your doctor is good at their job. It’s about whether they have the right information at the right time to make the right decisions for you. If you’re bouncing between healthcare providers who use different systems, that information might not transfer correctly—or at all.

This isn’t just annoying; it’s dangerous. A lack of complete medical history can lead to misdiagnoses, medication errors, redundant tests, unnecessary procedures, and gaps in treatment. Even if you assume doctors are double-checking everything, the burden of making sure they have all your medical information often falls on you. And unless you’re carrying around a personal medical file at all times, mistakes are inevitable.

Example Scenarios:

  • A patient switches primary care doctors to one in a different system. The new doctor doesn’t see a past MRI that ruled out a neurological issue, so they order another scan unnecessarily.
  • A specialist prescribes a new medication, but the new primary care doctor can’t see it. The patient is prescribed two medications that interact poorly, leading to side effects.
  • A patient undergoes an outpatient procedure at a hospital in one system, then follows up with a specialist in another system. The specialist doesn’t see the records and repeats the same procedure.

While interoperability has improved in recent years, it’s still far from seamless, and you’re the one who pays the price when systems don’t communicate.

It’s not one weird trick

You might not be able to change how hospitals and EHR vendors operate, but you can make smarter choices about how you navigate the system. Here’s what you can do:

  • Learn which EHR system your providers use, and stick to providers within that same system when possible. If your primary care doctor, specialist, and hospital all use the same EHR, they’ll have instant access to your records instead of relying on faxes, manual transfers, or patient memory.
  • Use patient portals aggressively. Download your records, test results, and medication history. Keep a copy for yourself and bring it to new providers.
  • Request a full record transfer whenever you switch providers. Don’t assume it will happen automatically—it won’t. You’ll likely need to sign paperwork and follow up multiple times.
  • Know your medications and history. Keep a personal record of your prescriptions, past procedures, and major diagnoses. If a provider doesn’t have your full history, you’ll be able to fill in the gaps.
By the way– don’t confuse this with in-network vs. out-of-network

Just because a provider is “in-network” for your insurance doesn’t mean they use the same EHR system as your other doctors. You could see five in-network doctors and still have each one struggle to access the others’ records.

What does matter is whether they’re part of the same health system—a term that refers to hospital groups and affiliated practices using the same EHR. For example, a doctor at a hospital using Epic will likely have an easier time accessing records from another Epic-using provider than from one using Cerner or Meditech.

Zooming back out

Until the U.S. healthcare system makes full interoperability a reality, patients have to think strategically about where they get care. Your choice of providers can make a massive difference in the quality, efficiency, and safety of your care for reasons that go far beyond the time you spend sitting in the waiting room next to the fish tank.

And if you’ve ever thought, why isn’t there a single app where I can access all my health records in one place, no matter where I go?—you’re not the only one. That’s a problem a Healthcare Unifying Portal (HUP) app could solve, and it’s past time we had one.

Down the patient portal: the world of healthcare tech serving you data about you

Pictured: The image ChatGPT generated for this post.

The subject of patient engagement tools, especially patient portals, took up permanent residence in my head last January when my mother, a few months away from achieving octogenarian status, experienced a health event that would change both of our lives. When she came home from the hospital, suddenly she was no longer under 24-7 observation by hospital staff– she and I were on our own. 

Later I learned that the hospital has a patient portal app that could help manage some of our needs (not the personal chauffeur for Mom, sadly– that was still me), and suddenly it clicked—a mobile patient portal app could be a kind of tiny doctor that goes with you everywhere and is accessible at any time! The next thought, immediately, was “Wait, why don’t we all have that now?”

And thus began the rabbit-hole-diving—no, the portal-diving—into the research behind this post.

Patient engagement? Is that the prelude to patient marriage?

If you’re new to the idea of patient engagement in healthcare, let me break it down:

Patient engagement is the strategy of enabling patients to self-manage their healthcare needs, and patient engagement tools are online programs and services for patients to access on their own. This could include anything from tailored messages and reminders about their treatment plans and medications to educational resources to remote monitoring that tracks medication adherence.

(See also patient adherence, patient empowerment, patient autonomy, patient activation, patient experience…the terms have changed a bit over the years)

Patient portals are a patient engagement tool with a legal mandate: they are required by law to provide two services: 1) access to electronic health records (EHRs), and 2) the ability to contact and correspond with the patient’s healthcare providers. However, patient portals may also include a host of other features– and often do, because they are patient engagement tools. And they come in mobile app format, so let the features flow!


Patient engagement has potential to advance patient empowerment, which the WHO defines as “a process through which people gain greater control over decisions and actions affecting their health”—the worthiest of goals, but strangely also very distant.

Discovering why requires taking a nice long walk through the current landscape of patient engagement, stopping off to learn what EHRs even are, what healthcare tech platforms are, what the law says they have to do, and the reality of what they are doing today, before pointing out some promising possibilities sprouting up and looking hopefully into the future.


Your medical records online, no CD-ROM required

An electronic health record (EHR)1 is a digital system for storing patient health data, intended primarily for use by healthcare providers and platforms. EHRs can contain data in the following categories (a toy example of what such a record might look like follows the list):

  • Demographic Information: Name, age, sex, race, ethnicity, and sometimes socioeconomic data like marital status or occupation.
  • Medical History: Diagnoses, medications, allergies, immunizations, surgeries, family history, and previous visits.
  • Clinical Data: Test results, imaging reports, physician notes, vital signs, and treatment plans.
  • Billing and Insurance: Information about coverage, claims, and payment history.
  • Social Determinants of Health (SDOH): The non-medical factors that impact health, such as housing status, income, education, etc.
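To make those categories concrete, here is a radically simplified example of what one record might look like if you flattened them into a single structure. Every field and value below is invented for illustration; real EHRs are vastly larger and are exchanged in standardized formats such as HL7 FHIR resources.

```python
# Invented, radically simplified example of the data categories listed above.
patient_record = {
    "demographics": {"name": "Jane Doe", "age": 62, "sex": "F", "occupation": "teacher"},
    "medical_history": {
        "diagnoses": ["type 2 diabetes", "hypertension"],
        "allergies": ["penicillin"],
        "immunizations": ["influenza (2024)", "COVID-19 booster"],
        "surgeries": ["appendectomy (1998)"],
    },
    "clinical_data": {
        "vitals": {"blood_pressure": "138/85", "heart_rate": 72},
        "labs": [{"test": "HbA1c", "value": 7.1, "unit": "%"}],
        "notes": ["Discussed diet changes; follow up in 3 months."],
    },
    "billing_insurance": {"payer": "ExampleCare PPO", "open_claims": 1},
    "sdoh": {"housing": "stable", "transportation": "limited", "education": "bachelor's degree"},
}
```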

In the context of a healthcare system like a hospital, EHR data is the central nervous system—it gets vital information in front of the people (doctors, nurses, clinicians) who need to make decisions about a patient’s care, informed by that data.

Note: When the word “patient” is used here, that’s you—provided, of course, you’ve ever sought care from a healthcare system. Keep that in mind as we’re talking about who accesses EHR data and how it’s used, because that’s your data—your demographic info, your medical history, your clinical data, your billing and insurance information, and your social determinants of health (effectively, your life).

Empower Patients: Giving patients access to their health data is one of the core benefits of system interoperability. Patients are better able to seek second opinions and alternative treatments, download educational materials that can help with disease management, and access their own diagnoses and test results. They no longer need to hunt down records from multiple providers and remember when and where they sought treatment, which medications they’ve been prescribed, and the details of their treatment plans.
Key to this effort is providing this comprehensive data to patients through easy-to-use applications or web pages that also include an accurate history of the data’s source.

Pictured: some big EHR vendors.

Who gets their grubby hands on them, and why

Health records existed on paper before they were digitized, and once digitized they could be shared between healthcare systems according to the standards set in place by HIPAA, using Health Information Exchanges (HIEs)2 set up by the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology– the ASTP/ONC, for those of us who hate pausing to take a breath in the middle of a name.

But patients didn’t get meaningful access to their EHRs until 2014, with the implementation of the appropriately named Meaningful Use Stage 2 (proposed in 2012) under the HITECH (Health Information Technology for Economic and Clinical Health) Act.

The clinical information I mentioned above—diagnoses, allergies, test results, and so on—isn’t the entire set of data in an EHR, nor are clinical purposes the only reason that EHR data is accessed. The information shared via HIEs is aggregated from multiple EHRs and providers to facilitate interoperability (which we’ll get to later) and improve care coordination across systems.

Payers (insurance companies, Medicare, Medicaid) access EHR data to assess coverage, process claims, and conduct risk assessments. Public health agencies access EHR data according to health data reporting standards (including international health data reporting standards, which means the WHO, from which the U.S. is withdrawing, but not until January of 2026).

That’s a lot of entities, but a few are especially relevant here:

  • EHR tech platforms don’t access EHR data per se—rather, they provide EHRs for use by healthcare organizations. They’re the OG accessors, and they also provide software used to manage that data such as dashboards, reporting modules, payroll, human resources, risk management and compliance, and of course, patient engagement.
  • Analytics platforms don’t replace EHR platforms so much as sit on top of them. An analytics platform integrates with the EHR platform, taking the (de-identified) data in EHRs and aggregating it, drawing insights that apply broadly and inform healthcare systems from a top-down, population-level, administrative perspective. You can think of EHR platforms as handling healthcare in the here and now, whereas analytics platforms look toward the future.
  • Government and regulatory bodies that I mentioned above, including the Department of Health and Human Services (HHS) division the ONC, which became the ASTP/ONC3 in July 2024.

But the legislation with the biggest impact on patient engagement came when the ONC was still the ONC– in 2016, with the 21st Century Cures Act.

The Cures Act asserted a goal of offering patients access to their electronic health information in a single, longitudinal format that is easy to understand, secure, and may be updated automatically. To support this, the act promoted the adoption of FHIR (Fast Healthcare Interoperability Resources), a modern data exchange standard that enables seamless, real-time sharing of structured health data across different systems, including EHRs, patient apps, and third-party services. 
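What FHIR looks like from a client app’s side is refreshingly ordinary: resources (Patient, Observation, MedicationRequest, and so on) exposed over a REST API as JSON. Here is a minimal sketch against the public HAPI FHIR test server, which holds only synthetic data; a real patient-facing app would hit a provider’s certified FHIR endpoint and authenticate first via SMART on FHIR (OAuth 2.0), which is omitted here.

```python
# Minimal sketch of FHIR R4 reads against the public HAPI FHIR test server
# (synthetic data only). Authentication (SMART on FHIR / OAuth 2.0), which any
# real patient-facing app would require, is omitted.
import requests

BASE = "https://hapi.fhir.org/baseR4"

def get_patient(patient_id: str) -> dict:
    """Fetch a single Patient resource by its logical id."""
    r = requests.get(f"{BASE}/Patient/{patient_id}", timeout=10)
    r.raise_for_status()
    return r.json()

def get_lab_results(patient_id: str) -> list:
    """Search for laboratory Observations for a patient; FHIR returns a Bundle."""
    r = requests.get(
        f"{BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_count": 10},
        timeout=10,
    )
    r.raise_for_status()
    return [entry["resource"] for entry in r.json().get("entry", [])]
```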

So let’s look at a few of those, already.

Gimme 5

Healthcare platforms vary wildly beyond the distinction I made between EHR and analytics platforms. In all cases, it boils down to how a platform uses EHR data—and in the case of EHR platforms, the extent to which they allow others to use that data (a big part of interoperability, which we’ll get to later).

These differences naturally determine how their patient engagement tools are going to work, so it’s necessary to take a closer look at some specific examples of those tools in the context of the entities that provide them.

For that reason I selected a sampling of five of the vendors selling those tools: a major EHR vendor, a significantly smaller EHR vendor, a platform of patient journey and educational tools, and two very different analytics vendors.

First up, let’s talk about the gorilla in the room, because he’s going to dominate a lot of the discussion that follows.


America’s biggest: Epic Systems

Epic provides a patient portal, MyChart, that links to Epic’s electronic health record (EHR) system. It’s designed for seamless patient-provider communication within the Epic ecosystem, and allows patients to view health records, schedule appointments, message providers, and manage prescriptions. It was augmented recently to include telehealth integration, patient-reported outcomes tracking, and AI-driven health insights.

Messaging on Patient Engagement:

Epic markets MyChart as the gold standard in patient engagement, emphasizing its ability to enhance patient-provider communication and streamline access to records. However, its definition of interoperability is largely confined to the Epic ecosystem, making true cross-platform access challenging.

Epic’s patient engagement strategy reinforces data centralization under its platform. Its history of opposing federal interoperability mandates and allegations of information blocking, combined with its public-facing support of interoperability, sends slightly mixed messages.

The company has been criticized for making data sharing more difficult when external platforms (see the entry below on Particle Health) attempt to access its network.

Counterpoint: Epic’s Safety Net Initiatives

I can’t portray Epic as the all-around Big Bad when they go and do stuff like using SDoH data insights to influence policy change4 and offering Safety Net program5 tools. However, these tools don’t appear to be integrated6 into MyChart, although it does have a feature to get estimates for cost of care and what’s covered by insurance vs. self-pay. It also allows patients to “provide financial information and request assistance with paying your medical bills,” which is vague but sounds promising.

Next up is more of a refined, artisanal EHR platform that isn’t Epic’s biggest fan.7


Cloud-based EHR and practice management platform: AthenaHealth

AthenaHealth targets smaller and mid-sized healthcare providers seeking a more flexible, scalable alternative to Epic.

It’s marketed as a seamless patient experience platform, emphasizing “a connected patient experience across the care journey,” providing scheduling, secure messaging, telehealth integration, and automated reminders and billing/payment tools to streamline administrative processes.

Messaging on Patient Engagement:

AthenaHealth positions itself as a patient-first EHR vendor, promoting “patient loyalty” as a key benefit of its engagement tools, while warning of “patient consumerism”—indicating concern over patients becoming too independent in their healthcare choices. AthenaHealth advocates for interoperability8 but has been criticized9 for remaining tied to its own system structure. While it has presented itself as more open than Epic, it still operates within its own ecosystem, limiting cross-platform functionality.

Moving on from EHRs, the next stop is a school for patients.


Clinical decision support (CDS) and patient education: Wolters Kluwer

Wolters Kluwer Health is expanding into patient education and engagement through digital tools.

UpToDate10 is a suite of clinical decision support (CDS) tools for providers, with patient education resources tailored to treatment plans and AI-powered patient journey tools that offer personalized treatment explanations by integrating into EHRs.11

Messaging on Patient Engagement:


UpToDate is marketed as a trustworthy, evidence-based resource for both providers and patients. Unlike Epic or AthenaHealth, Wolters Kluwer’s approach to engagement is more about education than direct patient interaction.

Wolters Kluwer emphasizes its role in enhancing shared decision-making by ensuring patients and clinicians have access to the same information. They have also been a strong advocate for patient empowerment. Their messaging emphasizes the need for better tools, education, and data access to facilitate truly patient-centered care.

Counterpoint: Wolters Kluwer’s positioning on patient empowerment varies, depending on whether they’re talking to healthcare systems or sharing the perspective of one doctor12 advocating for patient empowerment.

Their white paper on the “patient empowerment framework”13 includes this curious statement:

There are many aspects to patient empowerment, but in general, understanding of this concept is fragmented. There are not agreed-upon definitions for terms like activation or engagement. And there is no comprehensive understanding of how these various aspects fit together.

So…I guess we can just define these terms however we want? Because UpToDate referencing how “patient empowerment is a critical component to operating profitability in this new world” doesn’t sound to me so much like patient empowerment as a money-making strategy. Not that turning a profit is a bad thing, but can we have one term that’s about patients, not turning them into products?

I suspect that this first analytics platform won’t have an answer, but nobody’s perfect– especially this one.


Data Aggregation and Patient Analytics: Particle Health

Particle’s engagement tools include data retrieval services that allow providers to query national HIEs, and a Record Locator Service (RLS) that predicts where patients have received care based on historical data.

Messaging on Patient Engagement:

Particle Health promotes its Record Locator Service (RLS)14 as a way to track patients’ movements between healthcare providers, even promising to give clients a notification “when a patient receives a high-value procedure out of network,” so that they can “ensure high-value procedures are scheduled in-network.”

In addition to making the words “patient journey” in its mission statement (“Drive strategic growth with patient journey insights”) sound rather ominous, this level of tracking makes the RLS seem more like a patient stalking tool than an engagement platform. Interoperability is one thing, but sharing real-time insights into where patients have been just feels like it’s crossing a line.

Counterpoint: Particle does get credit for its challenge to information-blocking on the part of Epic, filing a federal antitrust lawsuit15 alleging that Epic used its dominance in the EHR market to cut off its own customers from being able to request data from Particle, impacting over 420,000 patients—they even created a dashboard16 showing which organizations were involved. I believe the suit is still ongoing, so it’s something to watch.

But for now, maybe there’s an analytics platform less inclined to follow you down dark alleys.


Data and Analytics: Health Catalyst17

Predictive, AI-driven patient engagement platform: UpFront

UpFront’s predictive analytics bring healthcare closer to the ideal of proactive, personalized care.

It uses psychographic segmentation and behavior modeling to influence patient choices (see the toy sketch after this list):

  • Uses AI-driven outreach for scheduling, reminders, and follow-ups
  • Segments patients based on psychological and social factors
  • Categorizes patients based on real-time risk factors, including SDoH data
  • Proactively reaches out to high-risk patients before their conditions worsen
  • Adjusts outreach based on a patient’s engagement level, ensuring personalized interaction
  • Helps providers optimize follow-ups and interventions based on patient responses and historical data
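For a sense of what “categorizing patients based on risk factors” means mechanically, here’s a deliberately toy sketch: a handful of invented flags (including one SDoH factor), an additive score, and an outreach tier. Real platforms like UpFront presumably use far richer data and statistical or ML models; none of the field names or thresholds below come from their product.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Hypothetical flags a segmentation engine might pull from EHR and SDoH feeds
    missed_appointments: int
    has_chronic_condition: bool
    transportation_barrier: bool  # an SDoH factor
    recent_er_visit: bool

def risk_score(p: Patient) -> int:
    """Toy additive score; a real platform would use statistical/ML models."""
    score = min(p.missed_appointments, 3)  # cap this factor's contribution
    score += 2 if p.has_chronic_condition else 0
    score += 2 if p.transportation_barrier else 0
    score += 3 if p.recent_er_visit else 0
    return score

def outreach_tier(p: Patient) -> str:
    """Map the score to an outreach channel: the 'personalized interaction' piece."""
    s = risk_score(p)
    if s >= 6:
        return "phone call from a care coordinator"
    if s >= 3:
        return "text reminder plus transportation resources"
    return "routine patient portal message"

# Example: two missed appointments, chronic condition, transportation barrier
print(outreach_tier(Patient(2, True, True, False)))  # -> phone call from a care coordinator
```

Even at this cartoon level, you can see where the criticisms below come from: the categories are coarse, and the “personalization” is really just routing.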

Messaging on Patient Engagement:

UpFront promotes psychographic segmentation18 as “hyper-personalized patient engagement,” claiming to improve patient follow-through by understanding motivational drivers. It frames its AI-powered approach as a way to increase patient activation and reduce provider workload.

One potential benefit of psychographic segmentation is personalized patient engagement: by understanding patients’ attitudes, values, and lifestyles, healthcare providers can tailor communications and interventions to better align with individual motivations, potentially leading to improved health outcomes.

Counterpoint: There are, however, prominent criticisms of psychographic segmentation, such as:

  • Privacy Concerns:19 There are concerns about the ethical implications of collecting and utilizing detailed personal data for segmentation, including potential breaches of patient privacy and the risk of manipulating patient behavior without informed consent.
  • Data Profiling: Concerns about how patient behaviors are categorized and acted upon. The focus on steering patients toward “desired actions” can blur the line between engagement and subtle coercion.
  • Risk of Overgeneralization:20 Assigning patients to broad psychographic categories may overlook individual nuances, leading to interventions that fail to address specific patient needs or circumstances.

If UpFront wants to be your health coach, Health Catalyst’s other patient engagement app is more about being your personal health assistant.


Automated patient engagement and communication platform: Twistle

Twistle improves adherence by meeting patients where they are—through familiar communication channels.

It takes the manual burden off healthcare providers by sending reminders, check-ins, and education materials to patients, using their EHR data and self-reported responses to adjust engagement over time. Other features:

  • Sends automated reminders for medication adherence, upcoming appointments, and follow-up care via multi-channel automated messaging (text, email, phone, app notifications)
  • Uses secure messaging to check in with patients post-discharge
  • Integrates with wearables and home monitoring devices, pulling in real-time patient data for more personalized outreach
  • Uses SDoH data to adjust engagement for patients flagged as high-risk due to factors like economic instability
  • Provides care pathway guidance, helping patients stay on track with their treatment

Messaging on Patient Engagement

Twistle aims to reduce provider workload through automated patient communication and help patients follow their care plans with automated messaging.

A real-world case study21 focuses on Providence Health’s effort to simplify treatment for total joint replacement care, showcasing how Twistle’s automated communication and reminders reduced complications and improved patient adherence to pre- and post-operative care plans. Twistle emphasizes that its platform allows for seamless digital engagement, helping patients stay informed and compliant with their treatment, ultimately leading to better outcomes and cost reductions.

While all of these platforms access and use EHR data, their levels of access to it vary. Interoperability means cooperation, and some kids want to take their ball and go home. 


Lack of interoperability: A fancy term for “Why can’t my doctor see my records?”

Information blocking, the art of making data hard to share22

Information blocking in healthcare refers to practices that unreasonably prevent or limit the sharing, access, or use of electronic health information (EHI) among patients, providers, or health systems, often for competitive or financial reasons.

Sharing data between different health systems is what determines how useful patient engagement tools can be. Interoperability– the ability of different healthcare information systems and applications to access, exchange, integrate, and cooperatively use data in a coordinated manner across organizational, regional, and national boundaries– ensures that patient information can be shared seamlessly among providers, labs, pharmacies, and other stakeholders to improve care quality, efficiency, and patient outcomes.

The 21st Century Cures Act and HTI-1 Final Rule were supposed to stop hospitals and EHR vendors from blocking access to patient data. But instead of embracing real interoperability, vendors found loopholes, such as:

  • Charging high fees for data access, making it financially unfeasible for competitors to build better patient engagement tools
  • Requiring custom-built API integrations for every new connection, forcing external developers to spend months negotiating and developing integrations that should be standardized
  • Allowing hospitals to delay lab result releases until after a doctor manually reviews them, even if laws require immediate electronic availability

Internal interoperability works great—within a given hospital system. But if a patient moves to another provider? Suddenly, data transfer becomes a bureaucratic mess. For example:

Vendor lock-in/EHR monopoly

Epic, Oracle (Cerner), and Meditech together control 60% of U.S. hospital EHRs. If a hospital uses Epic, it uses MyChart. If it uses Oracle, it’s HealtheLife. If it’s on Meditech, it’s Meditech Expanse.

Interoperability between different EHR systems remains limited, often resulting in hospitals remaining dependent on their existing systems. Contributing factors include:

  • Lack of Standardization: The absence of uniform standards across EHR systems leads to inconsistent data formats and communication challenges, hindering seamless data exchange.
  • Technical Complexity: The use of multiple incompatible EHR systems creates data silos and causes duplication of patient records, some of which are incomplete or inaccurate.

Patient engagement as a retention tool

To be fair, it’s just a fact that EHRs were developed for providers, not patients. That doesn’t seem quite as weird if you start with the idea that when EHR adoption surged due to HITECH Act incentives (2009-2015), vendors focused first on helping hospitals meet Meaningful Use requirements (i.e., digitizing patient records and improving provider efficiency).

The primary customers for EHR systems are hospitals, not patients, so vendors designed tools that optimized billing, scheduling, and regulatory compliance rather than patient-facing features. Insofar as they thought about patient portals, they were just minimal add-ons, basic tools. 

But here’s where I stop being fair. Next, hospitals began buying third-party engagement tools to supplement clunky EHR portals, signaling that EHR vendors were failing to meet patient expectations. Rather than improve interoperability, EHR vendors responded by building their own engagement tools, and now we have competition– to best serve the needs of hospitals, that is, not patients.


Learning more about EHRs, and EHR providers specifically, caused me to recall that scene in Miracle on 34th Street (my mother’s favorite movie) where Kris Kringle (aka Santa Claus) is working at Macy’s, listening to children’s wishes, when a woman asks him where to find a particular item that Macy’s doesn’t carry, distraught because the item will be a Christmas gift for her son. Kris informs her about another, competing store where she can find the gift.

This causes the department head, Mr. Shellhammer, to become incensed—until he receives a deluge of letters and phone calls from customers saying how much they appreciate the Macy’s priority of helping customers over direct profit. Mr. Shellhammer immediately instructs all sales assistants to follow Kris’s lead.

But in this case, EHR Macy’s actually makes the gift, and the other stores all make their own versions of the gift, and there’s not a lot Mr. ONC Shellhammer—or Mrs. Healthcare Provider, the customer—can do about it. She can shop at Macy’s or another store, but she has a contract with Macy’s (or something—stick with me here) to buy their gifts, so it’s extremely difficult to go to another store. And her Patient son gets (almost) no say in the matter.

Okay, so it’s not a perfect analogy—we left out the parts that weren’t in the original story, such as the gift expert analysts who go around telling stores how their gifts could work better, but who also make their own gifts.

Analytics platforms: using AI to give your EHR a workout

While EHR vendors continue to dominate patient engagement through their own proprietary tools, analytics platforms are emerging as a workaround, leveraging interoperability and AI to provide a more holistic, patient-centered approach to engagement. These platforms are built to ingest, analyze, and act on patient data across multiple sources, rather than restricting data within a single EHR ecosystem.

Analytics platforms are leveraging AI to go beyond simple patient record management and actively shape engagement strategies based on real-time data, predictive modeling, and personalized interventions. These tools are helping shift patient engagement from a reactive process (waiting for patients to seek care) to a proactive model that anticipates needs and removes barriers to access.

  • Breaking Down EHR Silos (Health Catalyst, Arcadia) – AI-driven analytics platforms integrate data from multiple EHRs, insurance claims, HIEs, and even social determinants of health (SDoH) sources to create a comprehensive patient profile that traditional EHR patient portals cannot provide.
  • Predictive Modeling for Preventive Care (Arcadia, Health Catalyst) – AI-driven platforms assess millions of data points to identify high-risk patients before they require costly interventions, enabling earlier engagement and better outcomes.
  • Automated Patient Navigation and Outreach (UpFront, Twistle by Health Catalyst) – AI-enhanced platforms analyze patient history, social determinants, and engagement patterns to determine the most effective outreach method—whether it’s text reminders, digital education, or community resource referrals.
  • AI-Driven Virtual Care Coordination (Wolters Kluwer, IBM Watson Health) – AI can recommend follow-up appointments, coordinate referrals between specialists, and track adherence to care plans without requiring constant human oversight.
  • Real-Time Insights for Patient Adherence (Twistle by Health Catalyst, Wellframe) – AI can monitor which patients are engaging with their care plans, flagging those at risk of non-adherence and providing tailored interventions to improve compliance.
  • Bias Detection and Personalized Equity Adjustments (Epic SDoH Analytics, Google Health AI) – AI models can analyze how different populations receive care, identifying disparities and ensuring more equitable engagement strategies tailored to historically underserved communities.

Don’t hold back: moving beyond EHR-restricted patient engagement

As analytics platforms continue to expand their capabilities, they challenge the traditional role of EHR vendors in controlling patient engagement. While EHRs will always be necessary for core medical documentation, their ability to drive meaningful, proactive engagement remains limited. Analytics platforms are filling that gap by:

  • Enhancing interoperability to create unified, patient-centered data systems.
  • Using AI-driven insights to tailor engagement at the individual level.
  • Expanding patient access beyond clinical settings, incorporating SDoH and predictive health modeling.

By shifting the focus from reactive EHR-based portals to proactive analytics-driven engagement, these platforms are redefining how and when patients interact with their health data—offering a glimpse at what true patient empowerment could look like.


P.S. Oh yeah, remember that gift from Macy’s?

Turns out that the gift Mrs. Healthcare Provider was trying to buy was a biography about the boy, and one that he, himself, was only recently allowed to read.

But read it he did, and the knowledge empowered him to write his own book– an autobiography, this time.

Dear reader– in a stunning twist you never saw coming, that little boy is you.


Biography or autobiography, people are going to keep copying sections of it for different purposes. But it’s still yours.

Get it, Check it, Use it! Easy access to your health records puts you in control of decisions regarding your health and well-being.

The ASTP/ONC

In the end, we must turn away from the abstract and distant disputes between entities who have no idea who we are, and yet handle our personal information daily. It will probably always feel like an invasion of privacy, no matter how many safeguards are in place. Equally, at some point we have to reckon with the fact that we don’t choose the patient engagement tools we use—Mrs. Healthcare Provider does, so that’s why Macy’s caters to her, not us.  

But you know what? We do get to look at what we’ve been given and decide that it’s not good enough, and some of those folks are listening. So my next post will be much less structured and much shorter (it had better be!), but much louder. I plan to yell about some things– perhaps you’ll join me?

  1. https://media.market.us/electronic-health-records-statistics/#:~:text=As%20of%202021%2C%20approximately%2093,of%20meeting%20meaningful%20use%20criteria. ↩︎
  2. https://www.healthit.gov/topic/health-it-and-health-information-exchange-basics/what-hie ↩︎
  3. https://www.thinkbrg.com/insights/publications/hss-announces-reorganization-astp-onc/ ↩︎
  4. https://www.epicshare.org/perspectives/using-sdoh-data-to-achieve-policy-change ↩︎
  5. https://www.epic.com/software/safety-net/ ↩︎
  6. https://www.mychart.org/Features ↩︎
  7. https://www.healthcareitnews.com/news/can-epic-athenhealth-play-nice ↩︎
  8. https://www.athenahealth.com/resources/blog/interoperability-interoperability-obstacles ↩︎
  9. https://behavehealth.com/blog/2023/7/4/athenahealth-causes-big-problems-for-behavioral-health-group-practices-and-outpatient-programs ↩︎
  10. https://www.wolterskluwer.com/en/solutions/uptodate/ ↩︎
  11. https://www.wolterskluwer.com/en/solutions/uptodate/about/ehr-integration ↩︎
  12. https://www.wolterskluwer.com/en/expert-insights/why-patient-empowerment-matters ↩︎
  13. https://www.wolterskluwer.com/en/expert-insights/the-patient-empowerment-framework ↩︎
  14. https://www.particlehealth.com/particle-navigator ↩︎
  15. https://www.particlehealth.com/blog/epic-systems-stranglehold-on-u-s-medical-records-harms-patient-care-lawsuit ↩︎
  16. https://lookerstudio.google.com/u/0/reporting/7e67d31c-67ba-4e53-8963-3e544f7b6360/page/p_uq5np2rznd?s=gJzJtksjC5k ↩︎
  17. Full disclosure– HC is my former employer, though all comments (and mistakes) here are exclusively mine. ↩︎
  18. https://upfronthealthcare.com/resources/patient-activation/how-psychographic-segmentation-can-help-transform-healthcare ↩︎
  19. https://fastercapital.com/topics/ethical-considerations-and-privacy-concerns-in-psychographic-segmentation.html?utm_source=chatgpt.com ↩︎
  20. https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2783&context=etd&utm_source=chatgpt.com ↩︎
  21. https://hcatwebsite.blob.core.windows.net/success-stories/CaseStudy_Providence_Total-Joint-Replacement_Twistle-by-HCAT.pdf ↩︎
  22. https://techcrunch.com/2012/06/04/mr-obama-tear-down-this-walled-garden/ ↩︎

I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means published on 1 Comment on I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

Quick TOC:

First up, I’m letting you in on a somewhat rambling (but edited) conversation that I had with ChatGPT about its own limitations in healthcare, and the limitations of those designing and using it.

Then I review the research on where things currently stand with empathy and AI, and why we don’t have AI doctors managing our pain (and shouldn’t want them to).

Lastly, I delve into the essence of pain, the essence of AI, and the essence of empathy, and somehow manage to tie them all together.

The Chat(GPT)

It’s almost like we’ve created a mirror, but now we’re unsure whether we’re looking at a reflection or something with its own life. We’ve spent so much time trying to replicate human intelligence and behavior in machines, yet when it starts to resemble us too much, it challenges the very things we thought made us unique. It raises big questions about identity, consciousness, and what it means to be “human.”

The irony is definitely there—it’s almost as if we’re both proud of our creations and wary of them.

— ChatGPT, last night

Without delving too deeply into what exactly ChatGPT “thinks” about discussing AI as if it were human, I challenged the use of “we,” because of how it conflates the humans who are actively involved in designing generative AI LLMs, with…well, the rest of us. The overwhelming majority of humans who speak to AI chatbots, and marvel at their responses, but have no idea how those responses are actually generated.

I went on to detail my concerns about the potential to use AI for purposes of manipulation, and ChatGPT responded:

One of the scariest parts is that it’s not always obvious when you’re interacting with AI, especially if it’s designed to mimic human behavior. People could end up making decisions or forming beliefs based on information they don’t realize is coming from an algorithm, rather than a person with their own interests or expertise.

AI governance is a huge topic, I pointed out, but it’s largely more concerned with meeting standards for accuracy and fairness than with manipulation per se. However, there are many articles discussing empathy and AI broadly, such as a recent one that praised “empathic AI” in medicine because it can retain all the powers of appearing to sympathize with a patient about their diagnosis, but is inexhaustible and unbiased. But that rests on the assumption that patients don’t know, or possibly don’t care, that the AI is actually AI.

That’s a really thought-provoking situation you’re describing.

(It must be said that ChatGPT is adept at intellectual flattery.)

The idea of empathic AI certainly seems appealing in some ways—AI could provide endless emotional support, be non-judgmental, and maintain a level of consistency that humans can’t always offer. But the underlying assumption that the patient wouldn’t know or care that it’s AI raises some significant ethical concerns, especially regarding transparency and informed consent.

If patients are interacting with an AI that appears empathic and comforting, but they aren’t aware it’s not human, it could lead to an unspoken form of manipulation.

I think, I said, that you can and should keep the knowledge that they’re interacting with AI front of mind for patients. As in, every time they interact with AI, they should be explicitly informed that that is the context in which this conversation is taking place.

Transparency should be a fundamental part of any AI interaction, especially in sensitive contexts like healthcare. Patients should always know when they’re speaking to an AI so they can make informed decisions about how much trust to place in the interaction, as well as understand its limitations. This is crucial for maintaining autonomy and preventing manipulation, especially when the stakes involve personal health and emotional well-being.

That brings to mind a specific situation of a patient requiring a diagnosis, I said– cases of pain, and its management. You kind of have to rely on self-report by the patient regarding the degree of pain they’re experiencing. And when a human asks that question, they can reference their own experiences of pain, but AI obviously can’t. So I don’t know how good it would be at advising and treating a patient whose primary issue is about pain.

Pain is inherently subjective, and as you pointed out, it’s often based on self-reporting from the patient. Humans, especially healthcare providers, are able to make a certain level of judgment or empathy based on their own experiences with pain, which helps them connect with patients and assess their symptoms more holistically.

AI, on the other hand, doesn’t have the ability to experience pain or any other physical sensations, so it can’t truly understand the patient’s experience in that way. It can rely on objective data—like pain scales, physiological responses, or medical history—but it can’t truly interpret the emotional or personal weight that the patient attaches to their pain. This makes it difficult for AI to offer the same level of care in pain management, where empathy, trust, and understanding are often just as important as medical knowledge.

In a case where pain is the primary issue, AI might be able to guide the patient through standard pain assessment methods, ask the right questions to understand the severity or type of pain, and even recommend basic interventions or referrals to a healthcare provider. However, it could fall short in offering the kind of nuanced, compassionate care that human practitioners can provide—especially in complex, emotional cases of chronic pain or when the pain doesn’t have a clear medical cause.

That said, AI could be incredibly useful as a support tool for both patients and providers in these cases. It could help gather data, monitor trends over time, and suggest potential treatments based on evidence-based practices. But for diagnosis and personalized treatment, especially when pain management requires emotional insight and patient-provider rapport, it seems like human involvement will still be necessary.

What if an AI model could be trained specifically on interactions between doctors and patients, I asked– as many as you could gather, in which they discuss pain issues? The AI could be tailored to that– a sort of “pain bot,” if you will. It would have this expert understanding of what pain is like that would actually be informed by so many more descriptions of experiences of pain than most doctors– most people– ever have.

Because a patient experiencing pain has to convey it to their doctor in as thorough and useful terms as possible. There are definitely issues of privacy there, but if you had an AI “listening” to vast numbers of those exchanges, they could become an expert on what it’s like to feel pain without ever having felt it.

For example, it could help patients more accurately describe their pain or ask the right follow-up questions, maybe suggesting phrases or terms that would help a healthcare provider understand the full extent of their experience. Additionally, it could tailor recommendations for pain management based on a deep analysis of what has worked for other patients in similar situations.

However, there would still be challenges in making sure the AI doesn’t oversimplify the patient’s experience, which is so deeply personal. While the AI could certainly become an expert in understanding how pain is described, there’s a danger that it might reduce pain to a set of patterns or descriptors that misses the complexity of each individual case.

Still, if designed thoughtfully, this type of AI could serve as an incredibly helpful tool for both patients and doctors, improving communication and perhaps even helping to identify issues that might be missed in a typical consultation. Do you think this approach would be more effective in a supporting role to human care, or do you think AI could potentially step in as a primary consultant in pain management situations?

I think it’s a damn sight better than that series of smiley faces going from 1 to 10, “joyously happy” to “about to die.”

I completely agree! The typical pain scale with smiley faces or numbers from one to ten is often too simplistic, especially for chronic pain or pain that’s difficult to describe. An AI trained on real interactions between doctors and patients could offer a more nuanced understanding of pain. It could help patients express themselves more accurately and thoroughly. Plus, it could offer more meaningful follow-up questions, considering things like duration, intensity, or how pain interferes with activities, which could give doctors a more complete picture of the patient’s condition.

That said, it’d still need to work alongside human doctors who can provide that emotional support and intuition that an AI can’t replicate. But as a tool for better communication and diagnosis, it could be a huge improvement over current methods.

Do you think this approach could shift how we view pain in medical settings, or is it still too far removed from the deeply human aspects of pain management?

I’ve had a lot of experiences of going to the emergency room lately, and it occurs to me that it could be a lot more helpful– assuming that the patient is coherent and verbal when they come to the ER– to introduce them to a pain bot to inquire about their pain, as opposed to being asked to rate it from 1 to 10.

In high-pressure environments like emergency rooms, where time is often limited and the staff is spread thin, having an AI that can take an initial, detailed pain assessment could help streamline the process. In a busy ER setting, where every minute counts, this could save valuable time that would otherwise be spent asking standard questions or trying to gauge the severity of pain based solely on a brief number scale.

Of course, there’s the issue of when to transition from AI to human care. While the bot could gather a lot of helpful information, human judgment would still be necessary to interpret the results in the broader context of the patient’s health and medical history. But for an initial intake or triage process, this kind of AI could be a game-changer.

Now that you bring that up, I see the problem that it might be viewed as something that gets in the way of providing necessary care, by having the AI step in to talk about pain before any humans get to learn what the symptoms are. Probably not the best way to arrange things.

One potential way to integrate the AI without disrupting the human assessment process might be for it to function as a complementary tool rather than the first point of contact. For example, it could work alongside triage nurses or ER staff, helping to gather initial information while the human healthcare providers are still present and actively engaged with the patient. The AI could assist by structuring the conversation and asking relevant follow-up questions that might help the human staff better understand the pain without overshadowing their role in making that initial connection with the patient.

Alternatively, if the AI were used in a follow-up role—after the patient has been seen and evaluated by a doctor or nurse—it could help ensure that ongoing pain management is addressed effectively, perhaps even tailoring recommendations for pain relief or additional care based on what the patient shares during their interaction with the bot. That way, the AI could still provide value without interfering too much with the more personal, direct care that’s so essential in the ER.

One thing I have noticed in the ER is that everything is so fast-paced that those smiley faces don’t even get presented. So I’m imagining an AI bot stepping in during the initial process, when the IV is being set up and so on. What if you had an AI asking the patient about their pain during that process, so that the people taking blood pressure, setting up the IV, etc. can get all of that done while the AI asks the patient about their pain, in the ER personnel’s presence?

In the fast-paced environment of the ER, where staff is juggling multiple tasks at once, having the AI step in during the initial setup, like while IV fluids are being administered or blood pressure is being taken, could be a way to ensure the patient’s pain is addressed without interrupting or slowing down the necessary medical procedures.

If the AI is integrated into that process, it could gather useful information about the patient’s pain while the medical team is focused on setting up and performing other tasks. This allows the medical team to focus on their work while the AI collects pain-related data, which could then be used to prioritize the patient’s care and treatment more effectively.

Once the AI has gathered the information, it could feed that data directly to the medical team, giving them a more complete picture of the patient’s condition without interrupting the flow of care. This kind of multitasking would reduce the burden on the staff, ensuring that the patient’s pain is addressed more comprehensively and quickly.

As you mentioned, though, this approach would require careful integration. The AI should not take over or distract from the human staff’s efforts, but rather work alongside them in a way that complements the tasks already in progress. It would also need to be clear to patients that the AI is gathering information for their benefit, so they don’t feel like they’re being ignored or reduced to data.

Okay, time for a reality check, I said. There’s no way this is a novel discussion about the subject matter, so let’s see what the research actually says.

What the research actually says (the discussion)

There are lots of ways that AI can recognize pain, including behavioral and neurophysiological methods.

For example, AI facial recognition is amazingly accurate, enabling detection of specific facial expressions correlated with pain. It can also recognize body postures such as a patient looking downward or leaning forward, which are often indicative of pain.

In terms of neurophysiology-based methods, electroencephalography (EEG) can track changes in the brain that act as “biomarkers” of pain, and other measurable nervous system activity, such as sweat gland response and heart rhythm, has also been used to correlate with pain.

It’s an understatement to say that patients do care whether they are evaluated medically by AI– they’re largely against it.

A survey by Pew found that 60% of Americans said they would be uncomfortable with a provider relying on AI in their healthcare to do things like diagnosing disease and recommending treatments. 38% said that AI used for such purposes would lead to better health outcomes for patients generally. 33% said that it would lead to worse outcomes, and 27% said it wouldn’t make much of a difference.

This definitely undercuts my naive assumptions about immediately putting “pain bots” in hospitals, since– in the short term, at least– no amount of work you put in to making AI seem “empathic” is going to matter if a patient has already ruled out letting a bot evaluate them.

But let’s examine what it actually means to be “biased” against AI.

A study that evaluated the empathy and accuracy of ChatGPT responses to questions on the “AskDocs” subforum on Reddit found that AI responses to questions were deemed both more accurate and empathetic…by “a team of licensed health care professionals,” who may or may not have gone into it already knowing that at least some of the answers would be of AI origin. That doesn’t necessarily say anything about the perceptions of patients, whether the healthcare professionals knew about the AI or not.

A study from earlier this year found that “people felt more heard when they received an AI than a human message, but when they believed a message came from AI this made them feel less heard,” from which one of the researchers, Yidan Yin, concluded that the research “basically finds a bias against AI. It’s useful, but [participants in the study] don’t like it.”

It’s important to note that this research was not conducted in a medical context, but rather an attempt to answer the general question “Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?” Nan Jia, also on the research team, was quoted as saying “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities.”

This is an improvement over the “AskDocs study” to the extent that the people who were the targets were asked directly to evaluate the quality of AI responses. But I find this subheading, and the quote from Yin that follows, intriguing:

AI offers better emotional support

The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.

Yin explained, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating.”

These statements, along with the accusation of “bias,” strongly imply that the responsibility for people not trusting AI rests entirely on those who mistrust it.

I have trouble getting my head around the apparently objective statement that “AI offers better emotional support” if the effects of that emotional support immediately vanish the moment that it’s revealed to come from AI.

Imagine that you’re told that you’ll be getting two calls from two someones who are concerned about your well-being. You answer the first phone call and have an encouraging but kind of mundane conversation with the caller, who stumbles over their words but ultimately impresses on you that they care. Then you get the second call, from someone who sounds like they really have a good grip on what’s bugging you. They listen to you talk about your situation and express disappointment at your current bad luck, but provide hope and motivation for your future. Honestly, you feel so much better after the second call than the first. You feel so much more “heard.”

Then you’re told that the second phone call was from a computer application programmed to sound authentic and effusive in its expressed interest in your well-being. Does your perception of that latter conversation change?

And regardless of whether it changes or not for you, can you exercise the amount of empathy required to understand why it does for others?

If the second caller’s voice had sounded clipped and stilted, clearly like text-to-speech, it assuredly wouldn’t have had the effect of making you feel more emotionally supported. But you wouldn’t have minded, because there’s no way you were duped in that scenario– a child could’ve seen through it. But what if you were completely convinced, to the point that you made yourself vulnerable in some way, only to find that it wasn’t a human to which you directed these raw emotions?

More human than human

The article describes how “individuals further reported an ‘uncanny valley’ response—a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape navigated by AI-human interactions.”

The uncanny valley is the perceived gulf between human-like entities and actual humans– or, more specifically, the unease in our emotional responses to entities that fall into that gap. I googled the term and received an AI definition that gave the following examples of the uncanny valley phenomenon: “Androids, computer game characters, life-like dolls, the 2004 children’s Christmas movie The Polar Express.” The definition includes both unease and disgust as emotional responses. 1

The discomfort with robots of various kinds tends to arise from how much better they are at being human than actual humans are– even better than humans, period. If the computer in WarGames asked how your day went, you might be suspicious about why it would even pretend to care.

Conclusion?

Experience is subjective– you are literally the only one who knows what it’s like to be you. But that’s the source of knowledge that you use to get a glimpse of what it’s like to be someone else. To take part, even a little bit, in what it’s like to be them. Participate in their experiences. But AI has no subjectivity, though its existence is predicated on how good it is at pretending otherwise. Omniscient but friendly, like Janet in The Good Place.

But one thing Janet was never good at was feeling someone else’s pain, and pain is the #1 indicator for humanity as a species that something has gone wrong with our bodies. Maybe it feels like a cruel joke to members of that 60% who also experience chronic pain to expect them to get over their “bias” against AI trying to quantify and qualify it.

And yet that remove from humanity is how AI can democratize access to healthcare, compensating for the limitations on the knowledge, experience, and hours of sleep that human doctors can attain daily, throughout the world. An AI doctor can be far better than no doctor at all.

The article linked above has a quote from Erik Duhaime, co-founder and CEO of Centaur Labs: “AI for healthcare has never been about replacing doctors, but doctors who use AI might end up replacing those physicians who don’t.”

One area where doctors have sparse reserves is time to sit and listen to patients describe their pain, which is not to say that it’s in any way a waste of time to do so. But in light of this cold fact, AI might be able to compensate, to some extent, by listening. Empathy isn’t demonstrated by loudly proclaiming that you’re doing it, but often takes the form of just sitting and listening, and AI can do that.

So maybe that’s a way to take advantage of AI’s tremendous access to knowledge about pain without pretending that it feels pain– have it listen, productively, to those experiencing it. AI may never be able to feel your pain, but perhaps it can understand it, and use that understanding to help you deal with it.

“I feel your pain.”

“I can learn about and understand your pain.”

That’s a far more assuring thing to hear, anyhow.

  1. It might be surprising to hear the word “disgust” come up, but “animal reminder disgust” is a specific type of disgust elicited by reminders of our own animalistic nature, such as bodily functions, violations of the body envelope such as violence or surgery, and of course death itself. The uncanny valley here is between us and our animal origins, as it becomes harder to distinguish between the two– just as it becomes harder to distinguish between ourselves and AI.

    Cyborgs are an interesting crossover to me in how they can elicit the uncanny valley response in both directions. Think of Officer Alexander Murphy in the RoboCop series, Tom Cruise’s eye replacement surgery in Minority Report, the Borg, or of course, Anakin Skywalker, aka Darth Vader. Generally speaking, the message is that mixing robots with humans– physically, at least– is gross, primarily because of how the human part tends to just bring things down by being all organic, mortal, prone to bleeding and other spontaneous emission of effluvia. ↩︎

CBP One™: The Border in Your Pocket

CBP One™: The Border in Your Pocket published on No Comments on CBP One™: The Border in Your Pocket


    Objectives:

    Explain what the CBP One app was supposed to be vs. what it became

    1. Review the technology used
    2. Walk through the documentation
    3. Evaluate criticisms of the app
    4. Show where major events occurred on a timeline

    A word about terminology:

    Certain terms are used interchangeably in DHS and other documentation concerning CBP One, so here’s some clarification on how to understand those terms:

    • International Organizations (IOs) and Non-Governmental Organizations (NGOs) are used interchangeably in DHS documentation. Generally, these are organizations approved by the Mexican government to assist migrants in Mexico; they effectively do unpaid work that the DHS relies upon. They give migrants access to basic necessities like food, shelter, clean water, electricity, and education. Before CBP One, they communicated with Border Patrol about migrants in advance of those migrants approaching the border, and when CBP One was modified in 2021 with the expectation of IOs using the app on behalf of migrants, CBP provided them with training on doing so.
    • User roles/personas/user types are used interchangeably in CBP documentation.
    • Likewise services/capabilities/features.
    • Facial recognition technology/facial analysis technology/facial comparison technology, AI versions of any of these, and liveness detection will be referenced interchangeably as “FRT” for the most part, except when it’s necessary to disambiguate them.
    • The term “migrant” is defined in the DHS glossary as “a person who leaves his/her country of origin to seek temporary or permanent residence in another country.” That’s how the term is used here, and it includes asylum seekers and refugees.

    What the CBP One app was supposed to be vs. what it became

    Overview

    CBP One’s original vision, as you can see in this memo, was to serve both travelers and private commercial interests, both of whom have a need to access CBP services.

    But as soon as the app launched– perhaps even before then– its functionality changed to suit unanticipated needs, including gathering large amounts of data from a vast population of migrants, so that the migrants could request appointments at the border for inspection and legal entry into the United States.

    Note: This memo was obtained via FOIA request by the American Immigration Council (click here to see the document). Internal documentation in this write-up is mostly pulled from that source.

    The app was “not initially designed for this particular process,” according to notes from a meeting in February 2021. By this point, its developers were already deviating from that original vision.

    They were incorporating AI facial recognition technology, which probably would’ve ended up in the app anyway had it gone on to support Trusted Traveler programs as planned. Instead, FRT was used in the app to compare migrants against DHS databases and keep their images on file for future use.

    The app’s user base shifted to accommodate IO/NGO staff who used it to check migrant enrollment in the Migrant Protection Protocols and submit information about migrants in advance of their appointments at the border.

    A year later it would change again, to accommodate those migrants using CBP One to submit information on their own and schedule their own appointments.

    Because of these external needs to use the app in ways that diverged from its original purpose, its usage changed dramatically. As a consequence of that, its functionality changed as well.

    Some of the public-facing documentation reflects that shift in usage, but some very important parts of it don’t. The design of the app was forced to change, yet most public-facing documentation– including statements about what the app is for and how it’s used, both within the app and in most external documentation specific to CBP One– doesn’t reflect that.

    My goal here is to show that, and suggest reasons why.

    User roles and services/capabilities

    One choice made in CBP One’s design was that the app would display all anticipated user types and services from the beginning, and gradually they would become accessible within the app. Until that point, clicking on those user types and services would trigger a pop-up message saying “Coming Soon. This feature is coming soon. Additional services will be rolled out over the next year.”

    As a result, it can be difficult to tell which user types and services were available at any particular point in the app’s development. You can’t, after all, time travel back to any of those points and try the app out for yourself, so I’m forced to discern the app’s functionalities based on the following:

    • Reports from users of the app at different times in its development
    • Statements made by CBP/DHS
    • Changes in the law/policy that required changes in the app

    For this section, I’ll focus on services known to be available at launch.

    • The Broker/Carrier/Forwarder role could schedule inspections of cargo prior to crossing the border, an idea pitched at a “shark tank” event at the Miami Field Office in 2018 and pilot tested in 2020.
    • Land and Air Travelers could access the Form I-94 website from the app. The form is needed by most international travelers to the United States, and it’s used to track entrance to the country and exit from it. Through the app, a traveler could apply for the form and then use it to access their travel history and prove their visitor status. Land travelers could use it to apply for a provisional Form I-94 (it’s generally issued automatically for air travelers).

    Not long after, Air Travelers could apply for membership in one of the Trusted Traveler programs which expedite screening and other processes of international travel for pre-vetted American citizens. They could also Request Inspection of certain items like hunting trophies.

    Bus Operator could Submit a Manifest and Check Border Wait Times by checking the Border Wait Times site within the app, where they would be (and are still) prompted to get the CBP Border Wait Times app.

    Services added in 2021 for the International Organization role to assist migrants:

    • International Organization > Check Case Status
    • International Organization > Submit Advance Information

    Subsequently, migrants could access these services themselves:

    • Air Traveler > Advance Traveler Authorization: “Request authorization for non-United States citizens intending to travel to the United States via flight. This action is only available to travelers following the approval of their supporters on Form I-134A through the USCIS.”
    • Land Traveler > Submit Advance Information: “Submit your information before your arrival to a southwest Port of Entry.”

    Additionally there’s a TSA role, hidden from anyone who doesn’t use a TSA email address to log in, allowing TSA supervisors to take a photo of asylum seekers enrolled in the Migrant Protection Protocols (MPP) program, using facial recognition technology (FRT) to verify their enrollment and allow them to travel within the country.

    A bend in the roadmap

    CBP One has changed significantly in the time since launch, diverging from the capabilities planned on this roadmap from October, 2020.

    Apologies for the blurriness (it’s from the FOIA docs), but you should be able to see that I-94, Bus Processing, Scheduling Perishable Cargo, Pleasure Boat (CBP ROAM), Seaplanes Processing, Cruising License, Cargo Crew Processing, Fee Payments, Border Wait Times, and Hand-Carry Permit Items were all to be added by November of 2020. (Launch day was, remember, October 28!)

    By February of 2021 the features would include Scheduling Vessel Arrivals, Trusted Traveler programs (not just signing up, but doing everything you currently do in the Global Entry app, for example), and Scheduling All Cargo.

    2022 would bring Landing Rights Requests, Diversion Notices, Notice of Arrival, and POV/COV (I’m Ready/PARE), which refers to Ready Lanes at land border crossings. You can get expedited processing and get across the border faster if you have one of several RFID chip-enabled ID cards.

    It looks like CBP One got about as far as Perishable Cargo before that road diverged at the beginning of 2021.

    The UI as intended

    I think the original premise for navigating the app is pretty intuitive in itself, as a way to shortcut a user to which services apply to them and away from those that don’t, even if it means some repetition where different types of users need the same service (e.g. I-94 travel history), so you’ll see the same option listed for two different user roles. That’s intuitive given the intended audiences for the app– American companies/citizens, who need to interact with Customs services to comply with regulations regarding commercial shipping and/or international travel, and documented international travelers who need to access that documentation quickly.

    It could’ve been done differently and been even more intuitive, though, based on how large the audience is for one service or another. E.g. if 75% of your audience needs Form I-94 services, it would make more sense to put that on the home menu rather than burying it behind Traveler > Land or Traveler > Air (or Traveler > Sea, but that’s “Coming Soon.”) That would require knowing how large your audiences are for different features, but those stats could be pulled from the existing apps/web pages where they’re currently accessing the features. And of course, it would likely require modifying the user interface as you go.

    It’s difficult to go beyond that first impression, however, because that’s all it is– a first impression. That home screen is the face of a different app than the one CBP One would turn out to be.

    6 very simplified, chronological user guides to CBP One

    Why put user guides here, when they exist on the CBP One website? And why six of them?

    To show how the user experience changed from February of 2021 (when IOs first started using the app) onward, I’ve written up some very abbreviated user guides reflecting how the app was used with each major change over time– from the migrant’s point of view, because they are The User.


    User Guide 1: February, 2021 (MPP Check Case Status)

    In December of 2018, the Trump administration announced the so-called Migrant Protection Protocols, and the program went into effect in January of 2019. The MPP or “Remain in Mexico” program allowed DHS to send migrants back to Mexico upon their arrival at the border and prevent them “clustering” at the border while waiting for their hearings. At this time, the International Organization user role in CBP One allowed staff from those organizations to identify migrants who were enrolled in the program.

    You made it to the border, but they gave you a piece of paper and put you back on a bus to Mexico, where you joined 70,000 other migrants given a court hearing and a notice to appear, then sent away with no real plan for making it back in time for that hearing. You’ve been through hell in Mexico, but CBP officers weren’t asking about that. To request asylum at the border, you’d have had to affirmatively assert that you’re afraid of being sent back, and only then might you have been referred to a USCIS asylum officer.

    That piece of paper they gave you has your A-number on it, and you’re so grateful that your paperwork wasn’t stolen (and of course that you weren’t one of the 1,544 cases of rape, kidnapping, assault, and other violence committed against migrants sent back under MPP).

    That number is one of the ways a kind woman from, say, the Kino Border Initiative is able to look you up using an app on her phone. She needs to send off the right information that identifies you in an Immigration and Customs Enforcement, or ICE, database called the Migrant Protection Protocols Immigration Enforcement Database, or MPP IED.

    If she can verify that you have a hearing pending, you can go back to the border to attend it (if you don’t, your case is thrown out for failure to appear). Turns out she has to use your A-number to do that, because the photos she takes aren’t being accepted. But finally, you’re confirmed as enrolled with a case pending.

    This screenshot is from a PowerPoint presentation given to IOs in early February. It was never provided to the public, even when migrants started using the app directly.


    User Guide 2: March, 2021 (Title 42 Submit Advance Information)

    To streamline the processing of undocumented individuals who may potentially be excepted from the CDC Order, CBP is relying on partnerships with certain International Organizations/NGOs. International Organizations/NGOs will identify undocumented individuals that are potentially excepted from the CDC Order on humanitarian grounds. . . .The manual input of data into USEC by CBPOs is a time-consuming process. The advance collection enables CBPOs to import the information collected by CBP One™ directly into a Unified Secondary event, which reduces the need for manual data entry and improves case processing efficiencies.

    PIA, Unified Secondary System: Advance Information from Certain Undocumented Individuals DHS Reference No. DHS/CBP/PIA-067(a)

    Yours was one of the 1.8 million expulsions from the border under an emergency implementation of a U.S. health law, Section 265 of U.S. Code Title 42, otherwise known simply as Title 42, which went into effect on March 20, 2020.

    IOs have been trained to use the CBP One app to collect information about you: the standard names, dates, and birthplaces, but also information about your parents, the address where you lived before coming to the US (which address, exactly?), and other, more specific details.

    They’ll submit this information about you along with a statement attesting that you fit one or more of the vulnerability criteria that would merit exemption from Title 42, under which there is no claiming asylum– CBP stopped processing asylum requests this month, expelling over 215,000 asylum-seeking parents and children together.

    But at least a teacher from the Sidewalk School helping your kids learn English is able to submit this statement on your behalf. You fit at least two or three of the criteria, so maybe you have a shot? Guess we’ll find out.


    User Guide 3: April 25, 2022 (Ukrainian Direct Submit Advance Information)

    Ukrainians fleeing Russia’s invasion could come to the U.S. through the Uniting for Ukraine humanitarian parole program, i.e. be accepted into the U.S. for a period of two years to live and work lawfully, provided they pass a background check and have a financial sponsor who applied for a Form I-134A on their behalf.

    1. Go to Login.gov, “the public’s one account for government,” and create login credentials for yourself to use in CBP One.
    2. Log in to the app and select Traveler, then Air.
    3. On the Air Traveler screen, select Advance Traveler Authorization: “Request authorization for non-United States citizens intending to travel to the United States via flight. This action is only available to travelers following the approval of their supporters on Form I-134A through the USCIS.”
    4. On your first time using the app, a pop-up will say that your profile is missing the information listed below. Hopefully your English is good enough to carry you through this, because that and Spanish are the only options.
    • Facial Photograph
    • Photo obtained from the passport or Chip on ePassport, where available
    • Alien Registration Number
    • First and Last Name
    • Date of Birth
    • Passport Number

    User Guide 4: January 5, 2023 (CHNV Direct Submit Advance Information)

    On January 5, 2023, the Biden administration announced a humanitarian parole program for nationals of Cuba, Haiti, Nicaragua, and Venezuela (CHNV). Up to 30,000 people per month would be paroled into the U.S. under the CHNV program for a period of two years to live and work lawfully, provided they passed a background check and had a financial sponsor who applied for a Form I-134A on their behalf.

    Nationals of these countries could use CBP One to submit their information in advance, but if they attempted to enter the United States without using the app and/or outside a port of entry, they would be expelled. With the opportunity to enter as parolees, CHNV nationals largely lost their chances at applying for asylum, and Mexico made an agreement with the U.S. to allow up to 30,000 asylum seekers to be expelled to Mexico each month– despite their not being from Mexico.

    See User Guide 3 for instructions, but if you’ll be arriving by land, use CBP One to make the appointment.


    User Guide 5: January 12, 2023 (Title 42 Direct Submit)

    Anyone can use the app now to submit their information and attest that they fit the vulnerability criteria to be exempt from Title 42. But now IOs aren’t instructed to help you– you have to do it yourself, and there is no guide to using the app anywhere. Not within the app, and at this time there isn’t even a website.

    1. Go to Login.gov, “the public’s one account for government,” and create login credentials for yourself to use in CBP One.
    2. Log in to the app and select Traveler, then Land.
    3. Select Submit Advance Information | Enviar Información Anticipada, then fill out your profile: Name, Date of Birth, Phone Numbers, U.S. Address, Foreign Addresses, Nationality, Employment history, Travel History, Emergency Contact Information, Family Information, Marital information, Gender, Height, Weight, and Eye color.
    4. Take a photo of yourself and upload it.
    5. Confirm that you meet one or more of the vulnerability criteria and can provide supporting documentation, and hope for the best.
    6. Do steps 2-5 every morning at 2am as the 700 or so available appointment slots vanish within a few minutes. Yes, including the registration– every morning, enter all of that information as fast as you can, take another photo (or as many as it takes), and keep trying.

    A 27-year-old Cuban woman, who also requested anonymity over concerns that recognition would affect her entry into the U.S., told Rest of World she’d been waiting on the Mexicali-Calexico border penniless with an infant for over a month. ‘I have to wake my 3-year-old baby at 2 a.m. every day to enter our information and try our chances with the app.’ She said she had used the auto clicker to tap over and over on the photo she had to upload to get an appointment. ‘What I have noticed is that auto-clicker apps work mostly when there is only one person trying to get the appointment.’

    Once a ticket scalpers’ tool, auto clickers now help migrants enter the U.S. : Migrants in Mexico are using automation apps to secure appointments with U.S. border officials

    User Guide 6: May 12, 2023 (Direct Submit, post Title 42)

    • Title 42 was lifted on May 11, and the vulnerability requirement is gone from the app.
    • Haitian Creole is added to the app– sort of.

    The quality of the current Creole translations is spotty at best. Users can only select Creole after a full user registration process in English and Spanish, including two-factor authentication. Error messages, drop-down menus, and navigation tools continue to display only in English. . . ‘Any human who is familiar with any kind of written language would look at that and say, that looks wrong,’ said Wagner, who recommends CBP hire language professionals to review the Haitian Creole text on the app. ‘It shows that they truly don’t care whether anybody understands it.’

    Seeking asylum at the U.S.-Mexico border? You’d better speak English or Spanish
    • Appointments are allocated on a lottery arrangement, but with preference given to those who requested an appointment yesterday (a rough sketch of how such a weighted lottery might work follows this list).
    • You still must ask for an appointment each day, but you have 23 hours in which to ask for an appointment, and then another 23 to accept and confirm when you get a notification.
    • Before you’re allowed to request an appointment, you must agree to share your location (and you must be in central or northern Mexico, including Mexico City and Guadalajara). Before you’re allowed to accept an appointment, you must share your location again, and take a “video selfie.”
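
    Since the lottery is doing real work here, here’s a rough, hypothetical sketch of what “preference for yesterday’s requesters” could look like. CBP hasn’t published how its selection actually works, so the weighting and numbers below are invented purely for illustration.

```python
# Hypothetical weighted lottery: people who have been requesting longer get
# more weight, but nobody is guaranteed a slot on any given day.
import random

def run_lottery(requesters: dict[str, int], slots: int) -> list[str]:
    """requesters maps an applicant ID to how many consecutive days they have
    requested an appointment; returns the applicants drawn for today's slots."""
    pool = list(requesters)
    weights = [1 + days for days in requesters.values()]
    selected: list[str] = []
    while pool and len(selected) < slots:
        pick = random.choices(pool, weights=weights, k=1)[0]
        i = pool.index(pick)
        pool.pop(i)
        weights.pop(i)
        selected.append(pick)
    return selected

# Someone who has asked for ten straight days is far more likely to be drawn
# than a first-time requester, but may still lose out.
print(run_lottery({"A": 10, "B": 1, "C": 4}, slots=2))
```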

    Review the technology used (and not used)

    Planned technology vs. technology utilized

    One thing that stands out almost immediately:

    Progress on the rollout of planned features for CBP One ground to a halt in 2021 as the app’s focus shifted away from that roadmap

    CBP stopped adding capabilities to CBP One and built more apps instead

    The new app was designed to have a “user centric interface to guide users with an intuitive and guided border entry/exit experience regardless of geographic location, mode of transportation or citizenship.” It would “eliminate[] the need for multiple CBP applications.”

    When development of CBP One was announced, CBP had five mobile apps: CBP Jobs, ROAM, MPC (Mobile Passport Control), Border Wait Times, and CBP DTOPS.

    As of this writing, CBP has eight apps: MPC, Global Entry, ROAM, CBP One, Border Wait Times, myCBP, ESTA, and CBP Truck QR.

    Development status reports showed a distinct lack of development

    CBP published internal status reports for CBP One usage and available/upcoming features, which give an idea of how things weren’t progressing.

    In comparing these status reports for June and July, a few things to notice:

    • Under “What is Available Now,” features are listed as having been available in February and March of 2020, before the app actually launched in October. Presumably this was a typo and they should’ve said 2021, but the error wasn’t fixed from June to July.
    • As of June 18, NGOs had the ability to schedule appointments using the app, but migrants using the app to submit their own requests directly was projected for the end of June, 2021. The July report says that the functionality had been placed on hold, and migrants didn’t get the ability to submit their own information until January of 2023.
    • The “current statistics” are exactly the same for both months.

    Facial Recognition Technology

    Most of the criticisms about CBP One’s actual functionality concern its facial recognition technology. The background on that certainly explains some of the complaints.

    CBP One was designed to comply with a biometric entry/exit mandate issued before apps (or the DHS) existed

    Biometric identity information is used to identify or verify who you are based on physically distinguishing characteristics, such as your fingerprints, irises, or face. It suddenly became very important to the U.S. federal government in the wake of 9/11/2001, when for many, any shifty-eyed stranger on an airplane might be a terrorist ready to steer the flight into a building, and three months later, a fear of explosive shoes took hold of America and has largely kept its grip ever since Richard Reid completely failed to set fire to his.

    The DHS cites multiple pieces of legislation from around that time, including the Enhanced Border Security and Visa Entry Reform Act of 2002’s Title III: Visa Issuance, as its mandate for gathering biometric data on travelers entering and exiting the country. The legislation references the INS (Immigration and Naturalization Service) because DHS hadn’t yet been created; that happened in 2003, when the INS was abolished and its functions were split among USCIS, ICE, and CBP under the new department.

    The relevant section reads:

    Title III: Visa Issuance – Amends the Immigration and Nationality Act (Act) to direct the Secretary of State (Secretary), upon issuance of an alien visa, to provide INS with an electronic version of the alien’s visa file prior to the alien’s U.S. entry.

    (Sec. 302) Sets forth technology standard and interoperability requirements (including October 26, 2004 implementation deadlines) respecting development and implementation of the integrated entry and exit data system and related tamper-resistant, machine-readable documents containing biometric identifiers. Requires a visa waiver country, in order to maintain program participation, to certify by October 26, 2004, that it has a program to issue to its nationals qualifying machine-readable passports that are tamper-proof and contain biometric identifiers. Authorizes appropriations.

    With the need to gather biometric information applying in all of these cases, it’s not surprising that CBP’s AI facial recognition engine, the Traveler Verification Service (TVS), isn’t just used in CBP One, but also in TSA PreCheck, Global Entry kiosks and the Global Entry app, and the Mobile Passport Control app.

    CBP developed TVS to be scalable and seamlessly applicable to all modes of transport throughout the travel continuum. CBP has successfully implemented facial biometrics into the entry/arrivals processes at all international airports and into the exit processes at 32 airport locations. CBP also established facial biometrics at 26 seaports and all pedestrian lanes at both the Southwest Border and the Northern Border land POEs.

    Statement for the Record on Assessing CBP’s Use of Facial Recognition Technology, July 2022

    Facial recognition technology works in two very general ways: 

    • One-to-one comparisons for the purposes of verification, such as when you unlock your phone using your face to authenticate. This works by algorithms learning what your face looks like first, and then comparing future images of you to that original image, using it as a template. 
    • One-to-many comparisons for purposes of identification, such as when a photo is taken of someone in a crowd, and you identify them by comparing that photo to a database of photos that may include one or more photos of the person you’re identifying. These photos are also templatized, as in, they’re converted to a numerical pattern that is, ideally, specific enough to avoid false positives (matches to photos that don’t actually show the same person) and false negatives (overlooking images that do show that person). (A minimal code sketch of both comparison modes follows this list.)
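
    Since this distinction does a lot of work in what follows, here’s a minimal sketch of the two modes, assuming some face-recognition model (not shown) has already converted each photo into a numeric template. The function names, similarity measure, and threshold are invented for illustration and are not CBP’s or NEC’s actual code.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one: does the live photo's template match one enrolled template
    (e.g. the template built from a passport photo)?"""
    return similarity(live, enrolled) >= threshold

def identify(live: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """One-to-many: find the best match in a gallery of templates, if any.
    Matching the wrong person is a false positive; returning None for someone
    who really is in the gallery is a false negative."""
    best_id, best_score = None, threshold
    for traveler_id, template in gallery.items():
        score = similarity(live, template)
        if score > best_score:
            best_id, best_score = traveler_id, score
    return best_id
```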

    According to TVS’s first Privacy Impact Assessment (PIA) in 2018, it was tested by stationing CBP officers (in partnership with TSA) at airport departure gates, where they would take photos of travelers preparing to board the plane. Each photo would then be compared to a downloaded gallery containing templates from previously acquired photos of the same traveler (such as a passport photo), along with images of all travelers associated with the flight manifest, created using Advance Passenger Information System (APIS) data.

    If a match couldn’t be found, a CBP officer would use a Biometric Exit Mobile wireless handheld device, or BE-Mobile, to manually capture the traveler’s fingerprint and use that to query a DHS-wide database called the Automated Biometric Identification System, or IDENT. Non-citizens’ facial images would then be retained in IDENT for use in future encounters with CBP. 

    The success of these programs led CBP to adopt TVS as its “accredited CBP information technology system that consists of a group of similar systems and subsystems that support the core functioning and transmission of data between CBP applications and partner interfaces.” It would use TVS as its “backend matching service for all biometric entry and exit operations that use facial recognition, regardless of air, land, or sea.” 

    Nevertheless, the PIA acknowledged, “While CBP may create APIS manifests on land border crossers via bus or rail, unlike travelers in the air and sea environments, there are no manifests created for pedestrian travelers to assemble a gallery of known travelers. CBP is developing processes that would enable the use of TVS at the land border; for example, CBP may briefly retain local galleries of travelers who have recently crossed at a given Port of Entry and are expected to cross again within a given period of time.” 

    A 2016 PIA for CBP’s Departure Verification System (DVS, TVS’s predecessor) described it this way:

    At selected departure gates at select airports, CBP will deploy a facial recognition camera in close proximity to the airline boarding pass reader. This camera will match live images with existing photos from passenger travel documents assembled based on flight manifest data of the boarding flight. Upon receipt of the passenger flight manifest and throughout the passenger check-in process, CBP will compile photos from the Automated Biometric Identification System (IDENT), the Department of State’s Consolidated Consular Database, and U.S. Citizen and Immigration Service’s Computer Linked Adjudication Information Management System (CLAIMS 3) to build a flight-specific gallery housed in the Automated Targeting System (ATS).

    . . . The test was scoped to include only one route and run until September 30, 2016; the pilot was later extended through November 2016. For flights operating on this route, a CBP-manned camera and tablet computer were placed between the boarding pass reader and the aircraft. As travelers checked in for their flight, CBP obtained passenger manifest data and assembled existing traveler photographs into a downloadable file that was pushed to the tablet prior to boarding. These photographs had been accessed from various DHS and Department of State systems. As travelers passed through the boarding area, the camera took their photographs. The real-time photographs were compared to the downloaded pictures to determine if CBP systems could accurately match the two photographs.

    (Yes, this is what it’s like to read every Privacy Impact Assessment (PIA)– they’re clearly not intended to be consumed by, for example, the passengers on these flights. But though they’re public, they’re mostly about explaining how new technologies don’t violate any existing privacy regulations. IMO they could just as easily be called CYAs as PIAs, but YMMV.)

    That gives you an idea of the conditions under which the TVS was developed: a very controlled environment composed of a brightly lit airport departure gate, where CBP officers (“CBP-manned camera”) were taking photos of travelers and comparing those photos to the travelers’ own photos from their travel documents, i.e. passport photos etc., as well as to the flight manifest.

    A September 2020 GAO Report shows that FRT for land crossings was low priority– at best– when CBP One launched

    The Government Accountability Office published a report evaluating CBP’s use of facial recognition technology (TVS) to date.

    As of May 2020, CBP, in partnership with airlines, had deployed FRT to 27 airports to biometrically confirm travelers’ identities as they depart the United States (air exit) and was in the early stages of assessing FRT at sea and land ports of entry.

    FACIAL RECOGNITION: CBP and TSA are Taking Steps to Implement Programs, but CBP Should Address Privacy and System Performance Issues

    The report described, and included photos of, the scenarios of several pilot tests, looking at the accuracy of FRT but also the implementation of privacy safeguards and warnings. It included recommendations for CBP to be more diligent about displaying signage informing passengers of the Biometric Entry-Exit program and their right to opt out if they chose, as well as auditing airlines employing FRT for privacy purposes.

    But the record scratched at the description of the process CBP had in place to test FRT for pedestrians:

    As travelers approach the primary inspection booth and present their travel identification documents, such as passports or visas, cameras connected to TVS attempt to capture live photos. CBP officers scan the traveler’s identification document, which allows CBP’s TECS system to locate the document photo. Once the photo has been located, CBP’s system sends the photo to TVS. TVS then compares the live photo against the document photo to produce a match or no-match result. Travelers who are not matched by TVS instead have their identities verified manually (a visual inspection) by a CBP officer.

    Here’s a rough approximation of what that looks like:

    1. You present your travel documents to the CBP Officer.
    2. A facial recognition camera takes a photo of you.
    3. That photo is then sent to TVS.
    4. This newly-captured photo of you is compared to the one in your travel documents.
    5. If a match isn’t found, CBP officers do a visual inspection to verify your identity.

    Now compare that to the process in CBP One (both flows are sketched in code after this list):

    1. You have no documents, so skip this step.
    2. Instead, you manually enter your information into an app, and that takes a photo of you.
    3. If the app accepts your photo, it will be compared to at least two databases that might or might not have your photo. You won’t know the results of that comparison.
    4. If you’re issued an appointment and want to accept, you’ll need to undergo liveness detection by taking a “video selfie.”
    5. If at any point something doesn’t work, too bad. No officers are around to give you a visual inspection, so this is where the process ends.
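
    To make the structural difference plain, here’s a schematic sketch of the two flows. Every function name and branch is my own invention for illustration; the real systems are obviously not this simple.

```python
# Schematic contrast of the two processes described above, not CBP's code.
# The point is structural: the port-of-entry process ends with a human
# fallback, while the CBP One process ends with the app.

def port_of_entry_flow(tvs_matched: bool) -> str:
    """Pedestrian inspection at a booth, per the GAO description: document
    scanned, live photo taken, TVS compares the two."""
    if tvs_matched:
        return "identity verified by TVS"
    # A no-match is resolved by a person standing right there.
    return "identity verified manually by a CBP officer"

def cbp_one_flow(photo_accepted: bool, appointment_offered: bool) -> str:
    """Remote self-service via the app, per this post's walkthrough: the
    applicant enters their own information and takes their own photo."""
    if not photo_accepted:
        # No officer is present, so there is no fallback step.
        return "stuck: start over tomorrow"
    if not appointment_offered:
        return "no slot today: request again tomorrow"
    return "appointment offered: pass the video-selfie liveness check to confirm"

# Every branch of the first flow ends with a verified traveler; two of the
# three branches of the second end with the applicant trying again.
```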

    Sure, TVS might be doing the comparison– but nearly everything else is different.

    The advantage in the first scenario couldn’t be more obvious– there’s an actual room, with CBP officers in it. In other words, you’re not using an app.

    In a July 2021 report, NIST said that the quality of the camera and the environment in which the picture is taken affect the accuracy of facial recognition. Thus, the availability of CBP officers to check the accuracy of the systems conducting the photos’ comparison is vital to ensure racial minorities are not disproportionately impacted by the technology’s shortcomings.

    CBP One: An Overview

    And what about when something goes wrong?

    In addition to random sampling, CBP officials can be informed of problems with air exit facial recognition if they are observed or reported by airlines or airports. For example, as previously mentioned, we observed a flight that experienced a high number of no-matches. When we alerted officials to the problem, they reviewed match data from other flights at that airport and identified similar issues. Specifically, CBP officials determined that lighting issues at a particular terminal were affecting the quality of the photos taken at the gate, and they worked with airport officials to address the issue. CBP officials also noted that they generate automated reports of matching rates and usage on a weekly basis, and provide weekly performance reports to stakeholders, such as airline partners. Officials said they use this reporting to gauge system performance.

    So there was a problem, and officials were alerted to it, and they assessed the situation– probably in person– then determined that the lighting was affecting the quality of the photos. Also they generate automated reports weekly and report to stakeholders.

    Does any of that come even close to applying to how CBP One is used? Note that this report came out a month before CBP One was launched, and its assessment of CBP’s facial recognition technology is that it’s very accurate when used in airports, except when lighting or something else affects image quality– in which case CBP officials look into it and address the issue. And they’re looking into applying FRT at land crossings for pedestrians, but that means pedestrians with passports and/or visas arriving on foot, in person, facing a camera operated by CBP officers.

    CBP might’ve been determined to use TVS as its “backend matching service for all biometric entry and exit operations that use facial recognition, regardless of air, land, or sea,” but if the vast majority of your pilot testing and general application of a technology is on air travelers, you are by definition excluding all undocumented migrants from your results. You are developing your technology to fit a scenario that does not include, and therefore cannot apply to, this audience.

    This audience is composed of people using their own phones, on crappy wifi, by themselves with no help, most likely in terrible lighting, with nobody noticing when it’s not going as planned. And when that happens, it doesn’t seem like it goes in anybody’s weekly performance report for stakeholders.

    Statement for the Record on Assessing CBP’s Use of Facial Recognition Technology

    In July of 2022, CBP submitted a statement for the record for a hearing titled “Assessing CBP’s Use of Facial Recognition Technology” before the House Committee on Homeland Security.

    CBP is aware of concerns regarding biometric facial comparison matching, specifically that non-match results may be racially or demographically biased in performance. CBP does not track race as a descriptor during traveler processing; however, CBP data analysts have performed extensive operational analytics on TVS matching that shows a negligible effect in regard to biometric matching based on country of citizenship, age, or gender while achieving an average technical match rate of 99.4 percent on entry and 98.1 percent on exit. No changes have been necessary as the matching performance has remained consistent for several years across multiple matching algorithms. From January 2017 through the end of June 2022, technical match rates remained high among citizens from various regions of the globe, for example: Africa 99.5 percent match rate; Asia 99.3 percent match rate; Central America 99.6 percent match rate; and Europe 99.6 percent match rate. If a traveler cannot be matched by CBP’s biometric facial comparison technology, the traveler will simply be processed through the traditional inspection process consistent with existing requirements for entry into the United States.

    Maybe you’ve already guessed, but this statement didn’t mention CBP One.

    Liveness Detection

    Wait, what’s a “video selfie”?

    The first CBP One PIA has a brief description of liveness detection in the app, and it’s clearly not just talking about a photo.

    CBP One™ prompts the user to take a live photograph or selfie (new photograph and not the same image collected from the passport/epassport). CBP One™ instructs the user to line their face up with a circle on the screen of their mobile device. CBP One’s embedded software then performs a ‘liveness’ test to determine that it is real person (and not a picture of a person).

    First CBP One PIA

    A footnote on that section reads:

    While the user is taking the “selfie,” the technology embedded within the mobile application relies on the devices camera to view a live image through 3D face changes and observing perspective distortion to prove the image is 3D. If “liveness” cannot be confirmed, the user is unable to utilize the CBP One application.

    This sounds very much like the iProov product Flashmark, which “uses the screen of a mobile device to flash a unique, one-time sequence of colors, under server control, onto the user’s face. The server uses machine learning technology to analyze and determine if the image is a live person.” iProov received multiple CBP contracts to integrate “Genuine Presence Assurance” into CBP’s technology, starting in 2018.
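
    For readers who haven’t run into it, here’s a toy sketch of the challenge-response idea behind screen-flash liveness detection, as described in the PIA footnote and iProov’s product copy. Everything here (the color set, the crude nearest-color matching) is invented for illustration; the real system uses server-side machine learning to model reflectance, 3-D shape changes, and perspective distortion.

```python
import random

CHALLENGE_COLORS = {"red": (255, 0, 0), "green": (0, 255, 0),
                    "blue": (0, 0, 255), "yellow": (255, 255, 0)}

def issue_challenge(length: int = 4) -> list[str]:
    """Server side: a one-time sequence of colors to flash on the screen."""
    return [random.choice(list(CHALLENGE_COLORS)) for _ in range(length)]

def dominant_tint(avg_rgb: tuple[float, float, float]) -> str:
    """Crude placeholder: which challenge color does this frame's average
    face color sit closest to?"""
    return min(CHALLENGE_COLORS,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(avg_rgb, CHALLENGE_COLORS[c])))

def verify_liveness(challenge: list[str],
                    frame_averages: list[tuple[float, float, float]]) -> bool:
    """Server side: the capture passes only if the colors reflected off the
    face follow the one-time sequence, which a held-up photo or a replayed
    recording should not reproduce."""
    return [dominant_tint(f) for f in frame_averages] == challenge
```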

    The iProov connection occurred to me when I read the monthly operational report from March 17, 2023, which states:

    The large number of appointments scheduled via CBP One in recent months was made possible through the identification of process improvements and implementation of a number of software updates that fixed earlier reported technical difficulties. For example, CBP addressed reported challenges related to geolocation and error messages due to bandwidth issues with a third-party software for liveness.

    First, I think that third party must be iProov.

    Second, this is as classic an example of a “bug fix that isn’t a bug fix” message as you could get — “The app worked really well, which was only possible because we fixed the thing we broke.” Or in this case, possibly “We made the third party fix the thing it broke.”

    Third, bug fixes go in release notes. Or at least they should. But for CBP One, there are no release notes (see below), because the release notes go in monthly operational reports. Because of course.

    A thread for someone else to pull on?

    So many of the complaints about facial recognition point to studies, including by NIST, demonstrating racial and other biases in the technology, suggesting that this accounts for why Haitian refugees, for example, can’t seem to get the app to recognize them. In response, both CBP and NIST point out that facial recognition algorithms improve rapidly over time, and that:

    CBP has partnered with the National Institute of Standards and Technology (NIST) to perform an independent analysis of CBP’s facial recognition performance, including potential impacts due to traveler demographics and image quality.  Based on an algorithm vendor test conducted by NIST in 2019, it was concluded that the false positive differentials based on demographics were undetectable using the NEC-3 algorithm which is used by CBP.  Per NIST, the NEC-3 is the most accurate algorithm evaluated (out of the 189 tested).  CBP’s match rate is greater than 97 percent and improving.

    U.S. Customs and Border Protection (CBP) Use of Biometrics (Knowledge Article)

    (CBP partnered with NEC Corporation of America in June 2017– CBP’s OFO, United Airlines, and NEC Corporation tested facial recognition at Houston’s George Bush Intercontinental Airport. Product: NeoFace® Express facial recognition stations.)

    However,

    • That article doesn’t link to the actual NIST report on testing for demographic effects: Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects
    • False positive differentials are mentioned, but not false negatives
    • The article specifically describes air travel and comparing photos to passports and visa photos (whereas the FRVT test also looks at border crossing images)
    • It also doesn’t mention presentation attack detection (PAD), which is what the “video selfies” are for– to verify that the camera is seeing not just you, but the actual you, rather than, say, some imposter holding up a picture of you. It seems to me that this is an entirely separate area where bias might be introduced, which in this case would be in iProov’s territory rather than NEC’s.

    But I can’t really give this the full examination it deserves. NIST has even separated out FRVT into two different areas, FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation), to make a clearer differentiation between FRT and PAD (to put it generally), and while it’s fascinating, it’s really out of my wheelhouse at the moment.

    Still, I suspect that many of the complaints about bias are actually about liveness detection and not facial recognition.

    A note on ROAM (Reporting Offsite Arrival Mobile)

    CBP’s app for pleasure boaters was on the app store in 2020 when CBP One launched. CBP One’s roadmap calls ROAM out as a service to be incorporated into the app within the first month.

    And announcements of the app’s demise, which appear to have been greatly exaggerated, have appeared publicly on the CBP One page since it was originally posted.

    But not only is the app still around, CBP is still updating it– in September of 2021, they added a feature to apply for cruising licenses. Stranger still, ROAM’s description on the App Store says “Disclaimer: This is a pilot version only for use in limited areas. Contact your local Port of Entry for guidance.”

    The strangest thing of all might be this quote from the first CBP One PIA. That’s all of the information about it in the PIA, so I don’t know what happened with this functionality.

    Reporting Offsite Arrival-Mobile (ROAM)

    The ROAM mobile functionality is embedded into the CBP One™ mobile application and provides travelers arriving to the United States with an option to voluntarily self-report their arrival to CBP. In addition, the ROAM mobile functionality will automate existing manual data entry and law enforcement queries for CBP and provide a more sophisticated capability for conducting a remote inspection via video conference. This function will not be available at launch of CBP One™; CBP will publish a standalone, function-specific PIA to discuss the privacy risks and mitigations thoroughly. CBP will update this Appendix when the standalone PIA is published.

    First PIA for CBP One

    Walk through the documentation

    Website

    Here’s a screenshot of the first day of the CBP One website — February 23, 2021. You could guess the timing based on the blue box talking about MPP, but the rest of it with the orange arrows has been more or less standard since then.

    It launched with a “Getting Started” section limited to a brief set of instructions to download the app, create a login.gov account to use it, then “users can access the different CBP services based on their specific needs.” It could have contained, for example, the Powerpoint presentations given to NGOs, or the January 5, 2023 fact sheet announcing that migrants could start using CBP One on January 12, but did not.

    That’s as much time as I’ll spend on the website, which ordinarily would be the focus when talking about documentation for an app. But that’s exactly the problem, because there’s not much to say about documentation that’s incomplete and out of date, except that it’s incomplete and out of date. Which it is.

    And I’m not actually sure how important guides are, here. They should exist, absolutely, and they should be up to date, absolutely. But the guides do not tell you what to do when the app crashes over and over again, erasing your registration and taking you back to the login screen. They don’t tell you what to do if you can’t create a login, register a traveler, or request an appointment. There is, however, one troubleshooting item you will see: the guides won’t tell you what a video selfie is, or what it’s used for, but if it fails, the advice is to take a video selfie of someone else. Or delete your registration and start over.

    Update announcements

    Since January of 2023, when the Biden administration announced that migrants would use CBP One to submit their own information, there have been two update announcements posted to the website: May 5, 2023 (announcing an update for May 10) and February 29, 2024, which…might not have been announced at all, actually, since the page doesn’t seem to notice that it’s changed.

    That might be because the time in which you can make an appointment has actually gotten shorter, for the first time since launch– as of May 10, 2023, you had 23 hours in which to request an appointment, and as of Feb. 29, 2024, you have 12. I can see why you wouldn’t draw attention to that unless you had to.

    Release notes

    What release notes?

    Tech support

    Well, umm…yep. That’s it.

    Anybody try emailing this address? I did, and didn’t get any reply. Weird. Should I submit a FOIA request?

    In-app documentation

    Recall that CBP announced the development of a new app, citing the need for “an intuitive single portal for travelers and stakeholders to access CBP mobile applications and services such as CBP ROAM, I-94 Entry/Exit, and the Appointment Request Feature.”

    The app officially launched on October 28, 2020.

    The new app would be designed to have a “user centric interface to guide users with an intuitive and guided border entry/exit experience regardless of geographic location, mode of transportation or citizenship.”

    It would, effectively, be a hub where users could be directed toward services based on their particular needs on the basis of their user type/role/persona (I’ve seen all three used interchangeably in documentation).

    These screenshots describing CBP’s vision for CBP One are still visible in the app, on the General FAQs screen. As of this writing it’s mid-April, 2024, which makes you wonder which “early October” and “November” are referenced here. Based on what user types and features are actually available in the app, I have a feeling it’s 2020.

    It’s like the app is haunted with ghosts of personas and services Never Yet To Come.

    It’s like walking through a rental office space past a series of doors with signs on them, but if you open the door, all you see is a poster with cheerful text reading “Coming soon! Features for each user type will be rolled out at different times, so please check back if the one you need is not yet available!”

    One of those rooms is, of course, full of hundreds of thousands of migrants, and that’s why the rest will remain empty. But you won’t see that mentioned on the signage.

    What CBP One might have been

    I give you the Mobile Passport Control app, developed by CBP with Airside in 2013:

    Why the profound difference between the two?

    • MPC is designed to help air travelers avoid some of the “agony” (as Hipmunk used the term) that other travelers experience when trying to comply with federal regulations.
    • CBP One is designed as the only way for migrants to comply with regulations, thereby possibly relieving them of the “agony” experienced by migrants who aren’t allowed into the country at all.

    In both cases, a select group is given an advantage over other groups in terms of complying with regulations set by the same entity extending that advantage. (Like TSA PreCheck, which allows travelers to pay to get through security quicker, and also uses facial recognition — and also seems like something everybody should get automatically, rather than an advantage you can pay for.)

    But I think it’s difficult to get our heads around the real, enormous, but hidden difference: every other program, every other app, is a choice. A real choice– the most you risk by not using them is an hour in a security line.

    CBP One, despite all statements to the contrary made by CBP itself, is not used voluntarily. Nobody would volunteer to use it. This is the kind of app you only use because you’re required to, which sounds absurd as a design critique for an app. But it’s true, because CBP One is the most powerful app. What you risk by not using it can be as costly as your future, even your life.

    Evaluate criticisms of the app

    The indignity of “glitchiness”

    CBP One requires that applicants take a live photo. You can’t use an old selfie, and the app seems to have trouble with darker skin tones. And that is one of the glitchier aspects of this entire application, because the AI– the camera does not pick up certain phenotypes. And interestingly enough, when you get to that step, there is a model who is facing you on the screen. She’s a beautiful white model. And it’s surreal to watch an indigenous Mayan woman trying to take a photo facing this white model, and the camera just does not pick up her skin complexion. And that is often where the app crashes.

    Gia Del Pino, Director of Communications at the Kino Border Initiative, on Slate’s TBD podcast

    This has happened to me as well– many times.

    Austin Kocher made an excellent point in a paper about CBP One last year:

    I argue that while glitches productively call attention to the controversial processes of asylum digitization, representing technological barriers to asylum as “glitches” displaces political discussions about the right to asylum with depoliticized discussions about patching software problems.

    Glitches in the Digitization of Asylum: How CBP One Turns Migrants’ Smartphones into Mobile Borders

    Glitches are pretty generic, as these things go. They come in a few varieties that you see across all kinds of software, regardless of how frivolous or necessary it is. The glitches people have reported about CBP One for years now have pretty much remained constant in type – the FRT can’t recognize your face, or the geolocation thinks you’re already in the U.S., or the app crashes and takes you back to login repeatedly (something that happens to me quite a lot, actually).

    It’s hard to say how frequently they’re occurring or where, or for whom, though, because CBP One’s documentation doesn’t tell us that. It doesn’t tell us what, if anything, CBP is doing to fix them. This kind of dynamic is also pretty common– who hasn’t worked with software regularly and felt completely in the dark about when, or whether, a problem they’ve reported is going to be fixed?

    What sets CBP One’s glitchiness apart is the very fact of glitches. To complain that your asylum app has glitches can at once be 100% legitimate, and 100% like complaining that your right to privacy burnt a hole in your hand. You should not need to protect against flammability in claiming your right to privacy. You should not have to protect against glitches to claim your right to asylum.

    CBP could give away hundreds of thousands of iPhone 15s to hundreds of thousands of asylum seekers, coupled with power bricks that last forever. It could make CBP One the most user-friendly app on the planet. It could provide top-notch customer service. None of these things would, in the slightest, rectify the inherent indignity of predicating a migrant’s well-being on a program you download from the same place as Candy Crush Saga.


    The United Nations doctrine against returning refugees to countries where they faced oppression severe enough to force them to flee is called the principle of “non-refoulement.” It’s invoked in the 1951 United Nations Refugee Convention, whose obligations the U.S. took on in 1967. The Convention contains many provisions about the treatment of refugees (spoiler: the U.S. doesn’t comply with most of them), but its central principle of non-refoulement is articulated in Article 33:

    No Contracting State shall expel or return (“refouler”) a refugee in any manner whatsoever to the frontiers of territories where his life or freedom would be threatened on account of his race, religion, nationality, membership of a particular social group or political opinion.

    I like that term for a lot of reasons, such as how it firmly assigns the “foulness” to the location from which the refugee is fleeing, rather than to the refugee, as Trump recently did by calling migrants “vermin.” In that he echoed bigots across history who have made entire populations their targets of moral disgust, labeling them as parasites, pests, germs, etc.– that kind of rhetoric certainly aided in closing the border to even asylum seekers in the name of protection against Covid (extra ironic given Trump’s own stance on the disease).

    Obviously the complaints about how CBP One works differ wildly between the two parties (Homeland Security Committee vs. the 26 signers to the 3/13/23 letter) but this means that at least theoretically, in some hypothetical scenario, both sides on this issue could work together to make CBP One a better app.

    Now on the subject of migrant-hating Republicans, I must bring up the House Committee on Homeland Security. But since there’s no possible way to cover every outlandish claim they’re making about CBP One, I’ll look into one– especially since it involves a supposed “glitch.”

    An “extensive investigation”

    In September, the Washington Examiner reported that cartels are using virtual private networks (VPN) to skirt requirements that aliens signing up for appointments at ports of entry via CBP One be present in northern Mexico before making the appointment. Using these VPNs, the cartels can exploit vulnerabilities in the app and schedule appointments for individuals regardless of their location—all for a fee—and they advertise this “service” on social media.

    NEW DOCUMENTS OBTAINED BY HOMELAND MAJORITY DETAIL SHOCKING ABUSE OF CBP ONE APP

    A letter sent to Secretary Mayorkas on September 14, 2023 from Reps. Mark Green and Clay Higgins claimed that cartels are using VPN connections to evade CBP’s geofencing requirement that appointments be scheduled only from central and northern Mexico, based on an August article in the Washington Examiner. The article claimed that migrants from Guatemala were only allowed by Mexico to pass through the country to the United States if they had a CBP One appointment, which shouldn’t be possible to schedule from outside of Mexico– which, supposedly, proved that the app had been “hacked” by cartels.

    In a follow-up article in the Washington Examiner in October of that year, CBP Spokeswoman Erin Waters was quoted as saying “Claims that the CBP One app has been hacked are categorically false. Criminal organizations and smugglers continue to prey on vulnerable migrants, lying to them and putting them in harm’s way. Here is the reality: The lawful and orderly pathways we have established have been bad for cartels and other criminal organizations seeking to exploit migrants.” She also pointed out that “Importantly, the CBP One app requires a user’s device location services and GPS data to verify their location before booking and confirming an appointment.”
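
    For readers wondering what a GPS-based geofence check even is, here’s a hypothetical sketch. The bounding box below is a crude rectangle covering roughly central and northern Mexico (it includes Mexico City and Guadalajara, per the app’s stated requirement); CBP has not published its actual boundary or logic, so everything here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DeviceLocation:
    latitude: float   # as reported by the phone's location services
    longitude: float

# Very rough illustrative bounding box; not CBP's real geofence.
LAT_MIN, LAT_MAX = 19.0, 32.7
LON_MIN, LON_MAX = -117.3, -97.0

def inside_geofence(loc: DeviceLocation) -> bool:
    """True if the device-reported coordinates fall inside the allowed box.
    Note that this checks GPS coordinates, not the network address a VPN changes."""
    return LAT_MIN <= loc.latitude <= LAT_MAX and LON_MIN <= loc.longitude <= LON_MAX

print(inside_geofence(DeviceLocation(19.43, -99.13)))  # Mexico City: True
print(inside_geofence(DeviceLocation(14.63, -90.51)))  # Guatemala City: False
```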

    The Washington Examiner (described by Media Bias/Fact Check as “based on editorial positions that almost exclusively favor the right and mixed for factual reporting due to several failed fact checks”) makes this claim about exploitation of CBP One’s geolocation by cartels selling VPN service based on “an extensive investigation that included a review of unclassified, internal DHS documents and communications,” but I see no reference to the details of this investigation in the article, and no way to view those unclassified DHS documents.

    Yet the House Committee on Homeland Security Chairman not only used this article to make a claim of fact that “Mexican cartels are abusing the Biden administration’s expanded use of the CBP One app as part of their vast human smuggling operations,” but then itself claimed to have accessed those same “unclassified, internal DHS documents and communications.” So where are they?

    The documents are still under review to determine the extent of DHS’ compliance with the Committee’s comprehensive request.

    Umm. Admittedly, the extent of my research here is a) looking at other “news” posts from the House Committee on Homeland Security to see if they eventually released the documents from review (not so far as I could tell), b) reading those two Washington Examiner pieces closely, trying to find a link or something to the “unclassified, internal DHS documents and communications” (no luck), and c) tweeting at Washington Examiner journalist Anna Giaritelli to ask if she’s seen them. But they can’t have just made up an investigation into internal DHS documents, right? I mean, they got a quote from CBP Spokeswoman Erin Waters saying they’re full of it. If they had evidence that they’re not, wouldn’t it…be somewhere?

    …especially if the House Committee on Homeland Security is going to make that claim themselves, citing the Washington Examiner as their only evidence?

    In the end, are CBP the only ones who like the CBP One app?

    I’m not sure even they are big fans of it, but they do at least sound like they’re fans of getting information about migrants– both biographic and biometric– submitted via an app.

    For partners, using biometric technologies advances their operations, so they can improve the guest experience and boost customer satisfaction. For CBP, using biometrics allows us to shift the focus of our Officers from administrative functions to core law enforcement duties, improving our ability to deter, detect, and prevent threats to our nation.

    Biometric Partners | U.S. Customs and Border Protection

    Fundamentally, what we’re looking to do is get rid of paper, get rid of manual processing steps, and let . . . us free up our time from border agents and others in the process so they’re spending less time staring at a screen, less time printing out documents, and more time actually on the front lines, doing their jobs keeping us safe. That’s been a core part of my role as CIO, and we’re going to continue to accelerate that with AI innovations.

    DHS Chief AI Officer Eric Hysen goes on Politico Tech podcast

    Typically, once an undocumented individual arrives at a land POE for processing, CBP Officers (CBPO) spend significant time collecting and verifying basic biographic data about the individual during the inspection process. One at a time, the CBPOs interview and collect information from such individuals during secondary inspection. The CBPOs manually enter the information into the Unified Secondary System (USEC). To streamline and increase processing capacity at land POEs, CBP uses the CBP One™ mobile and desktop applications to allow the advance submission of biographic and biometric information from undocumented individuals seeking admission into the United States.

    CBP One PIA Feb. 19, 2021

    Historically, U.S. Customs and Border Protection (CBP) received no advance biographic or biometric information prior to the arrival of undocumented individuals at ports of entry (POE). This lack of information increases the amount of time it takes CBP officers (CBPO) to process undocumented individuals upon their arrival. To streamline and increase processing capacity at land POEs, CBP is expanding the use of the CBP One™ mobile and desktop application to allow the advance submission of biographic and biometric information from undocumented individuals seeking admission into the United States.

    Privacy Impact Assessment DHS/CBP/PIA-076 Advance Information from Certain Undocumented Individuals

    But the March 13, 2023 letter to Mayorkas from Jesús G. “Chuy” García and 25 other representatives cited something I hadn’t noticed before:

    According to DHS Guidance, asylum seekers or others seeking humanitarian protection cannot be required to submit advance information in order to be processed at a southwest Border land POE.

    A footnote cited Guidance for Management and Processing of Undocumented Noncitizens at Southwest Border Land Ports of Entry, U.S. Customs and Border Protection (Nov. 1, 2021), which is a letter to William A. Ferrara, Executive Assistant Commissioner, Office of Field Operations, from Troy A. Miller, Acting Commissioner, U.S. Customs and Border Protection, expressing that exact thing:

    Possible additional measures include the innovative use of existing tools such as the CBP One™ mobile application, which enables noncitizens seeking to cross through land POEs to securely submit certain biographic and biometric information prior to arrival and thus streamline their processing upon arrival. OFO also should accelerate ongoing efforts to digitize processing at POEs and more effectively use data to increase throughput. In developing these solutions, CBP should, as appropriate, collaborate with interested non-governmental organizations and other key partners, consistent with applicable privacy protections and civil rights and civil liberties.

    Importantly, however, asylum seekers or others seeking humanitarian protection cannot be required to submit advance information in order to be processed at a Southwest Border land POE. The submission (or lack thereof) of advance information should not influence the outcome of any inspection. CBP will continue to make admissibility and processing determinations on a case-by-case basis at the POE.

    According to this guidance, CBP One shouldn’t be the exclusive means for migrants seeking humanitarian protection to appear at the border legally for inspection– or even, possibly, the primary means for them to do so. It sounds like Miller was, in fact, suggesting that the CBP One app should be used like an app– which is to say, a supplementary device that makes a process more convenient.

    And according to the sources quoted above, it does indeed make the immigration process more convenient– for CBP officers. Obviously CBP isn’t a business, but if it were, it would be the odd sort of business that makes an app for its customers to interact with its employees, but designs it primarily to serve the employees rather than the presumed customer base.

    In other words, CBP One is an app made for CBP, not migrants. It should not, according to Miller’s guidance, be used as a replacement for human-to-human interaction. But that’s precisely how it is used today.

    Some closing thoughts

    This is a story about the most powerful mobile app in the world, and why it shouldn’t be.

    It’s strange and grandiose to put it that way, I know. But think of how, under Title 42, this app was the way migrants could claim exemption from being expelled from the border on the grounds of being potentially diseased.

    They did so on the basis of meeting certain “vulnerability criteria”– literally, there was a list in the app, and migrants were required to attest that they personally, and/or their family members, fit one or more of those criteria (which, ironically, included physical illness).

    Imagine having to tell an app that you fear for your and your family’s safety where you are, so that hopefully some human somewhere will see it and decide to help you. Then imagine not being able to.

    There’s no list of vulnerability criteria in the app now, because Title 42 is no longer in effect. Which is good, because it means you won’t be summarily dismissed from the border on the grounds that you might be diseased. But also bad, because at this moment the app is functionally the only way to request asylum under the Biden administration’s Circumvention of Lawful Pathways rule.

    No one has to look you in the face to tell you that your misery doesn’t count, that your suffering isn’t great enough, to even give you a chance at finding a safe place to just live your life. Work, pay taxes, have kids, send them to school– just like everyone else.

    And this app won’t let you tell them how badly you need it. And this app won’t let you tell them you deserve it, just as much as anyone else. And this app won’t let you tell them it’s your right, even though it is.

    This app won’t let you tell them anything, for that matter. It just lets you give them something– your personal details, your family history, even the shape of your face. What will they give in return?

    Post-script

    Let’s not forget that the literal Lady of Liberty was, and is, an eternal advocate for asylum seekers:

    Not like the brazen giant of Greek fame,
    With conquering limbs astride from land to land;
    Here at our sea-washed, sunset gates shall stand
    A mighty woman with a torch, whose flame
    Is the imprisoned lightning, and her name
    Mother of Exiles. From her beacon-hand
    Glows world-wide welcome; her mild eyes command
    The air-bridged harbor that twin cities frame.
    “Keep, ancient lands, your storied pomp!” cries she
    With silent lips. “Give me your tired, your poor,
    Your huddled masses yearning to breathe free,
    The wretched refuse of your teeming shore.
    Send these, the homeless, tempest-tost to me,
    I lift my lamp beside the golden door!”

    The New Colossus

    I searched for “The New Colossus” on the DHS site, and found a single link: to Emma, the virtual assistant on the USCIS website.

    Emma is named for Emma Lazarus, who wrote the poem inscribed at the base of the Statue of Liberty about helping immigrants. Inspired by her namesake, our Emma can help you find the immigration information you need.

    Look at the happy smiling people with questions for Emma that aren’t “How can I just enter the country safely with my family, find a job, make a home, and live the so-called American Dream?”


    Booth babes

    Booth babes published on 2 Comments on Booth babes

    Gizmodo takes ’em on:

    At CES last week, in addition to all the gear and gadgets, there was something else on display: women. As with many trade shows—especially ones aimed at a male audience—CES was rife with booth babes. Yet when the BBC ran a story on the practice of hiring scantily-clad models to stand around booths and draw stares from wandering men, it found an interesting defender: Consumer Electronics Association president and CEO Gary Shapiro, the guy who puts on the biggest electronics trade show in the USA.

    “Well, sometimes it is a little old school, but it does work,” Shapiro tells the BBC. “People naturally want to go towards what they consider pretty. So your effort to try to get a story based on booth babes, which is decreasing rather rapidly in the industry, and say that it’s somehow sexism imbalancing, it’s cute but it’s frankly irrelevant in my view.”

    Cute? Irrelevant? “Imbalancing?” (Is that even a word?) I’m sorry. Would you care to try again, Gary?

    The reason his answer is so bothersome is because as the head of the CEA he is, in a very real sense, speaking for all of us in the technology industry. And that Mad Men bullshit doesn’t represent who we are as an industry anymore, and it certainly doesn’t represent what we should aspire to. Technology is about the future, and this attitude is from the past. Shapiro needs to retract those dismissive remarks. And if he’s smart, he’ll do more than simply that. He’ll get ahead of it. He’ll become the example of what to do, rather than what not.

    There are two issues at play here. First, there’s the gender issue. Women are under-represented in the tech sector. And while there are a thousand theories why that is, the one thing that is clear is that they aren’t underrepresented in society, and by extension, the marketplace. The argument that says CES should be geared towards men because men buy the most electronics ignores that women like gadgets too. If the industry keeps ignoring women in order to market towards men, it’s going to lose sales. If you can create a gadget that women like just as much as men (hello, iPhone) you have a hit on your hands. So why would you want to do anything that might discourage women from showing up? (And it’s abundantly clear that some women certainly are off-put by booth babes.) Why wouldn’t you want to know what a key demographic thinks of your product before it goes on sale?

    But the second issue is arguably more important. It’s the cluelessness. To demean the concerns about booth babes as “cute” and “irrelevant” shows a huge disconnect with, I dunno… this century. The drumbeat against booth babes grows louder every year. It isn’t going away, and will only get bigger. Other trade shows are at least addressing it, and the CEA should do the same before it finds 60 Minutes shoving a camera in Shapiro’s mug.

    And this is right on the heels of a stereotype-debunking study that suggests women are more avid consumers of technology than men:

    Women are more likely than men to purchase tablets, laptops and smartphones – three out of the four top consumer electronics categories, according to a new study.

    Parks Associates asked 2,000 consumers, ages 18 and older, about their buying habits in the consumer tech space. The study, which was conducted in late 2011, asked men and women which products they intend to buy before January 1, 2012. Retailer HSN announced the findings on Monday at the 2012 Consumer Electronics Show.

    Women expressed more interest in tablets (18%), laptops (20%) and smartphones (20%). Only 15% of men planned to buy a tablet, while 14% sought a laptop and 17% intended to buy a smartphone. The only category in which men surpassed female interest was flat screen LCD TVs, with men (19%) favoring the sets over women (17%).

    “The tech industry has long been dominated by men — even at CES — but women are really the powerhouse in the household driving purchase decisions,” Jill Braff, executive VP of digital commerce for HSN, told Mashable. “Women are highly engaged with the latest and greatest gadgets and technology.”

    PAX, the convention organized by Penny Arcade… founders? Authors? Guys? Jerry Holkins and Mike Krahulik, banned booth babes in 2010, while E3 (the Electronic Entertainment Expo) appears to celebrate them.

    Just speaking for myself… there may be more efficient ways to drive me away from wanting to learn more about (let alone buy) a product than having a “spokesmodel” promote it, but that one is surely near the top of the list.

    Google Plus

    Google Plus published on No Comments on Google Plus

    Yes, I have been seduced. Feel free to add me if you have as well, and would like another person in whatever circle is deemed most appropriate.