
Down the patient portal: the world of healthcare tech serving you data about you

Pictured: The image ChatGPT generated for this post.

The subject of patient engagement tools, especially patient portals, took up permanent residence in my head last January when my mother, a few months away from achieving octogenarian status, experienced a health event that would change both of our lives. When she came home from the hospital, suddenly she was no longer under 24-7 observation by hospital staff– she and I were on our own. 

Later I learned that the hospital has a patient portal app that could help manage some of our needs (not the personal chauffeur for Mom, sadly– that was still me), and suddenly it clicked—a mobile patient portal app could be a kind of tiny doctor that goes with you everywhere and is accessible at any time! The next thought, immediately, was “Wait, why don’t we all have that now?”

And thus began the rabbit-hole-diving—no, the portal-diving—into the research behind this post.

Patient engagement? Is that the prelude to patient marriage?

If you’re new to the idea of patient engagement in healthcare, let me break it down:

Patient engagement is the strategy of enabling patients to self-manage their healthcare needs, and patient engagement tools are online programs and services for patients to access on their own. This could include anything from tailored messages and reminders about their treatment plans and medications to educational resources to remote monitoring that tracks medication adherence.

(See also patient adherence, patient empowerment, patient autonomy, patient activation, patient experience…the terms have changed a bit over the years)

Patient portals are a patient engagement tool with a legal mandate: they are required to provide 1) access to electronic health records (EHRs), and 2) the ability to contact and correspond with the patient’s healthcare providers. However, patient portals may also include a host of other features– and often do, because they are patient engagement tools. And they come in mobile app format, so let the features flow!


Patient engagement has potential to advance patient empowerment, which the WHO defines as “a process through which people gain greater control over decisions and actions affecting their health”—the worthiest of goals, but strangely also very distant.

Discovering why requires taking a nice long walk through the current landscape of patient engagement, stopping off to learn what EHRs even are, what healthcare tech platforms are, what the law says they have to do, and the reality of what they are doing today, before pointing out some promising possibilities sprouting up and looking hopefully into the future.


Your medical records online, no CD-ROM required

An electronic health record (EHR)1 is a digital system for storing patient health data, intended primarily for use by healthcare providers and platforms. EHRs can contain data in the following categories:

  • Demographic Information: Name, age, sex, race, ethnicity, and sometimes socioeconomic data like marital status or occupation.
  • Medical History: Diagnoses, medications, allergies, immunizations, surgeries, family history, and previous visits.
  • Clinical Data: Test results, imaging reports, physician notes, vital signs, and treatment plans.
  • Billing and Insurance: Information about coverage, claims, and payment history.
  • Social Determinants of Health (SDOH): The non-medical factors that impact health, such as housing status, income, education, etc.
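
To make those categories concrete, here is a toy sketch of them as a single record type. This is purely illustrative, with hypothetical field names of my own invention, not any vendor’s actual schema (real EHR data models are far more granular):

```python
from dataclasses import dataclass, field

# A hypothetical, highly simplified sketch of the EHR categories above.
# Real EHR schemas are far more detailed and structured.
@dataclass
class EHRRecord:
    # Demographic information
    name: str
    age: int
    sex: str
    # Medical history
    diagnoses: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    allergies: list[str] = field(default_factory=list)
    # Clinical data
    test_results: dict[str, str] = field(default_factory=dict)
    # Billing and insurance
    insurance_plan: str = ""
    # Social determinants of health (SDOH)
    housing_status: str = ""

record = EHRRecord(name="Jane Doe", age=79, sex="F",
                   diagnoses=["hypertension"],
                   housing_status="stable")
print(record.diagnoses)  # ['hypertension']
```

Even this toy version hints at the breadth of the data involved: everything from lab values to housing status lives in the same record.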

In the context of a healthcare system like a hospital, EHR data is the central nervous system—it gets vital information in front of the people (doctors, nurses, clinicians) who need it to make decisions about a patient’s care.

Note: When the word “patient” is used here, that’s you—provided, of course, you’ve ever sought care from a healthcare system. Keep that in mind as we talk about who accesses EHR data and how it’s used, because that’s your data—your demographic info, your medical history, your clinical data, your billing and insurance information, and your social determinants of health (effectively, your life).

Empower Patients: Giving patients access to their health data is one of the core benefits of system interoperability. Patients are better able to seek second opinions and alternative treatments, download educational materials that can help with disease management, and access their own diagnoses and test results. They no longer need to hunt down records from multiple providers and remember when and where they sought treatment, which medications they’ve been prescribed, and the details of their treatment plans.
Key to this effort is providing this comprehensive data to patients through easy-to-use applications or web pages that also include an accurate history of the data’s source.

— Some big EHR vendor

Who gets their grubby hands on them, and why

Health records existed on paper before they were digitized, and once digitized they could be shared between healthcare systems according to the standards set in place by HIPAA, using Health Information Exchanges (HIEs)2 set up by the Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology– the ASTP/ONC, for those of us who hate pausing to take a breath in the middle of a name.

But patients didn’t get meaningful access to their EHRs until 2014, with the implementation of the appropriately named Meaningful Use Stage 2 of the HITECH (Health Information Technology for Economic and Clinical Health) Act, proposed in 2012.

The clinical information I mentioned above—diagnoses, allergies, test results, and so on—isn’t the entire set of data in an EHR, nor are clinical purposes the only reason that EHR data is accessed. The information shared via HIEs is aggregated from multiple EHRs and providers to facilitate interoperability (which we’ll get to later) and improve care coordination across systems.

Payers (insurance companies, Medicare, Medicaid) access EHR data to assess coverage, process claims, and conduct risk assessments. Public health agencies access EHR data according to health data reporting standards (including international health data reporting standards, which means the WHO, from which the U.S. is withdrawing, but not until January of 2026).

That’s a lot of entities, but a few are especially relevant here:

  • EHR tech platforms don’t access EHR data per se—rather, they provide EHRs for use by healthcare organizations. They’re the OG accessors, and they also provide software used to manage that data such as dashboards, reporting modules, payroll, human resources, risk management and compliance, and of course, patient engagement.
  • Analytics platforms don’t replace EHR platforms, so much as sit on top of them. An analytics platform integrates with the EHR platform, taking the (de-identified) data in EHRs and aggregating it, drawing insights that apply broadly and inform healthcare systems from a top-down, population-level, administrative perspective. You can think of EHR platforms as handling healthcare in the here and now, whereas analytics platforms look toward the future.
  • Government and regulatory bodies that I mentioned above, including the Department of Health and Human Services (HHS) division the ONC, which became the ASTP/ONC3 in July 2024.
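
To illustrate the EHR-vs-analytics distinction with a toy example (hypothetical, already de-identified data, not any platform’s actual pipeline): an analytics layer aggregates across many records to answer population-level questions that no single patient chart can.

```python
from collections import Counter

# Hypothetical, de-identified records an analytics platform might
# aggregate: no names or direct identifiers, just coarse attributes.
records = [
    {"zip3": "537", "diagnosis": "diabetes"},
    {"zip3": "537", "diagnosis": "hypertension"},
    {"zip3": "606", "diagnosis": "diabetes"},
    {"zip3": "537", "diagnosis": "diabetes"},
]

# Population-level view: diabetes counts by 3-digit ZIP prefix.
diabetes_by_area = Counter(
    r["zip3"] for r in records if r["diagnosis"] == "diabetes"
)
print(diabetes_by_area)  # Counter({'537': 2, '606': 1})
```

An EHR platform cares about the individual rows; the analytics platform cares about the shape of the whole table.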

But the legislation with the biggest impact on patient engagement came when the ONC was still the ONC, in 2016, with the 21st Century Cures Act.

The Cures Act asserted a goal of offering patients access to their electronic health information in a single, longitudinal format that is easy to understand, secure, and can be updated automatically. To support this, the act promoted the adoption of FHIR (Fast Healthcare Interoperability Resources), a modern data exchange standard that enables seamless, real-time sharing of structured health data across different systems, including EHRs, patient apps, and third-party services.
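
To make FHIR a little more concrete: a FHIR “resource” is just structured JSON (or XML) with a declared resourceType, typically fetched from an EHR’s REST endpoint. Here is a minimal, hand-written Patient resource, trimmed to a few fields for illustration, and how an app might read it:

```python
import json

# A minimal, hand-written example of a FHIR R4 Patient resource.
# In practice an app would fetch this from an EHR's FHIR endpoint,
# e.g. GET https://ehr.example.com/fhir/Patient/123 (hypothetical URL).
patient_json = """
{
  "resourceType": "Patient",
  "id": "123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1945-06-01"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"

full_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
print(full_name)  # Jane Doe
```

The point of the standard is that any conforming system, an EHR, a patient app, an analytics platform, can parse this same structure without a custom integration.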

So let’s look at a few of those, already.

Gimme 5

Healthcare platforms vary wildly beyond the distinction I made between EHR and analytics platforms. In all cases, it boils down to how a platform uses EHR data—and in the case of EHR platforms, the extent to which they allow others to use that data (a big part of interoperability, which we’ll get to later).

These differences naturally determine how their patient engagement tools are going to work, so it’s necessary to take a closer look at some specific examples of those tools in the context of the entities that provide them.

For that reason I selected a sampling of five of the vendors selling those tools: a major EHR vendor, a significantly smaller EHR vendor, a platform of patient journey and educational tools, and two very different analytics vendors.

First up, let’s talk about the gorilla in the room, because he’s going to dominate a lot of the discussion that follows.


America’s biggest: Epic Systems

Epic provides a patient portal, MyChart, that links to Epic’s electronic health record (EHR) system. It’s designed for seamless patient-provider communication within the Epic ecosystem, and allows patients to view health records, schedule appointments, message providers, and manage prescriptions. It was augmented recently to include telehealth integration, patient-reported outcomes tracking, and AI-driven health insights.

Messaging on Patient Engagement:

Epic markets MyChart as the gold standard in patient engagement, emphasizing its ability to enhance patient-provider communication and streamline access to records. However, its definition of interoperability is largely confined to the Epic ecosystem, making true cross-platform access challenging.

Epic’s patient engagement strategy reinforces data centralization under its platform. Its history of opposing federal interoperability mandates and allegations of information blocking combined with its public-facing support of interoperability sends slightly mixed messages.

The company has been criticized for making data sharing more difficult when external platforms (see the entry below on Particle Health) attempt to access its network.

Counterpoint: Epic’s Safety Net Initiatives

I can’t portray Epic as the all-around Big Bad when they go and do stuff like using SDoH data insights to influence policy change4 and offer Safety Net program5 tools. However, these tools don’t appear to be integrated6 into MyChart, although MyChart does have a feature to get estimates for cost of care and what’s covered by insurance vs. self-pay. It also allows patients to “provide financial information and request assistance with paying your medical bills,” which is vague but sounds promising.

Next up is more of a refined, artisanal EHR platform that isn’t Epic’s biggest fan.7


Cloud-based EHR and practice management platform: AthenaHealth

AthenaHealth targets smaller and mid-sized healthcare providers seeking a more flexible, scalable alternative to Epic.

It’s marketed as a seamless patient experience platform, emphasizing “a connected patient experience across the care journey,” providing scheduling, secure messaging, and telehealth integration, and automated reminders and billing/payment tools to streamline administrative processes.

Messaging on Patient Engagement:

AthenaHealth positions itself as a patient-first EHR vendor, promoting “patient loyalty” as a key benefit of its engagement tools, while warning of “patient consumerism”—indicating concern over patients becoming too independent in their healthcare choices. AthenaHealth advocates for interoperability8 but has been criticized9 for remaining tied to its own system structure. While it has presented itself as more open than Epic, it still operates within its own ecosystem, limiting cross-platform functionality.

Moving on from EHRs, the next stop is a school for patients.


Clinical decision support (CDS) and patient education: Wolters Kluwer

Wolters Kluwer Health is expanding into patient education and engagement through digital tools.

UpToDate10 is a suite of clinical decision support (CDS) tools for providers, with patient education resources tailored to treatment plans and AI-powered patient journey tools that offer personalized treatment explanations by integrating into EHRs.11

Messaging on Patient Engagement:


UpToDate is marketed as a trustworthy, evidence-based resource for both providers and patients. Unlike Epic or AthenaHealth, Wolters Kluwer’s approach to engagement is more about education than direct patient interaction.

Wolters Kluwer emphasizes its role in enhancing shared decision-making by ensuring patients and clinicians have access to the same information. They have also been a strong advocate for patient empowerment. Their messaging emphasizes the need for better tools, education, and data access to facilitate truly patient-centered care.

Counterpoint: Wolters Kluwer’s positioning on patient empowerment varies, depending on whether they’re talking to healthcare systems or sharing the perspective of one doctor12 advocating for patient empowerment.

Their white paper on the “patient empowerment framework”13 includes this curious statement:

There are many aspects to patient empowerment, but in general, understanding of this concept is fragmented. There are not agreed-upon definitions for terms like activation or engagement. And there is no comprehensive understanding of how these various aspects fit together.

So…I guess we can just define these terms however we want? Because UpToDate referencing how “patient empowerment is a critical component to operating profitability in this new world” sounds to me less like patient empowerment than a money-making strategy. Not that turning a profit is a bad thing, but can we have one term that’s about patients, not turning them into products?

I suspect that this first analytics platform won’t have an answer, but nobody’s perfect– especially this one.


Data Aggregation and Patient Analytics: Particle Health

Particle’s engagement tools include data retrieval services that allow providers to query national HIEs, and a Record Locator Service (RLS) that predicts where patients have received care based on historical data.

Messaging on Patient Engagement:

Particle Health promotes its Record Locator Service (RLS)14 as a way to track patients’ movements between healthcare providers, even promising to give clients a notification “when a patient receives a high-value procedure out of network,” so that they can “ensure high-value procedures are scheduled in-network.”

In addition to making the words “patient journey” in its mission statement (“Drive strategic growth with patient journey insights”) sound rather ominous, this level of patient tracking sounds more like a patient stalking tool than an engagement platform. Interoperability is one thing, but sharing real-time insights into where patients have been just feels like it’s crossing a line.

Counterpoint: Particle does get credit for its challenge to information-blocking on the part of Epic, filing a federal antitrust lawsuit15 alleging that Epic used its dominance in the EHR market to cut off its own customers from being able to request data from Particle, impacting over 420,000 patients—they even created a dashboard16 showing which organizations were involved. I believe the suit is still ongoing, so it’s something to watch.

But for now, maybe there’s an analytics platform less inclined to follow you down dark alleys.


Data and Analytics: Health Catalyst17

Predictive, AI-driven patient engagement platform: UpFront

UpFront’s predictive analytics bring healthcare closer to the ideal of proactive, personalized care.

It uses psychographic segmentation and behavior modeling to influence patient choices:

  • Drives AI-powered outreach for scheduling, reminders, and follow-ups
  • Segments patients based on psychological and social factors
  • Categorizes patients based on real-time risk factors, including SDoH data
  • Proactively reaches out to high-risk patients before their conditions worsen
  • Adjusts outreach based on a patient’s engagement level, ensuring personalized interaction
  • Helps providers optimize follow-ups and interventions based on patient responses and historical data

Messaging on Patient Engagement:

UpFront promotes psychographic segmentation18 as “hyper-personalized patient engagement,” claiming to improve patient follow-through by understanding motivational drivers. It frames its AI-powered approach as increasing patient activation and reducing provider workload.

One potential benefit of psychographic segmentation is personalized patient engagement: by understanding patients’ attitudes, values, and lifestyles, healthcare providers can tailor communications and interventions to better align with individual motivations, potentially leading to improved health outcomes.

Counterpoint: There are, however, prominent criticisms of psychographic segmentation, such as:

  • Privacy Concerns:19 There are concerns about the ethical implications of collecting and utilizing detailed personal data for segmentation, including potential breaches of patient privacy and the risk of manipulating patient behavior without informed consent.
  • Data Profiling: There are concerns about how patient behaviors are categorized and acted upon; the focus on steering patients toward “desired actions” can blur the line between engagement and subtle coercion.
  • Risk of Overgeneralization:20 Assigning patients to broad psychographic categories may overlook individual nuances, leading to interventions that fail to address specific patient needs or circumstances.

If UpFront wants to be your health coach, Health Catalyst’s other patient engagement app is more about being your personal health assistant.


Automated patient engagement and communication platform: Twistle

Twistle improves adherence by meeting patients where they are—through familiar communication channels.

It takes the manual burden off healthcare providers by sending reminders, check-ins, and education materials to patients, using their EHR data and self-reported responses to adjust engagement over time. Other features:

  • Sends automated reminders for medication adherence, upcoming appointments, and follow-up care via multi-channel messaging (text, email, phone, app notifications)
  • Uses secure messaging to check in with patients post-discharge
  • Integrates with wearables and home monitoring devices, pulling in real-time patient data for more personalized outreach
  • Uses SDoH data to adjust engagement for patients flagged as high-risk due to economic instability
  • Provides care pathway guidance, helping patients stay on track with their treatment

Messaging on Patient Engagement

Twistle aims to reduce provider workload through automated patient communication and help patients follow their care plans with automated messaging.

A real-world case study21 focuses on Providence Health’s effort to simplify treatment for total joint replacement care, showcasing how Twistle’s automated communication and reminders reduced complications and improved patient adherence to pre- and post-operative care plans. Twistle emphasizes that its platform allows for seamless digital engagement, helping patients stay informed and compliant with their treatment, ultimately leading to better outcomes and cost reductions.

While all of these platforms access and use EHR data, their levels of access to it vary. Interoperability means cooperation, and some kids want to take their ball and go home. 


Lack of interoperability: A fancy term for “Why can’t my doctor see my records?”

Information blocking, the art of making data hard to share22

Information blocking in healthcare refers to practices that unreasonably prevent or limit the sharing, access, or use of electronic health information (EHI) among patients, providers, or health systems, often for competitive or financial reasons.

How freely data is shared between different health systems determines how useful patient engagement tools can be. Interoperability, the ability of different healthcare information systems and applications to access, exchange, integrate, and cooperatively use data in a coordinated manner across organizational, regional, and national boundaries, ensures that patient information can be shared seamlessly among providers, labs, pharmacies, and other stakeholders to improve care quality, efficiency, and patient outcomes.

The 21st Century Cures Act and HTI-1 Final Rule were supposed to stop hospitals and EHR vendors from blocking access to patient data. But instead of embracing real interoperability, vendors found loopholes, such as:

  • Charging high fees for data access, making it financially unfeasible for competitors to build better patient engagement tools
  • Requiring custom-built API integrations for every new connection, forcing external developers to spend months negotiating and developing integrations that should be standardized.
  • Allowing hospitals to delay lab result releases until after a doctor manually reviews them, even if laws require immediate electronic availability

Internal interoperability works great—within that hospital system. But if a patient moves to another provider? Suddenly, data transfer becomes a bureaucratic mess. For example:

Vendor lock-in/EHR monopoly

Epic, Oracle (Cerner), and Meditech together control 60% of U.S. hospital EHRs. If a hospital uses Epic, it uses MyChart. If it uses Oracle, it’s HealtheLife. If it’s on Meditech, it’s Meditech Expanse.

Interoperability between different EHR systems remains limited, often resulting in hospitals remaining dependent on their existing systems. Contributing factors include:

  • Lack of Standardization: The absence of uniform standards across EHR systems leads to inconsistent data formats and communication challenges, hindering seamless data exchange.
  • Technical Complexity: The use of multiple incompatible EHR systems creates data silos and causes duplication of patient records, some of which are incomplete or inaccurate.

Patient engagement as a retention tool

To be fair, it’s just a fact that EHRs were developed for providers, not patients. That doesn’t seem quite as weird if you remember that when EHR adoption surged due to HITECH Act incentives (2009–2015), vendors focused first on helping hospitals meet Meaningful Use requirements (i.e., digitizing patient records and improving provider efficiency).

The primary customers for EHR systems are hospitals, not patients, so vendors designed tools that optimized billing, scheduling, and regulatory compliance rather than patient-facing features. Insofar as they thought about patient portals at all, those were minimal add-ons, basic tools.

But here’s where I stop being fair. Next, hospitals began buying third-party engagement tools to supplement clunky EHR portals, signaling that EHR vendors were failing to meet patient expectations. Rather than improve interoperability, EHR vendors responded by building their own engagement tools, and now we have competition– to best serve the needs of hospitals, that is, not patients.


Learning more about EHRs, and EHR providers specifically, caused me to recall that scene in Miracle on 34th Street (my mother’s favorite movie) where Kris Kringle (aka Santa Claus) is working at Macy’s, listening to children’s wishes, when a woman asks him where to find a particular item that Macy’s doesn’t carry, distraught because the item will be a Christmas gift for her son. Kris informs her about another, competing store where she can find the gift.

This causes the department head, Mr. Shellhammer, to become incensed—until he receives a deluge of letters and phone calls from customers saying how much they appreciate the Macy’s priority of helping customers over direct profit. Mr. Shellhammer immediately instructs all sales assistants to follow Kris’s lead.

But in this case, EHR Macy’s actually makes the gift, and the other stores all make their own versions of the gift, and there’s not a lot Mr. ONC Shellhammer—or Mrs. Healthcare Provider, the customer—can do about it. She can shop at Macy’s or another store, but she has a contract with Macy’s (or something—stick with me here) to buy their gifts, so it’s extremely difficult to go to another store. And her Patient son gets (almost) no say in the matter.

Okay, so it’s not a perfect analogy—we left out the parts that weren’t in the original story, such as the gift expert analysts who go around telling stores how their gifts could work better, but who also make their own gifts.

Analytics platforms: using AI to give your EHR a workout

While EHR vendors continue to dominate patient engagement through their own proprietary tools, analytics platforms are emerging as a workaround, leveraging interoperability and AI to provide a more holistic, patient-centered approach to engagement. These platforms are built to ingest, analyze, and act on patient data across multiple sources, rather than restricting data within a single EHR ecosystem.

Analytics platforms are leveraging AI to go beyond simple patient record management and actively shape engagement strategies based on real-time data, predictive modeling, and personalized interventions. These tools are helping shift patient engagement from a reactive process (waiting for patients to seek care) to a proactive model that anticipates needs and removes barriers to access.

  • Breaking Down EHR Silos (Health Catalyst, Arcadia) – AI-driven analytics platforms integrate data from multiple EHRs, insurance claims, HIEs, and even social determinants of health (SDoH) sources to create a comprehensive patient profile that traditional EHR patient portals cannot provide.
  • Predictive Modeling for Preventive Care (Arcadia, Health Catalyst) – AI-driven platforms assess millions of data points to identify high-risk patients before they require costly interventions, enabling earlier engagement and better outcomes.
  • Automated Patient Navigation and Outreach (UpFront, Twistle by Health Catalyst) – AI-enhanced platforms analyze patient history, social determinants, and engagement patterns to determine the most effective outreach method—whether it’s text reminders, digital education, or community resource referrals.
  • AI-Driven Virtual Care Coordination (Wolters Kluwer, IBM Watson Health) – AI can recommend follow-up appointments, coordinate referrals between specialists, and track adherence to care plans without requiring constant human oversight.
  • Real-Time Insights for Patient Adherence (Twistle by Health Catalyst, Wellframe) – AI can monitor which patients are engaging with their care plans, flagging those at risk of non-adherence and providing tailored interventions to improve compliance.
  • Bias Detection and Personalized Equity Adjustments (Epic SDoH Analytics, Google Health AI) – AI models can analyze how different populations receive care, identifying disparities and ensuring more equitable engagement strategies tailored to historically underserved communities.

Don’t hold back: moving beyond EHR-restricted patient engagement

As analytics platforms continue to expand their capabilities, they challenge the traditional role of EHR vendors in controlling patient engagement. While EHRs will always be necessary for core medical documentation, their ability to drive meaningful, proactive engagement remains limited. Analytics platforms are filling that gap by:

  • Enhancing interoperability to create unified, patient-centered data systems.
  • Using AI-driven insights to tailor engagement at the individual level.
  • Expanding patient access beyond clinical settings, incorporating SDoH and predictive health modeling.

By shifting the focus from reactive EHR-based portals to proactive analytics-driven engagement, these platforms are redefining how and when patients interact with their health data—offering a glimpse at what true patient empowerment could look like.


P.S. Oh yeah, remember that gift from Macy’s?

Turns out that the gift Mrs. Healthcare Provider was trying to buy was a biography about the boy, and one that he, himself, was only recently allowed to read.

But read it he did, and the knowledge empowered him to write his own book– an autobiography, this time.

Dear reader– in a stunning twist you never saw coming, that little boy is you.


Biography or autobiography, people are going to keep copying sections of it for different purposes. But it’s still yours.

Get it, Check it, Use it! Easy access to your health records puts you in control of decisions regarding your health and well-being.

— The ASTP/ONC

In the end, we must turn away from the abstract and distant disputes between entities who have no idea who we are, and yet handle our personal information daily. It will probably always feel like an invasion of privacy, no matter how many safeguards are in place. Equally, at some point we have to reckon with the fact that we don’t choose the patient engagement tools we use—Mrs. Healthcare Provider does, so that’s why Macy’s caters to her, not us.  

But you know what? We do get to look at what we’ve been given and decide that it’s not good enough, and some of those folks are listening. So my next post will be much less structured and much shorter (it had better be!), but much louder. I plan to yell about some things– perhaps you’ll join me?

  1. https://media.market.us/electronic-health-records-statistics/#:~:text=As%20of%202021%2C%20approximately%2093,of%20meeting%20meaningful%20use%20criteria. ↩︎
  2. https://www.healthit.gov/topic/health-it-and-health-information-exchange-basics/what-hie ↩︎
  3. https://www.thinkbrg.com/insights/publications/hss-announces-reorganization-astp-onc/ ↩︎
  4. https://www.epicshare.org/perspectives/using-sdoh-data-to-achieve-policy-change ↩︎
  5. https://www.epic.com/software/safety-net/ ↩︎
  6. https://www.mychart.org/Features ↩︎
  7. https://www.healthcareitnews.com/news/can-epic-athenhealth-play-nice ↩︎
  8. https://www.athenahealth.com/resources/blog/interoperability-interoperability-obstacles ↩︎
  9. https://behavehealth.com/blog/2023/7/4/athenahealth-causes-big-problems-for-behavioral-health-group-practices-and-outpatient-programs ↩︎
  10. https://www.wolterskluwer.com/en/solutions/uptodate/ ↩︎
  11. https://www.wolterskluwer.com/en/solutions/uptodate/about/ehr-integration ↩︎
  12. https://www.wolterskluwer.com/en/expert-insights/why-patient-empowerment-matters ↩︎
  13. https://www.wolterskluwer.com/en/expert-insights/the-patient-empowerment-framework ↩︎
  14. https://www.particlehealth.com/particle-navigator ↩︎
  15. https://www.particlehealth.com/blog/epic-systems-stranglehold-on-u-s-medical-records-harms-patient-care-lawsuit ↩︎
  16. https://lookerstudio.google.com/u/0/reporting/7e67d31c-67ba-4e53-8963-3e544f7b6360/page/p_uq5np2rznd?s=gJzJtksjC5k ↩︎
  17. Full disclosure– HC is my former employer, though all comments (and mistakes) here are exclusively mine. ↩︎
  18. https://upfronthealthcare.com/resources/patient-activation/how-psychographic-segmentation-can-help-transform-healthcare ↩︎
  19. https://fastercapital.com/topics/ethical-considerations-and-privacy-concerns-in-psychographic-segmentation.html?utm_source=chatgpt.com ↩︎
  20. https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2783&context=etd&utm_source=chatgpt.com ↩︎
  21. https://hcatwebsite.blob.core.windows.net/success-stories/CaseStudy_Providence_Total-Joint-Replacement_Twistle-by-HCAT.pdf ↩︎
  22. https://techcrunch.com/2012/06/04/mr-obama-tear-down-this-walled-garden/ ↩︎

I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means published on 1 Comment on I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

Quick TOC:

First up, I’m letting you in on a somewhat rambling (but edited) conversation that I had with ChatGPT about its own limitations in healthcare, and the limitations of those designing and using it.

Then I review the research on where things currently stand with empathy and AI, and why we don’t have AI doctors managing our pain (and shouldn’t want them to).

Lastly, I delve into the essence of pain, the essence of AI, and the essence of empathy, and somehow manage to tie them all together.

The Chat(GPT)

It’s almost like we’ve created a mirror, but now we’re unsure whether we’re looking at a reflection or something with its own life. We’ve spent so much time trying to replicate human intelligence and behavior in machines, yet when it starts to resemble us too much, it challenges the very things we thought made us unique. It raises big questions about identity, consciousness, and what it means to be “human.”

The irony is definitely there—it’s almost as if we’re both proud of our creations and wary of them.

— ChatGPT, last night

Without delving too deeply into what exactly ChatGPT “thinks” about discussing AI as if it were human, I challenged the use of “we,” because it conflates the humans who are actively involved in designing generative AI LLMs with…well, the rest of us: the overwhelming majority of humans who speak to AI chatbots, and marvel at their responses, but have no idea how those responses are actually generated.

I went on to detail my concerns about the potential to use AI for purposes of manipulation, and ChatGPT responded:

One of the scariest parts is that it’s not always obvious when you’re interacting with AI, especially if it’s designed to mimic human behavior. People could end up making decisions or forming beliefs based on information they don’t realize is coming from an algorithm, rather than a person with their own interests or expertise.

AI governance is a huge topic, I pointed out, but it’s more concerned with meeting standards for accuracy and fairness than with manipulation per se. However, there are many articles discussing empathy and AI broadly, such as a recent one that praised “empathic AI” in medicine because it can retain all the powers of appearing to sympathize with a patient about their diagnosis, but is inexhaustible and unbiased. That rests on the assumption that patients don’t know, or possibly don’t care, that the AI is actually AI.

That’s a really thought-provoking situation you’re describing.

(It must be said that ChatGPT is adept at intellectual flattery.)

The idea of empathic AI certainly seems appealing in some ways—AI could provide endless emotional support, be non-judgmental, and maintain a level of consistency that humans can’t always offer. But the underlying assumption that the patient wouldn’t know or care that it’s AI raises some significant ethical concerns, especially regarding transparency and informed consent.

If patients are interacting with an AI that appears empathic and comforting, but they aren’t aware it’s not human, it could lead to an unspoken form of manipulation.

I think, I said, that you can and should keep the knowledge that they’re interacting with AI front of mind for patients. As in, every time they interact with AI, they should be explicitly informed that that is the context in which this conversation is taking place.

Transparency should be a fundamental part of any AI interaction, especially in sensitive contexts like healthcare. Patients should always know when they’re speaking to an AI so they can make informed decisions about how much trust to place in the interaction, as well as understand its limitations. This is crucial for maintaining autonomy and preventing manipulation, especially when the stakes involve personal health and emotional well-being.

That brings to mind a specific situation of a patient requiring a diagnosis, I said– cases of pain, and its management. You kind of have to rely on the patient’s self-report regarding the degree of pain they’re experiencing. And when a human asks that question, they can reference their own experiences of pain, but AI obviously can’t. So I don’t know how good it would be at advising and treating a patient whose primary issue is pain.

Pain is inherently subjective, and as you pointed out, it’s often based on self-reporting from the patient. Humans, especially healthcare providers, are able to make a certain level of judgment or empathy based on their own experiences with pain, which helps them connect with patients and assess their symptoms more holistically.

AI, on the other hand, doesn’t have the ability to experience pain or any other physical sensations, so it can’t truly understand the patient’s experience in that way. It can rely on objective data—like pain scales, physiological responses, or medical history—but it can’t truly interpret the emotional or personal weight that the patient attaches to their pain. This makes it difficult for AI to offer the same level of care in pain management, where empathy, trust, and understanding are often just as important as medical knowledge.

In a case where pain is the primary issue, AI might be able to guide the patient through standard pain assessment methods, ask the right questions to understand the severity or type of pain, and even recommend basic interventions or referrals to a healthcare provider. However, it could fall short in offering the kind of nuanced, compassionate care that human practitioners can provide—especially in complex, emotional cases of chronic pain or when the pain doesn’t have a clear medical cause.

That said, AI could be incredibly useful as a support tool for both patients and providers in these cases. It could help gather data, monitor trends over time, and suggest potential treatments based on evidence-based practices. But for diagnosis and personalized treatment, especially when pain management requires emotional insight and patient-provider rapport, it seems like human involvement will still be necessary.

What if an AI model could be trained specifically on interactions between doctors and patients, I asked– as many as you could gather, in which they discuss pain issues? The AI could be tailored to that– a sort of “pain bot,” if you will. It would have this expert understanding, of what pain is like that would actually be informed by so many more descriptions of experiences of pain than most doctors– most people— ever have.

Because a patient experiencing pain has to convey it to their doctor in as thorough and useful terms as possible. There are definitely issues of privacy there, but if you had an AI “listening” to vast numbers of those exchanges, it could become an expert on what it’s like to feel pain without ever having felt it.

For example, it could help patients more accurately describe their pain or ask the right follow-up questions, maybe suggesting phrases or terms that would help a healthcare provider understand the full extent of their experience. Additionally, it could tailor recommendations for pain management based on a deep analysis of what has worked for other patients in similar situations.

However, there would still be challenges in making sure the AI doesn’t oversimplify the patient’s experience, which is so deeply personal. While the AI could certainly become an expert in understanding how pain is described, there’s a danger that it might reduce pain to a set of patterns or descriptors that misses the complexity of each individual case.

Still, if designed thoughtfully, this type of AI could serve as an incredibly helpful tool for both patients and doctors, improving communication and perhaps even helping to identify issues that might be missed in a typical consultation. Do you think this approach would be more effective in a supporting role to human care, or do you think AI could potentially step in as a primary consultant in pain management situations?

I think it’s a damn sight better than that series of smiley faces going from 1 to 10, “joyously happy” to “about to die.”

I completely agree! The typical pain scale with smiley faces or numbers from one to ten is often too simplistic, especially for chronic pain or pain that’s difficult to describe. An AI trained on real interactions between doctors and patients could offer a more nuanced understanding of pain. It could help patients express themselves more accurately and thoroughly. Plus, it could offer more meaningful follow-up questions, considering things like duration, intensity, or how pain interferes with activities, which could give doctors a more complete picture of the patient’s condition.

That said, it’d still need to work alongside human doctors who can provide that emotional support and intuition that an AI can’t replicate. But as a tool for better communication and diagnosis, it could be a huge improvement over current methods.

Do you think this approach could shift how we view pain in medical settings, or is it still too far removed from the deeply human aspects of pain management?

I’ve had a lot of experiences of going to the emergency room lately, and it occurs to me that it could be a lot more helpful– assuming that the patient is coherent and verbal when they come to the ER– to introduce them to a pain bot to inquire about their pain, as opposed to being asked about rating it from 1 to 10.

In high-pressure environments like emergency rooms, where time is often limited and the staff is spread thin, having an AI that can take an initial, detailed pain assessment could help streamline the process. In a busy ER setting, where every minute counts, this could save valuable time that would otherwise be spent asking standard questions or trying to gauge the severity of pain based solely on a brief number scale.

Of course, there’s the issue of when to transition from AI to human care. While the bot could gather a lot of helpful information, human judgment would still be necessary to interpret the results in the broader context of the patient’s health and medical history. But for an initial intake or triage process, this kind of AI could be a game-changer.

Now that you bring that up, I see the problem: it might be viewed as something that gets in the way of providing necessary care, because the AI steps in to talk about pain before any humans get to learn what the symptoms are. Probably not the best way to arrange things.

One potential way to integrate the AI without disrupting the human assessment process might be for it to function as a complementary tool rather than the first point of contact. For example, it could work alongside triage nurses or ER staff, helping to gather initial information while the human healthcare providers are still present and actively engaged with the patient. The AI could assist by structuring the conversation and asking relevant follow-up questions that might help the human staff better understand the pain without overshadowing their role in making that initial connection with the patient.

Alternatively, if the AI were used in a follow-up role—after the patient has been seen and evaluated by a doctor or nurse—it could help ensure that ongoing pain management is addressed effectively, perhaps even tailoring recommendations for pain relief or additional care based on what the patient shares during their interaction with the bot. That way, the AI could still provide value without interfering too much with the more personal, direct care that’s so essential in the ER.

One thing I have noticed in the ER is that everything is so fast-paced that those smiley faces don’t even get presented. So I’m imagining an AI bot stepping in during the initial process, when the IV is being set up and so on. What if you had an AI asking the patient about their pain during that process, in the ER personnel’s presence, so that the people taking blood pressure, setting up the IV, etc. can get all that done while the AI handles the questions about pain?

In the fast-paced environment of the ER, where staff is juggling multiple tasks at once, having the AI step in during the initial setup, like while IV fluids are being administered or blood pressure is being taken, could be a way to ensure the patient’s pain is addressed without interrupting or slowing down the necessary medical procedures.

If the AI is integrated into that process, it could gather useful information about the patient’s pain while the medical team is focused on setting up and performing other tasks. This allows the medical team to focus on their work while the AI collects pain-related data, which could then be used to prioritize the patient’s care and treatment more effectively.

Once the AI has gathered the information, it could feed that data directly to the medical team, giving them a more complete picture of the patient’s condition without interrupting the flow of care. This kind of multitasking would reduce the burden on the staff, ensuring that the patient’s pain is addressed more comprehensively and quickly.

As you mentioned, though, this approach would require careful integration. The AI should not take over or distract from the human staff’s efforts, but rather work alongside them in a way that complements the tasks already in progress. It would also need to be clear to patients that the AI is gathering information for their benefit, so they don’t feel like they’re being ignored or reduced to data.
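Stepping outside the transcript for a moment: the intake flow we batted around could be sketched as a simple structured questionnaire bot. Everything below– the dimensions, the wording, the summary format– is my own hypothetical illustration, not a real clinical instrument or any actual product.

```python
# Hypothetical sketch of a structured pain-intake "bot": it walks a patient
# through several descriptive dimensions (instead of a single 1-10 number)
# and emits a compact summary a clinician could skim. Not a real clinical tool.

PAIN_DIMENSIONS = [
    ("location", "Where does it hurt?"),
    ("quality", "How would you describe it (burning, stabbing, aching...)?"),
    ("duration", "How long has it been going on?"),
    ("intensity", "How strong is it right now, in your own words?"),
    ("interference", "What does the pain stop you from doing?"),
]

def run_intake(answer_fn):
    """Ask each question via answer_fn (e.g. input(), or a chat interface)
    and collect the responses keyed by dimension."""
    return {key: answer_fn(question) for key, question in PAIN_DIMENSIONS}

def summarize(responses):
    """Produce a one-line-per-dimension summary for the care team."""
    return "\n".join(f"{key.upper()}: {responses[key]}" for key, _ in PAIN_DIMENSIONS)

# Example run with canned answers standing in for a patient:
canned = {
    "Where does it hurt?": "lower back, radiating down the left leg",
    "How would you describe it (burning, stabbing, aching...)?": "stabbing",
    "How long has it been going on?": "three days",
    "How strong is it right now, in your own words?": "bad enough that I can't sleep",
    "What does the pain stop you from doing?": "sitting for more than a few minutes",
}
report = summarize(run_intake(lambda q: canned[q]))
print(report)
```

The point of the sketch is the shape of the thing: descriptive dimensions instead of one number, and a skimmable summary handed off to the humans who actually make the call.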

Okay, time for a reality check, I said. There’s no way this is a novel discussion about the subject matter, so let’s see what the research actually says.

What the research actually says (the discussion)

There are lots of ways that AI can recognize pain, including behavioral and neurophysiological methods.

For example, AI facial recognition is amazingly accurate, enabling detection of specific facial expressions correlated with pain. It can also recognize body postures such as a patient looking downward or leaning forward, which are often indicative of pain.

In terms of neurophysiology-based methods, electroencephalography (EEG) can track changes in the brain that act as “biomarkers” of pain, and other measurable nervous system activity, such as sweat gland response and heart rhythm, has been used to correlate with pain.
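To make the multimodal idea concrete, here’s a toy sketch of how such signals might be combined into a single pain estimate. The feature names, weights, and 0-to-1 normalization are all invented for illustration– real systems train models on data rather than hand-picking weights.

```python
# Toy fusion of behavioral and physiological "pain indicators" into one score.
# Feature names and weights are illustrative assumptions, not clinical values;
# actual research uses trained models over facial action units, EEG, EDA, etc.

# Each feature is assumed to arrive already normalized to 0.0-1.0.
WEIGHTS = {
    "facial_grimace": 0.35,    # facial-expression analysis
    "posture_guarding": 0.15,  # leaning forward / looking downward
    "eeg_marker": 0.25,        # EEG-derived "biomarker" activity
    "eda": 0.15,               # electrodermal (sweat gland) activity
    "hr_variability": 0.10,    # heart-rhythm changes
}

def pain_score(features):
    """Weighted average of whichever known indicators are present."""
    present = {k: v for k, v in features.items() if k in WEIGHTS}
    if not present:
        raise ValueError("no usable indicators")
    total_weight = sum(WEIGHTS[k] for k in present)
    return sum(WEIGHTS[k] * v for k, v in present.items()) / total_weight

print(round(pain_score({"facial_grimace": 0.8, "eda": 0.6}), 3))  # → 0.74
```

Missing signals simply drop out and the remaining weights are renormalized, which mirrors the practical reality that you rarely get every modality (EEG in particular) for every patient.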

It’s an understatement to say that patients do care whether they are evaluated medically by AI– they’re largely against it.

A survey by Pew found that 60% of Americans said they would be uncomfortable with a provider relying on AI in their healthcare to do things like diagnosing disease and recommending treatments. 38% said that AI used for such purposes would lead to better health outcomes for patients generally, 33% said that it would lead to worse outcomes, and 27% said it wouldn’t make much of a difference.

This definitely undercuts my naive assumptions about immediately putting “pain bots” in hospitals, since– in the short term, at least– no amount of work you put into making AI seem “empathic” is going to matter if a patient has already ruled out letting a bot evaluate them.

But let’s examine what it actually means to be “biased” against AI.

A study that evaluated the empathy and accuracy of ChatGPT responses to questions on the “AskDocs” subforum on Reddit found that AI responses to questions were deemed both more accurate and empathetic…by “a team of licensed health care professionals,” who may or may not have gone into it already knowing that at least some of the answers would be of AI origin. That doesn’t necessarily say anything about the perceptions of patients, whether the healthcare professionals knew about the AI or not.

A study from earlier this year found that “people felt more heard when they received an AI than a human message, but when they believed a message came from AI this made them feel less heard.” One of the researchers, Yidan Yin, concluded that the research “basically finds a bias against AI. It’s useful, but [participants in the study] don’t like it.”

It’s important to note that this research was not conducted in a medical context, but rather an attempt to answer the general question “Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?” Nan Jia, also on the research team, was quoted as saying “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities.”

This is an improvement over the “AskDocs study” to the extent that the people who were the targets were asked directly to evaluate the quality of AI responses. But I find this subheading, and the quote from Yin that follows, intriguing:

AI offers better emotional support

The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.

Yin explained, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating.”

These statements, along with the accusation of “bias,” strongly imply that the responsibility for people not trusting AI rests entirely on those who mistrust it.

I have trouble getting my head around the apparently objective statement that “AI offers better emotional support” if the effects of that emotional support immediately vanish the moment that it’s revealed to come from AI.

Imagine that you’re told that you’ll be getting two calls from two someones who are concerned about your well-being. You answer the first phone call and have an encouraging but kind of mundane conversation with the caller, who stumbles over their words but ultimately impresses on you that they care. Then you get the second call, from someone who sounds like they really have a good grip on what’s bugging you. They listen to you talk about your situation and express disappointment at your current bad luck, but provide hope and motivation for your future. Honestly, you feel so much better after the second call than the first. You feel so much more “heard.”

Then you’re told that the second phone call was from a computer application programmed to sound authentic and effusive in its expressed interest in your well-being. Does your perception of that latter conversation change?

And regardless of whether it changes or not for you, can you exercise the amount of empathy required to understand why it does for others?

If the second caller’s voice had sounded clipped and stilted, clearly like text-to-speech, it assuredly wouldn’t have had the effect of making you feel more emotionally supported. But you wouldn’t have minded, because there’s no way you were duped in that scenario– a child could’ve seen through it. But what if you were completely convinced, to the point that you made yourself vulnerable in some way, only to find that it wasn’t a human to which you directed these raw emotions?

More human than human

The article describes how “individuals further reported an ‘uncanny valley’ response—a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape navigated by AI-human interactions.”

The uncanny valley is the perceived gulf between human-like entities and actual humans– specifically, our emotional response to that gap. I googled the term and received an AI definition that gave the following examples of the uncanny valley phenomenon: “Androids, computer game characters, life-like dolls, the 2004 children’s Christmas movie The Polar Express.” The definition includes both unease and disgust as emotional responses. 1

The discomfort with robots of various kinds tends to arise from how good they are at seeming human– sometimes better at it than actual humans are. If the computer in WarGames asked how your day went, you might be suspicious about why it would even pretend to care.

Conclusion?

Experience is subjective– you are literally the only one who knows what it’s like to be you. But that’s the source of knowledge that you use to get a glimpse of what it’s like to be someone else. To take part, even a little bit, in what it’s like to be them. Participate in their experiences. But AI has no subjectivity, though its existence is predicated on how good it is at pretending otherwise. Omniscient but friendly, like Janet in The Good Place.

But one thing Janet was never good at was feeling someone else’s pain, and pain is the #1 indicator for humanity as a species that something has gone wrong with our bodies. Maybe it feels like a cruel joke to expect those within that 60% who live with chronic pain to get over their “bias” against AI trying to quantify and qualify it.

And yet that remove from humanity is how AI can democratize access to healthcare, compensating for the limitations on the knowledge, experience, and hours of sleep that human doctors can attain daily, throughout the world. An AI doctor can be far better than no doctor at all.

The article linked above has a quote from Erik Duhaime, co-founder and CEO of Centaur Labs: “AI for healthcare has never been about replacing doctors, but doctors who use AI might end up replacing those physicians who don’t.”

One area where doctors have sparse reserves is time to sit and listen to patients describe their pain, which is not to say that it’s in any way a waste of time to do so. But in light of this cold fact, AI might be able to compensate, to some extent, by listening. Empathy isn’t demonstrated by loudly proclaiming that you’re doing it, but often takes the form of just sitting and listening, and AI can do that.

So maybe that’s a way to take advantage of AI’s tremendous access to knowledge about pain without pretending that it feels pain– have it listen, productively, to those experiencing it. AI may never be able to feel your pain, but perhaps it can understand it, and use that understanding to help you deal with it.

“I feel your pain.”

“I can learn about and understand your pain.”

That’s a far more assuring thing to hear, anyhow.

  1. It might be surprising to hear the word “disgust” come up, but “animal reminder disgust” is a specific type of disgust elicited by reminders of our own animalistic nature, such as bodily functions, violations of the body envelope such as violence or surgery, and of course death itself. The uncanny valley here is between us and our animal origins, as it becomes harder to distinguish between the two– just as it becomes harder to distinguish between ourselves and AI.

    Cyborgs are an interesting crossover to me in how they can elicit the uncanny valley response in both directions. Think of Officer Alexander Murphy in the RoboCop series, Tom Cruise’s eye replacement surgery in Minority Report, the Borg, or of course, Anakin Skywalker, aka Darth Vader. Generally speaking, the message is that mixing robots with humans– physically, at least– is gross, primarily because of how the human part tends to just bring things down by being all organic, mortal, prone to bleeding and other spontaneous emissions of effluvia. ↩︎

Content design for a nonprofit community makerspace: MakeICT

Content design for a nonprofit community makerspace: MakeICT published on No Comments on Content design for a nonprofit community makerspace: MakeICT

What even is a makerspace?

If you’re not familiar with makerspaces, here’s a definition: they’re spaces where people make things.

Literally, that’s it– you could make pretty much anything at a makerspace, with the primary constraints being your own imagination, and what you can get away with.

Makerspaces come in many varieties, from university fabrication labs to in-school makerspaces for kids to commercial enterprises. The MakeICT Institute, where (in full disclosure) I’m president of the board, is a 501(c)(3) nonprofit makerspace run entirely by volunteers. Our board is a “working board,” strategizing about MakeICT’s future while also being actively involved in meeting the organization’s needs on the premises.

MakeICT operates out of a former elementary school, with classrooms converted to shops dedicated to each domain of “making,” from a fabrication lab to a woodshop to a metal shop to textiles, and ceramics, and more. Each area has an area lead responsible for purchasing tools and supplies, and generally keeping the area in good working order. Committees exist to run events, process new memberships, tackle IT needs, and generally keep the place running.

As an organization, MakeICT functions as a community of makers with a mission to “innovate, learn, and build community at the intersection of art, technology, science, and culture.” It prides itself on being welcoming and accessible by keeping membership prices low, allowing 24/7 access to the space for members, and inviting members to teach classes at various levels of expertise, for which they receive 75% of the registration fees. MakeICT also hosts and organizes events for members and external organizations, offering makers a chance to display and sell their work.

Goal, audience, and scope

This is a content design and strategy proposal for MakeICT that has largely not yet been implemented.

Problem: MakeICT’s existing content was developed with care, but without organization. It’s difficult to navigate, has inaccurate or outdated information sprinkled throughout, and in general badly needs a makeover.

Goal: A mission statement is inspiring, but a content design strategy expands on that inspiration. The goal is to present unified content that establishes a brand/identity for MakeICT for use in promotional and educational materials.

Audience: Members and prospective members, partners and prospective partners, and donors and prospective donors.

Scope: The scope of this project extends to online content, presentations, and printed materials for the makerspace.

MakeICT’s online content includes:

  • Social media posts
  • Website
  • Wiki
  • Forum

MakeICT’s presentations include:

  • Internal training sessions (e.g. the “‘how to teach a class’ class”)
  • New member orientation sessions
  • “Maker Monday” presentations (members of the public are invited to tour the space, and are given a general overview of what MakeICT is)
  • Presentations for external organizations

MakeICT’s printed materials include:

  • Flyers
  • Brochures
  • Guides

Limitations

  • Shoestring budget – We get donations and sometimes discounted goods and services, but everything starts with spare time and the will to make something happen.
    • Compensation: It doesn’t take a lot of funds to make over a website, for example, or shift the voice and tone of posts on social media. A dramatic change to printed guides and presentations is more of a financial hurdle, but getting people on board in advance of making changes will minimize the impact.
  • Volunteer authors/designers – No training is required. There’s a small Communications committee, but otherwise, any active member may contribute content.
    • Compensation: Enthusiasm makes the difference, and strategizing together makes people feel included and inspired to create content on their own. Holding a meeting or two to formulate a “plan of attack” would compensate for the lack of training.

Persona

The persona I see for MakeICT is The Friendly Guide. The Friendly Guide’s characteristics are:

  • Warm and welcoming
  • Passionate about sharing knowledge
  • Empathetic in addressing the varying skillsets and backgrounds of members
  • Passionate about innovating, broadening horizons, and pushing the limits of creativity
  • Eager to engage in collaborative projects with a playful and inclusive spirit

The Friendly Guide insulates you from the inherent risks of trying out a new hobby, which typically requires investing in expensive tools and materials, and then hunting down sources of expertise, before you’re even sure about getting in deeper. Minimizing these risks enables “hobby creep,” which is when the Friendly Guide nudges you and whispers “Hey, check this out!”

You experience “hobby creep,” for example, when suddenly you’re walking across the hall from Textiles, where you were quilting using the long-arm quilter that inspired you to join, and into Ceramics. Suddenly you’re centering a pot on one of the electric wheels, and you could swear that you hear Unchained Melody playing in the background. Congrats! You’ve found a New Hobby.

The Friendly Guide lets you mess up on a project (repeatedly) using free materials, teaches a range of classes that introduce tools and skills, and steps in to prevent you from, say, electrocuting yourself while using the powder coating gun (ground it, silly).

The Friendly Guide knows you don’t have a lot of cash, and keeps membership prices low. The Friendly Guide knows you work weird hours, and lets you access the space 24/7. The Friendly Guide isn’t a teacher, a boss, or your dad, but rather a good friend who’s knowledgeable but also known for getting up to occasional hijinks and shenanigans.

Voice

Design Principles

MakeICT’s design principles are:

  • Collaborative
  • Dynamic
  • Accessible

Concepts

The concepts that support each of these design principles are:

Collaborative:

  • Communal skill development
  • Cross-disciplinary projects and events
  • Networking and support
  • Celebration of achievements

Dynamic:

  • Experimentation
  • Pushing boundaries
  • Innovation
  • Creative exploration

Accessible:

  • Welcoming to members of different backgrounds and experience
  • Teaching at multiple skill levels
  • Low-risk “hobby creep”
  • Guided introduction to new tools and skills

Project: New member orientation guide

After attending a Maker Monday (a tour of the space, open to the public), a prospective member can apply for membership and pay their first month’s dues. Once their application has been accepted, they are invited to attend a new member orientation session, which is composed of a) a more in-depth tour, b) a Q&A session to get to know each other and existing members, and c) the opportunity to get a badge created, which provides 24/7 access.

The new member orientation presentation contains a lot of new information, including off-the-cuff remarks by the presenter. This booklet supplements that presentation, gives new members a place to take notes, and provides a resource for them to take home, post-orientation.

That’s our logo, the little guy with his mind blown–literally– with ideas for things to make. ICT is Wichita, Kansas’s airport code (apparently federal regulations prohibited airport codes from beginning with “W” or “K,” reserving those for radio stations).

The gears and tools emerging from someone’s head via a hinged trapdoor give a vibe of both “playful” and “industrial,” so I aimed to keep that tone throughout the guide.

Table of Contents: What to include, and where (and why)?

Because MakeICT is entirely composed of, and run by, volunteers, it’s important to emphasize this fact from the outset, giving new members a sense of what “volunteering” means.

The next most important thing is the map, because MakeICT is a converted school and it can be easy to get lost, so the map gets the centerfold on pages 10-11.

The FAQs comprise the “meat” of the booklet– they’re its entire raison d’être.

Lastly, the list of areas and contact information for the leads is included as a resource.

My mind is regularly blown by how an organization run entirely by volunteers manages to hold together from day to day, let alone expand and outgrow two prior facilities before moving into its current location.

I reworked the diagram showing the components that contribute to MakeICT’s trajectory. Below is the preexisting version, which I found confusing and too complex to convey the idea that membership encompasses all of the engines that power the organization.

Practically speaking, a letter to new members from the sitting president means that the next president will need to swap in their own message next June, and that we can’t just print a billion of these guides with the expectation of using them forever.

But I think it’s important for the president to represent the makerspace up front, in print– to literally speak for the organization, and communicate in a friendly but sincere way that all are welcome, i.e. to be the Friendly Guide.

Most of the FAQs are in random order, and they all have the same look and feel, so I won’t go through all of them. However, it’s vital to have the “forgot my badge” question up front. It’s a leap of faith to allow someone 24/7 access to the space, and members justify that level of trust by acting as individual, informal security guards.

This is the Friendly Guide chiding members that they don’t get to skirt the rules, but giving them an “out” if a member with a badge agrees to “babysit” them (which I have done, by the way. It’s…boring, as babysitting often is.).

There’s also a huge amount of trust in giving unsupervised access to a broad variety of hand and power tools, some very dangerous and/or very expensive.

This is the Friendly Guide’s voice again, threading the needle by using a friendly and reassuring tone, deflecting the potential of shame to convey a necessary point about caution and responsibility.

The Friendly Guide wants you to have fun exploring how to powder coat metal– but not to drive you to the hospital because you electrocuted yourself while powder-coating.

We don’t have a “You are here” style map at the front entrance, though perhaps we should. There are also no staff watching over the Welcome Center, so members and guests are pretty much on their own to navigate the space. However, there is a computer at the Welcome Center, where you can do the following:

  • Fill out a waiver, which everyone must do before spending time in the building.
  • Fill out and print a guest badge, which guests must wear for the duration of their visit.
  • Check in students for classes and verify that they’ve signed waivers and paid the course fee.

That computer is hard to miss and space is limited, so I opted to point out that the main entrance is where the alarm system is managed, and where you can drop cash and checks to either donate or pay membership fees in person.

Each area has its own email address so that members don’t have to memorize the lead’s name. Lead turnover isn’t frequent, but it does happen, so this way the guide won’t need to be updated each time.

It can be tricky to remember what major tools exist, and what classes are offered, in each shop, and this handy list makes it simple to entice your friends to become members as well. There’s something for everybody!

Where to go from here?

Videos

The most immediate application of these design principles is the creation of videos. The membership committee lead is also a Director At-Large, and he and the Vice President have expressed interest in creating videos– potentially one video per area, ideally with the area leads providing brief (30 seconds to a minute and a half) overviews of their respective areas. These videos can then be put to use in several contexts:

  • The videos could be uploaded to the wiki, so that–for example–visiting the Textiles wiki page would give a member the option to watch a brief video explaining what tools and materials are available, and which classes are taught in that area.
  • The videos could be included in a Maker Monday or orientation presentation, giving prospective or new members a chance to learn about a given area without area leads having to be on-site to personally deliver the same overview they’ve given every Monday for as long as they’ve been leads.
  • The idea of area-specific overview videos was sparked by a Director At-Large who is disabled, as a means for disabled members to become acquainted with the various areas without having to make the physical trek between them, down the halls and back, or remain in each room for the duration of the lead’s overview.

Web Site and Wiki Content

The Secretary of the board and some former members of the board are engaged in a project to revamp the site and the wiki, providing useful information that is easy to understand and navigate. With the web site in particular, the content is public-facing and should project the Friendly Guide persona and use its voice to communicate with curious prospective members, and, crucially, members of external organizations who seek to partner with MakeICT.

Internal and External-Facing Presentations

MakeICT partners with external organizations in different ways, from providing an event venue for community groups, to participating in offsite events to publicize MakeICT and provide opportunities to children to become “makers” themselves, to giving presentations for outside groups on what MakeICT is and does. These presentations will benefit from a unifying set of design principles that ensures consistency in content messaging across events and audiences.

Looking Forward

MakeICT was incorporated on December 13, 2012 as a Kansas not-for-profit corporation. It has maintained that rag-tag sense of spirit throughout, but there’s no necessary conflict between maintaining that image and creating more refined messaging that emphasizes community and innovation. For that matter, there’s arguably a symbiotic relationship between the two, and it’s embodied in the persona of Friendly Guide. The principles of being Collaborative, Dynamic, and Accessible can guide MakeICT’s mission to become a welcoming community that celebrates scientific and artistic exploration.

Being a maker is inherently exhilarating, because it means engaging in creation– bringing something new into the world. Creating content about the makerspace is part of that, because it’s also a form of making.

MakeICT has a huge appeal, but that’s not something to take for granted. You don’t want to just make friends, after all, but to stay friends. Creating content for a thriving community, to help it thrive, is an ongoing process that will only improve as we go, for as long as we’re driven to make that happen.

It should go without saying

It should go without saying published on No Comments on It should go without saying

Yesterday Ophelia Benson and Richard Dawkins both published, on their respective blogs, a brief joint statement on how atheists should disagree. It’s really unfortunate that such a thing should be necessary, but encouraging that it happened. The statement condemns bullying and harassment generally, and then goes on to cite specific examples of such:

In other words we have to be able to manage disagreement ethically, like reasonable adults, as opposed to brawling like enraged children who need a nap. It should go without saying, but this means no death threats, rape threats, attacks on people’s appearance, age, race, sex, size, haircut; no photoshopping people into demeaning images, no vulgar epithets.

It should go without saying, but this statement comes into the wake of some particularly disgusting instances of people doing exactly these sorts of things, and defending others who have done them. In the comments on this statement on Dawkins’ blog, there are people continuing to defend this kind of behavior:

The reason that people make photoshops of her and her fellow travelers and make derisory comments about her is that they tried very hard to engage in honest discussion with her only to be met with conveniently selective moderation practices, ridiculous accusations of misogyny and a habit of playing the offended victim card to death. People might still have left her to stew in her own juice if not for the attacks on high profile figures over contrived offences. When bloggers jump on board with unevidenced accusations of sexual crimes then they can expect to be lampooned. The rationale behind the ridicule is that there is no point at all in trying to reason with her because she will not give an honest reading to what you say and will likely selectively moderate for effect, so why bother trying to engage politely with her.

No. See, that’s not how it works.

Harassment doesn’t stop being wrong just because you can find some justification in your mind for a person deserving it. It’s wrong, period. If you disagree vehemently with someone, you express this disagreement as an argument. You do not draw childish pictures of them making fun of their appearance. You do not call them demeaning names. You do not, in the same breath, endorse rhetorically punching someone because they won’t listen to you and then, because they complained about the first punch, justify doing it again. You don’t fake a punch and then give them two for flinching. That’s what children and bullies do.

Personally, I see a false dichotomy between harassment and politeness– there’s a world of ways to be rude to and about people without acting like a five-year-old. But if you’re unable to find a course of action in this realm, I would suggest not engaging with those people. No, going off and drawing a cartoon of them with a pig nose, or spreading around somebody else’s drawing of such, doesn’t qualify as disengagement.

Tribalism is a huge problem in the atheist movement, and my thoughts on that subject are muddled. I haven’t honestly worked out when it’s okay to draw lines in the sand and insist that “we” should no longer value what a certain person has to say because of what they’ve said in the past, or even continue to say, although I think Greta Christina’s recent post on the subject is pretty damn persuasive. There’s only so much time, and only so much attention we have to give, and it’s valid to say that a person’s actions have been so egregious as to disqualify him or her from deserving attention. That doesn’t mean much when you’re talking about someone’s personal attention, but it means a hell of a lot when you’re talking about who to invite to a conference or whose blog to host on your network.

There’s no official code of conduct that people in the atheist movement are forced to follow. If someone behaves reprehensibly, group ostracism is really the only way to deal with it. As a consequence, we continually have people trying to influence the group against someone, or against an organization, because that person or organization is believed to have rendered all charitable assumptions about him/her/them unjustified. I couldn’t tell you how many times I’ve seen someone say (invariably in the comments on the Facebook post of some prominent person in the atheist movement) that they’ve quit “the movement” altogether for this reason. I then laugh inwardly, bitterly, and move on, because the frustration and noncommittal nature of the statement are so palpable. “I wish I knew how to quit you,” indeed.

Not all “infighting” is created equal. The existence of disagreement, even strong disagreement, does not justify pettiness and childishness. The fact that someone is a “public figure” does not justify it either– public figures are still people. I don’t think it’s tribalistic to tell people who insist otherwise that their behavior disqualifies their views from consideration by people who want rational, respectful dialog, because it’s always possible to find someone expressing the same otherwise worthwhile sentiment while not being a heinous asshole at the same time. We just have to follow up on this promise, and vote with our attention.

I hope we can. I think this joint statement is a move in that direction.

A simple ethics of expectations

A simple ethics of expectations published on No Comments on A simple ethics of expectations

On the news this morning I listened to a report about a new virus discovered in Saudi Arabia. But after describing how scary and disturbing that is, the report mentioned that it has infected a total of two people and is believed to be transmissible only from non-human animals to humans, so it probably won’t pose any significant threat to the tens of thousands of people expected to flood into the country for the Hajj, the pilgrimage to Mecca.

And I thought…..man, I’m glad I don’t believe in a god who wants me to do things.

Not just things like go on a pilgrimage to a country where I might get infected with a virus, but anything. Because those things might be against my own interests, and because they’re expectations of a god, they’re not expectations I could advisably ignore.

Now, morality requires you to act against your own interests sometimes– only psychopaths go around using other people with absolutely no regard for those people’s welfare. But with morality, you’re refraining from harming people for the sake of those people. With the expectations of a god, you’re refraining from doing things because of the demands of a being that you don’t even know exists. And whom you can’t harm.

Frequently, and happily, the expectations of the gods people believe in just happen to be things they would do anyway, because they’re also moral (e.g. giving to the poor). Infrequently, the expectations of the gods people believe in are very immoral indeed (e.g. punishing non-believers). And frequently those expectations are morally neutral or close to it (e.g. making a pilgrimage). But even a morally neutral expectation can be an unnecessary pain in the ass at best for the believer, because it still requires him or her to exert at least some energy, time, and/or money on something he/she wouldn’t otherwise do. And in this case, could actually prove very harmful to him or her.

Good things are worth doing because they’re good.

Good things may be good because of God, or they may not. But regardless, you don’t need to believe in God to know what Good is, and to do good things.

If God is good, then God should only expect us to do good things. Not bad things, and not neutral things. Not because neutral is bad, but because it’s subjective– once you demand that someone do a neutral thing rather than them doing it for their own pleasure, you’re imposing on them. And that’s bad.

Conclusions:

Therefore, it would be reasonable for a believer in God to do only those things which God expects that are recognizable as good by the believer him/herself. Which would mean that “God says so” is never sufficient reason to consider something good.

Therefore, a believer who is moral should behave identically* to a non-believer who is moral.

Therefore, you can tell if the god someone believes in is good by whether that person’s behavior reflects an expectation of doing only Good things, not bad things or neutral things.

Therefore, believing in God, if God is good, is a morally neutral thing to do. As is not believing in God. If God is bad or neutral, then believing in God is an imprudent (bad for you) or bad (immoral) thing to do.

*Edit: This is a problematic term. I don’t mean “exactly the same as” but “indistinguishably from.”

International Blasphemy Rights Day

International Blasphemy Rights Day published on No Comments on International Blasphemy Rights Day
https://www.facebook.com/events/440034722701608/

Blasphemy. Noun:
The act or offense of speaking sacrilegiously about God or sacred things; profane talk.

In other words, the act or offense of speaking about religion as though you are not religious. Speaking about a religion as if you are not an adherent of it. And all of us are at least non-adherents of all religions except our own. Some of us aren’t adherents of any religions.

Therefore we are all blasphemers.

Most of us try not to gratuitously insult the religious beliefs of others. This is considered a gesture of respect for the person, since religious beliefs and behavior are not regarded as ordinary beliefs and behavior, but as part of a person’s identity. Perhaps the most important part, to them. But belonging to an exclusivist religion means believing that other religions are not paths to God– at least, not as direct paths as yours is. So even if they don’t say so, adherents of these faiths believe that other faiths are wrong. Or at least mistaken. If you are a committed skeptic, you are aware that religions generally make empirical claims, and some of those empirical claims are false. They do not align with objective reality, so far as you can tell. And if you are an ethical and honest person, you recognize and are willing to acknowledge that sometimes adherents of religions commit grossly harmful acts, and that sometimes they even exalt as admirable figures people who have committed grossly harmful acts in the name of their deity or deities.

Therefore if you are an adherent of an exclusivist faith, a skeptic, and/or an ethical and honest person, you are a blasphemer.

And yet in some places in the world, blasphemy is either illegal or on its way to becoming so. In other places in the world it isn’t illegal, but people consider it grounds to physically attack someone. If you condemn the latter but approve of the former, you are like Shaykh Abdallah bin Bayyah, Vice Chairman of the International Union of Muslim Scholars, who recently cautioned fellow Muslims to refuse to respond to depictions of Muhammad, even insulting ones, with violence. That was admirable, but in the same breath he also asked the U.N. and Western governments to make it criminal to “denigrate the religious symbols” of Muslims. As commenter Abby Normal eloquently put it, “He essentially wants to replace chaotic mob violence with orderly state violence.” It is not the job of the mob or the state to commit violence in order to protect religious feelings.

For these reasons I celebrate International Blasphemy Rights Day today. Not because I get a thrill in provoking or antagonizing, but because I recognize that doing so is both inevitable and necessary. And that religious feelings, while special to those who have them, cannot dictate the freedom of others to speak. If you want to join me in celebrating this day, you don’t need to blaspheme if you don’t want to (or at least, you don’t have to knowingly blaspheme, though you very likely will by accident). You can just think about it. And maybe tell someone else, so they will think about it. That in itself will benefit us all.

Spokesgroups

Spokesgroups published on 2 Comments on Spokesgroups

Radley Balko got quite a lot of hate mail in response to an article he wrote for HuffPo on Occupy Wall Street. One letter hilariously complains

I am appalled by your lack of integrity. You quoted someone from the Cato Institute but didn’t reveal that you also worked for them. You also didn’t reveal that while they pretend to be conservatives, they are really George Soros peacenicks, homos, and potheads (your probably all three) who wear ties to disguise themselves.

Peacenicks, Homos, and Potheads Who Wear Ties. It has a nice ring to it, doesn’t it?

Ed Brayton picked this up and reprinted it on his blog, where the first posted response was “Not to forget that Cato is financed in large part by the Koch brothers…” Brayton replied:

So is the ACLU. That doesn’t mean they don’t do great work on many important issues. Nor does the fact that they’re wrong on some issues. I think some people just don’t get the point of a think tank that looks at a large range of issues. They have scholars who specialize in entirely different subjects. Their scholars working on economic regulation issues may be completely wrong and their scholars working on Fourth Amendment or eminent domain issues (or any number of others) may be completely right. Heck, the same scholar may be right on one issue and wrong on others, or right on the overall issue while wrong on some particular facet of it. Welcome to the real world, where no one is right on everything (you and me included).

Following from my recent post on spokespeople… yes. Of course groups are more complicated. Of course money changing hands encourages bias. Of course we have to decide whether a non-profit/think tank endorses our goals enough to justify supporting them financially. These concerns are all relevant. But an organization receiving money from sources you dislike is not rat poo in your ravioli. It doesn’t irredeemably taint the group as a whole, and it doesn’t make their conclusions false. Good luck finding a politically active organization to support which is funded entirely by people who agree with you. People have different interests, different goals, and if we’re concerned with politics while too busy with life to be full-time activists ourselves, we have to figure out who is doing the closest thing to what we’d be doing if we were activists, and support them.

If deciding that individuals in public life are your spokespeople and getting angry or denouncing them when they say something “wrong” is unreasonable and unrealistic, then surely doing the same thing with organizations is more so. Actually thinking critically about the content of information disseminated and the value of acts committed is obviously more work, but it beats simply turning your brain off and putting your entire faith in a group or denouncing that group as evil in every way. Doesn’t it?

Dan Savage as sexual ethicist

Dan Savage as sexual ethicist published on 2 Comments on Dan Savage as sexual ethicist
As president? Well, maybe not…but we could do and have done a lot worse for that, too.

Lutheran pastor Benjamin Dueholm wrote an interesting and thorough article on this subject for Washington Monthly. It’s definitely worth a read, though I disagree with some of his analysis. So does Amanda Marcotte, who ripped into the article to some extent for sexist/heteronormative bias, and Lindsay Beyerstein, who points out that Savage isn’t nearly as opposed to monogamy as he is generally portrayed. It’s true; he isn’t– though he also doesn’t believe that everybody should be monogamous, or that people who cheat in a monogamous relationship are necessarily the scum of the earth and should never be forgiven.

Dueholm’s careful description of Savage’s ethos points out that in relationships he emphasizes honesty, autonomy, reciprocity, and willingness to give, which I would characterize as a mature respect for one’s partner. Just as different things make different people happy, different relationships can flourish under varied conditions, and one size definitely doesn’t fit all. Savage’s willingness to acknowledge that and address individual relationships on their own terms is, I think, what has made and kept his column (and now podcast) so popular for so long. If we as a country were going to appoint a sexual ethics czar, we could do a lot worse.