
Out of our minds: externalized cognition and UX design


Imagine an app that feels like it thinks along with you, rather than for you. Instead of simple automation, it selectively offloads tasks requiring significant mental effort—the ones that slow you down. That’s a well-known concept in UX called “cognitive offloading,” usually referring to intuitive design of the type that Steve Krug wrote about in Don’t Make Me Think. But a few days ago, product designer Tetiana Sydorenko published an interesting article titled AI and cognitive offloading: sharing the thinking process with machines, in which she describes a model of thinking about the process of cognition itself and how it applies to UX design.

Thinking outside the brain’s boundaries

More specifically, she’s talking about where cognition is located. External mind theories hypothesize that cognition isn’t confined to an individual mind, but rather is more like an emergent property arising from a network of humans, the tools they use, and the artifacts they keep, passing down knowledge from prior generations. In this case the externalized mind takes the form of distributed cognition, as anthropologist Edwin Hutchins described it in his 1996 book Cognition in the Wild.

Hutchins observed that crew members of a navy ship navigated their way across the ocean as a team so integrated with each other that no individual crew member knew, or in fact needed to know, the entire route themselves. He pointed to each crew member functioning as a cognitive resource for the others, as well as the tools that they relied upon constantly, such as a slide rule.

She frames Hutchins’s thesis in this way:

Hutchins proposed that to truly understand thinking, we must shift our focus. Rather than isolating the mind as the primary unit of analysis, we should examine the larger sociotechnical systems in which cognition occurs. These hybrid systems blend people, tools, and cultural artifacts, creating a web of interactions that extend far beyond any single individual.

Though Hutchins published his book in the mid-90’s, Sydorenko points out that the advancement of AI is “revolutionizing the way we think and work by transforming how cognitive processes are distributed. Computers and AI systems take on repetitive and tedious tasks — like calculations, data storage, and searching — freeing humans to focus on creativity and problem-solving. But they don’t just lighten our mental load; they expand and enhance our cognitive abilities, enabling us to tackle problems and ideas in entirely new ways. Humans and computers together form powerful hybrid teams.”

Distributed Cognition in the Wild

Sydorenko then walks readers through a few examples of apps that she finds to excel particularly at using AI (throughout, the AI we’re talking about is machine learning, more specifically generative AI) to manage cognitive offloading.

Probably the best example of distributed cognition in the list for me is Braintrust, an app designed for professional networking that uses AI to help employers create job listings.

Braintrust exemplifies how machine learning can act as a cognitive partner within a distributed system. By automating the creation of job descriptions, the platform reduces cognitive load while enabling collaboration between users and AI. This approach allows employers to focus on tailoring job roles while the AI handles the repetitive groundwork. It also highlights the power of distributed cognition: tasks are shared across humans, machines, and tools to produce results more efficiently and effectively than any one component could achieve alone.

The common theme throughout the apps that Sydorenko visits on her tour is cognitive offloading, of course, and apart from Braintrust (professional networking) she walks us through the Craft app (note-taking and idea-generating), Red Sift Radar (internet security), Relume (sitemap and wireframe generation), Airtable (business apps), Jasper (marketing), and June (product analysis).

These tools exemplify “cognitive augmentation,” a term that captures their ability to enhance and extend human thought processes. I think it’s more accurate than “AI-assisted productivity” or something along those lines, because it gets at cognitive distribution while also suggesting a more dynamic interaction between human cognition and AI, now and in the future.

Backtracking a bit

But I want to step back and take a look at a couple of journal articles Sydorenko links in her piece, where she says “That was the ’90s. Today, modern information technologies, especially AI, are revolutionizing the way we think and work by transforming how cognitive processes are distributed.”

The first article, AI-teaming: Redefining collaboration in the digital era, examines how AI technologies influence human decision-making and perception. Interestingly, it notes some ambivalence about AI’s role, suggesting that the integration of AI in decision-making is not universally received as a positive development. This nuance complicates the narrative of seamless cognitive offloading that Sydorenko presents.

The second article is Supporting Cognition with Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future, and sticking with the “in the wild” metaphor (Hutchins, after all, spent a lot of time in Papua New Guinea in his early career), it’s a lot of backpack to unpack.

It reframes distributed cognition in a modern AI context, focusing heavily on cognitive offloading while largely omitting its anthropological origins (no mention of Hutchins), but it nevertheless offers a robust framework for understanding cognitive offload that deserves to be unpacked.

It also mentions some risks accompanying cognitive offloading that are relevant to this discussion:

In three experiments, Grinschgl et al. (2021b) observed a trade-off for cognitive offloading: while the offloading of working memory processes increased immediate task performance, it also decreased subsequent memory performance for the offloaded information. Similarly, the offloading of spatial processes by using a navigation device impairs spatial memory (i.e., route learning and subsequent scene recognition; Fenech et al., 2010). Thus, information stored in a technical device might be quickly forgotten (for an intentional/directed forgetting account see Sparrow et al., 2011; Eskritt and Ma, 2014) or might not be processed deeply enough so that no long-term memory representations are formed (cf. depth of processing theories; Craik and Lockhart, 1972; Craik, 2002). In addition to detrimental effects of offloading on (long-term) memory, offloading hinders skill acquisition (Van Nimwegen and Van Oostendorp, 2009; Moritz et al., 2020) and harms metacognition (e.g., Fisher et al., 2015, 2021; Dunn et al., 2021); e.g., the use of technical aids can inflate one’s knowledge. In Dunn et al. (2021), the participants had to answer general knowledge questions by either relying on their internal resources or additionally using the internet. Metacognitive judgments showed that participants were overconfident when they were allowed to use the internet. Similarly, Fisher et al. (2021) concluded that searching for information online leads to a misattribution of external information to internal memory.

To summarize: cognitive offloading carries the risk of forgetting how to navigate without Google Maps, losing documents on Google Drive that you were going to use for a blog post, getting lazy about learning new skills because AI can do them for you, and growing cockier about your own abilities because you forgot how much AI assistance has helped you out.

But none of those risks apply to anyone reading this post, or indeed its author, right? Of course not.

I’m reminded of the lawyers who were sanctioned last year for using ChatGPT to write a legal brief for them, unaware that it cited case law for cases that don’t exist.

But on the other hand, Eleventh Circuit Judge Kevin Newsom wrote a piece last May about how generative AI can assist judges in finding insights on the “ordinary meanings” of words. That may sound bizarre, until you recall that the term “legalese” exists for a reason: legal practitioners can become so entrenched in writing and speaking about the law that they fail to notice how disconnected their language is from a normal conversation between human beings.

Large Language Models, of course, can help you to write in many voices and levels of jargon, making them incredibly useful for the kinds of apps Sydorenko describes. But does that mean distributed cognition provides the best design principles for apps of all kinds, at least when it comes to generative AI?

A Trusted Tour Guide

The risks associated with cognitive offloading may contribute, at least in part, to the falling public trust in AI. According to a 2023 global study, 61% of us are wary of trusting AI systems.

An article in Forbes citing that global study noted:

it isn’t just about trusting AI to give us the right answers. It’s about the broader trust that society puts in AI. This means it also encompasses questions of whether or not we trust those who create and use AI systems to do it in an ethical way, with our best interests at heart.

Those of us who use AI to create content need to remember that we’re both expecting trust and being expected to be trustworthy– while most of us don’t design AI directly, we’re nevertheless responsible for what we make with it.

In that sense, we’re both consumers and creators of the famous “black box” that AI, particularly generative AI, represents. I can’t see whether you write something with AI, so not only don’t I know if you can write, but I don’t know whether you fact-checked the content you’re sharing. In the other direction, if I’m writing with AI assistance, I don’t know why the AI is making the suggestions and changes that it does.

But my aim here is not to get bogged down in questions about trust– rather, it’s to suggest an alternative external cognition model for UX design that incorporates AI, and that’s what philosophers Andy Clark and David Chalmers called, in their famous 1998 paper, the Extended Mind.

Clark and Chalmers envisioned the extended mind as a personal notebook, a helpful assistant, a brainstorming partner. The extended mind, to them, is active, integrated, and indispensable.

“Now consider Otto,” they begin in a now-famous thought experiment:

Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory. Today, Otto hears about the exhibition at the Museum of Modern Art, and decides to go see it. He consults the notebook, which says that the museum is on 53rd Street, so he walks to 53rd Street and goes into the museum. Clearly, Otto walked to 53rd Street because he wanted to go to the museum and he believed the museum was on 53rd Street. . . it seems reasonable to say that Otto believed the museum was on 53rd Street even before consulting his notebook. For in relevant respects the cases are entirely analogous: the notebook plays for Otto the same role that memory plays for [someone else]. The information in the notebook functions just like the information constituting an ordinary non-occurrent belief; it just happens that this information lies beyond the skin.

If Otto were a computer, we’d call the notebook his “external hard drive.” In such a case, there wouldn’t be much contention if you called the HDD part of the computer, especially if the two remained connected and the computer regularly “consults” the HDD for information.

Because it’s all just data, right? Well, no. In the extended mind, there is what Clark and Chalmers call “active coupling.” The relationship between the mind and external tools must be causal, active, and continuous. It needs to reliably augment mental functioning.

The difference between distributed cognition and extended mind could be summed up, at a very high level, in this way:

Feature              Extended Mind                           Distributed Cognition
Focus                Individual cognition + tools            Group cognition + social systems
Cognitive unit       Individual + artifact                   Network of agents, tools, and systems
AI relationship      AI as a personal cognitive extension    AI as part of a socio-technical system
Level of analysis    Personal (one user + one tool)          Systems-level (multi-agent, cultural)
Emphasis             Integration of tools into self          Collaboration, social interaction

But what does an app that uses extended mind as a basis for design, rather than distributed cognition, actually look like? Here’s an example:

Ada Health

Ada is an AI-powered health companion that assists users in assessing symptoms and managing health concerns. Its design reflects extended mind principles through:

  • User-Centric Interface: The app’s intuitive design ensures accessibility and ease of use, facilitating seamless integration into users’ health management routines.
  • Interactive Symptom Assessment: Ada engages users in a conversational interface, asking personalized questions to evaluate symptoms, thereby acting as an extension of the user’s diagnostic reasoning.
  • Personalized Health Insights: Based on user input, Ada provides tailored health information and guidance, supporting informed decision-making.

Now for some caveats: Ada isn’t an all-encompassing personal health app. It’s primarily designed to be a symptom-checker. But by partnering with healthcare providers, it can be integrated into their systems as their “digital front door to services,” as the Ada site puts it, providing a first point of contact for questions about symptoms, available 24/7 (in fact, they say that 52% of assessments were outside clinic opening hours). And this may not sound like a direct service to patients, but the site also proclaims that it “helped to redirect over 40% of users to less urgent care.” That’s compared to heading straight to the emergency room– it’s also available 24/7, but most people don’t want to go there if they have a viable alternative.

One more quote from the site:

A reputation built on trust

Ada’s fully explainable AI is designed to mimic a doctor’s reasoning. Our approach to safety, efficacy, and scientific scrutiny earns us the trust of clinicians and patients.

93% of patients said Ada asked questions that their doctor wouldn’t have considered

A person’s goal for using the Ada app isn’t cognitive augmentation– they don’t want to be doctors. They want to have access to information and recommendations that doctors can provide (but apparently, in some cases, don’t).

There are other differences between an app that evaluates your health symptoms and one that helps you write marketing copy, of course, and the most striking one is the stakes involved. Not to say that marketing copy isn’t important, but it’s important in the same way as an Uber is, relative to an ambulance– low risk vs. high risk.

Another difference is that because an app like Ada is healthcare-oriented and high risk, it carries a higher level of trust– which it has to earn. Healthcare apps may be more trustworthy in the eyes of the public than HR apps, but that will hold only as long as the trust patients place in apps like Ada remains justifiable. The context that makes Ada’s design work dramatically better with extended mind principles is the context that requires it to exist to begin with.

Two (complementary) paths forward

Hopefully by this point it will be clear that my intent is not to argue with Sydorenko’s post on distributed cognition for cognitive offloading, but to “Yes, and” it. I think that both distributed cognition and extended mind hold great promise for the design of apps using AI, but which is more appropriate in a specific instance can depend on factors like “systemic vs. duality,” “collaboration vs. consultation,” and, as basic as it sounds, “group vs. individual.”

But it’s also not as easy as “high stakes vs. everything else.” To illustrate that idea, I’ll toss out an example of an app that’s high stakes, healthcare-related, and best served by the distributed cognition model: PulsePoint.

PulsePoint is an app that tracks cardiac emergencies, such as heart attacks, and notifies nearby users who are trained in CPR to provide assistance. It exemplifies distributed cognition in the following ways:

  • It relies on a network of participants: 911 dispatchers, PulsePoint users, and emergency responders. Each plays a specific role in responding to cardiac emergencies. This shared cognitive load mirrors the distributed cognition model, where tasks are spread across multiple agents to achieve a collective goal.
  • The app uses external tools (e.g., smartphones) to provide real-time alerts and locate Automated External Defibrillators (AEDs). These tools, combined with the users’ actions, form a socio-technical system.
  • The system builds on past actions (e.g., training users in CPR, mapping AED locations) to support real-time decision-making during emergencies.

I’m planning to write more on the design needs of healthcare-related apps in the future, but for now, I hope PulsePoint serves as an example of how members of a collaborative system can work together via distributed cognition to achieve amazing results that otherwise would not have been possible.

That’s what “emergent property” means: combining to make something bigger than we could have achieved alone.

Both distributed cognition and the extended mind remind us that no matter the approach, our tools and networks amplify what we can achieve. Sometimes it takes a village, and sometimes it takes a friend. I know how corny that sounds, but sometimes it’s true.

Leveling up in the DOOM pile that was 2024


2024 was…a learning experience. Every day I learned something, whether I wanted to or not.

It was a crash course in elder care, and an object lesson in patience, resilience, and creative problem-solving. It was, and is, a DOOM pile (a pile which you Didn’t Organize, Only Moved) of a life.

The experience

A DOOM pile is a pile of random stuff that you have to reckon with, sooner or later. This reckoning may– it will– be painful, but you have to do it if you ever want to know where things belong, in order to put (and keep) them in those places. Dealing with the items in a DOOM pile is a never-ending task, because the piles will incessantly, constantly be in a state of regeneration. And you’ll have to keep confronting them, lest you find yourself at the bottom of one.

Where, for example, was I supposed to put the complicated emotions surrounding my mother’s trip to the emergency room– three times– in a state of confusion, to the point of failing to recognize me? Where was I to put my dismay at the realization that she would deny that these trips ever happened, because after all, she didn’t remember them?

Where was I to put feelings of overwhelm that accompanied my election as the president of the board of a non-profit? MakeICT is a “makerspace,” as in, a community of people with vastly different backgrounds who, nevertheless, occupy the same space and have to share their toys– I mean, tools– and get along. And my purpose became one of unifying the seven representatives of these makers, in order to get sh*t done. Talk about hands-on learning.

Where would I deposit the creative frustration and fatigue that rained down on my artistic projects and threatened to drown all motivation to see them through?

Then there were those rabbit holes into which I dove without hesitation, the ones that became my focus of attention and deliberate learning, creating a sense of relief and accomplishment that I wasn’t finding elsewhere– what of those? I picked these subjects carefully from the bottom of the DOOM pile, moving them to the top like some bizarre game of Jenga, in which the pile keeps growing but never falls over.

But I kept throwing myself at the pile, in a constant effort to slow down its growth, if I could not truly end it. Picking up heavy things builds muscles, as I’ve learned from a hobby of welding large bits of metal to each other into hopefully pleasing shapes. So I picked up the heavy stuff, along with everything else– and it was heavy, man.

It still is. It always will be.


The focus

One rabbit hole has been learning Python—a versatile, adaptive language that shapes user experiences. It’s a common tongue, but a relatively new one for me, though my aspirations for using it include designing and building an “AI robot” with its own LLM, communicating and reacting to the world around it.

That’s a ways down the road–really far down the rabbit hole, to put it lightly (and thoroughly abuse the metaphor). But along the way, my hope is to design video games– dialog-heavy games, like the ones I love to play. I want to grow my Python skills and show them off through storytelling, even to create interactive narratives that require prompt engineering to set the scene and develop NPCs (non-player characters) with personalities.

Then the inspiration hit to translate what I’ve been learning into an experience– not just a game, but a way to explore agency. In his book Games: Agency as Art, philosopher C. Thi Nguyen describes games as means of exploring the “alternative agencies” created by game designers, experiences that are fundamental to the concept of gaming itself.

I am not (yet) a game designer; I need to develop my Python skills further. But that requirement is no obstacle to my creative ideation. The idea to depict the last year as it has played out in my life, in game form, ambushed me like a cat around the corner. The cat doesn’t wait for you to prepare before it pounces, and neither did this idea.


The game (currently untitled)

What I imagine is part visual novel (a linear story with its own exposition), part point-and-click adventure (in which the player chooses items to explore and dialog options for conversations), and part minigames (simple games within a game).

The game would progress from scene to scene, snippets of my life, a series of vignettes that make up the emergent property of the year’s experience. Each vignette reflects a different theme of growth, and each level is labeled accordingly.


Hospital room (resilience level)

We begin in a hospital room, generic and spartan as hospital rooms inevitably are. It’s January, but you wouldn’t know it except for the small collection of soda cans sitting on the windowsill. That ledge is like a little refrigerator, for the same reason that nobody, ever, can be observed sitting in the chair next to it.

Mom sits in the hospital bed, upright with the room’s only table– the kind that slides under the bed– extending over her lap. Clicking around the room conveys this exposition, as well as why there’s a blanket draped over the reclining chair (because you slept there last night), a pizza roll on the hard cover of the hamper (because again, there are no other tables), and a forgotten polystyrene coffee cup, because getting coffee is the only opportunity for adventure beyond the confines of the room.

The only opportunity, that is, until your mother wants a yogurt parfait. It didn’t come with her breakfast, so you’ll have to fetch it from the cafeteria. This is where the minigame comes in– the maze, in which you navigate the circuitous halls of the hospital on your way to the cafeteria, then back again. It’s a timed exercise, because your mother has an echocardiogram scheduled in half an hour. Success means getting the little plastic cup to her on time, and without forgetting the spoon.
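Since these minigames are destined for Python anyway, here’s a rough sketch of how the maze’s timing check might work, using breadth-first search over a toy grid of hallways. The map, the per-step timing, and the function names are all my invention for illustration, not real game code.

```python
from collections import deque

def shortest_steps(grid, start, goal):
    """Fewest moves from start to goal ('.' = open hall, '#' = wall)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), steps = queue.popleft()
        if (r, c) == goal:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return None  # cafeteria unreachable

def parfait_run(grid, room, cafeteria, has_spoon=True,
                minutes_per_step=1, deadline=30):
    """Success = round trip before the echocardiogram, spoon not forgotten."""
    one_way = shortest_steps(grid, room, cafeteria)
    return (one_way is not None
            and 2 * one_way * minutes_per_step <= deadline
            and has_spoon)
```

The real game would animate the wandering, of course, but a pathfinding pass like this is how it could decide whether the half-hour deadline is even winnable on a given map.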


MakeICT metal shop (problem-solving level)

The next scene opens in a metal shop, a somewhat cramped space that houses three welders, an anvil, a plasma cutter, a grinding wheel, and a work table. That table is currently occupied by steel grids so large that they extend well past the edges of the table. Clicking around the room informs you that this is an area in MakeICT, a former elementary school whose classrooms were converted into studios for ceramics, textiles, woodworking, and so on. In this case, you’re in the “hot shop.” A welding helmet sits on top of the grids on the work table.

The hot shop is also occupied by an NPC, whose presence contributes to the feelings of claustrophobia. Your friend, a long-time member of MakeICT, explains that there was apparently an act of vandalism that left the building’s air conditioning window units slashed and thereby destroyed. Stabbed by a knife, it looks like. The institute has purchased new window units from its struggling budget, and they will need cages installed over them, to protect them from the same fate.

The minigame involves your goal of welding the pieces of grate together to create one of these cages. Your vision is hampered by the welding helmet you must use, and if you don’t join the pieces of the cage in the right places, it will fall apart and you must start all over again. Success means completing a full cage, which realistically means having to weld another one, but the game allows you to end here.


ICT Comic Con (creativity level)

You see before you a large table of a vendor’s booth on which is strewn the makings of a pair of animatronic wings. Behind the table are two NPCs expressing concern that your wings will never function properly and you’ll be unable to wear them. Your costume is Tilda Swinton’s character, the angel Gabriel from the 2005 horror/fantasy film Constantine. Unfortunately, without Gabriel’s enormous feathered wings, your costume becomes an unrecognizable arrangement of shredded white clothing.

The minigame, therefore, entails your attempt to assemble the wings without either making the feathers so sparse that they’re barely recognizable as wings, or so numerous that the wings become too heavy for the servos in your harness (which power and move the wings) and flop dejectedly to the floor. Success means that you reach the happy medium between weight and beauty, but if the wings are layered too heavily too many times, you lose the game and must begin again.
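The win/lose condition here is really just a pair of thresholds. A minimal Python sketch, with every number invented (real feather weights and servo limits would come from the actual build):

```python
MIN_FEATHERS = 40      # below this, the wings look too sparse (assumed)
MAX_WEIGHT_G = 900     # above this, the servos can't lift them (assumed)
FEATHER_WEIGHT_G = 15  # assumed weight per feather, in grams

def check_wings(feather_count):
    """Classify a wing build: sparse, too heavy, or the happy medium."""
    weight = feather_count * FEATHER_WEIGHT_G
    if feather_count < MIN_FEATHERS:
        return "too sparse"   # barely recognizable as wings
    if weight > MAX_WEIGHT_G:
        return "too heavy"    # wings flop dejectedly to the floor
    return "success"          # balance of weight and beauty achieved
```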

Upon achieving success, you receive a message describing how the remainder of the day is spent walking the aisles of the con, with people complimenting your costume and asking to take pictures with you.


Board meeting (leadership level)

In a large multipurpose room at MakeICT you see five rows of plastic chairs, occupied sparsely by a group of thirteen people of various ages in decidedly casual attire– six of whom are fellow board members, you discover by clicking amongst them. This audience stares at you expectantly, waiting for you to say something they can react to in some way. Their faces communicate various levels of engagement, from “barely able to sit still” to “probably asleep.”

This minigame is entirely made up of dialog options, as you must attempt to guide the conversation to stay on-track. Success is when you receive a prompt to bring the issue to a vote, but that option only comes after a series of interactions with NPCs in which you’re given the choice to encourage them to say more, or shut them down (politely!) when they try to change the subject.

Encouraging on-topic discussion allows you to move closer to a motion, while allowing members to digress takes you further away, as measured by a “persuasion meter” at the bottom of your screen. Members do not necessarily wait to be clicked on to speak– on some occasions they force the conversation, in the form of a popup message that cannot be dismissed without choosing a response, which carries the risk of encouraging more popups if you fail to shut them down (again, politely!).

The persuasion meter starts at 50%, but if it drops to 0%, the audience will shout at each other without you having any option to address it and you’ll lose the game.
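Since I’m learning Python, here’s a sketch of how that meter loop might be coded. The gain and loss values are placeholders rather than tuned design numbers, and the function name is mine:

```python
def run_meeting(events, start=50, encourage_gain=10, digress_loss=15):
    """Simulate the meeting. Each event is 'encourage' or 'digress'.

    Returns ('win', 100) once the meter fills (prompt to call a vote),
    ('lose', 0) if it empties (the audience shouts), or
    ('ongoing', meter) if the event list runs out first.
    """
    meter = start
    for action in events:
        if action == "encourage":              # keep discussion on-topic
            meter = min(100, meter + encourage_gain)
        else:                                  # a member digresses
            meter = max(0, meter - digress_loss)
        if meter == 0:
            return ("lose", 0)
        if meter == 100:
            return ("win", 100)
    return ("ongoing", meter)
```

The undismissable popups would just be forced events injected into this same loop, which is part of why the mechanic appeals to me: one meter, one loop, many sources of chaos.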


CBP One app research (critical thinking level)

The scene before you is a desk with a computer and a wide screen monitor, and you hold a tablet. The monitor displays a browser window with an image of a hand holding a smart phone, and the CBP One logo on its screen. Clicking around on this image gives you a brief background of the CBP One app and how it is used today. On the other side of the monitor’s screen is a search window, with links displaying the headlines of ten articles about CBP One. You’re performing research on the app in order to write your own article.

The tablet screen is blank except for instructions on how to play the minigame, which requires you to choose the two headlines that link to articles containing the most comprehensive and reliable information about CBP One. Once you have done so, and had the opportunity to read those articles, you are given a multiple choice quiz on the computer screen.

The answers to the questions in this quiz are contained within two of the articles that you were given the opportunity to read. If you chose those articles, you’ll have the information necessary to answer the quiz questions correctly. If you didn’t, you’ll be asked the same questions, but the articles you chose will give you misinformation, forcing you to either answer the quiz’s questions with that misinformation or guess at the answers. The quiz questions are on the subjects of the app’s appearance, performance, and criticisms.

A grade of below 70% on the quiz means that you lose– your own article contains misinformation, which spreads amongst the readership and contributes to conspiracy theories. Success in this mini game means scoring 70% or higher on the quiz, resulting in your article containing, and spreading, useful information that informs readership about the truth regarding the CBP One app.

Messy desk (curiosity level)

The scene before you is a cluttered desk, a workspace across which are strewn printouts of papers with titles like “Invisible Agents in AI Design” and “Empathy as a Metric for AI Success”, as well as a thick document emblazoned with the title “The cognitive origins of soul belief: Empathy, responsibility, and purity”. Behind the desk is a large whiteboard with messages scribbled across it.

Clicking on items on and around the desk reveals reflections from research. Clicking on the white board, for example, displays the message “Agency isn’t just about appearance, but behavior. You can put a smiley (or aggrieved) face on a robot, but nobody will be fooled into thinking that it understands something like pain. Maybe the key is not to try and make a ‘pain bot’ that impersonates a nurse or doctor, but to transparently be a robot gathering information to assist doctors and nurses.”

Clicking on the laptop displays an infographic that says “60% of respondents say they don’t want healthcare providers to rely on AI.”

Clicking on the dissertation (the thick document) displays the message “Our imaginative projection of other people’s agency is possible because we are able to recognize in them a source of very real but invisible thoughts and feelings which affect how they behave in the world.”

The minigame displays an image of a prototypical “pain bot” and asks players to assign features to it, with the goal of achieving the best possible balance of efficiency, empathy, and trustworthiness. You must make selections that favor specific traits, each of which involves unseen tradeoffs in terms of points gained or lost within each area. Examples:

Add facial recognition for emotions? (+5 Empathy, -3 Trust)

Include physiological data sensors? (+3 Efficiency, -2 Empathy)

Use simple, transparent algorithms? (+5 Trust, -2 Efficiency)

After making these choices, you’re able to see your total scores in each category, and an assessment of how well they are balanced. Success means achieving maximal balance, with the message “Next step: secure funding and trust!” If you fall short, your parting message will be “Stakeholders worry about your pain bot’s reliability and adoption.”
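In Python, the hidden-tradeoff scoring could be as simple as a table of deltas plus a balance metric. This sketch uses the three example features above; the balance formula (spread between best and worst trait, lower is better) is just one plausible choice:

```python
# Hidden per-trait effects of each feature (values from the examples above).
FEATURES = {
    "facial_recognition":     {"empathy": +5, "trust": -3},
    "physiological_sensors":  {"efficiency": +3, "empathy": -2},
    "transparent_algorithms": {"trust": +5, "efficiency": -2},
}

def score_build(chosen):
    """Total the unseen tradeoffs for the chosen features."""
    totals = {"efficiency": 0, "empathy": 0, "trust": 0}
    for feature in chosen:
        for trait, delta in FEATURES[feature].items():
            totals[trait] += delta
    return totals

def balance(totals):
    """Spread between strongest and weakest trait; 0 is perfectly balanced."""
    return max(totals.values()) - min(totals.values())
```

Choosing all three example features, for instance, nets efficiency 1, empathy 3, trust 2– a spread of 2, which the game would then translate into the funding-secured or stakeholders-worried ending.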


The future

Playing this game invites you to step into the “alternate agency” of my year, with all of its challenges to grow and develop.

Each vignette represents tackling one part of the DOOM pile, chipping away at it and diminishing its ability to intimidate.

Each small story is a stand-in for a much larger experience that I hope to have conveyed in a way that doesn’t require playing a fully fleshed-out game in order to grasp the concept. But maybe you can still see the scenes in your head as clearly as I can, despite my tremendous advantage of having lived through those experiences.

Beyond this little experiment of an autobiographical game concept, I’m authoring this post to show, not just tell. This game isn’t just a way to reflect on my year—it’s a way to explore the intersection of storytelling, agency, and design, skills I’m honing as I delve deeper into Python and interactive narratives.

Ultimately, the DOOM pile of life isn’t just a subjective experience—it’s the mountain we all climb to make sense of our stories. For me, learning and self-expression are tools to chip away at that eternal mountain, revealing its contours and helping others see their own path forward.

Because we cannot act upon what we cannot see. Facing the DOOM pile, head on, is the first step in doing something about it.