Healing Through Words: How Creative Writing Empowers Caregivers

“There are only four kinds of people in the world: those who have been caregivers, those who are currently caregivers, those who will be caregivers, and those who will need caregivers.” —former First Lady Rosalynn Carter

Caregiving is a fundamental aspect of human history. Deeply rooted in the fabric of society, it crosses continents, countries, cultures, and religions. In simple terms, a caregiver is someone who meets the daily needs of another person who can’t care for themselves due to age, disability, illness, or mental disorder. Caregiving is a complex field that encompasses professional caregivers, including nurses, home health aides, and certified nursing assistants. It also includes a crucial group—informal caregivers—who are unpaid and are often related to the recipient or are part of their close network.

Some common conditions that informal caregivers provide care for include dementia, Alzheimer’s disease, Parkinson’s disease, cancer, COPD, stroke, developmental disabilities, and mental health and psychiatric disorders. Responsibilities may include feeding, bathing, dressing, running errands, performing household chores, assisting with mobility, providing transportation, administering medication, and coordinating with health care providers.

Although informal caregiving can be personally rewarding, its potential negative impact on caregivers’ finances, family relationships, and personal well-being is undeniable. According to a 2020 report by the National Alliance for Caregiving (NAC) and the American Association of Retired Persons (AARP), 53 million people in the United States provide unpaid care to an adult or child with special needs. Of the 48 million who care for an adult, 61 percent also work full-time or part-time. Many such caregivers eventually take a leave of absence or reduce their work hours.

Furthermore, 30 percent of caregivers belong to the “sandwich generation,” meaning they care for their aging parents in addition to raising their own children or grandchildren. Informal caregivers of adults provide care for an average of 24 hours a week and spend over $7,200 annually on caregiving costs, comprising 26 percent of their income. Perhaps most shockingly, the economic value of unpaid family caregiving is an astounding $600 billion per year.

Unfortunately, as studies show, Americans will only become increasingly reliant on caregivers. According to AARP, by 2034 the U.S. population will include more adults aged 65 and older than children under the age of 18. Worryingly, the number of potential caregivers is predicted to decline while the number of people who will require care is expected to increase.

The critical role informal caregivers play was starkly illustrated by the deaths of Gene Hackman and his wife and caregiver, Betsy Arakawa, who were found dead in their home on February 26, 2025. It was later concluded that Arakawa had died around February 12 of hantavirus pulmonary syndrome. Ninety-five-year-old Hackman, who was unable to care for himself, died around a week later of cardiovascular illness complicated by advanced Alzheimer’s disease. Authorities added that, given his cognitive impairment, it was possible he was not aware that his wife was deceased during his final days.

Emma Heming Willis, who has been a vocal advocate for caregivers since her husband Bruce Willis was diagnosed with aphasia in 2022, said of the deaths, “I do really believe that there is some learning in this story… and that is that caregivers need care too and that they are vital, and that it is so important that we show up for them so that they can continue to show up for their person.”

She’s right. Informal caregivers—who are often referred to as “invisible patients” because their needs and well-being are frequently overlooked—need care too, as many experience a deterioration in both physical and mental health. This is largely because most informal caregivers are thrust into their role with little or no skills training. Competencies that caregivers commonly need but often do not receive adequate preparation for include medical and nursing skills, functional rehabilitation skills, and safety supervision skills. As a result, many caregivers are not equipped to perform complex and high-intensity caregiving duties for their loved ones. This gap in skills can lead to significant stress, contributing to caregiver burden.

Consequently, many caregivers report feeling burnt out and physically exhausted, which can lead to feelings of depression, anger, worry, guilt, anxiety, and grief. Some caregivers also experience post-traumatic stress disorder when caring for loved ones with severe medical illnesses. Other symptoms of caregiver stress include fluctuations in weight, changes in sleep habits, social withdrawal, and loss of interest in activities.

Caregivers may also abuse alcohol or other drugs, including prescription medications. Unfortunately, caregivers often don’t find the time to address their health issues, which leads to prolonged illness. Tragically, studies also reveal that caregivers are an at-risk group for suicidal ideation, suicide attempts, and deaths by suicide.

Creative Writing: A Coping Mechanism for Caregivers

Many caregivers choose to cope with their daily challenges by visiting therapists or joining support groups. However, research has also shown that creativity can be an effective way for caregivers to reduce stress and improve mood, as it provides opportunities for them to be reflective, learn new skills, and nurture a sense of accomplishment and purpose. This can help them experience empowerment, flexibility, self-efficacy, and optimism, which, in turn, can allow them to gain new perspectives on their caregiving challenges and find effective coping strategies. Creative activities often recommended for caregivers include painting, dancing, singing, gardening, cooking, and crafting.

However, one coping mechanism that has emerged as particularly powerful is writing. Many caregivers pursue writing because it can be a low-cost, flexible, and private activity that is free from the social constraints that may arise with therapists or support groups.

Moreover, the beneficial effects of writing on the brain are well documented. As neurologist Judy Willis put it:

“The practice of writing can enhance the brain’s intake, processing, retaining, and retrieving of information… it promotes the brain’s attentive focus… boosts long-term memory, illuminates patterns, gives the brain time for reflection, and when well-guided, is a source of conceptual development and stimulus of the brain’s highest cognition.”

But how can writing help people cope and heal? A growing body of research in the field of positive psychology has found that writing about gratitude and positive experiences—also known as “gratitude journaling”—can help people feel more optimistic and enhance their overall well-being. It can also reduce depression, stress, and symptoms of physical illness. This is largely because experiencing gratitude can restructure cognitive processes that help individuals shift from negative to positive thinking, leading to enhanced well-being.

“Actually sitting down and thinking about someone or something that happened that day makes me realize how many things for which I am truly grateful,” wrote commenter Andrea Nancy Fox on the gratitude journal webpage of the Greater Good Science Center at the University of California, Berkeley. “Writing it down helped to make me think about it more.”

In contrast to the approaches taken in positive psychology, much of the existing literature has focused on the healing power of reflecting on one’s negative experiences through writing. In fact, studies show that individuals who have written about distressing life events through “expressive writing” have reported experiencing an impressive range of physical and emotional health benefits. Although expressive writing is frequently practiced in journaling, it can also be tied to memoir writing and fiction writing. These forms of creative writing can serve as constructive outlets for caregivers to actively engage with their emotions, process grief and loss, cope with adversity, and begin the healing process. Each type of writing can enable caregivers to experience these benefits in various ways:

  • Expressive writing can help caregivers explore and process emotions about painful experiences, which can increase self-awareness, reduce stress and depression, enhance overall psychological well-being, and improve physical health.
  • Grief memoirs can help writers extract meaning from their experiences and adopt new life perspectives, allowing them to rebuild their identities in the wake of loss.
  • Fiction writing gives authors the freedom to create more empowering narratives and explore their identities, while also allowing them to process their emotions and grief at a safe distance.

While each form of writing may be different, the act of writing itself can be a life-changing tool for self-discovery and self-empowerment, allowing caregivers to reclaim their unique narratives and envision new futures for themselves.

Expressive Writing

James W. Pennebaker, social psychologist and professor emeritus of psychology at the University of Texas at Austin, pioneered research into expressive writing, a practice in which individuals write about their deepest thoughts and feelings on negative, distressing, or traumatic experiences. This is intended to help the writer process difficult emotions, reflect on their challenges, gain insights, foster self-awareness, and enhance personal growth.

Studies have shown that it can also enable the writer to experience a shift in perspective, allowing them to consider other people’s viewpoints, which is a key factor in the processing of negative emotions. This type of deeply emotional and personal writing has been closely associated with The Diary of a Young Girl by Anne Frank, as the author employed introspection and insight to reflect on her fears and traumas while living through the horrors of World War II.

In his first experiment, in 1986, Pennebaker asked participants to write about their deepest thoughts and feelings regarding a significant emotional issue that had affected them. Participants were instructed to connect the topic to their childhood, their relationships with others, and their past, present, and future. They were also informed that they could write without concern for spelling or grammar and that their writing would be confidential. He found that participants who wrote for 15 minutes a day over four consecutive days reported improvements in physical health six months later, with fewer doctor’s visits and sick days.

Pennebaker’s groundbreaking work spurred many other researchers to investigate expressive writing using variations of his protocol. In such experiments, writing groups were typically asked to practice expressive writing for three to five sessions, often over consecutive days, for 15 to 20 minutes. A comprehensive academic review of such studies showed that most participants experienced immediate increases in distress, negative mood, and physical symptoms. However, at follow-up, they reported both physical and emotional health benefits. The physical health outcomes included improvements in immune-system functioning, blood pressure, lung function, and liver function. The emotional benefits included improvements in mood, psychological well-being, depressive symptoms, and post-traumatic intrusion and avoidance symptoms. The studies also revealed various social and behavioral outcomes, including reduced absenteeism from work, quicker re-employment after job loss, and improvements in academic performance, sporting performance, and working memory.

Some researchers have specifically examined the impact of expressive writing on caregivers. A 2017 study found that depression was significantly reduced in spousal caregivers of cancer patients after they took part in expressive writing exercises. Furthermore, a 2008 study concluded that expressive writing reduced post-traumatic stress disorder symptoms in caregivers of people with psychosis.

But what are the underlying mechanisms that make expressive writing beneficial? There is no one answer, as studies have shown mixed results for several theories.

One possibility is that the individual experiences catharsis, a release of strong or repressed negative feelings. Another lies in the interplay between emotional inhibition and confrontation: repressing thoughts about traumatic events can lead to physiological stress, rumination, and longer-term illness, whereas confronting a trauma may decrease physiological stress and facilitate cognitive integration and understanding. A third possibility is that repeated exposure to the traumatic experience through expressive writing reduces negative emotional reactions to it. Lastly, expressive writing can enable the development of a coherent narrative through cognitive processing.

Pennebaker’s studies found that those who benefited the most from expressive writing were those who used more positive-emotion words, a moderate amount of negative-emotion words, and an increased number of words showing causation and insight, including “because,” “reason,” “understand,” and “realize.” The use of cognitive words indicates the writer is actively structuring, interpreting, and making connections about their negative experience. Creating a coherent narrative of an event can lead to more adaptive internal schemas and a more integrated understanding of it. It is widely argued that this narrative formation is essential for expressive writing to produce benefits.
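To make the word counting concrete, here is a minimal sketch in Python. It is a toy stand-in for Pennebaker’s actual instrument—the Linguistic Inquiry and Word Count (LIWC) software, which relies on large, validated dictionaries—and the tiny word lists below are hypothetical examples only:

```python
# Toy word-category counter in the spirit of Pennebaker's analyses.
# The word lists are hypothetical stand-ins, not validated dictionaries.
POSITIVE = {"happy", "grateful", "calm", "hope", "love"}
NEGATIVE = {"sad", "angry", "afraid", "guilty", "alone"}
COGNITIVE = {"because", "reason", "understand", "realize", "cause"}

def category_rates(entry: str) -> dict:
    """Return each category's share of the total words in a journal entry."""
    words = [w.strip(".,;:!?\"'()").lower() for w in entry.split()]
    words = [w for w in words if w]
    total = len(words) or 1  # guard against empty entries
    return {
        "positive": sum(w in POSITIVE for w in words) / total,
        "negative": sum(w in NEGATIVE for w in words) / total,
        "cognitive": sum(w in COGNITIVE for w in words) / total,
    }

sample = "I was sad and angry, but I realize now it happened because I was afraid."
print(category_rates(sample))
```

Tracking how these rates shift across successive entries—negative-emotion words easing, causal and insight words rising—is, in rough outline, how researchers infer that a writer is actively making sense of an experience.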

Many caregivers have practiced expressive writing in their journaling routines. Lisa M. Shulman, professor of neurology at the University of Maryland, started journaling when she was a caregiver for her husband, who passed away from cancer 18 months after his diagnosis. In her book, Before and After Loss: A Neurologist’s Perspective on Loss, Grief, and Our Brain, Shulman linked her healing experience to the aforementioned theories. She explained that practicing expressive writing in her journal entries allowed her to confront and examine what the loss of her husband meant to her, how it affected her identity and future, and what aspects of it were so painful. Seeing her experiences on paper demystified the distressing events and decreased the negative impact they had on her.

She writes in the book’s preface:

“In the early months of writing, I contemplated turning my journal into a book. As I moved into the second year without Bill, I questioned whether I had anything original to contribute and whether the time had come to look forward, not back. Yet the urge to document memories and insights had its own life. Notes scribbled by day and night piled up around me. And with time the book’s purpose came into focus. The goal is less memoir, more guidebook, to shed light on common experiences of traumatic loss and how our brain responds and heals.”

Furthermore, continuously writing, reading, and re-reading her entries helped her emotions become more organized and coherent. This allowed her to examine them from new perspectives and give them new meanings, which enabled her to create new interpretations of her experiences and fit distressing memories into her life narrative. She argued that grieving individuals need these new interpretations to redefine themselves and reshape their identities after loss, as doing so can help them envision new possibilities and plan a new future.

Shulman concluded that journaling gave voice to her turmoil, integrated loss into her life story, and was an essential tool in her healing process. It was both creative and therapeutic, with powerful effects on the brain:

“Journaling and other meditative practices are potent tools to buffer the emotional charge of traumatic loss. Our decision-making and behavior are strongly influenced by this emotional charge. The amygdala, nestled deep in the brain, assigns an emotional weight to events, tagging experiences with their emotional power, their intrinsic positive vs. negative quality, their relative attractiveness or aversiveness. This assigned emotional valence has strong effects on our decisions, actions, and behavior. Judicious decision-making depends upon our capacity to buffer loss’s emotional charge; we need personal strategies like journaling to transform the emotional power of loss.”

Caregivers who are interested in expressive writing can get started with Pennebaker’s Writing to Heal: A Guided Journal for Recovering From Trauma and Emotional Upheaval. He also provides some insightful guidance in his brief interview with Nelda Yaw Buckman.

Memoir Writing

Given that expressive writing can have healing benefits, some studies have investigated the therapeutic effects of writing grief memoirs. Although empirical research on the topic is limited, some have argued that longer-form emotional writing with a beginning, middle, and ending—written free of the time restrictions of Pennebaker’s expressive writing protocol—can help the writer construct narrative coherence, find meaning, reclaim agency, and rebuild their identity in the wake of distressing life events.

In her 2006 book, The Caregiver’s Tale: Loss and Renewal in Memoirs of Family Life, Ann Burack-Weiss provided a qualitative analysis of more than 100 caregiver memoirs. She found that many authors based their memoirs on their journal entries, and that most memoirs were written after the care recipient had passed away. She discovered that the memoirs shared numerous universal themes, as they described the emotional toll of caregiving, the struggle to make tough decisions, the caregiver’s insecurities, and the search for meaning in suffering. She concluded that the main overarching story was that of the caregiver who undergoes a journey that begins with sorrow and culminates in a life-transforming experience of self-discovery, in which they redefine their priorities and gain a new perspective on life.

As Burack-Weiss put it:

“Looking backward, after time has distilled the significance of the event, allows individuals who have provided care to uncover the thoughts and feelings that exist below the radar of scientific inquiry, to reflect upon the impact of the situation on the rest of their lives, and to extract meaning from the experience… and to shape the narrative to their own voices.”

In fact, Burack-Weiss found that by the end of most caregiver memoirs, the authors have learned to make peace with the past, let go of resentment, come to terms with unresolved emotions, discover purpose and pleasure in life, and emerge from the shadow of paralyzing anguish. The reader understands that the author’s life continues and that the caregiving experience has taken on new meaning for them.

In her 2020 article, researcher Katrin Den Elzen connected grief narrative with memoir writing through the work of leading bereavement scholar and practicing clinician Robert Neimeyer, who argues that meaning-making is at the core of grief dynamics. His model proposes that the bereaved can make sense of their loss by building new narratives that integrate the loss into their life story. They must also bridge the past to the present and the future by maintaining a continued connection to what was lost. By actively engaging with their pain, the bereaved can find meaning and experience a transformation of their grief, which may lead to personal growth, resilience, and wisdom.

Neimeyer posits that the narratives of our lives comprise three storylines: the external narrative (the objective story), the internal narrative (the inner emotional story), and the reflexive narrative (the meaning-making story). Many memoirists’ depictions of the situation, the events, and the story align with this model.

In fact, Den Elzen included these three storylines in her memoir, My Decision, which tells the story of her husband, who developed a brain cyst and was fully paralyzed for the eight months before his death. Her memoir’s external narrative depicted her husband’s affliction, while the internal narrative depicted her emotional response to it. In the reflexive, meaning-making story, one of the topics she reflected upon was her frustration with the hospital system, which viewed her husband as an illness rather than a human being. Writing the reflexive story was what enabled her to find the deeper emotional truth in her recollections.

In addition, throughout her writing process, she drew upon the principles of the dialogical self theory, which proposes that the self is not a singular being but instead encompasses numerous “I-positions” that engage in an inner dialogue. In her memoir, these identities included I-as-wife and I-as-widow. Exploring and reconciling these conflicting pre- and post-loss perspectives allowed her to construct narrative coherence, rebuild her identity, and find renewed purpose.

Den Elzen added that redrafting painful experiences multiple times over several months was beneficial because it helped her gain distance from the events, allowing her to externalize them. In the context of narrative therapy, “externalization” is the process by which the grievous event becomes separate from the individual. For Den Elzen, this repeated exposure meant she no longer relived the events every time she thought of them. This change in perspective led to a major shift in her relationship with loss, resulting in immense healing, inner peace, and a sense of freedom.

Budding memoir writers can learn from Jessica Handler’s book Braving the Fire: A Guide to Writing About Grief and Loss, or follow tips from Marion Roach Smith, founder of The Memoir Project.

Fiction Writing

However, some caregivers might not feel comfortable writing a memoir if they’re still too close to the anguish or don’t want to expose or upset loved ones. This may explain why some caregivers have become fiction authors, with many centering their novels on fictional accounts of the caregiving experience.

One example is author Irene Frances Olson, who was a caregiver for her father when he had Alzheimer’s disease. She drew upon some of her own experiences while writing her novel, Requiem for the Status Quo. The book tells the fictional story of Colleen, who becomes a caregiver for her father, Patrick, who, like Olson’s father, has Alzheimer’s disease. When family members who are in denial about his condition are unwilling to help her provide care, she is forced to tap into her inner strengths and abilities, leading to moments of self-discovery and victories for both herself and her father along the way.

But what is it about writing fiction that may help caregivers cope and heal? Fiction writing, in the context of narrative therapy, enables the writer to recount their story in a way that allows them to take control over the narrative. Repurposing personal experiences into fiction can empower the writer to shift their harmful storylines and thought patterns toward more positive and constructive ones. Fictionalizing events can also allow a stepping back from painful experiences, which may enable an examination of traumas or repressed thoughts at a safe distance. This can help increase self-awareness, redefine one’s identity, enable meaning finding, and promote healing.

For instance, a caregiver facing various anxieties, frustrations, and stressors may create a character who is undergoing similar hardships. One such example is Colleen, the fictional protagonist created by Olson. Fictionalizing such a character can help the author externalize distressing situations, explore their emotions, and identify their own resilience, strengths, and unique qualities. Furthermore, through character development, the author can expand on the protagonist’s values and motivations, which may allow more favorable storylines about their hopes, dreams, and aspirations to emerge. Fiction writing can also be done in first person or third person, the latter of which can create a psychological distance that allows the author to bypass their inner critic and shift their narrative toward more positive outcomes.

Through such exploration and retelling, the author can make sense of their life and reframe problematic storylines and negative “identity stories.” As a result, a caregiver who initially viewed themselves as a victim may ultimately reframe themselves as a survivor, healer, warrior, or hero, thereby constructing a more empowering narrative about their life.

Jessica Lourey, author of the book Rewrite Your Life: Discover Your Truth Through the Healing Power of Fiction, wrote her first novel in 2006 as a coping mechanism after her husband’s death. Since then, she has written more than 20 novels, allowing her to continue healing across the decades. As she put it:

“The healing I experienced makes sense when you consider Dr. Pennebaker’s discovery that two elements above all else increase the therapeutic value of writing: creating a coherent narrative and shifting perspective. These are not coincidentally the cornerstones of short story and novel writing. Writers call them plot and point of view… Writing fiction… lends empathy to your perspective and provides us practice in controlling attention, emotion, and outcome. We heal when we transmute the chaos of life into the structure of a novel…”

Other bereaved individuals choose to write fiction privately in their journal entries instead of publishing book-length novels. In her 2021 research article, Angela Matthews, a former lecturer at the University of Michigan, described how she used journaling to cope with the loss of her son. She initially recorded good and bad memories of him, which helped her accept her loss. However, when she found it difficult to put some of the darker memories into words, she revised her journal entries, rearranged the order of events, gave people pseudonyms, combined scenes, added fictional experiences, and wrote in the third person. “I found that writing some of my most traumatic experiences as fiction helped me acknowledge and process my grief but still maintain a protective distance from it,” she writes. “Fictionalizing my story also allowed me to look at my experiences in a new way.”

Rewriting themes, characters, and plots through fiction can be a cathartic part of the healing process that helps caregivers feel less lethargic, angry, or depressed. It can provide a space to think about painful moments while also acknowledging them within the larger landscape of traumatic loss. This can be a crucial step in the transformation of grief.

Those aspiring to write fiction can read Jessica Lourey’s Rewrite Your Life: Discover Your Truth Through the Healing Power of Fiction, or get inspired by the moving presentation she gave at TEDxRapidCity.

Start Your Healing Journey Through Writing

Caregivers enrich the lives of their loved ones by helping them navigate life’s roughest seas. They bring light into their lives during the darkest of times. They uplift them when they feel weighed down by the burden of illness. What caregivers contribute to the world is invaluable.

As vital and heroic as they are, caregivers are only human, and without self-care, they can feel broken and exhausted amidst a whirlpool of conflicting emotions. But the evidence is clear: Writing can have healing powers. So if you are a caregiver, go for it. Pour your heart onto a page. Create a window into your life. Build a safe place to process, examine, and reflect. You might find yourself at the beginning of a healing journey.

The Evolution of Mankind’s First Voice: How Drums Shape the Human Story

From the earliest beats of human civilization to the electric pulse of modern music, no instrument has carried the weight of human experience quite like the drum. It is one of the oldest instruments known to humankind, a vessel for rhythm that transcends borders, cultures, and centuries. Unlike string or wind instruments, whose melodies require delicate skill or intricate craftsmanship, the drum speaks a more primal language—the heartbeats of every living organism, thunderclaps in a storm, the rhythmic pounding of waves, and other patterns of timed sounds and silences.

Throughout history, drums have occupied a unique place in the cultural, psychological, and social lives of societies. They are tools of ritual and celebration, instruments of war and peace, and mediums for storytelling and spiritual transcendence. The human fascination with rhythm is rooted in our pattern-identifying abilities—abilities that have set our species apart from other organisms. Nowhere is this appetite for patterned sound more apparent than in the omnipresence of drums—from the gentle beat of a hand against a taut skin to the thunderous roar of modern drum kits.

The magic of the drum lies not just in its social role but in its very design. Scientifically, a drum produces sound when a stretched membrane, typically made of animal hide or synthetic material, vibrates after being struck. This vibration sends pressure waves through the air, creating sound that varies depending on the drum’s size, shape, material, and tension. These seemingly simple instruments are capable of a vast range of tones, from sharp, cracking snaps to deep, resonant booms. It is this dynamic versatility that has helped the drum endure as an essential part of the human story.
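For the mathematically curious, the physics can be sketched with a classic result from acoustics: the fundamental frequency of an idealized circular drumhead. This is a simplification—it ignores the shell, the surrounding air, and damping—but it shows why size and tension matter:

$$f_{01} = \frac{2.405}{2\pi a}\sqrt{\frac{T}{\sigma}}$$

Here $a$ is the membrane’s radius, $T$ its tension per unit length, $\sigma$ its mass per unit area, and 2.405 the first zero of the Bessel function $J_0$. Tightening the head or shrinking the drum raises the pitch, while a larger, looser head booms lower—exactly the behavior drum makers and players exploit.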

A Sound Born in Prehistory

The drum’s origins stretch back into prehistory, with archaeological discoveries suggesting that early humans were making and playing drums as far back as 6,000 years ago, during the Neolithic period. These early, rudimentary percussion instruments marked a profound development in human culture. Often fashioned from hollowed-out logs or gourds and covered with animal hides, they were built from materials that early humans could readily gather and work.

Paintings and hieroglyphs found in ancient Egyptian tombs dating to around 3000 BCE depict drummers participating in religious ceremonies and celebrations. In ancient Mesopotamia, drums were used for religious and military applications. Similarly, archaeological evidence from China indicates that drums were used as early as 2000 BCE. This suggests that their influence spread quickly and organically, as cultures recognized their power to carry messages, accompany rituals, and unify communities.

It is easy to understand why drums emerged as a prominent instrument. The rhythmic beating of a drum mimics the innate biological patterns of life itself—the human heartbeat, the steady patter of rain, the cyclical crash of waves. “The brain rhythms of musical performers and their audiences have been measured in concert settings,” writes Nina Kraus, a neuroscientist at Northwestern University and author of the book Of Sound Mind: How Our Brain Constructs a Meaningful Sonic World. “The brain rhythms tend to synchronize, and the more synchronization between performer and listener, the more listeners report enjoying the performance.”

In societies without written language or electronic communication, drums were also practical and effective tools for signaling, whether calling warriors to battle or gathering people for communal rituals. The drum’s sound traveled across distances with a clarity that few early instruments could match, making it invaluable for both practical and spiritual purposes.

Rhythm Across the World

As human civilizations grew and diversified, so too did their musical instruments, including drums. Distinct forms of the percussive instrument appeared in every inhabited corner of the globe, each shaped by the culture, beliefs, and available materials of its makers. In Africa, for example, often considered the cradle of drumming traditions, instruments like the djembe, dunun, and the talking drum played central roles in daily life, religious rites, and oral storytelling. The talking drum, in particular, was designed to mimic human speech, capable of conveying complex messages across vast distances.

In the Americas, Native American cultures revered the drum as a sacred object, using large, communal drums in powwows and smaller water drums in ritual ceremonies. The steady, repetitive rhythm of these instruments was believed to connect the physical and spiritual worlds, creating a link between participants and their ancestors.

In the Middle East, drums such as the doumbek and frame drum held an equally sacred place, appearing in religious ceremonies and court entertainment as far back as ancient Mesopotamia. Across Asia, Japanese taiko drums echoed in Shinto rituals and festivals, while in India, the complex rhythms of the tabla and mridangam expressed and preserved classical music traditions.

Meanwhile, in Europe, drums initially served military and signaling purposes. Medieval armies used snare and bass drums to communicate orders and rally troops, while Renaissance courts gradually introduced percussion into their music. Over time, the drum’s role expanded from the battlefield to the stage, yet wherever it was played, it retained the rhythmic authority of its sharp, powerful echoes and booms.

Drummers and percussionists are also transformed by the rhythms they create. As Grateful Dead drummer Mickey Hart said, “A good groove releases adrenaline in your body. You feel uplifted, you feel centered, you feel calm, you feel powerful. You feel that energy. That’s what good drumming is all about.”

Drumming Into the Modern Era

As the modern world emerged, so did new forms of the drum. The drum kit, a staple of today’s pop music, was popularized in the early 20th century in the United States. Previously, percussion parts were typically divided among multiple musicians, each playing a single instrument. The invention of the bass drum pedal around 1909 allowed a single drummer to combine bass, snare, tom-toms, and cymbals into a compact, multi-functional setup. By placing a panoply of percussive parts under one player’s command, the modern drum kit forever changed the musical landscape.

With the rise of jazz in the early 20th century, drummers such as Gene Krupa and Buddy Rich transformed percussion into a lead instrument. They introduced syncopated rhythms, dynamic solos, and expressive improvisation, using the drum kit not just as a timekeeper but as a voice in its own right. “Buddy was known to use pretty high tension on all of his heads, as that helped give his sticks more rebound and the higher frequencies cut through the horn section,” writes Chris Wakelin, a product specialist at Remo, a California-based drum manufacturer.

As rock and roll exploded in the 1950s and ’60s, drummers such as Ringo Starr of The Beatles and John Bonham of Led Zeppelin redefined what percussion could be, providing the pounding heartbeats of songs that would come to define generations. The raw energy of punk rock in the 1970s introduced another evolutionary leap, with drummers embracing speed, aggression, and minimalism to match the genre’s rebellious ethos.

Meanwhile, in funk and hip-hop, drummers like Clyde Stubblefield—renowned for his groundbreaking rhythms with James Brown—laid down grooves that would underpin decades of popular music. Electronic drum machines and digital sampling would later reshape the sonic possibilities of drumming, yet the essential principle—striking a surface to create rhythm—remained unchanged.

Today, the drum kit stands alongside ancient percussion instruments in a fascinating coexistence of the old and the new. Musicians frequently blend traditional hand drums, such as djembes, congas, and bongos, into modern compositions, hybridizing the technology of the modern era with conventional folk rhythms and styles.

Timeless Voices of Tradition

Despite these transformations, many drums—much like horseshoe crabs—have remained essentially unchanged, their simple forms as influential as ever. In Japan, the taiko drum is still handcrafted using ancient methods and remains a vital part of both religious ceremonies and theatrical performances. West Africa’s djembe continues to play its ancestral role in social gatherings, rites of passage, and healing rituals, its powerful resonance as meaningful now as it was generations ago.

In Ireland, the bodhrán—a handheld frame drum—persists as a central element in traditional folk music, while in Nigeria, the talking drum still “speaks” at festivals, funerals, and celebrations. These drums are more than instruments: They are carriers of history, embodying the voices of ancestors and the spirit of place.

“One of the unique features of the talking drum instruments is their [ability to closely imitate] the rhythms and intonations of the spoken language,” writes Ushe Mike Ushe, a lecturer at the National Open University of Nigeria in Lagos, in the International Journal of Philosophy and Theology. “The drums reproduce the sounds of proverbs or praise songs through a skilled performer or specialized ‘drum language.’ The specific pattern of drumming and rhythms is closely linked with spiritual beings or Ogun associated with the traditional Yoruba belief system.”

An Enduring Beat

The history of the drum is ultimately the history of human connection. It predates written language, has traveled through the rise and fall of empires, and continues to shape our present. From ancient ritualistic ceremonies under open skies to modern stadium concerts illuminated by dazzling lights, the drum remains a constant—a beating heart in the collective body of humanity.

As musical traditions continue to evolve, so too will the drum. Yet it seems inevitable that this most ancient of instruments will endure, its primal rhythms forever resonating in the human soul. Wherever people gather to dance, mourn, celebrate, or protest, there will be rhythm, and at its core, there will be the steady, undeniable voice of the drum.

Why Every Student Needs Human Ecology Education Now

Human ecology education is a transformative program that focuses on the interplay between humans and the human ecosystem we share. It is an interdisciplinary educational field that combines physical and psycho-social life skills—daily living, social presentation and protocol, navigating cultural differences, and ethical decision-making—to develop positive relationships for living in our world.

By first teaching the science and responsibilities of caring for each life, human ecology education empowers individuals to build collective human sustainability. Because the lessons are lived daily, the healthy rhythms and habits of life within family and community are learned, repeated in different contexts, shared for life, and naturally inherited by the next generation. The impact of human ecology programs is thus exponential and generationally ongoing.

Human ecology education emphasizes reciprocal influence and interdependence—the “we, us, and our” of our lives. It enables us to identify with others as fellow humans rather than by characteristics such as race, age, height, gender, or ethnicity. It goes beyond self-focused professional education by considering human relationships in the context of the other sixteen hours of the day.

To quote Anna Trupiano, writing for the Michigan Daily, “The fact of the matter is, nothing truly prepares us for college, and a lot of us end up ‘winging it’ just as I have. And once we’re in college, we are met with the same dilemma—college doesn’t equip us for the rest of our lives.”

A continuous K-12, age-related human ecology program equips students to transition to adulthood with the maturity and skills to live independently while navigating complex social systems at all scales. Graduates will have learned precisely what sharing means and how and why it ultimately benefits them, and they will know how to be self-sufficient and resilient as they face changing life stages, unforeseen events, or diminishing personal and local resources.

Evolution of Human Ecology Education

The roots of human ecology education can be traced back to the early Bildung folk education movements in Europe, which share common roots with many human-centered social and political movements worldwide. In 1862, during the early years of the Lincoln administration and the Agricultural Age in America, the Morrill Act was passed. The act granted federal land to the states to establish new colleges, which offered instruction in agriculture, what came to be known as home economics, and other subjects.

Subsequently, mandatory home economics programs were introduced for girls in the lower grades of public schools, as were industrial and technological courses for boys. As the Western states developed, the act enabled homesteaders and rural residents to acquire the essential life skills and crop knowledge needed for self-sufficiency. In the Bildung tradition, practical home economics lessons evolved to include social skills, finance, and civic participation.

Then, in a history-changing moment, the American educator Myles Horton, who had become familiar with Danish Bildung education in Europe, established the Highlander Folk School in Tennessee in 1932 to teach the Bildung concepts of social agency and empowerment. There, many future civil rights leaders of the South, such as Martin Luther King Jr. and Rosa Parks, developed their initial ideas of nonviolent protest; they learned how to use their personal agency for public social progress, much as the European peasant class had done. For two centuries, this education has spread outward from Denmark, and the Scandinavian countries have prioritized it in their public schools. They also consistently rank at the top of global happiness and well-being ratings.

The lessons build trust in personal agency, generate empowerment, and drive confidence in moving forward and taking constructive action. Learning in a diverse group environment, such as a school, fosters collective trust and long-term community resilience. As Fortune 500 coach Peggy Klaus writes in her 2008 book, The Hard Truth About Soft Skills, “Soft skills encompass personal, social, communication, and self-management behaviors. They cover a wide spectrum of abilities and traits: being self-aware, trustworthiness, conscientiousness, adaptability, critical thinking, attitude, initiative, empathy, confidence, integrity, self-control, organizational awareness, likability, influence, risk taking, problem solving, leadership, time management, and then some.”

The Gender Thing

Although women have made significant gains throughout the 20th and 21st centuries, gender roles in home economics and agriculture remain essentially unchanged from those of the Agricultural Age. Even though requiring home economics education for girls brought improvements in physical and mental health, sanitation, and life span for all people, the gender disparity between home and work that disadvantaged women then still exists today. This is made clear in the World Economic Forum’s Global Gender Gap Report 2024.

Home economics remained an “only girls” program until the 1970s and ’80s, then diminished as women worked outside the home and rebelled against the “stir and stitch” image of the unpaid, homebound “happy housewife.” However, as women entered professions by choice or necessity, the quality of home life, health, and household management suffered for many families. Men, focused on their professional lives, were not inclined to share the unpaid workload. Children have paid the price.

In the 1970s and 1980s, home economics programs were removed from schools, and their classrooms and funding were redirected to new technology labs in an act of educational eminent domain. Alas, it was an era when administrators were predominantly male. Thereafter, whether by cause or by correlation, more families began to fragment, child care became a national problem, and more children became physically and mentally disadvantaged.

We are now in the second generation since home, family, and living education were eliminated from schools. We see the fallout in erroneous beliefs about sustaining health—vaccine refusal, for example—as well as in obesity statistics, shorter life spans, and rising child depression and violence. The U.S. lags behind other advanced nations in its citizens’ mental and physical health, poverty rates, and life expectancy. These losses cut deeper now that society is more complex and challenging; many fall behind by default. Only recently has personal financial literacy regained some traction in elementary and secondary schools. In the U.S., we follow the money first.

Cornell University met these challenges by developing its human ecology program to restore this missing education in life and living. It also broadened the discipline to address 21st-century human challenges, including limited resources, growing income disparity, and climate adaptation. Essentially, Cornell combined the practical living knowledge of home economics with the self-actualization of Bildung education, retaining home economics’ foundational physical and home health content while adding interdisciplinary pedagogy from sociology and psychology to address the realities of urbanization and increased population diversity.

Human ecology education is scarce at all levels of the U.S. education system. The Society for Human Ecology recognizes only 43 colleges and universities in the country for their programs, and few of those have departments explicitly named “Human Ecology.” At the secondary level, human ecology is offered in only two public high schools—Syosset and Niskayuna, both in New York—while Cornell University anchors the field in higher education. Several private schools and international institutions include human ecology content under other titles, and one U.S. college, the College of the Atlantic in Maine, considers it so essential to human life that it offers just one major: Human Ecology.

Key Values of Human Ecology Education

Abraham Maslow identified and prioritized the stages of shared human needs in his famous “Hierarchy of Needs” pyramid. The chart illustrates the human life stages and how each corresponds to human growth, from basic life needs to professional esteem and accomplishment, culminating in self-actualization.

While Maslow’s hierarchy has drawn criticism over how cultural differences influence its sequence, human ecology education programs, culturally tailored to fit, provide an ideal vehicle for accomplishing the stages in Maslow’s pyramid, from self-sufficiency and resilience to social integration, esteem, and personal empowerment. This sequence is particularly important when applied to children.

Human ecology education guides students through all the complex physical and psycho-social elements of development before graduation and adulthood, such as navigating social systems, resource management, professional growth, and social ethics. It provides the life knowledge needed at each stage and develops commonly shared social perceptions early on that help bind communities together in adulthood. For each individual, human ecology programs serve as a buffer against the lifetime stress that builds from disadvantage or discrimination and that can lead to health problems and long-term care needs.

Like a ship’s rudder, human ecology education helps students develop an internal decision-making framework during their formative years. In the early grades, the focus is on meeting individual needs, such as the knowledge and skills needed for food, clothing, and shelter, to ensure personal health and safety. These are called “negative needs” because we cannot survive without them, yet we don’t think about them when they are met.

Human ecology lessons, encompassing both the physical life skills that meet basic needs and the “soft” psycho-social skills, are experiential, allowing students to see and feel the benefits realistically. The classroom and lab activities instill teamwork and spark students’ interest in science, math, economics, and human health. Each sequential course becomes more complex as students mature, their world widens, and they transition to adulthood.

Several organizations offer curricula that help individuals navigate the human ecosystem and develop critical skills, ranging from home and family survival to social mobility and environmental preservation. Examples include Learning Mole, Notes From a Kitchen, and Teach Simple for survival and quality-of-life skills; the ARISE Foundation for social mobility skills; and PBS Learning Media, which is excellent for lessons on the great outdoors and climate resilience.

As young adults, students emerge from high schools with human ecology programs resilient and able to recognize opportunities, seek out resources, intelligently weigh pros and cons, and distinguish between short-term and long-term goals. They understand cause and effect, accept responsibility, and welcome change while maintaining their integrity and that of their families. Knowledge of time and task management, consumer protection, law, finance, health, housing, communication, transportation, and our complex state and national social systems is critical for independent living at any age in this multicultural, transitional world. It’s complicated.

These are complex, high-risk areas of life; that’s why, beginning early, each person needs formal education in how to navigate this stormy human sea. The alternative—depending on social osmosis or trial and error—is simply dysfunctional. Because many young people lack this life education, more are failing to “launch” their lives. It is often impossible to make up for lost progress later.

Community college human ecology courses are lifesavers for first-year students without a K-12 human ecology education, who may be on the doorstep of living alone. New community college students are more likely to come from marginalized groups, complex urban environments, or lower income levels; many are single parents, new immigrants, formerly institutionalized, veterans, or simply eighteen and on their own for the first time. These students must quickly learn how to assimilate, become independent, and plan a new life as they transition into a broader, unfamiliar culture to find a future.

The State of Human Ecology Education

Human ecology education, whether acquired through the school of hard knocks or by educational design, is integral to everyone’s success. However, there are problems. First, when human ecology is offered in higher education, it is usually considered a psycho-social discipline and fiercely guarded within those departments; practical life skills are therefore not included. What could be less intellectual, the thinking goes, than learning to read a food label, comprehend a lease, or select a health insurance plan—key lessons, given that medical debt is a significant cause of bankruptcies?

Leaving out that content abandons the field’s home economics foundation and implies that someone else will be at home dealing with the inescapable tasks responsible for good health and sanctuary. That higher-education planning mistake creates hardships for our society, in which 30 to 50 percent of people, depending on age, live alone.

Second, since college students often lack the life skills they should have acquired in elementary and secondary schools, the gap generates a list of problems for college presidents: low attendance, decreasing state funding, declining transfer and graduation rates, and costly student support programs, loans, food programs, crisis counseling, and campus crime. Inexplicably, little is on their list for preventing these problems by teaching students how to live independently, stay healthy, and remain resilient and on track.

Although college presidents recognize that teaching is part of their mission, many devote institutional resources to research and career development. Every college or university should require all freshman students, regardless of age, to complete human ecology coursework to graduate or transfer. Human ecology education is life insurance, literally. Incorporating math, English, science, and economics into daily life experiences provides a foundation for and increases comprehension of those academic requirements.

The college problem list, without human ecology, would be more tolerable if colleges and universities reached down and supported the teaching of human ecology in elementary and secondary schools to prevent personal failure and to prepare students for college success and independent living. For example, they could train more human ecology teachers, offer in-service human ecology programs for teachers in related disciplines, and expand existing programs to include knowledge of life and living.

The third problem indicating the need for human ecology for all is that those who struggle the most with independent living and homelessness are men, a demographic that has traditionally prized professional or trade skills over life skills. Claudia Goldin, the Harvard economist who won the Nobel Prize in Economics in 2023, has made gender equity on the home front a focus of her research, showing that daily self-sufficiency is a human social and economic necessity, not a gender-based cultural habit. This disconnect—the lack of the skills to meet personal needs—is a causal factor in male crime, violence, and homelessness statistics, as well as in low graduation rates, shorter life spans, obesity, and increasing health problems.

Colleges do not mandate human ecology as essential to every college education because most believe their schools exist solely for professional education, not personal development. This is the gender thing again, a holdover from eras of female subjugation in which everything concerning personal life at home was unpaid, taught by Mom, or deemed suitable only for the lower grades. However, giving birth does not qualify one to manage a household or raise children, and the lower grades have not offered life education since the 1980s. Now, with no one at home, professional child care often costs as much as or more than a mortgage payment, and it frequently fails to model resilience or a positive family life.

There are also problems in the lower grades, which face barriers such as limited resources, insufficient time, resistance to change, a lack of trained teachers, and the same old gender stereotyping—all of which prevent students from transitioning successfully into adulthood.

As we confront climate change and environmental losses, the need to prevent social and ecological decline through personal education is no longer optional. Preventive education is the long-term, bottom-up approach, best undertaken before life’s difficulties arrive. Treatment later to save lives is undoubtedly a needed intervention, but it is, by definition, a short-term, top-down triage action. It does not stop problems like homelessness, nor does it help the growing number of people burdened for life with adverse childhood experiences (ACEs) because they lacked the childhood care needed to meet health and safety needs.

Nationally, there are additional benefits. Human ecology education provides essential adaptation skills as climate disasters become more frequent and costly. Few states require even basic climate science education, and those that do often overlook personal climate adaptation skills. Additionally, as we struggle with political fragmentation and growing income disparity, the understanding—fostered by human ecology education—that all people share similar human needs helps unite voters.

That understanding ties generations and cultures together, building a cohesive nation. Imagine the possibilities for saving lives, preserving nature, and conserving community resources if all local public schools taught students, in realistic and practical terms, how to sustain and share community and cultural resources. The time has come to empower all people with the knowledge and skills they need to thrive in the 21st century.

The Rise of AI Warfare: How Autonomous Weapons and Cognitive Warfare Are Reshaping Global Military Strategy

In the 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response) nearly provokes a nuclear war between the United States and the Soviet Union; catastrophe is averted only through the ingenuity of a teenager (played by Matthew Broderick). In the first Terminator film, released a year later, a supercomputer called “Skynet,” built to safeguard America’s nuclear arsenal, decides to exterminate humanity because it perceives humans as a threat to its existence.

Although these films offered audiences grim scenarios of intelligent machines running amok, they were also prophetic. Artificial intelligence (AI) is now so commonplace that it’s routinely applied during a simple Google search, so its integration into military strategies is hardly a surprise. What we lack is an understanding of the capabilities of these high-tech weapons, both those that are ready for use and those in development. Nor are we prepared for systems with the capacity to transform warfare forever.

Throughout history, it is the human intelligence wielding a technology, not the technology itself, that has won or lost wars. That may change in the future, when human intelligence is focused instead on creating systems that are more capable on the battlefield than those of the adversary.

An “Exponential, Insurmountable Surprise”

Artificial intelligence isn’t a technology that can be easily detected, monitored, or banned, as Amir Husain, the founder and CEO of the AI company SparkCognition, pointed out in an essay for Media News. Integrating AI elements (visual recognition, language analysis, simulation-based prediction, and advanced forms of search) with existing technologies and platforms “can rapidly yield entirely new and unforeseen capabilities.” The result “can create exponential, insurmountable surprise,” Husain writes.

Advanced technology in warfare is already widespread. The use of uncrewed aerial vehicles (UAVs)—commonly known as drones—in military settings has set off warnings about “killer robots.” What happens when drones are no longer controlled by humans and can execute military missions on their own? These drones aren’t limited to the air; they can operate on the ground or underwater as well. The introduction of AI, effectively giving these weapons the capacity for autonomy, isn’t far off.

Moreover, they’re cheap to produce and cheap to purchase. The Russians are buying drones from Iran for use in their war in Ukraine, while the Ukrainians have built a cottage industry constructing drones of their own to use against the Russians. The relative ease with which a commercial drone can be converted to military use also blurs the line between commercial and military enterprises. At this point, though, humans are still in charge.

A similar blurring occurs in dual-use information-gathering systems: satellites, manned and unmanned aircraft, and ground and undersea radars and sensors all have both commercial and military applications. AI can process vast amounts of data from these systems and discern meaningful patterns, identifying changes that humans might never notice. American forces were stymied to some degree in the wars in Iraq and Afghanistan because they could not process such large amounts of data. Even now, remotely piloted UAVs use AI for autonomous takeoff, landing, and routine flight, leaving human operators free to concentrate on tactical decisions, such as selecting attack targets and executing attacks.

AI also allows these systems to operate rapidly, determining actions at speeds that are seldom possible when humans are part of the decision-making process. Decision-making speed has long been among the most important aspects of warfare, and if AI systems go head-to-head against humans on that score, AI will invariably come out ahead. Yet the possibility that AI systems will eliminate the human factor altogether terrifies people who don’t want to see an apocalyptic scenario move from celluloid into reality.

Automated Versus Autonomous

A distinction needs to be made between the terms “autonomous” and “automated.” If we are controlling the drone, the drone is automated. If the drone is programmed to act on its own initiative, it is autonomous. But does “autonomous weapon” describe the actual weapon (i.e., a missile on a drone) or the drone itself? Take, for example, the Global Hawk military UAV. It is automated insofar as it is controlled by an operator on the ground, yet if it loses communication with the ground, the Global Hawk can land on its own. Does that make it automated or autonomous? Or is it both?
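
The distinction is easy to make concrete. Below is a minimal sketch in Python, assuming nothing about any real UAV’s software (every name here is hypothetical), of a single control loop that is “automated” while a ground link is alive and becomes “autonomous” the moment the link drops, much as the Global Hawk is described as doing.

```python
# Hypothetical illustration only: one vehicle, two modes.
class GroundLink:
    """Stand-in for an operator's command uplink."""
    def __init__(self, commands):
        self.commands = list(commands)

    def alive(self):
        # The link "drops" once the operator's commands stop arriving.
        return bool(self.commands)

    def next_command(self):
        return self.commands.pop(0)

def control_loop(link):
    while True:
        if link.alive():
            # Automated: the human operator is in the loop.
            print("automated:", link.next_command())
        else:
            # Autonomous: the link is lost, and the aircraft decides
            # on its own to return and land.
            print("autonomous: link lost; landing without an operator")
            break

control_loop(GroundLink(["climb", "orbit", "photograph target"]))
```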

The most important question is whether the system is safety-critical. Translated, that means whether it has the decision-making capacity to use a weapon against a target without intervention from its human operator. It is possible, for example, for a drone to strike a static military target on its own (such as an enemy military base) but not a human target because of the fear that innocent civilians could be injured or killed as collateral damage. Many countries have already developed drones with real-time imagery capable of acting autonomously in the former instance, but not when it comes to human targets.

Drones aren’t the only weapons that can act autonomously. Military systems are being developed by the U.S., China, and several countries in Europe that can act autonomously in the air, on the ground, in water, and underwater with varying degrees of success.

Several types of autonomous helicopters designed so that a soldier can direct them in the field with a smartphone are in development in the U.S., Europe, and China. Autonomous ground vehicles, such as tanks and transport vehicles, and autonomous underwater vehicles are also in development. In almost all cases, however, the agencies developing these technologies are struggling to make the leap from development to operational implementation.

There are many reasons for the lack of success in bringing these technologies to maturity, including cost and unforeseen technical issues, but equally problematic are organizational and cultural barriers. The U.S. has, for instance, struggled to bring autonomous UAVs to operational status, primarily due to organizational infighting and prioritization in favor of manned aircraft.

The Future Warrior

On the battleground of the future, elite soldiers may rely on a head-up display fed by supercomputers carried in their backpacks, where an AI engine collects and routes a wealth of battlefield information. The data is instantly analyzed, streamlined, and fed back into the head-up display. This is one of many potential scenarios presented by U.S. Defense Department officials. The Pentagon has embraced a relatively simple concept: the “hyper-enabled operator.”

The objective of this concept is to give Special Forces “cognitive overmatch” on the battlefield, or “the ability to dominate the situation by making informed decisions faster than the opponent.” In other words, they will be able to make decisions based on the information they are receiving more rapidly than their enemy. The decision-making model for the military is called the “OODA loop” for “observe, orient, decide, act.” That will come about using computers that register all relevant data and distill them into actionable information through a simple interface like a head-up display.

This display will also offer a “visual environment translation” system designed to convert foreign language inputs into clear English in real time. Known as VITA, the system encompasses both a visual environment translation effort and voice-to-voice translation capabilities. The translation engine will allow the operator to “engage in effective conversations where it was previously impossible.”

VITA, which stands for Versatile Intelligent Translation Assistant, offers users language capabilities in Russian, Ukrainian, and Mandarin Chinese. Operators could, for example, use their smartphones to scan a street in a foreign country and immediately obtain a real-time translation of the street signs.

Adversary AI Systems

Military experts divide adversarial attacks into four categories: evasion, inference, poisoning, and extraction. These attacks are easily accomplished and often don’t require advanced computing skills. An enemy mounting an evasion attack tries to deceive an AI weapon so as to avoid detection: hiding a cyberattack, for example, or convincing a sensor that a tank is a school bus. Evasion may involve a new type of AI camouflage, such as strategically placed tape, that fools the system.

Inference attacks occur when an adversary acquires information about an AI system that enables evasive techniques. Poisoning attacks target AI systems during training, interfering with the datasets used to train military tools: mislabeling images of vehicles to dupe targeting systems, for instance, or manipulating maintenance data so that an imminent system failure is classified as regular operation.
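
As a rough illustration of the poisoning idea, the sketch below flips a fraction of labels in a toy training set before a model would ever see them; the file names, classes, and fraction are invented for illustration, and real poisoning attacks are considerably subtler.

```python
# Toy poisoning sketch: corrupt training labels before a model sees them.
import random

# Invented dataset: (file name, label) pairs for a targeting classifier.
dataset = [(f"img_{i:04d}.png", "tank") for i in range(500)] + \
          [(f"img_{i:04d}.png", "school_bus") for i in range(500, 1000)]

def poison_labels(data, fraction=0.1, seed=42):
    """Return a copy of `data` with a fraction of 'tank' labels flipped."""
    rng = random.Random(seed)
    poisoned = []
    for path, label in data:
        if label == "tank" and rng.random() < fraction:
            label = "school_bus"  # deliberately mislabeled
        poisoned.append((path, label))
    return poisoned

tainted = poison_labels(dataset)
print(sum(1 for _, label in tainted if label == "school_bus"))  # > 500
```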

Extraction attacks exploit access to the AI’s interface to learn enough about its operation to create a parallel model of the system. If AI systems are not secured against unauthorized users, an adversary could predict the decisions those systems will make and use the predictions to its advantage. For instance, it could anticipate how an AI-controlled unmanned system will respond to specific visual and electromagnetic stimuli and alter the system’s route and behavior accordingly.

Deceptive attacks have become increasingly common: image classification algorithms have been tricked into perceiving objects that aren’t there, misreading the content of images, and mistaking a turtle for a rifle, for instance. Similarly, autonomous vehicles could be forced to swerve into the wrong lane or speed through a stop sign.
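
For a sense of how little it takes, here is a minimal sketch of the fast gradient sign method (FGSM), a standard research technique for producing such deceptive images; it is illustrative only, and the pretrained model it assumes is a placeholder, not any system named in this article.

```python
# Illustrative only: perturb an image so a classifier misreads it.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged to raise the classifier's loss.

    Each pixel moves by at most `epsilon`, so the change is all but
    invisible to a human, yet it is often enough to flip the predicted
    class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage, assuming a pretrained classifier and a labeled input batch:
# adv = fgsm_perturb(model, images, labels)
# model(adv).argmax(dim=1)  # predictions often no longer match `labels`
```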

In 2019, China announced a new military strategy, Intelligentized Warfare, which utilizes AI. Officials of the Chinese People’s Liberation Army have stated that their forces can overtake the U.S. military by using AI. Among its intentions is to use this high-tech form of warfare to bring Taiwan under Chinese control without waging a conventional war. Notably, only a few of the many Chinese studies on intelligentized warfare focus on replacing guns with AI; rather, Chinese strategists have made no secret of their intention to control the enemy’s will directly.

That would include the U.S. president, members of Congress, combatant commanders, and citizens. “Intelligence dominance”—also known as cognitive warfare or “control of the brain”—is seen as the new battleground in intelligentized warfare, putting AI to a very different use than most American and allied discussions have envisioned. According to the Pentagon’s 2022 report on Chinese military developments, the People’s Liberation Army is being trained and equipped to use AI-enabled sensors and computer networks to “rapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.”

Controlling an adversary’s mind can affect not just perceptions of one’s surroundings but, ultimately, one’s decisions. For the People’s Liberation Army, cognitive warfare ranks alongside the other domains of conflict: air, land, and sea. In that respect, social media is considered a key battlefield.

Russia has also been developing its own AI capacity. As early as 2014, the Russians inaugurated a National Defense Control Center in Moscow, a centralized command post for assessing and responding to global threats. The center was designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

Russia has declared that leadership in AI is the key to global power; President Vladimir Putin famously remarked that whoever leads in artificial intelligence “will become the ruler of the world.” The Russians are already using AI in Ukraine to jam the wireless signals connecting Ukrainian drones to the satellites they rely on for navigation, causing the machines to lose their way and plummet to Earth. The Russian Ministry of Defense (MOD) has explored ways of developing AI systems for uncrewed air, maritime, and ground systems. At the same time, at least in the short term, official policy is predicated on the belief that humans must remain firmly in the loop.

Meanwhile, the Russians are trying to improve UAV capabilities with AI as a mechanism for command, control, and communications. MOD also emphasizes the use of AI for data collection and analysis as a natural evolution from the current “digital” combat technology and systems development.

“Raven Sentry”: AI in the U.S. War in Afghanistan

The use of AI on the battlefield by U.S. intelligence, while brief, showed promising results. “Raven Sentry,” an AI tool launched in 2019 by a team of American intelligence officers (known as the “nerd locker”), with help from Silicon Valley expertise, was intended to forecast insurgent attacks. The initial use of AI came at a time when U.S. bases were closing, troop numbers were falling, and intelligence resources were being diverted. Raven Sentry relied on open-source data.

“We noticed an opportunity presented by the increased number of commercial satellites and the availability of news reports on the Internet, the proliferation of social media postings, and messaging apps with massive membership,” says Col. Thomas Spahr, chief of staff of the Resolute Support J2 intelligence mission in Kabul, Afghanistan, from July 2019 to July 2020.

The AI tool also drew on 40 years of historical patterns of insurgent activity in Afghanistan, a record encompassing the Soviet occupation of the country in the 1980s. Environmental factors were considered as well. “Historically, insurgents attack on certain days of the year or holidays, for example, or during certain weather and illumination conditions,” Spahr notes. He adds, “The beauty of the AI is that it continues to update that template. The machine would learn as it absorbed more data.” Before its demise in 2021 (with the U.S. withdrawal from Afghanistan), Raven Sentry had demonstrated its feasibility, predicting insurgent attacks with 70 percent accuracy. The tool found that attacks were more likely when the temperature was above 4 degrees Celsius (39.2 degrees Fahrenheit), when lunar illumination was below 30 percent, and when there was no rain. Spahr was satisfied with the results: “We validated that commercially produced, unclassified information can yield predictive intelligence.”
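
Those published thresholds lend themselves to a simple rule layer. The sketch below restates them in Python as a toy illustration; it is not the actual Raven Sentry model, which fused far more open-source data, and the function and parameter names are hypothetical.

```python
# Toy restatement of the reported attack-pattern thresholds.
def matches_attack_pattern(temp_c, lunar_illum_pct, raining):
    """True when conditions match the reported historical pattern:
    warmer than 4 degrees C, lunar illumination under 30 percent, no rain."""
    return temp_c > 4.0 and lunar_illum_pct < 30.0 and not raining

# A warm, dark, dry night matches the pattern; a rainy one does not.
print(matches_attack_pattern(12.0, 15.0, raining=False))  # True
print(matches_attack_pattern(12.0, 15.0, raining=True))   # False
```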

Ukraine as Testing Ground for AI

Ever since the Russian invasion began in 2022, Ukraine has served as a testing ground for AI in warfare. Outgunned and outmanned, Ukrainian forces have resorted to improvisation, jury-rigging off-the-shelf devices into lethal autonomous weapons. The Russian invaders, too, have employed AI, conducting cyberattacks and deploying GPS-jamming systems.

Ukraine’s Saker Scout quadcopters “can find, identify, and attack 64 types of Russian ‘military objects’ on their own.” These drones are designed to operate autonomously, and unlike other drones that Ukrainian forces have deployed, Russia cannot jam them.

Using code found online and hobbyist computers like the Raspberry Pi, easily obtained from hardware stores, Ukrainians are able to construct innovative killer robots. Apart from drones, which can be operated with a smartphone, Ukrainians have built a gun turret with autonomous targeting that is operated with a PlayStation controller or a tablet. The gun, called Wolly because of its resemblance to the Pixar robot WALL-E, can auto-lock on a target up to 1,000 meters (3,280 feet) away and shift between preprogrammed positions to quickly cover a broad area.

The manufacturer is also developing a gun capable of hitting moving targets, automatically identifying them as they come over the horizon. The gun targets and aims automatically; all that’s left for the operator to do is press the button and shoot. Many Ukrainian drones, which look like those you can find at Walmart, are called first-person view (FPV) drones. Capable of flying 100 miles per hour, FPV drones have four propellers and a mounted camera that wirelessly sends footage of their flights back to operators. With a bomb on board, an FPV drone can be converted into a weapon capable of taking out a tank. They’re cheap, too; one manufacturer, Vyriy (named for a mythical land in Slavic folktales), charges $400 each, a small price to pay to disable a tank worth millions of dollars.

If one kamikaze drone is good, dozens of them are better insofar as the greater their number, the greater the chance there is of several reaching their targets. In nature, a swarm of ants behaves as a single living organism, whether the task is collecting food or building a nest. Analogously, a swarm of autonomous drones could act as a single organism—no humans necessary—carrying out a mission regardless of how many are disabled or crash to the ground or whether communication from the ground is disrupted or terminated.

Although humans are still in the “loop,” these weapons could equally be made entirely autonomous. In other words, they could decide which targets to strike without human intervention.

It isn’t as if Ukraine has adopted AI weaponry without any tech experience. In the words of New York Times reporter Paul Mozur, “Ukraine has been a bit of a back office for the global technology industry for a long time.” The country already had a substantial pool of coders and skilled experts who, under emergency conditions, were able to make the transition from civilian projects (such as a dating app) to military ones. As Mozur reported: “What they’re doing is they’re taking basic code that is around, combining it with some new data from the war, and making it into something entirely different, which is a weapon.”

The reality is, “there’s a lot of cool, exciting stuff happening in the big defense primes,” says P.W. Singer, an author who writes about war and tech. “There’s a lot of cool, exciting stuff happening in the big-tech Silicon Valley companies. There’s a lot of cool, exciting stuff happening in small startups.”

One of those smaller startups is Anduril. After selling Oculus, maker of the popular virtual reality headset, to Facebook (now Meta), Palmer Luckey, an entrepreneur in his early thirties, went on to found an AI weapons company that is supplying drones to Ukraine. “Ukraine is a very challenging environment to learn in,” he says. “I’ve heard various estimates from the Ukrainians themselves that any given drone typically has a lifespan of about four weeks. The question is, ‘Can you respond and adapt?’” Anduril, named after a sword in The Lord of the Rings, has sold its devices to ten countries, including the U.S.

“I had this belief that the major defense companies didn’t have the right talent or the right incentive structure to invest in things like artificial intelligence, autonomy, robotics,” says Luckey. His company’s drone, called ALTIUS, is intended to be fired out of a tube and unfold itself, extending its wings and tail; then, steering with a propeller, it acts like a plane capable of carrying a 30-pound warhead. Luckey believes that his approach will result in more AI weapons being built in less time and at a lower cost than could be achieved by traditional defense contractors like McDonnell Douglas.

Anduril, founded in 2017, is also developing the Dive-LD, a drone that will be used for surveys in littoral and deep water. “It’s an autonomous underwater vehicle that is able to go very, very long distances, dive to a depth of about 6,000 meters (almost 20,000 feet), which is deep enough to go to the bottom of almost any ocean,” says Luckey. Ukraine is already making its own sea drones—essentially jet skis packed with explosives—which have inflicted severe damage on the Russian navy in the Black Sea.

As Anduril’s CEO Brian Schimpf admits, the introduction of Anduril’s drones to Ukraine has yet to produce any significant results, although he believes that will change. Once they’re launched, these drones will not require guidance from an operator on the ground, making it difficult for the Russians to destroy or disable them by jamming their signals.

“The autonomy onboard is really what sets it apart,” Luckey says. “It’s not a remote-controlled plane. There’s a brain on it that is able to look for targets, identify targets, and fly into those targets.” However, for every innovative weapon system the Ukrainians develop, the Russians counter it with a system that renders it useless. “Technologies that worked really well even a few months ago are now constantly having to change,” says Jacquelyn Schneider, who studies military technology as a fellow at the Hoover Institution, “And the big difference I do see is that software changes the rate of change.”

The War in Gaza: Lavender

In their invasion of Gaza, the Israel Defense Forces (IDF) have increasingly relied on a program supported by artificial intelligence to target Hamas operatives, with problematic consequences. According to an April 2024 report by +972 Magazine (an Israeli-Palestinian publication) and Local Call, a Hebrew language news site, the IDF has been implementing a program known as “Lavender,” whose influence on the military’s operations is so profound that intelligence officials have essentially treated the outputs of the AI machine “as if it were a human decision.”

Lavender was developed by the elite Unit 8200, which is comparable to the National Security Agency in the U.S. or the Government Communications Headquarters in the UK.

The Israeli government has defended Lavender for its practicality and efficiency. “The Israeli military uses AI to augment the decision-making processes of human operators. This use is in accordance with international humanitarian law, as applied by the modern Armed Forces in many asymmetric wars since September 11, 2001,” says Magda Pacholska, a researcher at the TMC Asser Institute and specialist in the intersection between disruptive technologies and military law.

The data used to develop Lavender’s ability to identify militants came from the Gaza Strip’s more than 2.3 million residents, who were under intense surveillance before the invasion of Gaza in 2023.

The report states that as many as 37,000 Palestinians were designated as suspected militants and selected as potential targets. Lavender’s kill lists were prepared in advance of the invasion, launched in response to the Hamas attack of October 7, 2023, which left about 1,200 people in Israel dead and saw about 250 taken hostage. A related AI program, which tracked the movements of individuals on the Lavender list, was called “Where’s Daddy?” Sources for the +972 Magazine report said that initially there was “no requirement to thoroughly check why the machine made those choices (of targets) or to examine the raw intelligence data on which they were based.” The officials in charge, these sources said, acted as a “rubber stamp” for the machine’s decisions before authorizing a bombing. One intelligence officer who spoke to +972 admitted as much: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.”

It was already known that the Lavender program erred in about 10 percent of cases, meaning that some of the individuals selected as targets might have had no connection to Hamas or any other militant group. The strikes generally occurred at night, when the targeted individuals were more likely to be at home, which posed the risk of killing or wounding their families as well.

A score was created for each individual, ranging from 1 to 100, based on how closely he was linked to the armed wing of Hamas or Islamic Jihad. Those with a high score were killed along with their families and neighbors despite the fact that officers reportedly did little to verify the potential targets identified by Lavender, citing “efficiency” reasons. “This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that his colleagues had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

The IDF had previously used another AI system, “The Gospel,” described in an earlier investigation by the magazine as well as in the Israeli military’s own publications, to target buildings and structures suspected of harboring militants. “The Gospel” draws on millions of items of data, producing target lists more than 50 times faster than a team of human intelligence officers ever could. It was used to strike 100 targets a day in the first two months of the Gaza fighting, roughly five times more than in a similar conflict there a decade ago. Structures of political or military significance to Hamas are known as “power targets.”

Weaknesses of AI Weapons

If an AI weapon is autonomous, it needs the capacity for accurate perception. That is to say, if it mistakes a civilian car for a military target, its response speed is irrelevant: the civilians in the car die regardless. In many cases, of course, AI systems have excelled at perception as AI-powered machines and algorithms have been refined. When, for instance, the Russian military conducted a test of 80 UAVs simultaneously flying over Syrian battlefields with unified visualization, then-Russian Defense Minister Sergei Shoigu compared it to a “semi-fantastic film” that revealed all potential targets.

But problems can creep in. In designing an AI weapon, developers first need access to data. Many AI systems are trained on data that has been labeled, usually by a human expert (e.g., tagging scenes that include an air defense battery). An AI’s image-processing capability won’t function well when given images that differ from its training set, such as pictures produced in poor lighting, at an oblique angle, or partially obscured. AI recognition systems don’t understand what an image is; rather, they learn the textures and gradients of its pixels. That means an AI system may correctly recognize part of an image but not its entirety, which can result in misclassification.

To better defend AI against deceptive images, engineers subject systems to “adversarial training.” This involves feeding a classifier adversarial images during training so it learns to recognize and ignore attempted manipulations. Research by Nicolas Papernot, a graduate student at Pennsylvania State University, shows that even a system bolstered by adversarial training may prove ineffective when overwhelmed by the sheer volume of adversarial images. Such images exploit a feature found in many AI systems known as “decision boundaries.”

These boundaries are the invisible rules that tell a system whether it is perceiving a lion or a leopard. The objective is, in effect, to create a mental map with lions in one sector and leopards in another. The line dividing these two sectors (the border at which a lion becomes a leopard, or a leopard a lion) is known as the decision boundary. Jeff Clune, who has also studied adversarial training, remains dubious about such classification systems because they’re too arbitrary. “All you’re doing with these networks is training them to draw lines between clusters of data rather than deeply modeling what it is to be [a] leopard or a lion.”
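
Clune’s point can be demonstrated with a toy example. The sketch below, using scikit-learn on invented data, trains a linear classifier on two synthetic clusters standing in for “lions” and “leopards”; all the model learns is a separating line, and points near that line flip class with a tiny nudge, which is precisely the weakness adversarial images exploit.

```python
# Toy decision boundary: the classifier learns a line, not a concept.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
lions = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
leopards = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(100, 2))
X = np.vstack([lions, leopards])
y = np.array([0] * 100 + [1] * 100)  # 0 = lion, 1 = leopard

clf = LogisticRegression().fit(X, y)

# Points on opposite sides of the learned line get different labels,
# though nothing about "lion-ness" was ever modeled.
print(clf.predict([[1.5, 1.5], [2.5, 2.5]]))
```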

Large datasets are often labeled manually by contracted companies, and obtaining and sharing them is a challenge, especially for an organization that prefers to classify data and restrict access to it. A military dataset may contain images produced by thermal-imaging systems, for instance, but unless it is shared with developers, an AI weapon built on it won’t be as effective. Similarly, AI devices that rely on chatbots limited to a vocabulary of a few hundred words can’t fully replace a human with a far larger one.

AI systems are also hampered by their inability to multitask. A human can identify an enemy vehicle, decide on a weapon system to employ against it, predict its path, and then engage the target. An AI system can’t duplicate these steps. At this point, a system trained to identify a T-90 tank most likely would be unable to identify a Chinese Type 99 tank, despite the fact that they are both tanks and both tasks require image recognition. Many researchers are trying to solve this problem by working to enable systems to transfer their learning, but such systems are years away from production.

Predictably, adversaries will try to exploit these weaknesses by fooling image recognition engines and sensors. They may also mount cyberattacks to evade intrusion detection systems, or feed AI systems altered data that leads them to false conclusions.

U.S. Preparedness

The U.S. Department of Defense has been more partial to contracting for and building hardware than to implementing new technologies. All the same, the Air Force, in cooperation with Boeing, General Atomics, and a company called Kratos, is developing AI-powered drones. The Air Force is also testing pilotless XQ-58A Valkyrie experimental aircraft run by artificial intelligence. This next-generation drone is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets. The objective is to give human pilots a swarm of highly capable robot wingmen to deploy in battle. The Valkyrie is not autonomous, however. Although it will use AI and sensors to identify and evaluate enemy threats, it will still be up to pilots to decide whether or not to strike the target.

Pentagon officials may not be deploying autonomous weapons in battle yet, but they are testing and perfecting weapons that will not rely on human intervention. One example is the Army’s Project Convergence. In a test conducted as part of the project in August 2020 at the Yuma Proving Ground in Arizona, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then processed that data using AI-enabled computers at a base in Washington State. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. “This entire sequence was supposedly accomplished within 20 seconds,” the Congressional Research Service later reported.

In a U.S. program known as the Replicator initiative, the Pentagon said it planned to mass-produce thousands of autonomous drones. However, no official policy has condoned the use of autonomous weapons, which would allow devices to decide whether to strike a target without a human’s approval.

The Navy has an AI equivalent of Project Convergence called “Project Overmatch.” In the words of Adm. Michael Gilday, chief of naval operations, this is intended “to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.” Very little has been revealed about the project.

About 7,000 analysts employed by the National Security Agency (NSA) are trying to integrate AI into its operations, according to General Timothy Haugh, who serves as the NSA Director, U.S. Cyber Command Commander, and Chief of the Central Security Service. General Haugh has disclosed that as of 2024, the NSA is engaged in 170 AI projects, of which 10 are considered critical to national security. “Those other 160, we want to create opportunities for people to experiment, leverage, and compliantly use,” he says.

At present, though, AI is still regarded as a supplement to conventional platforms. AI is also envisioned as playing four additional roles: automating planning and strategy; fusing and interpreting signals more efficiently than humans or conventional systems can do; aiding space-based systems, mainly by collecting and synthesizing information to counter hypersonics; and enabling next-generation cyber and information warfare capabilities.

Ethics of AI Use

Although the use of autonomous weapons has been a subject of debate for decades, few observers expect any international deal to establish new regulations, especially as the U.S., China, Israel, Russia, and others race to develop even more advanced weapons. “The geopolitics makes it impossible,” says Alexander Kmentt, Austria’s top negotiator on autonomous weapons at the UN. “These weapons will be used, and they’ll be used in the military arsenal of pretty much everybody.”

Despite such challenges, Human Rights Watch has called for “the urgent negotiation and adoption of a legally binding instrument to prohibit and regulate autonomous weapons systems.” It has launched the Campaign to Stop Killer Robots, which the human rights organization says has been joined by more than 270 groups and 70 countries. Even though the controversy has centered around autonomous weapons, Brian Schimpf, CEO of AI drone manufacturer Anduril, has another perspective. He says AI weapons are “not about taking humans out of the loop. I don’t think that’s the right ethical framework. This is really about how we make human decision-makers more effective and more accountable [for] their decisions.”

All the same, autonomous AI weapons are already under development. Aside from the ethics of relying on a weapon to make life-and-death decisions, there is a problem with AI itself. Errors and miscalculations are relatively common. Algorithms underlying the operations of AI systems are capable of making mistakes—“hallucinations”—in which seemingly reasonable results turn out to be entirely illusory. That could have profound implications for deploying AI weapons that operate with deeply flawed instructions undetectable by human operators. In a particularly dystopian scenario, an adversary might substitute robot generals for human ones, forcing the U.S. to do the same, with the result that AI systems may be pitted against one another on the battlefield with unpredictable and possibly catastrophic consequences.

Dr. Elke Schwarz of Queen Mary University of London examines the ethical dimensions of AI in warfare through a framework that draws on political science theory and empirical investigation. She believes that the integration of AI-enabled weapon systems facilitates the objectification of human targets, leading to a heightened tolerance for collateral damage. In her view, automation can “weaken moral agency among operators of AI-enabled targeting systems, diminishing their capacity for ethical decision-making.” The bias toward autonomous systems may also encourage the defense industry to rush headlong into funding military AI, “influencing perceptions of responsible AI use in warfare.” She urges policymakers to weigh these risks before it’s too late.

“(T)he effect of AI is much, much more than the machine gun or plane. It is more like the shift from muscle power to machine power in the last Industrial Revolution,” says Peter Singer, a professor at Arizona State University and a strategist and senior fellow at the U.S. think tank New America, who has written extensively about AI and warfare. “I believe that the advent of AI on the software side and its application into robotics on the hardware side is the equivalent of the industrial revolution when we saw mechanization.” This transformation raises new questions “of right and wrong that we weren’t wrestling with before.” He advocates setting “frameworks to govern the use of AI in warfare” that should apply to those people who are working on the design and use.

One issue, which Singer calls “machine permissibility,” concerns what the machine should be allowed to do apart from human control. He calls attention to a second issue “that we’ve never dealt with before”: “machine accountability.” “If something happens, who do we hold responsible if it is the machine that takes the action? It’s very easy to understand that with a regular car; it’s harder to understand that with a so-called driverless car.” On the battlefield, would the machine be held responsible if the target was mistaken or if civilians were killed as a result?