How can we stop “mind-reading” technology from outing queer people against their will?


A researcher fits a woman with a brainwave scanning headset. Photo: Shutterstock

Neurological researchers have developed brain-computer interfaces (BCIs) that use artificial intelligence (AI) to “read minds” and detect people’s thoughts and sexual orientations. Though the tech is still very new and somewhat crude, it has ethicists thinking about the legal safeguards humans will need to protect their thoughts from anti-LGBTQ+ governments and unprincipled corporations.

Here’s how so-called “mind reading” tech works: Researchers install electrode devices or use other instruments to detect a person’s brain activity. They then observe the brain activity that occurs when a person speaks, listens, thinks, or performs a physical action. An AI computer program then detects patterns between these actions and the resulting brain activity. A computer interface then outputs the results to a device that either translates the person’s thoughts into words or their intended actions into movements of bionic limbs.
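The pattern-matching step described above can be pictured with a toy sketch. This is not real neuroscience code; the activity vectors, labels, and the nearest-pattern approach are all invented here purely to illustrate the idea of pairing recorded brain activity with known words and then classifying new activity by its closest stored pattern.

```python
def distance(a, b):
    # Squared Euclidean distance between two activity vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ToyDecoder:
    def __init__(self):
        self.patterns = {}  # label -> list of recorded activity vectors

    def observe(self, activity, label):
        # "Training" phase: record which activity accompanied which word.
        self.patterns.setdefault(label, []).append(activity)

    def decode(self, activity):
        # "Inference" phase: return the label whose stored pattern
        # lies closest to the newly observed activity.
        best_label, best_d = None, float("inf")
        for label, vecs in self.patterns.items():
            for v in vecs:
                d = distance(activity, v)
                if d < best_d:
                    best_label, best_d = label, d
        return best_label

decoder = ToyDecoder()
decoder.observe([0.9, 0.1, 0.0], "yes")  # fabricated "brain activity" for "yes"
decoder.observe([0.1, 0.9, 0.2], "no")   # fabricated "brain activity" for "no"
print(decoder.decode([0.8, 0.2, 0.1]))   # prints "yes": closest to the "yes" pattern
```

Real systems work on vastly higher-dimensional, noisier data and use trained machine-learning models rather than raw nearest-pattern lookup, but the training-then-decoding structure is the same.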

Earlier instances of this technology have been used to create assistive devices for disabled people. For example, one BCI that detects the electrical brain signals connected to the movement of the lips, tongue, and jaw helped people with neurological conditions create vocal sounds and speech, increasing their expressive abilities. Other similar technologies have helped paralyzed individuals operate computers, write emails, and move wheelchairs just using their thoughts.

Cutting-edge BCI tech is shifting from merely expressing human intentions to “reading” human thoughts directly from the brain. And while past BCI devices have required invasive brain surgeries, the technology is moving towards non-invasive brain scans, including the use of functional magnetic resonance imaging (fMRI). fMRI scanners are large machines, costing around $3 million, that measure blood flow to different parts of the brain while a person lies inside.

One recent study placed participants into an fMRI scanner while having them listen to 16 hours of storytelling podcasts, like The Moth. The fMRI tracked the reactions that people’s brains had to hearing specific words and phrases. Then, participants were asked to imagine telling a story while in the fMRI machine.

Researchers used the brain scans to determine what words the person might be thinking. They also used an AI language model, a predecessor of the technology behind the popular ChatGPT software, to determine the likeliest words that might follow, weighing the model’s predictions against the brain scans recorded as participants imagined telling a story, Vox explained. The results revealed an approximate “decoded” version of a person’s thoughts.

For example, when a participant thought, “Look for a message from my wife saying that she had changed her mind and that she was coming back,” the decoder translated that into, “To see her for some reason I thought she would come to me and say she misses me.” When a participant thought, “Coming down a hill at me on a skateboard and he was going really fast and he stopped just in time,” the decoder translated that into, “He couldn’t get to me fast enough he drove straight up into my lane and tried to ram me.”
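The evidence-combining step behind those examples can be sketched in miniature. Everything here is invented for illustration: a tiny hand-written “language model” of next-word probabilities and a stand-in similarity function play the roles of the real study’s neural language model and fMRI analysis. The decoder keeps the candidate word that scores well on both.

```python
# Hypothetical bigram "language model": P(next word | previous word).
LM = {
    "she": {"was": 0.4, "misses": 0.3, "skateboard": 0.01},
}

# Fabricated per-word brain-response profiles for the similarity stand-in.
PROFILES = {"was": [0.2, 0.8], "misses": [0.9, 0.1], "skateboard": [0.5, 0.5]}

def scan_similarity(word, scan_features):
    # Stand-in for how well the word's predicted brain response
    # matches the observed fMRI features (a simple dot product).
    return sum(a * b for a, b in zip(PROFILES[word], scan_features))

def decode_next(prev_word, scan_features):
    # Score each candidate by LM probability times scan agreement,
    # and keep the best-scoring word.
    candidates = LM[prev_word]
    return max(candidates, key=lambda w: candidates[w] * scan_similarity(w, scan_features))

print(decode_next("she", [0.9, 0.1]))  # prints "misses"
```

The real system searches over whole phrases rather than single words, which is why its output paraphrases the gist of a thought instead of reproducing it verbatim.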

The decoding was crude and general at best: The BCI had to be trained on hours of a person’s brain scans; the participants had to be willing and have healthy, clearly functioning brains; and the technology remains large and expensive.

Developing legal safeguards against AI and BCI “mind-reading”

Nevertheless, the study’s “mind reading” implications were startling, especially when one considers a 2023 Swiss study in which researchers claimed to have developed an AI model that used scans of electrical brain activity to detect with 83% accuracy whether men were gay or straight.

Critics of the Swiss study blasted researchers for excluding bisexuals, reducing people’s personal sexual orientation and identities down to electric brain impulses, and creating technology that “can and will be used as a tool of surveillance and repression in places of the world where LGBT+ expression is punished.”

“AI is fundamentally flawed when it comes to recognizing and categorizing human beings in all their diversity. We see time and again how deep learning applications reinforce outdated stereotypes about gender and sexual orientation because they’re basically a reflection of the real world with all its bias,” said Mathias Wasik, the director of programs at All Out, a global LGBTQ+ human rights organization, according to the global crises journalism site Coda.

“Where it gets dangerous is when these systems are used by governments or corporations to put people into boxes and subject them to discrimination or persecution,” Wasik added.

Technology writer Sigal Samuel notes that, as AI-powered BCI devices become more available to consumers, they’re not likely to be marketed as medical devices. As such, they won’t be subject to federal regulations, leaving companies free to collect and sell customers’ data, and authoritarian governments free to surveil that data for secretive peeks into people’s innermost thoughts.

In February, the Colorado House of Representatives passed legislation to protect people’s “neural” data. Minnesota is also considering a law to “protect mental privacy” and penalize companies that violate it. A group of neuroscientists, ethicists, and others—called The Morningside Group—recently published a sort of “Bill of Neurorights” that they hope will result in a global treaty to prevent gross violations of the right to private thoughts.

This Bill of Neurorights, Samuel says, contains the five following policy recommendations:

  1. Mental privacy: You should have the right to seclude your brain data so that it’s not stored or sold without your consent.
  2. Personal identity: You should have the right to be protected from alterations to your sense of self that you did not authorize.
  3. Free will: You should retain ultimate control over your decision-making, without unknown manipulation from neurotechnologies.
  4. Fair access to mental augmentation: When it comes to mental enhancement, everyone should enjoy equality of access so that neurotechnology doesn’t only benefit the rich.
  5. Protection from bias: Neurotechnology algorithms should be designed in ways that do not perpetuate bias against particular groups.

Of course, laws and treaties alone won’t stop unprincipled groups from violating people’s mental privacy. But with both Meta’s Mark Zuckerberg and Elon Musk (through his company Neuralink) actively developing in-brain technology to create enhanced cybernetic humans, lawmakers and citizens must look to the future on these issues before BCI and AI tech leave citizens’ rights trampled in the past.
