The Neurodata Divide

by James Cavuoto, editor

Long-time readers of this publication will recall that we have frequently addressed the subjects of ethics in neurotechnology and protecting users’ privacy. In this space in particular, we have long advocated for “neuro vigilance” against potential threats posed by technologies such as brain-computer interfaces and deep-brain stimulation. And as NBR senior contributing editor JoJo Platt points out in her coverage of the recent NIH BRAIN Initiative meeting [see Conference Report p14], scholars such as Nita Farahany have brought these issues to the forefront at scientific meetings and conferences.

But beyond threats to personal privacy and individual rights, inappropriate use of neurotechnology can produce more insidious, wide-ranging effects on society as a whole. This could occur if neurological data collected from skewed subsets of the population are used to draw inferences about society at large, marginalizing under-represented groups in the process.

To their credit, vendors of implanted and noninvasive devices that collect brain data have largely heeded the call to protect the privacy of their users’ brain data, typically noting that collected data will be used only in aggregate form. But aggregate brain data poses risks of its own.

A recent report from the U.K. Information Commissioner’s Office sheds considerable light on this issue, pointing out that “neurodivergent” people are particularly at risk of discrimination. “If not developed and tested on a wide enough range of people, there is a risk of inherent bias and inaccurate data being embedded in neurotechnology—negatively affecting people and communities in the U.K.,” the ICO argues. One workplace example the report cites: if specific neuropatterns or information come to be seen as undesirable because of ingrained bias, people exhibiting those patterns may be overlooked for promotions or employment opportunities.

It wasn’t that long ago that homosexuality was classified as a mental disorder in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders. While that classification was probably not generated from the collection of neurodata, it likely did result from a set of behavioral norms that was biased at its core.

Let us also not forget that the government’s proscriptions against certain mental states were based largely on normative standards from people whose drug of choice was alcohol, tobacco, or caffeine, not marijuana, cocaine, or amphetamines.

Given these past abuses, it’s not hard to imagine a scenario in which EEG data collected from a skewed sampling of citizens disadvantages a segment of society whose ideology is not dominant in the sampled population.

In the early days of the Internet, some local communities and school districts voiced concerns about a “digital divide” that left poorer students at a distinct disadvantage vis-à-vis wealthier students. As more consumer and medical devices collect brain data from users, we must be careful to avoid a neurodata divide that penalizes one segment of society to the advantage of others.

The solution, in our view, lies not in placing more restrictions on the devices that collect the data, but rather on the agencies and organizations that use it. In a free and democratic society, that means fair and equal representation, not only at the ballot box, but in the commercial and governmental entities that employ neurotechnology for the public good.