AIs can get diseases too

AIs make mistakes. They are not perfect, and they don’t always get things right. If an AI’s job is important, getting something wrong can have major consequences and the potential to harm people.

Why do AIs make mistakes?

When things go wrong, it’s natural for us to ask why. We want to hold someone accountable for an unsafe condition. We can frame the conversation in a number of ways:

a: It’s bad luck
It’s stochastic. It can’t be helped and nobody is accountable, so let’s learn to live with it.
b: It’s the tech companies
Push the engineers to try harder. Let them risk their careers or face product liability lawsuits. Of course, with few exceptions, we don’t see much in the way of direct attribution or consequences for harmfully flawed AI products.
c: It’s the bad guys
Use the law to identify and punish criminals. Of course, this only works when the laws are exceptionally well-written and properly enforced, something we don’t see a lot of when it comes to harmful AIs.

None of these options give us much hope of finding practical, scalable solutions. I think there is an alternative, more practical way to frame this problem.

d: It’s a kind of disease
Diseases are distinctively recognizable conditions. They can be classified and sometimes treated or prevented.

AI is life-like

Disease is the word we use to describe failure modes in evolved (not engineered) systems. As we all know, where there is life, there is disease.

AI was inspired by biology. In machine learning as in biology, natural selection is iteration with incentives. Darwinism applied to data models.

Life-like systems give rise to life-like diseases. When AIs make errors, those errors often recur in systematically recognizable and resilient ways. These are not design flaws, because models are evolved, not designed. Every engineered system has an author, but evolved systems disperse accountability over millions of generations.

These failure conditions are artificial diseases, diseases that afflict artificially evolved (not engineered) systems. They are failure modes with distinctive and recognizable characteristics, just like biological diseases.

Cures and justice

To be sure, we don’t exclude bad luck, poor engineering, or criminal actors from their legitimate share of responsibility. If someone’s action leads to harm, we should pursue all available means to hold them accountable.

But when a patient presents with a life-threatening disease, we separate treatment from justice. We strive to treat disease, regardless of whether we can allocate personal blame for it.

When a child gets sick, we generally don’t say it was the child’s fault, or that children are “poorly designed.” When many people get sick in the same manner, we try to find the common reason. We make an effort to prevent the disease from happening in the first place, and hopefully eliminate it from our population.

Medicine for AI’s sake

While many are working hard to apply AI to improve medicine, the field of medicine has much to offer to improve AI.

Medicine gives us a powerful methodology to understand and address disease, and we can employ it to make AI safer. We start by building a taxonomy that identifies and organizes problems into distinct categories. We try to understand the etiology of each disease, the mechanism that causes it to happen. Informed by this classification, we can then develop a body of practice, and the appropriate tools, around diagnosis, prevention, and treatment.
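To make this concrete, here is a minimal sketch in Python of what one entry in such a disease model might look like. The class, its fields, and the example condition are all illustrative assumptions on my part, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class ArtificialDisease:
    """One entry in a hypothetical taxonomy of AI failure modes."""
    name: str          # the distinct, recognizable condition
    etiology: str      # the mechanism that causes it to happen
    diagnostics: list = field(default_factory=list)  # how to detect it
    preventions: list = field(default_factory=list)  # how to avoid it
    treatments: list = field(default_factory=list)   # how to mitigate it

# An illustrative entry; the condition and its details are assumptions,
# not drawn from any established catalog.
label_noise = ArtificialDisease(
    name="label noise",
    etiology="ingestion of poorly prepared training data",
    diagnostics=["audit a sample of labels against ground truth"],
    preventions=["validate data pipelines before training"],
    treatments=["relabel the affected slice and retrain"],
)
```

Structuring entries this way keeps the classification (taxonomy), the causal story (etiology), and the clinical responses together in one record.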

Major classes of artificial diseases

Let us begin, then, with a first view of a practical disease model. A taxonomy helps us better understand unsafe AI conditions. We can group artificial diseases into four broad categories (a rough code sketch follows the list):

  1. Data malnutrition
    Ingestion of poorly prepared data, such as poor sampling and information pollution
  2. Algorithmic parasites
    Exploitative and hostile activity by adversaries, such as manipulation of ingested data
  3. Fitness drift
    Environmental novelty, aging, and reproductive flaws in neural net models
  4. Recognition diseases
    Corruption of neural signal processing and voting mechanisms
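As a rough sketch of how this taxonomy might be put to work, the four classes could be encoded and used to index observed symptoms. The symptom names and their class assignments below are my own illustrative assumptions.

```python
from enum import Enum
from typing import Optional

class DiseaseClass(Enum):
    """The four broad categories of artificial diseases."""
    DATA_MALNUTRITION = "ingestion of poorly prepared data"
    ALGORITHMIC_PARASITES = "exploitative activity by adversaries"
    FITNESS_DRIFT = "environmental novelty, aging, reproductive flaws"
    RECOGNITION_DISEASES = "corrupted signal processing and voting"

# A toy symptom index; the symptoms and their assignments are
# illustrative assumptions, not a validated diagnostic table.
SYMPTOM_INDEX = {
    "skewed training sample": DiseaseClass.DATA_MALNUTRITION,
    "poisoned ingested data": DiseaseClass.ALGORITHMIC_PARASITES,
    "accuracy decays after deployment": DiseaseClass.FITNESS_DRIFT,
    "adversarial misclassification": DiseaseClass.RECOGNITION_DISEASES,
}

def suspect(symptom: str) -> Optional[DiseaseClass]:
    """Return the suspected disease class for an observed symptom, if any."""
    return SYMPTOM_INDEX.get(symptom)

print(suspect("accuracy decays after deployment"))
# -> DiseaseClass.FITNESS_DRIFT
```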

I will have more to say about each of these classes in upcoming postings.

So what

If we accept that AIs are fallible, we must find ways to reduce the harm they can do. The field of medicine provides an excellent model for dealing with these problems. We can make AIs safer by studying how their unsafe conditions arise and by creating a medical practice for the diagnosis, prevention, and treatment of artificial diseases.

August 2022