
Mistakes Made by Minds and Machines

Written: May 3, 2021 | Released: July 16, 2021

Fascinatingly, human minds and machine learning algorithms are subject to some of the same biases and prediction problems. This is probably not a coincidence: learning from limited, noisy data poses fundamental challenges, whether the learner is biological or artificial.

Here is a list of some issues that afflict both minds and machines:

1. Recency Bias

For both humans and machine learning algorithms, the most recently processed information tends to override what was learned from older data.

This is sensible if that new information really is more important, but it is counterproductive if our “learning rate” is too high.

• Machine learning example: you continue training an already trained algorithm on new data, but it starts to “forget” what it learned from the old data.

• Human example: it’s more salient to us that this friend stood us up recently than all the times they’ve been reliable.
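
To make the machine-learning example above concrete, here is a minimal Python sketch (the data, the learning rates, and the online_estimate helper are all made up for illustration) of how the learning rate controls how fast old data is forgotten:

```python
import numpy as np

def online_estimate(observations, lr):
    """Update a running estimate one observation at a time:
    estimate <- estimate + lr * (observation - estimate)."""
    estimate = 0.0
    for x in observations:
        estimate += lr * (x - estimate)
    return estimate

rng = np.random.default_rng(0)
old_data = rng.normal(loc=10.0, scale=1.0, size=1000)  # long history centered at 10
new_data = rng.normal(loc=0.0, scale=1.0, size=10)     # a few recent points centered at 0
stream = np.concatenate([old_data, new_data])

print(online_estimate(stream, lr=0.01))  # stays near the old mean (~9)
print(online_estimate(stream, lr=0.9))   # tracks only the newest points (~0)
```

This update rule is a single gradient-descent step on squared error, so the same trade-off shows up when an already-trained model keeps training on new data with too large a learning rate.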


2. Overfitting

If the set of hypotheses being considered is too complex relative to the amount/noisiness of data, it’s easy to accidentally choose a hypothesis that fits the data without being generalizable.

We humans often do this when we generalize from examples or anecdotes.

• Machine learning example: fitting a 90-parameter model using only 100 data points leads to near-perfect accuracy on those data points. But that model is likely to have terrible accuracy on new data.

• Human example: if someone meets two people from a particular country and extrapolates from those two people to infer what people there are “like,” they’re probably going to draw some inaccurate conclusions.
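
Here is a similar sketch of the overfitting example, using polynomial fits rather than the 90-parameter model above (the degrees and sample sizes are illustrative): a model with roughly one parameter per training point nails the training data and typically does far worse on data it hasn't seen.

```python
import numpy as np

rng = np.random.default_rng(1)
true_fn = lambda x: np.sin(3 * x)  # the real phenomenon we are trying to learn

# A small, noisy training set and a larger test set from the same process.
x_train = np.sort(rng.uniform(-1, 1, size=15))
y_train = true_fn(x_train) + rng.normal(scale=0.2, size=x_train.size)
x_test = np.sort(rng.uniform(-1, 1, size=200))
y_test = true_fn(x_test) + rng.normal(scale=0.2, size=x_test.size)

for degree in (3, 14):  # degree 14 means roughly one parameter per training point
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
# The degree-14 fit has near-zero training error but much worse test error.
```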


3. Underfitting

If we consider an overly simplistic set of hypotheses that can't explain the phenomenon accurately (the opposite of overfitting), we get stuck with a relatively inaccurate model of the situation: we can be no more accurate than the most accurate hypothesis we considered.

• Machine learning example: using linear models on highly non-linear phenomena.

• Human example: we try to decide whether capitalism is a “good system” or “bad system,” rather than trying to understand in which situations it produces good vs. bad outcomes.
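
And a matching sketch for underfitting (again with illustrative numbers): a linear fit to a quadratic phenomenon stays inaccurate no matter how many data points we collect, because the best hypothesis in the class is still a poor match for reality.

```python
import numpy as np

rng = np.random.default_rng(2)

# Plenty of data, but the underlying phenomenon is quadratic.
x = rng.uniform(-3, 3, size=5000)
y = x ** 2 + rng.normal(scale=0.1, size=x.size)

for degree in (1, 2):
    coeffs = np.polyfit(x, y, deg=degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: MSE {mse:.3f}")
# The linear fit is stuck around MSE ~7.2 (the curvature it cannot express);
# the quadratic fit reaches ~0.01, which is just the noise.
```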


4. Adversarial Inputs

For both humans and machine learning algorithms, an input can be carefully manipulated so that the predictions about it are highly inaccurate.

• Machine learning example: we can take an image of a dog and add tiny, carefully chosen changes that convince the algorithm it’s a potato (see the sketch after these examples).

• Human example: optical illusions use subtle manipulations of an image or scene to make us misjudge size, color, or other aspects of what we’re seeing.
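
As a sketch of why such tiny changes can work, here is a toy linear “dog detector” in numpy; it is nothing like a real vision model, and every number is made up, but it shows how a huge number of input dimensions lets a 0.01-per-pixel nudge flip the prediction:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 1_000_000                                # pretend each dimension is a pixel

# Toy linear classifier: predict "dog" when the score w . x is positive.
w = rng.normal(size=dim) / np.sqrt(dim)

# An input the classifier scores as clearly "dog".
x = rng.normal(size=dim) + 0.005 * np.sign(w)
print("clean score:", w @ x)                   # comfortably positive

# FGSM-style attack: nudge every pixel by at most epsilon against the score.
epsilon = 0.01
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", w @ x_adv)         # negative, so the prediction flips
print("largest pixel change:", np.abs(x_adv - x).max())  # exactly epsilon
```

Real attacks do the analogous thing to deep networks by following the gradient of the loss with respect to the input pixels.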


5. Non-interpretability

With complex machine learning models, it can be a struggle to explain why they made the predictions they did.

Likewise, with the human mind, we’re frequently making predictions that we don’t have direct insight into. They just happen automatically.

• Machine learning example: why did the neural network flag this loan application as fraudulent but not that one? Millions of computations were involved.

• Human example: why did I distrust that person I just met? I got a bad vibe without knowing why.
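
As a rough, back-of-the-envelope sketch of the “millions of computations” point (the layer widths below are hypothetical):

```python
# Hypothetical layer widths for a small fully connected network;
# production models are usually far larger.
layer_sizes = [512, 256, 256, 2]

# Each prediction pushes the input through every weight matrix.
multiply_adds = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
print(f"multiply-adds per prediction: {multiply_adds:,}")  # ~200,000
# Even at this toy scale, no single weight or operation is "the reason"
# an application was flagged; the explanation is smeared across all of them.
```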



  
