<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>non-interpretability &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/non-interpretability/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Fri, 16 Jul 2021 05:36:18 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>non-interpretability &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Mistakes Made by Minds and Machines</title>
		<link>https://www.spencergreenberg.com/2021/05/mistakes-made-by-minds-and-machines/</link>
					<comments>https://www.spencergreenberg.com/2021/05/mistakes-made-by-minds-and-machines/#respond</comments>
		
		<dc:creator><![CDATA[Admin]]></dc:creator>
		<pubDate>Mon, 03 May 2021 04:15:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[adversarial inputs]]></category>
		<category><![CDATA[biases]]></category>
		<category><![CDATA[errors]]></category>
		<category><![CDATA[machine learning]]></category>
		<category><![CDATA[non-interpretability]]></category>
		<category><![CDATA[overfitting]]></category>
		<category><![CDATA[recency bias]]></category>
		<category><![CDATA[underfitting]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=2215</guid>

					<description><![CDATA[Written: May 3, 2021 &#124; Released: July 16, 2021 Fascinatingly, human minds and machine learning algorithms are subject to some of the same biases and prediction problems. This is probably not a coincidence &#8211; learning has fundamental challenges. Here is a list of some issues that afflict both minds and machines: 1. Recency Bias For [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>Written: May 3, 2021 | Released: July 16, 2021</em></p>



<p>Fascinatingly, human minds and machine learning algorithms are subject to some of the same biases and prediction problems. This is probably not a coincidence &#8211; learning itself poses fundamental challenges, regardless of whether the learner is made of neurons or code.</p>



<p>Here is a list of some issues that afflict both minds and machines:</p>



<p><strong>1. Recency Bias</strong></p>



<p>For both humans and machine learning algorithms, the most recently processed information tends to override what was learned from older data.</p>



<p>This is sensible if that new information really is more important, but it is counterproductive if our &#8220;learning rate&#8221; is too high.</p>



<p>• Machine learning example: you continue training an already trained algorithm on new data, and it starts to &#8220;forget&#8221; what it learned from the old data (a phenomenon known as catastrophic forgetting).</p>



<p>• Human example: it&#8217;s more salient to us that this friend stood us up recently than all the times they&#8217;ve been reliable.</p>
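<p>The &#8220;learning rate&#8221; point can be made concrete with a minimal Python sketch (all numbers invented for illustration): an estimator that updates toward each new observation by a fraction <em>lr</em>. With a small lr, 1,000 old observations still dominate; with a large lr, ten recent points erase them.</p>

```python
import numpy as np

def online_mean(stream, lr):
    # Each update pulls the estimate toward the newest observation
    # by a fraction lr of the gap -- an exponential moving average.
    est = 0.0
    for x in stream:
        est += lr * (x - est)
    return est

rng = np.random.default_rng(0)
old_data = rng.normal(0.0, 0.1, 1000)  # a long history centered at 0
new_data = rng.normal(5.0, 0.1, 10)    # a few recent points centered at 5
stream = np.concatenate([old_data, new_data])

calm = online_mean(stream, lr=0.01)   # old data still dominates the estimate
jumpy = online_mean(stream, lr=0.9)   # ten new points override 1,000 old ones

print(calm, jumpy)
```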



<hr class="wp-block-separator"/>



<p><strong>2. Overfitting</strong></p>



<p>If the set of hypotheses being considered is too complex relative to the amount/noisiness of data, it&#8217;s easy to accidentally choose a hypothesis that fits the data without being generalizable.</p>



<p>We humans often do this when we generalize from examples or anecdotes.</p>



<p>• Machine learning example: fitting a 90-parameter model using only 100 data points leads to near-perfect accuracy on those data points. But that model is likely to have terrible accuracy on new data.</p>



<p>• Human example: if someone meets two people from a particular country and extrapolates from those two people to infer what people there are &#8220;like,&#8221; they&#8217;re probably going to draw some inaccurate conclusions.</p>
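<p>A hypothetical numerical version of this in Python: a 10-parameter polynomial fit to 10 noisy points from a linear relationship threads through the training data almost exactly, then does far worse on fresh data drawn the same way.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, 10)
y_train = x_train + rng.normal(0, 0.2, 10)  # truth is linear, plus noise

# A degree-9 polynomial has 10 coefficients: enough to pass through
# every training point, noise and all.
coeffs = np.polyfit(x_train, y_train, deg=9)

x_test = rng.uniform(-1, 1, 200)
y_test = x_test + rng.normal(0, 0.2, 200)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(train_err, test_err)  # near-zero training error, much larger test error
```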



<hr class="wp-block-separator"/>



<p><strong>3. Underfitting</strong></p>



<p>If we consider an overly simplistic set of hypotheses that can&#8217;t explain phenomena accurately (the opposite of overfitting), we can get stuck with a relatively inaccurate model of the situation, since we can be no more accurate than the most accurate hypothesis we consider.</p>



<p>• Machine learning example: using linear models on highly non-linear phenomena.</p>



<p>• Human example: we try to decide whether capitalism is a &#8220;good system&#8221; or &#8220;bad system,&#8221; rather than trying to understand in which situations it produces good vs. bad outcomes.</p>
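<p>A small illustrative comparison in Python (invented data): the best possible straight line through parabola-shaped data is badly wrong everywhere, while a hypothesis family that includes a quadratic term gets down to the noise floor.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 200)
y = x ** 2 + rng.normal(0, 0.1, 200)  # a genuinely non-linear relationship

# The best straight line through parabola-shaped data is still far off.
slope, intercept = np.polyfit(x, y, deg=1)
linear_err = np.mean((slope * x + intercept - y) ** 2)

# Allowing a quadratic term lets the model capture the true shape.
quad = np.polyval(np.polyfit(x, y, deg=2), x)
quad_err = np.mean((quad - y) ** 2)

print(linear_err, quad_err)  # large error vs. roughly the noise variance
```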



<hr class="wp-block-separator"/>



<p><strong>4. Adversarial Inputs</strong></p>



<p>For both humans and machine learning algorithms, an input can be carefully manipulated so that the predictions about it are highly inaccurate.</p>



<p>• Machine learning example: we can take an input of a dog and add extremely tiny changes that convince the algorithm it&#8217;s a potato.</p>



<p>• Human example: optical illusions exploit subtle visual elements to make us misjudge size, color, or distance.</p>
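<p>For a linear classifier, the manipulation is easy to show directly. The toy Python sketch below (made-up weights, not a real model) uses a fast-gradient-sign-style perturbation: nudging every feature by at most 0.2 in the worst-case direction flips the prediction, even though the input has barely changed.</p>

```python
import numpy as np

# A toy "trained" linear classifier: score = w.x + b, "dog" if score > 0.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.1
x = np.array([0.2, -0.1, 0.3, 0.1])  # an input classified as "dog"
score = float(w @ x + b)

# Adversarial nudge: move each feature a tiny amount (at most eps) in
# exactly the direction that lowers the score the most.
eps = 0.2
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv + b)

print(score > 0)      # True: the original input is classified "dog"
print(adv_score > 0)  # False: the barely-changed input is not
```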



<hr class="wp-block-separator"/>



<p><strong>5. Non-interpretability</strong></p>



<p>With complex machine learning algorithms, it can be a struggle to explain why they made the prediction they did.</p>



<p>Likewise, with the human mind, we&#8217;re frequently making predictions that we don&#8217;t have direct insight into. They just happen automatically.</p>



<p>• Machine learning example: why did the neural network flag this loan application as fraudulent but not that one? Millions of computations were involved, and no single one of them is the &#8220;reason.&#8221;</p>



<p>• Human example: why did I distrust that person I just met? I got a bad vibe without knowing why.</p>
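<p>A toy illustration in Python (random weights standing in for a trained model): even in a tiny two-layer network, the decision is the sign of a sum over 64 hidden activations, each itself a weighted sum of all the inputs &#8211; about 700 multiply-adds with no single number that explains the outcome. Real networks multiply this by many orders of magnitude.</p>

```python
import numpy as np

rng = np.random.default_rng(3)

# A small fixed two-layer network standing in for a trained fraud model.
W1, b1 = rng.normal(size=(64, 10)), rng.normal(size=64)
W2, b2 = rng.normal(size=(1, 64)), rng.normal(size=1)

def flag_as_fraud(application):
    # 64 hidden activations, each a weighted sum of all 10 input features.
    hidden = np.maximum(0, W1 @ application + b1)
    # The verdict is just the sign of one more weighted sum.
    return bool((W2 @ hidden + b2)[0] > 0)

application = rng.normal(size=10)
verdict = flag_as_fraud(application)
print(verdict)  # a yes/no answer with no human-readable "why" attached
```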



<hr class="wp-block-separator"/>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2021/05/mistakes-made-by-minds-and-machines/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2215</post-id>	</item>
	</channel>
</rss>
