<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>p-hacking &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/p-hacking/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Tue, 14 Jan 2025 17:03:27 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>p-hacking &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Trusting the science</title>
		<link>https://www.spencergreenberg.com/2024/11/trusting-the-science/</link>
					<comments>https://www.spencergreenberg.com/2024/11/trusting-the-science/#comments</comments>
		
		<dc:creator><![CDATA[Admin]]></dc:creator>
		<pubDate>Wed, 20 Nov 2024 15:35:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[anti-intellectualism]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[dichotomous thinking]]></category>
		<category><![CDATA[distrust]]></category>
		<category><![CDATA[fake]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[fraudulent science]]></category>
		<category><![CDATA[importance hacking]]></category>
		<category><![CDATA[motivated reasoning]]></category>
		<category><![CDATA[nuanced thinking]]></category>
		<category><![CDATA[p-hacking]]></category>
		<category><![CDATA[polarization]]></category>
		<category><![CDATA[pragmatism]]></category>
		<category><![CDATA[replication crisis]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[social science]]></category>
		<category><![CDATA[variability]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=4249</guid>

					<description><![CDATA[Is it a bad idea to broadly tell people to just &#8220;trust the science&#8221;? I think so. The reason stems from my thinking that all of the following are important and true (and too often overlooked) regarding science: 1) A lot of science is real AND valuable to society. 2) A lot of &#8220;science&#8221; is [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>Is it a bad idea to broadly tell people to just &#8220;trust the science&#8221;? I think so.</p>



<p>The reason stems from my thinking that all of the following are important and true (and too often overlooked) regarding science:</p>



<p>1) A lot of science is real AND valuable to society.</p>



<p>2) A lot of &#8220;science&#8221; is actually fake &#8211; see, for instance, a decent percentage of papers in psychology 15 years ago.</p>



<p>3) &#8220;Science&#8221; (as an approach to knowledge discovery) is one of humanity&#8217;s greatest inventions &#8211; but in practice, it is reasonably often misapplied, or the process is distorted due to bad incentives or poor training. Unfortunately, not all fields of science have done a good job of being self-correcting either, so sometimes, fields go in bad directions for quite a while and need reform. There are different kinds of bad science:</p>



<p>(i) Sometimes, science is &#8220;bad&#8221; because it uses unsound methods for figuring out the truth (such as when p-hacking is rampant).</p>



<p>(ii) Sometimes it is &#8220;bad&#8221; because it overclaims (e.g., &#8220;Importance Hacking,&#8221; where scientists claim they found something important/valuable that their study didn&#8217;t actually demonstrate; or cases where science is used to &#8220;prove&#8221; questions that science can&#8217;t settle &#8211; such as which policy is better in a particular context, when that&#8217;s actually a tradeoff between different values).</p>



<p>(iii) Other times science is bad because it is biased (e.g., when people are only willing to run or publish studies that show X but not that show the opposite of X).</p>



<p>(iv) And sometimes science is bad because it&#8217;s simply fraudulent.</p>



<p>4) Promoting broad &#8220;trust the science&#8221; is misguided (and actually harmful) because a bunch of science is fake. If you tell people to always just &#8220;trust the science,&#8221; then you are going to cause them to be tricked by a bunch of bad science, or you are going to contribute to their disillusionment and loss of trust when they discover (correctly) that some of the science you&#8217;re saying is good is actually garbage.</p>



<p>5) The &#8220;distrust all science&#8221; view is probably an even worse take than &#8220;trust the science.&#8221; If you distrust all science, you are likely to miss out on incredible things (such as highly effective treatments), and you set yourself up to fall for tons of things that don&#8217;t work (e.g., widely used unscientific treatments). Those who tell people to always just &#8220;trust the science&#8221; sometimes accidentally push people into the &#8220;distrust all science&#8221; view when those people realize that some of what they are being told to trust is crap.</p>



<p>6) So, hard as it is, rather than promoting either &#8220;trust all science&#8221; or &#8220;distrust all science,&#8221; the course of action I believe in with regard to science education is to teach people that &#8220;Science&#8221; (as a method) is an incredibly powerful and useful invention, but that &#8220;science&#8221; (as actually practiced) is much like every other field: some of it is good, some of it is crap. There are good hairdressers and bad hairdressers, and there is good science and bad science (and unfortunately, some bad science ends up in the very top journals &#8211; while journals and peer review absolutely do block some bad science, they unfortunately still let through quite a lot of it).</p>



<p>Since some science is well done, and some of it is poorly done, it&#8217;s very valuable to learn to tell the difference to make the best use of scientific results &#8211; both with regard to applying them in your own life and using them to form your beliefs about the world.</p>



<p>If we pretend science is all good or all bad, we do a lot of harm. We need nuance to see through the bad stuff while maintaining the tremendous benefits.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This piece was first written on November 20, 2024, and first appeared on my website on January 14, 2025.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2024/11/trusting-the-science/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4249</post-id>	</item>
		<item>
		<title>Seven of the greatest academic works of satire of all time</title>
		<link>https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/</link>
					<comments>https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 26 May 2023 12:08:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[academic writing]]></category>
		<category><![CDATA[false positives]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[multiple comparisons]]></category>
		<category><![CDATA[p-hacking]]></category>
		<category><![CDATA[predatory journals]]></category>
		<category><![CDATA[publication]]></category>
		<category><![CDATA[research]]></category>
		<category><![CDATA[satire]]></category>
		<category><![CDATA[scientific misconduct]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3450</guid>

					<description><![CDATA[1) What do you do when a predatory journal keeps spam emailing you to get you to make a submission? Submit this paper to their journal:  2) To succeed in academia, you need lots of publications. But the order of authors&#8217; names on a paper impacts who gets the credit. Thankfully, there&#8217;s a technological&#160;solution&#160;to make every author [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>1) What do you do when a predatory journal keeps spam emailing you to get you to make a submission?</p>



<p>Submit <a rel="noreferrer noopener" href="https://www.scs.stanford.edu/~dm/home/papers/remove.pdf" target="_blank">this paper</a> to their journal: </p>






<figure class="wp-block-image size-large"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="750" height="878" data-attachment-id="3451" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/1-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?fit=1312%2C1536&amp;ssl=1" data-orig-size="1312,1536" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?fit=750%2C878&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?resize=750%2C878&#038;ssl=1" alt="" class="wp-image-3451" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?resize=875%2C1024&amp;ssl=1 875w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?resize=256%2C300&amp;ssl=1 256w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?resize=768%2C899&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/1.png?w=1312&amp;ssl=1 1312w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>2) To succeed in academia, you need lots of publications. But the order of authors&#8217; names on a paper impacts who gets the credit.</p>



<p>Thankfully, there&#8217;s a technological&nbsp;<a rel="noreferrer noopener" href="https://arxiv.org/pdf/2304.01393.pdf?mibextid=Zxz2cZ" target="_blank">solution</a>&nbsp;to make every author the first author:</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="750" height="785" data-attachment-id="3452" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/2-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?fit=1390%2C1456&amp;ssl=1" data-orig-size="1390,1456" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="2" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?fit=750%2C785&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?resize=750%2C785&#038;ssl=1" alt="" class="wp-image-3452" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?resize=978%2C1024&amp;ssl=1 978w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?resize=286%2C300&amp;ssl=1 286w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?resize=768%2C804&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/2.jpeg?w=1390&amp;ssl=1 1390w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>3) As much an insightful look at academic practice as it is a work of satire,&nbsp;<a rel="noreferrer noopener" href="https://journals.sagepub.com/doi/pdf/10.1177/0956797611417632?fbclid=IwAR0RJu0xt8p0jM1DXKix4718zfLrNY_nwpp6OzxabN7L69ejrdT60cv96VA" target="_blank">this paper</a>&nbsp;provides groundbreaking evidence that &#8220;people were nearly a year-and-a-half younger after listening to When I&#8217;m Sixty-Four&#8221;:</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" decoding="async" width="750" height="653" data-attachment-id="3453" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/3-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?fit=1268%2C1104&amp;ssl=1" data-orig-size="1268,1104" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="3" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?fit=750%2C653&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?resize=750%2C653&#038;ssl=1" alt="" class="wp-image-3453" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?resize=1024%2C892&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?resize=300%2C261&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?resize=768%2C669&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/3.jpeg?w=1268&amp;ssl=1 1268w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>4) A taste of an alternate reality where&nbsp;<a rel="noreferrer noopener" href="https://psyarxiv.com/2uxwk/" target="_blank">academic papers are written to be consumed by normal humans</a>&nbsp;and are fun to read:</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="380" data-attachment-id="3454" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/4-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?fit=1868%2C946&amp;ssl=1" data-orig-size="1868,946" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="4" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?fit=750%2C380&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?resize=750%2C380&#038;ssl=1" alt="" class="wp-image-3454" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?resize=1024%2C519&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?resize=300%2C152&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?resize=768%2C389&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?resize=1536%2C778&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/4.jpeg?w=1868&amp;ssl=1 1868w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>5) Some researchers put a subject in an fMRI (functional Magnetic Resonance Imaging) machine and showed it a series of photos of people in social situations. They found statistically significant evidence of brain activity in some parts of the brain during photo-viewing compared to rest. The only problem was that the &#8220;subject&#8221; in the machine was a&nbsp;<a rel="noreferrer noopener" href="http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf?fbclid=IwAR3PEyQpDXCvpHL0neMqugsR1wAe2D9vdKNgh4mylPFY5caKZydaOncUR3U" target="_blank">dead Atlantic salmon</a>. A reminder of the importance of being mindful when you&#8217;re testing many hypotheses at once.</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="458" data-attachment-id="3463" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/5-2-3/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?fit=1682%2C1028&amp;ssl=1" data-orig-size="1682,1028" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="5.2" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?fit=750%2C458&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?resize=750%2C458&#038;ssl=1" alt="" class="wp-image-3463" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?resize=1024%2C626&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?resize=300%2C183&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?resize=768%2C469&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?resize=1536%2C939&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/5.2.jpeg?w=1682&amp;ssl=1 1682w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>
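<p>The salmon study is really a lesson about multiple comparisons: scan enough brain regions and some will cross p &lt; 0.05 by chance alone. Here is a toy simulation of that effect (my own illustration, not the authors&#8217; actual analysis):</p>

```python
import random

random.seed(0)

# Simulate 10,000 independent "voxels" with no real signal: each test
# statistic is standard normal, and |z| > 1.96 corresponds to p < 0.05.
n_tests = 10_000
false_positives = sum(abs(random.gauss(0, 1)) > 1.96 for _ in range(n_tests))

print(false_positives / n_tests)  # close to 0.05, i.e. hundreds of "significant" voxels
```

<p>With no correction for multiple comparisons, roughly 5% of pure-noise tests come out &#8220;significant&#8221; &#8211; which is why the dead salmon appeared to have brain activity.</p>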





<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>6) Fed up with what he perceived as B.S. in academia, physicist Alan Sokal purposely wrote a piece of&nbsp;<a rel="noreferrer noopener" href="https://physics.nyu.edu/sokal/transgress_v2/transgress_v2_singlefile.html?fbclid=IwAR3ySWFnYii2I4_C-vjkg4ajrRcWVMP5AYk-Bai-hbZ7FGjUik1c_EAye8o" target="_blank">technical-sounding nonsense</a>. He managed to get it published in &#8220;Social Text,&#8221; a journal of cultural studies:</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="611" data-attachment-id="3456" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/6-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?fit=1722%2C1402&amp;ssl=1" data-orig-size="1722,1402" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="6" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?fit=750%2C611&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?resize=750%2C611&#038;ssl=1" alt="" class="wp-image-3456" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?resize=1024%2C834&amp;ssl=1 1024w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?resize=300%2C244&amp;ssl=1 300w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?resize=768%2C625&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?resize=1536%2C1251&amp;ssl=1 1536w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/6.jpeg?w=1722&amp;ssl=1 1722w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>7) A massive expansion of Sokal&#8217;s hoax, the creators of the <a rel="noreferrer noopener" href="https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship/?fbclid=IwAR1e0tyvH9NlTWKbp9lOtmtDM_Tb9DZsDoMSDT419-t_L40vump3Y6Wt6FI" target="_blank">Grievance Studies Affair</a> (a.k.a., &#8220;Sokal Squared&#8221;) <a rel="noreferrer noopener" href="https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship/?fbclid=IwAR1e0tyvH9NlTWKbp9lOtmtDM_Tb9DZsDoMSDT419-t_L40vump3Y6Wt6FI" target="_blank">wrote 20 nonsense papers and submitted them</a> to journals related to cultural, queer, race, gender, fat, and sexuality studies. Of those submitted, seven were accepted for publication.</p>



<p>Note that this means 13 were not accepted, which is at least a somewhat positive sign for academia. On the other hand, if the 7 that were accepted truly were nonsense, that suggests insufficient quality control at some of these journals when a paper uses the right-sounding buzzwords or style of argument.</p>



<p>One of those papers from the hoax that was accepted was &#8220;Our Struggle is My Struggle: Solidarity Feminism as an Intersectional Reply to Neoliberal and Choice Feminism&#8221;.</p>



<p><a href="http://norskk.is/bytta/menn/our_struggle_is_my_struggle.pdf">The paper</a> was supposedly based on part of <a href="https://www.google.com/search?q=%22Volume+One%22+%22A+Reckoning%22+%22CHAPTER+12%22&amp;rlz=1C5CHFA_enUS750US750&amp;oq=%22Volume+One%22+%22A+Reckoning%22+%22CHAPTER+12%22&amp;aqs=chrome..69i57j33i160l5.1497j0j1&amp;sourceid=chrome&amp;ie=UTF-8">Chapter 12 of Volume 1</a> of Mein Kampf (by Adolf Hitler). <a href="https://areomagazine.com/2018/10/02/academic-grievance-studies-and-the-corruption-of-scholarship/">According to the hoaxers</a>:</p>



<p><em>&#8220;[We] wonder if they’d publish a feminist rewrite of a chapter from Adolf Hitler’s Mein Kampf. The answer to that question also turns out to be &#8216;yes,&#8217; given that the feminist social work journal Affilia has just accepted it&#8230; Purpose: To see if we could find &#8216;theory&#8217; to make anything grievance-related (in this case, part of Chapter 12 of Volume 1 of Mein Kampf with fashionable buzzwords switched in) acceptable to journals if we mixed and matched fashionable arguments.&#8221;</em></p>



<p>On the other hand, as Michael Keenan pointed out to me after I published the initial version of this post, their paper bears little resemblance to the Mein Kampf chapter. See <a href="https://michaelkeenan.tumblr.com/post/178734541040/tldr-this-latest-academic-journal-hoax-is">Michael&#8217;s Tumblr post</a>, which outlines ways in which this hoax may have been misrepresented by some people.</p>






<p>The editor of Affilia wrote regarding this paper:</p>



<p><em>&#8220;The reviewer(s) have been very favorable, although there are a few minor outstanding issues to address. Therefore, I invite you to respond to the editorial and reviewer(s)’ comments included at the bottom of this letter and revise your manuscript quickly so that we can move toward publication.&#8221;</em> &#8211; Co-Editor in Chief, Affilia, second review</p>



<figure class="wp-block-image size-large"><img data-recalc-dims="1" loading="lazy" decoding="async" width="750" height="922" data-attachment-id="3457" data-permalink="https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/7-2/" data-orig-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?fit=1558%2C1916&amp;ssl=1" data-orig-size="1558,1916" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="7" data-image-description="" data-image-caption="" data-large-file="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?fit=750%2C922&amp;ssl=1" src="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?resize=750%2C922&#038;ssl=1" alt="" class="wp-image-3457" srcset="https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?resize=833%2C1024&amp;ssl=1 833w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?resize=244%2C300&amp;ssl=1 244w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?resize=768%2C944&amp;ssl=1 768w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?resize=1249%2C1536&amp;ssl=1 1249w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?w=1558&amp;ssl=1 1558w, https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2023/06/7.png?w=1500&amp;ssl=1 1500w" sizes="auto, (max-width: 750px) 100vw, 750px" /></figure>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><em>This list was first written on May 26, 2023, and first appeared on this site on June 11, 2023.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2023/05/seven-of-the-greatest-academic-works-of-satire-of-all-time/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3450</post-id>	</item>
		<item>
		<title>Demystifying p-values</title>
		<link>https://www.spencergreenberg.com/2022/12/demystifying-p-values/</link>
					<comments>https://www.spencergreenberg.com/2022/12/demystifying-p-values/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 31 Dec 2022 20:40:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[alpha]]></category>
		<category><![CDATA[alternative hypothesis]]></category>
		<category><![CDATA[Bayesianism]]></category>
		<category><![CDATA[false positives]]></category>
		<category><![CDATA[frequentism]]></category>
		<category><![CDATA[garden of forking paths]]></category>
		<category><![CDATA[multiple hypothesis testing]]></category>
		<category><![CDATA[null hypothesis]]></category>
		<category><![CDATA[null hypothesis significance testing]]></category>
		<category><![CDATA[p-hacking]]></category>
		<category><![CDATA[p-values]]></category>
		<category><![CDATA[probability]]></category>
		<category><![CDATA[publication bias]]></category>
		<category><![CDATA[random chance]]></category>
		<category><![CDATA[replication crisis]]></category>
		<category><![CDATA[statistical significance]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[underpowered]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3382</guid>

					<description><![CDATA[There is a tremendous amount of confusion around what a p-value actually is, despite their widespread use in science. Here is my attempt to explain the concept of p-values concisely and clearly (including why they are useful and what often goes wrong with them). — What&#8217;s a p-value? — If you run a study, then [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>There is a tremendous amount of confusion about what a p-value actually is, despite the widespread use of p-values in science. Here is my attempt to explain the concept concisely and clearly (including why p-values are useful and what often goes wrong with them).</p>



<p><strong>— What&#8217;s a p-value? —</strong></p>



<p>If you run a study, then (all else equal, aside from rare edge cases) the lower the p-value, the lower the chance that your results are due to random chance or luck.</p>



<p>More precisely: a p-value is the probability you&#8217;d get a result at least as extreme as what you got IF there were actually no effect (or if some other pre-specified &#8220;null hypothesis&#8221; is true).</p>



<p>So it&#8217;s a probability calculated based on assuming that there is no effect (or assuming that a pre-specified &#8220;null hypothesis&#8221; is true). Here the phrase &#8220;no effect&#8221; would mean, in the case of a study on a new medicine, that the medicine doesn&#8217;t do anything.</p>



<p>To put it in terms of coin flips: suppose you&#8217;re trying to decide if a coin is fair (i.e., if it has an equal chance of landing on heads and tails &#8211; so that&#8217;s your &#8220;null hypothesis&#8221; in this context). You flip the coin 100 times and get 60 heads. You calculate the p-value (p=0.06).</p>



<p>This p-value tells you there&#8217;s a 6% chance you&#8217;d get 60 or more heads OR 60 or more tails out of 100 flips if the coin were actually fair.</p>
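<p>The coin-flip p-value above can be computed exactly from the binomial distribution. A minimal sketch in Python (the function name is my own):</p>

```python
from math import comb

def two_sided_binomial_p(heads: int, flips: int) -> float:
    """Two-sided p-value under a fair-coin null hypothesis:
    the probability of a result at least as extreme as `heads`
    (in either direction) if the coin is actually fair."""
    extreme = abs(heads - flips / 2)          # distance from the expected 50/50 split
    return sum(
        comb(flips, k) * 0.5 ** flips         # P(exactly k heads) under the null
        for k in range(flips + 1)
        if abs(k - flips / 2) >= extreme      # keep outcomes at least this extreme
    )

p = two_sided_binomial_p(60, 100)
print(round(p, 3))  # roughly 0.057, i.e. about a 6% chance under a fair coin
```

<p>Note that the sum counts both tails (60+ heads and 60+ tails), matching the &#8220;60 or more heads OR 60 or more tails&#8221; phrasing above.</p>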



<p>What makes p-values useful is that when they are high, you usually can&#8217;t rule out your effect being due to random chance or luck. And, when they are very low, random chance is (in most cases) unlikely to be the explanation for your result.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>— What&#8217;s the problem with p-values? —</strong></p>



<p>In social science, p&lt;0.05 is often used as the cutoff for a &#8220;successful&#8221; result (i.e., researchers treat the effect as real and potentially publishable). This is an arbitrary cutoff; there&#8217;s nothing special about 0.05. The phrase &#8220;statistically significant&#8221; is defined simply to mean that p&lt;0.05.</p>



<p>There are many ways that p-values get commonly misused, creating lots of problems. For instance:</p>



<p>• p-values often get misinterpreted as the probability that an effect is not real (recall: p-values are actually the probability of getting a result at least this extreme if there is no effect, which is not the same thing)</p>



<p>• If you see one study where the main finding&#8217;s p-value is, say, 0.05, and another study where the main finding&#8217;s p-value is, say, 0.01, it&#8217;s tempting to conclude that the finding of the 2nd study is much less likely to be the result of chance (e.g., 1/5th as likely) than the 1st study&#8217;s finding. Unfortunately, we can&#8217;t draw this conclusion. The probability that a study&#8217;s finding is the result of chance is not the same as the p-value, and in fact, it can&#8217;t even be calculated just by knowing the p-value.</p>



<p>• Because a p-value threshold is often required for a result to be publishable (p&lt;0.05 in social science), researchers sometimes engage in fishy methods to get their p-values below the threshold. This is known as &#8220;p-hacking.&#8221;</p>



<p>• Researchers sometimes focus on a result&#8217;s p-value (or &#8220;statistical significance&#8221;) at the expense of other factors that are also important. For instance, a result may have a low p-value but an effect so weak that it&#8217;s totally useless or uninteresting.</p>



<p>• While a low p-value helps you rule out the possibility that your effect is merely due to random chance, unfortunately, that&#8217;s all it helps you with. But researchers sometimes act as though it tells them more than that. Even an extremely low p-value doesn&#8217;t mean an effect is &#8220;real&#8221; or that the effect means what you think. Low p-values can result from a variety of causes, including mistakes in experimental design or confounds.</p>



<p>Here&#8217;s another way to think about what a p-value is and isn&#8217;t that some people find helpful: a p-value does not tell you the probability that your result is due to chance. It tells you how consistent your results are with being due to chance. (I&#8217;m paraphrasing from <a href="https://statmodeling.stat.columbia.edu/2013/03/12/misunderstanding-the-p-value/#comment-143473">here</a>.) So, the lower the p-value, the less consistent your results are with them being due to chance.</p>



<p>It&#8217;s interesting to note that, empirically, results with lower p-values are more likely to be genuine effects (i.e., not false positives). I looked at results for 325 psychology study replications, and when the original study p-value was at most 0.01, about 72% replicated. When p&gt;0.01, only 48% did.</p>



<p>Ultimately, p-values are a useful (though often abused) statistical tool.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><strong>— BONUS APPENDIX: what&#8217;s the chance of a hypothesis being &#8220;true&#8221; if p&lt;0.05?  —</strong></p>



<p>One annoying thing about p-values is that they don&#8217;t answer the question we are usually interested in. Usually, we want to know something like &#8220;What&#8217;s the probability that my hypothesis is true?&#8221; or &#8220;What&#8217;s the probability that the effect of this drug is bigger than X?&#8221; but p-values don&#8217;t tell us those things.</p>



<p>However, we can put a different spin on p-values to get them to answer questions that are closer to what we&#8217;re really interested in. Let&#8217;s think of p-values as giving us a decision procedure (in an overly simplified world where you either &#8220;believe&#8221; in an effect or you fail to believe in it).&nbsp;</p>



<p>Suppose you test 100 totally separate, previously unexplored hypotheses about humans, and suppose that you commit to &#8220;believe&#8221; a hypothesis is true if and only if you get p&lt;0.05 (and otherwise, you don&#8217;t believe it).</p>



<p>I think it&#8217;s realistic that in a social science context, most hypotheses studied will be false since discovering novel, publishable hypotheses about humans is hard. So let&#8217;s suppose that 80% of the hypotheses you test are *not* true.&nbsp;</p>



<p>Finally, suppose that you use a large enough number of participants in your studies so that if you are testing for the presence of a real effect, there is an 80% chance you&#8217;ll be able to find it (this 80% figure is a common recommendation for &#8220;statistical power&#8221;).&nbsp;</p>
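<p>To make the idea of statistical power concrete, here&#8217;s a hypothetical Python sketch (the function names are mine) that computes, for the coin-flip example above, how often a coin biased 60% toward heads would be detected at the p&lt;0.05 threshold:</p>

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def power(n, true_p, alpha=0.05):
    """Chance of getting a two-sided p-value below alpha when the coin's true bias is true_p."""
    # smallest heads count whose two-sided p-value falls below alpha under the null p = 0.5
    k = next(k for k in range(n // 2, n + 1) if 2 * binom_sf(k, n, 0.5) < alpha)
    # chance of landing in either tail of the (symmetric) rejection region
    return binom_sf(k, n, true_p) + binom_sf(k, n, 1 - true_p)

print(power(100, 0.6))  # roughly 0.46 -- 100 flips is underpowered for a 60/40 coin
print(power(200, 0.6))  # roughly 0.79 -- about 200 flips approaches 80% power
```

<p>In this sketch, 100 flips give only about 46% power to detect a 60/40 coin; roughly 200 flips are needed to approach the recommended 80%.</p>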



<p>Under these assumptions, if you test 100 hypotheses, you will end up believing 20 of them, and 16 of those 20 (80%) will actually be true, with the other 4 (20%) being false positives. So, of the results you believe in, 80% will be correct! Of course, this assumes no mistakes are made in the process of designing the experiment, running the statistics, and so on.</p>



<p>Here&#8217;s how the math works out if you&#8217;re curious:</p>



<p>• Out of the 100 hypotheses, 20 will be true, and of those, you&#8217;ll believe 16 = 0.80 * 20 (these are the true positives) and fail to believe 4 (these are the false negatives).</p>



<p>• Out of the 100 hypotheses, 80 will be false, and of those, you&#8217;ll believe 4 = 0.05 * 80 (these are the false positives), and you&#8217;ll reject 76 (these are the true negatives).</p>



<p>Of course, if the numbers here had been different, the conclusions would be different as well. For instance, imagine if you started with 2000 hypotheses, and this time, imagine that only 1% of them were true. If the power was still 80%, then:</p>



<p>• Out of the 2000 hypotheses, 20 of them would be true, and of those, you&#8217;d believe 16 (0.80 * 20) of them (these are true positives) and fail to believe 4 of them (these are false negatives).</p>



<p>• Out of the 2000 hypotheses, 1980 would be false, and of those, you&#8217;d believe 99 (0.05*1980) of them (these are false positives), and you&#8217;d reject the other 1881 of them (these are true negatives).</p>



<p>• So, altogether, you&#8217;d believe 115 (16 + 99) hypotheses, of which only 16 would&#8217;ve actually been true, so of the results you believe in, less than 14% would be correct!&nbsp;</p>
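<p>The arithmetic in both scenarios reduces to a few lines. Here&#8217;s a Python sketch (the function name is mine):</p>

```python
def believed_hypotheses(n, base_rate, power=0.80, alpha=0.05):
    """How many hypotheses you'd 'believe' at p < alpha, and what fraction are true."""
    n_true = n * base_rate
    n_false = n - n_true
    true_positives = power * n_true    # real effects you detect
    false_positives = alpha * n_false  # null effects that slip under the threshold
    believed = true_positives + false_positives
    return true_positives, false_positives, true_positives / believed

print(believed_hypotheses(100, 0.20))   # roughly (16, 4, 0.80): 80% of beliefs correct
print(believed_hypotheses(2000, 0.01))  # roughly (16, 99, 0.14): under 14% correct
```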



<p>From analyses like these, we can see that the probability that a specific hypothesis is true, given that we&#8217;ve found p&lt;0.05, depends on a variety of factors, including the sample size, the true effect size, the base rate probability that a new hypothesis tested by that researcher is true, the probability of errors being made in the experimental design or statistical analysis, and so on.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>In real life:</p>



<p>(1) Studies often don&#8217;t use large enough numbers of participants (and so are underpowered).</p>



<p>(2) Researchers sometimes engage in p-hacking to artificially lower their p-values to help their papers get published.</p>



<p>(3) Researchers often don&#8217;t carefully track how many hypotheses they&#8217;ve really tested.</p>



<p>(4) The decision procedure described above is often not adhered to so strictly (e.g., a result of p=0.08 might be treated as suggestive evidence for the hypothesis, and hence the hypothesis is not rejected).</p>



<p>(5) Real hypotheses often have auxiliary assumptions beyond what the p-value accounts for (such as an assumption that there is a lack of confounders, a lack of serious errors in the experimental setup, and so on).</p>



<p>I personally don&#8217;t like thinking in terms of this decision procedure for p-values because modeling hypotheses as &#8220;true&#8221; or &#8220;false&#8221; is not a good approach to thinking clearly; it&#8217;s usually much better to think in terms of probabilities rather than a &#8220;true&#8221;/&#8220;false&#8221; dichotomy when trying to understand the answers to complex questions.</p>



<p>Some people have argued that we should switch to a Bayesian approach to hypothesis testing since such an approach avoids many of the issues of p-values (including avoiding the problematic &#8220;true&#8221;/&#8220;false&#8221; dichotomy). But it also introduces other challenges, such as how to come up with an appropriate &#8220;prior&#8221; (which represents one&#8217;s belief about the probability of the hypothesis having different strengths of effects prior to seeing the study results).</p>






<p><em>This piece was first written on December 31, 2022, and first appeared on this site on April 2, 2023.</em></p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p><a href="https://www.guidedtrack.com/programs/4zle8q9/run?essaySpecifier=%3A+Demystifying%20p-values" target="_blank" rel="noreferrer noopener">If you read this line, please do us a favor and click here to answer one quick question.</a></p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/12/demystifying-p-values/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3382</post-id>	</item>
		<item>
		<title>Importance Hacking: a major (yet rarely-discussed) problem in science</title>
		<link>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/</link>
					<comments>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 20 Dec 2022 01:45:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[beauty hacking]]></category>
		<category><![CDATA[career incentives]]></category>
		<category><![CDATA[chance]]></category>
		<category><![CDATA[clarity]]></category>
		<category><![CDATA[culture of science]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[generalizability]]></category>
		<category><![CDATA[generalizability crisis]]></category>
		<category><![CDATA[hacking]]></category>
		<category><![CDATA[honesty]]></category>
		<category><![CDATA[importance hacking]]></category>
		<category><![CDATA[incentives]]></category>
		<category><![CDATA[integrity]]></category>
		<category><![CDATA[novelty hacking]]></category>
		<category><![CDATA[open science]]></category>
		<category><![CDATA[overclaiming]]></category>
		<category><![CDATA[p-hacking]]></category>
		<category><![CDATA[probability]]></category>
		<category><![CDATA[psychological science]]></category>
		<category><![CDATA[publish or perish]]></category>
		<category><![CDATA[reasoning processes]]></category>
		<category><![CDATA[replication crisis]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[social science]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[usefulness hacking]]></category>
		<category><![CDATA[veracity]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3057</guid>

					<description><![CDATA[I first published this post on the Clearer Thinking blog on December 19, 2022, and first cross-posted it to this site on January 21, 2023. You have probably heard the phrase &#8220;replication crisis.&#8221; It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>I first published this post on the <a href="https://www.clearerthinking.org/post/importance-hacking-a-major-yet-rarely-discussed-problem-in-science">Clearer Thinking blog</a> on December 19, 2022, and first cross-posted it to this site on January 21, 2023.</em></p>



<p id="viewer-1d12a"></p>



<p id="viewer-104ln">You have probably heard the phrase &#8220;replication crisis.&#8221; It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, they fairly often don&#8217;t get the same results. The magnitude of the problem depends on the field, but in psychology, it seems that something like <a rel="noreferrer noopener" href="http://datacolada.org/47" target="_blank"><u>40% of studies in top journals</u></a> don&#8217;t replicate. We&#8217;ve been tackling this crisis with our new <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/" target="_blank"><u><em>Transparent Replications</em></u></a> project, and this post explains one of our key ideas.</p>



<p id="viewer-2dn5g">Replication failures are sometimes simply due to bad luck, but more often, they are caused by p-hacking &#8211; the use of fishy statistical techniques that lead to statistically significant (but misleading or erroneous) results. As big a problem as p-hacking is, there is another substantial problem in science that gets talked about much less. Although certain subtypes of this problem have been named previously, to my knowledge, the problem itself has no name, so I&#8217;m giving it one: &#8220;Importance Hacking.&#8221;</p>



<p id="viewer-3hoev">Academics want to publish in the top journals in their field. To understand Importance Hacking, let&#8217;s consider a (slightly oversimplified) list of the three most commonly-discussed ways to get a paper published in top psychology journals:</p>



<ol class="wp-block-list">
<li><strong>Conduct valuable research</strong> &#8211; make a genuinely interesting or important discovery, or add something valuable to the state of scientific knowledge. This is, of course, what just about everyone wants to do, but it&#8217;s very, very hard!</li>



<li><strong>Commit fraud</strong> &#8211; for instance, by making up your data. Thankfully, very few people are willing to do this because it&#8217;s so unethical. So this is by far the least used approach.</li>



<li><strong>p-hack</strong> &#8211; use fishy statistics, HARKing (i.e., hypothesizing after the results are known), selective reporting, using hidden <a href="https://en.wikipedia.org/wiki/Researcher_degrees_of_freedom" target="_blank" rel="noreferrer noopener"><u>researcher degrees of freedom</u></a>, etc., in order to get a p&lt;0.05 result that is actually just a false positive. This is a major problem and the focus of the replication crisis. Of course, false positives can also come about without fault, due to bad luck.</li>
</ol>



<p id="viewer-5plkf">But here is a fourth way to get a paper published in a top journal: Importance Hacking.</p>



<p id="viewer-ctrs5">4. <strong>Importance Hack</strong> &#8211; get a result that is actually not interesting, not important, and not valuable, but write about it in such a way that reviewers are convinced it is interesting, important, and/or valuable, so that it gets published.</p>



<p id="viewer-f54g1">For research to be valuable to society (and, in an ideal world, publishable in top journals), it must be true AND interesting (or important, useful, etc.). Researchers sometimes p-hack their results to skirt around the &#8220;true&#8221; criterion (by generating interesting false positives). On the other hand, Importance Hacking is a method for skirting the &#8220;interesting&#8221; criterion.</p>



<p id="viewer-ft7mi">Importance Hacking is related to concepts like <em>hype</em> and <em>overselling</em>, though hype and overselling are far more general. Importance Hacking refers specifically to a phenomenon whereby research with little to no value gets published in top journals due to the use of strategies that lead reviewers to misinterpret the work. On the other hand, hype and overselling are used in many ways in many stages of research (including to make valuable research appear even more valuable).</p>



<p id="viewer-dd0l9">One way to understand importance hacking is by comparing it to p-hacking. P-hacking refers to a set of bad research practices that enable researchers to publish non-existent effects. In other words, p-hacking misleads paper reviewers into thinking that non-existent effects are real. Importance Hacking, on the other hand, encompasses a different set of bad research practices: those that lead paper reviewers to believe that real (i.e., existent) results that have little to no value actually have substantial value.</p>



<p id="viewer-2tioa">This diagram illustrates how I think Importance Hacking interferes with the pipeline of producing valuable research:</p>



<figure class="wp-block-image"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/static.wixstatic.com/media/f4e552_e1a60b1c65514edf9fef562a77c5c4ba~mv2.jpg/v1/fill/w_1480%2Ch_904%2Cal_c%2Cq_85%2Cusm_0.66_1.00_0.01%2Cenc_auto/f4e552_e1a60b1c65514edf9fef562a77c5c4ba~mv2.jpg?w=750&#038;ssl=1" alt=""/></figure>



<p id="viewer-7u47q">There are a number of subtypes of Importance Hacking based on the method used to make a result appear interesting/important/valuable when it&#8217;s not. Here is how I subdivide them:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>






<h2 class="wp-block-heading" id="viewer-fh6np">Types of Importance Hacking</h2>



<p id="viewer-a5mla"><strong>1. Hacking Conclusions:</strong> make it seem like you showed some interesting thing X when you actually showed something else (X′) that sounds similar to X but is much less interesting/important. In these cases, researchers do not truly find what they imply they have found. This phenomenon is also closely connected with validity issues.</p>



<ul class="wp-block-list">
<li><em>Example 1: showing X is true in a simple video game but claiming that X is true in real life.</em></li>



<li><em>Example 2: showing A and B are correlated and claiming that A causes B (when really A and B are probably both caused by some third factor C, which makes the finding much less interesting).</em></li>



<li><em>Example 3: a researcher claims to be measuring “aggression” and couches all conclusions in those terms but is actually measuring the milliliters of hot sauce that a person puts in someone else&#8217;s food. Their result about aggression is valid only insofar as hot sauce is a valid measure of aggression.</em></li>



<li>Example 4: some types of hacking conclusions would fall under the terms &#8220;overclaiming&#8221; or &#8220;overgeneralizing;&#8221; Tal Yarkoni has a relevant paper called <a href="https://mzettersten.github.io/assets/pdf/ManyBabies_BBS_commentary.pdf" target="_blank" rel="noreferrer noopener"><em><u>The Generalizability Crisis</u></em></a><em>.</em></li>
</ul>



<p id="viewer-365fm"><strong>2. Hacking Novelty: </strong>refer to something in a way that makes it seem more novel or unintuitive than it is. Perhaps the result is already well known or is merely what just about everyone&#8217;s common sense would already tell them is true. In these cases, researchers really do find what they claim to have found, but what they found is not novel (despite them making it seem so). Hacking Novelty is also connected to the &#8220;Jingle-jangle&#8221; fallacy &#8211; where people can be led to believe two identical concepts are different because they have different names (or, more subtly, because they are operationalized somewhat differently).</p>



<ul class="wp-block-list">
<li><em>Example 1: showing something that is already well-known but giving it a new name that leads people to think it is something new. The concept of “grit” has received this criticism; some people claim it could turn out to be just another word for conscientiousness (or already known facets of conscientiousness) &#8211; though this question does not yet seem to be settled (different sides of this debate can be found in these papers: </em><a rel="noreferrer noopener" href="https://www.researchgate.net/publication/6290064_Grit_Perseverance_and_Passion_for_Long-Term_Goals" target="_blank"><em><u>1</u></em></a><em>, </em><a rel="noreferrer noopener" href="https://journals.sagepub.com/doi/pdf/10.1002/per.2171" target="_blank"><em><u>2</u></em></a><em>, </em><a rel="noreferrer noopener" href="https://drive.google.com/file/d/1NzMPCgZ_Ipbmzewgaj0dmopkfLq582NA/view" target="_blank"><em><u>3</u></em></a><em> and <u><a href="https://www.researchgate.net/publication/304032119_Much_Ado_About_Grit_A_Meta-Analytic_Synthesis_of_the_Grit_Literature">4</a></u>).</em></li>



<li><em>Example 2: showing that A and B are correlated, which seems surprising given how the constructs are named, but if you were to dig into how A and B were measured, it would be obvious they would be correlated.</em></li>



<li><em>Example 3: showing a common-sense result that almost everyone already would predict but making it seem like it&#8217;s not obvious (e.g., by giving it a fancy scientific name).</em></li>
</ul>



<p id="viewer-a209k"><strong>3. Hacking Usefulness: </strong>make a result seem useful or relevant to some important outcome when in fact, it&#8217;s useless and irrelevant. In these cases, researchers find what they claim to have found, but what they find is not useful (despite them making it sound useful).</p>



<ul class="wp-block-list">
<li><em>Example: focusing on statistical significance when the effect size is so small that the result is useless. Clinicians often distinguish between “statistical significance” and “clinical significance” to highlight the pitfalls of ignoring effect sizes when considering the importance of a finding.</em></li>
</ul>



<p id="viewer-etfss"><strong>4. Hacking Beauty: </strong>make a result seem clean and beautiful when in fact, it&#8217;s messy or hard to interpret. In these cases, researchers focus on certain details or results and tell a story around those, but they could have focused on other details or results that would have made the story less pretty, less clear-cut, or harder to make sense of. This is related to Giner-Sorolla’s 2012 paper <a href="https://journals.sagepub.com/doi/pdf/10.1177/1745691612457576" target="_blank" rel="noreferrer noopener"><em><u>Science or art: How aesthetic standards grease the way through the publication bottleneck but undermine science</u></em></a><em>. </em>Hacking Beauty sometimes reduces to selective reporting of some kind (i.e., selective reporting of measures, analyses, or studies) or at least selective focus on certain findings and not others. This becomes more difficult with pre-registration: if you have to report the results of planned analyses, there&#8217;s less room to make them look pretty (you could just <em>say</em> they&#8217;re pretty, but that seems like overclaiming).</p>



<ul class="wp-block-list">
<li><em>Example: emphasizing the parts of the result that tell a clean story while not including (or burying somewhere in the paper) the parts that contradict that story.</em></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p id="viewer-56mr8">Science faces multiple challenges. Over the past decade, the <a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Replication_crisis" target="_blank"><u>replication crisis</u></a> and subsequent <a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Open_science" target="_blank"><u>open science movement</u></a> have greatly increased awareness of p-hacking as a problem. Measures have begun to be put in place to reduce p-hacking. Importance Hacking is another substantial problem, but it has received far less attention.</p>



<figure class="wp-block-image"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/static.wixstatic.com/media/f4e552_94289803042f43d68a85e7c490b1fa1c~mv2.jpg/v1/fill/w_1480%2Ch_1110%2Cal_c%2Cq_85%2Cusm_0.66_1.00_0.01%2Cenc_auto/f4e552_94289803042f43d68a85e7c490b1fa1c~mv2.jpg?w=750&#038;ssl=1" alt=""/><figcaption class="wp-element-caption"><em>Digital art created using the A.I. DALL</em>·<em>E</em></figcaption></figure>



<p id="viewer-at41b"></p>



<p id="viewer-aqs8s">If a pipe is leaking from two holes and its pressure is kept fixed, then repairing one hole will result in the other one leaking faster. Similarly, as best practices increasingly become commonplace as a means to reduce p-hacking, so long as the career pressures to publish in top journals don&#8217;t let up, the occurrence of Importance Hacking may increase.</p>



<p id="viewer-3rjml">It&#8217;s time to start the conversation about how Importance Hacking can be addressed.</p>



<p id="viewer-agpq6">If you&#8217;re interested in learning more about Importance Hacking, you can listen to <a rel="noreferrer noopener" href="https://clearerthinkingpodcast.com/episode/122" target="_blank"><u>psychology professor Alexa Tullett and me discussing it on the Clearer Thinking podcast</u></a> (there, I refer to it as &#8220;Importance Laundering,&#8221; but I now think &#8220;Importance Hacking&#8221; is a better name) or me talking about it on the <a rel="noreferrer noopener" href="https://www.fourbeers.com/98" target="_blank"><u>Two Psychologists Four Beers podcast</u></a>. We also discuss my new project, <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/" target="_blank"><u>Transparent Replications</u></a>, which conducts rapid replications of recently published psychology papers in top journals in an effort to shift incentives and create more reliable, replicable research. If you enjoyed this article, you may be interested in checking our <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/replications/" target="_blank"><u>replication reports</u></a> and learning more <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/about/" target="_blank"><u>about the project</u></a>.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p id="viewer-es1me"><em>Did you like this article? If so, you may like to explore the ClearerThinking Podcast, where I have fun, in-depth conversations with brilliant people about ideas that matter. </em><a rel="noreferrer noopener" href="https://clearerthinkingpodcast.com/" target="_blank"><em><u>Click here to see a full list of episodes</u></em></a><em>.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3057</post-id>	</item>
	</channel>
</rss>
