<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>culture of science &#8211; Spencer Greenberg</title>
	<atom:link href="https://www.spencergreenberg.com/tag/culture-of-science/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.spencergreenberg.com</link>
	<description></description>
	<lastBuildDate>Sun, 22 Jan 2023 22:35:54 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://i0.wp.com/www.spencergreenberg.com/wp-content/uploads/2024/05/cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>culture of science &#8211; Spencer Greenberg</title>
	<link>https://www.spencergreenberg.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">23753251</site>	<item>
		<title>Importance Hacking: a major (yet rarely-discussed) problem in science</title>
		<link>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/</link>
					<comments>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 20 Dec 2022 01:45:00 +0000</pubDate>
				<category><![CDATA[Essays]]></category>
		<category><![CDATA[beauty hacking]]></category>
		<category><![CDATA[career incentives]]></category>
		<category><![CDATA[chance]]></category>
		<category><![CDATA[clarity]]></category>
		<category><![CDATA[culture of science]]></category>
		<category><![CDATA[fraud]]></category>
		<category><![CDATA[generalizability]]></category>
		<category><![CDATA[generalizability crisis]]></category>
		<category><![CDATA[hacking]]></category>
		<category><![CDATA[honesty]]></category>
		<category><![CDATA[importance hacking]]></category>
		<category><![CDATA[incentives]]></category>
		<category><![CDATA[integrity]]></category>
		<category><![CDATA[novelty hacking]]></category>
		<category><![CDATA[open science]]></category>
		<category><![CDATA[overclaiming]]></category>
		<category><![CDATA[p-hacking]]></category>
		<category><![CDATA[probability]]></category>
		<category><![CDATA[psychological science]]></category>
		<category><![CDATA[publish or perish]]></category>
		<category><![CDATA[reasoning processes]]></category>
		<category><![CDATA[replication crisis]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[social science]]></category>
		<category><![CDATA[statistics]]></category>
		<category><![CDATA[usefulness hacking]]></category>
		<category><![CDATA[veracity]]></category>
		<guid isPermaLink="false">https://www.spencergreenberg.com/?p=3057</guid>

					<description><![CDATA[I first published this post on the Clearer Thinking blog on December 19, 2022, and first cross-posted it to this site on January 21, 2023. You have probably heard the phrase &#8220;replication crisis.&#8221; It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>I first published this post on the <a href="https://www.clearerthinking.org/post/importance-hacking-a-major-yet-rarely-discussed-problem-in-science">Clearer Thinking blog</a> on December 19, 2022, and first cross-posted it to this site on January 21, 2023.</em></p>






<p id="viewer-104ln">You have probably heard the phrase &#8220;replication crisis.&#8221; It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, they fairly often don&#8217;t get the same results. The magnitude of the problem depends on the field, but in psychology, it seems that something like <a rel="noreferrer noopener" href="http://datacolada.org/47" target="_blank"><u>40% of studies in top journals</u></a> don&#8217;t replicate. We&#8217;ve been tackling this crisis with our new <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/" target="_blank"><u><em>Transparent Replications</em></u></a> project, and this post explains one of our key ideas.</p>



<p id="viewer-2dn5g">Replication failures are sometimes simply due to bad luck, but more often, they are caused by p-hacking &#8211; the use of fishy statistical techniques that lead to statistically significant (but misleading or erroneous) results. As big a problem as p-hacking is, there is another substantial problem in science that gets talked about much less. Although certain subtypes of this problem have been named previously, to my knowledge, the problem itself has no name, so I&#8217;m giving it one: &#8220;Importance Hacking.&#8221;</p>



<p id="viewer-3hoev">Academics want to publish in the top journals in their field. To understand Importance Hacking, let&#8217;s consider a (slightly oversimplified) list of the three most commonly discussed ways to get a paper published in top psychology journals:</p>



<ol class="wp-block-list">
<li><strong>Conduct valuable research</strong> &#8211; make a genuinely interesting or important discovery, or add something valuable to the state of scientific knowledge. This is, of course, what just about everyone wants to do, but it&#8217;s very, very hard!</li>



<li><strong>Commit fraud</strong> &#8211; for instance, by making up your data. Thankfully, very few people are willing to do this because it&#8217;s so unethical. So this is by far the least used approach.</li>



<li><strong>p-hack</strong> &#8211; use fishy statistics such as HARKing (i.e., hypothesizing after the results are known), selective reporting, or hidden <a href="https://en.wikipedia.org/wiki/Researcher_degrees_of_freedom" target="_blank" rel="noreferrer noopener"><u>researcher degrees of freedom</u></a> in order to get a p&lt;0.05 result that is actually just a false positive. This is a major problem and the focus of the replication crisis. Of course, false positives can also come about through no fault of the researcher, due to bad luck.</li>
</ol>
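<p><em>The p-hacking mechanism in item 3 can be illustrated with a small simulation (a hypothetical sketch, not part of the original post; the sample sizes and number of outcomes are arbitrary choices). If a researcher measures several independent outcomes in an experiment with no real effect and reports only the best p-value, the false-positive rate rises well above the nominal 5%:</em></p>

```python
# Sketch: simulate p-hacking via selective reporting. Both groups are drawn
# from the SAME distribution, so every "significant" result is a false
# positive by construction.
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value from a z-test (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

TRIALS, N = 2000, 30  # arbitrary simulation sizes

def false_positive_rate(n_measures):
    """Fraction of null experiments where the BEST of n_measures outcomes
    clears p < 0.05 (i.e., only the best-looking measure is reported)."""
    hits = 0
    for _ in range(TRIALS):
        ps = []
        for _ in range(n_measures):
            a = [random.gauss(0, 1) for _ in range(N)]
            b = [random.gauss(0, 1) for _ in range(N)]
            ps.append(two_sample_p(a, b))
        if min(ps) < 0.05:  # keep only the smallest p-value
            hits += 1
    return hits / TRIALS

random.seed(0)
print("honest, 1 outcome: ", false_positive_rate(1))
print("hacked, 5 outcomes:", false_positive_rate(5))
```

<p><em>With five independent outcomes, the chance of at least one p &lt; 0.05 under the null is 1 &#8722; 0.95^5 &#8776; 23%, so selective reporting alone can manufacture &#8220;findings.&#8221;</em></p>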



<p id="viewer-5plkf">But here is a fourth way to get a paper published in a top journal: Importance Hacking.</p>



<p id="viewer-ctrs5">4. <strong>Importance Hack</strong> &#8211; get a result that is actually not interesting, not important, and not valuable, but write about it in such a way that reviewers are convinced it is interesting, important, and/or valuable, so that it gets published.</p>



<p id="viewer-f54g1">For research to be valuable to society (and, in an ideal world, publishable in top journals), it must be true AND interesting (or important, useful, etc.). Researchers sometimes p-hack their results to skirt around the &#8220;true&#8221; criterion (by generating interesting false positives). On the other hand, Importance Hacking is a method for skirting the &#8220;interesting&#8221; criterion.</p>



<p id="viewer-ft7mi">Importance Hacking is related to concepts like <em>hype</em> and <em>overselling</em>, though hype and overselling are far more general. Importance Hacking refers specifically to a phenomenon whereby research with little to no value gets published in top journals due to the use of strategies that lead reviewers to misinterpret the work. Hype and overselling, on the other hand, are used in many ways at many stages of research (including to make valuable research appear even more valuable).</p>



<p id="viewer-dd0l9">One way to understand Importance Hacking is to compare it to p-hacking. P-hacking refers to a set of bad research practices that enable researchers to publish non-existent effects &#8211; in other words, practices that mislead paper reviewers into thinking that non-existent effects are real. Importance Hacking, on the other hand, encompasses a different set of bad research practices: those that lead paper reviewers to believe that real (i.e., existent) results with little to no value actually have substantial value.</p>



<p id="viewer-2tioa">This diagram illustrates how I think Importance Hacking interferes with the pipeline of producing valuable research:</p>



<figure class="wp-block-image"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/static.wixstatic.com/media/f4e552_e1a60b1c65514edf9fef562a77c5c4ba~mv2.jpg/v1/fill/w_1480%2Ch_904%2Cal_c%2Cq_85%2Cusm_0.66_1.00_0.01%2Cenc_auto/f4e552_e1a60b1c65514edf9fef562a77c5c4ba~mv2.jpg?w=750&#038;ssl=1" alt=""/></figure>



<p id="viewer-7u47q">There are a number of subtypes of Importance Hacking based on the method used to make a result appear interesting/important/valuable when it&#8217;s not. Here is how I subdivide them:</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>






<h2 class="wp-block-heading" id="viewer-fh6np">Types of Importance Hacking</h2>



<p id="viewer-a5mla"><strong>1. Hacking Conclusions:</strong> make it seem like you have shown some interesting thing X when you have actually shown something else (X′) that sounds similar to X but is much less interesting/important. In these cases, researchers do not truly find what they imply they have found. This phenomenon is also closely connected with validity issues.</p>



<ul class="wp-block-list">
<li><em>Example 1: showing X is true in a simple video game but claiming that X is true in real life.</em></li>



<li><em>Example 2: showing A and B are correlated and claiming that A causes B (when really A and B are probably both caused by some third factor C, which makes the finding much less interesting).</em></li>



<li><em>Example 3: a researcher claims to be measuring &#8220;aggression&#8221; and couches all conclusions in those terms, but is actually measuring the milliliters of hot sauce a person puts in someone else&#8217;s food. The result about aggression is valid only insofar as hot-sauce quantity is a valid measure of aggression.</em></li>



<li>Example 4: some types of hacking conclusions would fall under the terms &#8220;overclaiming&#8221; or &#8220;overgeneralizing&#8221;; Tal Yarkoni has a relevant paper called <a href="https://mzettersten.github.io/assets/pdf/ManyBabies_BBS_commentary.pdf" target="_blank" rel="noreferrer noopener"><em><u>The Generalizability Crisis</u></em></a>.</li>
</ul>



<p id="viewer-365fm"><strong>2. Hacking Novelty: </strong>refer to something in a way that makes it seem more novel or unintuitive than it is. Perhaps the result is already well known, or is merely what just about everyone&#8217;s common sense would say is true. In these cases, researchers really do find what they claim to have found, but what they found is not novel (even though they make it seem so). Hacking Novelty is also connected to the &#8220;jingle-jangle&#8221; fallacy, whereby people can be led to believe two identical concepts are different because they have different names (or, more subtly, because they are operationalized somewhat differently).</p>



<ul class="wp-block-list">
<li><em>Example 1: showing something that is already well-known but giving it a new name that leads people to think it is something new. The concept of “grit” has received this criticism; some people claim it could turn out to be just another word for conscientiousness (or already known facets of conscientiousness) &#8211; though this question does not yet seem to be settled (different sides of this debate can be found in these papers: </em><a rel="noreferrer noopener" href="https://www.researchgate.net/publication/6290064_Grit_Perseverance_and_Passion_for_Long-Term_Goals" target="_blank"><em><u>1</u></em></a><em>, </em><a rel="noreferrer noopener" href="https://journals.sagepub.com/doi/pdf/10.1002/per.2171" target="_blank"><em><u>2</u></em></a><em>, </em><a rel="noreferrer noopener" href="https://drive.google.com/file/d/1NzMPCgZ_Ipbmzewgaj0dmopkfLq582NA/view" target="_blank"><em><u>3</u></em></a><em> and <u><a href="https://www.researchgate.net/publication/304032119_Much_Ado_About_Grit_A_Meta-Analytic_Synthesis_of_the_Grit_Literature">4</a></u>).</em></li>



<li><em>Example 2: showing that A and B are correlated, which seems surprising given how the constructs are named, but if you were to dig into how A and B were measured, it would be obvious they would be correlated.</em></li>



<li><em>Example 3: showing a common-sense result that almost everyone already would predict but making it seem like it&#8217;s not obvious (e.g., by giving it a fancy scientific name).</em></li>
</ul>



<p id="viewer-a209k"><strong>3. Hacking Usefulness: </strong>make a result seem useful or relevant to some important outcome when, in fact, it&#8217;s useless and irrelevant. In these cases, researchers find what they claim to have found, but what they find is not useful (even though they make it sound useful).</p>



<ul class="wp-block-list">
<li><em>Example: focusing on statistical significance when the effect size is so small that the result is useless. Clinicians often distinguish between “statistical significance” and “clinical significance” to highlight the pitfalls of ignoring effect sizes when considering the importance of a finding.</em></li>
</ul>
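<p><em>The example above can be made concrete with a quick numerical sketch (hypothetical numbers, not from the original post): with a large enough sample, a true group difference of 0.03 standard deviations &#8211; practically negligible &#8211; still comes out &#8220;statistically significant.&#8221;</em></p>

```python
# Sketch: a tiny true effect reaches "statistical significance" with a big
# sample, yet its standardized effect size (Cohen's d) stays negligible --
# "clinical significance" asks about the latter.
import math
import random

random.seed(0)
N = 100_000        # very large sample per group (arbitrary)
TRUE_DIFF = 0.03   # 0.03 SD: practically meaningless

a = [random.gauss(0.0, 1.0) for _ in range(N)]
b = [random.gauss(TRUE_DIFF, 1.0) for _ in range(N)]

ma, mb = sum(a) / N, sum(b) / N
va = sum((x - ma) ** 2 for x in a) / (N - 1)
vb = sum((x - mb) ** 2 for x in b) / (N - 1)

# Two-sided p-value from a z-test (normal approximation).
z = (mb - ma) / math.sqrt(va / N + vb / N)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Standardized effect size (pooled-SD Cohen's d).
cohens_d = (mb - ma) / math.sqrt((va + vb) / 2)

print(f"p = {p:.2e} (statistically significant)")
print(f"Cohen's d = {cohens_d:.3f} (practically negligible)")
```

<p><em>The p-value rewards sample size; the effect size tells you whether anyone should care.</em></p>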



<p id="viewer-etfss"><strong>4. Hacking Beauty: </strong>make a result seem clean and beautiful when, in fact, it&#8217;s messy or hard to interpret. In these cases, researchers focus on certain details or results and tell a story around those, but they could have focused on other details or results that would have made the story less pretty, less clear-cut, or harder to make sense of. This is related to Giner-Sorolla&#8217;s 2012 paper <a href="https://journals.sagepub.com/doi/pdf/10.1177/1745691612457576" target="_blank" rel="noreferrer noopener"><em><u>Science or art: How aesthetic standards grease the way through the publication bottleneck but undermine science</u></em></a>. Hacking Beauty sometimes reduces to selective reporting of some kind (i.e., selective reporting of measures, analyses, or studies), or at least to selective focus on certain findings and not others. This becomes more difficult with pre-registration: if you have to report the results of planned analyses, there&#8217;s less room to make them look pretty (you could just <em>say</em> they&#8217;re pretty, but that seems like overclaiming).</p>



<ul class="wp-block-list">
<li><em>Example: emphasizing the parts of the result that tell a clean story while not including (or burying somewhere in the paper) the parts that contradict that story.</em></li>
</ul>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p id="viewer-56mr8">Science faces multiple challenges. Over the past decade, the <a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Replication_crisis" target="_blank"><u>replication crisis</u></a> and the subsequent <a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/Open_science" target="_blank"><u>open science movement</u></a> have greatly increased awareness of p-hacking, and measures to reduce it have begun to be put in place. Importance Hacking is another substantial problem, but it has received far less attention.</p>



<figure class="wp-block-image"><img data-recalc-dims="1" decoding="async" src="https://i0.wp.com/static.wixstatic.com/media/f4e552_94289803042f43d68a85e7c490b1fa1c~mv2.jpg/v1/fill/w_1480%2Ch_1110%2Cal_c%2Cq_85%2Cusm_0.66_1.00_0.01%2Cenc_auto/f4e552_94289803042f43d68a85e7c490b1fa1c~mv2.jpg?w=750&#038;ssl=1" alt=""/><figcaption class="wp-element-caption"><em>Digital art created using the A.I. DALL</em>·<em>E</em></figcaption></figure>






<p id="viewer-aqs8s">If a pipe is leaking from two holes and its pressure is kept fixed, then repairing one hole will make the other leak faster. Similarly, as best practices for reducing p-hacking become increasingly commonplace, so long as the career pressures to publish in top journals don&#8217;t let up, Importance Hacking may become more common.</p>



<p id="viewer-3rjml">It&#8217;s time to start the conversation about how Importance Hacking can be addressed.</p>



<p id="viewer-agpq6">If you&#8217;re interested in learning more about Importance Hacking, you can listen to <a rel="noreferrer noopener" href="https://clearerthinkingpodcast.com/episode/122" target="_blank"><u>psychology professor Alexa Tullett and me discussing it on the Clearer Thinking podcast</u></a> (there, I refer to it as &#8220;Importance Laundering,&#8221; but I now think &#8220;Importance Hacking&#8221; is a better name) or me talking about it on the <a rel="noreferrer noopener" href="https://www.fourbeers.com/98" target="_blank"><u>Two Psychologists Four Beers podcast</u></a>. We also discuss my new project, <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/" target="_blank"><u>Transparent Replications</u></a>, which conducts rapid replications of recently published psychology papers in top journals in an effort to shift incentives and create more reliable, replicable research. If you enjoyed this article, you may be interested in checking our <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/replications/" target="_blank"><u>replication reports</u></a> and learning more <a rel="noreferrer noopener" href="https://replications.clearerthinking.org/about/" target="_blank"><u>about the project</u></a>.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p id="viewer-es1me"><em>Did you like this article? If so, you may like to explore the ClearerThinking Podcast, where I have fun, in-depth conversations with brilliant people about ideas that matter. </em><a rel="noreferrer noopener" href="https://clearerthinkingpodcast.com/" target="_blank"><em><u>Click here to see a full list of episodes</u></em></a><em>.</em></p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.spencergreenberg.com/2022/12/importance-hacking-a-major-yet-rarely-discussed-problem-in-science/feed/</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">3057</post-id>	</item>
	</channel>
</rss>
