A novel perspective on scientific fraud—how undisclosed “tweaks” to research designs and model specifications fuel the credibility crisis in science.
In The Credibility Crisis in Science, leading social scientists Thomas Plümper and Eric Neumayer argue that the most important fraudulent strategy is crucially underappreciated. While data fabrication and manipulation are widely recognized as fraudulent, "tweaks"—the intentional selection of research designs and model specifications based on the results they give—are not. The authors contend that the term "scientific fraud" must include tweaks. Tweakers, like other fraudsters, deceive readers by concealing their manipulation of empirical results, and they do so to further their own interests.
The authors show how easily observational data analyses, experimental designs, and causal models are tweaked in ways that are extremely difficult, often impossible, to detect. As a consequence, the credibility crisis in science is even more severe than both scientists and the public believe.
Plümper and Neumayer argue that conventional strategies to deter, prevent, and detect fraud will not work for tweaks. The authors put forth two potential solutions: first, a classification system that categorizes data based on its susceptibility to manipulation and the probability of such manipulation being identified, and second, the proposal that journal editors and reviewers, rather than authors, select robustness tests.