Psychology must learn a lesson from fraud case

In 1977, two jumbo jets collided at an airport on the island of Tenerife. The highly respected pilot of the KLM aircraft was under pressure to depart, and ignored several signs and warnings that the Pan Am Jumbo had not yet cleared the runway.

The KLM jet took off in fog without proper clearance, and 583 people were killed. The failure of the checks and balances meant to counter low visibility, arrogance, stress and a tendency to ignore conflicting information resulted in disaster. Those safeguards have since been improved, and commercial aviation is safe today not because it pointed the finger at the erring pilot, but because it learned the hard lesson.

Psychology, my field of science, recently uncovered a devastating case of long-standing misconduct by Diederik Stapel, a highly respected scientist at Tilburg University in the Netherlands (see Nature 479, 15; 2011). In response, we too must take a serious look at the circumstances under which the malpractice occurred, in order to improve checks and balances and avoid a recurrence.

The committee investigating Stapel’s misconduct has yet to identify which of his research papers are tainted by fraud, but it has already noted that the closed culture that characterizes much psychology research greatly aided Stapel’s deception.

One could argue that his misconduct is extraordinary, regardless of the research culture. However, the petty crimes that all scientists are tempted to commit, as Jennifer Crocker reported in World View last month (see Nature 479, 151; 2011), are more likely when there is less scrutiny.

The interim report of the investigative committee revealed that Stapel often refused to share his research data with colleagues, even with co-authors on his papers. To scientists in other fields, this may sound extraordinary; sadly, for psychologists it is common practice.

In a 2006 study published in American Psychologist, I helped to show that nearly three-quarters of researchers who had published a paper in a high-impact psychology journal did not share their data (J. M. Wicherts et al. Am. Psychol. 61, 726–728; 2006). Many data sets, the authors said, had been misplaced, while others were kept secret because they were part of ongoing work, or because of ethical rules protecting participants’ privacy.

This kind of confidentiality has long been the most common excuse that psychologists offer for not sharing data, but in practice, most fail to document their data in a way that allows others to check their work quickly and easily. It is not uncommon for a shared data set to list only variables named VAR00001 to VAR00019, with no explanation of what they mean.
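The remedy is straightforward: ship a codebook alongside the data. As a minimal sketch (the variable names, labels and values below are invented for illustration, and pandas is assumed), attaching meaningful labels to opaque column names is a one-line operation:

```python
import pandas as pd

# A hypothetical shared data set of the kind described: opaque
# auto-generated column names and no accompanying documentation.
raw = pd.DataFrame({
    "VAR00001": [23, 31, 27],
    "VAR00002": [1, 0, 1],
    "VAR00003": [4.2, 3.8, 4.9],
})

# A minimal codebook mapping each variable to a meaningful label.
# In practice this would be distributed as a separate document
# describing units, coding schemes and missing-value conventions.
codebook = {
    "VAR00001": "age_years",
    "VAR00002": "condition",       # e.g. 0 = control, 1 = treatment
    "VAR00003": "response_score",  # e.g. mean rating on a 1-5 scale
}

documented = raw.rename(columns=codebook)
print(documented.columns.tolist())
# ['age_years', 'condition', 'response_score']
```

With labels like these attached, a reader can at least attempt to reproduce the reported analysis; without them, the data set is effectively unverifiable even when it is shared.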

It is not just misconduct that thrives in such secrecy. So do more common and insidious failings of error and bias in data analysis – for example, the use of the wrong statistical tests, reporting errors caused by mixing up similarly named variables, favouring results that confirm a hypothesis, and over-reporting of positive statistical outcomes.

It is ironic that psychology researchers go to great lengths to blind their data collection to potentially confounding effects, including the expectations of participants, observers and experimenters, yet seem oblivious to problems in subsequent analysis and reporting.

In aviation, a co-pilot checks the pilot’s every move, and the actions are recorded in a ‘black box’ so that any errors can be reconstructed. In psychology, co-authors rarely verify a study’s analyses, which are effectively conducted inside a closed box. Readers of published papers see only dense summaries of results – without access to the underlying data, they can only hope for the best.

Psychology’s culture of secrecy produces shoddy science. Reanalyses of the statistics in published psychology papers reveal frequent errors, and the more reluctant authors are to share their data, the more likely their papers are to contain inaccuracies. To put it another way: the results that most need checking cannot be checked.

How can the veil of secrecy be lifted? Mandatory archiving of raw data, in journal appendices or in online repositories, should be a precondition for publication. This would not only help to uncover misconduct – it was curious patterns in Stapel’s data that led to his downfall – but would also help to prevent, and later correct, honest errors and biased positive reporting.

With online publication, data can readily be published alongside the researchers’ chosen statistical analyses and their summaries of the results. As part of a growing concern over scientific openness, grant-making organizations, academic publishers and professional bodies including the American Psychological Association are already considering such options.
