When does the hard work and ambition to collect and interpret data become shoddy science? A large gray area seems to exist, where validating a project’s aims and getting published start to take over, and the quality of the results is sacrificed.
At what point does that happen, though? I ask because I’ve been feeling that sort of pressure myself over the last few months: I rarely get new and difficult experiments to work on the first or second try, but I do get them to work; the results look good, but they don’t match up well with our expectations and hypotheses. And the punchline is that I’m being reminded it’s okay: the project can be ditched, and/or I’ll only be involved with this group for six more months.
What kind of message is that sending?
Sure, I can rationalize this pressure to prioritize publishing over good science, even if I don’t accept it. Our livelihoods depend on getting published first and really understanding the science second, unfortunately. In an ideal world there wouldn’t be this discrepancy, but this isn’t an ideal world. And it has long been recognized that the scientific enterprise is imperfect, flawed even, in that it can reward shoddy science in various ways. Unfortunately, there really isn’t a better metric for evaluating a scientist’s résumé than their publication record, except perhaps their funding record (which depends on their publication record, so the logic is circular), qualitative comparisons of which journals they have published in, and how many citations their papers have received.
Perhaps this kind of pressure just comes with the territory, however. Sigh…