Here is the way the story goes. In the beginning, there is excitement and enthusiasm. A grant has been funded, the required approvals to study people have been obtained, the starting gate opens, and data collection begins. In the background, I may even hear pieces of "My Old Kentucky Home" playing. If I am truly lucky, I may even be wearing a stylish hat.
But, I digress.
Once out of the starting gate, things may go as planned or they may get a little strange. But, it’s not my job to push for one horse to win over the other. Instead, it’s my job to watch, unemotionally, as the results unfold, and to keep bias and cheating out of the race and whatever equations may enter the race. I have the assigned role of data warden, driving hard for more (data) to be collected, spending many careful hours evaluating the results, often falling deeply asleep on my keyboard in the wee hours of the morning.
At the end though, it’s no bother and no hard feelings as long as what comes of the race is knowing something that we didn’t know before. Whether positive or negative, if we’ve identified something, whether what we expected and liked or what we didn’t expect and may not like, at least we’ve done the thing we were supposed to do in academic research: advance the state of knowledge; provide a nugget that in the long run, can benefit people and society.
But, over the years, this seemingly simple race to advance knowledge for the benefit of someone, somewhere... has become increasingly muddy. The track is a mess. The rules of the game have gone awry.
In the present day, we can now spend hundreds and hundreds of hours collecting data, analyzing that data, and trying to create something meaningful from the effort, only to run into the statistics police blocking the home stretch to the finish line, and ultimately accomplish absolutely nothing.
In their most common form, the statistics police are those who review manuscripts submitted for publication in this or that academic journal. Over the years, they have become increasingly fearful that we might draw a conclusion from the data that is not a 100% certainty. They seem to have forgotten that statistics are inherently, by their very definition, uncertain, and that there is always a possibility of misinterpreting the data. Instead of allowing such uncertainty into the hallowed pages of the academic journal with appropriate statements to that effect, the statistics police will often push our conclusions back into one of two corners. Corner 1 allows us to say that we have proven the obvious. Corner 2 allows us to say that we have proven nothing at all.
Either corner defeats the entire purpose of research.
When did it become worse to suggest what could be true and provide food for thought rather than restrict ourselves to what is true and obvious and therefore not worth reading in the first place?
Is it not better to say what we can with the sample sizes we have rather than wait to fund a sample size we will never have?
In reviewing my publications over the years, I can safely say that, statistically speaking, I've done almost nothing... which is a mockery of the hundreds of hours spent on research, not just by me but by the many all over the country who take on academic research to expand horizons, to serve society, to advance knowledge... and all that corny rhetoric that used to make academic research noble and worth doing.
I would rather run a race and lose than run a race and be stopped short of the finish line... in spite of the fact that I am not a horse, nor do I own a Derby hat.
Statistically or otherwise, that sounds like good sense.