By Jessica Love
November 24, 2011
As he knelt by the grave of his mother and father
the taste of dill, or tarragon—
he could barely tell one from the other—
filled his mouth.
So begins Paul Muldoon’s poem “Milkweed and Monarch.” It is a nice example—though by no means an extreme one—of what I will call matrix verb delay. The matrix verb in a sentence is the “main” verb (filled in the example above), the one that would remain if the dependent clauses like As he knelt by the grave of his mother and father and parentheticals like he could barely tell one from the other were stripped away. According to many writers, delaying the main verb until the end of the sentence creates a sort of tension, a sense of (to quote The Rocky Horror Picture Show) antici…pation.
The psycholinguist in me wondered if writers were on to something—if maybe suspending the matrix verb did have measurable psychological consequences. Specifically, I predicted that matrix verb delay might lead to faster reading times as readers very literally pushed themselves toward the relief that the matrix verb would bring. So I constructed a number of sentences that varied in their placement of the matrix verb and set out to conduct a test.
My experiment, however, didn’t work. Whatever differences occurred between conditions—a few milliseconds here and there—were much better explained by chance than by the placement of the matrix verb. In short, I had a null effect. What’s so discouraging about a null effect is not that it means your hypothesis was incorrect. It’s that you don’t know whether your hypothesis was incorrect or whether it was spot on and your experiment just wasn’t clever or rigorous or sensitive enough to test it.
Perhaps, say, readers need to give a damn in order to be affected by the placement of a matrix verb, and my single-sentence stimuli were too bland or choppy or unnatural to interest them. Perhaps my sentences varied along a dimension that didn’t occur to me while I was writing them, and the increased variability buried my effect. Or perhaps what I measured—sentence reading times—was too rough, whereas a word-by-word (or clause-by-clause) reading measure would have been sensitive enough to detect an effect.
Though the results of my experiment were not publishable (nor should they have been), it sometimes seems as though the burden of proof in publishing scientific results is a bit lopsided. If I had systematically tested each of these alternative explanations, and found them all to be inadequate, shouldn’t I be able to argue, at least tentatively, that matrix verb placement has no effect on reading speed?
Not necessarily. Publishing a null effect is notoriously difficult in most scientific fields. A colleague of mine—someone far more experienced and highly regarded than I am—recently published a null result, one with wide-ranging implications, but he was able to do so only after six replications (one is generally sufficient for a positive finding) and a particularly open-minded set of peer reviewers. This bias against null effects is why, when we turn on the news, we hear a whole lot about the (often tiny and erratic) effects of dill—or was it tarragon?—on our blood pressure or our eyesight, but we never hear what it doesn’t affect, even if what it doesn’t affect is more important or surprising. Or, to put it another way (which may or may not make you read more breathlessly), when we hear a whole lot about the effects of dill consumption, but nothing about what goes unaffected, this is why.
Jessica Love is a contributing editor of the SCHOLAR. She holds a doctorate in cognitive psychology and edits Kellogg Insight at Northwestern University.