"Proofs" and "evidence" for causality, in most "science'...
In this month's Scientific American, these issues get discussed. Sadly, the whole article focuses on classical null hypothesis rejection using p-values and "confidence" intervals. Alternatives are mentioned, such as Bayesian approaches, or a new gimmick: judging that results don't seem right by analogy with repeated coin tossing. I wish the writer had deeper knowledge of research methodology, including the variety of study designs and analytical biases. Current approaches merely attempt to rule out errors due to chance. The real meat of the process comes in the discussion of results, which would include: degrees of strength of findings; dose-response trends; analyses of biases and effect modification; appropriateness of study designs; biological, physical, and chemical coherence; funding and publication biases; and how results fit with existing evidence in building stronger theory around any scientifically answerable question. The article does touch on how hard it is for humans to think outside existing theoretical boxes, yet it ignores how all of this relates to crafting real policy in the real world.
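To make the contrast concrete, here is a minimal sketch, in plain Python, of what the two approaches actually compute for the article's coin-tossing analogy. The numbers (60 heads in 100 tosses) are hypothetical, chosen only for illustration; the classical approach yields a tail p-value under the null of a fair coin, while the Bayesian approach yields a posterior estimate of the coin's bias under an assumed uniform Beta(1, 1) prior.

```python
from math import comb

def binom_pvalue_one_sided(k, n, p=0.5):
    """Classical tail p-value: P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def beta_posterior_mean(k, n, a=1.0, b=1.0):
    """Bayesian posterior mean of the heads probability.

    With a Beta(a, b) prior and k heads in n tosses, the posterior is
    Beta(a + k, b + n - k), whose mean is (a + k) / (a + b + n).
    """
    return (a + k) / (a + b + n)

# Hypothetical data: 60 heads in 100 tosses of a supposedly fair coin.
k, n = 60, 100
print(binom_pvalue_one_sided(k, n))  # small tail probability under H0: p = 0.5
print(beta_posterior_mean(k, n))     # posterior estimate of the coin's bias
```

Note what each number answers: the p-value says how surprising the data are if the coin were fair, while the posterior mean says what the coin's bias plausibly is given the data and a prior. Neither addresses the discussion-stage questions listed above (dose-response, effect modification, coherence with existing evidence), which is the point of the post.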