

LETTER TO EDITOR 

Year: 2010  |  Volume: 1  |  Issue: 2  |  Page: 64-65


Some commonly observed statistical errors in clinical trials published in Indian Medical Journals
Jaykaran
Department of Pharmacology, Government Medical College, Surat, Gujarat, India
Date of Web Publication: 15-Jan-2011
Correspondence Address: Jaykaran, Department of Pharmacology, Government Medical College, Surat, Gujarat, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/0976-9234.75709
How to cite this article: Jaykaran. Some commonly observed statistical errors in clinical trials published in Indian Medical Journals. J Pharm Negative Results 2010;1:64-5.
Sir,
The standard of reporting of statistics in clinical trials published in medical journals is not satisfactory. A growing body of literature points to persistent statistical errors, flaws and deficiencies in clinical trials published in Western and Indian medical journals. ^{[1],[2]} In this letter, I want to highlight some commonly observed statistical errors that I found in clinical trials published in Indian medical journals, so that readers of medical journals can critically appraise such trials.
One of the major problems is inadequate reporting of the sample size. In clinical trials, the sample should be large enough to have a high chance of detecting, as statistically significant, a worthwhile effect if it exists, and thus to be reasonably sure that no benefit exists if none is found in the trial. For sample size calculation in hypothesis testing, the researcher must know the effect size, standard deviation, significance level and power of the study. The exact sample size should be calculated during the design phase of a clinical trial, preferably with the help of a statistician, and the method of calculation should be reported in the manuscript. ^{[1]}
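As an illustration of the kind of calculation the letter recommends, here is a minimal sketch in Python of the standard normal-approximation formula for the sample size per group when comparing two means; the effect size and standard deviation below are hypothetical, not taken from any trial.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate n per group for comparing two means.

    Normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / effect^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / effect ** 2)

# Hypothetical trial: detect a 5-unit difference, assuming SD = 12
print(sample_size_per_group(effect=5, sd=12))  # -> 91 per group
```

Reporting the four inputs (effect size, SD, alpha, power) in the manuscript lets a reader reproduce this number exactly.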
Another problem is the use of inappropriate or wrong statistical tests. I observed that even simple statistical tests like the chi-square test or Student's t test are misused. Before applying any statistical test, the assumptions of that test should be checked, and this should be reported in the manuscript. If an obscure or less-known statistical test is used, a justification for using it and/or a proper reference should be given. One common problem I observed in the "statistics" section of clinical trials published in Indian medical journals concerns the "where appropriate" statement. In many clinical trials, it is merely stated that "appropriate statistical tests were used to analyze the data" or "t tests were used for quantitative data, and the chi-square test was used for qualitative data." Such statements should be avoided as they provide insufficient information. ^{[1],[3]}
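To make the point about checking assumptions concrete, here is a minimal sketch (with a hypothetical 2x2 table) of the common rule of thumb for the chi-square test, which requires all expected cell counts to be at least 5; when this fails, Fisher's exact test is the usual alternative.

```python
# Hypothetical 2x2 contingency table: rows = groups, columns = outcomes
table = [[12, 8],
         [3, 17]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand_total = sum(row_totals)

# Expected count in each cell = row total * column total / grand total
expected = [[rt * ct / grand_total for ct in col_totals] for rt in row_totals]

# Rule of thumb: chi-square approximation needs all expected counts >= 5
ok = all(e >= 5 for row in expected for e in row)
print("chi-square assumption met" if ok else "use Fisher's exact test")
```

A manuscript that states which check was performed, and what was done when it failed, gives the reader far more than "appropriate tests were used."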
Failure to report adjustment for multiple endpoints is also a frequently observed problem in clinical trials published in Indian medical journals. Multiplicity of inferences can arise from multi-sample comparisons, interim and subgroup analyses, and multiple endpoints. This multiplicity is associated with false positivity, i.e., the likelihood of obtaining a significant result just by chance. If with one statistical test the chance of a falsely significant result is 5%, then after 20 independent tests the chance of at least one falsely significant result rises to about 64% (1 − 0.95^20). Investigators should use one of the various methods described in the literature to adjust for multiple endpoints and thereby control the type I error. Both the International Conference on Harmonisation (ICH) E9 guideline and the Consolidated Standards of Reporting Trials (CONSORT) statement demand the use of procedures such as the Bonferroni correction and the composite endpoint method for adjustment of multiple endpoints. ^{[4]}
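Under the simplifying assumption that the tests are independent, the inflation of the type I error and the Bonferroni remedy mentioned above can be computed directly:

```python
# Family-wise error rate (FWER) for k independent tests at per-test alpha,
# and the Bonferroni-corrected threshold that restores the overall 5% level.
alpha, k = 0.05, 20

fwer = 1 - (1 - alpha) ** k      # P(at least one false positive) ~ 64%
bonferroni = alpha / k           # per-test threshold after correction

print(f"FWER with {k} tests: {fwer:.0%}")
print(f"Bonferroni threshold per test: {bonferroni}")
```

With correlated endpoints the true inflation is smaller, which is why less conservative alternatives to Bonferroni (and composite endpoints) are also discussed in the literature.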
Results are usually reported as P values only. P values are often misinterpreted, and even when interpreted correctly they have limitations. Results should be expressed as the absolute difference between the two groups for each endpoint, together with the 95% confidence interval around that difference. This confidence interval should be reported with, or instead of, the P value. Confidence intervals tell the reader exactly the range of values with which the data are statistically compatible. In spite of instructions to authors of many Indian medical journals to report exact P values, many published clinical trials report arbitrary threshold values such as "P < .05," "P > .05" or "P = NS." Exact P values should be reported instead. ^{[5]}
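A minimal sketch (with hypothetical summary data, not from any trial) of reporting an absolute difference with its 95% confidence interval, using the normal approximation for the difference of two independent means:

```python
import math
from statistics import NormalDist

# Hypothetical summary data: mean, SD and n for each group
m1, s1, n1 = 12.4, 4.1, 60   # treatment
m2, s2, n2 = 10.1, 4.5, 60   # control

diff = m1 - m2
se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # SE of the difference
z = NormalDist().inv_cdf(0.975)              # 1.96 for a 95% CI
lo, hi = diff - z * se, diff + z * se

print(f"difference = {diff:.1f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Unlike "P < .05", the interval shows both the size of the effect and the precision with which it was estimated.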
In clinical trials published in Indian medical journals, the "intention to treat" (ITT) principle is usually not followed. The ITT principle requires that all patients randomized into a clinical trial be accounted for in the primary analysis, and that all primary events observed during the follow-up period be accounted for as well. If either of these requirements is not met, the analysis of results may easily be biased in unpredictable directions and the interpretation of the results thereby compromised. ^{[1],[5]}
I also observed that, in many clinical trials, baseline comparisons between the two groups are reported with a P value for each variable. In randomized controlled trials, subjects are recruited with proper randomization techniques, hence any differences observed between the groups at baseline are considered chance findings. There is therefore no need to report significance tests for baseline comparisons. Any difference between the groups should be adjusted for by appropriate statistical techniques during the analysis of results, but P values need not be reported. ^{[5]}
I believe that a statistician should be involved at an early stage of a clinical trial to prevent these errors. Investigators performing trials should have some background knowledge of biostatistics, and manuscripts should undergo rigorous statistical review before being sent for peer review. Journals should have a few statisticians on the editorial team.
References   
1.  Karan J, Kantharia ND, Yadav P, Bhardwaj P. Reporting statistics in clinical trials published in Indian journals: A survey. Pak J Med Sci Q 2010;26:212-6. Available from: http://www.pjms.com.pk/issues/janmar2010/pdf/article44.pdf [last accessed on 2010 Apr 4]. 
2.  Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomized trials in 2000 and 2006: Comparative study of articles indexed in PubMed. BMJ 2010;340:c723. Available from: http://www.bmj.com/cgi/content/full/340/mar23_1/c723 [last accessed on 2010 Apr 2]. 
3.  Strasak AM, Zaman Q, Pfeiffer KP, Göbel G, Ulmer H. Statistical errors in medical research: A review of common pitfalls. Swiss Med Wkly 2007;137:44-9. Available from: http://www.smw.ch/docs/pdf200x/2007/03/smw11587.pdf [last accessed on 2010 Apr 5]. 
4.  Neuhäuser M. How to deal with multiple endpoints in clinical trials. Fundam Clin Pharmacol 2006;20:515-23. Available from: http://www3.interscience.wiley.com/journal/118553180/full [last accessed on 2010 Apr 4]. 
5.  Lang T. Twenty statistical errors even YOU can find in biomedical research articles. Croat Med J 2004;45:361-70. Available from: http://www.cmj.hr/2004/45/4/15311405.pdf [last accessed on 2010 Apr 2]. 
