Journal of Pharmaceutical Negative Results
ORIGINAL ARTICLE
Year : 2011  |  Volume : 2  |  Issue : 2  |  Page : 87-90  

Reporting of sample size and power in negative clinical trials published in Indian medical journals


Department of Pharmacology, Govt. Medical College, Surat, India

Date of Web Publication: 25-Nov-2011

Correspondence Address:
Jaykaran
Department of Pharmacology, Govt. Medical College, Surat, Gujarat
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0976-9234.90220

   Abstract 

Background and Aim: It has been observed that negative clinical trials published in medical journals report the sample size calculation, and the individual components of that calculation, poorly. It has also been observed that they are underpowered to detect the actual difference between treatment outcomes. Because such data are scarce for Indian medical journals, we designed this study to critically analyze Indian medical journals for reporting of the sample size, the components of the sample size calculation, and power. We calculated post hoc power for 30% and 50% differences between treatment outcomes.

Materials and Methods: All negative clinical trials published between 2001 and 2008 in five Indian medical journals (Indian Journal of Pharmacology (IJP), Indian Pediatrics (IP), Indian Journal of Dermatology (IJD), Indian Journal of Dermatology, Venereology and Leprology (IJDVL), and Journal of Postgraduate Medicine (JPGM)) were analyzed by each author for reporting of the sample size and its components. Post hoc power for 30% and 50% differences between the outcomes was calculated with G*Power software. All data were expressed as frequencies, percentages, and 95% confidence intervals around the percentages, using SPSS ver. 17.

Results: The median sample size was 33 (range 9-85). Power was reported in 28 (41.1%, 95% CI 30.2% to 53%) trials. The sample size was not calculated in 34 (50%, 95% CI 38.4% to 61.5%) trials, and was calculated in full in only 2 (2.9%, 95% CI 0.8% to 10.1%) trials. Post hoc power above 80% for a 30% difference was found in 3 (4.4%, 95% CI 1.5% to 12%) trials, and for a 50% difference in 42 (61.7%, 95% CI 49.8% to 72.3%) trials.

Conclusion: Negative clinical trials published in five Indian journals report the sample size calculation poorly, and most are underpowered to detect a 30% or 50% difference between the outcomes. There is a need to generate more awareness regarding sample size and power calculation.

Keywords: Negative clinical trials, power, sample size


How to cite this article:
Jaykaran, Yadav P, Kantharia N D. Reporting of sample size and power in negative clinical trials published in Indian medical journals. J Pharm Negative Results 2011;2:87-90



   Introduction


Calculation of the sample size and power is an important prerequisite before conducting any clinical trial. The sample size calculation should not only be reported but also justified in published clinical trials. [1],[2],[3],[4] The aim of calculating the sample size is to include enough participants that clinically relevant effects can be measured. [5],[6] It is difficult to generalize the results of clinical trials with small sample sizes to the wider patient population. [7],[8],[9],[10],[11] Conducting such underpowered clinical trials also raises many ethical issues. [12],[13]

It has been observed that clinical trials published in various medical journals report poorly on many methodological aspects, including sample size calculation and power. [2],[10],[12],[13],[14],[15],[16]

Not only positive trials but also negative trials should be evaluated thoroughly before their results are generalized. [17] Editors and readers of a paper reporting a negative result should know whether the study was planned with adequate power to detect the hypothesized effect. It is essential for investigators to be aware that such a negative result can arise for a number of reasons: the real difference between groups was less than the hypothesized amount; there was no difference between groups; the variance of the observed data was greater than anticipated; there were confounding factors in the conduct of the study or the analysis of the data that led to a smaller observed difference than actually exists; or the real difference between groups is as great as or greater than hypothesized, but the result is a case of type II error. [17]

The relationship between negative findings and statistical power has been reported by Freiman et al.[10] in a review of 71 randomized controlled trials with negative results published during 1960--1977. These trials were drawn from a collection of 300 simple two-group parallel design trials. They were interested in assessing whether trials with negative results had sufficient statistical power to detect a 25% and a 50% relative difference between treatment interventions. Their review indicated that most of the trials had low power to detect these effects.

We decided to survey the negative clinical trials published in Indian medical journals for reporting of sample size calculation and power. We also decided to calculate the post hoc power of these studies, on the basis of the parameters given in each trial, to see whether they had sufficient power to detect a 30% and a 50% difference in the outcome. We believe this work is important because few studies have examined the reporting of statistics in Indian medical journals. [18],[19]


   Materials and Methods


We searched five Indian medical journals (Indian Journal of Pharmacology (IJP), Indian Pediatrics (IP), Indian Journal of Dermatology (IJD), Indian Journal of Dermatology, Venereology and Leprology (IJDVL), and Journal of Postgraduate Medicine (JPGM)). All two-arm parallel-group randomized clinical trials published in these journals over 8 years (2001--2008) were downloaded from the journal websites. Of these, only the clinical trials with negative results for the primary end point (or the most important end point, if no primary end point was given) were considered for the study.

Each author surveyed these negative clinical trials for reporting of sample size calculation and power on a predesigned proforma. We looked for reporting of the sample size and the various components of the sample size calculation (reporting of power, comment on power, reporting of delta, reason for delta, assumption for the control group, reason for the control group assumption, confidence interval, and whether the test was one tailed or two tailed). [5],[6] We noted whether the sample size was calculated fully, calculated partially, or not calculated at all. We also calculated the post hoc power of the trials for 30% and 50% differences between the outcomes with the help of G*Power software. [20] In post hoc analyses, power is computed as a function of alpha (type I error), the population effect size parameter, and the sample size used in the study. [20]
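The post hoc computation just described can be sketched with Python's statsmodels as a stand-in for G*Power (this is an illustrative sketch, not the authors' actual procedure; the per-arm size and control-group rate below are hypothetical):

```python
# Post hoc power for a two-arm trial comparing proportions, analogous to
# the G*Power analysis described above. All inputs are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

control_rate = 0.50                              # assumed control-arm response rate
treatment_rate = control_rate * 1.30             # a 30% relative difference
effect_size = proportion_effectsize(treatment_rate, control_rate)  # Cohen's h

# Power as a function of alpha, effect size, and the sample size actually used
power = NormalIndPower().power(effect_size=effect_size,
                               nobs1=17,         # per-arm n (the median total n was 33)
                               ratio=1.0,        # equal allocation between arms
                               alpha=0.05,
                               alternative='two-sided')
print(f"Post hoc power: {power:.2f}")
```

With roughly 17 subjects per arm, power to detect a difference of this size falls far short of the conventional 80%, which is the pattern the survey reports.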

Discrepancies observed between the authors were resolved by consensus.

Statistics

All data were expressed as frequencies, percentages, and 95% confidence intervals around the percentages, calculated with SPSS ver. 17.
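As an illustration of this interval estimation (a minimal sketch: the Wilson score method is assumed here, and the exact method used in SPSS may differ in the last digit), the 95% CI around a proportion such as 28 of 68 trials can be computed as:

```python
# 95% confidence interval around a reported percentage.
# The Wilson score method is an assumption; SPSS offers several methods,
# so the paper's intervals may differ slightly from this output.
from statsmodels.stats.proportion import proportion_confint

count, nobs = 28, 68          # e.g. trials that reported a power calculation
low, high = proportion_confint(count, nobs, alpha=0.05, method='wilson')
print(f"{100 * count / nobs:.1f}% (95% CI {100 * low:.1f}% to {100 * high:.1f}%)")
```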


   Results


There were 198 trials published over 8 years in the five Indian medical journals. Of these 198, 76 (38.3%, 95% CI 31.8--45.3%) were multicentric trials, and 68 (34.3%, 95% CI 28--41.2%) were negative trials (IJP=7, IP=24, IJD=14, IJDVL=15, JPGM=8). The median number of subjects included in these negative trials was 33 (range 9 to 85). A power calculation was reported in 28 (41.1%, 95% CI 30.2--53%) negative clinical trials. The median reported power was 90% (range 80--95%). Other results are summarized in [Table 1] and [Table 2].
Table 1: Reporting of the sample size and power in negative clinical trials


Table 2: Reporting of various components of sample size calculation in negative clinical trials





   Discussion


In this study we found that negative clinical trials published in five Indian medical journals are poor in their reporting of sample size calculation and power. We also observed that very few trials, 3 (4.4%, 95% CI 1.5--12%), had power of more than 80% for a 30% difference in outcome. A total of 42 (61.7%, 95% CI 49.8--72.3%) trials had power of more than 80% for a 50% difference in outcome.

If a negative clinical trial has a sufficient sample size/power, its results are interpretable, and it can be said that the treatment groups show no clinically important difference for the measured variable. But if a negative clinical trial has an inadequate sample size/power, the nonsignificance of the result may simply reflect the low power of the study. Hence, it is important to report power and sample size in clinical trials. [2],[3]

The findings of our study are similar to those of Freiman et al. (1978), in which only 7% of trials had at least 80% power to detect a 25% relative change between the treatment groups, and 31% had at least 80% power to detect a 50% relative change. [10]

Our study is also supported by the study by Moher et al., in which only 16% and 36% of trials had sufficient statistical power (80%) to detect a 25% or 50% relative difference, respectively. [2]

Our observation is also supported by other studies, though they were not restricted to negative clinical trials. For example, DerSimonian et al. and Pocock et al. evaluated the general methodological and statistical problems of clinical trials published in 1979 and 1985, respectively. Both reports found that statistical power was discussed in only about 12% of the published randomized controlled trials selected for review. [21],[22] In our own survey of articles published in two Indian medical journals, Indian Pediatrics and the Indian Journal of Pharmacology, we found that the sample size calculation was not mentioned in any article of the Indian Journal of Pharmacology, and in Indian Pediatrics it was mentioned in 24% of articles. [18],[19]

Calculation of the sample size and power also has ethical importance. Ethics committees do not permit the conduct of a clinical trial with an unnecessarily large, or an inadequately small, sample size. In trials designed with too large a sample size there is a risk of wasting resources and of unnecessarily exposing subjects to the intervention. A trial with too small a sample size is unethical because such trials are not scientifically sound and hence cannot give valid results that would justify the use of subjects and resources. [8],[12]

We observed that even when the sample size was calculated, information on the components of the calculation was often missing. Only 2 (2.9%, 95% CI 0.8% to 10.1%) clinical trials in our study reported all the components of the sample size calculation. All components should be reported in published clinical trials so that the sample size calculation can be replicated post hoc to check the adequacy, and hence the validity, of the results. In a study by Charles et al., a significant difference between the reported sample size and the replicated sample size was observed in about 30% of trials. [23] In that study, 43% of clinical trials did not mention all the components of the sample size calculation; in our study we observed this in 47% of trials.

There is a possibility that a sample size calculation was actually done but not reported. This seems unlikely: in a study by Liberati et al., in which authors were contacted personally and asked why the sample size was not reported, very few authors had calculated the sample size but omitted it from the published article. [24]

Our study has some limitations: it included only five Indian journals, fixed the power threshold at 80%, and did not examine whether reporting improved over the years.

We suggest that all the components of the sample size calculation be reported in clinical trials, and that in negative clinical trials the post hoc power also be given, so that readers gain more information about the validity of the results. We also suggest that the CONSORT statement be followed strictly when reporting clinical trials. [3]

We believe that negative clinical trials published in these five Indian journals are not free of methodological problems, important among which is the reporting of sample size and power. In the absence of this information it is difficult to judge the validity of the results. There is a need to generate awareness of these methodological aspects among researchers.

 
   References

1. ICH Harmonised Tripartite Guideline. Statistical principles for clinical trials. International Conference on Harmonisation E9 Expert Working Group. Stat Med 1999;18:1905-42.
2. Moher D, Dulberg CS, Wells GA. Statistical power, sample size, and their reporting in randomized controlled trials. JAMA 1994;272:122-4.
3. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Ann Intern Med 2001;134:663-94.
4. Moher D, Schulz KF, Altman DG. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel group randomised trials. Lancet 2001;357:1191-4.
5. Machin D, Campbell M, Fayers P, Pinol A. Sample size tables for clinical studies. 2nd ed. Oxford: Blackwell Science; 1997.
6. Schulz KF, Grimes DA. Sample size calculations in randomized trials: Mandatory and mystical. Lancet 2005;365:1348-53.
7. Lakatos E. Sample size determination. In: Redmond C, Colton T, editors. Biostatistics in clinical trials. Chichester: John Wiley and Sons; 2001.
8. Altman DG. Statistics and ethics in medical research: III How large a sample? Br Med J 1980;281:1336-8.
9. Senn S. Statistical issues in drug development. Chichester: John Wiley and Sons; 1997.
10. Freiman JA, Chalmers TC, Smith H Jr, Kuebler RR. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 "negative" trials. N Engl J Med 1978;299:690-4.
11. Wooding WM. Planning pharmaceutical clinical trials. Chichester: John Wiley and Sons; 1994.
12. Halpern SD, Karlawish JH, Berlin JA. The continuing unethical conduct of underpowered clinical trials. JAMA 2002;288:358-62.
13. Cleophas RC, Cleophas TJ. Is selective reporting of clinical research unethical as well as unscientific? Int J Clin Pharmacol Ther 1999;37:1-7.
14. Ruiz-Canela M, de Irala-Estevez J, Martinez-Gonzalez MA, Gomez-Gracia E, Fernandez-Crehuet J. Methodological quality and reporting of ethical requirements in clinical trials. J Med Ethics 2001;27:172-6.
15. Loeb C, Gandolfo C. Methodological problems of clinical trials in multi-infarct dementia. Neuroepidemiology 1990;9:223-7.
16. Puopolo M, Pocchiari M, Petrini C. Clinical trials and methodological problems in prion diseases. Lancet Neurol 2009;8:782-3.
17. Hebert S, Wright M, Dittus S, Elasy A. Prominent medical journals often provide insufficient information to assess the validity of studies with negative results. J Negat Results Biomed 2002;1:1.
18. Karan J, Goyal JP, Bhardwaj P, Yadav P. Statistical reporting in Indian Pediatrics. Indian Pediatr 2009;46:811-2.
19. Jaykaran, Yadav P, Bhardwaj P, Goyal J. Problems in reporting of statistics: Comparison between journal related to basic science with journal related to clinical practice. Internet J Epidemiol 2009;7:1.
20. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 2007;39:175-91.
21. DerSimonian R, Charette LJ, McPeek B, Mosteller F. Reporting on methods in clinical trials. N Engl J Med 1982;306:1332-7.
22. Pocock SJ, Hughes MD, Lee RJ. Statistical problems in the reporting of clinical trials: A survey of three medical journals. N Engl J Med 1987;317:426-32.
23. Charles P, Giraudeau B, Dechartres A, Baron G, Ravaud P. Reporting of sample size calculation in randomized controlled trials: Review. Br Med J 2009;338:1732.
24. Liberati A, Himel HN, Chalmers TC. A quality assessment of randomized control trials of primary treatment of breast cancer. J Clin Oncol 1986;4:942-51.



 
 