Statistical methods in cancer research

C. Polgár, Zsolt Orosz, J. Fodor, James Majeski

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

This work has presented and elaborated on some of the more fundamental statistical methods that can be applied to medical data and that appear most frequently in the present literature. Familiarity with these concepts is necessary for understanding information so analyzed and is the foundation for critical evaluation of a variety of medical data, but the results of the analysis of clinical data can be no better than the data themselves. To evaluate the quality of the data obtained in a clinical trial, the following guidelines from Simon and Wittes [150] are recommended: Authors should discuss briefly the quality control methods used to ensure that the data are complete and accurate. A reliable procedure should be cited for ensuring that all patients entered on study are actually reported on. If no such procedures are in place, their absence should be noted. Any procedures employed to ensure that assessment of major endpoints is reliable should be mentioned (e.g., second-party review of responses) or their absence noted. All patients registered on study should be accounted for. The report should specify for each treatment the number of patients who were not eligible, who died, or who withdrew before treatment began. The distribution of follow-up times should be described for each treatment, and the number of patients lost to follow-up should be given. The study should have the smallest possible inevaluability rate for major endpoints. If more than 10% of eligible patients are lost to follow-up or considered inevaluable for response owing to early death, protocol violation, or missing information, we recommend great caution in interpreting the results. In randomized studies, the report should include a comparison of survival or other major endpoints for all eligible patients as randomized, that is, with no exclusions other than those not meeting eligibility criteria.
The sample size should be sufficient to either establish or conclusively rule out the existence of effects of clinically meaningful magnitude. For "negative" results in therapeutic comparisons, the adequacy of the sample size should be demonstrated either by presenting confidence limits for true treatment differences or by calculating statistical power for detecting differences. Authors should state whether there was an initial target sample size and, if so, what it was. They should specify how frequently interim analyses were performed and how the decisions to stop accrual and report results were made. All claims of therapeutic efficacy should be based on explicit comparison with a specific control group, except in special circumstances under which each patient is his own control. If non-randomized controls are used, the characteristics of the patients should be presented in detail and compared with those of the experimental group. Potential sources of bias should be discussed adequately. The patients studied should be described adequately. Applicability of the conclusions to other patients should be dealt with carefully. Claims of subset-specific treatment differences must be carefully documented statistically as more than the random results of multiple subset analyses. The methods of statistical analysis should be described in sufficient detail that a knowledgeable reader could reproduce the analysis if the data were available.
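The sample-size criterion above can be made concrete with a minimal sketch (not from the paper): the standard normal-approximation formula for the number of patients per arm needed to detect a difference between two response proportions at a given two-sided significance level and power. The function name and the example response rates are illustrative assumptions.

```python
# Sketch: normal-approximation sample size per arm for comparing two
# response proportions (two-sided test, no continuity correction).
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect p1 vs p2."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # critical value for a two-sided test
    z_b = z(power)           # quantile corresponding to desired power
    p_bar = (p1 + p2) / 2    # pooled proportion under the null
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# e.g. detecting a 30% vs 50% response rate, alpha = 0.05, power = 0.80
n = sample_size_two_proportions(0.30, 0.50)
```

Raising the requested power (say, from 0.80 to 0.90) increases the required accrual, which is why stating the initial target sample size and its assumptions matters when interpreting "negative" comparisons.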

Original language: English
Pages (from-to): 201-223
Number of pages: 23
Journal: Journal of Surgical Oncology
Volume: 76
Issue number: 3
DOIs: 10.1002/jso.1035
Publication status: Published - 2001

ASJC Scopus subject areas

  • Surgery
  • Oncology

Cite this

@article{2dd83a498f8a459ba56a7d3d7cc6a941,
title = "Statistical methods in cancer research",
author = "C. Polg{\'a}r and Zsolt Orosz and J. Fodor and James Majeski",
year = "2001",
doi = "10.1002/jso.1035",
language = "English",
volume = "76",
pages = "201--223",
journal = "Journal of Surgical Oncology",
issn = "0022-4790",
publisher = "Wiley-Liss Inc.",
number = "3",

}

TY - JOUR

T1 - Statistical methods in cancer research

AU - Polgár, C.

AU - Orosz, Zsolt

AU - Fodor, J.

AU - Majeski, James

PY - 2001

Y1 - 2001

UR - http://www.scopus.com/inward/record.url?scp=0035086280&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0035086280&partnerID=8YFLogxK

U2 - 10.1002/jso.1035

DO - 10.1002/jso.1035

M3 - Article

C2 - 11276025

AN - SCOPUS:0035086280

VL - 76

SP - 201

EP - 223

JO - Journal of Surgical Oncology

JF - Journal of Surgical Oncology

SN - 0022-4790

IS - 3

ER -