### Abstract

One characteristic of many methods used in neuropsychopharmacology is that a large number of parameters (P) are measured in relatively few subjects (n). Functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and genomic studies are typical examples. For example, one microarray chip can contain thousands of probes; therefore, in studies using microarray chips, P may be several thousand-fold larger than n. Statistical analysis of such studies is a challenging task, and they are referred to in the statistical literature as the small “n”, big “P” problem. The problem has many facets, including the controversies associated with multiple hypothesis testing. A typical scenario in this context is when two or more groups are compared attribute by attribute. If the inflated error rate due to multiple testing is neglected, many highly significant differences will be discovered; in reality, however, some of these significant differences are coincidental and not reproducible. Several methods have been proposed to solve this problem. In this review we discuss two of the proposed solutions: algorithms to compare sets, and statistical hypothesis tests that control the false discovery rate.
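The multiple-testing problem the abstract describes can be made concrete with a small simulation. The sketch below (not from the paper; group sizes, seed, and thresholds are illustrative assumptions) tests P = 5000 attributes across two groups of n = 10 subjects drawn from the same distribution, so every "discovery" is false. Naive testing at p < 0.05 flags roughly 5% of the attributes, while the Benjamini-Hochberg step-up procedure, one standard way to control the false discovery rate, rejects few or none:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Small-n, big-P setting: n = 10 subjects per group, P = 5000
# attributes (e.g. microarray probes); no attribute truly differs.
n, P = 10, 5000
group_a = rng.normal(size=(P, n))
group_b = rng.normal(size=(P, n))

# One two-sample t-test per attribute.
_, p = stats.ttest_ind(group_a, group_b, axis=1)

# Naive testing at alpha = 0.05: roughly 5% of the 5000 tests come out
# "significant", and here all of them are false discoveries.
naive_hits = int((p < 0.05).sum())

# Benjamini-Hochberg step-up procedure controlling the FDR at q = 0.05:
# reject the k smallest p-values, where k is the largest index with
# p_(k) <= q * k / P.
q = 0.05
p_sorted = np.sort(p)
below = np.nonzero(p_sorted <= q * np.arange(1, P + 1) / P)[0]
bh_hits = int(below[-1]) + 1 if below.size else 0

print(naive_hits, bh_hits)
```

Because every BH rejection must also satisfy p < q, the BH count can never exceed the naive count; under this global-null simulation it is typically zero.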

| Original language | Hungarian |
| --- | --- |
| Pages (from-to) | 23-30 |
| Number of pages | 8 |
| Journal | Neuropsychopharmacologia Hungarica |
| Volume | 17 |
| Issue number | 1 |
| Publication status | Published - 2015 |

### ASJC Scopus subject areas

- Neuroscience(all)
- Pharmacology, Toxicology and Pharmaceutics(all)
- Neuropsychology and Physiological Psychology
- Clinical Neurology

### Cite this

**A kis „n”, nagy „P” probléma a neuropszichofarmakológiában, avagy hogyan kontrolláljuk a hamis felfedezések arányát** [The small “n”, big “P” problem in neuropsychopharmacology, or how to control the false discovery rate]. / Petschner, Péter; Bagdy, G.; Tóthfalusi, L.

Research output: Contribution to journal › Article

*Neuropsychopharmacologia Hungarica*, vol. 17, no. 1, pp. 23-30.

TY - JOUR

T1 - A kis „n”, nagy „P” probléma a neuropszichofarmakológiában, avagy hogyan kontrolláljuk a hamis felfedezések arányát

AU - Petschner, Péter

AU - Bagdy, G.

AU - Tóthfalusi, L.

PY - 2015

Y1 - 2015


KW - False discovery rate

KW - FMRI

KW - Functional imaging studies

KW - Gene set enrichment analysis

KW - Microarray

KW - Permutation test

KW - Statistics

UR - http://www.scopus.com/inward/record.url?scp=84927643887&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84927643887&partnerID=8YFLogxK

M3 - Article

C2 - 25935380

AN - SCOPUS:84927643887

VL - 17

SP - 23

EP - 30

JO - Neuropsychopharmacologia Hungarica

JF - Neuropsychopharmacologia Hungarica

SN - 1419-8711

IS - 1

ER -