
How to Report Statistics

Ensure appropriateness and rigor, avoid flexibility and above all never manipulate results

In many fields, a statistical analysis forms the heart of both the methods and results sections of a manuscript. Learn how to report statistical analyses, and what other context is important for publication success and future reproducibility.

A matter of principle

First and foremost, the statistical methods employed in research must always be:

  • Appropriate for the study design
  • Rigorously reported in sufficient detail for others to reproduce the analysis
  • Free of manipulation, selective reporting, or other forms of “spin”

Just as importantly, statistical practices must never be manipulated or misused. Misrepresenting data, selectively reporting results, or searching for patterns that can be presented as statistically significant, in an attempt to yield a conclusion that seems more worthy of attention or publication, is a serious ethical violation. Although it may seem harmless, using statistics to “spin” results can prevent publication, undermine a published study, or lead to investigation and retraction.

Supporting public trust in science through transparency and consistency

Along with clear methods and a transparent study design, the appropriate use of statistical methods and analyses affects editorial evaluation and readers’ understanding of and trust in science.

In 2011, the paper False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant showed that “flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates” and demonstrated “how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis”.

Arguably, such problems with flexible analysis led to the “reproducibility crisis” that we read about today.

A constant principle of rigorous science

The appropriate, rigorous, and transparent use of statistics is a constant principle of rigorous, transparent, and Open Science. Aim to be thorough, even if a particular journal doesn’t require the same level of detail. Trust in science is everyone’s responsibility, and you cannot go wrong by exceeding a minimum standard of information and reporting.


Sound statistical practices

While it is difficult to provide statistical guidelines that are relevant to every discipline, type of research, and analytical technique, adherence to rigorous and appropriate principles remains key. Here are some ways to ensure your statistics are sound.

Define your analytical methodology before you begin

Take the time to consider and develop a thorough study design that defines your line of inquiry, what you plan to do, what data you will collect, and how you will analyze it. (If you applied for research grants or ethical approval, you probably already have a plan in hand!) Refer back to your study design at key moments in the research process, and above all, stick to it.

To avoid flexibility and improve the odds of acceptance, preregister your study design with a journal

Many journals offer the option to submit a study design for peer review before research begins through a practice known as preregistration. If the editors approve your study design, you’ll receive a provisional acceptance for a future research article reporting the results. Preregistering is a great way to head off any intentional or unintentional flexibility in analysis. By declaring your analytical approach in advance you’ll increase the credibility and reproducibility of your results and help address publication bias, too. Getting peer review feedback on your study design and analysis plan before the research has begun (when you can still make changes!) makes your research even stronger and increases your chances of publication—even if the results are negative or null. Never underestimate how much you can help increase the public’s trust in science by planning your research in this way.

Imagine replicating or extending your own work, years in the future

Imagine that you are describing your approach to statistical analysis for your future self, in exactly the same way as we have described for writing your methods section. What would you need to know to replicate or extend your own work? When you consider that you might be at a different institution, working with different colleagues, using different programs, applications, and resources — or maybe even adopting new statistical techniques that have emerged — you can picture the level of reporting specificity that you yourself would require to redo or extend your work. Consider:

  • Which details would you need to be reminded of? 
  • What did you do to the raw data before analysis?
  • Did the purpose of the analysis change before or during the experiments?
  • What participants did you decide to exclude? 
  • What process did you adjust, during your work? 

Even if a necessary adjustment you made was not ideal, transparency is the key to ensuring this is not regarded as an issue in the future. It is far better to transparently convey any non-optimal techniques or constraints than to conceal them, which could result in reproducibility or ethical issues downstream.

Existing standards, checklists, guidelines for specific disciplines

You can apply the Open Science practices outlined above no matter what your area of expertise—but in many cases, you may still need more detailed guidance specific to your own field. Many  disciplines, fields, and projects have worked hard to develop guidelines and resources  to help with statistics, and to identify and avoid bad statistical practices. Below, you’ll find some of the key materials. 

TIP: Do you have a specific journal in mind?

Be sure to read the submission guidelines for the specific journal you are submitting to, so that you can identify any journal- or field-specific policies, initiatives, or tools to use.

Articles on statistical methods and reporting

Makin, T.R., & Orban de Xivry, J.J. Science Forum: Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. eLife 8:e48175 (2019). https://doi.org/10.7554/eLife.48175

Munafò, M., Nosek, B., Bishop, D., et al. A manifesto for reproducible science. Nat Hum Behav 1, 0021 (2017). https://doi.org/10.1038/s41562-016-0021

Writing tips

Your use of statistics should be rigorous, appropriate, and uncompromising in its avoidance of analytical flexibility. This can be difficult, but do not compromise on the rigorous standards that credibility requires.

What to do

  • Remember that trust in science is everyone’s responsibility.
  • Keep in mind future replicability.
  • Consider preregistering your analysis plan so that it is (i) reviewed before results are collected, catching problems before they occur, and (ii) fixed in advance, removing any analytical flexibility.
  • Follow principles, but also checklists and field- and journal-specific guidelines.
  • Consider a commitment to rigorous and transparent science a personal responsibility, not simply a matter of adhering to journal guidelines.
  • Be specific about all decisions made during the experiments that someone reproducing your work would need to know.
  • Consider a course in advanced and new statistics, if you feel you have not focused on it enough during your research training.

What not to do

Don’t

  • Misuse statistics to influence significance or other interpretations of results
  • Conduct your statistical analyses if you are unsure of what you are doing—seek feedback (e.g. via preregistration) from a statistical specialist first.

Indian Journal of Anaesthesia, 60(9), September 2016

Basic statistical tools in research and data analysis

Zulfiqar Ali

Department of Anaesthesiology, Division of Neuroanaesthesiology, Sheri Kashmir Institute of Medical Sciences, Soura, Srinagar, Jammu and Kashmir, India

S Bala Bhaskar

Department of Anaesthesiology and Critical Care, Vijayanagar Institute of Medical Sciences, Bellary, Karnataka, India

Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article attempts to acquaint the reader with the basic research tools that are utilised while conducting various studies. It covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

INTRODUCTION

Statistics is a branch of science that deals with the collection, organisation and analysis of data and the drawing of inferences from a sample about the whole population.[1] This requires a proper design of the study, an appropriate selection of the study sample and the choice of a suitable statistical test. An adequate knowledge of statistics is necessary for the proper design of an epidemiological study or a clinical trial. Improper statistical methods may result in erroneous conclusions which may lead to unethical practice.[2]

A variable is a characteristic that varies from one individual member of a population to another.[3] Variables such as height and weight are measured by some type of scale, convey quantitative information and are called quantitative variables. Sex and eye colour give qualitative information and are called qualitative variables[3] [Figure 1].

[Figure 1: Classification of variables]

Quantitative variables

Quantitative or numerical data are subdivided into discrete and continuous measurements. Discrete numerical data are recorded as a whole number such as 0, 1, 2, 3,… (integer), whereas continuous data can assume any value. Observations that can be counted constitute the discrete data and observations that can be measured constitute the continuous data. Examples of discrete data are number of episodes of respiratory arrests or the number of re-intubations in an intensive care unit. Similarly, examples of continuous data are the serial serum glucose levels, partial pressure of oxygen in arterial blood and the oesophageal temperature.

A hierarchical scale of increasing precision can be used for observing and recording the data which is based on categorical, ordinal, interval and ratio scales [ Figure 1 ].

Categorical or nominal variables are unordered. The data are merely classified into categories and cannot be arranged in any particular order. If only two categories exist (as with gender: male and female), the data are called dichotomous (or binary). The various causes of re-intubation in an intensive care unit (upper airway obstruction, impaired clearance of secretions, hypoxemia, hypercapnia, pulmonary oedema and neurological impairment) are examples of a categorical variable.

Ordinal variables have a clear ordering between the variables. However, the ordered data may not have equal intervals. Examples are the American Society of Anesthesiologists status or Richmond agitation-sedation scale.

Interval variables are similar to an ordinal variable, except that the intervals between the values of the interval variable are equally spaced. A good example of an interval scale is the Fahrenheit degree scale used to measure temperature. With the Fahrenheit scale, the difference between 70° and 75° is equal to the difference between 80° and 85°: The units of measurement are equal throughout the full range of the scale.

Ratio scales are similar to interval scales, in that equal differences between scale values have equal quantitative meaning. However, ratio scales also have a true zero point, which gives them an additional property. For example, the system of centimetres is an example of a ratio scale. There is a true zero point and the value of 0 cm means a complete absence of length. The thyromental distance of 6 cm in an adult may be twice that of a child in whom it may be 3 cm.

STATISTICS: DESCRIPTIVE AND INFERENTIAL STATISTICS

Descriptive statistics[4] try to describe the relationship between variables in a sample or population. Descriptive statistics provide a summary of data in the form of mean, median and mode. Inferential statistics[4] use a random sample of data taken from a population to describe and make inferences about the whole population. They are valuable when it is not possible to examine each member of an entire population. Examples of descriptive and inferential statistics are illustrated in Table 1.

[Table 1: Examples of descriptive and inferential statistics]

Descriptive statistics

The extent to which the observations cluster around a central location is described by the central tendency and the spread towards the extremes is described by the degree of dispersion.

Measures of central tendency

The measures of central tendency are mean, median and mode.[6] The mean (or arithmetic average) is the sum of all the scores divided by the number of scores. The mean may be influenced profoundly by extreme values. For example, the average ICU stay of organophosphorus poisoning patients may be influenced by a single patient who stays in the ICU for around 5 months because of septicaemia. Such extreme values are called outliers. The formula for the mean is

\bar{x} = \dfrac{\sum x}{n}

where x = each observation and n = number of observations. The median[6] is the middle value of a ranked data set (half of the values in the sample lie above and half below it), while the mode is the most frequently occurring value in a distribution. The range describes the spread, or variability, of a sample[7] and is given by the minimum and maximum values of the variable. If we rank the data and group the observations into percentiles, we get a better picture of the pattern of spread of the variable. In percentiles, we rank the observations into 100 equal parts; we can then describe the 25th, 50th, 75th or any other percentile. The median is the 50th percentile. The interquartile range contains the middle 50% of the observations about the median (25th-75th percentile). Variance[7] is a measure of how spread out the distribution is; it indicates how closely individual observations cluster about the mean value. The variance of a population is defined by the following formula:

\sigma^2 = \dfrac{\sum_{i=1}^{N} (X_i - \bar{X})^2}{N}

where σ² is the population variance, X̄ is the population mean, Xi is the ith element from the population and N is the number of elements in the population. The variance of a sample is defined by a slightly different formula:

s^2 = \dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}

where s² is the sample variance, x̄ is the sample mean, xi is the ith element from the sample and n is the number of elements in the sample. The formula for the variance of a population has N as the denominator, whereas the sample variance uses n − 1. The expression n − 1 is known as the degrees of freedom and is one less than the number of observations: once the sample mean is fixed, each observation is free to vary except the last one, which must take a defined value. The variance is measured in squared units. To make the interpretation of the data simple and to retain the basic unit of observation, the square root of the variance is used. The square root of the variance is the standard deviation (SD).[8] The SD of a population is defined by the following formula:

\sigma = \sqrt{\dfrac{\sum_{i=1}^{N} (X_i - \bar{X})^2}{N}}

where σ is the population SD, X̄ is the population mean, Xi is the ith element from the population and N is the number of elements in the population. The SD of a sample is defined by a slightly different formula:

s = \sqrt{\dfrac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}

where s is the sample SD, x̄ is the sample mean, xi is the ith element from the sample and n is the number of elements in the sample. An example of the calculation of the mean, variance and SD is illustrated in Table 2.

[Table 2: Example of mean, variance and standard deviation]
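As a quick illustration of these formulas, here is a minimal R sketch on a small, made-up set of observations; note that R's built-in var() and sd() use the sample (n − 1) denominator, so the population versions are computed by hand.

x <- c(4, 6, 8, 10, 12)                  # hypothetical observations
n <- length(x)

mean(x)                                  # arithmetic mean: sum(x) / n
median(x)                                # 50th percentile
quantile(x, probs = c(0.25, 0.5, 0.75))  # quartiles; IQR(x) gives the interquartile range

var(x)                                   # sample variance, denominator n - 1
sd(x)                                    # sample SD, the square root of var(x)

sum((x - mean(x))^2) / n                 # population variance, denominator N
sqrt(sum((x - mean(x))^2) / n)           # population SD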

Normal distribution or Gaussian distribution

Most biological variables cluster around a central value, with symmetrical positive and negative deviations about this point.[1] The standard normal distribution curve is a symmetrical, bell-shaped curve. In a normal distribution, about 68% of the scores lie within 1 SD of the mean, around 95% within 2 SDs and about 99.7% within 3 SDs of the mean [Figure 2].

[Figure 2: Normal distribution curve]

Skewed distribution

A skewed distribution has an asymmetry of the variable about its mean. In a negatively skewed distribution [Figure 3], the mass of the distribution is concentrated on the right, producing a longer left tail. In a positively skewed distribution, the mass of the distribution is concentrated on the left, producing a longer right tail.

[Figure 3: Curves showing negatively skewed and positively skewed distributions]

Inferential statistics

In inferential statistics, data are analysed from a sample to make inferences in the larger collection of the population. The purpose is to answer or test the hypotheses. A hypothesis (plural hypotheses) is a proposed explanation for a phenomenon. Hypothesis tests are thus procedures for making rational decisions about the reality of observed effects.

Probability is the measure of the likelihood that an event will occur. Probability is quantified as a number between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty).

In inferential statistics, the term ‘null hypothesis’ (H0, ‘H-naught’ or ‘H-null’) denotes that there is no relationship (difference) between the population variables in question.[9]

The alternative hypothesis (H1 or Ha) denotes that the hypothesised relationship (difference) between the variables is expected to hold true.[9]

The P value (or calculated probability) is the probability of obtaining the observed result, or a more extreme one, by chance if the null hypothesis is true. The P value is a number between 0 and 1 and is used by researchers in deciding whether to reject or retain the null hypothesis [Table 3].

[Table 3: P values with interpretation]

If the P value is less than the arbitrarily chosen value (known as α, or the significance level), the null hypothesis (H0) is rejected [Table 4]. However, if the null hypothesis (H0) is incorrectly rejected, this is known as a Type I error.[11] Further details regarding the alpha error, the beta error, sample size calculation and the factors influencing them are dealt with in another article in this issue by Das S et al.[12]

[Table 4: Illustration of the null hypothesis]

PARAMETRIC AND NON-PARAMETRIC TESTS

Numerical data (quantitative variables) that are normally distributed are analysed with parametric tests.[ 13 ]

The two most basic prerequisites for parametric statistical analysis are:

  • The assumption of normality which specifies that the means of the sample group are normally distributed
  • The assumption of equal variance which specifies that the variances of the samples and of their corresponding population are equal.

However, if the distribution of the sample is skewed towards one side or the distribution is unknown due to the small sample size, non-parametric[ 14 ] statistical techniques are used. Non-parametric tests are used to analyse ordinal and categorical data.

Parametric tests

The parametric tests assume that the data are on a quantitative (numerical) scale, with a normal distribution of the underlying population. The samples have the same variance (homogeneity of variances). The samples are randomly drawn from the population, and the observations within a group are independent of each other. The commonly used parametric tests are the Student's t-test, analysis of variance (ANOVA) and repeated measures ANOVA.

Student's t-test

Student's t-test is used to test the null hypothesis that there is no difference between the means of two groups. It is used in three circumstances:

  • To test if the mean of a single sample differs significantly from a known population mean (the one-sample t-test):

t = \dfrac{\bar{X} - \mu}{SE}

where X̄ = sample mean, μ = population mean and SE = standard error of the mean

  • To test if the population means estimated by two independent samples differ significantly (the unpaired or two-sample t-test):

t = \dfrac{\bar{X}_1 - \bar{X}_2}{SE}

where X̄1 − X̄2 is the difference between the means of the two groups and SE denotes the standard error of that difference.

  • To test if the population means estimated by two dependent samples differ significantly (the paired t-test). A usual setting for the paired t-test is when measurements are made on the same subjects before and after a treatment.

The formula for the paired t-test is:

t = \dfrac{\bar{d}}{SE}

where d̄ is the mean difference and SE denotes the standard error of this difference.

The group variances can be compared using the F-test. The F-test is the ratio of the two variances (var 1/var 2). If F differs significantly from 1.0, it is concluded that the group variances differ significantly.
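In R, all three forms of the t-test are available through t.test(), and var.test() performs the F-test of equal variances; the vectors below are simulated placeholders rather than data from the text, so this is only a sketch.

set.seed(1)
before <- rnorm(20, mean = 100, sd = 15)   # hypothetical paired measurements on the same subjects
after  <- rnorm(20, mean = 95,  sd = 15)
groupA <- rnorm(25, mean = 50, sd = 10)    # hypothetical independent groups
groupB <- rnorm(25, mean = 55, sd = 10)

t.test(groupA, mu = 52)                    # one-sample t-test against a known population mean
t.test(groupA, groupB, var.equal = TRUE)   # unpaired (two-sample) t-test
t.test(before, after, paired = TRUE)       # paired t-test
var.test(groupA, groupB)                   # F-test comparing the two group variances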

Analysis of variance

The Student's t-test cannot be used for the comparison of three or more groups. The purpose of ANOVA is to test whether there is any significant difference between the means of two or more groups.

In ANOVA, we study two variances – (a) between-group variability and (b) within-group variability. The within-group variability (error variance) is the variation that cannot be accounted for in the study design. It is based on random differences present in our samples.

However, the between-group (or effect variance) is the result of our treatment. These two estimates of variances are compared using the F-test.

A simplified formula for the F statistic is:

F = \dfrac{MS_b}{MS_w}

where MSb is the mean squares between the groups and MSw is the mean squares within groups.
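A minimal R sketch of a one-way ANOVA on invented data; summary() prints the ANOVA table, from which F is the ratio of the between-group to the within-group mean squares.

set.seed(2)
d <- data.frame(
  outcome = c(rnorm(10, mean = 10), rnorm(10, mean = 12), rnorm(10, mean = 15)),
  group   = factor(rep(c("A", "B", "C"), each = 10))
)

fit <- aov(outcome ~ group, data = d)
summary(fit)   # reports the between- and within-group mean squares and F = MSb / MSw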

Repeated measures analysis of variance

As with ANOVA, repeated measures ANOVA analyses the equality of means of three or more groups. However, repeated measures ANOVA is used when the same subjects are measured under different conditions or at different points in time.

As the variables are measured from a sample at different points of time, the measurement of the dependent variable is repeated. Using a standard ANOVA in this case is not appropriate because it fails to model the correlation between the repeated measures: The data violate the ANOVA assumption of independence. Hence, in the measurement of repeated dependent variables, repeated measures ANOVA should be used.
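One common way to fit such a model in base R is aov() with an Error() term that nests the repeated measurements within subjects; the long-format data frame below is invented for illustration, and dedicated mixed-model packages are often preferred in practice.

set.seed(3)
d <- data.frame(
  subject = factor(rep(1:8, times = 3)),                  # 8 subjects, each measured 3 times
  time    = factor(rep(c("t1", "t2", "t3"), each = 8)),
  score   = rnorm(24, mean = rep(c(10, 12, 13), each = 8))
)

fit <- aov(score ~ time + Error(subject/time), data = d)
summary(fit)   # tests the time effect while accounting for the repeated measures on each subject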

Non-parametric tests

When the assumptions of normality are not met and the sample means are not normally distributed, parametric tests can lead to erroneous results. Non-parametric tests (distribution-free tests) are used in such situations as they do not require the normality assumption.[15] Non-parametric tests may fail to detect a significant difference when compared with a parametric test; that is, they usually have less power.

As is done for the parametric tests, the test statistic is compared with known values for the sampling distribution of that statistic and the null hypothesis is accepted or rejected. The types of non-parametric analysis techniques and the corresponding parametric analysis techniques are delineated in Table 5 .

[Table 5: Analogues of parametric and non-parametric tests]

Median test for one sample: The sign test and Wilcoxon's signed rank test

The sign test and Wilcoxon's signed rank test are used for median tests of one sample. These tests examine whether one instance of sample data is greater or smaller than the median reference value.

This test examines the hypothesis about the median θ0 of a population. It tests the null hypothesis H0: θ = θ0. When an observed value (Xi) is greater than the reference value (θ0), it is marked with a + sign; when it is smaller than the reference value, it is marked with a − sign. If an observed value is equal to the reference value (θ0), it is eliminated from the sample.

If the null hypothesis is true, there will be an equal number of + signs and − signs.

The sign test ignores the actual values of the data and only uses + or − signs. Therefore, it is useful when it is difficult to measure the values.

Wilcoxon's signed rank test

A major limitation of the sign test is that we lose the quantitative information in the data and merely use the + or − signs. Wilcoxon's signed rank test not only examines the observed values in comparison with θ0 but also takes their relative sizes into consideration, adding more statistical power to the test. As in the sign test, if an observed value is equal to the reference value θ0, it is eliminated from the sample.

Wilcoxon's rank sum test ranks all data points in order, calculates the rank sum of each sample and compares the difference in the rank sums.

Mann-Whitney test

It is used to test the null hypothesis that two samples have the same median or, alternatively, whether observations in one sample tend to be larger than observations in the other.

Mann–Whitney test compares all data (xi) belonging to the X group and all data (yi) belonging to the Y group and calculates the probability of xi being greater than yi: P(xi > yi). The null hypothesis states that P(xi > yi) = P(xi < yi) = 1/2, while the alternative hypothesis states that P(xi > yi) ≠ 1/2.

Kolmogorov-Smirnov test

The two-sample Kolmogorov-Smirnov (KS) test was designed as a generic method to test whether two random samples are drawn from the same distribution. The null hypothesis of the KS test is that both distributions are identical. The statistic of the KS test is a distance between the two empirical distributions, computed as the maximum absolute difference between their cumulative curves.

Kruskal-Wallis test

The Kruskal–Wallis test is the non-parametric counterpart of the analysis of variance.[14] It analyses whether there is any difference in the median values of three or more independent samples. The data values are ranked in increasing order, the rank sums are calculated and the test statistic is then computed.

Jonckheere test

In contrast to the Kruskal–Wallis test, the Jonckheere test assumes an a priori ordering of the groups, which gives it more statistical power than the Kruskal–Wallis test.[14]

Friedman test

The Friedman test is a non-parametric test for the difference between several related samples. It is an alternative to repeated measures ANOVA and is used when the same parameter has been measured under different conditions on the same subjects.[13]
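Several of the non-parametric tests described above have base R counterparts; the samples below are simulated, skewed placeholders used only to show the calls.

set.seed(4)
x <- rexp(15, rate = 1)     # hypothetical skewed sample
y <- rexp(15, rate = 0.7)
z <- rexp(15, rate = 0.5)

wilcox.test(x, mu = 1)              # Wilcoxon signed rank test against a reference median
wilcox.test(x, y)                   # Mann-Whitney / Wilcoxon rank sum test for two samples
ks.test(x, y)                       # two-sample Kolmogorov-Smirnov test
kruskal.test(list(x, y, z))         # Kruskal-Wallis test for three or more independent samples

m <- matrix(rnorm(18), nrow = 6, ncol = 3)   # 6 subjects measured under 3 conditions (invented)
friedman.test(m)                             # Friedman test for related samples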

Tests to analyse the categorical data

The Chi-square test, Fisher's exact test and McNemar's test are used to analyse categorical or nominal variables. The Chi-square test compares the observed frequencies with the frequencies expected if there were no differences between the groups (i.e., under the null hypothesis). It is calculated as the sum of the squared differences between the observed (O) and the expected (E) data (the deviation, d) divided by the expected data, using the following formula:

\chi^2 = \sum \dfrac{(O - E)^2}{E}

A Yates correction factor is used when the sample size is small. Fisher's exact test is used to determine if there are non-random associations between two categorical variables. It does not assume random sampling, and instead of referring a calculated statistic to a sampling distribution, it calculates an exact probability. McNemar's test is used for paired nominal data. It is applied to a 2 × 2 table with paired, dependent samples and determines whether the row and column frequencies are equal (that is, whether there is ‘marginal homogeneity’). The null hypothesis is that the paired proportions are equal. The Mantel-Haenszel Chi-square test is a multivariate test as it analyses multiple grouping variables. It stratifies according to the nominated confounding variables and identifies any that affect the primary outcome variable. If the outcome variable is dichotomous, then logistic regression is used.
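The corresponding base R functions, applied to invented counts, look like this; the 2 × 2 table and the stratified array are placeholders, not data from the text.

tab <- matrix(c(20, 10, 5, 15), nrow = 2,
              dimnames = list(exposure = c("yes", "no"),
                              outcome  = c("yes", "no")))

chisq.test(tab)      # Chi-square test; Yates' continuity correction is applied by default for 2 x 2 tables
fisher.test(tab)     # Fisher's exact test for small samples
mcnemar.test(tab)    # McNemar's test, treating the table as paired nominal classifications

strat <- array(c(10, 5, 4, 9, 8, 6, 3, 11), dim = c(2, 2, 2))   # a 2 x 2 table within each of 2 strata
mantelhaen.test(strat)                                          # Mantel-Haenszel test across the strata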

SOFTWARE AVAILABLE FOR STATISTICS, SAMPLE SIZE CALCULATION AND POWER ANALYSIS

Numerous statistical software packages are currently available. Commonly used packages are the Statistical Package for the Social Sciences (SPSS, IBM Corporation), the Statistical Analysis System (SAS, SAS Institute, North Carolina, United States of America), R (designed by Ross Ihaka and Robert Gentleman of the R Core Team), Minitab (Minitab Inc.), Stata (StataCorp) and MS Excel (Microsoft).

There are a number of web resources which are related to statistical power analyses. A few are:

  • StatPages.net – provides links to a number of online power calculators
  • G-Power – provides a downloadable power analysis program that runs under DOS
  • Power Analysis for ANOVA Designs – an interactive site that calculates the power, or the sample size needed to attain a given power, for one effect in a factorial ANOVA design
  • SamplePower – a program from SPSS that outputs a complete report on the computer screen which can be cut and pasted into another document.

It is important that a researcher knows the concepts of the basic statistical methods used for conduct of a research study. This will help to conduct an appropriately well-designed study leading to valid and reliable results. Inappropriate use of statistical techniques may lead to faulty conclusions, inducing errors and undermining the significance of the article. Bad statistics may lead to bad research, and bad research may lead to unethical practice. Hence, an adequate knowledge of statistics and the appropriate use of statistical tests are important. An appropriate knowledge about the basic statistical methods will go a long way in improving the research designs and producing quality medical research which can be utilised for formulating the evidence-based guidelines.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Frequently asked questions

What is statistical analysis?

Statistical analysis is the main method for analyzing quantitative research data. It uses probabilities and models to test predictions about a population from sample data.

Frequently asked questions: Statistics

As the degrees of freedom increase, Student’s t distribution becomes less leptokurtic , meaning that the probability of extreme values decreases. The distribution becomes more and more similar to a standard normal distribution .

The three categories of kurtosis are:

  • Mesokurtosis : An excess kurtosis of 0. Normal distributions are mesokurtic.
  • Platykurtosis : A negative excess kurtosis. Platykurtic distributions are thin-tailed, meaning that they have few outliers .
  • Leptokurtosis : A positive excess kurtosis. Leptokurtic distributions are fat-tailed, meaning that they have many outliers.

Probability distributions belong to two broad categories: discrete probability distributions and continuous probability distributions . Within each category, there are many types of probability distributions.

Probability is the relative frequency over an infinite number of trials.

For example, the probability of a coin landing on heads is .5, meaning that if you flip the coin an infinite number of times, it will land on heads half the time.

Since doing something an infinite number of times is impossible, relative frequency is often used as an estimate of probability. If you flip a coin 1000 times and get 507 heads, the relative frequency, .507, is a good estimate of the probability.
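A quick R simulation of this idea (the 1000 flips are generated, not real data):

set.seed(42)
flips <- sample(c("heads", "tails"), size = 1000, replace = TRUE)
mean(flips == "heads")   # relative frequency of heads, an estimate of the true probability 0.5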

Categorical variables can be described by a frequency distribution. Quantitative variables can also be described by a frequency distribution, but first they need to be grouped into interval classes .

A histogram is an effective way to tell if a frequency distribution appears to have a normal distribution .

Plot a histogram and look at the shape of the bars. If the bars roughly follow a symmetrical bell or hill shape, like the example below, then the distribution is approximately normally distributed.

[Figure: Frequency distribution with an approximately normal, bell-shaped histogram]
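In R, a histogram like this can be drawn with hist(); the scores below are simulated purely to illustrate the check.

set.seed(5)
scores <- rnorm(200, mean = 50, sd = 10)   # hypothetical, approximately normal data
hist(scores, breaks = 20, xlab = "Score",
     main = "Checking the shape of a frequency distribution")
# a roughly symmetrical bell shape suggests the data are approximately normally distributed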

You can use the CHISQ.INV.RT() function to find a chi-square critical value in Excel.

For example, to calculate the chi-square critical value for a test with df = 22 and α = .05, click any blank cell and type:

=CHISQ.INV.RT(0.05,22)

You can use the qchisq() function to find a chi-square critical value in R.

For example, to calculate the chi-square critical value for a test with df = 22 and α = .05:

qchisq(p = .05, df = 22, lower.tail = FALSE)

You can use the chisq.test() function to perform a chi-square test of independence in R. Give the contingency table as a matrix for the “x” argument. For example:

m = matrix(data = c(89, 84, 86, 9, 8, 24), nrow = 3, ncol = 2)

chisq.test(x = m)

You can use the CHISQ.TEST() function to perform a chi-square test of independence in Excel. It takes two arguments, CHISQ.TEST(observed_range, expected_range), and returns the p value.

Chi-square goodness of fit tests are often used in genetics. One common application is to check if two genes are linked (i.e., if the assortment is independent). When genes are linked, the allele inherited for one gene affects the allele inherited for another gene.

Suppose that you want to know if the genes for pea texture (R = round, r = wrinkled) and color (Y = yellow, y = green) are linked. You perform a dihybrid cross between two heterozygous ( RY / ry ) pea plants. The hypotheses you’re testing with your experiment are:

  • Null hypothesis (H0): The offspring have an equal probability of inheriting all possible genotypic combinations. This would suggest that the genes are unlinked.
  • Alternative hypothesis (Ha): The offspring do not have an equal probability of inheriting all possible genotypic combinations. This would suggest that the genes are linked.

You observe 100 peas:

  • 78 round and yellow peas
  • 6 round and green peas
  • 4 wrinkled and yellow peas
  • 12 wrinkled and green peas

Step 1: Calculate the expected frequencies

To calculate the expected values, you can make a Punnett square. If the two genes are unlinked, the probability of each genotypic combination is equal.

The expected phenotypic ratios are therefore 9 round and yellow: 3 round and green: 3 wrinkled and yellow: 1 wrinkled and green.

From this, you can calculate the expected phenotypic frequencies for 100 peas:

  • 56.25 round and yellow peas
  • 18.75 round and green peas
  • 18.75 wrinkled and yellow peas
  • 6.25 wrinkled and green peas

Step 2: Calculate chi-square

Χ² = 8.41 + 8.67 + 11.60 + 5.29 = 33.97

Step 3: Find the critical chi-square value

Since there are four groups (round and yellow, round and green, wrinkled and yellow, wrinkled and green), there are three degrees of freedom .

For a test of significance at α = .05 and df = 3, the Χ² critical value is 7.82.

Step 4: Compare the chi-square value to the critical value

Χ² = 33.97

Critical value = 7.82

The Χ² value is greater than the critical value.

Step 5: Decide whether to reject the null hypothesis

The Χ² value is greater than the critical value, so we reject the null hypothesis that the population of offspring have an equal probability of inheriting all possible genotypic combinations. There is a significant difference between the observed and expected genotypic frequencies (p < .05).

The data support the alternative hypothesis that the offspring do not have an equal probability of inheriting all possible genotypic combinations, which suggests that the genes are linked.

You can use the chisq.test() function to perform a chi-square goodness of fit test in R. Give the observed values in the “x” argument, give the expected values in the “p” argument, and set “rescale.p” to true. For example:

chisq.test(x = c(22,30,23), p = c(25,25,25), rescale.p = TRUE)

You can use the CHISQ.TEST() function to perform a chi-square goodness of fit test in Excel. It takes two arguments, CHISQ.TEST(observed_range, expected_range), and returns the p value .

Both correlations and chi-square tests can test for relationships between two variables. However, a correlation is used when you have two quantitative variables and a chi-square test of independence is used when you have two categorical variables.

Both chi-square tests and t tests can test for differences between two groups. However, a t test is used when you have a dependent quantitative variable and an independent categorical variable (with two groups). A chi-square test of independence is used when you have two categorical variables.

The two main chi-square tests are the chi-square goodness of fit test and the chi-square test of independence .

A chi-square distribution is a continuous probability distribution . The shape of a chi-square distribution depends on its degrees of freedom , k . The mean of a chi-square distribution is equal to its degrees of freedom ( k ) and the variance is 2 k . The range is 0 to ∞.

As the degrees of freedom ( k ) increases, the chi-square distribution goes from a downward curve to a hump shape. As the degrees of freedom increases further, the hump goes from being strongly right-skewed to being approximately normal.

To find the quartiles of a probability distribution, you can use the distribution’s quantile function.

You can use the quantile() function to find quartiles in R. If your data is called “data”, then “quantile(data, prob=c(.25,.5,.75), type=1)” will return the three quartiles.

You can use the QUARTILE() function to find quartiles in Excel. If your data is in column A, then click any blank cell and type “=QUARTILE(A:A,1)” for the first quartile, “=QUARTILE(A:A,2)” for the second quartile, and “=QUARTILE(A:A,3)” for the third quartile.

You can use the PEARSON() function to calculate the Pearson correlation coefficient in Excel. If your variables are in columns A and B, then click any blank cell and type “=PEARSON(A:A,B:B)”.

There is no function to directly test the significance of the correlation.

You can use the cor() function to calculate the Pearson correlation coefficient in R. To test the significance of the correlation, you can use the cor.test() function.
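For example, with two invented variables x and y:

set.seed(6)
x <- rnorm(30)
y <- 0.6 * x + rnorm(30)   # hypothetical variable linearly related to x

cor(x, y)                  # Pearson correlation coefficient r
cor.test(x, y)             # r together with a t-based significance test and confidence interval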

You should use the Pearson correlation coefficient when (1) the relationship is linear and (2) both variables are quantitative and (3) normally distributed and (4) have no outliers.

The Pearson correlation coefficient ( r ) is the most common way of measuring a linear correlation. It is a number between –1 and 1 that measures the strength and direction of the relationship between two variables.

The most important differences between normal distributions and Poisson distributions are:

  • A normal distribution is continuous, whereas a Poisson distribution is discrete (it describes counts of events).
  • A normal distribution is described by two parameters, the mean (μ) and the standard deviation (σ), whereas a Poisson distribution is described by a single parameter, the mean number of events (λ).
  • A normal distribution is always symmetrical, whereas a Poisson distribution is right-skewed for small values of λ and becomes approximately symmetrical as λ increases.

When the mean of a Poisson distribution is large (>10), it can be approximated by a normal distribution.

In the Poisson distribution formula, lambda (λ) is the mean number of events within a given interval of time or space. For example, λ = 0.748 floods per year.

The e in the Poisson distribution formula stands for a constant approximately equal to 2.718, known as Euler’s number. You can simply substitute 2.718 for e when you’re calculating a Poisson probability. Euler’s number is very useful and is especially important in calculus.
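Putting λ and e together, the Poisson probability of observing exactly k events in an interval with mean rate λ is:

P(X = k) = \dfrac{\lambda^{k} e^{-\lambda}}{k!}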

The three types of skewness are:

  • Right skew (also called positive skew ) . A right-skewed distribution is longer on the right side of its peak than on its left.
  • Left skew (also called negative skew). A left-skewed distribution is longer on the left side of its peak than on its right.
  • Zero skew. It is symmetrical and its left and right sides are mirror images.


Skewness and kurtosis are both important measures of a distribution’s shape.

  • Skewness measures the asymmetry of a distribution.
  • Kurtosis measures the heaviness of a distribution’s tails relative to a normal distribution .


A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (“ x affects y because …”).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses . In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

The alternative hypothesis is often abbreviated as H a or H 1 . When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).

The null hypothesis is often abbreviated as H 0 . When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The t distribution was first described by statistician William Sealy Gosset under the pseudonym “Student.”

To calculate a confidence interval of a mean using the critical value of t, follow these four steps (a short R sketch follows the list):

  • Choose the significance level based on your desired confidence level. The most common confidence level is 95%, which corresponds to α = .05 in the two-tailed t table .
  • Find the critical value of t in the two-tailed t table.
  • Multiply the critical value of t by s / √ n .
  • Add this value to the mean to calculate the upper limit of the confidence interval, and subtract this value from the mean to calculate the lower limit.
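A minimal R version of these four steps, using an invented sample:

x <- c(12, 15, 9, 14, 13, 16, 11, 10)   # hypothetical sample
n <- length(x)

alpha  <- 0.05
t_crit <- qt(alpha / 2, df = n - 1, lower.tail = FALSE)   # two-tailed critical value of t
margin <- t_crit * sd(x) / sqrt(n)                        # critical t times s / sqrt(n)

c(lower = mean(x) - margin, upper = mean(x) + margin)     # 95% confidence interval for the mean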

To test a hypothesis using the critical value of t , follow these four steps:

  • Calculate the t value for your sample.
  • Find the critical value of t in the t table .
  • Determine if the (absolute) t value is greater than the critical value of t .
  • Reject the null hypothesis if the sample’s t value is greater than the critical value of t . Otherwise, don’t reject the null hypothesis .

You can use the T.INV() function to find the critical value of t for one-tailed tests in Excel, and you can use the T.INV.2T() function for two-tailed tests.

You can use the qt() function to find the critical value of t in R. The function gives the critical value of t for the one-tailed test. If you want the critical value of t for a two-tailed test, divide the significance level by two.
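For example, for α = .05 and 29 degrees of freedom (values chosen only for illustration):

qt(p = 0.05, df = 29, lower.tail = FALSE)       # critical value of t for a one-tailed test
qt(p = 0.05 / 2, df = 29, lower.tail = FALSE)   # two-tailed test: divide the significance level by two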

You can use the RSQ() function to calculate R² in Excel. If your dependent variable is in column A and your independent variable is in column B, then click any blank cell and type “=RSQ(A:A,B:B)”.

You can use the summary() function to view the R²  of a linear model in R. You will see the “R-squared” near the bottom of the output.
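For example, with invented variables x and y:

set.seed(7)
x <- rnorm(40)
y <- 2 + 0.5 * x + rnorm(40)   # hypothetical linear relationship

model <- lm(y ~ x)
summary(model)                 # "Multiple R-squared" appears near the bottom of the output
summary(model)$r.squared       # extract R² directly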

There are two formulas you can use to calculate the coefficient of determination (R²) of a simple linear regression. The first is simply the square of the Pearson correlation coefficient between x and y:

R^2 = r^2

The second, which generalizes beyond simple linear regression, is one minus the ratio of the residual sum of squares to the total sum of squares:

R^2 = 1 - \dfrac{SS_{res}}{SS_{tot}}

The coefficient of determination (R²) is a number between 0 and 1 that measures how well a statistical model predicts an outcome. You can interpret the R² as the proportion of variation in the dependent variable that is predicted by the statistical model.

There are three main types of missing data .

Missing completely at random (MCAR) data are randomly distributed across the variable and unrelated to other variables .

Missing at random (MAR) data are not randomly distributed but they are accounted for by other observed variables.

Missing not at random (MNAR) data systematically differ from the observed values.

To tidy up your missing data , your options usually include accepting, removing, or recreating the missing data.

  • Acceptance: You leave your data as is
  • Listwise or pairwise deletion: You delete all cases (participants) with missing data from analyses
  • Imputation: You use other data to fill in the missing data

Missing data are important because, depending on the type, they can sometimes bias your results. This means your results may not be generalizable outside of your study because your data come from an unrepresentative sample .

Missing data , or missing values, occur when you don’t have data stored for certain variables or participants.

In any dataset, there’s usually some missing data. In quantitative research , missing values appear as blank cells in your spreadsheet.

There are two steps to calculating the geometric mean (a minimal R sketch follows these steps):

  • Multiply all values together to get their product.
  • Find the n th root of the product ( n is the number of values).
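A minimal R sketch of these two steps, using made-up values:

x <- c(2, 8, 32)                 # hypothetical positive values

prod(x)^(1 / length(x))          # multiply all values, then take the nth root: 8
exp(mean(log(x)))                # equivalent log-based form, less prone to overflow with many values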

Before calculating the geometric mean, note that:

  • The geometric mean can only be found for positive values.
  • If any value in the data set is zero, the geometric mean is zero.

The arithmetic mean is the most commonly used type of mean and is often referred to simply as “the mean.” While the arithmetic mean is based on adding and dividing values, the geometric mean multiplies and finds the root of values.

Even though the geometric mean is a less common measure of central tendency , it’s more accurate than the arithmetic mean for percentage change and positively skewed data. The geometric mean is often reported for financial indices and population growth rates.

The geometric mean is an average that multiplies all values and finds a root of the number. For a dataset with n numbers, you find the n th root of their product.

Outliers are extreme values that differ from most values in the dataset. You find outliers at the extreme ends of your dataset.

It’s best to remove outliers only when you have a sound reason for doing so.

Some outliers represent natural variations in the population , and they should be left as is in your dataset. These are called true outliers.

Other outliers are problematic and should be removed because they represent measurement errors , data entry or processing errors, or poor sampling.

You can choose from four main ways to detect outliers (see the R sketch after this list):

  • Sorting your values from low to high and checking minimum and maximum values
  • Visualizing your data with a box plot and looking for outliers
  • Using the interquartile range to create fences for your data
  • Using statistical procedures to identify extreme values
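A minimal R sketch of these checks, using simulated data with one extreme value added:

set.seed(8)
x <- c(rnorm(50, mean = 20, sd = 3), 45)   # hypothetical data plus one extreme value

sort(x)        # inspect the minimum and maximum values
boxplot(x)     # points beyond the whiskers are flagged as potential outliers

q1 <- quantile(x, 0.25)
q3 <- quantile(x, 0.75)
fence <- 1.5 * IQR(x)
x[x < q1 - fence | x > q3 + fence]   # values outside the interquartile-range fences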

Outliers can have a big impact on your statistical analyses and skew the results of any hypothesis test if they are inaccurate.

These extreme values can impact your statistical power as well, making it hard to detect a true effect if there is one.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

There are various ways to improve power:

  • Increase the potential effect size by manipulating your independent variable more strongly,
  • Increase sample size,
  • Increase the significance level (alpha),
  • Reduce measurement error by increasing the precision and accuracy of your measurement devices and procedures,
  • Use a one-tailed test instead of a two-tailed test for t tests and z tests.

A power analysis is a calculation that helps you determine a minimum sample size for your study. It’s made up of four main components. If you know or have estimates for any three of these, you can calculate the fourth component (see the R sketch after this list).

  • Statistical power : the likelihood that a test will detect an effect of a certain size if there is one, usually set at 80% or higher.
  • Sample size : the minimum number of observations needed to observe an effect of a certain size with a given power level.
  • Significance level (alpha) : the maximum risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Expected effect size : a standardized way of expressing the magnitude of the expected result of your study, usually based on similar studies or a pilot study.
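Base R’s power.t.test() solves for whichever of these components is left unspecified; the effect size, alpha and power below are illustrative values only.

# Solve for the minimum sample size per group, given the other three components
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")

# Or solve for power, given a fixed sample size of 30 per group
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05)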

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

The risk of making a Type II error is inversely related to the statistical power of a test. Power is the extent to which a test can correctly detect a real effect when there is one.

To (indirectly) reduce the risk of a Type II error, you can increase the sample size or the significance level to increase statistical power.

The risk of making a Type I error is the significance level (or alpha) that you choose. That’s a value that you set at the beginning of your study to assess the statistical probability of obtaining your results ( p value ).

The significance level is usually set at 0.05 or 5%. This means that your results only have a 5% chance of occurring, or less, if the null hypothesis is actually true.

To reduce the Type I error probability, you can set a lower significance level.

In statistics, a Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s actually false.

In statistics, power refers to the likelihood of a hypothesis test detecting a true effect if there is one. A statistically powerful test is more likely to reject a false negative (a Type II error).

If you don’t ensure enough power in your study, you may not be able to detect a statistically significant result even when it has practical significance. Your study might not have the ability to answer your research question.

While statistical significance shows that an effect exists in a study, practical significance shows that the effect is large enough to be meaningful in the real world.

Statistical significance is denoted by p -values whereas practical significance is represented by effect sizes .

There are dozens of measures of effect sizes . The most common effect sizes are Cohen’s d and Pearson’s r . Cohen’s d measures the size of the difference between two groups while Pearson’s r measures the strength of the relationship between two variables .

Effect size tells you how meaningful the relationship between variables or the difference between groups is.

A large effect size means that a research finding has practical significance, while a small effect size indicates limited practical applications.

Using descriptive and inferential statistics , you can make two types of estimates about the population : point estimates and interval estimates.

  • A point estimate is a single value estimate of a parameter . For instance, a sample mean is a point estimate of a population mean.
  • An interval estimate gives you a range of values where the parameter is expected to lie. A confidence interval is the most common type of interval estimate.

Both types of estimates are important for gathering a clear idea of where a parameter is likely to lie.

Standard error and standard deviation are both measures of variability . The standard deviation reflects variability within a sample, while the standard error estimates the variability across samples of a population.

The standard error of the mean , or simply standard error , indicates how different the population mean is likely to be from a sample mean. It tells you how much the sample mean would vary if you were to repeat a study using new samples from within a single population.
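For a sample with standard deviation s and size n, the standard error of the mean is estimated as:

SE = \dfrac{s}{\sqrt{n}}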

To figure out whether a given number is a parameter or a statistic , ask yourself the following:

  • Does the number describe a whole, complete population where every member can be reached for data collection ?
  • Is it possible to collect data for this number from every member of the population in a reasonable time frame?

If the answer is yes to both questions, the number is likely to be a parameter. For small populations, data can be collected from the whole population and summarized in parameters.

If the answer is no to either of the questions, then the number is more likely to be a statistic.

The arithmetic mean is the most commonly used mean. It’s often simply called the mean or the average. But there are some other types of means you can calculate depending on your research purposes:

  • Weighted mean: some values contribute more to the mean than others.
  • Geometric mean : values are multiplied rather than summed up.
  • Harmonic mean: reciprocals of values are used instead of the values themselves.

You can find the mean , or average, of a data set in two simple steps:

  • Find the sum of the values by adding them all up.
  • Divide the sum by the number of values in the data set.

This method is the same whether you are dealing with sample or population data or positive or negative numbers.

The median is the most informative measure of central tendency for skewed distributions or distributions with outliers. For example, the median is often used as a measure of central tendency for income distributions, which are generally highly skewed.

Because the median only uses one or two values, it’s unaffected by extreme outliers or non-symmetric distributions of scores. In contrast, the mean and mode can vary in skewed distributions.

To find the median , first order your data. Then calculate the middle position based on n , the number of values in your data set.

Middle position = (n + 1) / 2
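A small Python sketch of this procedure (with invented data; the statistics module averages the two middle values when the data set has an even number of values):

```python
import statistics

data = [7, 1, 3, 9, 5, 11]
ordered = sorted(data)              # step 1: order the data
n = len(ordered)

middle_position = (n + 1) / 2       # 3.5, so the median lies between the 3rd and 4th values
print(middle_position)

print(statistics.median(data))      # 6.0, the average of the two middle values (5 and 7)
```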

A data set can have no mode, one mode, or more than one mode – it all depends on how many different values repeat most frequently.

Your data can be:

  • without any mode
  • unimodal, with one mode,
  • bimodal, with two modes,
  • trimodal, with three modes, or
  • multimodal, with four or more modes.

To find the mode :

  • If your data is numerical or quantitative, order the values from low to high.
  • If it is categorical, sort the values by group, in any order.

Then you simply need to identify the most frequently occurring value.
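In Python, for example, statistics.multimode returns every most frequent value, which makes it easy to see whether a data set is unimodal, bimodal, or multimodal (the data below are invented):

```python
import statistics

numeric = [4, 1, 2, 2, 3, 5, 2]
categorical = ["bus", "car", "bus", "bike", "car", "bus"]

print(statistics.multimode(numeric))          # [2] -> unimodal
print(statistics.multimode(categorical))      # ['bus'] -> unimodal
print(statistics.multimode([1, 1, 2, 3, 3]))  # [1, 3] -> bimodal

# If no value repeats, multimode returns every value, which corresponds to "no mode".
```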

The interquartile range is the best measure of variability for skewed distributions or data sets with outliers. Because it’s based on values that come from the middle half of the distribution, it’s unlikely to be influenced by outliers .

The two most common methods for calculating interquartile range are the exclusive and inclusive methods.

The exclusive method excludes the median when identifying Q1 and Q3, while the inclusive method includes the median as a value in the data set in identifying the quartiles.

For each of these methods, you’ll need different procedures for finding the median, Q1 and Q3 depending on whether your sample size is even- or odd-numbered. The exclusive method works best for even-numbered sample sizes, while the inclusive method is often used with odd-numbered sample sizes.

While the range gives you the spread of the whole data set, the interquartile range gives you the spread of the middle half of a data set.
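A quick sketch in Python (invented data; note that NumPy's default percentile interpolation differs slightly from the textbook exclusive and inclusive methods):

```python
import numpy as np

data = [1, 3, 4, 6, 7, 8, 10, 15, 42]   # note the outlier at 42

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
full_range = max(data) - min(data)

print(f"Range = {full_range}, IQR = {iqr}")   # the IQR is far less affected by the outlier
```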

Homoscedasticity, or homogeneity of variances, is an assumption of equal or similar variances in different groups being compared.

This is an important assumption of parametric statistical tests because they are sensitive to any dissimilarities. Uneven variances in samples result in biased and skewed test results.

Statistical tests such as variance tests or the analysis of variance (ANOVA) use sample variance to assess group differences of populations. They use the variances of the samples to assess whether the populations they come from significantly differ from each other.

Variance is the average squared deviations from the mean, while standard deviation is the square root of this number. Both measures reflect variability in a distribution, but their units differ:

  • Standard deviation is expressed in the same units as the original values (e.g., minutes or meters).
  • Variance is expressed in squared units (e.g., meters squared), which are larger than the original units.

Although the units of variance are harder to intuitively understand, variance is important in statistical tests .
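For example, in Python (with invented data measured in minutes):

```python
import numpy as np

times_minutes = np.array([12, 15, 14, 10, 18, 16])

variance = np.var(times_minutes, ddof=1)   # sample variance, in minutes squared
sd = np.sqrt(variance)                     # standard deviation, back in minutes

print(f"Variance = {variance:.2f} minutes^2, SD = {sd:.2f} minutes")
```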

The empirical rule, or the 68-95-99.7 rule, tells you where most of the values lie in a normal distribution :

  • Around 68% of values are within 1 standard deviation of the mean.
  • Around 95% of values are within 2 standard deviations of the mean.
  • Around 99.7% of values are within 3 standard deviations of the mean.

The empirical rule is a quick way to get an overview of your data and check for any outliers or extreme values that don’t follow this pattern.
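One way to see the rule in action is to simulate normally distributed data and count how many values fall within 1, 2, and 3 standard deviations of the mean (a Python sketch; the mean and standard deviation used are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
values = rng.normal(loc=100, scale=15, size=100_000)   # simulated normal data

mean, sd = values.mean(), values.std()
for k in (1, 2, 3):
    share = np.mean(np.abs(values - mean) <= k * sd)
    print(f"within {k} SD: {share:.1%}")   # roughly 68%, 95%, and 99.7%
```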

In a normal distribution , data are symmetrically distributed with no skew. Most values cluster around a central region, with values tapering off as they go further away from the center.

The measures of central tendency (mean, mode, and median) are exactly the same in a normal distribution.

[Figure: normal distribution]

The standard deviation is the average amount of variability in your data set. It tells you, on average, how far each score lies from the mean .

In normal distributions, a high standard deviation means that values are generally far from the mean, while a low standard deviation indicates that values are clustered close to the mean.

No. Because the range formula subtracts the lowest number from the highest number, the range is always zero or a positive number.

In statistics, the range is the spread of your data from the lowest to the highest value in the distribution. It is the simplest measure of variability .

While central tendency tells you where most of your data points lie, variability summarizes how far apart your data points lie from each other.

Data sets can have the same central tendency but different levels of variability or vice versa . Together, they give you a complete picture of your data.

Variability is most commonly measured with the following descriptive statistics :

  • Range : the difference between the highest and lowest values
  • Interquartile range : the range of the middle half of a distribution
  • Standard deviation : average distance from the mean
  • Variance : average of squared distances from the mean

Variability tells you how far apart points lie from each other and from the center of a distribution or a data set.

Variability is also referred to as spread, scatter or dispersion.

While interval and ratio data can both be categorized, ranked, and have equal spacing between adjacent values, only ratio scales have a true zero.

For example, temperature in Celsius or Fahrenheit is at an interval scale because zero is not the lowest possible temperature. In the Kelvin scale, a ratio scale, zero represents a total lack of thermal energy.

A critical value is the value of the test statistic which defines the upper and lower bounds of a confidence interval , or which defines the threshold of statistical significance in a statistical test. It describes how far from the mean of the distribution you have to go to cover a certain amount of the total variation in the data (i.e. 90%, 95%, 99%).

If you are constructing a 95% confidence interval and are using a threshold of statistical significance of p = 0.05, then your critical value will be identical in both cases.
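For instance, SciPy can be used to look up the two-tailed critical values that correspond to alpha = 0.05 (a sketch; the degrees of freedom chosen below are arbitrary):

```python
from scipy import stats

alpha = 0.05

z_crit = stats.norm.ppf(1 - alpha / 2)      # two-tailed critical value of z, about 1.96
t_crit = stats.t.ppf(1 - alpha / 2, df=20)  # two-tailed critical value of t with 20 df, about 2.09

print(z_crit, t_crit)
```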

The t -distribution gives more probability to observations in the tails of the distribution than the standard normal distribution (a.k.a. the z -distribution).

In this way, the t -distribution is more conservative than the standard normal distribution: to reach the same level of confidence or statistical significance , you will need to include a wider range of the data.

A t -score (a.k.a. a t -value) is equivalent to the number of standard deviations away from the mean of the t -distribution .

The t -score is the test statistic used in t -tests and regression tests. It can also be used to describe how far from the mean an observation is when the data follow a t -distribution.

The t -distribution is a way of describing a set of observations where most observations fall close to the mean , and the rest of the observations make up the tails on either side. It is similar in shape to the normal distribution but has heavier tails, and it is used for smaller sample sizes, where the population variance is unknown.

The t -distribution forms a bell curve when plotted on a graph. It can be described mathematically using the mean and the standard deviation .

In statistics, ordinal and nominal variables are both considered categorical variables .

Even though ordinal data can sometimes be numerical, not all mathematical operations can be performed on them.

Ordinal data has two characteristics:

  • The data can be classified into different categories within a variable.
  • The categories have a natural ranked order.

However, unlike with interval data, the distances between the categories are uneven or unknown.

Nominal and ordinal are two of the four levels of measurement . Nominal level data can only be classified, while ordinal level data can be classified and ordered.

Nominal data is data that can be labelled or classified into mutually exclusive categories within a variable. These categories cannot be ordered in a meaningful way.

For example, for the nominal variable of preferred mode of transportation, you may have the categories of car, bus, train, tram or bicycle.

If your confidence interval for a difference between groups includes zero, that means that if you run your experiment again you have a good chance of finding no difference between groups.

If your confidence interval for a correlation or regression includes zero, that means that if you run your experiment again there is a good chance of finding no correlation in your data.

In both of these cases, you will also find a high p -value when you run your statistical test, meaning that your results could have occurred under the null hypothesis of no relationship between variables or no difference between groups.

If you want to calculate a confidence interval around the mean of data that is not normally distributed , you have two choices:

  • Find a distribution that matches the shape of your data and use that distribution to calculate the confidence interval.
  • Perform a transformation on your data to make it fit a normal distribution, and then find the confidence interval for the transformed data.

The standard normal distribution , also called the z -distribution, is a special normal distribution where the mean is 0 and the standard deviation is 1.

Any normal distribution can be converted into the standard normal distribution by turning the individual values into z -scores. In a z -distribution, z -scores tell you how many standard deviations away from the mean each value lies.

The z -score and t -score (aka z -value and t -value) show how many standard deviations away from the mean of the distribution you are, assuming your data follow a z -distribution or a t -distribution .

These scores are used in statistical tests to show how far from the mean of the predicted distribution your statistical estimate is. If your test produces a z -score of 2.5, this means that your estimate is 2.5 standard deviations from the predicted mean.

The predicted mean and distribution of your estimate are generated by the null hypothesis of the statistical test you are using. The more standard deviations away from the predicted mean your estimate is, the less likely it is that the estimate could have occurred under the null hypothesis .
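As a small illustration, z-scores can be computed by standardising each value against a mean and standard deviation (a Python sketch with invented data, standardising within a sample rather than against a null distribution):

```python
import numpy as np

values = np.array([52, 60, 48, 70, 55, 65])

z_scores = (values - values.mean()) / values.std(ddof=1)
print(z_scores)   # how many standard deviations each value lies from the sample mean
```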

To calculate the confidence interval , you need to know:

  • The point estimate you are constructing the confidence interval for
  • The critical values for the test statistic
  • The standard deviation of the sample
  • The sample size

Then you can plug these components into the confidence interval formula that corresponds to your data. The formula depends on the type of estimate (e.g. a mean or a proportion) and on the distribution of your data.
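Putting these components together, a 95% confidence interval around a mean might be computed like this in Python (a sketch with invented data, using the t-distribution because the population standard deviation is unknown):

```python
import numpy as np
from scipy import stats

sample = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.2, 4.7, 5.0])

mean = sample.mean()                              # the point estimate
se = stats.sem(sample)                            # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(sample) - 1)   # critical value for a 95% confidence level

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```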

The confidence level is the percentage of times you expect to get close to the same estimate if you run your experiment again or resample the population in the same way.

The confidence interval consists of the upper and lower bounds of the estimate you expect to find at a given level of confidence.

For example, if you are estimating a 95% confidence interval around the mean proportion of female babies born every year based on a random sample of babies, you might find an upper bound of 0.56 and a lower bound of 0.48. These are the upper and lower bounds of the confidence interval. The confidence level is 95%.

The mean is the most frequently used measure of central tendency because it uses all values in the data set to give you an average.

For data from skewed distributions, the median is better than the mean because it isn’t influenced by extremely large values.

The mode is the only measure you can use for nominal or categorical data that can’t be ordered.

The measures of central tendency you can use depend on the level of measurement of your data.

  • For a nominal level, you can only use the mode to find the most frequent value.
  • For an ordinal level or ranked data, you can also use the median to find the value in the middle of your data set.
  • For interval or ratio levels, in addition to the mode and median, you can use the mean to find the average value.

Measures of central tendency help you find the middle, or the average, of a data set.

The 3 most common measures of central tendency are the mean, median and mode.

  • The mode is the most frequent value.
  • The median is the middle number in an ordered data set.
  • The mean is the sum of all values divided by the total number of values.

Some variables have fixed levels. For example, gender and ethnicity are always nominal level data because they cannot be ranked.

However, for other variables, you can choose the level of measurement . For example, income is a variable that can be recorded on an ordinal or a ratio scale:

  • At an ordinal level , you could create 5 income groupings and code the incomes that fall within them from 1–5.
  • At a ratio level , you would record exact numbers for income.

If you have a choice, the ratio level is always preferable because you can analyze data in more ways. The higher the level of measurement, the more precise your data is.

The level at which you measure a variable determines how you can analyze your data.

Depending on the level of measurement , you can perform different descriptive statistics to get an overall summary of your data and inferential statistics to see if your results support or refute your hypothesis .

Levels of measurement tell you how precisely variables are recorded. There are 4 levels of measurement, which can be ranked from low to high:

  • Nominal : the data can only be categorized.
  • Ordinal : the data can be categorized and ranked.
  • Interval : the data can be categorized and ranked, and evenly spaced.
  • Ratio : the data can be categorized, ranked, evenly spaced and has a natural zero.

No. The p -value only tells you how likely the data you have observed is to have occurred under the null hypothesis .

If the p -value is below your threshold of significance (typically p < 0.05), then you can reject the null hypothesis, but this does not necessarily mean that your alternative hypothesis is true.

The alpha value, or the threshold for statistical significance , is arbitrary – which value you use depends on your field of study.

In most cases, researchers use an alpha of 0.05, which means that there is a less than 5% chance that the data being tested could have occurred under the null hypothesis.

P -values are usually automatically calculated by the program you use to perform your statistical test. They can also be estimated using p -value tables for the relevant test statistic .

P -values are calculated from the null distribution of the test statistic. They tell you how often a test statistic is expected to occur under the null hypothesis of the statistical test, based on where it falls in the null distribution.

If the test statistic is far from the mean of the null distribution, then the p -value will be small, showing that the test statistic is not likely to have occurred under the null hypothesis.
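For example, a two-sided p-value for a z test statistic can be read off the null distribution with SciPy (the test statistic below is hypothetical):

```python
from scipy import stats

z = 2.5   # hypothetical test statistic

# Two-sided p-value: the probability of a value at least this far from the centre
# of the null distribution, in either direction
p_value = 2 * stats.norm.sf(abs(z))
print(f"p = {p_value:.4f}")   # about 0.0124
```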

A p -value , or probability value, is a number describing how likely it is that your data would have occurred under the null hypothesis of your statistical test .

The test statistic you use will be determined by the statistical test.

You can choose the right statistical test by looking at what type of data you have collected and what type of relationship you want to test.

The test statistic will change based on the number of observations in your data, how variable your observations are, and how strong the underlying patterns in the data are.

For example, if one data set has higher variability while another has lower variability, the first data set will produce a test statistic closer to the null hypothesis , even if the true correlation between two variables is the same in either data set.

The formula for the test statistic depends on the statistical test being used.

Generally, the test statistic is calculated as the pattern in your data (i.e. the correlation between variables or difference between groups) divided by the variance in the data (i.e. the standard deviation ).

  • Univariate statistics summarize only one variable  at a time.
  • Bivariate statistics compare two variables .
  • Multivariate statistics compare more than two variables .

The 3 main types of descriptive statistics concern the frequency distribution, central tendency, and variability of a dataset.

  • Distribution refers to the frequencies of different responses.
  • Measures of central tendency give you the average for each response.
  • Measures of variability show you the spread or dispersion of your dataset.

Descriptive statistics summarize the characteristics of a data set. Inferential statistics allow you to test a hypothesis or assess whether your data is generalizable to the broader population.

In statistics, model selection is a process researchers use to compare the relative value of different statistical models and determine which one is the best fit for the observed data.

The Akaike information criterion is one of the most common methods of model selection. AIC weights the ability of the model to predict the observed data against the number of parameters the model requires to reach that level of precision.

AIC model selection can help researchers find a model that explains the observed variation in their data while avoiding overfitting.

In statistics, a model is the collection of one or more independent variables and their predicted interactions that researchers use to try to explain variation in their dependent variable.

You can test a model using a statistical test . To compare how well different models fit your data, you can use Akaike’s information criterion for model selection.

The Akaike information criterion is calculated from the maximum log-likelihood of the model and the number of parameters (K) used to reach that likelihood. The AIC function is 2K – 2(log-likelihood) .

Lower AIC values indicate a better-fitting model. As a common rule of thumb, a model is considered meaningfully better than a competing model when its AIC is lower by more than 2 (that is, when the delta-AIC, the difference between the two AIC values being compared, exceeds 2 in its favour).

The Akaike information criterion is a mathematical test used to evaluate how well a model fits the data it is meant to describe. It penalizes models which use more independent variables (parameters) as a way to avoid over-fitting.

AIC is most often used to compare the relative goodness-of-fit among different models under consideration and to then choose the model that best fits the data.
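A minimal sketch of the calculation in Python (the log-likelihoods and parameter counts below are invented for illustration):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2K - 2 * (maximum log-likelihood)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical maximum log-likelihoods and parameter counts for two candidate models
aic_simple = aic(log_likelihood=-120.4, k=3)
aic_complex = aic(log_likelihood=-118.9, k=6)

print(aic_simple, aic_complex, aic_complex - aic_simple)   # the lower-AIC model is preferred
```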

A factorial ANOVA is any ANOVA that uses more than one categorical independent variable . A two-way ANOVA is a type of factorial ANOVA.

Some examples of factorial ANOVAs include:

  • Testing the combined effects of vaccination (vaccinated or not vaccinated) and health status (healthy or pre-existing condition) on the rate of flu infection in a population.
  • Testing the effects of marital status (married, single, divorced, widowed), job status (employed, self-employed, unemployed, retired), and family history (no family history, some family history) on the incidence of depression in a population.
  • Testing the effects of feed type (type A, B, or C) and barn crowding (not crowded, somewhat crowded, very crowded) on the final weight of chickens in a commercial farming operation.

In ANOVA, the null hypothesis is that there is no difference among group means. If any group differs significantly from the overall group mean, then the ANOVA will report a statistically significant result.

Significant differences among group means are calculated using the F statistic, which is the ratio of the mean sum of squares (the variance explained by the independent variable) to the mean square error (the variance left over).

If the F statistic is higher than the critical value (the value of F that corresponds with your alpha value, usually 0.05), then the difference among groups is deemed statistically significant.

The only difference between one-way and two-way ANOVA is the number of independent variables . A one-way ANOVA has one independent variable, while a two-way ANOVA has two.

  • One-way ANOVA : Testing the relationship between shoe brand (Nike, Adidas, Saucony, Hoka) and race finish times in a marathon.
  • Two-way ANOVA : Testing the relationship between shoe brand (Nike, Adidas, Saucony, Hoka), runner age group (junior, senior, master’s), and race finishing times in a marathon.

All ANOVAs are designed to test for differences among three or more groups. If you are only testing for a difference between two groups, use a t-test instead.
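As an illustration of the one-way case, SciPy's f_oneway computes the F statistic and p-value directly from the group data (the finish times below are invented):

```python
from scipy import stats

# Hypothetical marathon finish times (minutes) for three shoe brands
brand_a = [212, 205, 219, 210, 208]
brand_b = [199, 203, 201, 207, 198]
brand_c = [221, 215, 218, 224, 220]

f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```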

Multiple linear regression is a regression model that estimates the relationship between a quantitative dependent variable and two or more independent variables using a straight line.

Linear regression most often uses mean-square error (MSE) to calculate the error of the model. MSE is calculated by:

  • measuring the distance of the observed y-values from the predicted y-values at each value of x;
  • squaring each of these distances;
  • calculating the mean of each of the squared distances.

Linear regression fits a line to the data by finding the regression coefficient that results in the smallest MSE.
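A short Python sketch of these steps, using an ordinary least-squares fit on invented data:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares fit of y = intercept + slope * x

predicted = intercept + slope * x
mse = np.mean((y - predicted) ** 2)          # mean of the squared distances

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}, MSE = {mse:.3f}")
```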

Simple linear regression is a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line. Both variables should be quantitative.

For example, the relationship between temperature and the expansion of mercury in a thermometer can be modeled using a straight line: as temperature increases, the mercury expands. This linear relationship is so certain that we can use mercury thermometers to measure temperature.

A regression model is a statistical model that estimates the relationship between one dependent variable and one or more independent variables using a line (or a plane in the case of two or more independent variables).

A regression model can be used when the dependent variable is quantitative, except in the case of logistic regression, where the dependent variable is binary.

A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.

If you want to compare the means of several groups at once, it’s best to use another statistical test such as ANOVA or a post-hoc test.

A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan of a specific town is different from the country average).

A paired t-test is used to compare a single population before and after some experimental intervention or at two different points in time (for example, measuring student performance on a test before and after being taught the material).

A t-test measures the difference in group means divided by the pooled standard error of the two group means.

In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference exists purely by chance (p-value).

Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.

If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value. If you are studying two groups, use a two-sample t-test .

If you want to know only whether a difference exists, use a two-tailed test . If you want to know if one group mean is greater or less than the other, use a left-tailed or right-tailed one-tailed test .

A t-test is a statistical test that compares the means of two samples . It is used in hypothesis testing , with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.
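For example, an independent two-sample t-test can be run with SciPy (invented data; by default ttest_ind assumes equal variances and returns a two-sided p-value):

```python
from scipy import stats

group1 = [5.1, 6.2, 5.8, 7.0, 6.5, 6.1]
group2 = [4.2, 5.0, 4.8, 5.5, 4.9, 5.3]

t_stat, p_value = stats.ttest_ind(group1, group2)   # two-sided, independent samples
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```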

Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test . Significance is usually denoted by a p -value , or probability value.

Statistical significance is arbitrary – it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that the data is likely to occur less than 5% of the time under the null hypothesis .

When the p -value falls below the chosen alpha value, then we say the result of the test is statistically significant.

A test statistic is a number calculated by a  statistical test . It describes how far your observed data is from the  null hypothesis  of no relationship between  variables or no difference among sample groups.

The test statistic tells you how different two or more groups are from the overall population mean , or how different a linear slope is from the slope predicted by a null hypothesis . Different test statistics are used in different statistical tests.

Statistical tests commonly assume that:

  • the data are normally distributed
  • the groups that are being compared have similar variance
  • the data are independent

If your data do not meet these assumptions, you might still be able to use a nonparametric statistical test , which has fewer requirements but also makes weaker inferences.


Step-by-Step Guide to Statistical Analysis

Statistics are used in almost every aspect of society. You may also have heard phrases such as "you can prove anything with statistics" or "facts are stubborn things, but statistics are pliable", which imply that results drawn from statistics can never be trusted.

But what if clear conditions are applied and the figures are analysed systematically before any conclusions are drawn? That is far more reliable, and it is exactly what statistical analysis provides.

Statistical analysis is the branch of science that provides analytical techniques and tools for dealing with large amounts of data. In other words, it is the science of identifying, organising, assessing, and interpreting data in order to make inferences about a particular population. Every statistical analysis follows a specific pattern, which we call the statistical analysis process.

This process concerns data collection, interpretation, and presentation. Statistical analysis is especially useful when handling large volumes of data to solve complex issues. Above all, it gives meaning to numbers that might otherwise seem insignificant and often fills in the missing gaps in research.

This guide covers the types of statistical data analysis, the process in detail, and its significance in today's data-driven era.

Types of Statistical Data Analysis

Though there are many types of statistical data analysis, these two are the most common ones:

Descriptive Statistics

Inferential Statistics

Let us discuss each in detail.

Descriptive statistics quantitatively summarise information in a meaningful way so that whoever is looking at it can detect relevant patterns instantly. Descriptive statistics are divided into measures of variability and measures of central tendency. Measures of variability include the standard deviation, minimum and maximum values, skewness, kurtosis, and variance , while measures of central tendency include the mean, median , and mode .

  • Descriptive statistics sum up the characteristics of a data set
  • It consists of two basic categories of measures: measures of variability and measures of central tendency
  • Measures of variability describe the dispersion of data in the data set
  • Measures of central tendency define the centre of a data set

With inferential statistics , you can draw conclusions that extend beyond the immediate data alone. We use this technique to infer from the sample data what the population might think, or to judge the probability that an observed difference between groups is dependable or undependable (undependable meaning that it has happened by chance).

  • Inferential Statistics is used to estimate the likelihood that the collected data occurred by chance or otherwise
  • It helps conclude a larger population from which you took samples
  • It depends upon the type of measurement scale along with the distribution of data

Other Types Include:

Predictive Analysis: making predictions about future events based on current facts and figures

Prescriptive Analysis: examining data to determine the actions required in a particular situation

Exploratory Data Analysis (EDA): previewing the data and helping to gain key insights into it

Causal Analysis: determining the reasons why things appear a certain way

Mechanistic Analysis: explaining the exact mechanisms by which things happen, that is, how changes in some variables bring about changes in others

Statistical Data Analysis: The Process

The statistical data analysis involves five steps:

  • Designing the Study
  • Gathering Data
  • Describing the Data
  • Testing Hypotheses
  • Interpreting the Data

Step 1: Designing the Study

The first and most crucial step in a scientific inquiry is stating a research question and looking for hypotheses to support it.

Examples of research questions are:

  • Can digital marketing increase a company’s revenue exponentially?
  • Can the newly developed COVID-19 vaccines prevent the spreading of the virus?

As students and researchers, you must also be aware of the background situation. Answer the following questions.

What information is there that has already been presented by other researchers?

How can you make your study stand apart from the rest?

What are effective ways to get your findings?

Once you have managed to get answers to all these questions, you are good to move ahead to another important part, which is finding the targeted population .

What population should be under consideration?

What is the data you will need from this population?

But before you start looking for ways to gather all this information, you need to make a hypothesis, or in this case, an educated guess. Hypotheses are statements such as the following:

  • Digital marketing can increase the company’s revenue exponentially.
  • The new COVID-19 vaccine can prevent the spreading of the virus.

Remember to find the relationship between variables within a population when writing a statistical hypothesis. Every prediction you make can be either null or an alternative hypothesis.

While the former suggests no effect or relationship between two or more variables, the latter states the research prediction of a relationship or effect.

How to Plan your Research Design?

After deducing hypotheses for your research, the next step is planning your research design. It is basically coming up with the overall strategy for data analysis.

There are three ways to design your research:

1. Descriptive Design:

In a descriptive design, you can assess the characteristics of a population by using statistical tests and then draw inferences from the sample data.

2. Correlational Design:

As the name suggests, with this design, you can study the relationships between different variables .

3. Experimental Design:

Using statistical tests of regression and comparison, you can evaluate a cause-and-effect relationship.

Step 2: Collecting Data

Collecting data from an entire population is a challenging task. It can not only be expensive but also take years to reach a proper conclusion. This is why researchers are instead encouraged to collect data from a sample.

Sampling methods in a statistical study refer to how we choose members from the population under consideration. If you select a sample carelessly rather than with a proper sampling method, the chances are that it will be biased and not representative of the population.

This means there are reliable and non-reliable ways to select a sample.

Reliable Methods of Sampling

Simple Random Sampling: a method where each member and set of members have an equal chance of being selected for the sample

Stratified Random Sampling: the population is first split into groups, then members are randomly selected from each group

Cluster Random Sampling: the population is divided into groups (clusters), and members are randomly chosen from some of these groups.

Systematic Random Sampling: members are selected in order. The starting point is chosen by chance, and every nth member is set for the sample.

Non-Reliable Methods of Sampling

Voluntary Response Sampling: choosing a sample by sending out a request for members of a population to join. Some might join, and others might not respond

Convenient Sampling: selecting a sample readily available by chance

Here are a few important terms you need to know for conducting samples in statistics:

Population standard deviation: an estimate of the population's variability, usually based on previous studies

Statistical Power: the chances of your study detecting an effect of a certain size

Expected Effect Size: an indication of how large you expect the findings of your research to be

Significance Level (alpha): it is the risk of rejecting a true null hypothesis

Step 3: Describing the Data

Once you have finalised your sample, you can inspect the data by calculating the descriptive statistics discussed above.

There are different ways to inspect your data.

  • By using a scatter plot to visualise the relationship between two or more variables
  • A bar chart displaying data from key variables to view how the responses have been distributed
  • Via frequency distribution where data from each variable can be organised

When you visualise data in the form of charts, bars, and tables, it becomes much easier to assess whether your data follow a normal distribution or skewed distribution. You can also get insights into where the outliers are and how to get them fixed.


How is a Skewed Distribution Different from a Normal One?

A normal distribution is where the set of information or data is distributed symmetrically around a centre. This is where most values lie, with the values getting smaller at the tail ends.

On the other hand, if one of the tails is longer or smaller than the other, the distribution would be skewed . They are often called asymmetrical distributions, as you cannot find any sort of symmetry in them.

A skewed distribution can take one of two forms: left-skewed and right-skewed. When the left tail is longer than the right one, the distribution is left-skewed; when the right tail is longer, it is right-skewed.

Now, let us discuss the calculation of measures of central tendency. You might have heard about this one already.

What do Measures of Central Tendency Do?

Well, they describe where most of the values in a data set lie. The three most commonly used measures of central tendency are:

  • Median: when the values are ordered from low to high, this is the value in the exact centre.
  • Mode: the most frequent or popular response in the data set.
  • Mean: calculated by adding all the values and dividing by the total number of values.

Next come the measures of variability, which are equally important.

Measures of variability

Measures of variability give you an idea of how spread out or dispersed the values in a data set are.

The four most common ones you must know about are:

  • Standard deviation: the average distance between the values in your data set and the mean.
  • Variance: the square of the standard deviation (the average of the squared distances from the mean).
  • Range: the highest value of the data set minus the lowest value.
  • Interquartile range: the spread of the middle half of the data set, calculated as the third quartile (Q3) minus the first quartile (Q1).

Step 4: Testing your Hypotheses

Two terms you need to know in order to learn about testing a hypothesis:

  • Statistic: a number describing a sample
  • Parameter: a number describing a population

So, what exactly is hypothesis testing?

It is where an analyst or researcher tests all the assumptions made earlier regarding a population parameter. The methodology opted for by the researcher solely depends on the nature of the data utilised and the reason for its analysis.

The only objective is to evaluate the plausibility of hypotheses with the help of sample data. The data here can either come from a larger population or a sample to represent the whole population .

How Does it Work?

These four steps will help you understand what exactly happens in hypothesis testing.

  • The first thing you need to do is state the two hypotheses made at the beginning.
  • The second is formulating an analysis plan that depicts how the data can be assessed.
  • Next is analysing the sample data according to the plan.
  • The last and final step is going through the results and assessing whether you need to reject the null hypothesis or move forward with it.

Questions might arise about how to know whether the null hypothesis is plausible, and this is where statistical tests come into play.

Statistical tests let you determine where your sample data would lie on the expected distribution if the null hypothesis were true. Usually, you get two types of outputs from statistical tests:

  • A test statistic : this shows how much your data differs from the null hypothesis
  • A p-value: this value assesses the likelihood of getting your results if the null hypothesis is true

Step 5: Interpreting the Data

You have made it to the final step of statistical analysis , where all the data you have found useful so far is interpreted. To check whether the results are meaningful, researchers compare the p-value to a pre-set significance level, commonly 0.05, to determine whether the results are statistically significant. This is why this step of hypothesis testing is concerned with statistical significance .

Remember that statistically significant results are unlikely to have arisen by chance alone; such findings would be improbable if the null hypothesis were true.

By the end of this process, you must have answers to the following questions:

  • Does the interpreted data answer your original question? If yes, how?
  • Can you defend against objections with this data?
  • Are there limitations to your conclusions?

If the final results cannot help you find clear answers to these questions, you might have to go back, assess and repeat some of the steps again. After all, you want to draw the most accurate conclusions from your data.

Published: 15 June 2020

Reporting statistical methods and outcome of statistical analyses in research articles

Mariusz Cichoń

Pharmacological Reports, volume 72, pages 481–485 (2020)


Introduction

Statistical methods constitute a powerful tool in modern life sciences. This tool is primarily used to disentangle whether the observed differences, relationships or congruencies are meaningful or may just occur by chance. Thus, statistical inference is an unavoidable part of scientific work. The knowledge of statistics is usually quite limited among researchers representing the field of life sciences, particularly when it comes to constraints imposed on the use of statistical tools and possible interpretations. A common mistake is that researchers take for granted the ability to perform a valid statistical analysis. However, at the stage of data analysis, it may turn out that the gathered data cannot be analysed with any known statistical tools or that there are critical flaws in the interpretation of the results due to violations of basic assumptions of statistical methods. A common mistake made by authors is to thoughtlessly copy the choice of the statistical tests from other authors analysing similar data. This strategy, although sometimes correct, may lead to an incorrect choice of statistical tools and incorrect interpretations. Here, I aim to give some advice on how to choose suitable statistical methods and how to present the results of statistical analyses.

Important limits in the use of statistics

Statistical tools face a number of constraints. Constraints should already be considered at the stage of planning the research, as mistakes made at this stage may make statistical analyses impossible. Therefore, careful planning of sampling is critical for future success in data analyses. The most important is ensuring that the general population is sampled randomly and independently, and that the experimental design corresponds to the aims of the research. Planning a control group/groups is of particular importance. Without a suitable control group, any further inference may not be possible. Parametric tests are stronger (it is easier to reject a null hypothesis), so they should always be preferred, but such methods can be used only when the data are drawn from a general population with normal distribution. For methods based on analysis of variance (ANOVA), residuals should come from a general population with normal distribution, and in this case there is an additional important assumption of homogeneity of variance. Inferences made from analyses violating these assumptions may be incorrect.

Statistical inference

Statistical inference is asymmetrical. Scientific discovery is based on rejecting null hypotheses, so interpreting non-significant results should be taken with special care. We never know for sure why we fail to reject the null hypothesis. It may indeed be true, but it is also possible that our sample size was too small or variance too large to capture the differences or relationships. We also may fail just by chance. Assuming a significance level of p = 0.05 means that we run the risk of rejecting a null hypothesis in 5% of such analyses. Thus, interpretation of non-significant results should always be accompanied by the so-called power analysis, which shows the strength of our inference.

Experimental design and data analyses

The experimental design is a critical part of study planning. The design must correspond to the aims of the study presented in the Introduction section. In turn, the statistical methods must be suited to the experimental design so that the data analyses will enable the questions stated in the Introduction to be answered. In general, simple experimental designs allow the use of simple methods like t-tests, simple correlations, etc., while more complicated designs (multifactor designs) require more advanced methods (see, Fig. 1 ). Data coming from more advanced designs usually cannot be analysed with simple methods. Therefore, multifactor designs cannot be followed by a simple t-test or even with one-way ANOVA, as factors may not act independently, and in such a case the interpretation of the results of one-way ANOVA may be incorrect. Here, it is particularly important that one may be interested in a concerted action of factors (interaction) or an action of a given factor while controlling for other factors (independent action of a factor). But even with one factor design with more than two levels, one cannot use just a simple t-test with multiple comparisons between groups. In such a case, one-way ANOVA should be performed followed by a post hoc test. The post hoc test can be done only if ANOVA rejects the null hypothesis. There is no point in using the post hoc test if the factors have only two levels (groups). In this case, the differences are already clear after ANOVA.

[Fig. 1: Test selection chart]

Description of statistical methods in the Materials and methods section

It is in the author’s interest to provide the reader with all necessary information to judge whether the statistical tools used in the paper are the most suitable to answer the scientific question and are suited to the data structure. In the Materials and methods section, the experimental design must be described in detail, so that the reader may easily understand how the study was performed and later why such specific statistical methods were chosen. It must be clear whether the study is planned to test the relationships or differences between groups. Here, the reader should already understand the data structure, what the dependent variable is, what the factors are, and should be able to determine, even without being directly informed, whether the factors are categorical or continuous, and whether they are fixed or random. The sample size used in the analysis should be clearly stated. Sometimes sample sizes used in analyses are smaller than the original. This can happen for various reasons, for example if one fails to perform some measurements, and in such a case, the authors must clearly explain why the original sample size differs from the one used in the analyses. There must be a very good reason to omit existing data points from the analyses. Removing the so-called outliers should be an exception rather than the rule.

A description of the statistical methods should come at the end of the Materials and methods section. Here, we start by introducing the statistical techniques used to test predictions formulated in the Introduction. We describe in detail the structure of the statistical model (defining the dependent variable, the independent variables—factors, interactions if present, character of the factors—fixed or random). The variables should be defined as categorical or continuous. In the case of more advanced models, information on the methods of effects estimation or degrees of freedom should be provided. Unless there are good reasons, interactions should always be tested, even if the study is not aimed at testing an interaction. If the interaction is not the main aim of the study, non-significant interactions should be dropped from the model and new analyses without interactions should be carried out and such results reported. If the interaction appears to be significant, one cannot remove it from the model even if the interaction is not the main aim of the study. In such a case, only the interaction can be interpreted, while the interpretation of the main effects is not allowed. The author should clearly describe how the interactions will be dealt with. One may also consider using a model selection procedure which should also be clearly described.

The authors should reassure the reader that the assumptions of the selected statistical technique are fully met. It must be described how the normality of data distribution and homogeneity of variance was checked and whether these assumptions have been met. When performing data transformation, one needs to explain how it was done and whether the transformation helped to fulfil the assumptions of the parametric tests. If these assumptions are not fulfilled, one may apply non-parametric tests. It must be clearly stated why non-parametric tests are performed. Post hoc tests can be performed only when the ANOVA/Kruskal–Wallis test shows significant effects. These tests are valid for the main effects only when the interaction is not included in the model. These tests are also applicable for significant interactions. There are a number of different post hoc tests, so the selected test must be introduced in the materials and methods section.

The significance level is often mentioned in the materials and methods section. There is common consensus among researchers in life sciences for a significance level set at p = 0.05, so it is not strictly necessary to report this conventional level unless the authors always give the type I error (p-value) throughout the paper. If the author sets the significance level at a lower value, which could be the case, for example, in medical sciences, the reader must be informed about the use of a more conservative level. If the significance level is not reported, the reader will assume p = 0.05. In general, it does not matter which statistical software was used for the analyses. However, the outcome may differ slightly between different software, even if exactly the same model is set. Thus, it may be a good practice to report the name of the software at the end of the subsection describing the statistical methods. If the original code of the model analysed is provided, it would be sensible to inform the reader of the specific software and version that was used.

Presentation of the outcome in the Results section

Only the data and the analyses needed to test the hypotheses and predictions stated in the Introduction and those important for discussion should be placed in the Results section. All other outcome might be provided as supplementary materials. Some descriptive statistics are often reported in the Results section, such as means, standard errors (SE), standard deviation (SD), confidence interval (CI). It is of critical importance that these estimates can only be provided if the described data are drawn from a general population with normal distribution; otherwise median values with quartiles should be provided. A common mistake is to provide the results of non-parametric tests with parametric estimates. If one cannot assume normal distribution, providing arithmetic mean with standard deviation is misleading, as they are estimates of normal distribution. I recommend using confidence intervals instead of SE or SD, as confidence intervals are more informative (non-overlapping intervals suggest the existence of potential differences).

Descriptive statistics can be calculated from raw data (measured values) or presented as estimates from the calculated models (values corrected for independent effects of other factors in the model). Whether the values provided throughout the paper are estimates from models or statistics calculated from the raw data should be clearly stated in the Materials and methods section. It is not necessary to report the descriptive statistics in the text if it is already reported in the tables or can be easily determined from the graphs.

The Results section is a narrative text which tells the reader about all the findings and guides them to refer to tables and figures if present. Each table and figure should be referenced in the text at least once. It is in the author’s interest to provide the reader with the outcome of the statistical tests in such a way that the correctness of the reported values can be assessed. The value of the appropriate statistics (e.g. F, t, H, U, z, r) must always be provided, along with the sample size (N; non-parametric tests) or degrees of freedom (df; parametric tests) and the type I error (p-value). The p-value is important information, as it tells the reader about confidence related to rejecting the null hypothesis. Thus one needs to provide an exact value of the type I error. A common mistake is to provide information as an inequality (p < 0.05). There is an important difference for interpretation if p = 0.049 or p = 0.001.

The outcome of simple tests (comparing two groups, testing relationship between two variables) can easily be reported in the text, but in case of multivariate models, one may rather report the outcome in the form of a table in which all factors with their possible interactions are listed with their estimates, statistics and p-values. The results of post hoc tests, if performed, may be reported in the main text, but if one reports differences between many groups or an interaction, then presenting such results in the form of a table or graph could be more informative.

The main results are often presented graphically, particularly when the effects appear to be significant. The graphs should be constructed so that they correspond to the analyses. If the main interest of the study is in an interaction, then it should be depicted in the graph. One should not present interaction in the graph if it appeared to be non-significant. When presenting differences, the mean or median value should be visualised as a dot, circle or some other symbol with some measure of variability (quartiles if a non-parametric test was performed, and SD, SE or preferably confidence intervals in the case of parametric tests) as whiskers below and above the midpoint. The midpoints should not be linked with a line unless an interaction is presented or, more generally, if the line has some biological/logical meaning in the experimental design. Some authors present differences as bar graphs. When using bar graphs, the Y -axis must start from a zero value. If a bar graph is used to show differences between groups, some measure of variability (SD, SE, CI) must also be provided, as whiskers, for example. Graphs may present the outcome of post hoc tests in the form of letters placed above the midpoint or whiskers, with the same letter indicating lack of differences and different letters signalling pairwise differences. The significant differences can also be denoted as asterisks or, preferably, p-values placed above the horizontal line linking the groups. All this must be explained in the figure caption. Relationships should be presented in the form of a scatterplot. This could be accompanied by a regression line, but only if the relationship is statistically significant. The regression line is necessary if one is interested in describing a functional relationship between two variables. If one is interested in correlation between variables, the regression line is not necessary, but could be placed in order to visualise the relationship. In this case, it must be explained in the figure caption. If regression is of interest, then providing an equation of this regression is necessary in the figure caption. Remember that graphs serve to represent the analyses performed, so if the analyses were carried out on the transformed data, the graphs should also present transformed data. In general, the tables and figure captions must be self-explanatory, so that the reader is able to understand the table/figure content without reading the main text. The table caption should be written in such a way that it is possible to understand the statistical analysis from which the results are presented.

Guidelines for the Materials and methods section:

Provide detailed description of the experimental design so that the statistical techniques will be understandable for the reader.

Make sure that factors and groups within factors are clearly introduced.

Describe all statistical techniques applied in the study and provide justification for each test (both parametric and non-parametric methods).

If parametric tests are used, describe how the normality of data distribution and homogeneity of variance (in the case of analysis of variance) was checked and state clearly that these important assumptions for parametric tests are met.

Give a rationale for using non-parametric tests.

If data transformation was applied, provide details of how this transformation was performed and state clearly that this helped to achieve normal distribution/homogeneity of variance.

In the case of multivariate analyses, describe the statistical model in detail and explain what you did with interactions.

If post hoc tests are used, clearly state which tests you use.

Specify the type of software and its version if you think it is important.

Guidelines for presentation of the outcome of statistical analyses in the Results section:

Make sure you report appropriate descriptive statistics—means, standard errors (SE), standard deviation (SD), confidence intervals (CI), etc. in case of parametric tests or median values with quartiles in case of non-parametric tests.

Provide the appropriate statistic for your test (t value for a t-test, F for ANOVA, H for the Kruskal–Wallis test, U for the Mann–Whitney test, χ2 for the chi-square test, or r for correlation) along with the sample size (non-parametric tests) or degrees of freedom (df; parametric tests). For example:

t 23 = 3.45 (the number in the subscript denotes the degrees of freedom: the sample size of the first group minus 1 plus the sample size of the second group minus 1 for a test with independent groups, or the number of pairs minus 1 for a paired t-test).

F 1,23 = 6.04 (the first number in the subscript denotes the degrees of freedom for the explained variance, i.e. the number of groups within the factor minus 1; the second number denotes the degrees of freedom for the unexplained, residual variance). F-statistics should be provided separately for all factors and interactions (the latter only if interactions are present in the model).

H = 13.8, N1 = 15, N2 = 18, N3 = 12 for the Kruskal–Wallis test (N1, N2, N3 are the sample sizes of the groups compared).

U = 50, N1 = 20, N2 = 19 for the Mann–Whitney test (N1 and N2 are the sample sizes of the groups).

χ2 = 3.14, df = 1 (here corresponding, for example, to a 2 × 2 contingency table).

r = 0.78, N = 32 or df = 30 (df = N − 2).

Provide exact p-values (e.g. p = 0.03) rather than inequalities relative to a conventional threshold (e.g. p ≤ 0.05).

If the results of statistical analysis are presented in the form of a table, make sure the statistical model is accurately described so that the reader will understand the context of the table without referring to the text. Please ensure that the table is cited in the text.

The figure caption should include all information necessary to understand what is seen in the figure. Describe what is denoted by a bar, symbols, whiskers (mean/median, SD, SE, CI/quartiles). If you present transformed data, inform the reader about the transformation you applied. If you present the results of a post hoc test on the graph, please note what test was used and how you denote the significant differences. If you present a regression line on the scatter plot, give information as to whether you provide the line to visualise the relationship or you are indeed interested in regression, and in the latter case, give the equation for this regression line.
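As a worked illustration of the reporting formats above, the sketch below (using hypothetical data and SciPy) computes an independent-samples t-test and a correlation and prints them with the test statistic, the degrees of freedom or sample size, and an exact p-value.

```python
# Sketch: computing and formatting test statistics for the Results section.
# The two samples and the paired variables are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group1 = rng.normal(10, 2, 12)
group2 = rng.normal(12, 2, 13)

t_stat, p_val = stats.ttest_ind(group1, group2)
df = len(group1) + len(group2) - 2  # df for a two-sample t-test assuming equal variances
print(f"t {df} = {t_stat:.2f}, p = {p_val:.3f}")

x = rng.normal(0, 1, 32)
y = 0.8 * x + rng.normal(0, 0.5, 32)
r, p_r = stats.pearsonr(x, y)
print(f"r = {r:.2f}, N = {len(x)}, p = {p_r:.3f}")
```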

Further reading in statistics:

Sokal and Rohlf. 2011. Biometry. Freeman.

Zar. 2010. Biostatistical Analysis. Prentice Hall.

McDonald, J.H. 2014. Handbook of biological statistics. Sparky House Publishing, Baltimore, Maryland.

Quinn and Keough. 2002. Experimental design and data analysis for biologists. Cambridge University Press.

Author information

Authors and affiliations

Institute of Environmental Sciences, Jagiellonian University, Gronostajowa 7, 30-376, Kraków, Poland

Mariusz Cichoń


Corresponding author

Correspondence to Mariusz Cichoń .


About this article

Cite this article.

Cichoń, M. Reporting statistical methods and outcome of statistical analyses in research articles. Pharmacol. Rep. 72, 481–485 (2020). https://doi.org/10.1007/s43440-020-00110-5



What Is Statistical Analysis?


Statistical analysis is the process of collecting and analyzing data in order to discern patterns and trends. It is a method for removing bias from evaluating data by employing numerical analysis. This technique is useful for collecting the interpretations of research, developing statistical models, and planning surveys and studies.

Statistical analysis is a scientific tool in AI and ML that helps collect and analyze large amounts of data to identify common patterns and trends to convert them into meaningful information. In simple words, statistical analysis is a data analysis tool that helps draw meaningful conclusions from raw and unstructured data. 

The conclusions drawn from statistical analysis support decision-making and help businesses make predictions about the future on the basis of past trends. Statistical analysis involves working with numbers and is used by businesses and other institutions to turn data into meaningful information.


Types of Statistical Analysis

Given below are the 6 types of statistical analysis:

Descriptive Analysis

Descriptive statistical analysis involves collecting, interpreting, analyzing, and summarizing data to present them in the form of charts, graphs, and tables. Rather than drawing conclusions, it simply makes the complex data easy to read and understand.

Inferential Analysis

Inferential statistical analysis focuses on drawing meaningful conclusions from the data analyzed. It studies the relationships between different variables or makes predictions for the whole population.

Predictive Analysis

Predictive statistical analysis is a type of statistical analysis that analyzes data to identify past trends and predict future events on the basis of those trends. It uses machine learning algorithms, data mining, data modelling, and artificial intelligence to conduct the statistical analysis of data.

Prescriptive Analysis

Prescriptive analysis examines the data and prescribes the best course of action based on the results. It is a type of statistical analysis that helps you make an informed decision.

Exploratory Data Analysis

Exploratory analysis is similar to inferential analysis, but the difference is that it explores previously unknown associations: it looks for potential relationships within the data rather than testing predefined hypotheses.

Causal Analysis

The causal statistical analysis focuses on determining the cause and effect relationship between different variables within the raw data. In simple words, it determines why something happens and its effect on other variables. This methodology can be used by businesses to determine the reason for failure. 

Importance of Statistical Analysis

Statistical analysis eliminates unnecessary information and catalogs important data in an uncomplicated manner, making the monumental task of organizing inputs manageable. Once the data has been collected, statistical analysis may be utilized for a variety of purposes. Some of them are listed below:

  • Statistical analysis helps summarize enormous amounts of data into clearly digestible chunks.
  • Statistical analysis aids in the effective design of laboratory, field, and survey investigations.
  • Statistical analysis supports sound and efficient planning in any field of study.
  • Statistical analysis helps establish broad generalizations and forecast how much of something will occur under particular conditions.
  • Statistical methods, which are effective tools for interpreting numerical data, are applied in practically every field of study; statistical approaches have been created and are increasingly applied in the physical and biological sciences, such as genetics.
  • Statistical approaches are used in the work of business people, manufacturers, and researchers. Statistics departments can be found in banks, insurance businesses, and government agencies.
  • A modern administrator, whether in the public or private sector, relies on statistical data to make correct decisions.
  • Politicians can use statistics to support and validate their claims while also explaining the issues they address.


Benefits of Statistical Analysis

Statistical analysis can be called a boon to mankind and has many benefits for both individuals and organizations. Given below are some of the reasons why you should consider investing in statistical analysis:

  • It can help you determine monthly, quarterly, and yearly figures for sales, profits, and costs, making decisions easier.
  • It can help you make informed and correct decisions.
  • It can help you identify the problem or cause of a failure and make corrections. For example, it can identify the reason for an increase in total costs and help you cut wasteful expenses.
  • It can help you conduct market analysis and build an effective marketing and sales strategy.
  • It helps improve the efficiency of different processes.

Statistical Analysis Process

Given below are the 5 steps to conduct a statistical analysis that you should follow:

  • Step 1: Identify and describe the nature of the data that you are supposed to analyze.
  • Step 2: The next step is to establish a relation between the data analyzed and the sample population to which the data belongs. 
  • Step 3: The third step is to create a model that clearly presents and summarizes the relationship between the population and the data.
  • Step 4: Prove if the model is valid or not.
  • Step 5: Use predictive analysis to predict future trends and events likely to happen. 

Statistical Analysis Methods

Although there are various methods used to perform data analysis, given below are the five most widely used methods of statistical analysis:

Mean

The mean, or average, is one of the most popular methods of statistical analysis. The mean determines the overall trend of the data and is very simple to calculate: sum the numbers in the data set and divide by the number of data points. Despite its ease of calculation and its benefits, it is not advisable to rely on the mean as the only statistical indicator, as doing so can result in inaccurate decision-making.

Standard Deviation

Standard deviation is another very widely used statistical tool. It measures how far individual data points deviate from the mean of the data set, i.e. how the data are spread around the mean. You can use it to decide whether the research outcomes can be generalized.

Regression

Regression is a statistical tool that models the relationship between a dependent variable and one or more independent variables (it quantifies association, not necessarily cause and effect). It is generally used to predict future trends and events.
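For illustration, the sketch below fits a simple linear regression with SciPy; the advertising-spend and sales figures are hypothetical and serve only to show how the slope, intercept and p-value describe the relationship between an independent and a dependent variable.

```python
# Sketch: simple linear regression on hypothetical data.
from scipy import stats

ad_spend = [10, 20, 30, 40, 50, 60]      # independent variable (hypothetical, in $ thousands)
sales    = [25, 39, 58, 72, 85, 101]     # dependent variable (hypothetical, units sold)

result = stats.linregress(ad_spend, sales)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r = {result.rvalue:.3f}, p = {result.pvalue:.4f}")

# Predict sales for a new level of ad spend using the fitted line.
new_spend = 70
print("predicted sales:", result.intercept + result.slope * new_spend)
```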

Hypothesis Testing

Hypothesis testing can be used to test the validity of a claim or conclusion against a data set. The hypothesis is an assumption made at the beginning of the research, and it may be supported or rejected by the analysis results.

Sample Size Determination

Sample size determination, or data sampling, is a technique used to draw from the entire population a sample that is representative of that population. This method is used when the population is very large. You can choose from various sampling techniques, such as snowball sampling, convenience sampling, and random sampling.
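A common way to determine a sample size is a power analysis. The sketch below is a minimal example assuming a two-group comparison, a medium effect size of 0.5, a 5% significance level and 80% power; under those assumptions, statsmodels estimates the number of participants needed per group.

```python
# Sketch: sample size via power analysis for a two-sample t-test.
# Effect size, alpha and power are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 under these assumptions
```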

Statistical Analysis Software

Not everyone can perform very complex statistical calculations accurately, which makes statistical analysis a time-consuming and costly process. Statistical software has therefore become a very important tool for companies performing data analysis. Such software uses artificial intelligence and machine learning to perform complex calculations, identify trends and patterns, and create charts, graphs, and tables accurately within minutes.

Statistical Analysis Examples

Look at the standard deviation calculation given below to understand more about statistical analysis.

The measurements of 5 pizza bases (in cm) are: 9, 2, 5, 4 and 12.

Mean = (9 + 2 + 5 + 4 + 12)/5 = 32/5 = 6.4

Mean of squared deviations from the mean = (6.76 + 19.36 + 1.96 + 5.76 + 31.36)/5 = 13.04

Variance (dividing by n) = 13.04

Standard deviation = √13.04 ≈ 3.611
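The same calculation can be reproduced with Python's built-in statistics module, as in the minimal sketch below; note that pvariance and pstdev divide by n, matching the worked example above.

```python
# Reproducing the worked example with the standard library.
from statistics import mean, pvariance, pstdev

sizes = [9, 2, 5, 4, 12]             # the five measurements from the example
print(mean(sizes))                   # 6.4
print(pvariance(sizes))              # 13.04 (average squared deviation from the mean)
print(round(pstdev(sizes), 3))       # 3.611
```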

Career in Statistical Analysis

A Statistical Analyst's career path is determined by the industry in which they work. Anyone interested in becoming a Data Analyst may usually enter the profession and qualify for entry-level Data Analyst positions right out of high school or a certificate program — potentially with a Bachelor's degree in statistics, computer science, or mathematics. Some people go into data analysis from a similar sector such as business, economics, or even the social sciences, usually by updating their skills mid-career with a statistical analytics course.

Working as a Statistical Analyst is also a great way to get started in the normally more complex area of data science. A Data Scientist is generally a more senior role than a Data Analyst, since it is more strategic in nature and requires a more highly developed set of technical abilities, such as knowledge of multiple statistical tools, programming languages, and predictive analytics models.

Aspiring Data Scientists and Statistical Analysts generally begin their careers by learning a programming language such as R or SQL. Following that, they must learn how to create databases, do basic analysis, and make visuals using applications such as Tableau. Not every Statistical Analyst will need to know how to do all of these things, but if you want to advance in your profession, you should be able to do them all.

Based on your industry and the sort of work you do, you may opt to study Python or R, become an expert at data cleaning, or focus on developing complicated statistical models.

You could also learn a little bit of everything, which might help you take on a leadership role and advance to the position of Senior Data Analyst. A Senior Statistical Analyst with vast and deep knowledge might take on a leadership role leading a team of other Statistical Analysts. Statistical Analysts with extra skill training may be able to advance to Data Scientists or other more senior data analytics positions.


Become Proficient in Statistics Today

Hope this article assisted you in understanding the importance of statistical analysis in every sphere of life. Artificial Intelligence (AI) can help you perform statistical analysis and data analysis very effectively and efficiently. 


About the author.

Simplilearn

Simplilearn is one of the world’s leading providers of online training for Digital Marketing, Cloud Computing, Project Management, Data Science, IT, Software Development, and many other emerging technologies.



Types of Statistical Analysis

The objective of statistical analysis is to collect data and then analyse it. These data are often large amounts of information, which need computation to yield relevant conclusions. The target of statistical analysis is to deduce information from a bulk of data and express it through graphs, calculations, charts, and tables.

Descriptive Statistical Analysis

The descriptive type of statistical analysis offers descriptions of the data. It summarises all the information about the collected data so that a comprehensive meaning can be obtained from the interpretation. Using descriptive statistical analysis, the research reaches the necessary conclusion, along with elaborate quantitative descriptions of the data obtained.

For example:

To find out the performance of a student over a year, the average mark attained throughout the year is calculated. The average is the sum of the marks the student acquired in every subject in the year divided by the total number of subjects. The resulting average is a single number that describes the student's entire performance.

Inferential Statistical Analysis

The scope of inferential statistical analysis is to offer generalisations about a huge data set through sampling. In this analytical approach, the researcher considers a sample to represent the bulk of the data. The approach proceeds by either:

Estimating parameters, or

Testing hypotheses

Considering an unbiased sample of 100 to 200 people to represent the population of a particular location is an example of sampling. Inferential statistical analysis critically evaluates the information collected from the sample to derive a relevant conclusion.

Predictive Statistical Analysis

The objective of predictive statistical analysis is to make predictions based on regular events from the past. It analyses a series of past events so that possibilities about what will happen in the future can be enumerated. This analytical approach is used in every domain of life. Based on complex event processing, graph analysis, simulation, algorithms, business rules, and machine learning, it supports decision-making, and these decisions aim to offer solutions to future issues. Predictive analysis is usually used to make predictions related to uncertainties and risks. Business domains such as financial services, marketing, and online services apply predictive analysis to gain competitive advantages.

With the advent of social media platforms, users receive recommendations for different sources of entertainment and shopping. These recommendations are based on data collected by online platforms about how each user behaves: browsing history, shopping sites visited and search keywords are the means of predicting the right recommendations for the user's spending on entertainment and shopping.

Prescriptive Statistical Analysis

Prescriptive statistical analysis aims to derive what will happen, along with precision on when it will happen. Based on statistical evaluations driven by modelling, data mining, and artificial intelligence, prescriptive analysis combines information collected from internal sources with data obtained from third-party sources. From there, it gains insight into how better possibilities can be developed for the operation or event in question. This analysis can predict what might happen in the future and what should be done to gain the best outcome from the consequences. It provides insight into the selection of different choices for different actions, based on prior recommendations.

Self-driving cars such as Waymo's are the result of prescriptive statistical analysis. Such a car performs innumerable calculations to accomplish a trip: based on prescribed situations and sensor-based information, it makes decisions to drive by itself.

Causal Statistical Analysis

The relevance of causal statistical analysis lies in its capability to explain why a certain event is happening the way it is. It is a statistical analysis that seeks the causes that lead to certain events of success or failure. This analytical approach is effective in resisting or preventing disasters at large.

In managing the COVID-19 pandemic, researchers analysed a series of former epidemics through machine learning and robust statistical algorithms. The objective was to find the causes of the spread and the ways to prevent it. Based on these cause-specific derivations, researchers tried to project the impacts of COVID-19 into the future. Even the cases and developments of COVID-19 in 2020 were analysed to build better shields for restricting its spread in 2021.

Exploratory Data Analysis

Exploratory data analysis (EDA) focuses on evaluating different sets of data and summarising the aspects most relevant to the research question. It is a natural complement to inferential statistics and is expressed through various visualisation methods. Data scientists use this statistical analysis to identify patterns and, from there, gain insight into knowledge hidden within the data. These derivations are expressed through either graphical or non-graphical representations, following the process of:

Find unknown relationships → Check hypotheses → Make assumptions

In predicting the trend of tipping a waiter at a dining party, the variables considered are the amount of the tip, the total bill amount, the gender of the payer, smoking or non-smoking section, day, time and party size. An EDA of such data might suggest that tipping depends on the number of people at the dining party: as the number of people attending the party increases, the bill amount increases but the tip as a share of the bill tends to decrease.
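A minimal exploratory sketch of this tipping example is shown below. It assumes seaborn's bundled 'tips' example dataset (with columns total_bill, tip and size) and simply tabulates the average tip percentage by party size, a purely descriptive first look taken before any formal hypothesis test.

```python
# Exploratory sketch: does tip percentage fall as party size grows?
# Uses seaborn's bundled example dataset; column names are those of that dataset.
import seaborn as sns

tips = sns.load_dataset("tips")
tips["tip_pct"] = 100 * tips["tip"] / tips["total_bill"]

# Average tip percentage for each party size: a purely descriptive first look.
print(tips.groupby("size")["tip_pct"].mean().round(1))
```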

Mechanistic Analysis

Mechanistic statistical analysis is applied mainly in big industries. Its core aim is to understand the exact changes in the variables considered that lead to changes in other variables. The entire process depends on assumptions formalised through a given systematic approach: outcomes are taken to be determined by the interactions among a system's internal components, with no room for influence from external components.

The incidence of car crashes can be analysed through mechanistic statistical analysis. In doing so, a great deal of information about the ways passengers and drivers react in crashes can be recorded as variables. All these variables can be used to develop a mechanistic model describing the impacts of, and reactions to, crashes, and these findings can then feed into the development of safety features in cars.

In summary, the different types of statistical analysis can be applied to varied kinds of information. For all of them, the core objective is to derive results so that better decisions can be made in the future.



Statistics Research Paper Writing Guide + Examples


A statistics research paper discusses and analyzes numerical data. It should cover all aspects of the data: its distribution, frequency tables and graphs.

A statistics research paper is similar to a survey research paper in many ways. Both focus on collecting information about a specific topic, often using surveys, and both use statistical methods to collect, analyze and present this information.

To explain how they differ, consider two kinds of statistics: means and relationships. It may be easier to understand this by first explaining how they are related: a mean is calculated mostly through addition (summing the values and dividing by their count), while measures of relationship, such as correlation, typically involve multiplication (summing products of paired values). This also explains why means are usually calculated at the beginning of a statistics project, before any relationship has been discovered, while calculating relationships typically falls at the end of the project. The next time you're in class, try to count how many times your instructor mentions "mean" as opposed to "relationship"!

A statistics paper is often based on a relationship between two or more variables (usually referred to as independent and dependent variables). You can think of these variables almost like social security numbers: each person has one SSN that distinguishes them from all other people; similarly, each data point (a unique combination of values for the independent variables) has its own set of values for one or more variables.

An example: if we are interested in whether there is a relationship between hospital beds per 1,000 residents and the reading scores of senior citizens (ages 65 and over) in a given city or country, then we must gather data on both variables. We will need to find out how many hospital beds there are per 1,000 residents for each of the cities or countries in our sample.

This type of research paper can also be based only on means (instead of relationships). Consider two more examples: if a psychologist wants to see whether age is related to memory loss, but doesn't care about the size or direction of the relationship, all she cares about are the mean memory scores for groups defined by age.

On the other hand, if the psychologist wants to know whether older people tend to have better memories than younger people, and she also cares about how large this difference is (e.g. whether their mean score is 10 points higher), then her research project will include calculations of the relationship between age and memory scores.

How to write a statistics research paper

Getting started with your research paper is a difficult task. You could simply hop online and search for information on how to write one, but what are the necessary steps?

You can learn them by following this short guide:

Start with proper research

Write an introduction, transition into your thesis statement, and finish with a conclusion of sorts. This structure will not only help you build a strong foundation but also give the reader clues about where you are going, which matters in particular when writing a statistics essay.

The introduction should be broad enough to capture the reader's attention and yet narrow enough to indicate that it is about something specific.

It can also serve as a springboard for later argumentation or present the central idea.

What makes a good research paper introduction, however, is a little bit of mystery or enigma that makes the reader think, "I want to know more about this; why do they think so?"

Finding journal articles on statistics research paper

The next step is to locate journals and magazines. It's best if you have some knowledge of what kind of scholarly work this will entail: statistics papers are not often found in tabloids but rather in peer-reviewed sources.

If you're struggling to compile a comprehensive list of online sources, try asking your school's librarian or library for help.

There are also other options, such as using one of the article databases that contain articles from all over the world, sorted by category: perhaps there's something there, and it doesn't even have to be a journal article.

There are also books out there that have statistics papers in them – in case you want to go a little bit old school.

Writing body paragraphs

The next step is writing the body, or main part, of the statistics research paper, so it makes sense first to decide what kind of statistical test you'll use and then look up relevant information about it.

Use this information as the building blocks for paragraphs that go into detail: why did the statistician choose it?

What are some common criticisms/counterpoints?

These questions should be answered with specific examples and further justification of why they are relevant. The next section will deal with interpreting numerical results and drawing conclusions: this involves taking those numbers and making something meaningful out of them, beyond simply comparing them to each other. It can be a lot to take in, so it's best to break it down and look up specific information for the different parts.

Write your paper

Now that you have collected all the necessary material, writing the statistics research paper shouldn't be hard. Remember not to simply copy and paste information from somewhere else without citing your sources, and make sure that your own work improves on what you have read.

The final step involves polishing and proofreading your work: make sure there are no mistakes before submitting or publishing online.

You should always use correct grammar, spelling and punctuation as well as consistent referencing/citation style (MLA, APA etc.).

And there you have it: statistics research paper.

Write perfect conclusion

The next step is writing your conclusions. You've already done a lot of the work for this in the body, so it shouldn't be anything out of the ordinary.

Make sure you reiterate what you said in the introduction, and sometimes add some more commentary on particular things that could serve as relevant examples for future research or topics to investigate further.

One thing that is especially important here is presenting results and data clearly and making them easy to understand, even for people who aren't statisticians themselves. This can help greatly in addressing potential criticism.

A statistics research paper, then, should include a summary of the methods used to gather and analyze your data (usually presented in sections 2 and 3), followed by your findings (usually presented in sections 4–7). All of this material must be contained within a single document.

Your paper is organized differently from other types of research reports. The most common order is presented here to make it easier for you to adapt these sections to your own planning:

  • Introduction – this section is often called the "literature review" and contains a brief overview of your topic, including noteworthy definitions and facts. This discussion usually includes information about the shortcomings of prior research in the area, if there are any.
  • Methodology – this section includes a description of how you obtained (gathered) your data; examples include surveys, interviews or usage logs.
  • Data – if you did not collect data yourself, consider presenting a chart from an existing source that will help readers understand your results.
  • Summary & Discussion – here you present the most important numerical findings related to your study.
  • Conclusions & Recommendations – be sure to make recommendations based on your findings.
  • References – provide a list of pertinent sources from which you obtained information and ideas.
  • Data Analysis – this section is usually presented as a subsection of the data section. Here you report how you analyzed your data, including all calculations and inferences.
  • Appendices – an optional section in which to place tables that provide detailed information about your study.
  • Acknowledgments – it's always nice to thank people for their help, in this case by providing a list of everyone who contributed to the project.
  • Related Reading – refer interested readers to other journal articles, books or websites related to this material for further reading.
  • Tables – these are necessary in order to present your results; they also summarize large amounts of information in a small amount of space.
  • Figures – graphs and charts present data in an easy-to-understand manner.
  • Graphs – a graph is a visual representation of data.
  • Charts – a chart is a graphical display of numerical comparisons. It can be useful for showing the relationship between two or more things, such as trends over time.
  • List of Participants – a list of everyone who helped you with the project.
  • Certificate of Approval – every department or institute has its own rules about what must be included here: usually a statement saying that your research followed ethical standards and was approved by the proper authorities.

Note: The examples provided above are not intended to represent all possible formats that could be used; rather, they describe what most researchers tend to do.

Also note: depending on your course or assignment, you might be required to use certain formatting styles, and these requirements may vary from one class or professor to another. It is always important to know how and when to cite (reference) sources within your paper.

The last few sections are optional, depending on the format guidelines established by the instructor for your assignment.

Good luck with this paper!

Statistics research paper outline template

The following is the general format and structure of a statistics research paper: introduction/background, problem statement, objectives, materials & methods, results and discussion sections. Use this table of contents as an outline when you are beginning your research.


As you begin each portion make sure to refer back to this outline.

  • Problem statement
  • Introduction/background
  • Materials and methods discussion & results
  • Data analysis plan/approach
  • Discussion (use subheadings if necessary)
  • Conclusion and recommendations

Below is an idealized outline for a statistics research paper. Each of the outline items above is discussed in more depth below.

  • Introduction – background information about the topic, the relevance of the particular issue and the significance of the data to the problem; 1–2 paragraphs.
  • Problem statement – state what problem was studied and why; this must be grounded in your initial set of literature review sources (the citations of your statistics research paper); 1 paragraph. State briefly how it relates to the overall field or area of study.
  • Objectives – list the primary and secondary objectives you were trying to achieve while writing the paper, separating each with a period. Do not use more than two levels of sub-objectives in the outline.
  • Materials & methods – explain your data-gathering process (for the collection of raw statistics), how you set up the experiment for statistical hypothesis testing, and any other relevant information about how the statistics were produced; 2–3 paragraphs.
  • Data analysis/results – state how you analyzed the collected data (tables, graphs, charts, etc.) that is not available in published works or articles; 2–3 paragraphs.
  • Discussion section – this is important! Don't forget to discuss the results presented in the data analysis/results section and to recap your problem statement; 3–4 paragraphs. Discussion of the main results is an essential part of a research paper, so don't forget to include one. Defend your hypothesis (if applicable) and describe its importance. Also compare your study with other studies on similar topics: present similarities and differences in the data collection methods and outcome measurements used, and take time to explain how each area is different.
  • Conclusion & recommendations – in your research paper conclusion, reiterate the significance of your hypothesis; based on the results, it is either confirmed, disconfirmed or weakened/strengthened. Also indicate what you would like to do next (if not requested in the guidelines): future research, further studies using the same methods, and so on. Give an indication of how long each study might take and who could benefit from its findings. If needed, you also have space here to discuss any recommendations for future work (suggestions are good, but don't sound too pushy).
  • References – the references section usually sits under a separate heading and includes the title and author of each paper or book, the date of publication and page numbers. In your bibliography, list the various books and articles, including editions and versions if needed, that were used as sources for your research paper or study (it will likely contain other authors' articles that you read).
  • Appendices – tables, figures, charts and appendices with raw data. Anything else that was created during the statistical analysis while writing the paper can go here (you may need extra sheets if there are too many graphs to include in the main text).

Statistics research writing tips:

Make sure to explain your purpose for the study and also give some background information on the problem. This background information should show why your statistics research paper is important and significant to your field. Here are some ways to say it without coming across as boring or unprofessional; they're quite general but good enough:

  • “This paper will investigate…”
  • “The objective of this study is…”
  • “It has not yet been established whether…”
  • “There have been a few studies concerning..”
  • “As there has been much debate about…”
  • “No prior study exists that.”

State what your hypothesis was, how you came up with it, and any problems you faced trying to test it.

  • Now, state your results and findings, including the statistical analysis if necessary (such as significance or not).
  • Write only what was found using charts or graphs, tables, and numbers that back up your claims about the study.
  • Check all your spelling and grammar again.
  • Pay attention to commas, semicolons and spaces.
  • Use a spell checker if possible or ask someone else to proof-read it for you.

This is how simple it is to write a great statistics term paper or research paper. If you get stuck, you can ask for research paper writing help from expert tutors.

Statistics Research Topics

Wondering what to write a research paper on statistics and probability about?

Statistics is a branch of mathematics dealing with the collection, analysis and interpretation of data.

Statistics is used in many fields, including the natural sciences, social sciences, business and engineering.

A statistician collects, computes and analyzes numerical data to summarize information.

If you are looking for a statistics research paper topic ideas you’ve come to the right place!


Check out this list below of major research paper topics in statistics and probability for college students:

Statistics Research Topics – Probability

Description: Probability deals with events that have uncertain outcomes. It involves mathematical calculations using formulas with random variables, such as probability density functions (pdf), probability distributions, expected values or moments E(X), and variance V(X). In other words, a probability distribution summarizes all possible outcomes in terms of probabilities, based on theoretical assumptions or data collection.

A specific probability distribution can be constructed from the frequencies of events observed in the long run; the idea that long-run frequencies settle down to stable probabilities is the law of large numbers.

A closely related result, the Central Limit Theorem, says that if we take averages of many independent random variables, the distribution of those averages approaches a normal (bell-shaped) distribution regardless of the shape or other details of the original variables, as the simulation below illustrates.

Probability also covers conditional probability: a measure used to find out how likely it is that one event A happens given that another event B happened first.
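The Central Limit Theorem is easy to see in a short simulation. The sketch below (an illustrative example, not tied to any particular study) draws many samples from a heavily skewed exponential distribution, yet the resulting sample means cluster symmetrically around the true mean.

```python
# Sketch: Central Limit Theorem by simulation.
import numpy as np

rng = np.random.default_rng(0)
# 10,000 samples of size 50 from a skewed (exponential) distribution with mean 2.0
sample_means = rng.exponential(scale=2.0, size=(10_000, 50)).mean(axis=1)

print("mean of sample means:", round(sample_means.mean(), 3))  # close to 2.0
print("std of sample means: ", round(sample_means.std(), 3))   # close to 2/sqrt(50) ≈ 0.283
```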

Statistics Research Topics – Descriptive statistics

Description: descriptive statistics collects and interprets numerical data in terms of distributions, graphs, measures and relationships among variables.

For example, the mean, median and mode are measures of central tendency.

The standard deviation, on the other hand, measures dispersion. Together, such measures summarize data in terms of its most important features.

Descriptive statistics is also necessary for analyzing real-life situations.

It provides information that's useful for making business decisions.

Statistics Research Topics – Testing significance of relationships (correlation)

Description: Correlation deals with measuring the strength, direction, and stability of the relationship between two or more variables.

A positive correlation indicates that as the value of one variable increases, so does the value of the other.

For example, if employees in a call center perform better at work when they are seated close to their supervisors, this is a positive correlation: as one variable (proximity to the supervisor) increases, the other variable (performance) also increases.

Two variables are negatively correlated when, as one variable increases, the other decreases.

For example, if countries with a higher GDP per capita tend to have lower population growth rates, this is a negative correlation: as income rises, population growth falls.
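As a rough illustration of the call-center example above, the sketch below computes Pearson’s r for invented data; it assumes Python 3.10+ for statistics.correlation.

```python
# Minimal sketch: strength and direction of a linear relationship via Pearson's r.
# The distances and scores are invented; statistics.correlation needs Python 3.10+.
from statistics import correlation

distance_to_supervisor = [1, 2, 3, 4, 5, 6, 7, 8]          # metres (assumed data)
performance_score      = [92, 90, 85, 83, 80, 78, 74, 70]  # assumed data

r = correlation(distance_to_supervisor, performance_score)
print(f"Pearson r = {r:.2f}")  # near -1: performance falls as distance grows
```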

Statistics Research Topics – Sampling

Description: Sampling deals with determining appropriate sample sizes for a study based on its specific requirements.

For example, you might choose five people out of hundreds in order to conduct a survey or research study.

The main idea behind sampling is to reduce the cost and effort of data collection while losing as little relevant information as possible.

Sampling is also used to make inferences about a population, or to study it indirectly, by examining a representative sample (a subgroup) that is expected to be close enough to the population as a whole.
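The sketch below, a minimal simulation with made-up numbers, shows how a simple random sample can stand in for a much larger population when estimating a mean.

```python
# Minimal sketch: estimate a population mean from a simple random sample.
# The synthetic population and the sample size are illustrative assumptions.
import random

random.seed(7)
population = [random.gauss(50, 10) for _ in range(10_000)]  # "true" population
sample = random.sample(population, k=100)                   # simple random sample

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean = {pop_mean:.2f}, sample mean = {sample_mean:.2f}")
```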

Statistics Research Topics – Hypothesis testing

Description: Hypothesis testing deals with conducting statistical tests that determine whether the data support a certain claim, that is, whether a conclusion is statistically significant.

For example, suppose you want to test whether the data show that girls earn higher grades than boys in math classes.

This can be done using a formal statistical procedure called the t-test: by calculating the two group means and comparing them (taking their variability into account), we can see whether there is a real difference between them.

If one group’s average turns out to be significantly different from the other’s, the result is considered statistically significant.
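Here is a minimal sketch of that two-sample comparison; the grade lists are invented and SciPy is assumed to be installed.

```python
# Minimal sketch: Welch's two-sample t-test on invented grade data.
from scipy import stats

girls_grades = [78, 85, 90, 72, 88, 95, 81, 84]  # assumed data
boys_grades  = [70, 75, 88, 65, 80, 77, 72, 79]  # assumed data

t_stat, p_value = stats.ttest_ind(girls_grades, boys_grades, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (commonly < 0.05) suggests the difference in means is
# unlikely to be due to chance alone.
```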

Statistics Research Topics – Analysis of variance (ANOVA)

Description: Analysis of variance (ANOVA) deals with comparing the means of three or more groups to determine whether they differ by more than chance variation would allow.

It can also be used to determine whether the overall means of several groups differ from one another.

ANOVA helps determine whether there are significant differences among two or more related populations that would affect the results and conclusions drawn from them.

For example, suppose you want to know whether there is any difference among four brands of your favorite soft drink. You could conduct an experiment by having 10 people who all like this kind of drink taste each of the brands and indicate which one they prefer.
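A minimal sketch of a one-way ANOVA for the soft-drink example; the preference ratings are invented and SciPy is assumed to be available.

```python
# Minimal sketch: one-way ANOVA across four groups of invented ratings (1-10).
from scipy import stats

brand_a = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]
brand_b = [5, 6, 5, 7, 6, 5, 6, 6, 5, 7]
brand_c = [8, 9, 8, 7, 9, 8, 9, 8, 7, 9]
brand_d = [6, 7, 6, 6, 7, 5, 6, 7, 6, 6]

f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c, brand_d)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one brand's mean rating differs; a follow-up
# (post-hoc) test would identify which pairs differ.
```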

Statistics Research Topics – Confidence intervals

Description: Confidence intervals are used in statistical studies involving hypothesis testing, in which observations about a population are made on the basis of collected data.

A confidence interval is essentially a range of values meant to indicate the variability associated with an estimated parameter for a group or population.

It helps show how much uncertainty exists in the estimate of a single population parameter.

This is done by adding and subtracting a margin of error to and from the original estimate.
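The margin-of-error construction just described looks like this in a minimal Python sketch; the measurements are synthetic, and SciPy is assumed only for the t critical value.

```python
# Minimal sketch: 95% confidence interval for a mean = estimate +/- margin of error.
import statistics
from scipy import stats

data = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9, 12.4, 12.0]  # assumed data

n = len(data)
mean = statistics.mean(data)
sem = statistics.stdev(data) / n ** 0.5      # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)        # two-sided 95% critical value
margin = t_crit * sem

print(f"95% CI: ({mean - margin:.2f}, {mean + margin:.2f})")
```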

Statistics Research Topics – Non-parametric tests

Description: Non-parametric tests are used in statistical studies to make comparisons between two or more samples using data that are not fully numerical or that do not satisfy the usual distributional assumptions of parametric tests.

This is usually done by converting scores (e.g., number grades, percentages) into ranks so that the samples can be compared more easily.

Examples of these tests include the rank-sum (Mann–Whitney) test, the sign test, and others.
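A minimal sketch of a rank-based comparison of two samples; the score lists are invented and SciPy is assumed to be installed.

```python
# Minimal sketch: Mann-Whitney (rank-sum) U test, which compares two samples
# using ranks rather than raw values.
from scipy import stats

group_1 = [88, 92, 75, 81, 95, 68, 77]  # assumed data
group_2 = [70, 65, 80, 72, 69, 74, 66]  # assumed data

u_stat, p_value = stats.mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```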

If you want to find out how statistics can help your business, especially in terms of improving operational efficiency and productivity, then contact us today! We will be glad to guide you through this step-by-step process!

Join us today and we will help you develop a strategic plan and help you write a statistics paper fast.

In academia, for example, research papers are useful for gaining knowledge and learning from other people’s mistakes. A good statistics research paper can add value to your coursework if it has been written by a professional essay writer who understands both the subject of the paper and how it is relevant to all fields of study.

Professional Statistics Research Paper Writers

Tutlance is a hub for the best statistics research paper writers from an array of academic disciplines and backgrounds. Our specialists work with you to understand your needs, then write a top-class statistics paper that fits the bill and exceeds your expectations in every way. You can be sure that our statistics research paper writers will deliver high-quality, 100% plagiarism-free work!

Why Choose Us?

Tutlance.com remains the best homework writing service to pay for a statistics research paper because of our outstanding writers and deliverables:

  • Guaranteed high-quality research papers.
  • Unlimited free revisions until you are satisfied with your order’s quality and content.
  • Plagiarism-free papers delivered on time, on every order.
  • 24/7 customer support via live chat, phone calls, or email. No delays in answering!

Other guides:

  • How to write a statistical analysis paper.

Statistics Research Paper Writing Help from Our Statistics Essay Experts

Get help writing any statistics paper from professional online statistics tutors. We make statistics writing and research easier. You can save time by ordering a paper written for you from our statistics tutors.

Statistics writing services are available to all college students who wish to engage in the study of data using numerical methods, analyzing aspects such as distribution, central tendency, dispersion, and the relationships between two or more sets of samples. Many people think that statistical analysis involves only numerical procedures, but it is much more than that. The practice has many applications in varied fields such as Customer Relationship Management (CRM), Business Intelligence (BI), marketing and sales analysis, data mining, Quality Control (QC), Operations Research (OR), finance, and economics. Statistics has been used in every part of human life for ages, because humans are curious about the environment around them.

How can Tutlance help me get my statistics essay done?

We understand how demanding writing a statistics essay can be, given that it involves working with complex statistical data. Before we send your order to our writers, we will check it for accuracy and precision. We provide professional statistics essay writing help to students at all academic levels: high school, college, and university. Our experts can also write a research proposal or any other paper that involves interpreting data and drawing conclusions based on scientific research methods.

Can you help me come up with a good statistics research paper template?

Yes, we can help you create a good statistics research paper template! A sample statistics research paper is a good guide to follow while writing your own term paper, whether it is an essay, dissertation, or thesis. Our writers provide free examples of statistics papers that you can use as a starting point when writing your own work.

We have plenty of experts with experience in various fields such as sociology, anthropology, economics, and many more. Just place an order online or contact us by phone at any time, and we will assign the most suitable expert to write your statistics research paper for money.

Can you help me write a business statistics research paper?

Yes, we can help you write a business statistics research paper. We have professional writers who will prepare a creative and impactful paper based on your instructions. Our experts specialize in writing dissertations, term papers, essays, and other academic papers for students across the globe. For college students busy with their final semester exams, this is the best time to get assistance from our experts.

If you are struggling with your statistics essay or research paper because of a lack of topic ideas, or need any kind of guidance on how to go about your work, take advantage of our services today by getting in touch with us at Tutlance.com. We would be more than glad to assist you with any sort of statistical issue you might be experiencing in your academic life. Our company is renowned for providing high-quality business statistics research paper writing assistance to college students at a very affordable price.

Read more about business statistics assignment help and psychology statistics homework help.

Can you provide an example of a probability and statistics research paper?

Yes, we can provide an example of a probability and statistics research paper. Such examples can be helpful for students who have difficulty writing their own term papers or essays. We will not only provide a sample essay but also supply the relevant instructions needed to carry out the process of academic writing.

Other kinds of examples of research papers in probability and statistics include:

  • probability theory research paper,
  • psychology statistics research paper.

We can provide sample statistics research papers on any topic. Contact our statistics paper writers for cheap probability theory homework help.

Can Tutlance actually do my statistics homework?

Absolutely! Tutlance employs only the most qualified and experienced statisticians, who are highly proficient in statistical techniques such as probability sampling, regression analysis, and chi-square tests. They can interpret complex results, including the mean, median, mode, and other summary measures. We have access to different online channels through which we can get your homework done quickly and effectively.

Read more – how can I hire someone to do my statistics homework and can you help me with my math – resource pages.

Contact us today or fill out this no-obligation form now – ask a question online and get help with your statistics project or research paper.



Probability and statistics research paper examples


Statistics Research Paper

View this sample statistics research paper. Browse other research paper examples and check the list of research paper topics for more inspiration. If you need a statistics research paper written according to all the academic standards, you can always turn to our experienced writers for help. This is how your paper can get an A! Feel free to contact our custom writing services for professional assistance. We offer high-quality assignments at reasonable rates.


Need a custom-written essay or a research paper? We offer academic writing, editing, proofreading, and problem-solving services.

More statistics research papers:

  • Time Series Research Paper
  • Crime Statistics Research Paper
  • Economic Statistics Research Paper
  • Education Statistics Research Paper
  • Health Statistics Research Paper
  • Labor Statistics Research Paper
  • History of Statistics Research Paper
  • Survey Sampling Research Paper
  • Multidimensional Scaling Research Paper
  • Sequential Statistical Methods Research Paper
  • Simultaneous Equation Estimation Research Paper
  • Statistical Clustering Research Paper
  • Statistical Sufficiency Research Paper
  • Censuses Of Population Research Paper
  • Stochastic Models Research Paper
  • Stock Market Predictability Research Paper
  • Structural Equation Modeling Research Paper
  • Survival Analysis Research Paper
  • Systems Modeling Research Paper
  • Nonprobability Sampling Research Paper

1. Introduction

Statistics is a body of quantitative methods associated with empirical observation. A primary goal of these methods is coping with uncertainty. Most formal statistical methods rely on probability theory to express this uncertainty and to provide a formal mathematical basis for data description and for analysis. The notion of variability associated with data, expressed through probability, plays a fundamental role in this theory. As a consequence, much statistical effort is focused on how to control and measure variability and/or how to assign it to its sources.

Almost all characterizations of statistics as a field include the following elements:

(a) Designing experiments, surveys, and other systematic forms of empirical study.

(b) Summarizing and extracting information from data.

(c) Drawing formal inferences from empirical data through the use of probability.

(d) Communicating the results of statistical investigations to others, including scientists, policy makers, and the public.

This research paper describes a number of these elements, and the historical context out of which they grew. It provides a broad overview of the field that can serve as a starting point to many of the other statistical entries in this encyclopedia.

2. The Origins Of The Field of Statistics

The word ‘statistics’ is related to the word ‘state’ and the original activity that was labeled as statistics was social in nature and related to elements of society through the organization of economic, demographic, and political facts. Paralleling this work to some extent was the development of the probability calculus and the theory of errors, typically associated with the physical sciences. These traditions came together in the nineteenth century and led to the notion of statistics as a collection of methods for the analysis of scientific data and the drawing of inferences therefrom.

As Hacking (1990) has noted: ‘By the end of the century chance had attained the respectability of a Victorian valet, ready to be the logical servant of the natural, biological and social sciences’ ( p. 2). At the beginning of the twentieth century, we see the emergence of statistics as a field under the leadership of Karl Pearson, George Udny Yule, Francis Y. Edgeworth, and others of the ‘English’ statistical school. As Stigler (1986) suggests:

Before 1900 we see many scientists of different fields developing and using techniques we now recognize as belonging to modern statistics. After 1900 we begin to see identifiable statisticians developing such techniques into a unified logic of empirical science that goes far beyond its component parts. There was no sharp moment of birth; but with Pearson and Yule and the growing number of students in Pearson’s laboratory, the infant discipline may be said to have arrived. (p. 361)

Pearson’s laboratory at University College, London quickly became the first statistics department in the world and it was to influence subsequent developments in a profound fashion for the next three decades. Pearson and his colleagues founded the first methodologically-oriented statistics journal, Biometrika, and they stimulated the development of new approaches to statistical methods. What remained before statistics could legitimately take on the mantle of a field of inquiry, separate from mathematics or the use of statistical approaches in other fields, was the development of the formal foundations of theories of inference from observations, rooted in an axiomatic theory of probability.

Beginning at least with the Rev. Thomas Bayes and Pierre Simon Laplace in the eighteenth century, most early efforts at statistical inference used what was known as the method of inverse probability to update a prior probability using the observed data in what we now refer to as Bayes’ Theorem. (For a discussion of who really invented Bayes’ Theorem, see Stigler 1999, Chap. 15). Inverse probability came under challenge in the nineteenth century, but viable alternative approaches gained little currency. It was only with the work of R. A. Fisher on statistical models, estimation, and significance tests, and Jerzy Neyman and Egon Pearson, in the 1920s and 1930s, on tests of hypotheses, that alternative approaches were fully articulated and given a formal foundation. Neyman’s advocacy of the role of probability in the structuring of a frequency-based approach to sample surveys in 1934 and his development of confidence intervals further consolidated this effort at the development of a foundation for inference (cf. Statistical Methods, History of: Post- 1900 and the discussion of ‘The inference experts’ in Gigerenzer et al. 1989).

At about the same time Kolmogorov presented his famous axiomatic treatment of probability, and thus by the end of the 1930s, all of the requisite elements were finally in place for the identification of statistics as a field. Not coincidentally, the first statistical society devoted to the mathematical underpinnings of the field, The Institute of Mathematical Statistics, was created in the United States in the mid-1930s. It was during this same period that departments of statistics and statistical laboratories and groups were first formed in universities in the United States.

3. Emergence Of Statistics As A Field

3.1 The Role Of World War II

Perhaps the greatest catalysts to the emergence of statistics as a field were two major social events: the Great Depression of the 1930s and World War II. In the United States, one of the responses to the depression was the development of large-scale probability-based surveys to measure employment and unemployment. This was followed by the institutionalization of sampling as part of the 1940 US decennial census. But with World War II raging in Europe and in Asia, mathematicians and statisticians were drawn into the war effort, and as a consequence they turned their attention to a broad array of new problems. In particular, multiple statistical groups were established in both England and the US specifically to develop new methods and to provide consulting. (See Wallis 1980, on statistical groups in the US; Barnard and Plackett 1985, for related efforts in the United Kingdom; and Fienberg 1985). These groups not only created imaginative new techniques such as sequential analysis and statistical decision theory, but they also developed a shared research agenda. That agenda led to a blossoming of statistics after the war, and in the 1950s and 1960s to the creation of departments of statistics at universities—from coast to coast in the US, and to a lesser extent in England and elsewhere.

3.2 The Neo-Bayesian Revival

Although inverse probability came under challenge in the 1920s and 1930s, it was not totally abandoned. John Maynard Keynes (1921) wrote A Treatise on Probability that was rooted in this tradition, and Frank Ramsey (1926) provided an early effort at justifying the subjective nature of prior distributions and suggested the importance of utility functions as an adjunct to statistical inference. Bruno de Finetti provided further development of these ideas in the 1930s, while Harold Jeffreys (1938) created a separate ‘objective’ development of these and other statistical ideas on inverse probability.

Yet as statistics flourished in the post-World War II era, it was largely based on the developments of Fisher, Neyman and Pearson, as well as the decision theory methods of Abraham Wald (1950). L. J. Savage revived interest in the inverse probability approach with The Foundations of Statistics (1954) in which he attempted to provide the axiomatic foundation from the subjective perspective. In an essentially independent effort, Raiffa and Schlaifer (1961) attempted to provide inverse probability counterparts to many of the then existing frequentist tools, referring to these alternatives as ‘Bayesian.’ By 1960, the term ‘Bayesian inference’ had become standard usage in the statistical literature, the theoretical interest in the development of Bayesian approaches began to take hold, and the neo-Bayesian revival was underway. But the movement from Bayesian theory to statistical practice was slow, in large part because the computations associated with posterior distributions were an overwhelming stumbling block for those who were interested in the methods. Only in the 1980s and 1990s did new computational approaches revolutionize both Bayesian methods, and the interest in them, in a broad array of areas of application.

3.3 The Role Of Computation In Statistics

From the days of Pearson and Fisher, computation played a crucial role in the development and application of statistics. Pearson’s laboratory employed dozens of women who used mechanical devices to carry out the careful and painstaking calculations required to tabulate values from various probability distributions. This effort ultimately led to the creation of the Biometrika Tables for Statisticians that were so widely used by others applying tools such as chi-square tests and the like. Similarly, Fisher also developed his own set of statistical tables with Frank Yates when he worked at Rothamsted Experiment Station in the 1920s and 1930s. One of the most famous pictures of Fisher shows him seated at Whittingehame Lodge, working at his desk calculator (see Box 1978).

The development of the modern computer revolutionized statistical calculation and practice, beginning with the creation of the first statistical packages in the 1960s—such as the BMDP package for biological and medical applications, and Datatext for statistical work in the social sciences. Other packages soon followed—such as SAS and SPSS for both data management and production-like statistical analyses, and MINITAB for the teaching of statistics. In 2001, in the era of the desktop personal computer, almost everyone has easy access to interactive statistical programs that can implement complex statistical procedures and produce publication-quality graphics. And there is a new generation of statistical tools that rely upon statistical simulation such as the bootstrap and Markov Chain Monte Carlo methods. Complementing the traditional production-like packages for statistical analysis are more methodologically oriented languages such as S and S-PLUS, and symbolic and algebraic calculation packages. Statistical journals and those in various fields of application devote considerable space to descriptions of such tools.

4. Statistics At The End Of The Twentieth Century

It is widely recognized that any statistical analysis can only be as good as the underlying data. Consequently, statisticians take great care in the design of methods for data collection and in their actual implementation. Some of the most important modes of statistical data collection include censuses, experiments, observational studies, and sample surveys, all of which are discussed elsewhere in this encyclopedia. Statistical experiments gain their strength and validity both through the random assignment of treatments to units and through the control of nontreatment variables. Similarly, sample surveys gain their validity for generalization through the careful design of survey questionnaires and probability methods used for the selection of the sample units. Approaches to cope with the failure to fully implement randomization in experiments or random selection in sample surveys are discussed in Experimental Design: Compliance and Nonsampling Errors.

Data in some statistical studies are collected essentially at a single point in time (cross-sectional studies), while in others they are collected repeatedly at several time points or even continuously, while in yet others observations are collected sequentially, until sufficient information is available for inferential purposes. Different entries discuss these options and their strengths and weaknesses.

After a century of formal development, statistics as a field has developed a number of different approaches that rely on probability theory as a mathematical basis for description, analysis, and statistical inference. We provide an overview of some of these in the remainder of this section and provide some links to other entries in this encyclopedia.

4.1 Data Analysis

The least formal approach to inference is often the first employed. Its name stems from a famous article by John Tukey (1962), but it is rooted in the more traditional forms of descriptive statistical methods used for centuries.

Today, data analysis relies heavily on graphical methods and there are different traditions, such as those associated with

(a) The ‘exploratory data analysis’ methods suggested by Tukey and others.

(b) The more stylized correspondence analysis techniques of Benzecri and the French school.

(c) The alphabet soup of computer-based multivariate methods that have emerged over the past decade such as ACE, MARS, CART, etc.

No matter which ‘school’ of data analysis someone adheres to, the spirit of the methods is typically to encourage the data to ‘speak for themselves.’ While no theory of data analysis has emerged, and perhaps none is to be expected, the flexibility of thought and method embodied in the data analytic ideas have influenced all of the other approaches.

4.2 Frequentism

The name of this group of methods refers to a hypothetical infinite sequence of data sets generated as was the data set in question. Inferences are to be made with respect to this hypothetical infinite sequence. (For details, see Frequentist Inference).

One of the leading frequentist methods is significance testing, formalized initially by R. A. Fisher (1925) and subsequently elaborated upon and extended by Neyman and Pearson and others (see below). Here a null hypothesis is chosen, for example, that the mean, µ, of a normally distributed set of observations is 0. Fisher suggested the choice of a test statistic, e.g., based on the sample mean, x, and the calculation of the likelihood of observing an outcome as extreme as, or more extreme than, x under the null hypothesis µ = 0, a quantity usually labeled the p-value. When p is small (e.g., less than 5 percent), either a rare event has occurred or the null hypothesis is false. Within this theory, no probability can be given for which of these two conclusions is the case.
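(As an illustrative aside, not part of the original entry: the recipe above can be sketched in a few lines of Python, assuming SciPy is available and using invented observations.)

```python
# Illustrative sketch of Fisher's significance-testing recipe: test H0: mu = 0
# for a small set of (invented) normally distributed observations.
from scipy import stats

observations = [0.4, -0.1, 0.9, 0.3, 0.7, -0.2, 0.5, 0.6]  # assumed data

t_stat, p_value = stats.ttest_1samp(observations, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# If p is small (say < 0.05), either a rare event has occurred or H0 is false.
```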

A related set of methods is testing hypotheses, as proposed by Neyman and Pearson (1928, 1932). In this approach, procedures are sought having the property that, for an infinite sequence of such sets, in only (say) 5 percent of them would the null hypothesis be rejected if the null hypothesis were true. Often the infinite sequence is restricted to sets having the same sample size, but this is unnecessary. Here, in addition to the null hypothesis, an alternative hypothesis is specified. This permits the definition of a power curve, reflecting the frequency of rejecting the null hypothesis when the specified alternative is the case. But, as with the Fisherian approach, no probability can be given to either the null or the alternative hypotheses.

The construction of confidence intervals, following the proposal of Neyman (1934), is intimately related to testing hypotheses; indeed a 95 percent confidence interval may be regarded as the set of null hypotheses which, had they been tested at the 5 percent level of significance, would not have been rejected. A confidence interval is a random interval, having the property that the specified proportion (say 95 percent) of the infinite sequence of random intervals would have covered the true value. For example, an interval that 95 percent of the time (by auxiliary randomization) is the whole real line, and 5 percent of the time is the empty set, is a valid 95 percent confidence interval.

Estimation of parameters—i.e., choosing a single value of the parameters that is in some sense best—is also an important frequentist method. Many methods have been proposed, both for particular models and as general approaches regardless of model, and their frequentist properties explored. These methods are usually extended to intervals of values through inversion of test statistics or via other related devices. The resulting confidence intervals share many of the frequentist theoretical properties of the corresponding test procedures.

Frequentist statisticians have explored a number of general properties thought to be desirable in a procedure, such as invariance, unbiasedness, sufficiency, conditioning on ancillary statistics, etc. While each of these properties has examples in which it appears to produce satisfactory recommendations, there are others in which it does not. Additionally, these properties can conflict with each other. No general frequentist theory has emerged that proposes a hierarchy of desirable properties, leaving a frequentist without guidance in facing a new problem.

4.3 Likelihood Methods

The likelihood function (first studied systematically by R. A. Fisher) is the probability density of the data, viewed as a function of the parameters. It occupies an interesting middle ground in the philosophical debate, as it is used both by frequentists (as in maximum likelihood estimation) and by Bayesians in the transition from prior distributions to posterior distributions. A small group of scholars (among them G. A. Barnard, A. W. F. Edwards, R. Royall, D. Sprott) have proposed the likelihood function as an independent basis for inference. The issue of nuisance parameters has perplexed this group, since maximization, as would be consistent with maximum likelihood estimation, leads to different results in general than does integration, which would be consistent with Bayesian ideas.

4.4 Bayesian Methods

Both frequentists and Bayesians accept Bayes’ Theorem as correct, but Bayesians use it far more heavily. Bayesian analysis proceeds from the idea that probability is personal or subjective, reflecting the views of a particular person at a particular point in time. These views are summarized in the prior distribution over the parameter space. Together the prior distribution and the likelihood function define the joint distribution of the parameters and the data. This joint distribution can alternatively be factored as the product of the posterior distribution of the parameter given the data times the predictive distribution of the data.
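(As an illustrative aside, not part of the original entry: the prior-to-posterior update described above can be sketched with a conjugate Beta-Binomial model in Python; the prior parameters and data below are invented, and SciPy is assumed to be available.)

```python
# Illustrative sketch: Beta prior + binomial likelihood -> Beta posterior.
from scipy import stats

prior_a, prior_b = 2, 2        # Beta(2, 2) prior: mild belief that p is near 0.5
successes, trials = 14, 20     # observed data (assumed for illustration)

post_a = prior_a + successes
post_b = prior_b + (trials - successes)
posterior = stats.beta(post_a, post_b)

print(f"posterior mean = {posterior.mean():.2f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```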

In the past, Bayesian methods were deemed to be controversial because of the avowedly subjective nature of the prior distribution. But the controversy surrounding their use has lessened as recognition of the subjective nature of the likelihood has spread. Unlike frequentist methods, Bayesian methods are, in principle, free of the paradoxes and counterexamples that make classical statistics so perplexing. The development of hierarchical modeling and Markov Chain Monte Carlo (MCMC) methods have further added to the current popularity of the Bayesian approach, as they allow analyses of models that would otherwise be intractable.

Bayesian decision theory, which interacts closely with Bayesian statistical methods, is a useful way of modeling and addressing decision problems of experimental designs and data analysis and inference. It introduces the notion of utilities; the optimal decision combines probabilities of events with utilities by calculating expected utility and maximizing the latter (e.g., see the discussion in Lindley 2000).

Current research is attempting to use the Bayesian approach to hypothesis testing to provide tests and p-values with good frequentist properties (see Bayarri and Berger 2000).

4.5 Broad Models: Nonparametrics And Semiparametrics

These models include parameter spaces of infinite dimensions, whether addressed in a frequentist or Bayesian manner. In a sense, these models put more inferential weight on the assumption of conditional independence than does an ordinary parametric model.

4.6 Some Cross-Cutting Themes

Often different fields of application of statistics need to address similar issues. For example, dimensionality of the parameter space is often a problem. As more parameters are added, the model will in general fit better (at least no worse). Is the apparent gain in accuracy worth the reduction in parsimony? There are many different ways to address this question in the various applied areas of statistics.

Another common theme, in some sense the obverse of the previous one, is the question of model selection and goodness of fit. In what sense can one say that a set of observations is well-approximated by a particular distribution? (cf. Goodness of Fit: Overview). All statistical theory relies at some level on the use of formal models, and the appropriateness of those models and their detailed specification are of concern to users of statistical methods, no matter which school of statistical inference they choose to work within.

5. Statistics In The Twenty-first Century

5.1 Adapting And Generalizing Methodology

Statistics as a field provides scientists with the basis for dealing with uncertainty, and, among other things, for generalizing from a sample to a population. There is a parallel sense in which statistics provides a basis for generalization: when similar tools are developed within specific substantive fields, such as experimental design methodology in agriculture and medicine, and sample surveys in economics and sociology. Statisticians have long recognized the common elements of such methodologies and have sought to develop generalized tools and theories to deal with these separate approaches (see e.g., Fienberg and Tanur 1989).

One hallmark of modern statistical science is the development of general frameworks that unify methodology. Thus the tools of Generalized Linear Models draw together methods for linear regression and analysis of variance models with normal errors and those log-linear and logistic models for categorical data, in a broader and richer framework. Similarly, graphical models developed in the 1970s and 1980s use concepts of independence to integrate work in covariance selection, decomposable log-linear models, and Markov random field models, and produce new methodology as a consequence. And the latent variable approaches from psychometrics and sociology have been tied with simultaneous equation and measurement error models from econometrics into a broader theory of covariance analysis and structural equations models.

Another hallmark of modern statistical science is the borrowing of methods in one field for application in another. One example is provided by Markov Chain Monte Carlo methods, now used widely in Bayesian statistics, which were first used in physics. Survival analysis, used in biostatistics to model the disease-free time or time-to-mortality of medical patients, and analyzed as reliability in quality control studies, is now used in econometrics to measure the time until an unemployed person gets a job. We anticipate that this trend of methodological borrowing will continue across fields of application.

5.2 Where Will New Statistical Developments Be Focused?

In the issues of its year 2000 volume, the Journal of the American Statistical Association explored both the state of the art of statistics in diverse areas of application, and that of theory and methods, through a series of vignettes or short articles. These essays provide an excellent supplement to the entries of this encyclopedia on a wide range of topics, not only presenting a snapshot of the current state of play in selected areas of the field but also offering some speculation on the next generation of developments. In an afterword to the last set of these vignettes, Casella (2000) summarizes five overarching themes that he observed in reading through the entire collection:

(a) Large datasets.

(b) High-dimensional/nonparametric models.

(c) Accessible computing.

(d) Bayes/frequentist/who cares?

(e) Theory/applied/why differentiate?

Not surprisingly, these themes fit well those that one can read into the statistical entries in this encyclopedia. The coming together of Bayesian and frequentist methods, for example, is illustrated by the movement of frequentists towards the use of hierarchical models and the regular consideration of frequentist properties of Bayesian procedures (e.g., Bayarri and Berger 2000). Similarly, MCMC methods are being widely used in non-Bayesian settings and, because they focus on long-run sequences of dependent draws from multivariate probability distributions, there are frequentist elements that are brought to bear in the study of the convergence of MCMC procedures. Thus the oft-made distinction between the different schools of statistical inference (suggested in the preceding section) is not always clear in the context of real applications.

5.3 The Growing Importance Of Statistics Across The Social And Behavioral Sciences

Statistics touches on an increasing number of fields of application, in the social sciences as in other areas of scholarship. Historically, the closest links have been with economics; together these fields share parentage of econometrics. There are now vigorous interactions with political science, law, sociology, psychology, anthropology, archeology, history, and many others.

In some fields, the development of statistical methods has not been universally welcomed. Using these methods well and knowledgeably requires an understanding both of the substantive field and of statistical methods. Sometimes this combination of skills has been difficult to develop.

Statistical methods are having increasing success in addressing questions throughout the social and behavioral sciences. Data are being collected and analyzed on an increasing variety of subjects, and the analyses are becoming increasingly sharply focused on the issues of interest.

We do not anticipate, nor would we find desirable, a future in which only statistical evidence was accepted in the social and behavioral sciences. There is room for, and need for, many different approaches. Nonetheless, we expect the excellent progress made in statistical methods in the social and behavioral sciences in recent decades to continue and intensify.

Bibliography:

  • Barnard G A, Plackett R L 1985 Statistics in the United Kingdom, 1939–1945. In: Atkinson A C, Fienberg S E (eds.) A Celebration of Statistics: The ISI Centennial Volume. Springer-Verlag, New York, pp. 31–55
  • Bayarri M J, Berger J O 2000 P values for composite null models (with discussion). Journal of the American Statistical Association 95: 1127–72
  • Box J 1978 R. A. Fisher, The Life of a Scientist. Wiley, New York
  • Casella G 2000 Afterword. Journal of the American Statistical Association 95: 1388
  • Fienberg S E 1985 Statistical developments in World War II: An international perspective. In: Atkinson A C, Fienberg S E (eds.) A Celebration of Statistics: The ISI Centennial Volume. Springer-Verlag, New York, pp. 25–30
  • Fienberg S E, Tanur J M 1989 Combining cognitive and statistical approaches to survey design. Science 243: 1017–22
  • Fisher R A 1925 Statistical Methods for Research Workers. Oliver and Boyd, London
  • Gigerenzer G, Swijtink Z, Porter T, Daston L, Beatty J, Kruger L 1989 The Empire of Chance. Cambridge University Press, Cambridge, UK
  • Hacking I 1990 The Taming of Chance. Cambridge University Press, Cambridge, UK
  • Jeffreys H 1938 Theory of Probability, 2nd edn. Clarendon Press, Oxford, UK
  • Keynes J 1921 A Treatise on Probability. Macmillan, London
  • Lindley D V 2000 The philosophy of statistics (with discussion). The Statistician 49: 293–337
  • Neyman J 1934 On the two different aspects of the representative method: the method of stratified sampling and the method of purposive selection (with discussion). Journal of the Royal Statistical Society 97: 558–625
  • Neyman J, Pearson E S 1928 On the use and interpretation of certain test criteria for purposes of statistical inference. Part I. Biometrika 20A: 175–240
  • Neyman J, Pearson E S 1932 On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society, Series A 231: 289–337
  • Raiffa H, Schlaifer R 1961 Applied Statistical Decision Theory. Harvard Business School, Boston
  • Ramsey F P 1926 Truth and probability. In: The Foundations of Mathematics and Other Logical Essays. Kegan Paul, London
  • Savage L J 1954 The Foundations of Statistics. Wiley, New York
  • Stigler S M 1986 The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press, Cambridge, MA
  • Stigler S M 1999 Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press, Cambridge, MA
  • Tukey J W 1962 The future of data analysis. Annals of Mathematical Statistics 33: 1–67
  • Wald A 1950 Statistical Decision Functions. Wiley, New York
  • Wallis W 1980 The Statistical Research Group, 1942–1945 (with discussion). Journal of the American Statistical Association 75: 320–35


Research Papers on Statistical Analysis

Statistical analysis comprises the mathematical processes used to summarize data. Many people prepare research papers on this subject, which is no doubt a challenging task for them. Researchomatic therefore makes it easy to draw on hundreds of statistical analysis research papers. These papers will help individuals prepare their own statistical analysis research papers.

