Introduction

Before going to the lab to carry out any full-scale experiment, it is important to determine how many samples and replicates you will need to include in order to best answer your question. Power analyses allow researchers to determine the smallest sample size required to detect a given effect size at a given significance level with a desired power.
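As a minimal sketch (the effect size, significance level, and power below are illustrative assumptions, not values from this course), base R's power.t.test() can solve for the smallest per-group sample size for a two-sample t-test:

```r
# Find the per-group sample size needed to detect a difference of 0.5
# standard deviations at alpha = 0.05 with 80% power.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# Reports n ~ 64 per group; always round up to the next whole sample.
```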

Performing a power analysis before carrying out an experiment has many benefits, including ensuring that the study is not too underpowered to detect the effect of interest and that no more samples are collected than are needed.

Performing a power analysis after running an experiment is also useful, particularly in the case of a negative result. To motivate why power analyses are useful even after a study is complete, ask yourself: “If I performed an experiment and did not detect a statistically significant result, does that necessarily mean that the null hypothesis I was testing is true?”
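As a hypothetical illustration (the sample size and effect size here are made up), computing how much power a small design had shows why a negative result does not confirm the null:

```r
# Suppose a study with 10 samples per group failed to reach significance.
# How much power did it have to detect a true difference of 0.5 standard
# deviations in the first place?
power.t.test(n = 10, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "two.sample")$power
# ~0.18: even when H0 is false, this design usually fails to reject it.
```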

Our objectives today are to review the concept of power, discuss what a power analysis is, and explore different ways to carry out a power analysis.

Reviewing the concept of Power

Recall that there are four possible scenarios when performing a hypothesis test on a null hypothesis. We have previously discussed in some detail the concept of Type 1 and Type 2 errors, each of which occurs with some probability in any hypothesis test you perform.

|                          | \(H_0\) is True                  | \(H_0\) is False                         |
|--------------------------|----------------------------------|------------------------------------------|
| reject \(H_0\)           | P(Type 1 error) = \(\alpha\)     | P(True Positive) = Power = \(1 - \beta\) |
| fail to reject \(H_0\)   | P(True Negative) = \(1-\alpha\)  | P(Type 2 error) = \(\beta\)              |

Power can be thought of as the probability of rejecting the null hypothesis given that the null hypothesis is false, \(P(\text{reject } H_0 \mid H_0 \text{ is false}) = 1 - \beta\) (the probability that we are in the top right-hand cell of the table).
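A short simulation (a sketch with arbitrary group sizes and effect size) makes this definition concrete: generate many datasets under a specific alternative and see how often the test correctly rejects \(H_0\).

```r
set.seed(1)
# Simulate 10,000 two-sample experiments in which H0 is truly false:
# the group means differ by 0.8 standard deviations.
reject <- replicate(10000, {
  x <- rnorm(20, mean = 0)
  y <- rnorm(20, mean = 0.8)
  t.test(x, y)$p.value < 0.05
})
mean(reject)                                      # empirical power
power.t.test(n = 20, delta = 0.8, sd = 1)$power   # analytic power, ~0.69
```

The two numbers agree up to simulation noise, which is exactly what the definition of power as a long-run rejection probability predicts.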