Last updated: 2022-08-14

Checks: 7 passed, 0 failed

Knit directory: workflowr/

This reproducible R Markdown analysis was created with workflowr (version 1.7.0). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20190717) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 061748a. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    .Rproj.user/
    Ignored:    analysis/DNase_example_cache/
    Ignored:    analysis/running_mcmc_cache/
    Ignored:    data/DHS_Index_and_Vocabulary_hg38_WM20190703.txt
    Ignored:    data/DNase_chr21/
    Ignored:    data/DNase_chr22/
    Ignored:    data/dat_FDR01_hg38.RData
    Ignored:    output/DNase/

Untracked files:
    Untracked:  data/DHS_Index_and_Vocabulary_hg38_WM20190703.txt.gz
    Untracked:  data/DHS_Index_and_Vocabulary_metadata.xlsx

Unstaged changes:
    Modified:   analysis/.DS_Store
    Modified:   analysis/DNase_example.Rmd
    Modified:   analysis/pairwise_fitting_cache/html/__packages
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-3_49e4d860f91e483a671b4b64e8c81934.RData
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-3_49e4d860f91e483a671b4b64e8c81934.rdb
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-3_49e4d860f91e483a671b4b64e8c81934.rdx
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-6_4e13b65e2f248675b580ad2af3613b06.RData
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-6_4e13b65e2f248675b580ad2af3613b06.rdb
    Deleted:    analysis/pairwise_fitting_cache/html/unnamed-chunk-6_4e13b65e2f248675b580ad2af3613b06.rdx
    Modified:   analysis/preprocessing_cache/html/__packages
    Deleted:    analysis/preprocessing_cache/html/unnamed-chunk-11_d0dcbf60389f2e00d36edbf7c0da270d.RData
    Deleted:    analysis/preprocessing_cache/html/unnamed-chunk-11_d0dcbf60389f2e00d36edbf7c0da270d.rdb
    Deleted:    analysis/preprocessing_cache/html/unnamed-chunk-11_d0dcbf60389f2e00d36edbf7c0da270d.rdx
    Modified:   data/.DS_Store
    Modified:   data/tpm_zebrafish.tsv.gz
    Modified:   output/.DS_Store
    Deleted:    output/chain.rds
    Deleted:    output/hyperparameters.Rdata
    Modified:   output/red_class.txt
    Deleted:    output/retained_classes.txt

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/priors.Rmd) and HTML (docs/priors.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd f1a7b55 Hillary Koch 2022-08-13 add DNase analysis
html f1a7b55 Hillary Koch 2022-08-13 add DNase analysis
html fc7424f Hillary Koch 2022-07-30 add gelman rubin example
Rmd f1bf19d Hillary Koch 2022-07-30 minor text chanegs
Rmd c1e13d0 Hillary Koch 2022-07-30 working with new computer
html c1e13d0 Hillary Koch 2022-07-30 working with new computer

Special considerations: this portion is highly parallelizable.

We are now just about ready to set up our MCMC. First, we need to determine the hyperparameters in the priors of our Gaussian mixture. These are all calculated in an empirical Bayesian manner; that is, we recycle information from the pairwise fits to inform the priors in the full-information mixture. This task can be split into two sub-tasks:

  1. computing the prior hyperparameters for the cluster mixing weights

  2. computing every other hyperparameter

The former is the most essential, as it helps us prune more candidate latent classes, ensuring that the number of clusters is fewer than the number of observations. An important note: this is the only step of CLIMB that requires some sort of human intervention, but that intervention is necessary. A threshold, called \(\delta\) in the manuscript, determines how strict one is about including classes in the final model, with \(\delta\in\{0,1,\ldots,\binom{D}{2}\}\). We will get into selecting \(\delta\) shortly.
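For concreteness, here is a quick sketch of the admissible range of \(\delta\) (the dimension D = 3 here is hypothetical, chosen only for illustration):

```r
# For a hypothetical D = 3 analysis, delta can take any value
# in 0, 1, ..., choose(D, 2).
D <- 3
delta_range <- 0:choose(D, 2)
delta_range
# 0 1 2 3
```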

To get the prior weights on each candidate latent class, use the function get_prior_weights(). This function defaults to the settings used in the CLIMB manuscript. The user can specify:

  1. reduced_classes: the matrix of candidate latent classes generated by get_reduced_classes()

  2. fits: the list of pairwise fits generated by get_pairwise_fits()

  3. parallel: logical specifying if the analysis should be run in parallel (defaults to FALSE)

  4. ncores: if running in parallel, how many cores to use (defaults to 20)

  5. delta: the range of thresholds to try; defaults to a sequence of all possible thresholds

NB: while parallelization is always available here, it is not always necessary. The speed of this portion depends on the sample size, the dimension, and the number of candidate latent classes (in reduced_classes).

Now, we are ready to compute the prior weights.

# Read in the candidate latent classes produced in the last step
reduced_classes <- read.table("output/red_class.txt", sep = "\t")

# load in the pairwise fits from the first step
# (in this example case, I am simply loading the data from the package)
data("fits")

# Compute the prior weights
prior_weights <- get_prior_weights(reduced_classes, fits, parallel = FALSE)

prior_weights is a list of vectors. Each vector corresponds to the computed prior weights for a given value of \(\delta\). Here, prior_weights[[j]] corresponds to the prior weights when \(\delta = j-1\).
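As a brief illustration of this indexing (assuming prior_weights has been computed as above):

```r
# prior_weights[[1]] holds the weights for delta = 0,
# prior_weights[[2]] the weights for delta = 1, and so on.
# In general, the weights for a given delta sit at position delta + 1:
delta <- 1
w <- prior_weights[[delta + 1]]
```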

This is where human intervention is required

We can plot how the number of latent classes included in the final model changes as we relax \(\delta\).

# this is just grabbing the sample size and dimension
n <- length(fits[[1]]$cluster)
D <- as.numeric(strsplit(tail(names(fits),1), "_")[[1]][2])

# to avoid degenerate distributions, we will only keep clusters such that the prior
# weight times the sample size is greater than the dimension.
plot(
    0:choose(D,2),
    sapply(prior_weights, function(X)
    sum(X * n > D)),
    ylab = "number of retained classes",
    xlab =  expression(delta))

Figure past versions: fc7424f (Hillary Koch, 2022-07-30)

This toy example is much cleaner than a real data set, but typically we expect to see that, as we relax \(\delta\) away from 0, more classes are included in the final model. We have not identified a uniformly best way to select \(\delta\); a decent rule of thumb is simply to include as many classes as one can while retaining computational feasibility, selecting the smallest value of \(\delta\) that gives this result. In this toy example, we might as well retain all classes, and thus select the prior weights corresponding to \(\delta = 1\). Let’s store that in the variable p.

# Select out prior weights for delta = 1
p <- prior_weights[[2]]

# Filter out classes which have too small of a prior weight
# (In this toy example, we actually retain everything,
#   but this is not typical for higher-dimensional/empirical analyses)
retained_classes <- reduced_classes[p * n > D, ]
p <- p[p * n > D]

# save the retained classes for downstream analysis
readr::write_tsv(retained_classes, file = "output/retained_classes.txt", col_names = FALSE)
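The rule of thumb described above could be automated along these lines (a sketch reusing n and D from the plotting code; the choice still deserves a visual check against the plot):

```r
# Count the retained classes at each threshold, then take the smallest delta
# achieving the largest count (which.max returns the first maximum).
n_retained <- sapply(prior_weights, function(X) sum(X * n > D))
best_delta <- which.max(n_retained) - 1  # shift because delta starts at 0
```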

Obtaining the remaining hyperparameters

Now that the human intervention is over, the rest is simple. Just use the function get_hyperparameters() to compute empirical estimates of the remaining prior hyperparameters.

# load the data back in
data("sim")

# obtain the hyperparameters
hyp <- get_hyperparameters(sim$data, fits, retained_classes, p)

# view the output
str(hyp)
List of 4
 $ Psi0  : num [1:3, 1:3, 1:13] 0.816 0 0.488 0 1 ...
 $ mu0   : num [1:13, 1:3] 2.68 2.68 2.68 0 0 ...
 $ alpha : num [1:13] 0.101 0.1403 0.0207 0.0988 0.0502 ...
 $ kappa0: num [1:13] 151 210 31 148 75 178 166 110 46 136 ...

hyp$kappa0 controls how informative the priors are. To make them less informative, one can make the elements of kappa0 smaller (but still larger than \(D\)!). For example, you could use something like hyp$kappa0 <- rep(10, D) instead of the automatic choice (proportional to hyp$alpha) returned by the get_hyperparameters function.
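A small sketch of that override. Note that kappa0 has one element per retained latent class (13 in this example, per the str() output above), so using length(hyp$kappa0) here is an assumption that keeps the vector's length intact:

```r
# Replace the automatic kappa0 with a flat, weakly informative choice.
# Each element must remain above the dimension D to avoid degenerate priors;
# the length matches the number of retained classes.
hyp$kappa0 <- rep(10, length(hyp$kappa0))
stopifnot(all(hyp$kappa0 > D))  # sanity check before running the MCMC
```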

We can save these hyperparameters for the next step in the analysis:

save(hyp, file = "output/hyperparameters.Rdata")

After these analyses, we have a model to describe our data, and are ready to run the MCMC.

Session Information

print(sessionInfo())
R version 4.2.1 (2022-06-23)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Monterey 12.5

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] CLIMB_1.0.0

loaded via a namespace (and not attached):
 [1] tidyselect_1.1.2     xfun_0.31            bslib_0.4.0         
 [4] purrr_0.3.4          testthat_3.1.4       vctrs_0.4.1         
 [7] generics_0.1.3       htmltools_0.5.3      yaml_2.3.5          
[10] utf8_1.2.2           rlang_1.0.4          jquerylib_0.1.4     
[13] later_1.3.0          pillar_1.8.0         glue_1.6.2          
[16] DBI_1.1.3            bit64_4.0.5          foreach_1.5.2       
[19] lifecycle_1.0.1      plyr_1.8.7           stringr_1.4.0       
[22] workflowr_1.7.0      mvtnorm_1.1-3        LaplacesDemon_16.1.6
[25] codetools_0.2-18     evaluate_0.15        knitr_1.39          
[28] tzdb_0.3.0           fastmap_1.1.0        doParallel_1.0.17   
[31] httpuv_1.6.5         parallel_4.2.1       fansi_1.0.3         
[34] highr_0.9            Rcpp_1.0.9           readr_2.1.2         
[37] promises_1.2.0.1     cachem_1.0.6         vroom_1.5.7         
[40] jsonlite_1.8.0       abind_1.4-5          bit_4.0.4           
[43] fs_1.5.2             brio_1.1.3           hms_1.1.1           
[46] digest_0.6.29        stringi_1.7.8        dplyr_1.0.9         
[49] rprojroot_2.0.3      cli_3.3.0            tools_4.2.1         
[52] magrittr_2.0.3       sass_0.4.2           tibble_3.1.8        
[55] crayon_1.5.1         whisker_0.4          tidyr_1.2.0         
[58] pkgconfig_2.0.3      ellipsis_0.3.2       assertthat_0.2.1    
[61] rmarkdown_2.14       rstudioapi_0.13      iterators_1.0.14    
[64] JuliaCall_0.17.4     R6_2.5.1             git2r_0.30.1        
[67] compiler_4.2.1