The Psychological Science Accelerator (PSA), supported by the John Templeton Foundation (JTF), welcomes study proposals to test the generalizability of phenomena related to JTF strategic priorities with large, global samples.
The PSA is a distributed network of researchers from the behavioral sciences across the globe. Our more than 1,300 collaborators represent 84 countries from all six populated continents. The mission of the PSA is to accelerate the accumulation of reliable and generalizable evidence in psychological science. This call for studies will select projects that fit within the five topic areas, broadly construed, that are prioritized by the JTF.
JTF Priority Topics
To be eligible for this call, projects must investigate questions related to one or more of the strategic priorities of JTF. Below is a list of those priorities. Each is followed by some example research questions and topics. These are meant only as examples; they are not an exhaustive list of the research questions that could be encompassed by each priority.
- The dynamics of religious change. Why do religions flourish or lose adherents? What kinds of features of religious organizations drive or repel members? Why and how do people switch religious identities?
- Intellectual humility. Which factors enhance or inhibit intellectual humility? How can we better study the concept of intellectual humility?
- Religious cognition. What is the nature of religious belief and how can it be better measured? What causes religious or spiritual experiences and what are the effects of those experiences? How do individuals develop and revise their religious beliefs?
- The science of character virtue. In particular, what helps individuals to develop curiosity and love? What are the consequences of those virtues? How do religions and perceptions of supernatural agents affect virtue and morality development?
- Health, religion, and spirituality. Are there associations between religious/spiritual beliefs, experiences, practices, and identities, and physical and mental health? If so, are there underlying causal mechanisms?
PSA projects are selected using a rigorous review process. Researchers submit detailed proposals to the PSA for consideration. These proposals resemble 5,000-word Stage 1 Registered Reports (e.g., Chambers, 2013), and contain a theoretical introduction, a description of the planned sample and methods, as well as hypotheses and an analysis plan to test those hypotheses.
The proposals then undergo several rounds of review, overseen by the PSA’s Study Selection Committee (SSC). First, each proposal is screened for feasibility given the PSA’s current capacity and resources. For instance, a proposal that requires dozens of collection sites with fMRI machines is likely to be rejected as infeasible. Simultaneously, the SSC screens out submissions deemed low quality. Proposals that pass this screening are then sent out for peer review. Each proposal is reviewed by 5-10 reviewers, consisting of both PSA members and external experts (see Supporting Materials for peer review rating criteria). These reviewers are selected based on their methodological and/or theoretical expertise related to a given proposal. All submissions are also rated quantitatively by the entire PSA network. The SSC then aggregates these two sources of information, the reviewers’ comments and the network’s ratings, before selecting projects for the PSA to pursue.
Funding and Personnel Support for Studies Resulting from this Call
Projects selected through this special call will be advertised to members of the PSA network in collaboration with the proposing authors. Each study will be supported by $40,000 of direct funding for data collection. The proposing author teams will also be collectively supported by 3 full-time scientific staff members (a Research Coordinator, a Postdoctoral Researcher, and a Research Scientist), an additional ⅓-time senior staff member (Dr. Christopher R. Chartier), a team of undergraduate research assistants equivalent to a ½-time staff member, several members of PSA committees (at their discretion), and members of the network at large (at their discretion).
The following components are required for all submissions:
- Cover Page, including the title of the study, date of the latest draft, and keywords
- Abstract of up to 150 words
- Main body submission text of up to 5,000 words
- A version of the submission with a cover page included
- A masked version of the submission without the cover page
- Supplementary materials
The following guidelines are intended to assist you in the preparation of your study submission to the PSA. Submissions normally include a description of the key background literature and motivation for the study, hypotheses, study procedures, proposed statistical analysis plan, a statistical power analysis, and pilot data (wherever applicable).
A review of the relevant literature that motivates the research question and a full description of the study aims and hypotheses.
A full description of proposed sample characteristics, including criteria for data inclusion and exclusion (e.g., outlier extraction). Procedures for objectively defining exclusion criteria caused by technical errors or for any other reasons must be specified, including details of how and under what conditions data would be replaced.
A description of study procedures in sufficient detail to allow another researcher to repeat the methodology exactly, without requiring further information.
Proposed analysis pipeline, including all preprocessing steps, and a precise description of all planned analyses, including appropriate correction for multiple comparisons. Specify all covariates or regressors. Specify analysis decisions that are contingent on the outcome of prior analyses.
Studies involving Neyman-Pearson inference must include a statistical power analysis. Estimated effect sizes should be justified with reference to the existing literature or theory. Because publication bias inflates published estimates of effect size, power analysis should be based on the lowest available or meaningful estimate of the effect size.
In the case of highly uncertain effect sizes, variable sample sizes and interim data analyses are permissible, but with inspection points stated in advance, appropriate Type I error correction for ‘peeking’ employed, and a final stopping rule for data collection outlined.
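For illustration, a conservative a priori power calculation of the kind described above can be sketched with the Python standard library. This is only a sketch under simplifying assumptions (a two-sided, two-group comparison of means via the normal approximation); the effect size d = 0.2 is a hypothetical placeholder chosen per the guideline to use the smallest meaningful estimate.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Approximate per-group sample size for a two-sided, two-group
    comparison of means (normal approximation), given a standardized
    effect size d."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

# Basing the calculation on a conservatively small effect (d = 0.2,
# a hypothetical placeholder):
print(n_per_group(0.2))
```

Submissions would typically run and report such a calculation (or its equivalent in G*Power, statsmodels, etc.) to justify the requested per-site and total sample sizes.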
For studies involving analyses with Bayes factors, the predictions of the theory must be specified so that a Bayes factor can be calculated. Authors should indicate what distribution will be used to represent the predictions of the theory and how its parameters will be specified.
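As a toy illustration of specifying a theory's predictions as a distribution (not a required or endorsed method), suppose H1 represents its prediction about a binomial success rate as a uniform prior; the Bayes factor against a point null of p = 0.5 then has a closed form. All numbers below are hypothetical.

```python
from math import comb

def bf01_binomial(k, n):
    """Bayes factor for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    given k successes in n Bernoulli trials.
    Under H1 the marginal likelihood integrates to 1 / (n + 1);
    under H0 it is the binomial pmf evaluated at p = 0.5."""
    m0 = comb(n, k) * 0.5 ** n
    m1 = 1 / (n + 1)
    return m0 / m1

# Hypothetical data: 60 successes in 100 trials.
print(bf01_binomial(60, 100))
```

A real submission would state the prior distribution (and its parameters) that encodes the theory's predictions, as the guideline above requires, rather than defaulting to a uniform prior.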
Full descriptions must be provided of any outcome-neutral criteria that must be met for successful testing of the stated hypotheses. Such quality checks might include the absence of floor or ceiling effects in data distributions, positive controls, or other quality checks that are orthogonal to the experimental hypotheses.
Include full questionnaires, stimuli, and materials needed to conduct the study. Pilot data can be included to establish proof of concept, effect size estimations, or feasibility of proposed methods. Simulated data and analysis scripts are recommended to provide clarity about the exclusion criteria and analysis plan.
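As a sketch of what such a simulated-data script might look like, the snippet below generates placeholder data and applies a pre-specified exclusion rule. The measure (response times), sample size, and 3-SD cutoff are all hypothetical; a real script would mirror the submission's actual exclusion criteria and analysis plan.

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulated pilot is reproducible

# Simulate 200 hypothetical response times in ms (placeholder values).
rts = [random.gauss(500, 100) for _ in range(200)]

# Illustrative pre-registered exclusion rule: drop responses more than
# 3 SDs from the sample mean.
mean, sd = statistics.fmean(rts), statistics.stdev(rts)
kept = [x for x in rts if abs(x - mean) <= 3 * sd]

print(f"excluded {len(rts) - len(kept)} of {len(rts)} simulated trials")
```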
These guidelines were adapted from https://osf.io/pukzy.
Two studies will be selected and implemented through this year’s call. A second round of submissions will follow in 2023, from which two additional studies will be selected.
In your submission, you should clearly state how your project aligns with the goals of this special call and discuss why your research topic would specifically benefit from global data collection (e.g., theoretical reasons to predict global variation).
In evaluating submissions for this special call for studies, we will prioritize projects whose designs are best suited for promoting generalizability. In particular, we will prioritize studies that seek global samples (as opposed to studies that wish to sample from one or a few countries), studies with very high statistical power, and studies that are more strongly grounded in previous research (such as replication and generalization studies). All of these criteria are intended to maximize our chances of producing generalizable insights on phenomena of interest.
In addition, we will evaluate proposals for this special call together so that we can ensure variation across the projects. In particular, we will strive to select a package of studies that, collectively, represent a range of JTF priorities, study designs (e.g., experimental vs. correlational), and researcher backgrounds (e.g., research specialty, researcher location, etc.). This way, we will avoid overrepresenting a given topic, type of research, or type of researcher in this initiative.
Selection Process and Timeline
Submission deadline: July 20, 2022.
Initial feasibility and quality review: July 20-July 27, 2022.
Reviewer selection: July 27-August 3, 2022.
Network rating solicitation: August 3, 2022.
Review submission deadline: August 17, 2022.
Network rating deadline: August 17, 2022.
Study selection decision deadline: August 31, 2022.
Possible submission outcomes include “desk rejection” upon initial review, rejection upon full review, provisional acceptance, or an invitation to revise and resubmit for next year’s call.
Following a successful period of needs assessment, preparation, personnel identification, lab recruitment, and pre-registration, provisionally selected studies will commence data collection in early 2023 and end data collection in late 2023.
For pre-submission inquiries, please email Chris Chartier at firstname.lastname@example.org. He is happy to meet with proposing authors and/or answer questions about what is likely to make for a strong submission.