The Psychological Science Accelerator: Call for Study Submissions (Deadline: June 20th)

The Psychological Science Accelerator (PSA), a network of 300 labs collaborating to collect large-scale international samples of psychological data, is currently accepting study proposals from all areas of psychological science. Anyone can submit a proposal, whether or not they are a member of the PSA. Our mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science, reducing the distance between our current understanding and the truth about human behavior and mental processes. For a full overview of the PSA, please see our preprint introducing our policies and procedures (https://psyarxiv.com/785qu/).


Proposed studies can test novel hypotheses or focus on the replication of previous findings, can be basic or applied in focus, and can be exploratory or confirmatory in nature. Because accepted studies will likely involve considerable use of resources, the study selection process begins with the preparation and evaluation of a Registered Report-style submission by authors hoping to collect data via the PSA network. Our Study Selection Committee will conduct an initial feasibility evaluation for all proposed studies (see below for more information on this feasibility check). Studies that pass this check will be evaluated by 10 peer reviewers and our Study Selection Committee for final selection. We plan to accept 2-3 studies during this round of review. Selected studies will then proceed through the PSA workflow depicted in the figure below.

[Figure: PSA study workflow]

Please email submissions to our Director, Dr. Christopher R. Chartier, cchartie@ashland.edu. Submissions will be accepted until June 20th, 2018.

Feasibility

All feasibility decisions are made with respect to our current, and ever-changing, resources. Although the PSA comprises hundreds of labs from around the world that have agreed to volunteer some of their resources to PSA projects, we are currently unable to accommodate all types of designs. Submissions are more likely to pass the initial feasibility check if they have the following characteristics:

  • Do not require specialized equipment (e.g., eye-tracking, EEG) or proprietary experimental software (e.g., E-Prime) to be used at the data collection sites, unless the proposing team can provide these resources to data collection labs
  • Experimental materials and analysis scripts can be shared easily and made publicly available
  • Do not require hard-to-reach samples (e.g., clinical populations). We hope to better accommodate such sampling in the future.
  • Target sample size per site is less than 150 participants
  • Target number of data collection sites is less than 150
  • Duration of an individual data collection session is less than 90 minutes
  • The likelihood and severity of risk to the participant is kept to a minimum, such that the risk is not greater than what participants would face normally and would not require special consideration or deliberation from an ethics board.

Characteristics of strong submissions

Beyond simply being feasible given current PSA resources, strong submissions will also:

  • Accurately and clearly describe literature relevant to the study’s goals and design, such that researchers unfamiliar with the subject can understand the basic concepts behind the theory/phenomenon and the purpose of the research.
  • Clearly articulate the purpose of the research, relevant research questions, and hypotheses.
  • Clearly articulate the research design, with a focus on sound methodology appropriate to the research questions, including adequate power analysis to justify sample size.
  • Provide examples of relevant materials, for example websites, experimental scripts (e.g., E-Prime, Inquisit, OpenSesame), a precise experimental design, and/or stimuli.
  • Accurately and clearly describe an analysis strategy appropriate to the research questions and design. Pilot or simulated data and working analysis scripts are ideal for clarity.
  • Make a compelling case for the importance of large-scale collaborative data collection.

Submission Format and Guidelines

The following components are required for all submissions:

  • Cover Page, including title of the study, date of the latest draft, and keywords
  • Abstract of up to 150 words
  • Main submission text of up to 5,000 words
  • References
  • Supplementary materials

The following guidelines are intended to assist you in the preparation of your study submission to the Psychological Science Accelerator. Submissions normally include a description of the key background literature and motivation for the study, hypotheses, study procedures, proposed statistical analysis plan, a statistical power analysis, and pilot data (wherever applicable).

Introduction

A review of the relevant literature that motivates the research question and a full description of the study aims and hypotheses.

Methods

A full description of proposed sample characteristics, including criteria for data inclusion and exclusion (e.g., outlier extraction). Procedures for objectively defining exclusion criteria due to technical errors or for any other reasons must be specified, including details of how and under what conditions data would be replaced.
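As a minimal sketch of what objectively defined criteria can look like in practice, the script below applies hypothetical exclusion rules; the column names and thresholds are assumptions for illustration, not requirements:

```python
# Minimal sketch of pre-registered, objectively defined exclusion rules.
# Column names and thresholds are hypothetical placeholders.
import pandas as pd

def apply_exclusions(df: pd.DataFrame) -> pd.DataFrame:
    """Return the analysis sample after applying the pre-registered exclusions."""
    n_before = len(df)
    df = df[~df["technical_error"]]                 # sessions with hardware/software failures
    df = df[df["attention_check_passed"]]           # failed attention checks
    df = df[df["rt_mean_ms"].between(200, 10_000)]  # implausible mean response times
    print(f"Excluded {n_before - len(df)} of {n_before} participants")
    return df
```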

A description of study procedures in sufficient detail to allow another researcher to repeat the methodology exactly, without requiring further information.

Proposed analysis pipeline, including all preprocessing steps, and a precise description of all planned analyses, including appropriate correction for multiple comparisons. Specify all covariates or regressors. Specify analysis decisions that are contingent on the outcome of prior analyses.
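For example, a planned correction for multiple comparisons can be scripted ahead of time. The sketch below uses the Holm method from statsmodels with placeholder p-values; your own pipeline may of course use a different correction:

```python
# Hypothetical illustration of a pre-specified multiple-comparison correction.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.021, 0.047, 0.310]  # placeholder p-values, one per planned test
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(p_adjusted, reject)
```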

Results

Studies involving Neyman-Pearson inference must include a statistical power analysis. Estimated effect sizes should be justified with reference to the existing literature or theory. Because publication bias inflates published estimates of effect size, the power analysis should be based on the lowest available or meaningful estimate of the effect size.
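A minimal sketch of such an a priori power analysis, assuming a simple two-group comparison and a deliberately conservative placeholder effect size, could look like this:

```python
# Sketch of an a priori power analysis for an independent-samples comparison.
# The effect size (d = 0.20) is a conservative placeholder; justify your own
# estimate from the literature or theory.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.20, alpha=0.05,
                                          power=0.95, alternative="two-sided")
print(round(n_per_group))  # required participants per group
```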

In the case of highly uncertain effect sizes, a variable sample size with interim data analysis is permissible, provided that inspection points are stated in advance, appropriate Type I error correction for ‘peeking’ is employed, and a final stopping rule for data collection is outlined.
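As a toy illustration of that logic only, the sketch below splits alpha across pre-registered looks with a simple Bonferroni correction. The inspection points are made up, and group-sequential boundaries (e.g., Pocock or O'Brien-Fleming) would typically be less conservative:

```python
# Toy illustration of pre-registered interim analyses with a Bonferroni split
# of alpha across looks. Inspection points below are placeholders.
LOOKS = [100, 200, 300]              # planned sample sizes at each inspection point
ALPHA_PER_LOOK = 0.05 / len(LOOKS)   # corrected threshold to control for 'peeking'

def stop_at_interim(p_value: float) -> bool:
    """Stop data collection early only if an interim test crosses the corrected threshold."""
    return p_value < ALPHA_PER_LOOK
```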

For studies involving analyses with Bayes factors, the predictions of the theory must be specified so that a Bayes factor can be calculated. Authors should indicate what distribution will be used to represent the predictions of the theory and how its parameters will be specified.  
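One way to make such a specification concrete (a rough sketch only, with a half-normal prior standing in for the theory's predictions and placeholder values throughout) is:

```python
# Rough sketch of a Bayes factor in which the theory's predictions are
# represented by a half-normal prior on the true effect. All numbers are
# placeholders, not recommendations.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bayes_factor_10(obs_effect, se, prior_scale):
    """BF10 comparing a half-normal prior on the effect (H1) against a point null (H0)."""
    def integrand(delta):
        likelihood = stats.norm.pdf(obs_effect, loc=delta, scale=se)
        prior = 2 * stats.norm.pdf(delta, loc=0.0, scale=prior_scale)  # half-normal density
        return likelihood * prior
    marginal_h1, _ = quad(integrand, 0.0, np.inf)
    marginal_h0 = stats.norm.pdf(obs_effect, loc=0.0, scale=se)
    return marginal_h1 / marginal_h0

print(bayes_factor_10(obs_effect=0.3, se=0.1, prior_scale=0.5))
```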

Full descriptions must be provided of any outcome-neutral criteria that must be met for successful testing of the stated hypotheses. Such quality checks might include the absence of floor or ceiling effects in data distributions, positive controls, or other quality checks that are orthogonal to the experimental hypotheses.

Supplemental Materials

Include full questionnaires, stimuli, and materials needed to conduct the study. Pilot data can be included to establish proof of concept, effect size estimations, or feasibility of proposed methods. Simulated data and analysis scripts are ideal for clarity of the exclusion criteria and analysis plan.

These guidelines were adapted from https://osf.io/pukzy.

 

Please email submissions to our Director, Dr. Christopher R. Chartier, cchartie@ashland.edu. Submissions will be accepted until June 20th, 2018.

The Accelerated CREP

The Collaborative Replication and Education Project (CREP) and the Psychological Science Accelerator are partnering on a project for the 2018-2019 replication season. The mission of the Accelerator is to accelerate the accumulation of reliable and generalizable evidence in psychological science. The mission of the CREP is to improve undergraduate training through crowdsourced replication. We think these two missions can be pursued in tandem.

The CREP (http://osf.io/wfc6u) is a crowdsourced replication project designed for undergraduate researchers. We invite students to replicate a study from a pre-selected list of studies chosen because they are both highly cited and feasible for undergraduates to complete. Contributors receive CREP approval by demonstrating research transparency at both the pre- and post-data-collection stages, in order to maximize the value of their replication data for future meta-analyses. Once there are enough samples to draw meaningful conclusions from the data, all contributors are encouraged to collaborate on a research paper. Since launching in 2013, over 350 students have started over 100 replication projects. We have one manuscript in press (Leighton et al., 2018), two more nearing completion (Wagge et al.; Ghelfi et al.), and several more that still need more data, listed here.

The Psychological Science Accelerator is a more recent crowdsourced project, and though it is similar to the CREP in many ways, it is also more advanced in development and scope (Moshontz et al., 2018). It is a network of over 230 laboratories that democratically select studies and then conduct them on a global scale. The major difference is that the CREP is specifically focused on involving undergraduates in the educational component of replication science, while the Accelerator is primarily focused on accelerating psychological science more generally, though educators can certainly work with undergraduates on all Accelerator projects.

The CREP and Accelerator have decided to coordinate a pilot test of an “Accelerated CREP” study. This pilot will evaluate the feasibility of the Accelerator formally adding an undergraduate education component, via the CREP, on a more regular basis. It is also an opportunity for the CREP to extend their contributor pool beyond their historical audience, and complete data collection for one CREP study much more quickly than normal. Among the Accelerator’s over 230 active laboratories, we imagine that a subset of PIs would like to either implement the CREP as part of their research methods courses or work with undergraduate researchers on the Accelerated CREP who would benefit from taking “ownership” of a project and contributing to a large-scale collaboration outside of the typical Accelerator workflow.

For this partnership, Accelerator members, along with one or more undergraduate researchers they supervise, are invited and encouraged to work through the CREP process for a single study between August 1, 2018 and May 30, 2019. Where possible, we recommend that contributing teams assign a student as the contact person for interacting with the CREP Review team, so that students experience all aspects of the research process.

Here we give an overview of the full process for the first “Accelerated CREP” study, which differs in a few notable ways from the standard operating procedures of the Accelerator and CREP.

Phase 1 (submission and evaluation, complete). The CREP team conducted their normal study selection process for the 2018-2019 academic year. Each year, the CREP team selects one to three new studies to add to the list of available studies. They identify the three or four most highly cited articles in the top journal in each of nine sub-disciplines, then code those studies for feasibility and potential student interest. This year they selected one new study, Turri, Buckwalter, & Blouw (2015), “Knowledge and Luck”, Study 1 (https://osf.io/n5b3w/), with a target N = 100 for each local site.

Phase 2 (Preparation). The CREP invites Direct (or close) replications from contributors. As such, the protocol should match that of the published article. For advanced students, the CREP invites Direct+Plus replications, which add one or more independent or dependent variables that are collected after (or independently of) the original study.

The Turri et al. Study 1 is exciting to replicate because the methods are fully available in the publication and the study can be administered via computer. Further, there may be interesting moderator variables that could be tested across multiple labs.

The CREP asks contributors to create and edit an OSF page (https://osf.io/srh4k/) and provide their materials, ethics approval, and a video of their procedure. For the Accelerated CREP, we hope to recruit undergraduates from 30 or more locations to join the project.

Phase 3 (Implementation). CREP contributors submit their project page for review by the CREP team twice, once before and once after data collection. The pre-data review verifies that the contributor is meeting CREP standards for ethics, materials, and procedure. In the post-data review, the CREP team checks the presentation of the data and results to verify that the data are usable in the meta-analysis.

Review teams, each consisting of two faculty members and a student administrative advisor, David Redman, will be led by one of the CREP’s experienced Executive Reviewers, Dr. Jordan Wagge. Faculty on contributing teams will be invited to serve as reviewers on other CREP contributors’ projects in order to ensure high-quality replications.

Phase 4 (Dissemination). Because the CREP is decentralized, the local data sets are posted publicly in order to go through the post-data review. Contributors are encouraged to present their findings at conferences, but because no single replication provides definitive results, the data are collated for drafting and submitting a manuscript reporting the combined findings. In contrast to normal CREP procedure, we invite potential authors to indicate their interest in authorship at the beginning rather than the end of the project. Braedon Hall will act as first author and coordinating author for an associated preregistered report of the findings, under the guidance of graduate advisor and lead executive reviewer Dr. Jordan Wagge.

The organizers of this partnership consider this a tentative relationship which we will re-evaluate for possible future implementation. In the meantime, come along and join us as we Accelerate a CREP study.

Jon Grahe (CREP Project Leader & Pacific Lutheran University)

Christopher R. Chartier (Accelerator Director & Ashland University)

 

Sign Up Here!


Additional pronunciation guidance and derivation from Jon Grahe:

“Why CREP rhymes with Grapes”

the grape metaphor for replication science

A bunch of grapes, all sharing the same DNA, is a wonderful metaphor for replication science. Grapes from the same bunch, or different bunches from the same vine, all share DNA, but they are rarely, if ever, identical. They differ from one another in a number of ways, much as replications differ from one another and from an original study. Because of growing conditions and other contextual differences, grapes can differ in size, color, and ripeness. All the same DNA, but still different.

Like grapes, replications also differ in size, color, and ripeness. Size is the easiest part of the metaphor to recognize: some researchers have the access or resources to collect data from more participants. Color reflects all the diversity in how a replication is carried out, not just the diversity of participants and experimenters, but also the time of day, the decorations of the lab, the educational culture on campus, and all the other variables that make research laboratories unique. Finally, ripeness reflects age and experience, certainly applicable in research, as replications are conducted both by experimenters exploring the task for the first time and by professionals who have completed more studies than they can remember.

And of course, no one can stop at just one grape, and no one should stop at just one replication.

New Study Accepted: The Object Orientation Effect

We are pleased to announce that our third official study, proposed by Dr. Sau-Chin Chen of Tzu-Chi University, has been selected. It will examine the extent to which the object orientation effect, in which language comprehension can guide later perception, extends across numerous world languages. For example, a picture of a flying eagle is identified faster after reading “He saw the eagle in the sky” than “He saw the eagle in the nest.”


Here is a bit more information from Sau-Chin:

When we read a sentence describing an object in context, such as ‘The eagle is in the sky’, our minds simulate the visual properties of the eagle. This general concept is supported by findings from the sentence-picture verification task. In this task, target objects that match the probe sentence (e.g., the flying eagle) are identified faster than target objects that mismatch the probe sentence (e.g., the sitting eagle). This pattern is called the “match advantage.” So far, this effect has been relatively robust across a range of characteristics of target objects, such as shape, color, size, and orientation. These specific match advantages are named the shape effect, the color effect, the size effect, and the orientation effect, respectively. 

However, there is some inconsistency in orientation effect findings between the studies with a comprehension task (Stanfield & Zwaan, 2001; Zwaan & Pecher, 2012) and studies without a comprehension task (De Koning, Wassenburg, Bos, & van der Schoot, 2017; Hoeben-Mannaert, Dijkstra, & Zwaan, 2017). This project aims to identify the source of these inconsistent findings and assess the extent to which the effect generalizes across languages.
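As a toy illustration only (not the project's actual pipeline, and with assumed column names), the match advantage can be summarized as the difference in mean response times between mismatching and matching trials:

```python
# Hypothetical computation of the "match advantage" from trial-level response
# times. Column names and values are placeholders.
import pandas as pd

trials = pd.DataFrame({
    "condition": ["match", "mismatch", "match", "mismatch"],
    "rt_ms": [612, 655, 598, 640],
})
means = trials.groupby("condition")["rt_ms"].mean()
match_advantage = means["mismatch"] - means["match"]  # positive = faster on matching trials
print(match_advantage)
```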

We have also decided to “bundle” this study with Dr. Curtis Phills’s investigation of the gendered nature of social category representations. This bundling will allow us to efficiently collect data for both studies in a large and international sample, using a single combined and brief (less than 30 minutes) data collection session in each lab.

We will begin recruiting specific labs from our network for this combined data collection process in the coming weeks. Please congratulate Sau-Chin and stay tuned!

Please visit our Get Involved page to sign up for our mailing list and express your interest in participating.

Accepted for 2018: Gendered Social Category Representations

We have selected the Psychological Science Accelerator’s second study. Congratulations to Dr. Curtis Phills, University of North Florida, who submitted a very compelling proposal to investigate the degree to which cognitive representations of a host of social categories are gendered. Data collection will begin in early 2018.


One of the strengths of this submission, beyond its intellectual merits, is the fact that it can be administered quite quickly and easily in a range of research settings. It remains possible that this study will be paired with another brief study into a single data collection session.

Dr. Phills provided the following background and rationale for his proposed study:

“Research related to the intersectional invisibility hypothesis (Purdie-Vaughns & Eibach, 2008) and the gendered nature of race (Carpinella, Chen, Hamilton, & Johnson, 2015; Goff, Thomas, & Jackson, 2008; Johnson, Freeman, & Pauker, 2012; Thomas, Dovidio, & West, 2014) suggest men and women may not be equally represented in the cognitive representations of social categories. Research has found people are more likely to think of a Black man than a Black woman when imagining a Black person (Schug, Alt, & Klauer, 2015) and associate Black men more quickly with Black people than Black women (Thomas et al., 2014). Understanding the extent to which men are overrepresented in the cognitive representations of minority groups is a necessary step in designing effective anti-bias interventions. For example, if Black women are not included in the cognitive representation of Black people then interventions designed to reduce bias against Black people or Black men may not ameliorate bias against Black women. This project is designed to investigate the over-representation of men in the cognitive representations of many social categories (e.g., racial, ethnic, gender, political, and religious categories) around the world. Ultimately, it is hoped that this project will contribute to the design of more effective anti-bias interventions by encouraging researchers to specifically include the women of minority groups–especially for groups in which men are over-represented.”

We look forward to working with Dr. Phills to finalize the data collection protocol and data analysis plan soon!

The Psychological Science Accelerator’s First Study

We are excited to announce that we have selected our first study to be conducted with the Psychological Science Accelerator distributed laboratory network.

Ben Jones and Lisa DeBruine of the University of Glasgow (http://facelab.org/) submitted an excellent proposal to test whether Oosterhof and Todorov’s (2008) valence-dominance model of social perception generalizes across world regions. In their submission they explain:

“Oosterhof and Todorov (2008 PNAS) found that Principal Component Analysis of trait ratings of face images made by students at a US university produced two components. The first component, which they labeled ‘valence’, was highly correlated with rated trustworthiness. The second, which they labeled ‘dominance’, was highly correlated with rated dominance. Although this two-component model of social judgments of faces has become very influential, the extent to which it applies to trait ratings of faces made in other regions of the world is not yet known. The proposed project would use confirmatory factor analysis to establish whether the model described in Oosterhof and Todorov (2008 PNAS) can (1) be replicated in a new sample of North American raters and (2) also explain trait ratings made in other world regions (United Nations Country Grouping: Africa, Asia, Central America, Eastern Europe, European Union, Middle East, North America, Oceania, South America, The Caribbean).”
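As a rough sketch of the original two-component analysis structure (not the proposed confirmatory factor analysis, and using random placeholder data rather than real trait ratings), a PCA of a faces-by-traits ratings matrix might look like this:

```python
# Toy sketch of a two-component PCA over a faces-by-traits ratings matrix.
# The ratings here are random placeholder data, not real ratings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ratings = rng.normal(size=(100, 13))   # 100 faces x 13 trait ratings (placeholder)
pca = PCA(n_components=2).fit(ratings)
print(pca.explained_variance_ratio_)   # variance captured by the two components
```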


Their blinded submission was reviewed by over 40 members of the Psychological Science Accelerator. Our Study Selection Committee found it feasible for our initial efforts, our Advisory Committees noted many strengths of the submission and the likely impact of such a study, and we ultimately decided it was an excellent study to kick off the Accelerator!

In the coming days and weeks, all experimental materials, protocols, translated instructions, and analysis scripts will be finalized in a collaborative effort between the proposing authors and our committee members. We look forward to subsequently matching laboratories from our network with this exciting project.

While we will invest considerable data collection in this study, it will not come close to exhausting the overall data collection capacity of the Psychological Science Accelerator for 2018. Thus, we continue to review the other excellent and exciting submissions that we received following our first call for studies. More announcements will be coming soon!

If you would like to join the Psychological Science Accelerator, to assist in data collection for this specific study, or to be involved going forward, please sign up here to receive more information!