A Large-Scale, Multi-Site Examination of Stereotype Threat Across Varying Operationalizations. A Newly Selected Accelerator Study.

We are excited to announce that we have selected “A Large-Scale, Multi-Site Examination of Stereotype Threat Across Varying Operationalizations” as our next official PSA study! This proposal was submitted by Patrick S. Forscher (University of Arkansas), Valerie Jones Taylor (Lehigh University), Neil A. Lewis, Jr. (Cornell University), and Daniel Cavagnaro (California State University, Fullerton).

This project will test a core proposition of stereotype threat theory, which predicts that the possibility of confirming a negative stereotype can cause people to underperform on the very tasks on which they are stereotyped. Despite the theory’s immense theoretical and practical implications, the studies supporting it suffer from small samples and varying operational definitions of the focal stereotype threat effect.

This project aims to resolve the first problem by leveraging the PSA’s large network of US-based labs to gather a larger sample of African American students than would be easily manageable with a single lab’s resources. For the second problem, the project proposes an adaptive design to find, among four procedures for increasing and four for decreasing stereotype threat, the comparison that provides the best evidence of a performance difference.

You can read more about the selected project at its OSF project page, where we’ve deposited a PDF of the project proposal. If you’re interested in joining the project, fill out this form, where Patrick, Valerie, Neil, and Dan are already recruiting collaborating labs.

Congratulations to Patrick, Valerie, Neil, and Daniel! We can’t wait to work closely with them on this exciting project.

Update on the Accelerated CREP: Registered Replication Report Submission Coming Soon. More Labs Welcome to Join!

On October 22nd, we will submit a Stage 1 Registered Replication Report (peer review prior to data collection) manuscript to Advances in Methods and Practices in Psychological Science for our Accelerated CREP project, an RRR of Turri, Buckwalter, and Blouw (2015). You can read a preprint of the manuscript here.

We welcome public feedback on the manuscript as well as new contributors who would like to join the project prior to our submission. Remember that all CREP projects are partly focused on training students, so we invite teams of graduate or undergraduate students or their supervisors to sign up here!

To be included in this Stage 1 submission, contributors will need to sign up, review the manuscript, suggest any necessary edits, commit to data collection for the project in 2019, and approve the manuscript for submission to AMPPS by October 22nd. While teams can still join after the Stage 1 submission and earn co-authorship on the final manuscript, the more sites we have involved by the Stage 1 submission, the stronger our project will be, and the more likely we are to secure an in-principle acceptance!

You can also read an updated version of the original blog post announcing the Accelerated CREP project below. Updates appear in bold and italics.

-The Accelerated CREP Team

Plato suggested knowledge was “justified true belief.”

 

(updated original post below)

The Accelerated CREP

The Collaborative Replication and Education Project (CREP) and the Psychological Science Accelerator are partnering on a project for the 2018-2019 replication season. The mission of the Accelerator is to accelerate the accumulation of reliable and generalizable evidence in psychological science. The mission of the CREP is to improve undergraduate training through crowdsourced replication. We think these two missions can be pursued in tandem.

The CREP (http://osf.io/wfc6u) is a crowdsourced replication project designed for undergraduate researchers. We invite students to replicate a study from a pre-selected list of studies chosen because they are both highly cited and feasible for undergraduates to complete. Contributors receive CREP approval by demonstrating research transparency at both the pre- and post-data-collection stages in order to maximize the value of their replication data for future meta-analyses. Once there are enough samples to draw meaningful conclusions from the data, all contributors are encouraged to collaborate on a research paper. Since the CREP launched in 2013, over 350 students have started over 100 replication projects. We have one manuscript in press (Leighton et al., 2018), two more nearing completion (Wagge et al.; Ghelfi et al.), and several more that still need more data, listed here.

The Psychological Science Accelerator is a more recent crowdsourced project, and though it is similar to the CREP in many ways, it is also more advanced in development and scope (Moshontz et al., 2018). It is a network of 364 laboratories that democratically select studies and then conduct them on a global scale. The major difference is that the CREP is specifically focused on involving undergraduates in the educational component of replication science, while the Accelerator is primarily focused on accelerating psychological science more generally, though educators can certainly work with undergraduates on all Accelerator projects.

The CREP and Accelerator have decided to coordinate a pilot test of an “Accelerated CREP” study. This pilot will evaluate the feasibility of the Accelerator formally adding an undergraduate education component, via the CREP, on a more regular basis. It is also an opportunity for the CREP to extend its contributor pool beyond its historical audience and to complete data collection for one CREP study much more quickly than normal. Among the Accelerator’s 364 active laboratories, we imagine that a subset of PIs would like either to implement the CREP as part of their research methods courses or to work with undergraduate researchers on the Accelerated CREP, students who would benefit from taking “ownership” of a project and contributing to a large-scale collaboration outside of the typical Accelerator workflow.

For this partnership, Accelerator members and one or more undergraduate researchers they supervise are invited and encouraged to work through the CREP process for a single study between January 1, 2019 and January 1, 2020. While the CREP typically runs studies through the US academic year, doing so in this case would prevent many non-US labs from participating equally. Where possible, we recommend that contributing teams assign a student as the contact person to interact with the CREP Review team, so that students experience all aspects of the research process.

After submitting a Registered Replication Report (RRR) proposal to Advances in Methods and Practices in Psychological Science (AMPPS), we have been invited to submit this collaborative replication of Experiment 1 of Turri, Buckwalter, and Blouw (2015) as a Stage 1 RRR. AMPPS also suggested some methodological changes to broaden the generality of our findings, which we have since incorporated into our protocol. If the Stage 1 RRR is accepted, AMPPS will also call for additional contributors through APS.

Here we give an overview of the full process for the first “Accelerated CREP” study, which differs in a few notable ways from the standard operating procedures of the Accelerator and CREP.

Phase 1 (Submission and evaluation, complete). The CREP team conducted their normal study selection process for the 2018/2019 academic year (now the 2019 calendar year). Each year, the CREP team selects one to three new studies to add to the list of available studies. They identify the top three or four cited articles in the top journal in nine sub-disciplines, then code those studies for feasibility and potential student interest. This year they selected one new study, Turri, Buckwalter, and Blouw (2015), “Knowledge and Luck”, Experiment 1 (https://osf.io/n5b3w/), with a target N = 100 for each local site.

Phase 2 (Preparation). The CREP invites Direct (or close) replications from contributors. As such, the protocol should match that of the published article. If AMPPS accepts the Stage 1 RRR manuscript, it will serve as the primary protocol by which contributing labs will design their independent replications.

For advanced students, the CREP invites Direct+Plus replications (i.e., direct replications “plus” an extension), which involve the addition of one or more independent or dependent variables that are collected after (or independently of) the original study.

Experiment 1 of Turri et al. is exciting to replicate because the methods are all available in the publication and the study can be administered via computer. Further, there may be interesting moderator variables that could be tested across multiple labs (e.g., education, perception of luck vs. ability, culture, etc.).

The CREP asks contributors to create and edit an OSF page (https://osf.io/srh4k/) and provide their materials, ethics approval, and a video of their procedure. For the Accelerated CREP, we hope to recruit undergraduates from 50 or more locations to join the project. We already have 35 institutions signed up to contribute, and we expect more!

Phase 3 (Implementation). CREP contributors submit their project page for review by the CREP team twice, once before and once after data collection. The pre-data review verifies that the contributor is meeting CREP standards for ethics, materials, and procedure. For the post-data review, the CREP team reviews the presentation of the data and the results to verify that the data are usable in the aggregate multilevel analyses.

Review teams, each including two faculty members and a student administrative advisor, David Redman, will be led by one of the CREP’s experienced Executive Reviewers, Dr. Jordan Wagge. Faculty on contributing teams will be invited to serve as reviewers on other CREP contributors’ projects in order to ensure high-quality replications.

Phase 4 (Dissemination). Because the CREP is decentralized, the local data sets are posted publicly in order to go through the post-data review. Contributors are encouraged to present their findings at conferences, but the data are collated for the drafting and submission of a manuscript reporting the combined findings, because no single replication provides definitive results. In contrast to normal CREP procedure, we invite potential authors to indicate their interest in authorship at the beginning rather than the end of the project. Braedon Hall will act as first author and coordinating author for the RRR under the guidance of his graduate advisor and lead executive reviewer, Jordan Wagge.

The organizers of this partnership consider this a tentative relationship which we will re-evaluate for possible future implementation. In the meantime, come along and join us as we Accelerate a CREP study.

Jon Grahe (CREP Project Leader & Pacific Lutheran University)

Christopher R. Chartier (Accelerator Director & Ashland University)




Additional pronunciation guidance and derivation from Jon Grahe:

“Why CREP rhymes with Grapes”

the grape metaphor for replication science

A bunch of grapes, all sharing the same DNA, is a wonderful metaphor for replication science. Grapes from the same bunch, or different bunches from the same vine, all share DNA, but they are rarely, if ever, identical. They differ from each other in a number of ways, much as replications differ from each other and from an original study. Because of growing conditions and contextual differences, grapes can differ in size, color, and ripeness. All the same DNA, but still different.

Like grapes, replications also differ in size, color, and ripeness. Size is the easiest part of the metaphor to recognize: some researchers have more access or resources to collect data from more participants. Color reflects all the diversity in the application of a replication, not just diversity of participants and experimenters, but also the time of day, the decorations of the lab, the educational culture on campus, and all the other variables that make research laboratories unique. Finally, ripeness reflects age and experience, certainly applicable in research, as replications are conducted both by experimenters exploring the task for the first time and by professionals who have completed more studies than they can remember.

PSA Face Rating Study: Last Call for Labs in Africa, Central America, Mexico, the Middle East, and Scandinavia.

Lab recruitment for the first official PSA study has far exceeded our wildest expectations. The study is now back under review (following a revise and resubmit decision) at Nature Human Behaviour as a Stage 1 Registered Report. This means that the critical rounds of peer review are happening now, prior to data collection. You can read the current draft of the manuscript here. The study will test the generalizability of the valence-dominance model of face perception across world regions, specifically: Africa, Asia, Australia and New Zealand, Central America and Mexico, Eastern Europe, the Middle East, the USA and Canada, Scandinavia, South America, the UK, and Western Europe. Our currently planned minimum N is greater than 9,000.


We are extremely excited to have such a large and diverse team involved, but the project could still be strengthened by additional contributions from a few key world regions. While we no longer need labs in most world regions, we would warmly welcome additional labs from:

  • Africa
  • Central America
  • Mexico
  • The Middle East
  • Scandinavia

Luckily, we can still expand our team of data collection labs and have the efforts of new labs recognized via authorship on the final published report, thanks to some excellent flexibility in the RR submission model at NHB. Data collection will begin as soon as possible (this depends on editorial decisions at NHB) and run through early September 2019. Please pass this information along to colleagues in the above regions who you think would be interested in contributing.

Sign up for the PSA network here.

Sign up for this specific study here.

 

Photo by Gabriel Picco 

The Psychological Science Accelerator’s First Year

Today marks the one-year anniversary of the blog post that started what would eventually evolve into the Psychological Science Accelerator. In the brief post, I introduced the idea of a distributed laboratory network in psychological science, which I first called a “CERN for Psychological Science”:

What would a CERN for Psych look like? It certainly would not be a massive, centralized facility housing multi-billion dollar equipment. It would instead be comprised of a distributed network of hundreds, perhaps thousands, of individual data collection laboratories around the world working collaboratively on shared projects. These projects would not just be replication efforts, but also tests of the most exciting and promising hypotheses in the field.

They say timing is everything, and based on the immediate and enthusiastic response (mostly via Twitter), this was clearly an idea whose time had come. Today, the PSA is one year old and is building the necessary capacity to fulfill the potential that so many of you saw in it back on August 26th, 2017. Here, I highlight a select list of noteworthy events, moments, and accomplishments from our first year, and welcome all psychological scientists to join us in pursuing a fully fledged team-science model for the field.


Year 1 Highlights

  • September 3rd, 2017: the network reached 50 member labs.
  • September 5th, 2017: we issued our first call for study submissions.
  • September 19th, 2017: the still nameless network reached 100 member labs.
  • September 21st, 2017: we decided, by vote of the network, to call our project “The Psychological Science Accelerator,” maintaining an allusion to massively collaborative projects in Physics.
  • October 3rd, 2017: The Interim Leadership Team was announced. This team shepherded the PSA from a general idea into a functioning collaborative project. These individuals deserve immense credit for getting us off the ground and ensuring that our early momentum turned into real action: Sau-Chin Chen, Lisa DeBruine, Charlie Ebersole, Hans IJzerman, Steve Janssen, Melissa Kline, Darko Lončarić, and Heather Urry.
  • November 8th, 2017: we selected our first study, proposed by Ben Jones, Lisa DeBruine, and Jess Flake.
  • November 8th, 2017: the PSA was covered in Science Magazine.
  • December 5th, 2017: we selected our second study, proposed by Curtis Phills.
  • December 13th, 2017: the PSA was covered in FiveThirtyEight.
  • January 18th, 2018: the PSA reached 200 member labs.
  • January 24th, 2018: we selected our third study, proposed by Sau-Chin Chen.
  • April 3rd, 2018: the PSA was covered in BuzzFeed.
  • April 8th, 2018: we announced the Accelerated CREP study (our fourth), a collaboration with the Collaborative Replication and Education Project, focusing on training students through the conduct of replication studies.
  • April 18th, 2018: we collected our first data, as part of a pilot test for bundling studies 2 and 3 in a single data collection session.
  • April 30th, 2018: we issued our second call for studies.
  • July 11th, 2018: the Stage 1 Registered Report manuscript for study 1 received a “revise and resubmit” decision from Nature Human Behaviour.
  • July 20th, 2018: our intro paper, laying out the policies and procedures of the PSA, was accepted for publication at Advances in Methods and Practices in Psychological Science.
  • August 23rd, 2018: the PSA was mentioned in the Guardian as a positive example of change in the field.
  • The near future: our 4 empirical projects will likely begin primary data collection in the coming months. All are at various stages of preparation or review as Registered Reports, so the timelines are still a bit uncertain, but our best current estimates see study 1 commencing in ~October 2018, studies 2 and 3 commencing in ~December 2018, and the Accelerated CREP commencing in ~January 2019. Interested researchers can still sign up for any of these studies. Feel free to email us (psysciaccelerator@gmail.com) for more information.

This has been an amazing, awe-inspiring, humbling, and motivating first year for the Psychological Science Accelerator. We have established many of the policies, practices, procedures, and institutional norms needed to make this project a success in the years to come. Here’s hoping our next year is one of capitalizing on our potential and beginning to make a real and lasting impact on psychological science!

Allow me to close with a direct quote from our introductory paper, co-authored with over 100 members of the PSA:

Success in this endeavor is far from certain. However, striving towards collaborative, multi-lab, and culturally diverse research initiatives like the PSA can allow the field to not only advance understanding of specific phenomena and potentially resolve past disputes in the empirical literature, but they can also advance methodology and psychological theorizing. We thus call on all researchers with an interest in psychological science, regardless of discipline or area, representing all world regions, having large or small resources, being early or late in career, to join us and transform the PSA into a powerful tool for gathering reliable and generalizable evidence about human behavior and mental processes. If you are interested in joining the project, or getting regular updates about our work, please complete this brief form: Sign-up Form. Please join us; you are welcome in this collective endeavor.

364 Labs in 365 Days

At the moment of posting this, we are now up to exactly 364 member laboratories, just 1 lab short of our goal of growing the network to 365 labs in our first 365 days! Help us reach our arbitrary-but-fun goal of recruiting 1 new member of the PSA every day of our first year by signing up and joining this amazing team of researchers! A few of our current members passed along their reflections on joining the project. Read below if you need a smile or another good reason or two to join this community.

Chris

 

I’ve learnt a phenomenal amount about study design, translations, registered reports, organisation, and quantitative methods through my involvement with the PSA. In particular, I’ve learnt a lot from faculty like Chris Chartier, Jessica Flake, and Hans IJzerman, as well as postdocs and grad students like Bastian Jaeger and Nick Michalak. It’s really reinvigorated my interest in research too.

-Ben Jones

When I saw one of the early maps of labs my reaction was, “oooh there are going to be a lot of interesting measurement and modeling challenges” –I sent Chris a message and asked if we could talk. I wanted to know, could there be a methodological research and support arm of the PSA? He said !YES! And I reached out to some other people I had met at SIPS to talk more about forming a methods committee for the PSA. We are still figuring it all out, but every other week a group of us, across at least 3 time zones, get together to discuss the data and methods aspects of the PSA. What started as an interest in stats has morphed into a group of people working to make the PSA an enduring presence in psychology. I feel lucky to have found this family early in my career. It is scary for it to be up to us to make it happen, but also energizing!

-Jess Flake

I still remember that I felt so proud when I could show my students that Catanzaro, a town located in one of the most impoverished areas of Southern Italy, was on the PSA map. I felt like that for many reasons. One of them is that the bachelor programme in Psychology at the University Magna Graecia of Catanzaro started just two years ago. We are only a few teachers, most of us at the very beginning of our careers, passionate about our job and willing to provide our students with the most advanced and updated knowledge about what is going on in our field. I dedicated more than one lecture in my Methods class to the importance of reproducibility/replicability and to how Psychology is moving forward thanks to the spread of Open Access practices, pre-registration, and the PSA. I hope that my students will feel proud to know that they are not at the margins of this revolution, but rather embrace it and take part in this exciting adventure, which is growing and becoming more and more ambitious every month. Thank you for building it.

-Marco Tullio Liuzza

The PSA is the most exciting thing I’ve ever been involved with in science. The way so many people (especially so many talented ECRs) came together to pool their expertise and create something much bigger than the sum of its parts has been awe-inspiring. Through my involvement in the first study, I’ve seen first-hand just how much work is put into essential aspects of this project that are often overlooked, like the translations. I really hope the PSA can be a model for transforming how we do psychology to make it a more team-based and rigorous science.

-Lisa DeBruine

As an HDR graduate student in psychology, my involvement in the PSA has tremendously increased my knowledge of experimental methods. I had no previous experience with open science, replication, and registered reports. I believe that open science is the future of research. My involvement has allowed me to learn about open science and registered reports. It has been an exceptional opportunity to be involved with high-quality research and to increase my global research contacts and networks. I would encourage everyone to be involved in this exciting network of researchers.

-Monica Koehn

The Psychological Science Accelerator: Call for Study Submissions (Deadline: June 20th)

The Psychological Science Accelerator (PSA), a network of 300 labs collaborating to collect large-scale international samples of psychological data, is currently accepting study proposals from all areas of psychological science. Anyone can submit a proposal, whether or not they are a member of the PSA. Our mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science, reducing the distance between truth about human behavior and mental processes and our current understanding. For a full overview of the PSA, please see our pre-print introducing our policies and procedures (https://psyarxiv.com/785qu/).


Proposed studies can test novel hypotheses or focus on the replication of previous findings, can be basic or applied in focus, and can be exploratory or confirmatory in nature. Because accepted studies will likely involve considerable use of resources, the study selection process begins with the preparation and evaluation of a Registered Report-style submission by authors hoping to collect data via the PSA network. Our Study Selection Committee will conduct an initial feasibility evaluation for all proposed studies (see below for more information on this feasibility check). Studies that pass this check will be evaluated by 10 peer reviewers and our Study Selection Committee for final selection. We plan to accept 2-3 studies during this round of review. Selected studies will then proceed through the PSA workflow depicted in the figure below.

[Figure: PSA study workflow]

Please email submissions to our Director, Dr. Christopher R. Chartier, cchartie@ashland.edu. Submissions will be accepted until June 20th, 2018.

Feasibility

All feasibility decisions are made with respect to our current, and ever-changing, resources. Although the PSA comprises hundreds of labs from around the world that have agreed to volunteer some of their resources to PSA projects, we are currently unable to accommodate all types of designs. Submissions are more likely to pass the initial feasibility check if they have the following characteristics:

  • Do not require specialized equipment (e.g., eye-tracking, EEG) or proprietary experimental software (e.g., E-Prime) to be used at the data collection sites, unless the proposing team can provide these resources to data collection labs
  • Experimental materials and analysis scripts can be shared easily and made publicly available
  • Do not require hard-to-reach samples (e.g., clinical populations). We hope to better accommodate such sampling in the future.
  • Target sample size per site is less than 150 participants
  • Target number of data collection sites is less than 150
  • Duration of an individual data collection session is less than 90 minutes
  • The likelihood and severity of risk to the participant are kept to a minimum, such that the risk is not greater than what participants would face normally and would not require special consideration or deliberation from an ethics board.

Characteristics of strong submissions

Beyond simply being feasible given current PSA resources, strong submissions will also:

  • Accurately and clearly describe literature relevant to the study’s goals and design, such that researchers unfamiliar with the subject can understand the basic concepts behind the theory/phenomenon and the purpose of the research.
  • Clearly articulate the purpose of the research, relevant research questions, and hypotheses.
  • Clearly articulate the research design, with a focus on sound methodology appropriate to the research questions, including adequate power analysis to justify sample size.
  • Provide examples of relevant material, for example websites, experimental scripts (e.g., E-Prime, Inquisit, OpenSesame), precise experimental design, and/or stimuli.
  • Accurately and clearly describe an analysis strategy appropriate to the research questions and design. Pilot or simulated data and working analysis scripts are ideal for clarity.
  • Make a compelling case for the importance of large-scale collaborative data collection.

Submission Format and Guidelines

The following components are required for all submissions:

  • Cover Page, including title of the study, date of the latest draft, and keywords
  • Abstract of up to 150 words
  • Main body submission text of up to 5,000 words
  • References
  • Supplementary materials

The following guidelines are intended to assist you in the preparation of your study submission to the Psychological Science Accelerator. Submissions normally include a description of the key background literature and motivation for the study, hypotheses, study procedures, proposed statistical analysis plan, a statistical power analysis, and pilot data (wherever applicable).

Introduction

A review of the relevant literature that motivates the research question and a full description of the study aims and hypotheses.

Methods

A full description of proposed sample characteristics, including criteria for data inclusion and exclusion (e.g. outlier extraction). Procedures for objectively defining exclusion criteria due to technical errors or for any other reasons must be specified, including details of how and under what conditions data would be replaced.

A description of study procedures in sufficient detail to allow another researcher to repeat the methodology exactly, without requiring further information.

Proposed analysis pipeline, including all preprocessing steps, and a precise description of all planned analyses, including appropriate correction for multiple comparisons. Specify all covariates or regressors. Specify analysis decisions that are contingent on the outcome of prior analyses.
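
To make the multiple-comparisons expectation concrete, here is a minimal sketch (in Python, with purely hypothetical p-values) of how a family of planned tests might be adjusted; the Holm method shown here is an assumption, and proposers should justify whichever correction fits their own design:

```python
# Minimal sketch with hypothetical p-values: adjusting a family of planned
# comparisons. Holm's step-down method is shown; the appropriate correction
# depends on the study design and should be justified in the submission.
from statsmodels.stats.multitest import multipletests

p_values = [0.004, 0.021, 0.049, 0.130]  # placeholder p-values for four planned tests
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='holm')

for p_raw, p_adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f} -> adjusted p = {p_adj:.3f}, reject: {rej}")
```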

Results

Studies involving Neyman-Pearson inference must include a statistical power analysis. Estimated effect sizes should be justified with reference to the existing literature or theory. Since publication bias overinflates published estimates of effect size, power analysis should be based on the lowest available or meaningful estimate of the effect size.
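
For illustration, a minimal power-analysis sketch (Python, with hypothetical numbers) based on a deliberately conservative effect-size estimate:

```python
# Hypothetical sketch: participants needed per group for a two-sample t-test,
# powered on a conservative (small) effect-size estimate to guard against
# publication-bias-inflated effects in the literature.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.2,           # conservative Cohen's d (illustrative assumption)
    alpha=0.05,                # two-sided Type I error rate
    power=0.95,                # desired statistical power
    alternative='two-sided',
)
print(f"Required participants per group: {n_per_group:.0f}")
```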

In the case of highly uncertain effect sizes, a variable sample size and interim data analysis is permissible but with inspection points stated in advance, appropriate Type I error correction for ‘peeking’ employed, and a final stopping rule for data collection outlined.
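
The rationale for pre-specified inspection points can be seen in a small simulation. The sketch below (hypothetical settings throughout) shows how uncorrected “peeking” inflates the Type I error rate and how even a crude per-look correction restores it; formal alpha-spending functions such as O'Brien-Fleming would be less conservative:

```python
# Hypothetical sketch: Type I error inflation from interim "peeks" under a true
# null, and a simple Bonferroni-style per-look threshold as a crude correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)
n_sims, looks, n_per_look, alpha = 2000, 4, 25, 0.05

def realized_type_i_error(per_look_threshold):
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(size=looks * n_per_look)  # no true group difference
        b = rng.normal(size=looks * n_per_look)
        for k in range(1, looks + 1):
            n = k * n_per_look
            if stats.ttest_ind(a[:n], b[:n]).pvalue < per_look_threshold:
                rejections += 1                  # stop early and "reject"
                break
    return rejections / n_sims

print("Uncorrected peeking:", realized_type_i_error(alpha))          # well above 0.05
print("Per-look Bonferroni:", realized_type_i_error(alpha / looks))  # at or below 0.05
```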

For studies involving analyses with Bayes factors, the predictions of the theory must be specified so that a Bayes factor can be calculated. Authors should indicate what distribution will be used to represent the predictions of the theory and how its parameters will be specified.  
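
As one deliberately simplified illustration, the sketch below computes a Savage-Dickey Bayes factor for a point null against a normal prior on a standardized effect, using a normal approximation to the likelihood; the prior scale and input values are assumptions a proposer would need to justify from theory:

```python
# Hypothetical sketch: Savage-Dickey Bayes factor for H0: delta = 0 versus a
# normal prior on the standardized effect delta, using a normal approximation
# to the likelihood. The prior scale and inputs are illustrative assumptions.
import numpy as np
from scipy import stats

def savage_dickey_bf01(effect_estimate, standard_error, prior_sd=0.5):
    """BF01 = posterior density at delta = 0 / prior density at delta = 0."""
    post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / standard_error**2)
    post_mean = post_var * (effect_estimate / standard_error**2)
    prior_at_zero = stats.norm.pdf(0.0, loc=0.0, scale=prior_sd)
    post_at_zero = stats.norm.pdf(0.0, loc=post_mean, scale=np.sqrt(post_var))
    return post_at_zero / prior_at_zero

bf01 = savage_dickey_bf01(effect_estimate=0.15, standard_error=0.10)
print(f"BF01 = {bf01:.2f} (values above 1 favor the null)")
```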

Full descriptions must be provided of any outcome-neutral criteria that must be met for successful testing of the stated hypotheses. Such quality checks might include the absence of floor or ceiling effects in data distributions, positive controls, or other quality checks that are orthogonal to the experimental hypotheses.

Supplemental Materials

Include full questionnaires, stimuli, and materials needed to conduct the study. Pilot data can be included to establish proof of concept, effect size estimations, or feasibility of proposed methods. Simulated data and analysis scripts are ideal for clarity of the exclusion criteria and analysis plan.

These guidelines were adapted from https://osf.io/pukzy.

 

Please email submissions to our Director, Dr. Christopher R. Chartier, cchartie@ashland.edu. Submissions will be accepted until June 20th, 2018.