News from the Accelerator – November/December 2020

WOW… 2020.

It has been a wild year for all of us. From quarantine to scheduled grocery visits and walks, we have adapted a lot this past year. I want to wrap up this year by taking some time to reflect on all that our network has accomplished.

The PSA successfully implemented three new studies in just a few short months. We have added several new initiatives, such as the director statements, the first PSA conference, and our most recent blog post about the PSA’s capacity. We have also finished, or nearly finished, data collection for several of our current studies (PSA 004, PSACR 001/002/003, and PSA 006).

All of this was made possible by our members and directors. Thank you for continuing to be present in our studies and for your willingness to collaborate. We appreciate all of our PSA members, but we want to give a special shoutout to our project teams for their hard work keeping the studies running during a pandemic.

We enter the new year with the prospect of new studies and the PSA 001 paper in press!

Study updates:

  • PSA 001- The paper is in press! It will be published in Nature Human Behaviour in January.
  • PSA 002/003- The leadership team for the 002/003 bundle of studies has been hard at work creating an online version of the 002 study protocol that does not rely on in-person data collection. This new protocol was recently approved by the editors of Psychonomic Bulletin & Review. We are now testing the 002 online link.
  • PSA 004- We are no longer taking new teams, and we would like data collection to be completed by the end of the year, with final CREP submissions by the end of January.
  • PSA 005- This project is currently on hold until the COVID-19 situation in the US improves.
  • PSA 006- We are in the last month of data collection for 006!!! Thank you to everyone who has been a part of this process so far. 
  • PSA 007- PSA 007 will start its pre-projects after the holidays; please email buchananlab@gmail.com if you are interested in joining.
  • PSACR Bundle- The submitting author teams are hard at work analyzing data and writing their manuscript drafts! Meanwhile, we (the admin team) are developing a data management plan (figuring out exactly how and when we will share the data). We are also working, on an ongoing basis, to make sure that everyone’s contributions are accurately documented.

Study Capacity Policy: 

Last month we voted on the study capacity policy for the PSA. As described in the policy, study capacity is determined by the PSA’s data collection capacity (the amount and kind of participant data the PSA can collect in a given year) and its administrative capacity (its ability to perform the administrative tasks required to collect participant data). The policy passed and will now go into effect.

How many resources does the PSA possess?

This blog post discusses the PSA’s first study capacity report, which contains many interesting findings about our network. It also addresses the resources that the PSA currently has and will need in the future. Make sure you take a look at this important resource for the PSA.

The financial cost of the PSA’s vision.

We recently released this blog post describing the need for funding to continue our vision. Patrick Forscher and Hans IJzerman argue that, in order to fulfill a maximalist vision, we need to secure funding.

PSACR hackathon:

The admin team for the PSACR projects is hard at work preparing the datasets for release. As part of this process, we held two hack-a-thons to code the free text responses that participants entered about where they were located into more structured and usable location data. We also cross-checked our initial coded responses to ensure that our final dataset is accurate. The work involved sorting through two spreadsheets — one with 2000+ rows and one with a bit less than 200 — and deciding on the meaning of free text that was often incomplete and in non-English languages.

Fortunately, these hack-a-thons were a huge success! We were able to accomplish the work quickly and efficiently. Huge thanks go out to the people who volunteered their time to make this happen, including Biljana Gjoneska, Anna Szabelska, Martin Vasilev, Amélie Gourdon-Kanhukamwe, Jen Beaudry, and Niels van Berkel. Thanks also go out to the hardworking members of the admin team who prepped, facilitated, and administered the hack and its resulting data, including Erin Buchanan, Hannah Moshontz, and Patrick Forscher.

Clinical Chapter

We recently wrote a chapter about how our network might help with clinical research for an upcoming book on questionable research practices (and solutions) in clinical psychology. We think there’s potential to explore important clinical questions with the PSA! Check it out here!

Coffee Station Channel on Slack:

In a physical office, you would have a coffee station: a place where you can take a break and refresh with your co-workers. The coffee station often becomes a place to share what is going on in your life and to step away from your busy work schedule. We want to get to know more people in our vast network, so Savannah decided to create a virtual coffee station. We will be using this channel to get to know each other and possibly host some fun coffee hangouts. Head over to Slack to check out the coffee station!

Happy Holidays! 🎉 We look forward to working with you in the new year!

Savannah

How many resources does the PSA possess?

Patrick S. Forscher, Bastien Paris, and Hans IJzerman

How many resources does the PSA possess? This is a question that affects many activities within the PSA — prime among them the annual decision of how many studies the PSA is able to accept. Here are a few other examples:

  • People involved in the selection of studies must decide whether the PSA can feasibly support studies with special requirements, such as a proposal to conduct a multi-site EEG project or a project involving a low-prevalence (and therefore difficult to recruit) population. 
  • People involved in writing PSA-focused grants must be able to accurately describe the size and scale of the network to make their grant arguments and planning concrete. 
  • People involved in managing the PSA’s finances need to know the people and projects that have the highest financial need. 
  • People involved in regional recruitment need to know how many members are currently located in a specific world region and the number of participants those members can muster for a typical PSA study. 

In the PSA’s first three years, we have had to rely on ad hoc sources to answer questions about its resources. Today, with the release of the PSA’s first study capacity report, we have a more systematic source. This blog post describes the logic that underlies the report, gives some of its top-level findings, and outlines what we plan to do with the report now that we have it.

How to think about and report on PSA resources

The PSA’s most basic activity is running multi-site studies, and one of the most fundamental resource-dependent decisions PSA leadership must make is how many proposals for these multi-site studies the PSA will accept. Thus, a single multi-site study provides a useful yardstick for measuring and thinking about PSA resources.

The PSA’s newly ratified resource capacity policy takes just such an approach. It considers PSA resources from the perspective of helping PSA leadership decide how many studies they should accept in a given year. From this perspective, the most basic unit of analysis is the study submission slot, a promise by the PSA to take on a new multi-site study. Study submission slots are limited by at least two types of resources:

  1. Data collection capacity. This is the PSA’s ability to recruit participants for multi-site studies. Data collection capacity is mainly governed by the number of PSA members located in psychology labs throughout the world. However, money can also expand the PSA’s data collection capacity; the PSA has occasionally contracted with panel-provider firms to recruit participants on its behalf.
  2. Administrative capacity. This is the PSA’s ability to perform the administrative tasks required to support multi-site studies. Administrative capacity is mainly governed by the availability of labor, whether that labor be paid or volunteer. (A toy sketch of how these two capacities jointly limit submission slots appears after this list.)
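
To make that logic concrete, here is a toy sketch, not anything taken from the policy itself, of how the two capacities jointly bound the number of submission slots. The class names and the admin-hours figure are illustrative assumptions; only the 20,000-participant figure appears in the report discussed below.

    # Toy model (not from the policy) of submission slots constrained by the
    # two capacities described above. Numbers are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class Capacity:
        participants_per_year: int  # data collection capacity
        admin_hours_per_year: int   # administrative capacity

    @dataclass
    class StudySlot:
        expected_participants: int
        expected_admin_hours: int

    def max_slots(capacity: Capacity, typical: StudySlot) -> int:
        """How many typical studies fit within both capacity limits?"""
        by_data = capacity.participants_per_year // typical.expected_participants
        by_admin = capacity.admin_hours_per_year // typical.expected_admin_hours
        return min(by_data, by_admin)  # the scarcer resource binds

    # Hypothetical example: 20,000 participants per year (a figure from the
    # report) and an entirely made-up administrative budget of 2,000 hours.
    print(max_slots(Capacity(20_000, 2_000), StudySlot(5_000, 400)))  # -> 4

The point of the sketch is simply that whichever capacity is scarcer determines how many slots the PSA can offer.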

The resource capacity policy also allows for the possibility of study slots that add on special requirements or evaluation criteria. These special submission slots might require, for example, that any studies submitted for consideration to that slot involve EEG equipment. Alternatively, the slots might require that the submitted studies involve investigating the psychological aspects of the COVID-19 pandemic. We will go into more detail about how we think about these special submission slots in a later post. For the time being, we simply note that assessing our resource capacity will allow us to understand the sorts of special submission slots we can accommodate.

The PSA’s data collection and administrative capacities are both in flux. The PSA’s ability to accommodate more specialized types of studies also fluctuates on a yearly basis. Moreover, the PSA is committed to the cultural and national diversity of psychology research, a commitment that depends on its reach in under-resourced countries. Accurate assessment of all these capacities therefore requires ongoing documentation of the PSA’s members, member characteristics (including country of origin), and its yearly activities. Currently, our documentation happens in a shared Google Drive, Slack, the PSA’s OSF project and the subprojects for each of its studies, and the recently created PSA member website.

According to the policy, these various sources of documentation are consulted comprehensively to form a complete picture of the PSA’s resources. This consultation results in an annual study capacity report, which can inform decisions and activities involving the PSA’s resources.

Findings from the first study capacity report

The first PSA study capacity report is large and comprehensive. Here are some big-picture findings:

  • The PSA currently has 1,400+ members from 71 countries.
  • Out of seven studies, six are still underway collecting data.
  • Based on our past data collection capacity, we have the ability to recruit a minimum of 20,000 participants over the upcoming scholarly year for new PSA projects.
  • Two out of three PSA members come from North America (24%) and Western Europe (41%). 
  • We do not have sufficient information to accurately estimate the number of administrative hours available for each PSA role.

However, these big-picture findings hide a lot of detail that may be important for PSA decision-making. For example, here are a few additional tidbits from the report:

  • The number of PSA member registrations almost tripled as a result of the COVID-Rapid project.
  • At the time of the report’s writing, and excluding PSA007, the PSA will need to recruit 30,000 participants to complete its active roster of projects.
  • About 20% of the PSA’s membership have a social psychology focus area.
  • About 85% of people in active PSA administrative roles are located in North America (63%) or Western Europe (21%).

If you’re interested in digging into more of these details, you can find the full report here.

What’s next?

As outlined by policy, the main purpose of the report is to inform decisions about how many studies the PSA can accept in the next wave of study submissions. Thus, an important next step for this report is for the upper-level leadership to use the report to come to a decision about study submission slots.

However, the study capacity report has already catalyzed a number of ongoing conversations about what the PSA is, what it should be in the future, and how the PSA should go about meeting its aspirations for itself. Some of these conversations have resulted in their own dedicated blog posts, which will be posted to the PSA blog in the next few days.

In the meantime, we welcome your thoughts about the PSA’s study capacity and related issues. We believe that compiling this report has been a useful exercise precisely because the process inspired so many productive conversations about the PSA’s direction and goals. This reinforces our commitment to maintaining the reporting structure in future years.


Funding Note: The study capacity report was made possible via the work of Bastien Paris; his internship at Université Grenoble Alpes is funded by a grant provided by the Psychological Science Accelerator. Patrick S. Forscher is paid via a French National Research Agency “Investissements d’avenir” program grant (ANR-15-IDEX-02) awarded to Hans IJzerman.

News from the Accelerator – October 2020

Study Capacity Policy up for a Vote

In an effort to more systematically assess our study acceptance capacity, we have drafted a new PSA policy document. All members of the PSA, please read this document and then vote yes or no by logging into the member site and finding the voting form under the “your tasks” heading on the main page.

Data Collection Updates

  • PSA 004: We have 2,640 participants across more than 50 labs. The sign-up deadline has passed, but if your lab hasn’t started data collection, we will need you to start by November 15. Data collection ends in December.
  • PSA 006: 006 is in the final stages of data collection. We are going strong but are not accepting any new labs at this time. Data collection is set to end at the end of 2020.
  • PSACR 001/002/003: Data collection for the PSACR bundle was completed on October 23rd! Considering people who completed at least 90% of questions on a given component of the survey, our estimated Ns are: 44,217 for the general questions; 16,618 for Study 1 (Loss Gain); 20,805 for Study 2 (Cognitive Reappraisal); and 18,594 for Study 3 (Self Determination). We also met our goal for Study 2 of having 35 countries with N >= 200. Anyone can explore Ns by study, language, and different percent completion thresholds at http://formr.psysciacc.org/shiny/shiny_app/. (A sketch of how such completion-threshold Ns can be computed appears below.)
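
For readers curious about the mechanics, here is a minimal sketch of the completion-threshold counting described above. It assumes a hypothetical CSV with one row per participant and question columns prefixed by study component; none of these names come from the actual PSACR pipeline.

    # Count participants who answered at least 90% of a component's questions.
    # File and column naming conventions here are assumptions, not PSACR's.
    import pandas as pd

    def count_complete(df: pd.DataFrame, prefix: str, threshold: float = 0.90) -> int:
        """Count rows whose completion rate on `prefix` columns meets the threshold."""
        cols = [c for c in df.columns if c.startswith(prefix)]
        completion = df[cols].notna().mean(axis=1)  # per-participant completion rate
        return int((completion >= threshold).sum())

    df = pd.read_csv("psacr_responses.csv")      # hypothetical file name
    print(count_complete(df, prefix="study2_"))  # estimated N for Study 2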

Publication Updates 

  • PSA 001: The final copy of the paper has been accepted and we are working on responding to the various editorial formatting requests prior to publication! This is our first study to have a final, Stage 2, acceptance.
  • PSA 002/003: We are currently waiting on the journal editor’s comments about the modifications we made earlier. The editor will send us a decision and suggestions in a few days, so we will have a lot of editing to do soon!
  • PSA 005: We have received Stage 1 acceptance at Nature Human Behaviour! Here is a preprint of the manuscript if y’all want to give it a read!

Training Committee Update

We are trying to get feedback from you all on what kind of training you think you need in order to be successful in the PSA and its many studies. Here is a Google form you can fill out to tell us what you think would be helpful.

Community Building and Network Expansion Committee Update

From Crystal:

We have a position opening in the Community Building and Network Expansion Committee in December. We really hope folks from non-US/CA/EU areas consider going for the position. Having someone from these geographic regions in a leadership role, especially for this committee, is crucial for PSA’s growth and development. I will miss being CBNEC assistant director but I am SO excited about seeing where new leadership can take this committee and the PSA in general.

If you think you might be even kinda sorta interested in this position, please DM me on Slack or email me at cnsteltenp@usi.edu! I’d love to chat about the group and brainstorm on where it can go from here!  

Gathering feedback on a possible PSA translation service

PSA member Adeyemi Adetula would like to solicit your feedback on the potential demand for a paid translation service for African languages, and to assess the funds available to dedicate to such a service.

In general, there is little research on African populations in psychology. Even in the Psychological Science Accelerator (PSA), African participation in multi-site studies has been poor, with a recent project including just two (out of several thousand) African indigenous languages. To facilitate inclusion and improve generalizability of psychology, we propose a paid translation service for African languages and thereafter offer a more general translation service (for full details regarding this service, see here).

Who could use this service? Any researcher interested in conducting research amongst African populations.   

What do I stand to benefit? First, users would get a high-quality translation. Quality is a high priority, and we intend to adopt the PSA’s established translation procedures. This procedure requires a forward translation (source to target language), a back translation (serving as validation), editing (reconciling differences between the first version and the validation version), external reading (feedback from potential respondents), cultural adjustment, proofreading, and implementation (transferring the translated text into survey software). Second, such a translation service makes it easier to conduct research in African populations, as we will also try to connect users with local researchers.

Survey. We invite you to give us feedback on this proposed service via this short (5 minute) survey (if you are instead interested in being a translator, you can fill in the survey here).

News from the Accelerator – August & September 2020

Today’s newsletter is double-sized, summarizing PSA activities from both August and September. It was a fun and busy 61 days, with an election, some exciting news on publications, our first conference, and much more, detailed below. As always, thank you for everything you do to accelerate psychological science. Happy reading!!

Associate Director Election Results

We had two excellent candidates in our election for an open Associate Director position: Peter Mallik and John Protzko. Please join us in congratulating John Protzko as our new Associate Director following an extremely close network vote. We look forward to seeing the vision and energy that John will bring to this new role. We also thank Peter for running, and are thrilled to continue working closely with him in other PSA roles! 

Get to know John Protzko: He is a metascientist at the University of California, Santa Barbara. He participated in data collection for PSA001 and is the project manager for PSA006. His research interest is in causality as it applies to people, and its implications for theory in fields such as cognitive development, social perception, and metascience. He. Loves. Food. And will travel the world for it.

PSA 2020 Conference

The first ever PSA conference is in the books, and we had tons of fun! Thank you so much to all of the organizers, presenters, moderators, hackers, and attendees for making this first meeting a success. Over the span of three days, people from all over the world presented studies and research on a wide variety of topics, and others tuned in to listen. If you weren’t able to make it this year, or had to miss a session, the talks were recorded and are available to watch here. All videos that were successfully recorded have been uploaded (although some have been lost 😢).

Whether or not you attended the conference, we would like you to provide feedback by taking this short survey! Each person in our network has a valuable perspective on the conference, and we would like to learn from it as we consider the possibility of hosting another meeting in the future.

Study Updates

  • PSA 001: The stage 2 (final) paper was accepted at Nature Human Behaviour with only minor formatting edits required before publication!!!!!!!!!! This marks the true completion of our first ever study at the PSA.
  • PSA 002/003- The 002 and 003 teams are in the final stages of moving our experiments from OpenSesame/Qualtrics to lab.js. Thanks to the hard work of Merle Schuckart, we have drafts of the new programs and are now troubleshooting the studies and ensuring that the web versions retain fidelity to the original protocols. The new version of the experiment should fully integrate both the 002 and 003 procedures into one online protocol, greatly simplifying future data collection.
  • PSA 004: Data collection is still underway and will wrap up at the end of the year! However, we need more CREP reviewers as we close out this project. CREP reviewers look over each lab’s OSF page to ensure that each lab is performing the study as planned. If you or someone you know is interested, please fill out this form.
  • PSA 005: We have finished our latest revision of the stage 1 manuscript based on our communications with the editor. This should be the last revision before stage 1 acceptance. The newest revisions were submitted on Wednesday!!!!
  • PSA 006: 006 will continue data collection through the end of 2020. We have collected lots of great data from all of our members and are excited to see this study through to the end.
  • PSA 007: The PSA’s new project, Semantic Priming Across Many Languages (SPAML), is just getting started. Learn more by checking out the presentation from the conference: click here! The words2manylanguages part of the project is currently recruiting all kinds of help. We are trying to create a repository of datasets to use as our criterion for the modeling section of this project. We have downloaded many datasets, but we need to clean them into a consistent format (naming, synced CSV column names, etc.). Anyone interested in helping will be given credit via CRediT (this task would count toward authorship, along with reviewing the paper). Email buchananlab@gmail.com if you would like to help. Join Slack or email us to stay updated on the two other pre-projects; more announcements on those soon. All three pre-projects must be completed as part of the design for the full 007 project.
  • PSACR 001/002/003: Data collection is complete for PSACR 001 and 003, and PSACR 002 data collection ends on October 23rd. A summary of the data collected is available at http://formr.psysciacc.org/shiny/shiny_app/!

Director’s Statement Installments 

Chris is releasing a Director’s Statement in individual posts/installments to solicit feedback and ultimately guide priorities and activities for the remainder of his Directorship with the PSA. You can read the opening letter here and see the planned section titles with tentative release dates here!

Community Building Update

At the end of December 2020, Crystal Steltenpohl will be stepping down as an Assistant Director of the Community Building and Network Expansion Committee. Her announcement reads:

“I look forward to seeing what new energy the next assistant director will bring. I have really enjoyed working with Natalia Dutra on this committee, whose goals of diversity, inclusion, and community building in the PSA are close to my heart. Of course, even as a non-assistant director, I will still be involved and support the CBNEC. The director group is so awesome and open to feedback. I don’t usually look forward to committee meetings but the all hands meetings are fun and productive and informative. The general PSA meetings remind me of why I got into science. We have an awesome, collaborative community full of talented people from across the globe.

While I have no control over who takes my place, I really hope folks from non-US/CA/EU areas consider going for the position. Having someone from these geographic regions in a leadership role, especially for this committee, is crucial for PSA’s growth and development. I will miss being CBNEC assistant director but I am SO excited about seeing where new leadership can take this committee and the PSA more generally.

If you think you might be even kinda sorta interested in this position, please DM me on Slack or email me at cnsteltenp@usi.edu! I’d love to chat about the group and brainstorm on where it can go from here!”  

Data and Methods Update

We have some pretty exciting things happening in the Data and Methods Committee! We have added a new member to our ranks, Will McAuliffe, and we are ecstatic to work more with him! We’ve started a subcommittee on metascience that our very own Nick Fox is leading. Lastly, we are in the midst of a flurry of activity and progress on the measurement study related to PSA 006.

Secondary Analysis Challenge

A year after the Secondary Analysis Challenge was announced, we have posted the results and submissions! We are proud to report that it was a success: we received eight submissions and learned new information from each of them. Each submission script was checked for computational reproducibility by two members of our team (Abigail Noyce and Patrick Forscher). To read more, please visit the blog post!

IT’S THE MAP!!!

Nicholas Coles has updated our membership map and it looks sick! We continue to grow as a network day by day, but it had been some time since we last updated the map. The previous map had us at 760 researchers; we are now up to 1,021 and counting. Head over to the website to take a gander at how much we have grown!

Cool articles: 

Incentivizing Discovery through the PSA001 Secondary Analysis Challenge

Patrick S. Forscher, Abigail Noyce, Lisa M. DeBruine, Benedict C. Jones, Jessica K. Flake, Nicholas A. Coles, Christopher R. Chartier

Please cite this blog post as:

Forscher, P. S., Noyce, A., DeBruine, L. M., Jones, B. C., Flake, J. K., Coles, N. A., & Chartier, C. R. (2020, May 26). Incentivizing Discovery through the PSA001 Secondary Analysis Challenge. https://psysciacc.org/2020/09/14/incentivizing-discovery-through-the-psa001-secondary-analysis-challenge/

On September 1, 2019, we announced a new initiative to promote the re-use of a large dataset we collected for the PSA’s first study. Called the Secondary Analysis Challenge, this initiative started from the premise that much of the potential richness in psychological datasets is currently untapped.

The Challenge aimed to tap that richness using two methods. The first method was relatively standard: we created extensive documentation of the project data and meta-data. The second method was a bit more unusual: we provided incentives to research teams who were willing to follow an analysis pipeline that we thought would minimize the chance of false positives and maximize the chance of true discoveries.

Specifically, with the help of a generous donation out of Hans Rocha IJzerman’s book income, we were able to offer $200 awards to up to 10 teams that:

  • Submitted a preregistered script that analyzed an exploratory segment of our data
  • Revised the script in response to a computational reproducibility check
  • Used the revised script to analyze a confirmatory segment of the data that was separate from the exploratory data (see the sketch after this list)
  • Posted a preprint to PsyArXiv with a brief description of the results
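
For anyone who wants to run a similar challenge, here is a minimal sketch of the exploratory/confirmatory data split that this pipeline presupposes. It is not the PSA’s actual release code; the file names, the 20% exploratory fraction, and the seed are illustrative assumptions.

    # Sketch of a reproducible exploratory/confirmatory split (illustrative only).
    # The confirmatory segment is withheld until preregistered scripts exist.
    import pandas as pd

    SEED = 20190901  # any fixed seed makes the split reproducible

    full = pd.read_csv("psa001_ratings.csv")                # hypothetical file name
    exploratory = full.sample(frac=0.2, random_state=SEED)  # assumed 20% exploratory
    confirmatory = full.drop(exploratory.index)

    exploratory.to_csv("psa001_exploratory.csv", index=False)    # released first
    confirmatory.to_csv("psa001_confirmatory.csv", index=False)  # released after prereg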

What happened?

We received a total of eight submissions to the challenge. Each submission consisted of a link to a preregistered script that conducted an analysis on the exploratory segment of the PSA001 data.

Two members of our team (Abigail Noyce and Patrick Forscher) checked the submitted scripts for computational reproducibility. These checks involved running the submitted analysis script on one of our computers. We focused primarily on ensuring that the scripts ran without error, but we also sometimes commented on other issues, such as unusual parameter estimates and possible bugs. We did not comment at all on the validity or theoretical grounding of the analyses; in most cases we did not even have access to this information.
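
The “does it run?” portion of such a check can be partly automated. Below is a minimal sketch that assumes each submission is a single R script runnable via Rscript; the Challenge itself does not specify the scripts’ language or layout, so every path and name here is hypothetical.

    # Automated "does it run?" check over a folder of submitted R scripts.
    # Paths and the Rscript assumption are hypothetical, not the Challenge's setup.
    import subprocess
    from pathlib import Path

    Path("logs").mkdir(exist_ok=True)

    for script in sorted(Path("submissions").glob("*.R")):
        result = subprocess.run(
            ["Rscript", str(script)],
            capture_output=True, text=True, timeout=3600,
        )
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{script.name}: {status}")
        # Keep logs so issues (errors, odd estimates, bugs) can be reported back.
        Path("logs", script.name + ".log").write_text(result.stdout + result.stderr)

A pass on this check only means the script executes; as noted above, validity and theoretical grounding still require human review.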

If the computational reproducibility check led to changes in the code, we asked the submitting teams to add a comment to the top of the revised script listing the changes. These revised scripts were uploaded to the project OSF page.

After these computational reproducibility checks, we released the confirmatory segment of the data. We asked the teams to use the revised scripts to analyze the confirmatory segment. To receive the award, the proposing team also had to post a brief preprint reporting the results to PsyArXiv. We did not impose any strong requirements on the contents of these preprints; we merely asked that they be made public.

Carlota Batres, one of our Challenge participants, mentioned specifically that the Challenge structure provided her with opportunities she would not otherwise have had. “I hope there will be more initiatives like this one which leverage the collaborative power of the PSA,” she commented.

What are the preprints like?

Most, but not all, of the preprints rely on connecting the PSA001 face rating data to some other secondary dataset, such as the Chicago Face Database, the Project Implicit Demo Website datasets, or the UN Migration database. Aside from that similarity, the preprints varied substantially in structure, format, and content. Some are fairly brief descriptions of results. Some are submission-ready articles with publication-quality figures. The topics range from an investigation of the “halo effect” to the possible link between face perception and the race and gender IAT scores averaged across regions. The Challenge seems to have elicited a variety of submissions that bear on a variety of scientific topics. 

We think the preprints cover many interesting topics. However, no one within the PSA reviewed the preprints themselves for scientific veracity. You can help ensure that we advance science through this exercise by looking through the preprints yourself and providing comments and feedback to the authors. To facilitate this process, we have included links and descriptions of the preprints at the end of this post.

Lessons for Future Challenges

We think this Challenge was a success. The Challenge also holds some lessons for people who wish to facilitate their own similar initiatives:

  • Communicate a clear schedule. Clarity of communication helps keep the Challenge process predictable for the participants.
  • Conduct computational reproducibility checks. These checks were effortful: I estimate that I spent at least an hour and a half per submission, including running code, debugging, and communicating with submitters. However, the checks uncovered multiple issues and sometimes led to substantive revisions to the analysis plan. The effort was worthwhile.
  • Enrich the target dataset with meta-data. Meta-data are data about the data — in other words, the time of data collection, information about the stimuli, and other details about the data collection process. These meta-data are important for primary studies, but are crucial for secondary analyses. In this Challenge, we archived a huge amount of meta-data alongside our primary data and documented these in our data management plan. These meta-data greatly facilitated the task of generating interesting exploratory analyses.
  • Build pipelines to other datasets. The extensive documentation of the PSA001 data made it easy to merge our dataset with other interesting datasets. In fact, we included Chicago Face Database ID variables in the data that we released and explicitly noted in our data management plan how to merge our data with the Chicago Face Database. This is another step that, I think, allowed the Challenge to be generative for exploratory analysis. (A sketch of such a merge follows this list.)
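
As a concrete illustration of that last lesson, once both datasets document a shared key, the merge itself is only a few lines. The file and column names below are hypothetical placeholders, not the actual PSA001 or Chicago Face Database variable names.

    # Merge PSA001 ratings with Chicago Face Database norms on a shared stimulus ID.
    # All file and column names are hypothetical placeholders.
    import pandas as pd

    psa = pd.read_csv("psa001_ratings.csv")    # assumed to include a CFD ID column
    cfd = pd.read_csv("cfd_norming_data.csv")  # Chicago Face Database norms

    merged = psa.merge(cfd, on="cfd_id", how="left")  # assumed shared key "cfd_id"
    print(merged.shape)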

Our Challenge format should not be viewed as a substitute for other forms of peer review. By design, the Challenge did not evaluate the merits of the theoretical logic, the analysis plan, or criteria for inference. Hopefully, these are issues that peer reviewers evaluate if and when these proposals are submitted for consideration at scientific journals.

Overall, we view the Challenge format as a promising supplement that can enhance the scientific value of large datasets. We look forward to observing other innovations people adopt to enhance the value of psychological data.

Appendix: Secondary Analysis Challenge Preprints

Preprint 1: PSA001 Secondary Analysis: Examining the “attractiveness halo effect”

Authors: Carlota Batres

Abstract: Research has found that attractiveness has a positive “halo effect”, where physically attractive individuals are ascribed with socially desirable personality traits. Most of the research on this “attractiveness halo effect”, however, has been conducted using Western samples. Therefore, this report aims to examine the “attractiveness halo effect” across eleven world regions using thirteen ratings on faces, including attractiveness, that the Psychological Science Accelerator network collected. We found that for both male and female faces, attractiveness generally correlated positively with the socially desirable traits and negatively with the socially undesirable traits. More specifically, across all eleven world regions, individuals rated as more attractive were rated as more confident, emotionally stable, intelligent, responsible, sociable, and trustworthy as well as less weird. These results replicate previous findings of the “attractiveness halo effect” in Western samples and suggest that the positive effect of attractiveness can be found cross-culturally.

Preprint 2: Is facial width-to-height ratio reliably associated with social inferences? A large cross-national examination

Authors: Patrick Durkee and Jessica Ayers

Abstract: Previous research suggests that facial width-to-height ratio (fWHR) may be associated with behavioral tendencies and social judgments. Mounting evidence against behavioral links, however, has led some researchers to invoke evolutionary mismatch to explain fWHR-based inferences. To examine whether such an explanation is needed, we leveraged a large cross-national dataset containing ratings of 120 faces on 13 fundamental social traits by raters across 11 world regions (N = 11,481). In the results of our preregistered analyses, we found mixed evidence for fWHR-based social judgments. Men’s fWHR was not reliably linked to raters’ judgments for any of the 13 trait inferences. In contrast, women’s fWHR was reliably negatively associated with raters’ judgments of how dominant, trustworthy, sociable, emotionally stable, responsible, confident, attractive, and intelligent women appeared, and positively associated with how weird women appeared. Because these findings do not follow from assumptions and theory guiding fWHR research, the underlying theoretical framework may need revising.

Preprint 3: Variance & Homogeneity of Facial Trait Space Across World Regions [PSA001 Secondary Data Analysis]

Authors: Sally Xie and Eric Hehman

Abstract: This preregistration is part of the PSA secondary analysis challenge. We investigate how the facial ‘trait space’ shifts across countries and world regions, using the PSA_001 dataset shared by the Psychological Science Accelerator. The facial trait space refers to the interrelationships between many of the trait impressions that people infer from faces. Here, we examine whether this trait space is more homogeneous (or less differentiated) in some cultures than others.

Preprint 4: Hester PSA001 Preregistration Preprint—Region- and Language-Level ICCs for Judgments of Faces

Authors: Neil Hester and Eric Hehman

Abstract: We report the results of preregistered analyses of the PSA001 face perception data. We tested whether the target-level intra-class correlations (ICCs) would be higher in specific regions (i.e., more culturally homogeneous samples) than in the global data set (i.e., a less culturally homogeneous sample). We also report perceiver-level ICCs as well as by-trait perceiver- and target-level ICCs.

Preprint 5: Do regional gender and racial biases predict gender and racial biases in social face judgments?

Authors: DongWon Oh and Alexander Todorov

Abstract: Trait impressions from faces are more simplified for women than men. This bias stems from gender stereotypes; when strong stereotypes exist for a group of faces (e.g., of women’s or Blacks’), they are evaluated more positively/negatively when they fit/violate the stereotypes, making the impressions simpler (i.e., more one-dimensional). In this preregistered study, using trait impression ratings of faces collected from various world regions (11,000+ participants in 48 countries), scores of implicit associations (18,000+ and 212,000+ participants in 200+ countries), and mixed-effects models, we ask (1) whether simplified facial impressions are found for women and Blacks across regions and (2) whether the regional level of stereotypes about genders and races is correlated with the level of simplification in the face-based impressions of women and Blacks, respectively. The results were not coherent across analyses. The interpretation of the results and the limitations of the study are discussed.

Preprint 6: Hierarchical Modelling of Facial Perceptions: A Secondary Analysis of Aggressiveness Ratings

Authors: Mark Adkins, Nataly Beribisky, Stefan Bonfield, and Linda Farmus

Abstract: The Psychological Science Accelerator’s (PSA) primary project tested for latent structure using exploratory factor analysis and confirmatory factor analysis, but we decided to diverge from this approach and model individual traits separately. Our interest was mainly in examining the interplay between “stimulus ethnicity” and “stimulus sex” to discover how differing levels of these criteria differ across region, country, lab, etc. While the necessary and prerequisite hierarchical structural information about each trait could certainly be found within the primary project’s dataset, we did not assume that any specific factor structure from the PSA’s primary analysis would necessarily hold; we therefore modeled the data from each trait separately using a mixed model framework.

Preprint 7: Population diversity is associated with trustworthiness impressions from faces

Authors: Jared Martin, Adrienne Wood, and DongWon Oh

Abstract: People infer a number of traits about others based simply on facial appearance. Even when inaccurate, face-based trait impressions can have important behavioral consequences, including voting behavior and criminal sentencing. Thus, understanding how perceivers infer others’ traits is an important social and psychological issue. Recent evidence suggests that face-based trait impressions may vary by culture. In the present work, we attempt to understand cultural differences in face-based trait impressions. As part of the Psychological Science Accelerator’s Secondary Data Analysis Challenge, we report a set of pre-registered analyses testing how cultural differences in present-day diversity relate to a) 13 face-based trait impressions, b) sensitivity to physical features of the face, and c) the mental structure underlying trait impressions. We find that greater present-day diversity might be related to lower trustworthiness ratings, in particular. We discuss this finding in the context of other recent work and suggest further analysis of the mental structure of face-based trait impressions across cultures.

Preprint 8: The Facial Width-to-Height Ratio (fWHR) and Perceived Dominance and Trustworthiness: Moderating Role of Social Identity Cues (Gender and Race) and Ecological Factor (Pathogen Prevalence)

Authors: Subramanya Prasad Chandrashekar

Abstract: People effortlessly form trait impressions from faces, and these impressions can affect a variety of important social and economic outcomes. Trait impressions based on facial features can be approximated to distinct dimensions: trustworthiness and dominance (Oosterhof & Todorov, 2008). One of these facial features, the facial width-to-height ratio (face ratio), is associated with trait impressions. I tested whether the social category (gender, race) of the target being perceived shapes the relationship between face ratio and perceptions of dominance and trustworthiness. In this preregistered study, using trait impression ratings of faces collected from 8,800 participants across 44 countries, I employ mixed-effects analysis and report results on (1) the direct influence of the social categories (gender and race) of the target on perceived dominance and trustworthiness, (2) the moderating role of social categories (gender and race) on the direct relationships between face ratio and perceived dominance and trustworthiness, and (3) the moderating role of pathogen prevalence on the direct relationships between face ratio and perceived dominance and trustworthiness.