A Roundtable Discussion with A. Brooks Bowden, Jessaca Spybrook, and Elizabeth Tipton
SEER Standards and the Future of Evidence Quality in Education Research
Moderated by Vivian Wong
Hosted by Theory and Practice of Field Experiments and the Collaboratory Replication Lab
Monday, November 23 from 4:00 to 5:00 pm (EST) via Zoom.
Abstract
Nearly 20 years after the Institute of Education Sciences (IES) was established and prioritized the study of causal questions, the agency introduced the Standards for Excellence in Education Research (SEER) in 2019. IES Director Mark Schneider wrote that the goal of these new standards was to “codify good science practices” and move “education to the next stage” (Schneider, 2019). The SEER principles are based on the premise that rigorous education research is “transparent, actionable, and focused on consequential outcomes” (IES, 2020). To this end, the SEER principles encourage researchers to:
Pre-register studies
Make findings, methods, and data open
Identify interventions' core components
Document treatment implementation and contrast
Analyze interventions' costs
Focus on meaningful outcomes
Facilitate generalization of study findings
Support scaling of promising results
In this roundtable discussion, three leading methodological researchers (Bowden, Spybrook, and Tipton) will offer their perspectives on the methodological challenges the field currently faces, on new standards for improving the rigor, applicability, and quality of education research, and on the intersection of methodological research and funding policy. The roundtable will also ask panelists to consider what is next for education research.
Recommended Resources
Standards for Excellence in Education Research (SEER)
Anderson, D., Spybrook, J., & Maynard, R. (2019). REES: A registry of efficacy and effectiveness studies in education. Educational Researcher, 48(1), 45–50. https://doi.org/10.3102/0013189X18810513
Levin, H. M., Bowden, A. B., Belfield, C. R., Shand, R. D., & McEwan, P. J. (2017). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis. SAGE Publications.
Tipton, E., & Olsen, R. B. (2018). A review of statistical methods for generalizing from evaluations of educational interventions. Educational Researcher, 47(8), 516–524. https://doi.org/10.3102/0013189X18781522
Bios
Dr. Bowden is an assistant professor in the Education Policy Division of the Graduate School of Education at the University of Pennsylvania. She also serves as Director of Training for the Center for Benefit-Cost Studies of Education at Teachers College, Columbia University. Professor Bowden specializes in program evaluation and economic analysis, with a focus on the ingredients method for cost-effectiveness analysis, including both its applications and its methodology.
Dr. Spybrook is a full professor of educational leadership, research, and technology at Western Michigan University, specializing in evaluation, measurement, and research. She earned her Ph.D. in education from the University of Michigan, where she also received an M.A. in applied statistics and a B.A. in elementary education. Her research focuses on improving the quality of the designs and power analyses of group randomized trials in education. She coauthored the software and documentation for Optimal Design Plus, a program that helps researchers plan adequately powered experiments, and has led the development of the Registry of Efficacy and Effectiveness Studies (REES) in education.
Dr. Tipton is an Associate Professor of Statistics, the Co-Director of the Statistics for Evidence-Based Policy and Practice (STEPP) Center, and a Faculty Fellow in the Institute for Policy Research at Northwestern University. Dr. Tipton's research focuses on the design and analysis of field experiments, with a particular focus on issues of external validity and generalizability in experiments; meta-analysis, particularly of dependent effect sizes; and the use of (cluster) robust variance estimation.