The Clearinghouse on Assessment and Evaluation (Ericae.net) is a comprehensive online resource dedicated to advancing knowledge in educational assessment, evaluation, and research methodology. Whether you are a student, educator, researcher, or policymaker, Ericae provides the tools, insights, and guidance needed to understand how learning is measured—and how those measurements can be used to improve outcomes.
At its core, Ericae is built to bridge the gap between theory and practice. You’ll find foundational content that explains essential concepts like validity, reliability, and different types of assessment, alongside more advanced material covering psychometrics, data analysis, and research design. For those actively working in education, the site also offers practical strategies for classroom assessment, program evaluation, and data-driven decision-making.
Ericae goes beyond traditional content by serving as a true resource hub. Visitors can explore curated reading paths through the Assessment Library, access peer-reviewed research from the Practical Assessment, Research & Evaluation (PARE) journal, and utilize tools like the Test Locator and ERIC database guides to find high-quality academic resources. Downloadable templates, rubrics, and evaluation tools are also available to support real-world application.
As education continues to evolve, Ericae remains focused on the future—covering topics like digital assessment, artificial intelligence, equity in measurement, and responsible data use. The goal is not just to inform, but to empower users to make better decisions, design better assessments, and contribute to more effective and equitable educational systems.
Whether you’re just getting started or looking to deepen your expertise, Ericae is your trusted guide to understanding and improving educational assessment and evaluation.

Avoiding Bias in Test Item Writing
Avoiding bias in test item writing protects score validity and fairness, helping you create trusted assessments that measure what matters.

What Makes a Good Test Item? Key Principles
Learn the key principles of clarity, fairness, and alignment that make a good test item, helping you write better assessments and measure learning with confidence.

Designing Performance-Based Assessment Tasks
Learn to design performance-based assessment tasks that align to standards, reveal real learning, and deliver fair, useful results that inform better teaching.

Short Answer vs. Essay Questions: When to Use Each
Learn when to use short answer vs. essay questions to assess the right skills, score more fairly, and improve the quality of your tests.

How to Write High-Quality Essay Questions
Learn how to write high-quality essay questions that test analysis and reasoning, helping you design better assessments that reveal real understanding.

Constructed Response vs. Selected Response Items
Constructed response vs. selected response items explained simply: compare validity, scoring, alignment, and test-taker impact to choose the right item format for your assessments.

Best Practices for Writing Distractors in MCQs
Learn best practices for writing distractors in MCQs to create fairer questions, reduce guessing, and measure real understanding more accurately.

Common Flaws in Multiple-Choice Questions (and Fixes)
Spot common flaws in multiple-choice questions and fix them fast with practical tips to write clearer, fairer assessments that better measure learning.

How to Write Effective Multiple-Choice Questions
Learn how to write effective multiple-choice questions that assess knowledge fairly, reduce ambiguity, and improve results across tests and training.

Building a Feedback Loop for Assessment Improvement
Build a feedback loop for assessment improvement with real evidence, clear analysis, and smart updates that make every assessment more effective.

Post-Test Analysis: What to Look For
Learn how post-test analysis turns pilot and field test data into clear decisions about fairness, reliability, validity, and assessment readiness.

Continuous Improvement in Assessment Design
Improve assessment design with systematic pilot and field testing to catch flaws early, refine scoring and timing, and build more reliable results.

Using Statistical Analysis in Field Testing
Using statistical analysis in field testing turns pilot data into clear, defensible decisions on item quality, score meaning, and test readiness.

How to Identify Weak Test Items Through Testing
Learn how to identify weak test items through pilot and field testing so you can improve assessments, boost score validity, and spot flawed questions fast.

Quality Control in Assessment Development
Quality control in assessment development helps ensure fair, accurate, and trusted test results so every score supports smarter decisions.
