RAOP

Assessment & Evaluation

RAOP uses a structured assessment and evaluation framework aligned to program objectives: (1) strengthen educator capability in inquiry-based STEM research workflows, (2) support evidence-based reasoning using simulation-to-hardware activities, and (3) produce classroom-ready lessons and assessments that educators can implement in grades 5–12 settings. The framework prioritizes artifact-based evidence collected every cycle, embeds lightweight formative checks during labs, and rotates one validated standardized instrument per year to maintain rigor while minimizing burden. Findings are reviewed biannually with the External Advisory Board and used for continuous improvement across cohorts.

What RAOP measures
  • Growth in inquiry-based experimental design and evidence-based reasoning through simulation-to-hardware workflows.
  • Growth in scientific literacy skills for interpreting results, evaluating evidence, and communicating findings.
  • Quality and classroom readiness of educator-produced lessons and assessments derived from RAOP labs.
  • Feasibility for adoption in high-need Local Education Agencies (LEAs) through clear materials, constrained classroom workflows, and practical assessment tools.

A central deliverable across cohorts is a robotics- and automation-friendly curriculum package (lesson + student materials + assessment rubric + implementation notes) that can be adopted and adapted by high-need LEAs.

Primary Outcome Measures

The instruments below are used to measure educator growth and to document evidence aligned to RAOP learning outcomes. Results are summarized at the cohort level and used to refine future cycles.

Standardized instruments are administered using a pre/post or post-only design as appropriate for the annual cohort and program feasibility, with decisions guided by educator burden and advisory board recommendations. Artifact-based evidence and formative checks are collected every cycle and serve as the primary evidence base for continuous improvement.
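As an illustration of how cohort-level pre/post summaries for the rotating instrument might be produced, the sketch below computes simple descriptive statistics from a hypothetical de-identified score export. The file name and column names (pre_score, post_score) are assumptions for illustration, not the program's actual data pipeline or reporting format.

    # Minimal sketch (illustrative only): cohort-level pre/post summary for the
    # rotating standardized instrument. File name and column names are assumed.
    import csv
    from statistics import mean, stdev

    def cohort_summary(path):
        """Return cohort-level descriptive statistics for de-identified pre/post scores."""
        pre, post = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                pre.append(float(row["pre_score"]))
                post.append(float(row["post_score"]))
        gains = [b - a for a, b in zip(pre, post)]
        return {
            "n": len(pre),
            "pre_mean": round(mean(pre), 2),
            "post_mean": round(mean(post), 2),
            "mean_gain": round(mean(gains), 2),
            "gain_sd": round(stdev(gains), 2) if len(gains) > 1 else None,
        }

    if __name__ == "__main__":
        # Hypothetical de-identified export; summaries are reported at cohort level only.
        print(cohort_summary("cohort_scores.csv"))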

Core Evidence (Collected Every Cycle)
Purpose: Document educator learning and classroom translation using authentic, artifact-based evidence aligned to guided inquiry-based learning (GIBL).
Timing: Throughout Weeks 1–3, with final submission at the end of Week 3.
Evidence collected
  • GIBL reflection logs (Predict–Test–Observe–Explain–Reflect) tied to selected modules
  • Educator-developed lesson plan/curriculum draft (grades 5–12) derived from RAOP labs
  • Classroom adaptation plan (constraints, feasibility, materials, timing)
  • Implementation notes (barriers, supports, troubleshooting outcomes) used for annual refinement
Notes
  • This is the primary evidence base every year and is designed to be low-burden and classroom-relevant.
Formative Checks During Labs (Virtual + On-site)
Purpose: Monitor learning during the simulation-to-hardware workflow and provide timely support without high-stakes testing.
Timing: Throughout Weeks 1–3 (embedded in each selected module).
Evidence collected
  • Short exit tickets/check-ins linked to lab objectives
  • Observation checklists for setup readiness and expected outputs (virtual and hardware)
  • Lab artifacts (plots, screenshots, brief interpretation prompts) for coaching and reflection
Notes
  • Formative checks are used for instructional improvement and educator support.
Rotating Standardized Instrument (One Primary Per Year)
Purpose: Maintain rigorous cohort-level outcome measurement while minimizing educator burden by rotating one validated instrument each annual cycle.
Timing: Pre (start of virtual phase) and Post (end of on-site phase), for the selected instrument in that year.
Evidence collected
  • Cohort-level pre/post summaries for the selected instrument
  • Triangulation with artifact-based evidence and educator reflections
Notes
  • Planned rotation: Year 1—EDAT; Year 2—TOSLS; Year 3—CLASS (or equivalent); Years 4–5 repeat based on findings.
  • If a different validated attitude instrument is substituted, the same pre/post timing and reporting structure are preserved.

Evaluation Components

Evaluation combines learning outcomes, implementation fidelity, and classroom translation evidence to ensure the program delivers measurable value and produces implementable classroom outputs.

Implementation Fidelity (Program Delivery)
Verify that the core program elements are implemented as designed across the two-week virtual phase and the one-week on-site phase.
Methods
  • Structured observation checklists for virtual sessions and on-site labs
  • Attendance and participation tracking for required sessions
  • Facilitator logs documenting constraints, deviations, and adjustments
Outputs
  • Cohort-level fidelity summary and lessons learned
  • Actionable improvements for the next cycle (pacing, supports, logistics)
Learning Outcomes (Educator Growth)
Assess educator learning outcomes aligned to inquiry-based STEM research workflows, simulation-to-hardware validation, and evidence-based reasoning.
Methods
  • Artifact-based evidence collected every cycle (GIBL reflections, lab outputs, interpretation prompts)
  • One rotating standardized pre/post instrument per year (EDAT, TOSLS, or CLASS/equivalent)
  • Formative checks embedded in modules for coaching and support
Outputs
  • Cohort-level outcome summaries and trends
  • Evidence mapped to program objectives and planned refinements
Classroom Translation (Deliverables Quality)
Evaluate the quality and classroom feasibility of educator-produced materials derived from RAOP labs (grades 5–12).
Methods
  • Rubric-based review of educator deliverables (lesson plan, student materials, assessment tool, implementation notes)
  • Peer review and program-team feedback cycles (draft → revise → finalize)
  • Advisory board review of cohort-level deliverable quality and adoption constraints (biannual)
Outputs
  • Classroom-ready implementation package per educator (lesson + student materials + assessment + implementation notes)
  • Identified strengths/gaps used to improve templates and supports for the next cohort
Educator Experience (Satisfaction + Feasibility)
Understand educator experience, workload feasibility, and barriers to classroom adoption in high-need LEAs.
Methods
  • End-of-week pulse surveys (Weeks 1–3) focused on clarity, pacing, and support needs
  • End-of-program survey on feasibility and intended classroom use
  • Optional follow-up check-in during the academic year (adoption status and needs)
Outputs
  • Cohort-level experience summary (what worked, what should change)
  • Targeted supports for classroom adoption (time, resources, constraints)
Feedback and refinement process
  • During the virtual and on-site phases, formative checks and facilitator logs identify where educators need additional scaffolding, clarity, or pacing adjustments.
  • At the end of each cohort, pre/post outcome summaries and deliverable reviews are used to revise module selection, instructional supports, and PD templates for the next cycle.
  • Biannual advisory reviews provide external oversight and technical guidance. Advisory recommendations are tracked and incorporated into the next iteration of the RAOP curriculum and assessment materials.
  • Across five years, evidence is consolidated to produce a stable, robotics- and automation-friendly curriculum that is feasible for adoption by high-need LEAs.

Technical & Advisory Support Team

RAOP evaluation is supported by continuous technical guidance and external oversight. The advisory board reviews progress biannually and collaborates with program leadership to ensure alignment with educational and research objectives.

Technical Support Team
  • Provides technical support for both virtual and on-site laboratory environments.
  • Assists with setup verification, troubleshooting, and workflow reliability.
  • Supports consistent execution of the simulation-to-hardware workflow used in RAOP activities.

Technical support is provided in coordination with the RAOP program team to ensure smooth delivery and a consistent participant experience.

External Advisory Board

The board provides oversight, strategic feedback, and technical guidance. Reviews are conducted biannually.

Dustin J. Tyler
Arthur S. Holden Jr. Professor of Biomedical Engineering
Professor – Electrical, Computer, and Systems Engineering
Director, Human Fusions Institute (HFI)
Case Western Reserve University

Almuatazbellah Boker
Collegiate Assistant Professor
Virginia Tech

Peter Martin
Director of Research & Development
Quanser
What the board reviews
  • Cohort-level outcome summaries and educator experience trends.
  • Quality and classroom feasibility of educator deliverables and assessment tools.
  • Alignment of module selection with RAOP objectives and adoption constraints in high-need LEAs.
  • Refinement priorities for the next annual cycle (templates, pacing, supports, dissemination).

Timeline

The timelines below describe how RAOP assessment and evaluation are executed for the first-year cycle (2026) and how the process repeats and improves over the five-year program.

Year 1 Cycle (2026) — Assessment & Evaluation Timeline

This timeline is designed to be clear for educators, program staff, and the External Advisory Board. Evidence is summarized at the cohort level and used for continuous improvement.

Each entry below lists when the activity occurs, the activity itself, the responsible owner, the evidence collected, and the resulting output or decision.

Jan–Feb 2026 (Recruitment & selection window)
Activity: Finalize cohort selection; confirm educator eligibility; distribute onboarding instructions and required accounts.
Owner: RAOP Program Team
Evidence collected
  • Roster confirmation and onboarding completion checklist
  • Educator baseline information required for program logistics
Output / decision: Confirmed cohort and readiness status for the summer cycle.

Jun 2026 (Technical onboarding)
Activity: Software access verification; orientation to the QLabs workflow; expectations for artifacts and deliverables; evaluation overview.
Owner: RAOP Program Team + Quanser Engineering Team
Evidence collected
  • Onboarding completion logs
  • Educator readiness self-check and support tickets (as needed)
Output / decision: Educators ready to begin the Week 1 virtual phase with a standardized setup.

Week 1 (Virtual phase)
Activity: Pre-assessment for the rotating standardized instrument selected for the annual cycle (e.g., EDAT, TOSLS, or CLASS/equivalent) + foundational guided inquiry modules; formative checks embedded in labs.
Owner: Educators + RAOP Program Team
Evidence collected
  • Pre-assessment submissions for the rotating standardized instrument (cohort-level analysis)
  • Lab artifacts (plots, screenshots) and short GIBL reflections
  • Pulse survey (clarity, pacing, support needs)
Output / decision: Baseline dataset for the annual instrument and first-week learning evidence for coaching.

Week 2 (Virtual phase)
Activity: Deeper analysis and design tasks; draft classroom materials (lesson outline + assessment draft); peer review cycle.
Owner: Educators + RAOP Program Team
Evidence collected
  • Formative check-ins and lab artifacts aligned to selected modules
  • Draft lesson plan + draft assessment artifact
  • Peer review notes and revision actions
Output / decision: Draft classroom-ready package prepared for on-site validation.

Week 3 (On-site phase)
Activity: Hardware-aligned validation and demonstration; revise lesson/assessment for feasibility; capture performance evidence.
Owner: Educators + RAOP Program Team + Quanser Engineering Team
Evidence collected
  • On-site lab artifacts (validated results and observations)
  • Revised lesson plan + revised assessment with implementation notes
  • Fidelity checklist and facilitator log
Output / decision: Final classroom implementation package per educator.

End of Week 3 (Immediate post-program)
Activity: Post-assessment for the rotating standardized instrument selected for the annual cycle + final deliverables submission + end-of-program feedback survey.
Owner: Educators + RAOP Program Team
Evidence collected
  • Post-assessment submissions for the rotating standardized instrument (cohort-level analysis)
  • Final deliverables package (lesson + student materials + assessment tool/rubric + implementation notes)
  • End-of-program survey results
Output / decision: Cohort outcome summary and consolidated recommendations for refinement.

Fall 2026 (Implementation follow-up)
Activity: Follow-up check-in on classroom adoption in high-need LEAs; identify barriers; provide targeted supports where feasible.
Owner: RAOP Program Team
Evidence collected
  • Implementation status survey or brief interview notes
  • Examples of classroom adaptation (where educators can share)
Output / decision: Adoption evidence and needs analysis to strengthen Year 2 supports.

Dec 2026 (Biannual advisory review)
Activity: Advisory Board review of cohort-level outcomes, deliverable quality, and adoption evidence; approval of refinement priorities.
Owner: External Advisory Board + RAOP Leadership
Evidence collected
  • Cohort-level summary dashboard/brief (no individual reporting)
  • Refinement plan (module selection, pacing, PD templates, assessment supports)
Output / decision: Year 2 improvement plan and documented advisory recommendations.

Five-Year Program Timeline — Continuous Improvement and Adoption

This high-level view shows the repeatable annual cycle and the feedback loop used to refine the curriculum and strengthen adoption by high-need LEAs.

Each entry below lists when the activity occurs, the activity itself, the responsible owner, the evidence collected, and the resulting output or decision.

Each year (Jan–Mar)
Activity: Recruit and select the cohort; confirm eligibility and logistics; finalize the module subset for the annual experiments.
Owner: RAOP Program Team
Evidence collected
  • Recruitment metrics and selection rubric summaries (cohort-level)
  • Annual module selection rationale tied to program outcomes
Output / decision: Annual cohort plan aligned to goals and constraints.

Each year (Jun)
Activity: Technical onboarding and evaluation orientation; confirm access to virtual labs; share expectations for artifacts/deliverables.
Owner: RAOP Program Team + Quanser Engineering Team
Evidence collected
  • Onboarding completion logs
  • Support tickets/resolution summaries
Output / decision: Consistent starting conditions across cohorts.

Each year (Jul)
Activity: Deliver the two-week virtual phase + one-week on-site phase using a simulation-to-hardware workflow; collect pre/post, formative evidence, and deliverables.
Owner: Educators + RAOP Program Team + Quanser Engineering Team
Evidence collected
  • Cohort-level pre/post outcome summaries for the rotating standardized instrument selected for the annual cycle
  • Module artifacts and formative check results
  • Final classroom implementation packages
  • Fidelity logs and experience surveys
Output / decision: Annual cohort outcomes + classroom-ready robotics/autonomy curriculum artifacts suitable for high-need LEA adoption.

Each year (Aug–Oct)
Activity: Analyze cohort-level outcomes; synthesize strengths and gaps; revise templates, supports, and recommended module pathways.
Owner: RAOP Program Team
Evidence collected
  • Cohort analysis memo and prioritized improvement actions
  • Versioned updates to PD templates and assessment supports
Output / decision: Refined, more adoptable RAOP curriculum for the next cohort.

Each year (Fall semester)
Activity: Follow up on classroom adoption in high-need LEAs; document barriers and feasible support strategies.
Owner: RAOP Program Team
Evidence collected
  • Follow-up implementation status evidence (survey/interviews)
  • Examples of adapted classroom lessons (as available)
Output / decision: Adoption evidence that informs targeted improvements and dissemination.

Biannually (2x/year)
Activity: External Advisory Board review of cohort summaries and the refinement plan; strategic and technical feedback.
Owner: External Advisory Board + RAOP Leadership
Evidence collected
  • Cohort-level summary brief and action plan
  • Advisory recommendations and tracking of implemented changes
Output / decision: Governance oversight and documented continuous improvement.

End of project (Year 5)
Activity: Consolidate multi-year evidence; document the final curriculum package and adoption guidance for high-need LEAs; summarize outcomes and refinements.
Owner: RAOP Leadership + Program Team
Evidence collected
  • Five-year cohort trend summaries (cohort-level)
  • Final curriculum package and implementation guidance
  • Refinement history showing feedback → changes → outcomes
Output / decision: A robotics- and automation-friendly curriculum package designed for adoption and reuse by high-need LEAs.
