IDE 641 – Techniques in Educational Evaluation
Grade: A
Professor: Robert E. Tornberg

This course introduces foundational concepts and methods for evaluating instructional and training programs. Students learn to plan, design, and conduct evaluations using both quantitative and qualitative approaches. Emphasis is placed on aligning evaluation strategies with stakeholder needs, program goals, and evidence-based decision-making in educational and organizational contexts.

Primary Project

Project Title: Formative Evaluation of the SMART Money Budgeting Module
Contributors: Soroth San, Emma Pate, and Chynara Turatbek Kyzy

Project Description:
In Spring 2025, as part of IDE 641: Techniques in Educational Evaluation at Syracuse University, I co-led a team of three to conduct a formative evaluation of the SMART Money Budgeting Module. This class project involved comprehensive planning and analysis of instructional content and objectives, followed by implementing evaluation strategies to gather feedback on learner engagement and effectiveness. We developed evaluation instruments such as surveys and observational protocols aligned with the module’s learning objectives, analyzed data to identify trends, and synthesized findings into actionable recommendations. As co-leader, I coordinated project planning, ensured team alignment, shared authorship of all report sections, and incorporated peer feedback to improve clarity and consistency. Throughout this process, I engaged in ongoing professional development by refining evaluation methodologies, reflecting on our collaborative workflow, and co-presenting our findings. This experience demonstrated my strengths in educational evaluation design and implementation, as well as my commitment to continuous improvement.

View the full Formative Evaluation Report here.

Formative Evaluation Report.docx
Feedback

View the full Formative Evaluation Presentation here.

Slides

Reflection & Self-Assessment

Conducting this formative evaluation project enhanced my understanding of how to systematically assess instructional materials to improve their clarity, relevance, and instructional effectiveness. I developed skills in preparing evaluation tools, facilitating expert and user interviews, and applying structured qualitative coding techniques, including initial, focused, and thematic coding. A major challenge involved supporting users with different backgrounds, particularly when facing language and cultural barriers, which I addressed by adapting interview strategies and clarifying participant roles. This experience changed my perspective on evaluation: I began to see it not just as a data-gathering activity but as a dynamic, learner-centered process requiring flexibility and cultural sensitivity. The evaluation process also reinforced the importance of defining criteria early, simulating real-world use, and asking probing, purposeful questions. Ultimately, this project contributed to my growth as a thoughtful evaluator and instructional designer, demonstrating that effective evaluation is critical for improving learner experiences by refining instruction before full implementation.

Secondary Projects

Project Title: Summative Evaluation Plan: SMART Money Budgeting Module
Contributors: Soroth San, Emma Pate, and Chynara Turatbek Kyzy

Project Description:
In Spring 2025, as part of IDE 641: Techniques in Educational Evaluation at Syracuse University, Emma Pate, Chynara Turatbek Kyzy, and I co-led the development of a summative evaluation plan for the SMART Money Budgeting Module. The SMART Money module is a budgeting component of Syracuse University’s peer counselor financial literacy training. The evaluation design includes two phases: Phase 1, a post-module quiz and confidence survey to assess learning outcomes and preparedness; and Phase 2, interviews conducted after peer mentoring begins to gather learner reflections. This approach measures outcomes such as budgeting knowledge and mentor readiness, yielding actionable feedback for continuous improvement. The project integrates planning, analysis, design, implementation, and evaluation components, aligning with best practices in instructional evaluation.

View the full Summative Evaluation Plan here.

Summative Evaluation Plan with detailed feedback
Feedback

Reflection & Self-Assessment

Developing this summative evaluation plan deepened my understanding of how to measure instructional effectiveness in a structured and outcome-focused manner. I learned how to align evaluation questions directly with learning objectives, define performance benchmarks, and design realistic data collection strategies. This project strengthened my skills in creating phased evaluation structures using quizzes, surveys with open-ended questions at the end, and post-implementation interviews to assess both immediate learning and delayed application. One challenge was determining the most feasible methods within time and resource constraints. I addressed this by focusing on practical tools, such as a 90% quiz threshold and structured interviews, while acknowledging alternative approaches, such as pre-tests or control groups. This experience shifted my thinking about evaluation from abstract theory to a strategic, stakeholder-oriented process. The final product demonstrates my ability to design summative evaluations that are clear, actionable, and aligned with instructional goals, contributing to the field by showing how evaluation can support continuous improvement in financial literacy training.

Project Title: Evaluating Two Instructional Websites: Khan Academy and Coursera
Author: Soroth San

Project Description:
This individual project was completed for the course IDE 641 – Techniques in Educational Evaluation (Spring 2025, Syracuse University). The assignment involved independently evaluating two widely used instructional websites: Khan Academy and Coursera’s “Programming Foundations with JavaScript, HTML, and CSS,” offered by Duke University. Drawing from principles of instructional design, cognitive science, and established evaluation frameworks (e.g., Harmon & Reeves, 1998), I customized an evaluation tool to assess each platform across a range of usability, instructional quality, accessibility, interactivity, and design features.

The work primarily represents the Implementation and Evaluation component of IDD&E, as it required the application of a formal evaluation process to determine instructional effectiveness. Additionally, the project engaged Planning and Analysis, through needs identification and contextual review of course goals, learner characteristics, and delivery methods. I also applied Ongoing Professional Development by reflecting on my own learning process and refining evaluation tools to suit the specific instructional context.

My responsibilities included designing and adapting the evaluation form, conducting in-depth reviews of both platforms, analyzing the instructional strengths and limitations of each, and presenting comparative findings supported by evidence. The project enhanced my competencies in instructional evaluation, critical thinking, and data-informed analysis. It also deepened my understanding of what constitutes effective instructional design in digital learning environments, insights I will carry forward in future instructional design and evaluation work.

View the full Evaluation of the Two Instructional Websites here.

Soroth_Evaluating Two Instructional Websites
Feedback

Reflection & Self-Assessment

This assignment deepened my understanding of instructional website evaluation by challenging me to adapt an existing evaluation tool to fit the unique features of two platforms: Khan Academy and Coursera. Rather than using formal evaluation questions, I focused on refining and contextualizing evaluation criteria from Harmon & Reeves’ framework to better align with the instructional design, content delivery, and learner engagement elements of each site. Through reviewing and comparing both platforms, I strengthened my skills in critical analysis, judgment, and applying evaluation tools in a context-sensitive way. One key challenge was deciding how to adapt general criteria to suit different instructional structures (self-paced versus structured, free versus paid); I learned to prioritize what fits the evaluand. This project shifted my perspective on evaluation as a flexible, interpretive process. It contributes to my growth as an evaluator who can thoughtfully assess educational tools for quality, relevance, and learner needs.

Project Title: Logic Model
Contributors: Soroth San and Emma Pate

Project Description:
This project was completed as a pair assignment for IDE 641 – Techniques in Educational Evaluation (Spring 2025, Syracuse University). The objective was to construct a comprehensive logic model in response to a case study focused on improving math engagement and achievement among underserved middle school students through a gamified mobile learning application.

Working collaboratively, we analyzed the case study’s context, target learners, and instructional goals to identify key program components. The resulting logic model mapped out the app’s inputs, activities, outputs, short- and long-term outcomes, and ultimate impact. It also illustrated how the integration of gamification and learning analytics could foster self-regulation, engagement, and academic improvement, while promoting meaningful teacher involvement.

This project primarily represents the Planning and Analysis component of the IDD&E framework, as it involved identifying learner needs, defining program structures, and mapping outcomes. It also incorporates Design and Development, by illustrating how educational technologies and instructional strategies could be aligned for impact.

My responsibilities included analyzing the program context, aligning components of the logic model, and collaborating on the logic model table: Resources, Activities, Outputs, Short-Term Outcomes, Long-Term Outcomes, and Impacts. Through this work, I strengthened my competencies in logic modeling, audience-focused planning, and evidence-based communication. It deepened my understanding of how well-structured evaluation tools, such as logic models, can bridge instructional intentions and measurable outcomes in real-world educational settings.

View the full Logic Model for the Instructional Case Study here.

Soroth & Emma Logic Model Template
Feedback

Reflection & Self-Assessment

This project enhanced my ability to conceptualize and structure a program using a logic model framework. Working collaboratively, my colleague and I interpreted a case study and identified the necessary resources, activities, outputs, outcomes, and long-term impacts. I learned how to break down abstract program goals, such as improving self-regulation and math achievement, into measurable components tied to short- and long-term outcomes. A key challenge was determining which inputs and external factors to include without making assumptions unsupported by the case. I overcame this by relying on the logic model’s structured thinking process and validating decisions through team discussion. This experience shifted my perspective from thinking about isolated interventions to viewing educational improvement as a complex system with multiple interacting components. The final product reflects my growth in program evaluation and planning. It contributes to the field by modeling how instructional designers can use logic models to create clear, scalable, and measurable pathways for improving learner outcomes.
