Evaluation
DEFINITION OF EVALUATION
"Evaluation is the process of determining the adequacy of instruction and learning" (Seels & Richey, 1994, p. 54). ECIT candidates demonstrate their understanding of the domain of evaluation through a variety of activities including problem analysis, criterion-referenced measurement, formative evaluation, and summative evaluation (AECT Standards).
REFLECTION
Prior to beginning the ITMA program, my experience with evaluations was limited to completing end-of-course surveys each college semester. I would complete the evaluation as quickly and accurately as possible, but I rarely provided qualitative feedback on the course evaluation form unless something had strongly impacted my learning, whether positively or negatively.
I had hoped that the University would use feedback from the evaluations to make changes to a course's curriculum, or even link the ratings to the professor's performance review. Whether this happened or not, I am not sure, and I do not want to revisit any of these courses to find out.
In my professional experience, I did not know how to create measurable learning objectives, so I would not have been able to evaluate any of the instruction that I was designing. I use the term design loosely, as at that time I did not even know the difference between design and development. At the end of a lesson or learning forum, I would verbally ask learners for their honest feedback, which resulted in little to no feedback as the learners headed straight for the classroom doors. Not having any formal education in evaluation hindered my ability to revise the instruction to address identified weaknesses.
In my current view of evaluation, I recognize the importance, as an instructor, of conducting formative evaluations throughout a course. Formative evaluations let students provide feedback to the instructor, who can then adjust the course's pace, content, instructional materials, and so on before it becomes too late in the semester to make changes.
I currently accomplish this by administering a Kirkpatrick Level 1 evaluation at the end of each instructor-led class, either through an online SurveyMonkey survey or a paper copy of the evaluation. A Level 1 evaluation measures learners' reactions to training so that I can understand how well the training was received. Level 2 and Level 3 evaluations (learning and behavior, respectively) are administered only for high-dollar ($100,000+) and high-impact courses, which do not happen that often.
As a result of the ITMA program, I have identified an area for improvement with the online training available in the learning management system. Currently, we administer evaluations only for instructor-led courses and do not evaluate the self-paced online trainings that we develop for employees. It is important to evaluate instructional programs, whether online or instructor-led, in order to identify weaknesses in the instruction. To solve this problem, I am designing an evaluation for our online courses and defining the process for administering it. Once evaluations are implemented for online courses, I will be able to revise and improve those courses based on learner feedback.
My current approach to evaluation is very different from my prior approach. Previously, I struggled to understand the difference between evaluation and assessment. At the office, I had heard my co-workers use the terms synonymously, and as a result I used them interchangeably as well. I thought the evaluation questions at the end of a training course assessed whether or not the course was effective at meeting its objectives. I now understand that the terms have distinct definitions. An assessment is the systematic collection of data to monitor a student's success in achieving the intended learning objectives. An evaluation is a judgment by the instructor about whether the instruction has achieved its learning objectives. After learning that assessments operate at the student level while evaluations address the instruction itself, I was able to create many assessment and evaluation questions that aligned directly with each instructional program's learning objectives.
ARTIFACTS
5.1 Problem Analysis
"Problem analysis involves determining the nature and parameters of the problem by using information-gathering and decision-making strategies" (Seels & Richey, 1994, p. 56). ECIT candidates exhibit technology competencies defined in the knowledge base. Candidates collect, analyze, and interpret data to modify and improve instruction and ECIT projects (AECT Standards).
Needs Assessment
This artifact is an example of problem analysis because it involved gathering information on a particular situation in order to identify the instructional need and instructional goal. As a part of this needs assessment, I collected, analyzed, and interpreted data in order to create a strategy for solving the instructional problem.
Everyday Object Evaluation
In the Software Evaluation course, students were tasked with analyzing the functionality, appearance, financial aspects, and support of an everyday object. In this artifact, I had to gather and analyze information about a coffee cup. Never in my life had I put so much thought and analysis into an everyday object, but it was a great assignment to learn the basics of problem analysis.
5.2 Criterion-Referenced Measurement
"Criterion-referenced measurement involves techniques for determining learner mastery of pre-specified content" (Seels & Richey, 1994, p. 56). ECIT candidates utilize criterion-referenced performance indicators in the assessment of instruction and ECIT projects (AECT Standards).
Software Evaluation Checklist
This artifact was created to help educators evaluate educational software for their classrooms and make informed purchase decisions. The categories in the checklist are instructional content, opportunities for practice and feedback, appearance and navigation, technical requirements, and training. At the end of the checklist, the evaluator totals the scores from each category and determines whether or not the software is acceptable for the institution to purchase.
Training Assessment
This artifact, in sections 6 and 7, lists the performance checklist for learners to complete after finishing the multimedia program. I specify the context in which the performance will occur and how the checklist will be used. The chart shows the alignment between the assessment item and the learning objectives.
5.3 Formative and Summative Evaluation
"Formative evaluation involves gathering information on adequacy and using this information as a basis for further development. Summative evaluation involves gathering information on adequacy and using this information to make decisions about utilization" (Seels & Richey, 1994, p. 57). ECIT candidates integrate formative and summative evaluation strategies and analyses into the development and modification of instruction, ECIT projects, and ECIT programs (AECT Standards).
Multimedia Program Summative One-on-One Evaluation Results and Report
In one of my ITMA courses, my peers reviewed and evaluated an instructional program that I created. This artifact summarizes their review and outlines a plan for revising the instructional program based on their feedback. This is an example of a formative evaluation because revisions were made to the product while it was still being created.
Expert Formative Evaluation Report
In this artifact, I conducted an expert review of an iPad app called GoSkyWatch, an educational app that allows students to explore the solar system and learn about planets, stars, and constellations. In the expert review, I provided a product description, an evaluation of the instructional context, and an evaluation of the instructional components, general message display, and navigation. In my final thoughts, I recommend the use of the free app, provided that the course facilitator is available to answer any questions students may have.
5.4 Long Range Planning
"Long-range planning that focuses on the organization as a whole is strategic planning....Long-range is usually defined as a future period of about three to five years or longer. During strategic planning, managers are trying to decide in the present what must be done to ensure organizational success in the future" (Certo et al., 1990, p. 168). ECIT candidates demonstrate formal efforts to address the future of this highly dynamic field, including the systematic review and implementation of current ECIT developments and innovations (AECT Standards).
Planning a Distance Education Program
In this artifact, I outline considerations that educational institutions need to investigate before investing resources in a distance education program. Creating a successful distance education program is not accomplished overnight; it requires a long-term strategic plan to ensure future organizational success for the institution, its faculty, and its students.
Integrating the Web into the Classroom
In this artifact, I explain how the Web can be better integrated into the classroom to enhance the educational experience over the long term. The artifact also emphasizes that a long-term technology plan is important for educators because students will need access to computers and high-speed Internet in the classroom for activities and assignments.