Notes

Gathering Evidence, Components, and the Assessment Cycle

Ways of Gathering Evidence


  • Formative – those undertaken while student learning is taking place, with the purpose of improving teaching and learning; designed to capture students’ progress toward institution- or program-level outcomes based on criteria and standards of judgment
  • Summative – those obtained at the end of a course or program; the purpose of which is to document student learning for transcripts and for employers, donors, legislators, and other external audiences; designed to capture students’ achievement at the end of their program of study and their undergraduate or graduate education based on criteria and standards of judgment
  • Direct – evidence of student learning that is tangible, visible, and self-explanatory; direct assessments prompt students to represent or demonstrate their learning or produce work so that observers can assess how well students’ texts or responses fit institution- or program-level expectations
    • Example: performances, creations, results of research or exploration, interactions within group problem solving, or responses to questions or prompts
  • Indirect – evidence which provides signs that students are probably learning, but the evidence of exactly what they are learning is less clear and less convincing; capture students’ perceptions of their learning and the educational environment that supports that learning, such as access to and the quality of services, programs, or educational offerings that support their learning
    • Example: student satisfaction, alumni, and employer surveys
  • Objective – one that needs no professional judgment to score correctly (although interpretation of the scores requires professional judgment); examples: multiple-choice and true-false exams (a minimal scoring sketch follows this list)
  • Subjective – yield many possible answers of varying quality and require professional judgment to score
  • Traditional – the kinds of tests that have been around for decades; e.g., objective tests, ‘blue book’ essay questions, and oral examinations
  • Performance – ask students to demonstrate their skills rather than relate what they have learned through traditional tests; e.g., field experiences, laboratory and studio assignments, projects. Also called authentic assessments when they ask students to perform a real-life task. Performance assessments have two components: (i) the assignment or prompt that tells students what is expected of them and (ii) a scoring guide or rubric used to evaluate completed work.
  • Embedded – program assessments which are embedded into course work
  • Add-on – assessments which are in addition to course requirements; e.g., assemble a portfolio, take a standardized test, participate in a survey
  • Local – created by faculty and/or staff
  • Published – those published by an organization external to the institution and used by several institutions
  • Quantitative – use structured, predetermined response options that can be summarized into meaningful numbers and analyzed statistically; place interpretative value on numbers; e.g., the number of right versus wrong answers
  • Qualitative – use flexible, naturalistic methods and are usually analyzed by looking for recurring patterns and themes; e.g., reflective writing, notes from interviews and focus groups; place interpretative value on the observer; e.g., observations of group interaction or an individual’s performance in a simulation
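
To make the Objective category above concrete, here is a minimal scoring sketch in Python; the answer key and responses are hypothetical, and the point is only that producing the raw score requires no professional judgment.

```python
# Minimal sketch: scoring an objective (multiple-choice) test mechanically.
# The answer key and responses below are hypothetical; the point is that no
# professional judgment is needed to produce the raw score.

ANSWER_KEY = ["B", "D", "A", "A", "C"]  # hypothetical 5-item key

def score_objective_test(responses: list[str]) -> int:
    """Count right answers: each item is simply right or wrong."""
    return sum(given == correct for given, correct in zip(responses, ANSWER_KEY))

student_responses = ["B", "D", "C", "A", "C"]  # hypothetical examinee
print(score_objective_test(student_responses), "/", len(ANSWER_KEY))  # prints: 4 / 5
```

Interpreting what 4/5 means for this student still requires professional judgment, which is exactly the distinction the Objective entry draws.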

Components of Assessment

Steps which underlie the assessment of student learning:

  1. Develop learning objectives/outcomes
  2. Check for alignment between the curriculum and the objectives/outcomes
  3. Develop an assessment plan
  4. Collect assessment data
  5. Use results to improve the program
  6. Routinely examine the assessment process and correct, as needed

The Assessment Cycle

Figure: The assessment cycle promoting institutional effectiveness (IE)

    • Goals for learning – express intended results in general terms.  Used to describe broad learning concepts; e.g., clear communication, problem solving, and ethical awareness.
    • Objectives for learning – express intended results in precise terms.  Used to describe specific behaviors students should exhibit; e.g., “graduates in speech communication should be able to interpret non-verbal behavior and to support arguments with credible evidence”.

Outcomes Assessment Plan 

Student learning outcomes should derive from the objectives, which flow from the goals and mission of the program. An assessment plan includes all of these components, specifies how each learning outcome is assessed, and describes how the results of the assessment are evaluated and used for program improvement.
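
One way to see how these pieces fit together is a minimal sketch of a plan as a data structure; every name, measure, and criterion below is a hypothetical placeholder (the objective echoes the speech-communication example above).

```python
# Minimal sketch of an outcomes assessment plan as a data structure, showing
# how outcomes derive from objectives, which flow from program goals.
# All instruments, criteria, and uses of results are hypothetical placeholders.

assessment_plan = {
    "program_goal": "Clear communication",                     # broad concept
    "objective": "Support arguments with credible evidence",   # precise behavior
    "outcomes": [
        {
            "outcome": "Students cite credible sources in a capstone paper",
            "measures": [
                {"type": "direct", "instrument": "capstone paper scored with a rubric"},
                {"type": "indirect", "instrument": "senior exit survey"},
            ],
            "criterion": "80% of students score 3+ on a 4-point rubric",
            "use_of_results": "Revise research-methods unit if criterion is unmet",
        },
    ],
}

for outcome in assessment_plan["outcomes"]:
    print(outcome["outcome"], "->", outcome["criterion"])
```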

 

References:

http://assessment.uconn.edu/assessment-primer/assessment-primer-how-to-do-assessment/

http://assessment.uconn.edu/assessment-primer/assessment-primer-outcomes-assessment-components/

http://www.usf.edu/provost/documents/assessment/certificateprogram-programcomponents.pdf


 


Basic Terminology Used in Assessment

ASSESSMENT

Assessment is the ongoing process of gathering, analyzing, and interpreting evidence of student learning in order to improve teaching and learning. In contrast to evaluation, it is typically formative, process-oriented, and diagnostic (see the Assessment vs. Evaluation comparison below).

EVALUATION

Evaluation is a judgment by the instructor or educational researcher about whether the program or instruction has met its Intended Learning Outcomes (ILO).

TESTING

A method to determine a student’s ability to complete certain tasks or demonstrate mastery of a skill or knowledge of content. Common types include multiple-choice tests and weekly spelling tests. While testing is commonly used interchangeably with assessment, or even evaluation, it can be distinguished by the fact that a test is one form of assessment.

GRADING

Grading could be considered a component of assessment, i.e., a formal, summative, final, and product-oriented judgment of the overall quality or worth of a student’s performance or achievement in a particular educational activity, e.g., a course. Generally, grading also employs a comparative standard of measurement and sets up a competitive relationship between those receiving the grades.

MEASUREMENT

Measurement is the process of quantifying physical qualities of objects and individuals, such as length, weight, height, volume, and density.

PERFORMANCE CRITERIA

The standard by which students’ performance is evaluated. Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for. (New Horizons for Learning)

LEARNING OBJECTIVES

Learning objectives are brief statements that describe what students will be expected to learn by the end of a school year, course, unit, lesson, project, or class period. In many cases, learning objectives are the interim academic goals that teachers establish for students who are working toward meeting more comprehensive learning standards. Learning objectives are also a way to establish and articulate academic expectations for students so they know precisely what is expected of them. When learning objectives are clearly communicated to students, the reasoning goes, students will be more likely to achieve the presented goals. Conversely, when learning objectives are absent or unclear, students may not know what is expected of them, which may then lead to confusion, frustration, or other factors that could impede the learning process.

LEARNING OUTCOMES

Operational statements describing specific student behaviors that evidence the acquisition of desired knowledge, skills, abilities, capacities, attitudes or dispositions. Learning outcomes can be usefully thought of as behavioral criteria for determining whether students are achieving the educational objectives of a program, and, ultimately, whether overall program goals are being successfully met. Outcomes are sometimes treated as synonymous with objectives, though objectives are usually more general statements of what students are expected to achieve in an academic program. (Allen, Noel, Rienzi & McMillin, 2002)

CONSTRUCT

A construct is a proposed attribute of a person that often cannot be measured directly but can be assessed using a number of indicators or manifest variables. Constructs vary in their ease of measurement: some are relatively easy to assess, while others require more subtle or indirect measurement. Constructs are also discussed under other labels, such as theoretical constructs or latent variables, which are interchangeable terms.

ITEM BIAS

When important decisions are made based on test scores, it is critical to avoid bias, which may unfairly influence examinees’ scores. Bias is the presence of some characteristic of an item that results in differential performance for individuals of the same ability but from different ethnic, sex, cultural, or religious groups.
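
As an illustration of the idea only (not an operational bias analysis), the hypothetical Python sketch below compares an item’s pass rate across two groups after matching examinees on total test score; real analyses use formal differential item functioning methods such as Mantel-Haenszel.

```python
# Illustrative sketch, not a full bias analysis: within each total-score band
# (a crude proxy for equal ability), compare the item pass rate across groups.
# All data below are hypothetical.

from collections import defaultdict

# (group, total_score, passed_item) triples for hypothetical examinees
records = [
    ("A", 80, True), ("A", 80, True), ("A", 60, False), ("A", 60, True),
    ("B", 80, True), ("B", 80, False), ("B", 60, False), ("B", 60, False),
]

by_band = defaultdict(lambda: defaultdict(list))
for group, total, passed in records:
    by_band[total][group].append(passed)

# A large gap between groups *within the same ability band* flags possible bias.
for total, groups in sorted(by_band.items()):
    rates = {g: sum(outcomes) / len(outcomes) for g, outcomes in groups.items()}
    print(f"total score {total}: item pass rate by group -> {rates}")
```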

Assessment vs. Evaluation

Comparison of characteristics:

  • Assessment is formative (cyclical, ongoing to improve learning); evaluation is summative (final, to gauge quality).
  • Assessment is process-oriented (how learning is going); evaluation is product-oriented (what has been learned).
  • Assessment is reflective (internally defined criteria/goals); evaluation is prescriptive (externally imposed standards).
  • Assessment is diagnostic (identifies areas for improvement); evaluation is conclusive (arrives at an overall grade/score).
  • Assessment is flexible (adjusts as problems are clarified); evaluation is fixed (determines success or failure in reaching the desired outcome).

Measurement vs. Evaluation

As defined above, measurement is the process of quantifying physical qualities of objects and individuals, such as length, weight, height, volume, and density. There are circumstances, however, where measurement is not possible; this is where evaluation is done on the basis of either comparison or assessment. Evaluation helps in passing judgment about policies, performances, processes, and so on.

 

Reliability vs. Validity

  • Reliability refers to the reproducibility of a measurement; validity refers to how closely the measured value agrees with the true value.
  • Reliability is concerned with the consistency of the measurements, whereas validity is focused on how accurate the measurements are.
  • Saying a measurement is reliable does not mean it is valid.
  • Reliability is related to precision, whereas validity is related to accuracy (see the sketch below).
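
A minimal sketch of the precision/accuracy distinction, using made-up repeated measurements of a quantity whose true value is assumed known:

```python
# Minimal sketch with invented numbers: repeated measurements of a quantity
# whose true value is known. Low spread illustrates reliability (precision);
# a mean close to the true value illustrates validity (accuracy).

from statistics import mean, stdev

TRUE_VALUE = 100.0  # assumed known for the illustration

reliable_not_valid = [90.1, 90.2, 89.9, 90.0, 90.1]     # consistent but off-target
reliable_and_valid = [99.9, 100.1, 100.0, 99.8, 100.2]  # consistent and on-target

for label, readings in [("reliable but not valid", reliable_not_valid),
                        ("reliable and valid", reliable_and_valid)]:
    spread = stdev(readings)                 # reliability: spread of readings
    bias = abs(mean(readings) - TRUE_VALUE)  # validity: distance from truth
    print(f"{label}: spread={spread:.2f}, bias from true value={bias:.2f}")
```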

 

DIRECT vs INDIRECT MEASURES

Direct measures are regularly employed to measure learning in the classroom. Direct measures are those that measure student learning by assessing actual samples of student work. Examples include exams/tests, papers, projects, presentations, portfolios, and performances. Because direct measures capture what students can actually do, they are considered best for measuring levels of achievement of student learning on specific outcomes.

Indirect measures provide a less concrete view of student learning; they capture, for example, attitudes, perceptions, feelings, and values. Indirect measures imply student learning by relying on self-reported data and reports, and they help to substantiate instances of student learning. Indirect measures include surveys, interviews, course evaluations, and reports on retention, graduation, and placement. They are best suited to program- or university-level assessment and are commonly used in conjunction with direct measures of student learning.

Norm-Referenced vs. Criterion-Referenced Tests

Norm-referenced tests are designed to rank test takers on a “bell curve,” that is, a distribution of scores that, when graphed, resembles the outline of a bell: a small percentage of students performing poorly, most performing in the middle, and a small percentage performing well.

In elementary and secondary education, criterion-referenced tests are used to evaluate whether students have learned a specific body of knowledge or acquired a specific skill set, such as the curriculum taught in a course, academic program, or content area. If students perform at or above the established expectations, for example by answering a certain percentage of questions correctly, they pass the test, meet the expected standards, or are deemed “proficient.”
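
To make the contrast concrete, the hypothetical Python sketch below interprets the same raw scores both ways; the 70%-correct proficiency cutoff is an assumed criterion for illustration, not a standard from any source.

```python
# Contrast sketch: the same hypothetical percent-correct scores interpreted
# norm-referenced (rank against the cohort) and criterion-referenced
# (compare against a fixed, assumed cutoff).

scores = [55, 62, 70, 78, 85, 91]  # hypothetical percent-correct scores
CUTOFF = 70  # criterion-referenced standard (assumed for illustration)

for s in scores:
    below = sum(other < s for other in scores)  # norm-referenced ranking
    percentile = 100 * below / len(scores)
    verdict = "proficient" if s >= CUTOFF else "not proficient"
    print(f"score {s}: {percentile:.0f}th percentile (norm) | {verdict} (criterion)")
```

Note that the criterion verdicts would be unchanged if every student scored higher, whereas the percentile ranks always distribute students across the curve.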

Sources:

http://web2.uconn.edu/assessment/docs/resources/Eric_Soulsby_Assessment_Notes.pdf

http://www.apus.edu/community-scholars/learning-outcomes-assessment/university-assessment/glossary.html

http://www.flaguide.org/start/assess_intro.php

http://www.differencebetween.com/difference-between-reliability-and-vs-validity/

http://www.differencebetween.com/difference-between-measurement-and-vs-evaluation/

http://www.differencebetween.com/difference-between-assessment-and-vs-evaluation/

http://www.smu.edu/Provost/assessment/Measures

http://edglossary.org/criterion-referenced-test/