Assessment & Program Review

Annual Program Learning Outcomes Assessment

Assessment is a process that ensures we continue to create and maintain high-quality learning environments that promote student learning and a positive learning experience for all our students. This site is designed to support that work: it will help you learn relevant terminology, develop meaningful program learning outcomes, and sustain, use, and report your efforts to assess student learning.

The following figure illustrates the steps that take place during a typical PLO assessment cycle:

Figure: The PLO assessment cycle process

Programs may enter the cycle at any step, depending on their progress in sustaining the assessment process. To learn more about the steps in the PLO assessment cycle, see the resources listed under Planning Your Annual Assessment Process below.

Planning Your Annual Assessment Process

  • Best Practices

    Our colleagues from the (now defunct) Academic Assessment Council recommend the following best practice in program learning outcomes assessment: having an assessment coordinator at the college level and an assessment facilitator at the department or program level ensures the process moves forward and assessment reports are submitted on time.

    College Assessment Coordinator
    A College Assessment Coordinator is appointed by the Dean of each college and has the following responsibilities:

    • Convene Program Assessment Facilitators within the college
    • Coordinate program reporting and archiving within the college
    • Provide support for facilitators (training, consultation, discussion, etc.)
    • Facilitate the sharing of best practices, tools, and report information between programs and with the college community, and train the incoming College Assessment Coordinator
    • Work with faculty and chairs to integrate the assessment and program review processes
    • Coordinate with the Dean and chairs to advance assessment work and quality
    • Ensure timely reporting to the Dean and beyond as needed
    • Provide feedback on assessment reports to facilitators, chairs, and programs
    Program Assessment Facilitator/Facilitation
    A Program Assessment Facilitator may be selected in each program through a process determined by the program, or a program can take collective or shared responsibility for the facilitation of assessment tasks and responsibilities (e.g. through an Assessment Committee). Whether there is a designated person or the role is shared, the facilitator or equivalent lead has the following assessment responsibilities:
    • Convene program assessment meetings that address short- and long-term assessment plans, involving faculty in the planning process, the assessment work, and the implementation and tracking of improvements
    • Facilitate semester or yearly assessment work in conjunction with faculty, which can include, but is not limited to, PLO selection, identification and collection of sample work, tool selection or revision, data evaluation and analysis, and discussion of improvements and implementation plans
    • Develop assessment knowledge and skills by taking advantage of campus resources that support program assessment activity
    • Attend university and college assessment-related events, workshops, and meetings
    • Work with the Department Chair, the College Assessment Coordinator, and program/department faculty to advance the assessment process and its quality
    • Report on program findings using established templates, meeting established deadlines and responding to additional requests for information
    • Identify who is responsible for producing and delivering the final reports to the College Assessment Coordinator
    • Plan for and prepare a replacement Facilitator if needed. New facilitators shall not be responsible for reporting previous work unless an agreement was made indicating that this was an expectation
  • Program Mission and Goals
    Mission Statement Defined

    A mission statement is a holistic vision of the values and philosophy of the program. The program mission needs to be consistent with the campus and/or college mission. The overall question that the mission statement should answer is: What is the overall, unique purpose of this program?

    Example from Business Information Systems

    We offer an innovative, technologically focused business program providing students with a foundation for life-long learning that leads to professional success. Our program is built on applied learning aligned with leading industry standards. We affirm the importance of ethical conduct, stakeholder participation, scholarship, and public service.

    Developing Goal Statements

    Program learning goals are statements that fundamentally answer the question: What do we expect graduates of this program to be able to do, know, or value? In other words, what are the exit skills, knowledge, and values that we want this program's graduates to have? Program goals are general statements about the knowledge, skills, attitudes, and values expected of graduates.

    Examples
    • Knowledge: Students know basic biological principles and concepts.
    • Skill: Students can use appropriate technology tools.
    • Value: Students respect the professional code of ethics for business practice.
  • Program Learning Outcomes

    Program Learning Outcome (PLO) statements take the program learning goals and focus on how students can demonstrate that the goals are being met. In other words, PLOs answer the question: How can graduates of this program demonstrate that they have the needed/stated knowledge, skills, and/or values?

    PLOs are clear, concise statements that describe how students can demonstrate their mastery of program learning goals. Each student learning outcome statement must be measurable. Measures are applied to student work, which may include assignments, work samples, tests, and other artifacts that demonstrate student ability/skill, knowledge, or attitude/value.

    Criteria for Development

    The following criteria can be used to develop and evaluate student learning outcome statements:

    • Student Perspective: statement focuses on what students learn, not on what the course covers
    • Clarity: statement is concise and understandable by all interested groups including students, parents, faculty, and others specific to the program.
    • Potentially observable and/or measurable: statement describes how students can demonstrate that they have the knowledge, skills, and/or attitudes/values specified in the goals.
    Examples
    • Knowledge:  Students can distinguish between science and pseudo-science.
    • Skill: Graduates can locate appropriate sources by searching electronic and traditional databases.  
    • Value: Graduates appreciate the need to vary counseling services to people who are different from themselves with respect to gender, age, culture, sexual orientation, and other significant characteristics.

    Tips for Writing Student Learning Outcomes (PDF)

  • Curriculum Matrix
    Building the Program's Curriculum Matrix

    Once the learning goals and program learning outcomes (PLOs) are developed for the program, the next step in program assessment is to develop the program's curriculum matrix. This matrix lists the goals and associated PLOs against the courses in the program. In each cell, the faculty identify where each PLO is introduced, practiced, or mastered.

    The curriculum matrix can be used as the basis of the assessment plan. In later tasks the faculty and/or assessment coordinator can determine what student artifact or work sample (signature assignment) can be used to measure progress towards the PLO and/or when the assessment will take place. In addition, the matrix needs to be reviewed with respect to balance and omissions. Questions to ask at this point include the following:

    • Are there learning outcomes that are not associated with any course?
    • Are there courses that do not contribute to the achievement of any PLO?  (Why do we need these courses in our curriculum?)
    • Do we have appropriate levels of the desired performance associated with each PLO with respect to 1) introducing the learning outcome, and  2) practicing the learning outcome before mastery is expected?
    • Do we have a place where we can measure mastery within the context of the program for each PLO?
    • If answers to the above are “no,” what changes do we need in our curriculum to achieve the desired results?

    Below are examples of curriculum matrices. Letters are used to represent where a PLO is introduced (I), practiced (P), or mastered (M). In some matrices a “D” is used instead of an “M” to indicate the course in which students demonstrate mastery of the PLO within the context of the program; mastery is defined in a similar way in either case. Whichever letter is used, the M or D level is where students can demonstrate the highest level of achievement with respect to the stated PLO within the context of the program.

    Curriculum Matrix Examples:
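
    As a simplified, hypothetical illustration (the course numbers and mappings below are placeholders, not a recommended design), a matrix for a program with three PLOs might look like the following:

                       PLO 1   PLO 2   PLO 3
        Course 101       I       I
        Course 210       P               I
        Course 320               P       P
        Course 499       M       M       M

    Reading down a column shows how a PLO develops across the curriculum; reading across a row shows which outcomes a course supports. Gaps in a column or an empty row signal the balance and omission questions listed above.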
  • Strategies for Assessing Program Learning Outcomes

    Deciding how to conduct the assessment, and which strategies to use, is a key component of the planning process. In general, strategies are classified as direct (where actual student behavior or work is measured or assessed) or indirect. Indirect measures include surveys, focus groups, and other activities that gather impressions or opinions about the program and/or its learning goals. Direct measures are most effective when they are also course-embedded, meaning the work done by the student counts toward the course grade. Most studies of assessment data show that students take the activity more seriously when the work is also used in the grading criteria.

    While indirect measures can be useful, assessment of learning must rely mostly on direct measures; this is increasingly the mandate from accrediting agencies, including WSCUC. It is also acceptable, and often valuable, to have multiple measures for the same goal. For example, following a direct assessment of critical thinking, a focus group discussion could be used to learn more about how students viewed the assignment.

    Examples of Direct Assessment include but are not limited to the following: 

    Written work, Projects, Performances, or Presentations. According to Suskie (2009), these kinds of assignments typically have students demonstrate skills and are considered alternatives to objective exams or essays. Unlike traditional tests, they give students the opportunity to learn while working, and they are considered more authentic in that they are more realistic and challenging, often requiring complex answers or outcomes.

    Capstone Assignments. Similar kinds of course-embedded assignments include Signature Assignments and Performance Tasks. The work produced by these assignments is typically driven by multiple program goals or student learning outcomes and challenges students to produce work that demonstrates a variety of outcomes. Advantages of course-embedded assessments include no additional assignments or work for students or faculty, a direct measure of progress on program-specific, mission-linked learning goals actually covered in the curriculum, increased involvement of faculty and students in assessment, and the ability to address deficiencies in individual student learning before graduation. The primary disadvantage is the time needed to develop the assessment system and to collect and analyze the assessment data.

    Portfolios.  According to Suskie (2009), “A portfolio is compelling evidence of what a student has learned. It assembles in one place evidence of many different kinds of learning and skills. It encourages students, faculty, and staff to examine student learning holistically – seeing how learning comes together – rather than through compartmentalized skills and knowledge. It shows not only the outcome of a course or program but also how the student has grown as a learner. It’s thus a richer record than test scores, rubrics, and grades alone” (p. 204).

    Examples of Indirect Assessment include but are not limited to the following: 

    • Assignment of Course Grades.
    • Surveys, such as satisfaction, attitudinal, feedback, employer or alumni perceptions.
    • Focus Groups.
    • Interviews.
    • Self-evaluations, such as student or alumni self-ratings of learning. 

    Suskie, L. (2009). Assessing Student Learning: A Common Sense Guide. San Francisco, CA: Jossey-Bass.

  • Developing a Plan
    Recap Assessment Planning Tasks
    1. Create the program mission and goals
    2. Identify program learning outcomes (PLOs) for each goal
    3. Build the outcomes/course curriculum matrix
    4. Develop the assessment plan
    5. Close the loop: make assessment matter

    For Each PLO:

    • Refine the detailed rubric or measure (defined in the plan)
    • Conduct the actual evaluation activity (collect data)
    • Generate analysis – get evaluation results
    • Decide what action to take (if any)
    • Determine follow-up – when do we assess again
    Documenting the Plan
    • When and how often should this PLO be measured?
    • If we use course-embedded measures (recommended), what student assignment or artifact can we use for the assessment?
    • How can we measure it?
    • Who needs to be involved in measuring it?
    • Who gets the data, reflects on the results?
    • How can we be sure that the results are used to make meaningful program changes that will improve student learning?

    Starting with the curriculum matrix developed in the previous steps, add two columns to the Excel spreadsheet: one for the time frame in which to pilot (practice) the measurement, and one for when you plan to collect and use the data. Since none of us are perfect at assessment, a pilot or practice run is a good idea; it can involve real data that you gather, evaluate, and act on, or it can simply be a test of the rubric. Also make sure that the student work to be assessed is identified in the course where the assessment will take place, and include a copy of the assignment and scoring rubric as part of the plan. If you are using a rubric that has already been developed and tested on similar assignments, piloting it is less important than for a rubric or measure that is unique to the program or has not been used before by any other program.
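
    For illustration only, two plan rows for hypothetical PLOs might look like the following (the courses, artifacts, and time frames are placeholders to be replaced with the program's own):

        PLO     Course (M/D level)   Signature assignment / artifact       Pilot rubric     Collect & use data
        PLO 1   Course 499           Final research report (rubric)        Fall, Year 1     Spring, Year 1
        PLO 3   Course 320           Team project presentation (rubric)    Spring, Year 1   Fall, Year 2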

    Lessons Learned
    1. It is better to do the full cycle of assessment on one or two PLOs than to try and do everything at once.
    2. It is better to start at the mastery level and see whether you have a problem than to gather measures at each of the levels (introduced, practiced, and mastered).
    3. Keep it Simple (Use the KISS method.)
    4. It is better to try something – Just Do It – rather than waiting for things to be perfect.
    5. Involve the faculty who teach the courses in both the measurement and analysis of results.
    6. Don’t aggregate across traits or characteristics in the rubric. Results within each category can lead you to an action plan, while aggregating the data (useful for assigning a grade) loses the granularity needed for action plans; see the brief illustration after this list.
    7. Using the results is probably the most important step.
    8. Keep in mind this is not grading.
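
    As a hypothetical illustration of lesson 6, suppose a four-trait critical thinking rubric (each trait scored 1-4) yields the following program-level averages; the numbers are invented for illustration:

        Trait                            Average score
        Identifies the issue                 3.6
        Evaluates evidence                   3.5
        Considers alternatives               2.1
        Draws justified conclusions          3.6
        Aggregate (mean of traits)           3.2

    The aggregate score of 3.2 looks acceptable, but the trait-level results point to a specific weakness (considering alternative perspectives) that faculty can target with an action plan.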
  • Closing the Loop

    For many (perhaps most) programs, this step is the most difficult and is typically where the assessment effort gets derailed. If the analysis is not compelling and sufficiently granular, constituents are frequently unable to reach consensus on which actions the data indicate. Unable even to agree on a set of possible actions, the program takes no action and fails to “close the loop.” To be successful at this step, programs need to present the data to stakeholders who can take action (department chairs, program coordinators, deans, etc.) in a form that is sufficiently granular for a set of actions to be developed. Sometimes graphical data, or data broken down by relevant student characteristics, helps define the boundaries for action. Once a set of possible actions is compiled, each action can be screened on criticality, cost, time, and other dimensions to create an agreed-upon subset to implement.

    Actions can range from concluding that student performance on a learning goal meets expectations to making major curriculum changes. Other actions may include increasing admission requirements, remediation, adding prerequisites, increasing or changing specific assignments in existing courses, and providing support structures such as tutoring or help sessions. Another action could be to reevaluate whether the learning goal, or the expectations for performance on that goal, is appropriate. Note that actions can also include modifications to the program’s assessment schedule or any other aspect of the program’s assessment process.

    Finally, the recommendations need to be implemented, and follow-up is required to see whether the implemented change actually made a difference. This is largely a management function, and there has to be some way to make the execution of actions and the follow-up part of the ongoing work of department chairs, program coordinators, and/or deans. Here is an example of how this is done in the College of Business.

Additional Information