Assessment, CSU, Chico

Developing a Plan

Recap Assessment Planning Tasks

  1. Create program mission and goals
  2. Identify program student learning outcomes (SLOs) for each goal
  3. Build the outcomes/course alignment matrix
  4. Develop the assessment plan
  5. Closing the loop – making assessment matter

For Each SLO:

  • Refine the detailed rubric or measure (defined in the plan)
  • Conduct the actual evaluation activity (collect data)
  • Generate analysis – get evaluation results
  • Decide what action to take (if any)
  • Determine follow-up – when do we assess again

Documenting the Plan

  • When and how often should this SLO be measured?
  • If we use course-embedded measures (this is recommended), what student assignment or artifact can we use for the assessment?
  • How can we measure it?
  • Who needs to be involved in measuring it?
  • Who gets the data and reflects on the results?
  • How can we be sure that the results are used to make meaningful program changes that will improve student learning?

Starting with the alignment matrix developed in the previous steps, add two columns to the Excel spreadsheet: one for the time frame to pilot (practice the measurement) and one for when we plan to collect and use the data. Since none of us is perfect at doing assessment, a pilot or practice run is a good idea. The pilot can use real data that you gather, evaluate, and act on, or it can simply be a trial of the rubric. We also make sure that the student work is identified in the course where we want to do the assessment, and it is helpful to include a copy of the assignment and scoring rubric in the plan. If you are using a rubric that has been developed and tested on similar assignments, piloting it is probably less important than for a rubric or measure that is unique to the program or has not been used before.
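As a sketch, the extended spreadsheet might look like the following (the SLO names, course numbers, and dates are hypothetical examples, not part of any particular program's plan):

```
SLO                          | Courses where assessed | Pilot (practice) | Collect & use data
-----------------------------|------------------------|------------------|-------------------
SLO 1: Written communication | ENGL 130, CAPSTONE 499 | Fall 2024        | Spring 2025
SLO 2: Critical thinking     | CORE 300, CAPSTONE 499 | Spring 2025      | Fall 2025
```

Each row carries over from the alignment matrix; only the last two columns are new.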

Lessons Learned

  1. It is better to do the full cycle of assessment on one or two SLOs than to try to do everything at once.
  2. It is better to start at the mastery level and see if you have a problem than to gather measures at each of the levels (introduced, practiced, and mastered).
  3. Keep it simple (use the KISS method).
  4. It is better to try something – Just Do It – than to wait for things to be perfect.
  5. Involve the faculty who teach the courses in both the measurement and the analysis of results.
  6. Don’t aggregate across traits or characteristics in the rubric. Results within each category can lead you to an action plan; aggregating the data (useful for assigning a grade) loses the granularity needed for action plans.
  7. Using the results is probably the most important step.
  8. Keep in mind that this is not grading.