Guidelines for Developing an Effective and Sustainable Credit Program Review

College instructional leadership should answer the following key questions when developing a credit program review process.

What is a program?

To keep it simple, a program might be either a discipline, such as History, English, or Art, or a workforce/technical program, such as Manufacturing, Nursing, or Engineering. For the purposes of this discussion, a credit program review covers either a discipline or a combination of workforce/technical courses that comprise a program. If the latter, the courses included should be clearly identified during the review process.

What is the purpose of the review, i.e. what do you hope to accomplish?

At Richland College we used the three Rs to define our purpose. ROBUST = is the discipline/program growing? RELEVANT = do completing students emerge with at least one marketable skill? RESILIENT = is the discipline/program operating at a level that is financially sustainable? The program review metrics should reflect this defined purpose.

How frequently does the review need to occur?

The external environment is changing so rapidly that it makes sense to review selected program data elements annually. Improvement actions may occur less frequently unless the data indicate otherwise. Often, an examination of the annual data reveals a need for additional resources such as supplies, classroom or lab space, or additional full-time faculty.

Who is ultimately responsible for conducting the review and what is the deliverable?

Although the data may come from IR, the review and analysis should be conducted by the instructional deans and program faculty. Reviews should be documented on a guided discussion template and stored electronically as evidence for SACSCOC visits. The guided discussion template can be developed collaboratively with IR and instructional leaders, keeping in mind that one part of the role of IR is helping administrators, faculty, and staff to ask good questions for continuous improvement. Deliverables, therefore, are the completed discussion templates for each discipline/program and action plans where appropriate.

What metrics will be tracked and published?

Which metrics are primary and which might be secondary? It is a good idea to publish a top-level dashboard linked directly to the identified purpose of the review. Secondary, or supplemental, data might be made available at the request of instructional deans.

In what other ways might data from the program review be used by the institution?

Data from discipline/program reviews may be used by college leadership in a variety of ways. For example:

  • To determine needed strategic initiatives.

  • To determine hiring levels for faculty in various programs, i.e., which programs need additional full-time or part-time faculty.

  • To determine facility needs or to make a case for additional buildings.

  • Instructional deans may use these data during budget building to justify budget increases.

Some Additional Thoughts:

  • As a college embarks on conducting regular Program Reviews, answers to these six questions may very well change over time. Effective discipline/program reviews are essential to the long-term financial health of the institution, but most importantly to student success.

  • It is important that program review not be regarded as a vehicle to eliminate programs, although in some cases that may happen. Most often, program review results are used to strengthen credit programs or to address shifts in the marketplace. Examples include photography, which has shifted from film to digital, and journalism, which is moving from the print and broadcast newsroom toward podcasting and independent journalism. For college programs to remain relevant, it is important to stay in tune with these changes.
