News, notes, and observations from the James River Valley in northern South Dakota with special attention to reviewing the performance of the media--old and new. E-Mail to MinneKota@gmail.com

Monday, April 23, 2012

Why our colleges are failing

Criticism of our educational institutions is expanding to include our colleges and universities, and it follows the same pattern as the criticism directed at public schools: significant factors that have reshaped higher education are not even mentioned in the studies.

Writing in the New York Times, David Brooks cites some recent studies of higher education that list evidence of its failures:

  • Students experienced a pathetic seven percentile point gain in skills during their first two years in college and a marginal gain in the two years after that.
  • Nearly half the students showed no significant gain in critical thinking, complex reasoning and writing skills during their first two years in college.
  • Student motivation actually declines over the first year in college.
  • Only a quarter of college graduates have the writing and thinking skills necessary to do their jobs.
  • Colleges today are certainly less demanding. In 1961, students spent an average of 24 hours a week studying. Today’s students spend a little more than half that time.
These points have been discussed and dealt with for many years by people in higher education. There has been a change in student attitude and achievement that is noticeable to anyone who has taught or worked with college students. However, the studies that David Brooks cites raise questions about whether they provide accurate measurements and information on the underlying problems. Brooks' solution is for higher education to administer value-added testing to evaluate instruction. Many colleges and universities do just that, which Brooks seems unaware of. The South Dakota system has administered such assessment tests for years. The Board of Regents has required a series of proficiency examinations, senior exit exams, and other forms of university-wide assessment. The individual campuses explain these assessment requirements on their websites.

David Brooks draws conclusions from the studies he cites about factors that the studies do not purport to measure.

One of the studies is the Wabash Study, which is currently being conducted at 29 institutions. Its purpose is for the institutions to use evidence to identify an area of student learning or experience that they wish to improve, and then to create, implement, and assess changes designed to improve those areas. The study is therefore an effort to identify where the institutions want to make improvements and to devise ways to do so. It is not an overall assessment of student outcomes.

Another of Brooks' sources is the book Academically Adrift.  It "cites data from student surveys and transcript analysis to show that many college students have minimal classwork expectations -- and then it tracks the academic gains (or stagnation) of 2,300 students of traditional college age enrolled at a range of four-year colleges and universities. The students took the Collegiate Learning Assessment (which is designed to measure gains in critical thinking, analytic reasoning and other "higher level" skills taught at college) at various points before and during their college educations, and the results are not encouraging."  (Inside Higher Ed)

Brooks also cites We're Losing Our Minds, which makes the case "that too little of what happens in institutions of "higher education" deserves to be called "higher learning" -- "learning that prepares students to think creatively and critically, communicate effectively, and excel in responding to the challenges of life, work and citizenship."  (Inside Higher Ed)

The gist of these studies is that college students as a whole aren't reaching the levels of knowledge and communication skill expected of them, and the common assumption is that a lack of rigor in the institutions is the cause.

These studies reiterate the same complaints I heard in department and college meetings, in faculty lounges, and at professional meetings during the last 20 years of my teaching. In the ten years since I retired, I have continued to hear them. But as with the many studies finding fault with higher education, no one is listening to the faculty or asking them what they think of the state of higher education and what is responsible for it. Instead, college administrations and governing boards have done what David Brooks says we need more of--testing, accountability schemes, and other ways of managing and diminishing the role of faculty in collegial operation.

At one time, the measure of student achievement and accomplishment was the grade point average and the college transcript. Grade points were considered a reliable and telling indicator of student performance, one with a consistent significance throughout the higher education system. There have always been colleges that were regarded as diploma mills, and the system in general took the reputations of the institutions into account when assessing a transcript or grade point average. Now colleges and universities have assessment offices which administer an extensive array of tests to determine how students are doing. The grades from their coursework mean little. I hear faculty questioning whether, given all the assessments to which students are subjected, grading their work is worth the bother; it has so little significance.

Some of us can remember when grades began to lose their value as indicators of student achievement. The college president under whom I began my academic career said that the introduction of student "evaluations of instruction" marked the beginning of the decline in higher education. When students set the standards, he said, there will be no standards. This college president was also adamant that all administrators who held professorial rank should teach classes. He, as college president, taught at least one course a year and participated in the teaching of senior seminars. His argument was that college presidents, deans, and department chairs were academic leaders who worked with their colleagues, not over them. As leaders, it was part of their job to establish and maintain the academic standards applied to students and to ensure that the faculty they led understood and worked to those standards.

As administrators began to consider themselves managers more than leaders, they relied upon evaluative devices such as student opinion surveys as the basis for their relationship with faculty. It has always been important for faculty to assess how their performance as professors was regarded and how effective it was for students. But student opinions, as faculty know, are not based upon a command of knowledge and communicative skills. They are based upon student attitudes and feelings, which often amount to resentment of people who presume to know more than they do.

During the first round of student opinion surveys that I was involved in, the faculty had meetings about how to interpret them. A common plaint that students made on them was that their opinions were just as valid as their professors', so why should they have to report back what they regarded as the professors' opinions? Of course, in that very complaint was evidence of a misperception of classwork that rendered them unqualified to assess it. The basic knowledge of any college course is the assembly of facts known about the subject. The course papers and tests are assessments of how well the students know the facts, reason with them, and communicate their reasoning. It is not a matter of professors imposing their opinions on the students. The task of a course is to lead students to an understanding of that process. The perceptions they bring to a set of facts may well, indeed, be of equal value to those of the professor, but the professor is to grade them on their knowledge of the facts and the validity of their reasoning.

As a negotiator for the faculty union, I dealt with student opinion surveys. One of the first things we did was establish that the term "student opinion of instruction" be used instead of "student evaluation of instruction." We prevailed by showing the administrators that student opinions often had no correlation with the facts.

Nevertheless, student attitudes and opinions have had a dramatic influence on higher education.  If colleges and universities are operating on low standards, it is because students are getting what they want.

Another factor is that colleges and universities are competing for students and the tuition they need to operate. Many state institutions have an open enrollment policy which admits almost all students who apply. A shock I had when I moved from a private college to a public one was that the college at which I formerly taught had freshman classes with an ACT composite average of 26, while the composite average at the public college was 17. A harsh fact of life is that colleges that admit low-performing students put pressure on the faculty to dumb down the course work so that large numbers of students don't flunk out and leave the college short on tuition and fee money. Another harsh fact is that few low-performing students have the will or ability to be brought up to a competitive level.

The results of this mix of prevailing student attitudes and competition for students include grade inflation, low performance on assessment tests and studies, and a growing realization that college is not worth very much anymore.

A telling point listed above is that in 1961 students spent 24 hours a week studying, while now they spend about half that. First of all, I quibble with those numbers. The old college rule was that to be successful in a course, you needed to spend two hours studying outside of class for every hour in class. To excel in a class, you had to study more. In 1961, I had been released from active duty in the Army and was finishing my undergraduate degree. A full-time course load was 12 to 15 hours, and those of us who made B's and A's put in a lot more than 24 hours a week. To keep up with the daily work and the papers and tests, one had to put in at least twice that. Most evenings and weekends were spent in the library or over the typewriter. In graduate school, one had to be prepared to put in 60 to 80 hours a week.
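
To make the quibble concrete, apply the old two-for-one rule to the full-time load described above (a rough check, taking 12 to 15 credit hours as the baseline):

\[ 2 \times (12 \text{ to } 15) \text{ class hours} = 24 \text{ to } 30 \text{ hours of study per week} \]

Add the 12 to 15 hours spent in class itself, and the weekly commitment comes to 36 to 45 hours, so the 24-hour average reported for 1961 sits at the very bottom of what the rule called for.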

Most professors know what is going on. Students resent the kind of workload that would restore a college diploma as a badge of knowledge and skill in reasoning and communication. Most professors know that to keep their jobs, they have to please the students and the administrators who judge their performance on the basis of pleased students.

Americans may be dissatisfied with what colleges are turning out.  But they have had a big voice in shaping the colleges and universities, and they have got what they asked for.  

2 comments:

Douglas said...

Grammar preference:
Get, Got, Gotten in "have _____"?
(on checking that appears to be flexible)

More seriously, South Dakota has no standards for college test design. Some professors are incapable of writing, or unwilling to write, tests that measure anything the student has or has not done. They view the tests as a contest of wits made primarily to display their own cleverness at tricks and traps in the questions. Then they may fail to allow enough granularity, or they design questions where an incorrect answer on part one makes all the solutions to the following problems incorrect. The result: test scores of either zero or 100%.

Some professors get terrible student "opinions" because those profs are arrogant pricks putting in classroom time only to get research time.

David Newquist said...

I have heard this complaint, but wonder in what institutions it happens. In those that try to do real faculty evaluations of performance rather than opinion surveys, faculty members submit portfolios which include their syllabi, their course tests and assignments, and some samples of the results. I have always advocated that every test should have an essay component, which I know some professors think is more work than they want to put in. In my subject area, I often started off with a quiz that merely determined whether students had read the assigned materials. If they could not pass the quiz, they didn't qualify to take the essay test--or I did not read it for grading purposes. This kept students who could BS their way through an essay exam from being in the same category as those who actually did the work and made the effort. Most students appreciated this strategy and felt that their work was being regarded with respect and sincerity.
