2014 MFA Index: Further Reading

by Seth Abramson
From the September/October 2013 issue of Poets & Writers Magazine

Note: The following article explains the approach used to compile the 2014 MFA Index, published in the September/October 2013 print edition of Poets & Writers Magazine.

The 2014 MFA Index comprises two tables: one of seventy-eight full-residency MFA programs and one of twenty-six low-residency MFA programs. Full-residency MFA programs are assessed on the basis of twenty measures: ten listings of survey results, and ten listings of other important program features. While the ten surveys detailed in the full-residency table cannot be classified as scientific, all are predicated upon sufficient hard data to be substantially probative. A scientific survey of full- or low-residency creative writing MFA programs is not presently possible, as more than half of the nation’s full- and low-residency graduate creative writing programs do not publicly release the data necessary for such an assessment.

Five of the ten full-residency MFA surveys are based on a survey of a large sample of current MFA applicants. In each of these five surveys, programs are ordered by the number of times they appeared on applicants’ application lists; the resulting numeric ordering therefore assesses only the popularity of individual programs among a large sample of well-researched aspiring poets and writers, not overall program quality. Prospective MFA students have a wide range of resources at their disposal in selecting where to apply: quantitative data; programs’ promotional materials; first-hand interviews with program staff, faculty, and current and former students; large online communities of fellow applicants, many of them devoted to MFA research, discussion, and application advising; their own past experiences in undergraduate and non-institutional workshop settings; the literary oeuvres of current faculty members and recent alumni; previous program assessments in various online and print media, including first-hand accounts from, and interviews with, current and former faculty and graduates; and, in some instances, program visits or prior first-hand experience workshopping with selected faculty members. Even so, some information is necessarily impossible for prospective MFA students to access unless and until they become matriculants. Once students have matriculated, however, they may develop subjective attachments to their alma mater, which makes objective assessment of their own experiences, as opposed to the provision of definitionally “self-serving” survey responses, difficult or even impossible. For this reason, and for others discussed in more detail below, current MFA students are not asked to self-report on their programs, nor are they considered to have a broader, more accurate, or more timely knowledge of programs they do not attend than the unmatriculated applicants currently researching program options.
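To make the tally concrete, here is a minimal sketch, in Python, of the counting procedure the popularity surveys describe: each surveyed applicant contributes one application list, and a program’s score is the number of lists on which it appears. The program names and lists below are hypothetical, and the magazine’s actual tabulation was not published in code form.

```python
from collections import Counter

# Hypothetical application lists: one list per surveyed applicant.
application_lists = [
    ["Iowa", "Michigan", "Oregon"],
    ["Iowa", "Virginia"],
    ["Michigan", "Iowa", "Wyoming"],
]

# A program's popularity score is the number of application lists on
# which it appears; duplicates within a single list are ignored.
counts = Counter()
for programs in application_lists:
    counts.update(set(programs))

# Order programs from most to least frequently listed.
for rank, (program, total) in enumerate(counts.most_common(), start=1):
    print(rank, program, total)
```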

In the remaining five surveys in the full-residency table, programs are noted by their relative numeric placement within the data sets compiled. The five hard-data survey columns in the full-residency program table are as follows: selectivity, funding, student-faculty ratio, fellowship placement, and job placement. These categories appear in the table in the chronological order in which applicants to graduate creative writing programs encounter them: a program’s selectivity determines, all things being equal, an individual applicant’s likelihood of securing admission; the funding category indicates, all things being equal, what aid package will be attached to that program admission if and when it comes; student-faculty ratio gestures toward, all things being equal, a student’s ability to access individual faculty members while in-program; fellowship placement generally measures the success of individual programs in placing recent graduates in post-MFA fellowships; and job placement generally measures the success of individual programs in placing graduates who have already achieved some measure of fellowship or publishing success in full-time creative writing teaching positions in higher education.
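As a rough illustration of how programs might be ordered within one of these hard-data categories, the sketch below ranks a few hypothetical programs by a single funding figure, excluding programs that have not released the number; this mirrors the principle, described below, that the columns order publicly known data only. The field name and values are invented for the example.

```python
# Hypothetical per-program funding data; None marks a program that has
# not publicly released the figure.
programs = [
    {"name": "Program A", "pct_fully_funded": 100.0},
    {"name": "Program B", "pct_fully_funded": 62.5},
    {"name": "Program C", "pct_fully_funded": None},
]

# Rank only the programs with publicly available data, best first.
reporting = [p for p in programs if p["pct_fully_funded"] is not None]
reporting.sort(key=lambda p: p["pct_fully_funded"], reverse=True)

for place, p in enumerate(reporting, start=1):
    print(place, p["name"], p["pct_fully_funded"])
```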

These survey results are scientific to the extent that they order programs on the basis of quantitative data publicly released by the programs themselves, and unscientific to the extent that not every program has released data for every category of assessment. These five columns therefore constitute an ordering of all publicly known data rather than an ordering of all extant data. A full complement of funding and admissions data is available for well over half of the nation’s full-residency MFA programs; the remaining programs are generally smaller, newer, lightly advertised, nondomestic, or regional in terms of their applicant base. As all of these programs have websites, however, and as all enjoy exclusive dominion over their online presence, the absence of any specific funding or selectivity data in these programs’ online promotional materials is taken as one indication that these programs would be unlikely to place within the top half of all programs in the funding and selectivity categories. The full-residency table is based in part on the presumption that it would be counterintuitive for a program providing full funding to a substantial percentage of its student body to not indicate as much in its promotional materials, or that a program among the most selective in the country would fail to designate itself as such. Program websites are regularly reviewed to determine whether a program has added information to its online profile. Program administrators can also e-mail Poets & Writers Magazine directly, at editor@pw.org, to draw attention to any substantive website, program-policy, or funding/admissions changes.

Based on the data presently available, it is not anticipated that any of those programs without a full complement of funding and admissions data available in some form online would have appeared in the top half of full-residency programs in the funding category. These programs, given the incompleteness of their promotional materials, are also much less likely to attract sufficient applications to place in the top half of the selectivity listing. At present, a program’s yield-exclusive acceptance rate would have to be 10 percent or less to place it in the top half of this category. As to the two placement-related surveys, these do not rely on programs’ promotional materials or on their willingness to release internal data to individual applicants or groups of applicants, so all programs nationally, both full- and low-residency, are equally eligible for a top-half placement in the fellowship and job placement categories. Data sufficient to calculate student-faculty ratios for virtually every MFA program in the United States were also readily available.
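The yield-exclusive acceptance rate mentioned above is simple arithmetic: offers of admission divided by applications received, with matriculation decisions ignored. A short sketch, using invented numbers:

```python
def yield_exclusive_acceptance_rate(offers, applications):
    """Offers of admission divided by applications received; whether
    admitted applicants ultimately matriculate is ignored."""
    return offers / applications

# Invented example: 25 offers against 300 applications is roughly an
# 8.3 percent acceptance rate, under the 10 percent threshold cited
# above, so such a program would place in the top half.
print(f"{yield_exclusive_acceptance_rate(25, 300):.1%}")
```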

By definition, quantitative surveys of any kind—whether in the field of education or any other field—perform poorly when it comes to assessing unquantifiable program features, or features that can only be assessed subjectively by an individual experiencing them firsthand. That such features are most assuredly a vital element of every graduate creative writing program does not and should not preclude the possibility of statistics-based assessment measures operating alongside the primary mechanism programs have to introduce applicants to unique curricular features: their own websites and promotional materials. Programs at all times bear the primary responsibility for educating prospective applicants regarding program offerings, and no program assessment or series of program assessments could or should usurp this responsibility—especially as no applicant applies to a program without first researching it online or by other means. Indeed, programs have a captive audience of hundreds if not thousands for their online and hard-copy promotional materials. Some programs may worry that neither the full-residency table nor any other series of surveys or hard-data assessments could adequately capture each graduate creative writing program’s most unique elements; these fears are understandable, but misplaced. The applicants surveyed for the full-residency table discussed in this article had access to individual program websites and promotional materials at all times before, during, and after forming a list of where they intended to apply.

If specific, unquantifiable program features do not lead to programs excelling in the hard-data measurements or the applicant popularity surveys, it is neither because the 2014 MFA Index did not consider such features nor because the applicants surveyed for the Index did not. Interviews with hundreds of MFA applicants conducted as part of this program assessment project indicate that applicants can and do take into account a host of unquantifiable program features in deciding where to apply. What may be the case, instead, is that certain unquantifiable program features are less popular among applicants than among those program faculty and administrators who first brainstormed and implemented them. By surveying current applicants rather than individual program faculties and administrators, the 2014 MFA Index ensures that the class of persons surveyed for the program assessment is one likely to have recently accessed the very program websites which (presumably) prominently advertise those features of which programs are most proud. In contrast, students, faculty, or administrators at one program are highly unlikely to visit the websites of other programs on a regular basis; consequently, they are far less likely to be aware of peer programs’ idiosyncratic, online-advertised features.

The approach to compiling the 2014 MFA Index was devised with the following goals in mind: (1) to better inform applicants of their program options by offering the nation’s first complete listing of graduate creative writing programs; (2) to more widely disseminate hard data regarding objective but non-exhaustive, non-conclusory measures of program quality, which often play a role in applicants’ application and matriculation decisions; (3) to be responsive to, and reflective of, the opinions, interests, concerns, and values of current applicants to graduate creative writing programs; (4) to emphasize the importance of both inputs and outputs in assessing individual graduate creative writing programs; and (5) to enhance the transparency of the admissions process for present and future applicants to graduate creative writing programs.

Survey Locus
In the twelve months between April 16, 2012, and April 15, 2013, 304 full-residency MFA applicants were surveyed on the MFA Draft 2013 Group, hosted by Facebook.com. Created by a group of then-current MFA applicants in the early spring of 2012, the Draft had more than 1,500 members at its peak during the survey period, and featured dozens of new MFA-related conversation threads—some garnering up to a thousand individual responses—each day. The Draft was created and is moderated entirely by the applicants themselves; admission of new members was and is determined entirely by applicants; and decisions regarding the production of personal or program information for purposes of media research were likewise made entirely by applicants. The site was promoted via several methods: word of mouth; links and references in the MFA Draft 2012 Group, the predecessor to the MFA Draft 2013 Group (there is also, now, a well-attended MFA Draft 2014 Group); and links and references on The Creative Writing MFA Blog, founded by novelist Tom Kealey (and described in detail in the methodology article for the 2012 MFA Index). The author of this article was at no time a moderator of the Draft, nor did the author play any role in the creation or promotion of the MFA Draft 2013 Group community. The Draft was the only survey locus used for the one-year applicant popularity survey described above; the five-year survey is a compilation of the data from this one-year survey and the past four years of Poets & Writers Magazine program assessments.

Individual users on the MFA Draft 2013 Group were distinguished by their proper names, as reflected by the full names (first and last) associated with their Facebook accounts. Internet research was conducted to verify applicants’ identities as and when authenticity was in question. The document in which Draft 2013 applicants voluntarily compiled their application decisions was part of an application that tracks all changes to uploaded documents by editors’ full names/Facebook accounts. This application ensured that any and all changes to the document were traceable to specific users. Users were permitted to amend their application lists in real-time; though less than 15 percent of users elected to make amendments to their lists once posted, all changes to applicants’ lists were accounted for by both the one-year and five-year applicant surveys appearing in the program tables. Substantial additional security measures were taken to ensure the authenticity of compiled application lists.

As noted, surveyed applicants voluntarily listed the programs to which they had applied or intended to apply, and were permitted to adjust these lists at any point during the survey period.


Period of Data Collection
Only recently collected data is of use to applicants. The one-year full-residency survey uses no survey data predating April 16, 2012; the five-year full-residency survey uses no survey data predating April 16, 2008; and the low-residency survey uses no data predating April 16, 2007. The longer survey period for low-residency MFA programs was necessitated by the significantly smaller applicant pools for these degrees.

The MRP Index
Eligibility for inclusion in the 2014 MFA Index was determined using the “MRP Index,” available for review at www.mfaresearchproject.wordpress.com. The Index tracks how programs place in seven survey areas: popularity among applicants, selectivity, funding, student-faculty ratio, fellowship placement, job placement, and location. Programs are categorized by the number of survey areas in which they place in the top half of all programs (a) eligible for the category and (b) with data publicly available. The number of programs considered eligible for each category of assessment varies by category: in some instances (fellowship placement and job placement) newer programs cannot yet be added to the pool of eligible programs because they have yet to graduate any poets or writers, whereas in other instances (selectivity, funding, and student-faculty ratio) certain programs may not yet have released the data necessary for them to be assessed in these areas. For the popularity and location surveys, all contemporaneously extant programs were automatically made members of the pool of eligible programs. Otherwise, the pool of eligible programs was 121 for the selectivity survey, 161 for the funding survey, 144 for the student-faculty ratio survey, 94 for the fellowship placement survey, and 94 for the job placement survey. For the fellowship and job placement surveys, only programs known to have graduated at least five classes of poets and writers by the beginning of data collection in these categories (2009) were considered to have had a full opportunity to place; the number of MFA programs founded prior to fall 2001 was therefore used as the “eligible pool” figure for these measures. (To avoid disadvantaging programs founded after that time, such programs were nevertheless included in the survey if they successfully placed a graduate in an eligible fellowship or teaching position.) As 94 full-residency programs were founded prior to fall 2001, the size of the pool for both placement measures was set at 94. In future years the size of the “eligible pool,” consistent with the methodology stated above, will increase.

Programs appearing in the top half of three or more of the seven survey areas listed above were included in the 2014 MFA Index.
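Read together with the MRP Index description above, the inclusion rule reduces to a simple count: a program is listed in the 2014 MFA Index if it places in the top half of at least three of the seven survey areas. The sketch below encodes that rule in Python; the per-program placements shown are illustrative, not actual Index data.

```python
# The seven MRP Index survey areas named above.
AREAS = {"popularity", "selectivity", "funding", "student-faculty ratio",
         "fellowship placement", "job placement", "location"}

def included_in_index(top_half_areas, threshold=3):
    """True if a program places in the top half of at least
    `threshold` of the seven survey areas."""
    return len(set(top_half_areas) & AREAS) >= threshold

# Illustrative placements only.
print(included_in_index({"funding", "selectivity", "location"}))  # True
print(included_in_index({"funding", "job placement"}))            # False
```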

Survey Questionnaires and Program Response Rates
Graduate degree programs in creative writing respond to assessment-related inquiries at a lower rate than do programs in almost any other field of study in the United States. This is one of several reasons the MFA Index does not seek to survey the opinions of program faculty and administrators as to the features and efficacy of peer programs.

The following response rates were reported for questionnaires sent pursuant to the 2012 U.S. News & World Report program assessments (in each instance, the field of study is followed by the response rate from all programs surveyed in the field): Engineering (98 percent); Business (91 percent); Criminology (90 percent); Education (90 percent); Medicine (84 percent); Healthcare Management (76 percent); Statistics (67 percent); Law (66 percent); Public Health (61 percent); Audiology (57 percent); Library and Information Studies (56 percent); Pharmacological Sciences (56 percent); Social Work (56 percent); Occupational Therapy (53 percent); Veterinary Medicine (48 percent); Nursing (47 percent); Computer Science (46 percent); Physician Assistance (45 percent); Sociology (43 percent); Speech-Language Pathology (42 percent); Public Affairs (40 percent); Rehabilitation Counseling (40 percent); Fine Arts (39 percent); Political Science (37 percent); Economics (34 percent); Mathematics (34 percent); Physical Therapy (33 percent); English (31 percent); Physics (31 percent); Earth Sciences (29 percent); Clinical Psychology (28 percent); Chemistry (25 percent); Psychology (25 percent); History (23 percent); and Biological Sciences (15 percent). Respondent institutions in each of these academic fields were aware that their questionnaire responses would not be kept confidential, and that their participation in surveys sponsored by U.S. News & World Report would result in publication of a substantial stock of program-specific data regarding each university queried.

Every two years, the Association of Writers & Writing Programs (AWP) sends a questionnaire to programs in the field of creative writing—a field whose administrators and faculty are no longer surveyed (and whose programs are no longer assessed) by U.S. News & World Report—in much the same way U.S. News & World Report does for the thirty-five fields of study listed above. A crucial difference between the two questionnaires, however, is that the AWP questionnaire guarantees anonymity to its respondents; AWP releases no program-specific data or survey results pursuant to its biennial questionnaire. It is worth noting, too, that AWP estimates (as of its 2009 Annual Report) that 34 percent of programs in the field of creative writing have thus far declined to become members of AWP. These programs are not subject to AWP questionnaires.

According to AWP’s publicly released summary of the program questionnaire it distributed in 2007, between 47 and 61 percent of AWP’s member programs declined to answer AWP’s queries regarding internal admissions and funding data. Specifically, 47 percent of programs declined to reveal how many assistantships they offered annually to incoming students, 61 percent declined to reveal the stipend offered to teaching assistants, 56 percent declined to reveal whether they offered a full-tuition waiver to teaching assistants, 49 percent declined to reveal how many scholarships were offered to incoming students, 55 percent declined to reveal their annual number of applicants, and 52 percent declined to reveal the size of their annual matriculating class. AWP did not distinguish between low-residency and full-residency programs on the questionnaire.

Comments

Columbia

I think people mistake all education for vocational training these days. They want a certificate and a job when they finish. Money in, money out. At Columbia, I was given time (and academic credit) for writing. There were no teaching fellowships, and no one chased after you to offer career counseling or agent counseling. There was not one single lecture on marketing your work. Much of what I learned had to do with developing a certain angle of vision: attitudes about my work and how to pursue my ideas with both faith and objectivity.

"academic" versus "studio" MFA program distinction

I've read comments on a couple of sites claiming that the so-called studio-versus-academic distinction regarding MFA programs was created by the AWP. If so, that's too bad. It's an unfortunately misleading distinction.

What does "academic" mean? I attended one of those so-called academic programs, where certain courses were modeled after earlier courses at the Iowa Writers' Workshop, where one of the co-founders of my program studied under Donald Justice in the 1960s. Our "Form and Theory of Fiction" and "Form and Theory of Poetry" courses were modeled after courses in Iowa's MFA program, including "Form and Theory of Fiction" and its later versions under later names. In fact, here's an example:

http://www.slate.com/articles/arts/books/2012/11/kurt_vonnegut_term_paper_assignment_from_the_iowa_writers_workshop.html

I read a few years ago on the MFA blog a comment by one potential MFA applicant who stated that she would prefer a "studio" MFA program because she couldn't stand the thought of writing another lit crit paper.

After having read plenty of "critical theory," etc. on my own in an effort to figure out what all this jargon-laden prose by contemporary lit scholars was saying, I became determined to never write such a paper EVER. The sole reason I didn't major in English (I majored in "analytic" philosophy instead) is that I was appalled by the obscurantist writing that characterized so much of the scholarship I'd come across in literary theory, and I didn't think I would benefit from any course that would reward me for writing that badly.

And had I been expected to write such papers in my "academic" MFA program, I would have left the program after one semester. Fortunately, the focus was on craft, not Derrida or post-structuralism, etc., etc.

So in case anyone fears the more "academic" programs, rest assured that at least SOME of those programs won't torture you by making you write a Marxist or feminist or Foucauldian analysis of "Sense and Sensibility." (As physicist Alan Sokal demonstrated, one can be politically liberal, or "progressive," without embracing the ideas of the "academic left.")

the claim that creative writing can't be "taught"

I'd like to avoid referring to Seth Abramson, the creator of this system of "rankings," altogether, but he's created such a world for himself around his views on MFA programs that it's impossible for me to avoid his other comments on the topic--impossible, anyway, if I'm to again raise questions about the wisdom of his system and about the wisdom of Poets & Writers in advocating the system's worthiness.

Abramson has a habit of proclaiming that something is true and then assuming, as if through magical thinking (or so it seems to some of us), that the mere stating of the idea makes it true.

One of those truisms of his: that being a good writer has little or nothing to do with being a good writing teacher. Yet he provides no evidence whatsoever for that claim--not even his own anecdotal evidence.

Here's my own anecdotal (experience-based) evidence:

With one exception, all the good writers I had as writing teachers were very good writing teachers. True, it's not necessarily the case that a good writer will be a good writing teacher. But unless the teacher has an emotional problem, is self-centered (and, therefore, uninterested in students' needs), or has some other emotional/social/psychological reason she cannot communicate her ideas orally or in writing to students, it would make sense that good writers would tend to be good readers, and good at expressing themselves in language about the art of (say) fiction writing, and would therefore make good writing teachers.

Having an astute reader is vital to learning to write well.

Rust Hills wasn't a fiction writer, but he was a great fiction editor (meaning, a smart fiction reader) and, therefore, a good writing teacher:

http://www.amazon.com/Writing-General-Short-Story-Particular/dp/0618082344

On the other hand, good fiction writers tend to think about what they're doing and--barring some bizarro problem with their ability to work with other human beings--tend to be (if they communicate even a tenth as well outside their writing as they do in their writing) perfect candidates for being good writing teachers.

I'd like to see us rid ourselves of this romantic/romanticized notion that writing teachers are pretty much irrelevant in these programs.

Oh, and by the way: I, like many other voracious readers when we were young, was able to read astutely long before I entered an MFA program (even though I didn't major in English!). Although the sprouting of more and more MFA programs would serve Abramson's purposes well, the idea that MFA programs should increase in number so that Americans can become better readers of literature is not only absurd when we look at literary history--including the history of readership--in the U.S. but also conspicuously self-serving on Abramson's part.   

Abramson's expertise on MFA programs and literature

Also, Abramson has claimed that he's acquired special expertise on MFA programs, and Poets & Writers editors have quickly supported/defended that claim. Yes, he's got some numbers down--though those numbers don't satisfy either a doctoral-level mathematician or statistician I've talked to about this.

The problem is that, in a broad range of areas, he doesn't display great expertise:

1) He shows an almost obsessive need to classify things: literature, writing teachers, periods in the history of poetry... 

But the difficulty with a tendency to classify that intensely is that it veers increasingly toward overclassification--and, as many people realize, overclassification often leads to oversimplification.

Seth Abramson is trying to learn about the history of poetry, and I laud him for that effort, but he so often gets that history wrong. And his rather grandiose claims about the worthiness of MFA programs do next to nothing to elevate the status of MFA programs in the eyes of those who didn't attend one. Pre-Seth, we had Dana Gioia as the main detractor of MFA programs. It seems obvious to me that as more of these programs have sprouted, I'm seeing more resistance among "literary" (and I mean that in the best sense) poets and writers who didn't get an MFA. And to be honest, had my first encounters with MFA programs been with Abramson's description of them, I would likely have regarded the whole phenomenon with much more suspicion.

As it is, Abramson sometimes seems like a kid who's just encountering a whole new history of poetry, and his reaction seems to be oversimplified thinking about that history.

His latest view on the future of poetry: Metamodernism is taking over literature--or lit crit? (It's an interesting conclusion for someone who supports the ideal of the "studio" program, where no "analysis" takes place.)

http://www.huffingtonpost.com/seth-abramson/on-literary-metamodernism_b_3629021.html

And while this exchange is clever on the surface, it's also worth reading because it shows that literary history is messy and complex:

http://scarriet.wordpress.com/2013/07/23/metamodernism-lol/

By the way, the latest (as far as I can tell) fad in lit crit is "neuro lit crit." My favorite sentence from this particular article:

"Given that many philosophers saw critical theory as a way for English professors to do philosophy really badly, it should not come as a surprise to find that some with a keen understanding of neuroscience are deeply skeptical of this attempt to say something new about old books." 

http://www.forbes.com/sites/booked/2010/04/01/neuroscience-and-literary-theory-a-match-made-in-nonsense/

about the value of the MFA

I was in a dark hole-in-the-wall corner of a restaurant when I wrote my last post, but I nonetheless apologize for the typos, etc. therein. (If there's a spellcheck on this site's keyboard, I missed it).

Here's a New Yorker piece from 2009 about the many attempts to define or explain the worth and purpose of the MFA program in creative writing:

http://www.newyorker.com/arts/critics/atlarge/2009/06/08/090608crat_atlarge_menand?currentPage=1

Based on the accounts of two people I know who got their MFAs at Iowa, former director Frank Conroy didn't appear to believe that faculty ought to just "get out of the way" of students and let things happen, creatively. (And WERE that truly the case for the faculty at Iowa's writing program, the university might want to consider putting those same faculty members' salaries toward another use.) Anyway, Conroy was known to sometimes say to a student (and in front of that student's classmates) some things that certain others in the class saw as emotionally damaging. In any case, Conroy was, apparently, never known for "getting out of the way" and leaving any discussion of a story's merits solely to the students in a particular workshop. 

Conroy

"Beautiful prose in the service of what?" That's the sentence one Iowa-alum friend of mine described Conroy as saying when the prose in a story that was being workshopped was lovely but nothing of consequence was actually happening in the story. My friend, who saw Conroy as sometimes very unkind to students, has still said, all these years later, that he "learned" a great deal about story-telling from being in Conroy's workshops.

Besides Stop-Time, his memoir (written before the memoir became hip and widely marketable), Conroy's work includes the short story "Midair" (first published in a collection by that name), and it's astonishingly good--a story I've read three or four times in the past 15 years.

Advice

After reading your post, I would like to talk to you and get your advice. When it comes to an MFA and Critical Theory, NH Institute of Art is big on that... I applied and got in... now I am strongly wondering if this will help me... I applied to Lesley as well. What are your thoughts about both programs? Which one is better?

"selectivity"

What Seth still seems to fail to grasp is that what he calls "selectivity" is really just the school's acceptance rate. A school that generally draws less qualified applicants but has (or claims to have) a 5 or 10 percent acceptance rate is not going to be as "selective" as a school that attracts much more qualified applicants and has the same acceptance rate. He never makes that distinction.

Of course, it would be hard to compare the quality of current MFA students in one program with those in another on the basis of any clearly quantitative measurement.