2013 MFA Index: Further Reading

by Seth Abramson
From the September/October 2012 issue of Poets & Writers Magazine

National Full-Residency Applicant Pool Size

The median estimate for the national full-residency fiction/poetry applicant pool (as calculated in 2011) is 2,797, the mean estimate is 3,253, and the adjusted mean is 3,042. The same series of calculations produced a median estimate, for the national nonfiction applicant pool, of 291, and a mean estimate of 345. The total size of the national full-residency applicant pool, across all three of the “major” genres of study, is therefore likely between 3,000 and 4,000. The four-year, 2,215-respondent applicant survey that appears in the Poets & Writers Magazine 2013 MFA Index consequently covers the equivalent of 55 to 74 percent of an annual national applicant pool in the field of creative writing; the one-year surveys published annually by Poets & Writers Magazine cover between 13 and 23 percent of the three-genre national applicant pool for that admissions cycle.
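
As a rough check, the coverage percentages follow directly from the cohort and pool figures quoted above (a minimal sketch in Python; the variable names are mine, and the one-year cohort is approximated as a quarter of the four-year total):

```python
# Figures quoted above; names are illustrative only.
pool_low, pool_high = 3000, 4000   # likely bounds, three-genre full-residency pool
four_year_cohort = 2215            # survey respondents across four cycles

# Four-year survey coverage relative to a single annual pool:
print(four_year_cohort / pool_high)  # ~0.55 -> 55 percent
print(four_year_cohort / pool_low)   # ~0.74 -> 74 percent

# A single-year survey averages roughly a quarter of the four-year cohort;
# actual annual cohorts vary in size, hence the 13-to-23-percent range above.
one_year_cohort = four_year_cohort / 4
print(one_year_cohort / pool_high)   # ~0.14
print(one_year_cohort / pool_low)    # ~0.18
```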

Data Sources

For those program measures not subject to applicant surveys, such as recitations and ordered listings of admissions, curricular, placement, student-faculty ratio, and funding data, only data publicly released by the programs—either to individual applicants, to groups of applicants, to Poets & Writers Magazine directly, in a program's promotional literature, or via a program website—have been included in the index. All data were updated regularly to reflect programs' most recent public disclosures.

Many of the nation's full- and low-residency MFA programs decline to publicly release internal data. Programs unable or unwilling to release data regarding their funding and admissions processes are necessarily disadvantaged by an approach that relies on transparency. Yet no program that fails to release this data for applicants' consideration can avoid being judged, by applicants and other observers, through the lens of such nondisclosures.

The Nonfiction Survey

Because fewer than half (47 percent) of full-residency MFA programs offer a dedicated nonfiction or creative nonfiction track—defined as a curricular track which permits a master’s thesis in the genre—nonfiction and creative nonfiction applicants have been surveyed separately from poetry and fiction applicants. These survey responses do not factor, in any sense, into either the one-year or four-year popularity surveys published in the Poets & Writers Magazine 2013 MFA Index.

For the nonfiction/creative nonfiction survey, the designation “n/a” indicates that a given program does not offer a nonfiction track.

LOW-RESIDENCY SURVEY

Structure

Low-residency programs were assessed in eleven categories, nine of which are either applicant surveys or ordered listings of hard data—six employing the unscientific but probative surveying described above, and three based upon publicly available hard data. Low-residency programs have not been assessed with respect to their funding packages because these programs generally offer little or no financial aid to incoming students; low-residency programs presume that their students will continue in their present employment during the course of their graduate studies.

Cohort

Over the course of five successive application cycles, a total of 280 low-residency applicants were surveyed as to their program preferences, with these preferences exhibited in the form of application lists. The locus for this surveying was (between April 16, 2007, and April 15, 2011) the Poets & Writers Magazine online discussion board, the Speakeasy Message Forum, widely considered the highest-trafficked low-residency community on the Internet; from April 16, 2011, to April 15, 2012, the survey locus was the MFA Draft 2012 Facebook Group. The relatively small cohort used for this surveying is attributable to the following: (1) the annual applicant pool for low-residency programs is approximately one-eighth the size of the full-residency applicant pool; (2) low-residency applicants do not congregate online in the same way or in the same numbers that full-residency applicants do; and (3) low-residency programs are subject to a "bunching" phenomenon not evident among full-residency programs, with only ten programs nationally appearing on even 10 percent of survey respondents' application lists, and only three appearing on 20 percent or more.

One explanation for the bunching phenomenon may be that low-residency programs are less susceptible to comparison than full-residency programs, as many of the major considerations for full-residency applicants, including location, funding, cohort quality, class size, program duration, student-faculty ratio, job placement, and cost of living, are not major considerations for low-residency applicants due to the structure and mission of low-residency programs. Generally speaking, low-residency programs are assessed on the basis of their faculty and pedagogy, neither of which is conducive to quantification. It is worth noting, too, that a significant number of the world's fifty-seven low-residency MFA programs were founded within the last eight to ten years; applicant familiarity with these programs may still be relatively low.

The five-year low-residency surveying described above has been further broken down into year-by-year survey results. The survey cohort for the 2011–2012 annual survey was forty-six, for the 2010–2011 survey thirty-six, for the 2009–2010 survey eighty-nine, for the 2008–2009 survey fifty-six, and for the 2007–2008 survey fifty-three. If and when individual Speakeasy account-holders applied to programs in more than one admissions cycle, their application lists from each cycle were treated as separate survey responses; repeat applicants accounted for less than 10 percent of the survey cohort, however. Full-residency applicants on The Creative Writing MFA Blog who applied to one or more low-residency programs as part of their overall slate of target programs were also included in the low-residency survey; due to the exceedingly small number of such survey responses, these entries were manually compared both to one another and to existing low-residency application lists to ensure that no duplicate lists were counted.
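
As a consistency check, the year-by-year cohorts do sum to the 280-respondent total reported above (a trivial verification, using only the figures from this section):

```python
# Annual low-residency survey cohorts, as reported above.
cohorts = {
    "2011-2012": 46,
    "2010-2011": 36,
    "2009-2010": 89,
    "2008-2009": 56,
    "2007-2008": 53,
}
assert sum(cohorts.values()) == 280  # matches the five-cycle total
```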

While surveys with larger cohorts are, all other things being equal, more reliable than those with smaller ones, the fact that the annual applicant pool for low-residency programs is likely between 350 and 400 suggests that the total survey cohort for the Poets & Writers Magazine 2013 MFA Index of low-residency programs represents the equivalent of well over half of a single-year national applicant pool for this sort of degree program. Moreover, as is the case with the full-residency program table, crosschecking applicant survey responses across a period of five years reveals substantial consistency in the responses and quickly unearths any significant anomalies or outliers. Of the ten most popular low-residency programs listed in this year's index, eight (80 percent) were among the ten most popular programs—according to applicants—in all five years of surveys, while the other two programs were among the fifteen most popular low-residency programs in all five of the application cycles studied (and in both cases missed the ten-most-popular grouping in only a single admissions cycle).

An “n.d.” notation signifies that a program has not released the requisite data. Two dashes (--) indicate that the program did not place in that category. Only fourteen of the nation’s fifty-seven low-residency MFA programs earned a positive score in either of the two placement surveys, which considered placement data for full- and low-residency programs in a single assessment. In order to better acknowledge the achievement, in the placement categories, of these fourteen low-residency programs relative to their low-residency peers, and in recognition of the fact that low-residency graduates are substantially less likely to seek postgraduate fellowships or even postgraduate university teaching positions (largely because they do not give up their present employment when they matriculate), the national placement data collected for the low-residency table have been reconstituted as an ordered, low-residency-only listing. The same approach has been applied to the one-year and five-year applicant popularity surveys and to the surveys of selectivity and fellowship placement.

Low-Residency Applicant Pool Size

A realistic estimate for the annual number of low-residency MFA applicants is four hundred. Added to the adjusted mean for annual full-residency poetry, fiction, and nonfiction applicants, the estimate for the annual number of low-residency applicants suggests a total annual applicant pool to creative writing MFA programs—across all genres and types of residency, and gauging discrete applicants only—of somewhere between 3,500 and 4,250.
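
Combining the estimates quoted earlier gives the arithmetic behind that range (a sketch only; the exact endpoints of the published range presumably also reflect rounding and the adjusted-mean figure):

```python
low_residency = 400                 # realistic annual low-residency estimate
fp_median, fp_mean = 2797, 3253     # full-residency fiction/poetry estimates
nf_median, nf_mean = 291, 345       # nonfiction pool estimates

low_end = fp_median + nf_median + low_residency   # 3,488
high_end = fp_mean + nf_mean + low_residency      # 3,998
print(low_end, high_end)  # broadly consistent with the 3,500-4,250 range above
```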

INTERNATIONAL PROGRAMS

Special Note on International Programs

The Poets & Writers Magazine full- and low-residency program tables have always considered, and will continue to consider, international MFA programs. However, international programs are unlikely to fare as well as they otherwise might in the surveys, for several reasons. First, nearly all non-U.S./non-Canadian graduate creative writing programs are, by U.S. accreditation standards, nonterminal—that is, they are M.Phil., M.St., or MA degrees, as opposed to the terminal MFA degrees considered by the Poets & Writers Magazine charts. Second, non-U.S./non-Canadian applicants are less likely to frequent U.S./Canadian-based MFA-related websites like the MFA Draft 2012 Facebook Group and The Creative Writing MFA Blog, and therefore non-U.S./non-Canadian programs are less likely to appear on the application lists of those surveyed for the Poets & Writers Magazine tables (and Canadian applicants applying to Canadian programs may be less likely to patronize the aforementioned websites than American applicants applying to American programs). Third, unlike U.S. and Canadian MFA programs, overseas programs are rarely fully funded for nondomestic students (U.S./Canadian MFA programs less frequently distinguish between domestic and international applicants with respect to funding eligibility), and they are therefore less likely to be popular among the U.S. and Canadian applicants who frequent those websites. Fourth, because exceedingly few non-U.S. terminal-degree MFA programs are now in operation—well over 90 percent of all extant creative writing MFA programs are located in the United States, and more than half of those operating outside the United States were founded within the last five years—programs in Canada and elsewhere simply have fewer entrants into the international MFA system with which to achieve a relatively high placement in the applicant popularity surveys.

The 2013 MFA Index: Full-Residency Programs Categories

Funding

Nothing in the MFA Index funding assessments is intended to impugn the motives or character of professors, administrators, or staff at any of the nation's graduate creative writing programs. The presumption of the funding listing is that all of these groups have militated, and continue to militate, with varying degrees of success, for more funding for their students—and that, given the choice, every program would choose to be fully funded. Still, there is no question that some programs require virtually no financial outlay by admitted students, and others are expensive. The Poets & Writers Magazine 2013 MFA Index takes this into account, as funding is an important factor among the current MFA applicant pool when deciding where to apply—and is also rated the number one consideration by MFA faculties themselves.

Program funding packages were calculated on the basis of annual cost-of-living-adjusted stipend values for programs with full tuition waivers, and on the basis of annual cost-of-living-adjusted stipend values less annual tuition for programs offering only partial tuition waivers. Programs were further divided into categories on the basis of the percentage of each incoming class offered full funding. “Full funding” is defined as the equivalent of a full tuition waiver and an annual stipend of at least $8,000 per academic year. No program offering full funding to less than 100 percent of its incoming class placed ahead of any program fully funded for all students. Likewise, no nonfully funded program placed, in the numeric ordering of programs, ahead of any program in a higher “coverage” bracket. The five coverage brackets acknowledged by the hard-data funding assessment are as follows: “All” (100 percent fully funded); “Most” (60 to 99 percent); “Some” (30 to 59 percent); “Few” (16 to 29 percent); and “Very Few” (0 to 15 percent). All of these percentages refer to the percentage of each annual incoming class that receives a full funding package.

Programs that fully fund 33 percent or more of their admitted students were considered eligible for “package averaging.” If and when programs meeting this criterion were revealed to offer funding packages of differing value to different students, the total stipend value of all full-funding packages was divided by the number of such packages to determine average annual stipend value.

The funding category does take into account duration of funding, as programs’ funding packages were assessed for this category by multiplying average annual package value by the duration of each program in years. Other than for the deduction of outstanding tuition costs (as described above), the varying amount of tuition charged at individual programs was disregarded, as students receiving full funding do not, by definition, pay tuition.
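
A minimal sketch of the funding computation as described above. The bracket labels, thresholds, and the 33 percent package-averaging criterion come from the text; the function names, and the exact form of the cost-of-living adjustment (the text does not specify whether a tuition offset is itself adjusted), are my assumptions:

```python
def coverage_bracket(pct_fully_funded: float) -> str:
    """Map the share of an incoming class receiving full funding to the
    five brackets named above. No program in a lower bracket ever placed
    ahead of a program in a higher one."""
    if pct_fully_funded == 100:
        return "All"
    if pct_fully_funded >= 60:
        return "Most"
    if pct_fully_funded >= 30:
        return "Some"
    if pct_fully_funded >= 16:
        return "Few"
    return "Very Few"

def average_stipend(full_packages: list[float]) -> float:
    """Package averaging, for programs fully funding at least 33 percent of
    admitted students: total full-funding stipend value divided by the
    number of such packages."""
    return sum(full_packages) / len(full_packages)

def total_package_value(avg_annual_stipend: float, col_index: float,
                        duration_years: int,
                        annual_tuition_due: float = 0.0) -> float:
    """Cost-of-living-adjusted annual stipend, less any tuition still owed
    under a partial waiver, multiplied by program duration in years."""
    return (avg_annual_stipend / col_index - annual_tuition_due) * duration_years
```

Within a coverage bracket, programs would then be ordered by this total package value, consistent with the rule that no program ever outranks one in a higher bracket.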

Applicants should be aware that many programs deduct administrative fees—almost always less than $1,000, and usually less than $500—from their annual stipends. These fees were not considered in the funding listing. Moreover, some programs offer health insurance to all admitted students and some do not. Programs that offer health insurance to all admitted students include, but are not limited to, the following (programs are listed in order of their appearance in the numeric funding ordering): University of Texas in Austin [Michener Center]; Cornell University in Ithaca, New York; University of Michigan in Ann Arbor; Louisiana State University in Baton Rouge; Ohio State University in Columbus; University of Alabama in Tuscaloosa; Virginia Polytechnic Institute (Virginia Tech) in Blacksburg; Washington University in Saint Louis, Missouri; Arizona State University in Tempe; Iowa State University in Ames; Purdue University in West Lafayette, Indiana; University of Minnesota in Minneapolis; McNeese State University in Lake Charles, Louisiana; Pennsylvania State University in University Park; University of Iowa in Iowa City; University of Wyoming in Laramie; Vanderbilt University in Nashville; University of Wisconsin in Madison; University of Texas in Austin [English Department]; University of Virginia in Charlottesville; University of California in Irvine; University of Oregon in Eugene; University of Central Florida in Orlando; University of New Mexico in Albuquerque; Rutgers University in Camden, New Jersey; and Oklahoma State University in Stillwater.

Selectivity

As fewer than five full- or low-residency programs nationally publicly release “yield” data—the percentage of those offered admission to a program who accept their offers and matriculate—the acceptance rate figures used for the index’s selectivity listing are necessarily yield-exclusive. Most have been calculated using the simplest and most straightforward method: taking the size of a program’s annual matriculating cohort in all genres and dividing it by the program’s total number of annual applications across all genres. Of the 92 full-residency programs with both an annual applicant pool over fifty and known acceptance rates, thirty-two (35 percent) had available admissions data from the 2011–2012 admissions cycle; twenty-eight (30 percent) most recently released data from the 2010–2011 admissions cycle; fourteen (15 percent) from the 2009–2010 cycle; five (5 percent) from the 2008–2009 cycle; and five (5 percent) from the 2007–2008 cycle. In total, seventy-four programs (more than 80 percent of those with available admissions data and a sufficiently large annual applicant pool) had data available from 2010 or later.
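
In code, the stated method is a one-line ratio; the fifty-application eligibility cutoff described below is included for completeness (a sketch under those assumptions; the function names are mine):

```python
def yield_exclusive_acceptance_rate(matriculants: int, applications: int) -> float:
    """Annual matriculating cohort across all genres divided by total
    annual applications across all genres."""
    return matriculants / applications

def eligible_for_selectivity_listing(applications: int, cutoff: int = 50) -> bool:
    """Only programs receiving more than fifty applications annually
    (forty for the low-residency listing) were considered."""
    return applications > cutoff

# Example: a program matriculating 15 students from 300 applications.
print(yield_exclusive_acceptance_rate(15, 300))   # 0.05 -> 5 percent
print(eligible_for_selectivity_listing(300))      # True
```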

The relative paucity of data available for the selectivity listing—acceptance rates are available for 121 of the 224 MFA programs worldwide (54 percent), though dozens of the data-unavailable programs are too new to have produced reliable admissions trends—is partly attributable to programs' continued reluctance to release the sort of internal admissions and funding data regularly released by colleges, universities, and most professional degree programs. Hundreds of interviews with MFA applicants between 2006 and 2012 suggest that a program's acceptance rate is one of the five pieces of information applicants most frequently seek out when researching a graduate creative writing program.

In order to avoid artificially privileging smaller or regional programs with an unknown but possibly modest annual yield—that is, programs with small applicant pools but also small incoming cohorts, and consequently, in some instances, extremely low yield-exclusive acceptance rates—only programs receiving more than fifty applications annually were eligible for the selectivity listing. Of the sixty-five full-residency programs with unknown admissions data, no more than ten would likely even be eligible for inclusion in the selectivity listing on the basis of their applicant-pool size. Whether these programs' annual incoming cohorts are also sufficiently small—and thus the programs, statistically, sufficiently selective—to make any of these programs entrants into the top half of all programs in the selectivity category is unknown. The likelihood is that three or fewer programs that would otherwise appear in the top half of all programs for selectivity are ineligible for the selectivity listing solely because they have thus far declined to publicly release their admissions data.

Of programs with fewer than fifty applications whose admissions data are known, the ten most selective programs (from most to least selective) are as follows: Northern Michigan University in Marquette; Old Dominion University in Norfolk, Virginia; Temple University in Philadelphia; Savannah College of Art & Design in Georgia; Otis College of Art & Design in Los Angeles; University of Missouri in Kansas City; University of Central Florida in Orlando; Butler University in Indianapolis; Chapman University in Orange, California; and Sewanee: University of the South in Tennessee.

The small number of low-residency programs with publicly accessible acceptance rates makes crafting a selectivity listing for such programs difficult. Of the nineteen programs (33 percent of all low-residency programs) with available data, many have available admissions data only from the 2007–2008 admissions cycle or earlier. Fortunately, of the fourteen programs in this class most popular among applicants, nine (64 percent) have available admissions data. Moreover, the three most popular programs (in the view of applicants) have all released data from one of their past three admissions cycles.

The applicant-pool-size cutoff for inclusion in the low-residency selectivity listing is set at forty annual applicants.

Comments

Many of the same flaws

I've so far put off commenting on this cosmetically altered version of the "rankings." So, apparently, have others. The people I've discussed this topic with haven't stayed silent because they consider the problems with this barely different "methodology" solved; they've been weighing whether we should ignore the enterprise altogether.

I decided I shouldn't. The rankings are based on so many logical and empirical flaws that it's important, I think, for someone to address them (and I'm hardly alone in this opinion). So I'm gonna add my two cents in the next few weeks, when I have spare moments.

As I mentioned last year, my brother is a mathematical (as opposed to applied) statistician--which means he also understands the applications of stats. Having already read half of the "methodology" (it's still so long), my brother raised the obvious question about sample size. Before I received a response from him, I'd raised with him the question of the Central Limit Theorem, and the question of when it does not apply. (I first encountered the theorem in Stat 101.) It does not apply well to this kind of sampling.
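
Here's a toy simulation of the point (the population and the "forum visitors only" bias are invented purely for illustration): with random sampling, a bigger sample homes in on the true mean, but with a convenience sample, a bigger sample just homes in more tightly on the wrong value.

```python
import random

random.seed(0)

# An invented population of 10,000 "applicants," each with a preference score.
population = [random.gauss(50, 10) for _ in range(10_000)]

def sample_mean(xs, n):
    return sum(random.sample(xs, n)) / n

# Random sampling: the CLT applies; larger n -> closer to the true mean (~50).
print(sample_mean(population, 30), sample_mean(population, 1000))

# Convenience sampling: suppose only people with scores above 55 visit the
# forum being surveyed. No sample size corrects the resulting bias (~61).
forum_visitors = [x for x in population if x > 55]
print(sample_mean(forum_visitors, 30), sample_mean(forum_visitors, 1000))
```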

As I did early on last year, I won't read any of Seth's responses unless a friend tells me that I ought to because of some inaccuracy in what he's said about my claims or some other good detail I should consider. I've seen smart and fair questions raised in response to Mr. Abramson's claims: e.g., on (I believe) HTMLGiant, a woman raised the quite reasonable point that maybe it was "misleading" for Abramson to say that Iowa's program was the "best" MFA program long before any other program existed; after all, "better" and "best" imply that there's something to which the item in question can be compared. His response to her was, in my view, rude and unfair.

Besides, I'm not really writing to him anyway.

 

P.S.

Sorry for the typos in my first post! I find spell checks useful (though I never rely on grammar checks)--and for that reason, I miss the squiggly red line! (Those can catch clerical errors as well!)

The assumptions you cannot make in science

Bit by bit, I want to respond to several assumptions made by Mr. Abramson. What I'll say tonight:

He describes his respondents as "well researched," yet he provides no empirical evidence whatsoever to support this claim. Also, he states in his "methodology" that his enterprise isn't "scientific" because, he argues, not all programs have responded. The problem, however, is that even if every program WERE to provide all the data he's seeking, his "rankings" (or "index," or whatever P&W wants to call it this year) would STILL be unscientific, and here's part of the reason why:

One cannot make assumptions about one's sample without supporting evidence; also, human subjects, when it comes to their OPINIONS or FEELINGS (as opposed to, say, tissue samples), are fraught with well-known interpretive difficulties that go beyond those found in the typical study in the natural or physical sciences.

Anyone who fully understands the scientific method understands at least this much: Making unsupported assumptions about your sample is NOT THE WAY SERIOUS SCIENCE IS DONE.

Anyway, more later...

 

Oh, and I'll add...

He argues that surveying MFA graduates would create a biased sample because graduates would tend to rank their own programs highly. Fair enough--at least in theory.

But his method of surveying prospective applicants doesn't get rid of the problem of bias; it merely replaces one kind of bias with a set of others beyond funding: location, picking programs perceived as "easy" to get into, having a "connection" to a particular faculty member, etc., etc., etc.

Noticing such details isn't rocket science.


 

I agree with Caterina

These rankings continue to be an absurd blemish on P&W's otherwise superb support for the CW community. The whole debate seems very simple to me - the information is useful, so make it available. But ranking requires criteria, and no one has yet come up with sensible and generally applicable criteria for ranking MFA programs. Seth Abramson's criteria might work for him, and that's great. But putting P&W's name on Abramson's ranking is silly (almost as silly as prorating the number of MFA programs founded in the 2010s on the basis of the number founded in 'the first thirty months' of the decade).

A few final (?) thoughts

I agree with you, TimO'M.

A few additional claims made by the poll's creator that I'd wanted to respond to, including statements made by him that I think of as "Sethisms"--a term I don't mean as denigrating but use because Seth has made a number of claims I'd never heard elsewhere but that he presents as if they should be believed merely because he made them (unless he thinks they carry some other obvious force: e.g., that they're self-evident or were handed down by the MFA Goddess and transcribed by Seth):

1) That MFA programs provide a "nonprofessional, largely unmarketable degree..." The problem with this claim is that it wasn't the case before the number of MFA programs mushroomed (I think there are too many MFA programs now, and I suspect that some of them were created as cash cows--little equipment required, but good salaries for the faculty and cheap labor from those students who do manage to get funding). More or less the same phenomenon has happened with law schools--traditionally a professional and marketable degree--even though most Harvard law grads probably do manage to find good-paying jobs in the profession: http://www.nytimes.com/2011/01/09/business/09law.html?pagewanted=all.

In fact, numerous law professors and members of the American Bar Association have questioned the ethics of this phenomenon.

2) That teaching is a relatively unimportant component in the MFA experience. While Seth is welcome to his opinion on this matter, that's all it is: his opinion. I earned my MFA from a program that, according to Seth, is associated with high post-grad employment. Why did I choose to apply there, though? A) the quality of the alumni; and B) the quality of the writers on the faculty. Most others I knew who had applied to harder-to-get-into programs considered the same two factors.

Although being a good writer doesn't guarantee that one will be a good teacher, I've had only one writing teacher who excelled at the former but not the latter. Most good writers are good readers. How can that help (enormously, I'll add) an MFA student? By being read by a nuanced reader who understands the art form--someone who isn't also in competition with you, by the way--you can learn what you're doing well and what you're not doing so well. (Many of us have witnessed or even experienced this phenomenon: one student will make in workshop a humane and fair-minded criticism of another student's piece, and the latter will later say, as payback, something nasty about the former's work. It's childish but also human, and it's more likely to occur among peers.)

No precise scientific measure will be created for MFA rankings, and I suspect that's why Abramson treats, for example, the quality of the faculty as rather trivial. How would he be able to measure the quality of the writers on the faculty? By awards won? Which awards? As imperfect as it was, I found the old U.S. News & World Report approach helpful in that a) a faculty respondent was unable to rank her own school and b) faculty, who often guest-teach at other programs, have an idea of where the better students tend to be studying. Any more "scientific" a ranking seems highly unlikely to me.

3) That it's really one's classmates--peers--that determine the quality of one's experience in an MFA program. Again, Mr. Abramson is entitled to his opinion, but that's all it is. A talented poet, and perhaps the gentlest person in my class, left after the first year (of a four-year program) for what he said would just be a "leave." He never returned. One thing he told me before he left was that he'd found no "writing community" there. Others did. But let's face it, an MFA program can include a lot of back-biting among students. (A friend of mine who attended Iowa in the '80s said that a running joke there was that the Iowa Writers' Workshop kept the student counseling services supplied with clients. Perhaps the environment there is more humane now. It's refreshing to see the current director publicly state that applicants she's strongly supported--based on their writing samples--have sometimes been, to her surprise, voted down by the rest of the faculty.)

4) This distinction between "studio" and "academic" MFA programs, terminology I hadn't encountered pre-Seth Abramson (though I'd done an enormous amount of research on programs before I applied). He's said that Iowa is one of the "least academic" programs. By what measure? That they don't give grades? I know someone who took, during his MFA program there, a seminar that included classical Greek thought and was taught by James Alan McPherson: Pulitzer winner, Guggenheim and MacArthur fellowship winner, and graduate of Harvard Law School before he attended the IWW. (Ever read any of his essays, often known for their intellectual, as well as emotional, nuance?) Not an "academic" program? (In contrast, my more "academic" program focused on reading literature as an art form; no postmodernist/post-structuralist/cultural studies-based lit-crit was involved. Otherwise, I wouldn't have attended.)

5) That the level of the writing of MFA students at Iowa (or similar programs) is exceptional (I wish I could find the reference to that--if I do, I'll include it)--another justification for the claim that teaching isn't all that important?

While, as an undergraduate, I was taking other kinds of courses at Iowa, I used to sneak to the bin of fiction submissions for workshop (but only after the workshop had met) and steal the one or two leftovers (I wanted to write fiction but was also scared by the prospect). Some of the writing was exceptional. Some of it, though, was rough-hewn (it was a workshop, after all)--and occasionally it was relatively bad, even if the prose was pretty good. My friend once described to me Frank Conroy's response to such stories: "Beautiful prose in the service of what?" (i.e., where was the plot, the characterization, the conflict, the sensory detail...?) Yes, this is hearsay, but I've heard the same depiction from several other grads of the IWW.

Given all of these obvious questions in response to this "ranking" system, what is it that has convinced P&W to attach its name to it and give it such exposure?

I want to raise one more matter (one I consider at least as important as the above concerns I expressed), but it's getting rather late, so I'll sign off for now.

My feline pal, Caterina (one of three cats I live with), thanks you on my behalf for your indulgence--assuming you've made it this far into my comments.


While I'm thinking of it...

On the positive side: The application numbers are being called “popularity,” as they should be.

On the less positive side: It appears that Seth has still failed to distinguish “selectivity” from “acceptance rate.” As a Yale University administrator, whom I quoted last year, pointed out, the quality of the applicant pool makes a huge difference. In other words, a program that has a 25 percent acceptance rate might be more selective than some schools with, say, 10 percent acceptance rates. (And I have no bone to pick here: According to Mr. Seth’s own measures, the program I finished has a 4-5% acceptance rate.)

I’m of course talking, in the above references, about Columbia (and some of the other NYC schools). For whatever reasons, Columbia’s MFA program has been associated with an exceptionally large number of fine writers. Tom Kealey and Seth Abramson were correct in alerting MFA applicants to the reality that funding is more available at some schools than at others, and that some of those latter schools are incredibly expensive if you don’t get funding. But it seems that Mr. Seth treats such schools as moral transgressions, even though some students do get funding from them. (And anyway, if you’re living in NYC and you’ve got the money...)

I also wrote earlier about Seth’s distinction between “studio” and “academic” MFA programs in creative writing, a distinction that caught my attention because I’d never seen it anywhere when I applied to programs in the ‘90s—which is why I came to call such terms “Sethisms.”

Again: I have a friend who, during his MFA program at Iowa, took a seminar under James Alan McPherson--who also has a Harvard Law degree--on early classical Western thought. How is that not “academic”? And why should we conclude that artistry and intellect are mutually exclusive? Since when? The idea that they're deeply different is a fairly recent distinction in the West.

And one more time: In my own four-year program, we didn’t study Derrida or Foucault, etc., etc... So is that “academic” or not?

Oh, and I’ll add for good measure: I think Jorie Graham is, at least in her later work, a fantastically bad poet. Iowa (IWW) is lucky to be rid of her. And if we’re talkin’ intellectual stuff: Graham’s stupidly irrelevant references to obscure Latin botanical terms and to quantum theory say one thing she seems to want others to believe about her above all else: “I’m really really really really smarter than you!!”

(And I'll later post a small bit about Columbia.)