The question of graduate school isn’t really one question, since grad school is said in many ways: there are worlds of difference between grad school in the humanities, in law, and in business, for instance. In contrast to the infographic, I am going to talk here exclusively about grad school in the humanities.
Young people who go into lucrative professions scorned as bereft of moral scruples, rather than choosing a noble profession helping others, are often regarded as “selling out.” But Jason Trigg, a recent MIT graduate, “sells in”:
Jason Trigg went into finance because he is after money — as much as he can earn…. he goes to work each morning for a high-frequency trading firm. It’s a hedge fund on steroids. He writes software that turns a lot of money into even more money. For his labors, he reaps an uptown salary…
Why this compulsion? It’s not for fast cars or fancy houses. Trigg makes money just to give it away. His logic is simple: The more he makes, the more good he can do.
He’s figured out just how to take measure of his contribution. His outlet of choice is the Against Malaria Foundation, considered one of the world’s most effective charities. It estimates that a $2,500 donation can save one life. A quantitative analyst at Trigg’s hedge fund can earn well more than $100,000 a year. By giving away half of a high finance salary, Trigg says, he can save many more lives than he could on an academic’s salary.
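The back-of-the-envelope arithmetic behind that claim is easy to make explicit. A minimal sketch, using the article’s own round figures (the 50% donation split and the $100,000 salary are illustrative numbers, not Trigg’s actual finances):

```python
# Rough sketch of the earning-to-give arithmetic quoted above.
# All figures are the article's round numbers, not exact salaries.
salary = 100_000             # a quant's annual earnings ("well more than $100,000")
donation = salary * 0.5      # Trigg gives away roughly half
cost_per_life = 2_500        # Against Malaria Foundation estimate cited above
lives_saved_per_year = donation / cost_per_life
print(lives_saved_per_year)  # 20.0
```

Twenty lives a year, on these assumptions, versus the handful a philosopher’s salary could fund: that is the whole logic of the position.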
His inspiration? The moral philosophy of Peter Singer:
While some of his peers have shunned Wall Street as the land of the morally bankrupt, Trigg’s moral code steered him there. And he’s not alone. To an emerging class of young professionals in America and Britain, making gobs of money is the surest way to save the world. When you ask Trigg where he got the idea, his answer is a common refrain among this crowd: “I feel like I’d read stuff by Peter Singer.”
Singer’s influence notwithstanding, we can also see Trigg as treading the path of the Bodhisattva…
Should philosophers focus less on Value Theory, and more on Value Added?
Over at Salon, a plea for philosophers to swallow their pride and get on with selling themselves and their profession:
if philosophy is so important, then selling itself to the culture at large is important too. So it’s time for philosophers to put their clothespins on their noses, wade into the stench of real-world commerce, and ask some of those tanned and toned marketing majors who skipped out on Philosophy 101 for some help.
Philosophy, in short, needs a Marketing Makeover.
Over at Adjunct Rebellion, a scathing assessment of MOOCs:
While the last 20 years of academia have seen these two destructive practices aimed at the professoriate, it hasn’t been until lately that the threat is driven by the internet — in the case of academia, in the form of MOOCs that are now looming enormous, casting monstrous shadows over the college campus. The MOOC model, from the standpoint of the professoriate, is an entirely exploitative one. The professor designs a class, has lectures and other media support shot and “canned” — and then the university, or the MOOC itself owns that material. It OWNS the intellectual property of a professor who has trained for, on average, a decade for advanced degrees, who has taught for years and developed skills and abilities. And, once that particular area of scholarship is canned — who needs the professor, ANY professor, anymore?
This is an example of the rhetoric of crisis I discussed earlier, and let me be clear that I don’t always think that’s a bad or un-useful thing–it just depends on what your goals are. If your goal is to wallow, then it works great. If your goal is to get tenure, you’re barking up the wrong tree (these two goals, incidentally, are espoused by the folks over at the Philosophy Smoker Blog, a nest of nattering nabobs of negativism, which openly admits that its focus is to “bitch about” trying to make it in academic philosophy). If your goal is to make a living, then just quit and do something else (and you CAN do something else).
Cathy Davidson, a professor at Duke, has a great idea:
In January 2014, I will offer a six-week Coursera class, “The History and Future of Higher Education,” free and open to anyone. I’d like to turn the class’ weekly forums into an opportunity for a massive, global, collaborative, constructive, peer dialogue about how higher education got to its current dilemma. And from there, I hope we can come up with some creative, innovative, and workable ideas to make a better future.
A MOOC about MOOCs seems to make a great deal of sense for a few reasons.
For one, it provides a forum for investigating just what a MOOC is, what it can and cannot be, whether and to what extent it does indeed enhance learning, and whether and to what extent and in what ways this can be measured. If it turns out that such an experiment yields a more nuanced and useful picture of the ontology and application of the MOOC, then this itself would be evidence that the MOOC is a sound design and delivery mechanism.
Second, as Cathy notes,
In the present mood of high polemic, hyperbolic promise, and hysterical panic, it is almost impossible to sort out the questions, let alone the answers to these questions, on either a national or international level: Is now the time to reject or embrace massive online learning? Do MOOCs yield improved learning and free and open access to those who have been excluded from higher education—or are they yet another cynical attempt to defund the public and extract profits from tax payers and diminish the value of what virtually all universally claim to be the public good of higher education?
Crisis rhetoric is seductive but does not have a great signal-to-noise ratio. A MOOC that took a, well, academic approach to MOOCs might help to dispel the fervor over the MOOC-ment and help people think clearly about just what it is and what it means.
Third and relatedly, much of the chatter about MOOCs is focused on the “disruption” of the status quo, but the storied history of that status quo is not always sufficiently excavated. An inquiry into MOOCs in the context of the history of higher ed might help us see that the notion of Higher Education enshrined in our social imaginary is a historical anomaly made possible by a set of specific events, notably World War II and the G.I. Bill. The Chronicle of Higher Ed just ran a piece along these lines (though it is paywalled).
I have finally decided to take the plunge: I have signed up for Coursera’s “Internet History, Technology, and Security” course. It’s not quite Christopher Hitchens voluntarily trying out waterboarding in order to do his subject justice, but I figure it only makes sense to walk the walk. Reports forthcoming.
Robert Maguire has profiled a math MOOC funded by the Gates Foundation and launched at the University of Wisconsin–La Crosse that had an unexpected effect: though it was offered worldwide, it was widely embraced by high schools around the state and led to deeper coordination among high school and college students, teachers, and administrators in order to avoid the “remedial math trap” and close what we might call the “Preparation Gap.” From Maguire’s interview with two representatives from the college:
The way MOOCs are growing I imagine a lot of graduating high school seniors are thinking about using them this summer, whether they’re being driven to it by the necessity of a placement exam or for enrichment or to stay sharp for college. What would you advise a graduating high school senior who’s thinking about taking a MOOC?
A MOOC can be helpful to show what a college course actually looks like, how it’s done and what to expect in their first year of college.
Over summer, taking a MOOC is going to help them learn how to be an independent learner, how to study, how to find that internal motivation, how to seek out resources, recognizing that they do have multiple ways they learn, and they need to find that strategy within themselves.
Students might look at what’s aligned with their discipline of study. If someone’s looking at going into a history major, then they might look for some different history MOOCs. They can use the MOOC as a way to find out, “Is this something I am really passionate about and want to study for the next several years of my life.”
This is proof positive of an idea Noel B. Jackson floated, which I mentioned yesterday: MOOCs not only expand open access to what, for convenience’s sake, I’ll call the Third World (Globalization), but they can strengthen local and regional communities in the (f/c/s, again) First World. They not only expand the net to wire more nodes, but they deepen the connections around each node. MOOCs can potentially have “glocal” impact. In the case of the MathMOOC at UWL, the connections span vertically across the different levels of the education system. This might take the teeth out of the objections of MOOC skeptics, who dismiss MOOCs as trojan horses for neoliberalism or digital colonialism.
This “localizing” side-effect of MOOCs targets a serious problem that so many college teachers face: confronted with near-illiteracy and/or innumeracy in their students, they find themselves asking, “How did these kids get into college?” This often happens with writing skills. The college teacher faces a dilemma: should I teach them the content, or teach them how to write? If you just teach the content, then a) they aren’t likely to grasp it as fully, since you can’t cleanly separate the ability to write clearly from the ability to think clearly, and b) you shirk your responsibility as the “last line of defense” before the students get out into the real world bereft of solid writing skills. If you teach them how to write, you’re not teaching the content. And if you try to split the difference, well, as Lao Tzu says, “if you chase two rabbits, both get away.”
Better coordination between high school and college teachers and administrators could help close the “preparation gap” that frustrates so many teachers and short-changes many students.
By the way, MOOC News and Reviews is a treasure trove of information about the cluster of issues orbiting the MOOC-ment.
(image courtesy of http://www.apartmenttherapy.com)
Noel B. Jackson, a professor of literature at MIT, has a thoughtful and balanced take on MOOCs over at “Sustained Inattentions”–he has the advantage of proximity, since he is essentially at one of the two ground zeros of the MOOC movement (Silicon Valley and Cambridge). He testifies that, in his time at MIT, no issue has arrested the attention of folks in higher ed as much as the MOOC. His view on the place of MOOCs in current discourse about higher ed is insightful:
“The MOOC has become a repository for utopian and dystopian narratives about the present and future directions of higher ed.”
The rhetoric of crisis and disruption can inhibit us from thinking clearly and carefully about how best to surf this strange new wave. The utopian and dystopian narratives are, as Noel points out, the views that MOOCs are either democratizing or corporatizing: that they are either making the highest quality education available to the world’s poor, or they are merely the latest step in the corporatization of the university that has been underway for decades.
Confessing his ambivalence about MOOCs, he points to a possible benefit of MOOCs that I hadn’t heard of before:
“My interest in MOOCs extends to how the format can be imagined to provide access to a university curriculum to populations that may not have had this kind of access, as this is the population that stands to gain most from them. But in addition to the flat, global learning community ritually invoked as the audience for MOOCs, we could benefit from thinking locally too. How can the online course format make possible new relationships not only with the most far-flung remote corners of the earth but with the neighborhoods and communities nearest to campus? Can we make MOOCs that foster meaningful links with the community or create learning communities that cut across both the university and the online platform?”
This is certainly a pressing need at the university where I teach. Fordham University’s main campus is an oasis-like bubble plopped in the middle of one of the poorest counties in the country, and few of the students venture past the perimeter of its security-saturated environs. Anything that could facilitate a deeper engagement–heck, any engagement–with the world beyond the walls would be a very good thing; and perhaps MOOCs and other online approaches might facilitate that, though I’m not sure how.
In a recent interview over at The Philosopher’s Magazine, Nigel Warburton, co-presenter of Philosophy Bites, the wildly successful philosophy podcast, riffs on his experiments in public philosophy, the problems plaguing philosophical research, and his recent decision to leave academia. The success of his podcast is proof positive that there is a public hunger for philosophy in cyberspace. Excerpts below.
The surprising success of the podcast:
The initial thought was that mainly philosophy students and lecturers might take an interest, but he’s heard from American listeners with time to kill on long drives, people waiting out wildfires in Australia, and soldiers in Afghanistan concerned about ethics. When I ask for details over email, Warburton sends me a list of 40 countries, all with more than 10,000 downloads each, some with vastly many more, millions more in some cases. Just after the usual English-speaking suspects, China checks in at number five. The United Arab Emirates, Argentina, Taiwan, Iran and Indonesia make the list. Several spin off series, two books (and a third in the pipeline), more than 250 interviews and an alarming 16.7 million downloads later, and Philosophy Bites is an international philosophy phenomenon.
Warburton explains that he is leaving his secure position at the Open University largely because of the dominance in academia of what he calls “crossword puzzle philosophy” (essentially, what Daniel Dennett has dubbed “chmess”):
“Philosophers today have mostly got their heads down. They’re concerned with writing for a journal which will publish work that takes them two or three years, and only five people will read it. These are people who could be contributing to something that’s incredibly important. Gay marriage is just one example of many. I don’t think philosophers responded particularly well to 9/11. Issues about free expression, all over the world, are not just academic. They’re matters of life and death. There are exceptions, but philosophers are by and large more interested in getting a paper in Mind or Analysis than they are in commenting on the major political events of our time.”
On philosophical “research”:
“I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that?… It’s not the kind of thing that Socrates did or that Hume did or that John Locke did… Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”
One is hard-pressed to disagree. As someone who has been on the job market for a couple of years, I always inwardly cringe when I am asked to explain my “research” to a search committee or a dean. In a formal sense, research is something that a scientist does in a lab or in the field: designing and conducting experiments, collecting and interpreting data, and the like. In an informal sense, it means doing your homework–gathering relevant information–before a meeting, an interview, etc. Philosophical writing, for the most part, is not research: it is reading articles and books, thinking about them and the subjects they concern, and then writing what one thinks about them. Exceptions could arguably be made for “experimental philosophy” and branches of philosophy in dialogue with the sciences, such as philosophy of mind or biology, but for the most part, I think it’s a category mistake to regard the reading and writing of philosophy as “research.” We might view today’s philosophical “research,” largely a consequence of the rise of analytic philosophy and “science envy,” as a new form of scholasticism: a defensive, conservative crouch destined to be consumed by the coming Avalanche (more on this, Higher Education’s equivalent of the Singularity, later). I hasten to add, however, that analytic thought, at its best, provides a needed check against the scholastic excesses, verbosity, and sheer fictioneering of much Continental thought.
Despite the coming storm, Warburton is ultimately optimistic about the fate of philosophy:
“Because of changes in online teaching, in the next ten years, the university system will be turned on its head. If Philosophy Bites can make such an impact with two guys with a hard disk recorder and a couple of laptops, think what people who fully understand the new technology, who can write code, who can employ the best philosophical communicators around, think what they could produce. It’s only just starting. We’re going to see dramatic changes to how we learn, teach, do research and share ideas. I think philosophy’s future’s very bright.”
I asked two days ago what, in light of Leon Wieseltier’s view that philosophy these days only “tweaks and tinkers,” an alternative might look like. Philosophy Bites seems to be a solid step in the right direction.
(image courtesy of Philosophy Bites)
I came across a letter I wrote last year to a friend who had inquired about the philosophy of Ayn Rand, and thought I’d repost excerpts of the philosophical content below:
[I had to laugh when I got your message–I was in church of all places. Next question: what was I doing looking at my phone in church and committing digital blasphemy? Answer: obnoxiously long Catholic ceremony. The supreme irony is that Rand’s most recent notoriety in American culture comes courtesy of Paul Ryan–a, well, “severe” Catholic and a big Rand fan.
About Rand. Let me take your questions one at a time, but let me be blunt: I think Rand’s philosophy is ludicrous–it is an attractive and interesting philosophy embraced with zeal by adolescents (including high-school me!) first starting to think for themselves, but when touted as a philosophy of life, or as a serious platform for political economy, it is dangerous, historically uninformed, and morally abhorrent. Hopefully my responses to your questions will convey why I think this.
Leon Wieseltier, editor of the New Republic, added another entry to the growing genre of commencement speeches targeting technology. He worries about the shrinking of the humanities in higher education and the culture at large, as technology colonizes more and more corners of our lives. His piece reminded me of T.S. Eliot:
“Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”
Leon is taking aim at the values and worldview of Silicon Valley, an ideology that Evgeny Morozov has dubbed “technological solutionism”, the reduction of all problems to technical problems, the notion that technology can fix all things, and the reduction of knowledge to information:
There are thinkers, reputable ones if you can believe it, who proclaim that the exponential growth in computational ability will soon take us beyond the finitude of our bodies and our minds so that, as one of them puts it, there will no longer be any difference between human and machine. La Mettrie lives in Silicon Valley. This, of course, is not an apotheosis of the human but an abolition of the human; but Google is very excited by it.
He is referring, of course, to Ray Kurzweil, the scientist, inventor, and anointed Philosopher Prophet of Silicon Valley who has just been hired by Google. Leon’s piece is aimed squarely at Kurzweil’s scientism: the extension of science from a method to a metaphysics, with claims based not on data but on dogma. There are some who consider Kurzweil the Most Dangerous Man in America. While Steve Jobs has been raised up as the Great Man of our age, he may end up being overshadowed by Kurzweil, who is on track to become the Father of AI. I will be addressing Kurzweil’s worldview–essentially, that technology is the continuation of evolution by other means–in future posts. For now, see Michael E. Zimmerman’s recent reflection on AI from the perspective of Integral philosophy.
Leon’s is exactly the argument that C.S. Lewis made over half a century ago in The Abolition of Man: man’s modern conquest of nature is really nature’s conquest of man. Why? Because when reason is turned into a tool to satisfy our desires, our desires are running the show–but our desires or instincts largely come from nature. I will return to Lewis’ argument and its connection to modern nihilism in future posts.
One noteworthy thing Leon mentions is the place of philosophy in all of this:
Philosophy itself has shrunk under the influence of our weakness for instrumentality – modern American philosophy was in fact one of the causes of that weakness — and generally it, too, prefers to tinker and to tweak.
What would it mean to not just “tinker and tweak”? What would that look like? Why is it so difficult, not only to do, but to even imagine?
I think philosophy has been assigned one of its great tasks for the present age. If Hegel said philosophy is its own time comprehended in thought, then the great challenge for thought in our time is that one of the most important matters, technology, is largely about our future, and its grip on our present makes it so hard to reflect on it.
[Reposted from the following discussion thread]
Thinking about this and reading your posts, I am reminded of David Foster Wallace’s Infinite Jest, which is emerging as sort of the Big Novel of our era. The story takes place in the not too distant future, and the title refers to a film that is so addictive that it kills the people who watch it; and the film, unsurprisingly, is wildly popular.
Wallace was concerned that, in the words of media critic Neil Postman, we are “amusing ourselves to death”–not literally, of course, but psychologically or spiritually. I find this narrative seductive, but I resist it for that very reason. Part of the problem, I think, is that people just have different dispositions. Humanist folks tend to have a European part of their soul: a melancholic affect, a deep suspicion of the popular, the common, the fashionable, the masses, a reverence for some distant past, a disdain for the practical. But a lot of Americans don’t share this affect or this outlook: they just want to do their work, make their money, and have some fun, however the culture is currently defining and delivering it–“what’s the harm in that? Lighten up!” The Euro-humanist, of course, looks at these people and just cries “false consciousness”–they either don’t know, or won’t admit, their true condition. The Euro-humanist sees most people as trapped in and bespelled by some kind of Cave, and tends to see The Next Big Thing (MOOCs, gamification, Facebook, etc.) as just more distraction, illusion, ideology, etc.
So what I think we’re dealing with here, at some level, is just different sensibilities: the can-do, practical, pragmatic American happiness-pursuer just NEVER WILL see the world quite like the intellectual, Europeanish, theory-minded soul will; for the former, the gamified world is a blast (“awesome!”). This person does not have a problem with just doing their work, whatever it is, and going home and living their life. They don’t see, and they don’t care, whether the compulsion to be entertained does any kind of damage to the soul, or makes us less human. Maybe some people can just handle entertainment in a more moderate way. Wallace himself, for instance, had a highly addictive personality, and couldn’t handle fun things because he just found them to be, well, too much fun.
I have grown suspicious over the years of what I’ll call the Office Space Ideology that lots of intellectuals and humanists and liberals adopt: that corporations are evil, that office workers are drones, that it all really is as stupid and wretched and soul-rending as films like Office Space portray it to be. Why? Because most of those people have probably never worked in an office! And yes, they probably would find it to be drudgery. But maybe for people of a different sensibility, that’s not what it is. Maybe they are just better at accepting things for what they are–that, as Matthew Crawford puts it in his thoughtful and important meditation on the value of work, work is necessarily toil and serves someone else’s interests. And so rather than futilely try to fuse work and play, erect a separation of powers: work is the realm of necessity, play is the realm of freedom. And that reminds me of something Wallace said in a different context, when he was interviewing a pro tennis player: “I am almost in awe of his ability to shut down neural pathways that are not to his advantage.” People who are well adjusted are better at adapting to the reality of American life, which in some important ways overlaps with reality itself.
And let’s face it, the American Pragmatist is sometimes spot on about the Euro-humanist’s posturing, pedantry, and pretentiousness.
Maybe one reason that Euro-humanists disdain things like gamification is that their attachments to an idyllic past and an ideal future create such a sense of loss, longing, disappointment, and frustration that the escape and pleasure provided by games et al. is an irresistible narcotic. The crucial question is, whose sense of reality is more warped?
[Reposted from the following discussion thread]
Discussions about education these days often reference something called “gamification”: the use of games or game-like structures to enhance learning.
On the one hand, I see the appeal: rather than fight the forces affecting students’ behavior outside the classroom, harness them and integrate them into the learning process. “Badges” will replace “grades,” “competence-based learning” will replace degrees, etc. Now, earlier iterations of online learning may well fall prey to the diploma mill problem (a piece of paper saying you can now do what you could already do), but it sounds as though the next generation of online learning tools will be more sophisticated: they will be able to empirically demonstrate that student x has learned skill y to do job z. And they achieve that result through an engaging process that motivates students with gamified learning modules (like a video game) and takes less time (more efficient) than the traditional course/degree model.
But what unsettles me about this, from something like a sociological perspective, is that it turns everything into a “game”–the game of professional advancement and money-making that people will be playing for most of their lives, of competing and achieving and winning, will become seamless with the educational sphere. It feeds into the hyper-competitive culture we are becoming more and more each year.
Moreover, the shift from text-based to image-based learning seems to be a kind of surrender to our culture, which has been image-based for a long time. In my view, one of the chief functions of education is to give students the tools to RESIST and challenge and criticize the present culture–to give them a chance to be an individual. And so gamification seems like another stage in the subsumption of education by corporate values: “fun” on the outside (infotainment), soul-eroding on the inside. All to equip students with 21st-century skills so that we can “beat China,” or whatever.
But to challenge THAT: video games aren’t what they used to be. Many involve sophisticated cognitive tasks. So part of the gamification craze is a challenge to the highbrow, elitist prejudice that only book smarts and book learning are real smarts and real learning. There is a parallel here to the time-lag in critiques of capitalism: I wonder whether Marxist, or Marxish, intellectuals are ragging on a form of capitalism that was, well, creatively destroyed ages ago–not that capitalism is perfect, but 21st-century capitalism is an importantly different animal. They might retort that it is still the same SPECIES–inherently, structurally unjust and exploitative and dehumanizing, and so on–which is an essential debate to have.
My initial post, and the included infographic from OnlineColleges.net, generated a great discussion thread over at the Unemployed Philosopher’s Blog. Reposting it here.
Paige Harris has an informative piece over at Online PhD Programs on some best practices for landing an academic job. Despite one factual error–Paige claims that academia has long been “unscathed” by the vicissitudes of the economy, when in fact the job market, at least in the humanities, has been abysmal since the 1980s–I think it is all sensible advice, though I would say that the understated tone of the piece may be misleading to those in or aspiring to graduate school.
Paige writes: “There’s no doubt that building a career in academia is a challenge these days, but it can be done.” There are challenges, and then there are challenges. Running a 6-minute mile is a challenge for many, but pretty much anyone can do it if they discipline themselves. Running a 4-minute mile is nearly impossible; it depends not just on an unusual degree of hard work and determination, but on winning a genetic lottery. As someone who has just hazarded the punishing fire of the academic job market and lived to tell the tale, while I would not equate landing a tenure-track or secure position in academia with running a 4-minute mile, it’s not far off. I will recount my harrowing tale–which, I can assure you, has a most happy ending–in a future post, “There and Back Again.”
A couple of things that Paige does not mention (and, to be fair, need not mention, as her subject is simply HOW to get a job) are how “success” is measured in academia, and whether “success” is really as desirable as wide-eyed graduate students tend to believe. First, reading her post, you might be wondering how, with all of the energy that goes into “packaging” yourself, you ever find time to focus on the ACTUAL job: teaching, thinking, reading, and writing. As I recently told a group of young graduate students, the first rule of the job market is also the first point in Rick Warren’s The Purpose Driven Life: “it’s not about you.” It’s about a persona that you will create that will, hopefully, be selected in the lottery that we call the academic job market. This is essential not just for marketing purposes, but for maintaining mental health.
But this marketing does not end once you get a job. As you will see–through attending conferences, publishing papers, and getting your hands dirty in department politics–academia is a game: everything is not as it seems. This is partly why Frank Donoghue claims, in his book The Last Professors, that today’s academic is less an intellectual than a kind of salesman. I will be blogging on Frank’s book–and, hopefully, interviewing him via podcast–in the coming months.
Second, as Paige rightly emphasizes, people need to think very, very carefully about whether they really want it–and what they’re really signing on for, financially, geographically, socially, and professionally. To paraphrase Ian Malcolm, the eccentric mathematician from Jurassic Park: “Academics are so focused on whether or not they can get a job, they never stop to think if they should.”
I will be posting about these and related issues in the coming weeks.
David Brooks has, I think, made progress in the discussion about MOOCs and online education. His central idea is that given the increasing sophistication and decreasing cost of online learning as a delivery mechanism for technical knowledge and skills, universities can no longer cling to a business model in which they charge a small fortune to impart technical skills. As Brooks flatly states, “There will be no such thing as a MOOC university.” One thing they can do–perhaps with a somewhat lower price tag–is specialize in the acquisition and development of practical knowledge and skills–the “Practical University”:
So far, most of the talk about online education has been on technology and lectures, but the important challenge is technology and seminars. So far, the discussion is mostly about technical knowledge, but the future of the universities is in practical knowledge.
Practical knowledge is not about what you do, but how you do it. It is the wisdom a great chef possesses that cannot be found in recipe books. Practical knowledge is not the sort of knowledge that can be taught and memorized; it can only be imparted and absorbed. It is not reducible to rules; it only exists in practice.
While Brooks’ notion of “practical knowledge” is a bit thin (column-sized), the point is important. What makes all of this possible is the “flipped classroom.” While humanities teachers have generally pooh-poohed EdTech, the flipped classroom is a game-changer. Lectures on Plato, colonialism, and Melville can now be placed online (and software can check to make sure students are watching them), while class time can be used exclusively for seminar-style interactions in which students can develop prized social skills. As Brooks notes,
Think about Sheryl Sandberg’s recent book, “Lean In.” Put aside the debate about the challenges facing women in society. Focus on the tasks she describes as being important for anybody who wants to rise in this economy: the ability to be assertive in a meeting; to disagree pleasantly; to know when to interrupt and when not to; to understand the flow of discussion and how to change people’s minds; to attract mentors; to understand situations; to discern what can change and what can’t.
Let’s face it: where and when do we deliberately try to develop these “soft,” “people” skills? One might carp at Brooks’ choice of a corporate example–a critic might say that this just turns university seminars into a lab for “behavior modification”–but we can view his point more expansively: universities taking this approach are helping to develop the whole person; in that sense, they could become more congruent with the original liberal arts ideal.
Whereas before professors had to (often awkwardly) balance lecture and discussion, now they can have a clearer division of labor. I can testify to the challenge of “getting through” lecture–transmitting the ideas, interpretations, facts, etc., that you want to highlight from the reading–to get to what, in my heart, I consider the real business of teaching: the conversations that you foster and facilitate in the classroom.
Brooks explains how technology might be used to enhance the classroom environment:
The goal should be to use technology to take a free-form seminar and turn it into a deliberate seminar (I’m borrowing Anders Ericsson’s definition of deliberate practice). Seminars could be recorded with video-cameras, and exchanges could be reviewed and analyzed to pick apart how a disagreement was handled and how a debate was conducted. Episodes in one seminar could be replayed for another. Students could be assessed, and their seminar skills could be tracked over time.
In this way, technology can create the space in which a stronger sense of community can take root in the classroom. Moreover, in reviewing their performance on video, students would be able to see how they appear in public. This would make them uncomfortable in the very way that we want them to feel uncomfortable.
The general sense in these sorts of discussions is that all of this EdTech stuff is bad news for humanists. However, notice that the technical knowledge sounds like stuff that robots can do; as Kevin Drum details, the long-imagined future of the robot worker is not too distant at all. This might lead to a cruel irony: online learning is maturing–through gamification, analytics, adaptive learning mechanisms, and so on–at around the same time as automation. What is the sense in equipping the masses with all of these technical skills if robots are just going to perform the jobs those skills are suited for? Then, you might say, people should be trained to build the robots and do the programming and engineering, etc. But the reality is that only so many people will be needed for this kind of work. All of which raises the question: just what the hell are all of these people going to do for a living?
But this might put humanists in a surprisingly good position. Daniel Pink, one of the new darlings of the business self-help industry, has argued that Right Brainers will rule the future. And indeed, Forbes recently listed the Top 10 In Demand Skills in 2013–check out the top four. What is driving this? I think it’s the fact that life in our new Technopolis is creating problems and raising questions that are not scientific and technical problems and questions.
My chief concern with Brooks’ proposal is not about substance, but about scale. It’s easy to imagine something like this going on at Harvard et al. But at Wannabe University?
One more point the Bloomberg article raises pertains to the plight of adjuncts. Though SNHU’s online program was initially supported by adjuncts getting paid the usual pittance, it has generated enough revenue to hire full-timers to do more (and, hopefully, eventually, most) of the teaching. This may be a way to break the fatal logic of the adjunct dilemma as it exists at (solely) brick-and-mortar universities. Not only would schools have the resources to ensure that many, most, or all of their on-site teachers are full time, but adjuncts could still teach part-time, and do so more comfortably, without having to shuttle from campus to campus–a major drain on time, money, and mental health.
Of course, the true adjunct dilemma is faced by the teachers themselves, not the administrators. My fellow blogger Dan Mullin recently shared his ambivalence about going back to adjuncting after a hiatus.
Looks like Southern New Hampshire University has devised a nimble business plan.
This is just the kind of “disruptive innovation” that Harvard Business School professor Clayton Christensen has predicted: in John Hechinger’s words, “the process by which companies at the bottom of the market use new technologies to displace more established competitors.” The attack comes, not from the front, but from the side: from the other two raptors you didn’t even know were there. It is exactly the kind of thing that universities that wish to survive will need to do if anything like Christensen’s Prophecy comes to pass: that in 15 years, HALF–that’s right, half–of the institutions of higher learning in the United States will be gone. (More on Christensen’s Prophecy–and the coming Avalanche–later…)
This may well be a viable pathway–and the only pathway–for middling universities attempting to surf and survive the volatile seas of the EdTech Era. Frank Donoghue, whose essential book I’ll be plumbing in upcoming posts, thinks that the lasting mark on higher education left by the first generation of online for-profits will not be the companies themselves, but the selection pressure they exert on traditional institutions of higher learning:
“The real legacy of this industry, I believe, is its lasting and widespread influence on traditional universities. Whatever the fate of specific campuses of the University of Phoenix, Career Education, or DeVry, these companies have demonstrated that it is possible to operate a university as a business…. The business model for higher education devised by the for-profits has tremendous appeal to administrators and lawmakers in an era of steadily declining public funding and tuition increases that are quickly becoming prohibitive.”
Donoghue thinks that the majority of non-profits will be torn asunder by the cross pressures of vocational for-profits, which lead to jobs, and elite nonprofits, which leverage prestige. Large state research universities, he thinks, have largely lost their way, unable to decide what their mission and role in society really are, and thus plagued by “mission creep.” This is the arms race that Christensen terms the “bigger and better” virus that has infected academic administrative culture. However, the model of SNHU may well offer them a middle way: the profits from an online apparatus that offers primarily vocational training can be funneled back to the leafy host campus in order to boost its prestige. The challenge facing universities that take this path is, in part, one of perception, as Hechinger relays:
“Even some of the beneficiaries of Southern New Hampshire’s online push are uneasy. John Wescott, a 19-year-old sophomore at the physical campus, expects to graduate with only $15,000 in student debt thanks to financial aid. Yet he recalls a spirited discussion at a student-government meeting: ‘There was a sense that we were turning into the University of Phoenix and the value of our degree was going down.'”
Thus the “threat to Harvard” I discussed yesterday comes not just from the for-profits themselves, but from the effects they are likely to have–and are already having–on the non-elite, traditional universities. But, again, let’s be clear: Harvard feeling “threatened” is like the prom queen who is insecure about her appearance.
Check out this infographic on MOOCs posted over at http://www.onlinecolleges.net:
I will have more to say about the developing debate over MOOCs later, but at first blink, I have two impressions based on everything I have read:
The Good News: MOOCs will disseminate the highest quality education to the poorest people. As I noted in a previous post, and as Thomas Friedman has pointed out, whatever the fate of MOOCs in higher ed in the developed world, one unadulterated good they provide is giving people in the developing world a chance to acquire the knowledge and skills they will need to have a fighting chance in the 21st century economy.
The Bad News: The new strains of premium MOOCs being devised and piloted by the elite universities–the Big Three players listed in the graphic above–threaten the other players in the higher ed ecosystem: for-profits, non-profit 2nd- and 3rd-tier private schools, and non-profit state universities. Harvard et al., fueled by virtually unlimited coffers, can BOTH kick butt in the arms race for prestige, and leverage that prestige to dominate the online landscape, thus further weakening the hand of mainstream, “middle class” universities. Indeed, (ironically) Harvard economist David J. Collis predicted as much; in The Last Professors: The Corporate University and the Fate of the Humanities, Frank Donoghue explains Collis’ prescient speculation:
“[Collis] speculates that these top universities, made all the richer by capitalizing on their brand names to market ‘basic lectures and courses’ online, could then ‘shift back to the tutorial system to differentiate their on-campus education’ experience. They will, in other words, offer convenience to one market of students and prestige to another.”
They will, in other words, corner the markets for both the Technical University and what David Brooks has recently called the Practical University. I will treat Brooks’ proposal–which seems correct but salutary in a depressingly restricted sense–in a separate post.
But one thing to notice is the story behind how Harvard made the decision to MOOC forward. As Nathan Heller recently reported in the New Yorker,
One day in February, 2012, a social scientist named Gary King visited a gray stone administrative building in Harvard Yard to give a presentation to the Board of Overseers and Harvard administrators. King, though only in his fifties, is a “university professor”—Harvard’s highest academic ranking, letting him work in any school across the university. He directs the university’s Institute for Quantitative Social Science, and he spoke that day about his specialty, which is gathering and analyzing data.
“What’s Harvard’s biggest threat?” King began. He was wearing a black suit with a diagonally striped tie, and he stood a little gawkily, in a room trimmed with oil paintings and the busts of great men. “I think the biggest threat to Harvard by far is the rise of for-profit universities.” The University of Phoenix, he explained, spent a hundred million dollars on research and development for teaching. Meanwhile, seventy per cent of Americans don’t get a college degree. “You might say, ‘Oh, that’s really bad.’ Or you might say, ‘Oh, that’s a different clientele.’ But what it really is is a revenue source. It’s an enormous revenue source for these private corporations.”
HARVARD feels threatened? Are you serious? One is reminded of the bizarre phenomenon in recent American politics, in which the RICH plead that they are under attack by the “takers.” Whereas under “normal market conditions,” the only class reasonably contemplating any kind of protest and revolt would be the lower and working classes, in today’s bizarro world of Gilded Age income inequality, the people at the top are so out of touch with reality, so insecure about their position at the top–perhaps haunted by a kind of “thriver’s guilt” fueled by the deep-down knowledge that they did not really earn it, but won a cruel lottery–that they deceive themselves that they are under attack. It is not enough that Harvard win the prestige game; it is not enough that it be the richest (with an endowment of–take a deep breath, because I guarantee you are not ready for this figure–over $30 billion); no, it must one-up the “1.0” for-profits (University of Phoenix, et al.) by leveraging its brand name with one hand, and undermining the strapped middle class state universities and struggling 2nd- and 3rd-tier private universities with the other.
This is a seriously incomplete and somewhat ranty account, and there is much more to the story–and, I think, more Good News than what I noted above–but it’s a perspective that needs to be laid out on the table and reckoned with.
After a prolonged hiatus–due almost exclusively to the interminable demands of the mad campaign of the academic job market–I am finally returning to blogging. Over the next several weeks, I’ll be exploring the supercluster of issues orbiting education, technology, and the rapidly evolving relationship between them (so-called “EdTech”).
Along the lines of education, I’ve been working my way through several of the most recent screeds on and exposés of higher education. I’ll be trying to sort through issues such as the following:
- Corporatization of the university
- The future of tenure and the nature of academic freedom
- The very idea of a public intellectual in the 21st century
- The so-called “skills gap”
- For-profit universities
- online education and “MOOCs” (Massive Open Online Courses)
- The role of Big Data in higher ed
- Student loans and the prospect of a higher ed “bubble”
- Changing student demographics
- The psychology and culture of academia
One of the most fascinating things I’m coming up against in this research, again and again, is how ignorant many academics, particularly humanists, tend to be about the conditions of their labor (as well as their reluctance to recognize what they do as labor), about how the university works, about the macroeconomic forces operating, as Hegel might say, “behind the back of consciousness.” Our reflexes dictate that we bemoan the corporatization of the university, and scoff at the conservative critiques of tenure, intellectuals, and academia in general, yet we often fail to consider whether these positions have a kernel of truth. What the research suggests–what students and the public suspect, and what more self-aware academics know–is that the university is not what it seems to be.
In much the same way that we continue to refer to something called “the middle class” in America, despite the radically changed and changing economic landscape of the last few decades, and especially the last five years, we continue to cling to a conception of the university that arose in a very different era. It is part of our “social imaginary” and is deeply bound up with our understanding of what it means to be a successful, middle class American–which, for many of us, sadly, is more or less equivalent to what it means to be a full citizen and, like, an actual human being.
On the technology front, I will be exploring recent critiques of the micro- and macro- roles and effects of technology: in our personal lives, and in our political economy and culture. Jaron Lanier, a founding father of virtual reality and the early web, has emerged as one of the most perceptive and, given his tech chops, authoritative critics of digital culture. Lanier’s most troubling claim is that Web 2.0 and what he calls the worldview of “cybernetic totalism” is not only making it more difficult to be an actual person, but is accelerating the erosion of the middle class set in motion decades ago.
The great danger, he thinks, is that cultural creatives–musicians, journalists, and the like–are canaries in the “data mine”: the first wave of middle class professions to be rendered “redundancies” as more and more jobs are made obsolete by robots, computers, etc. To this list, we can add professors. As Lanier has it, a democracy is not possible without a middle class, but a middle class is not possible unless a society is structured to provide sufficient opportunities for wealth to be amassed by more than the infinitesimally small number of people at the top. The symbolic numbers of Occupy Wall Street point toward what Lanier considers the barely distant future: in our new technopolis, there are the Lords of the Cloud, and the digital peasants. Digital technology, the child of a democratic society in which prosperity was widely shared, is coming to undermine the bulwarks of the society that spawned it.
While Lanier focuses more on the political, economic, and social dimensions of tech, Sherry Turkle, MIT sociologist, zeroes in on how tech might be harming our psyches and our relationships. Her central concept–that in the new, hyperconnected world we are always and everywhere “alone, together”–points to the dark side effects of technology, and the ways in which we have become addicted–like the humans incubated in the Matrix, or the prisoners in Plato’s Cave.
And that, it seems to me, is what connects these two great themes of education and technology: they so pervasively define the contours of life in today’s world, yet their recent pasts are so unknown, their present effects are so hard to pinpoint, and their likely futures are so difficult to predict. They constitute such a crucial part of our contemporary Cave. The great task, then, is to patiently, persistently grapple with them.
In his famous Allegory of the Cave, Plato inquires into “our nature as it concerns education.” These days, education is a hot-button issue, and with good reason: from concerns over “teaching to the test” in elementary school, to deficits in basic reading and writing skills, to skyrocketing tuition and crushing student loans, to the corporatizing of the university, to the rise of online education–education is in a state of dysfunction, disrepair, and decline. Indeed, the title of the most popular recent documentary on education is apt: “Waiting for Superman.”
These problems raise questions about precisely what education is for, what it means, and in what it consists. Why is education such a difficult problem in American life? In modern life? In life itself?
Please join us as we delve into these and other thorny questions!
A belated thanks to all those who took part in our second Socrates Café a couple weekends ago. This time we had a smaller group and a somewhat more intimate discussion that centered on the effects technology is having on our everyday lives and innermost minds. Our conversation ranged over a swath of issues: the positives and negatives of social media, the incentives for children to approach relationships transactionally, digital reflexes, boredom, distraction, online dating, and more.
We also got a couple suggestions for how to improve the event:
-Distribute a short reading to the group beforehand that touches on the topic at hand, so that everyone has a common base to launch from
-Tilt more toward divisive or at least controversial issues in order to spark more spirited debate and avoid a bland consensus
-Recommend some additional philosophical literature on the subject
I will keep these in mind in planning for the next event, but per the last suggestion, I want to post a few readings for those who’d like to learn more:
1) Martin Heidegger, “The Question Concerning Technology”. Heidegger’s classic essay on technology is noteworthy for his (at first) strange thesis that the question concerning technology is not technological. That is, technology is not really “the stuff”–the computers, iPhones, planes, trains, and automobiles–but rather a way of seeing, knowing, disclosing the world: it is a way the world is presented to us. It is not a purely human artifice, but one dimension of the world that, in the modern age, has been blown out of proportion such that it crowds out and obscures other modes of appearance. While not intrinsically an evil or a negative force in our lives, the danger with technology is that we will come to see ourselves in terms of it; that, as Emerson put it, “things are in the saddle, and ride us,” such that we forfeit our freedom and humanity in our attempt to gain control over our lives.
2) C.S. Lewis, “The Abolition of Man”. Following up on the last point, Lewis questions the long-term goal of modern secular humanism and the modern scientific research project–which, he argues, is to gain total control not just over nature, but over human nature. The danger is that, in such a world, our only polestars for what counts as progress are our desires–our instincts–rather than some transcendent moral order, such as the Tao, Natural Law, God. As such, Lewis concludes that, in our attempt to use technology as, in Freud’s phrase, a “prosthetic God,” our victories over nature are really nature’s victories over us.
3) Ray Kurzweil, “The Singularity is Near”. Kurzweil is the intellectual prophet of Silicon Valley. A distinguished and brilliant scientist, he holds radical views on the telos of technology that can be roughly distilled into the following equation: Hegel + evolution + technology + the Matrix = the cosmos. Put differently, technology is the continuation of evolution by other means, and technology is developing at an accelerating rate. Soon, with the birth of AI, evolution will reach a new stage, and the changes that will be wrought not just in human life but in the universe are so disruptive and unimaginable that this singularity is like an eschaton, a point of no return, the edge of a black hole–what lies on the other side is inconceivable from our present standpoint. But Kurzweil insists it is good.
4) Jaron Lanier, “You Are Not a Gadget”. A scion of Silicon Valley, Lanier plays the puckish trickster to the pantheon of Gates, Jobs, and Zuck. In this polemical text, he argues that the internet and digital technology are gradually corroding the human spirit and dealing away our dignity, one click at a time. Like Heidegger, he fears the ways that technology warps our minds and constricts our engagement with others and the world around us, offering up a form of false consciousness in which we imagine we are free and following our heart’s desire, a state he calls “digital Maoism.”
Finally, I encourage everyone to visit TED.com (Technology, Entertainment, and Design), which contains a cornucopia of short talks on tech.
If you have any recommendations, please post them here and/or on MeetUp!
I will be in touch soon about our next MeetUp, which will be in late February. I plan to lock down a more commodious venue.
Thomas Friedman, ever the technological optimist, heralds the coming revolution in online education.
There is a kind of Hegelian strain in Friedman’s boosterism for neo-liberalism and globalization; not the state, but the free market is the march of spirit on Earth. Any nasty consequences are just the acceptable side-effects and bugs of the beta version of something that will surely be perfected in the next iteration or software update. Though Friedman’s natural optimism sometimes gets the better of him, his point about the potential impact of online learning in so-called developing countries is hard to deny. This, coupled with increasing access to nimble tools like micro-finance, may well give people in the poorer countries and forgotten places of the world more opportunity to improve their lives.
We often discuss the merits and demerits of online education in the context of life in the developed world. While this is surely an important discussion to be having, it may blind us to the prospect that the most far reaching, world-historical effect of online education may be felt not by us, but by those still struggling to secure basic needs.
My friend and colleague Dan Fincke just posted a reflection on his own journey through the twisted funhouse of the academic employment market. Dan’s energy and passion–as a teacher and a blogger–have for years simply dumbfounded those of us who know him; his efforts are über-human, and in this way he is true to the ideal of his favorite philosopher, Nietzsche.
Dan’s situation is a symbol for what is wrong with professional philosophy. In much the same way that Andrew Sullivan–one of Dan’s role models as a blogger–has led the charge in upsetting the conventions and exposing the limitations of traditional print journalism, Dan is leveraging the new medium of the blog to do philosophy in a way that is accessible, interesting, relevant, and important for a broader audience. I don’t say “popular” audience because that carries the whiff of “pop culture,” which spells “dumb.” But today’s popular audience, in some parts of the country and the world, at least, no longer spells dumb. When academics turn their noses up at “popular” writing and venues, I think they have a 19th century vision of a semi-literate hoi polloi a world removed from the elite bastions of oak-adorned studies and sophisticated salons. But Dan, like an increasing number of younger academics, smells the rot and decadence that infects this way of thinking and this way of doing philosophy. Again, like his intellectual hero, Nietzsche, Dan is finding a way to do philosophy outside the confines of academic scholarship. And it should concern us that the 20th century was the first in which almost all the major philosophers were academics. I heard a talk recently where a scholar argued that philosophy has always done better as a parasite (gadfly?)–when it uses something else as fodder for reflection, be it new developments in science, culture, technology, or politics. Whenever it tries, or pretends, to become its own thing, it retreats into a sorry sort of solipsistic solitude, a cloud of self-important knowingness; a retreat fueled by fear and insecurity. Voltaire’s Candide is precisely a mockery of this tendency–Dr. Pangloss (literally, “all words”) is the caricature of this mindset.
Not to be confused with the “Law of Attraction,” the concept peddled by the best-selling self-help New Age book and film, The Secret: the idea that, if you just want something hard enough—“I think I can, I think I can”–it will eventually come into your life. Taken literally, of course, this is plainly stupid and easy to mock. But the book wouldn’t be so successful if it didn’t contain a kernel of truth. The message resonates with people because it taps into a brute and basic psychological truth: people who are open and optimistic will generally attract the people and opportunities that get them what they want and where they want to go. It’s not a law of gravity, but a pragmatic strategy to help us navigate life.
One other such strategy is what we might call the Law of Subtraction. We can come at this concept by defining it in terms of what it’s not: the Law of Addition, which rules our lives more often than not. What is the Law of Addition?
Please join us for our second MeetUp! RSVP
Our topic: “What is technology doing to our society?” Digital technology is rapidly and radically changing just about everything we do. As Emerson said, “things are in the saddle, and ride us.” Whether we see this spreading as a wildfire, a disease, or a wave of freedom–or as just really cool–I think we can all agree that it’s simply a fascinating phenomenon. How are different technologies–medical, transportation, communication, information–changing our lives, for better or worse?
Please come join us for a Sunday afternoon of collective inquiry!
If you’d like to learn more, check out my website at http://www.davidestorey.com
*If you plan to attend, please be sure to patronize our generous host, Sit and Wonder Café.
**If you would like to suggest discussion topics, please let me know.
***Space is limited. I am exploring an alternative venue that can accommodate more members of our growing group. Stay tuned!
Thanks to all those who attended our first Socrates Café Brooklyn, “What is Success?” It was a real pleasure meeting all of you, hearing your stories and struggles, and peeling back the veneer of our conventional views on success to try to approach the heart of the matter. I think we often fail to realize the power and importance of throwing ourselves into dialogue with people from different walks of life and suspending, if only for a few minutes or a couple of hours, our basic assumptions about ourselves, our trajectory in life, and our view of the world. It is not easy–indeed, in our discussion, we hit a few bumps in the road and the engine stalled a few times; but confusion is the crucible of a higher, deeper, rounder form of consciousness. And we had some unpleasant exchanges; it became clear pretty quickly that the philosophical is the personal. But overall, I think we had a good first showing and I look forward to our next meeting in January.
Some highlights from our discussion:
Academic philosophers, pressed to explain their unique contribution to the university, how they “add value,” why they are relevant, and so on, often fumble about, and the first thing they seize upon is the good old, tried and true “critical thinking.” Ironic as it sounds, today’s academy isn’t all that interested in their critical thinking prowess. But as it turns out, they may be fumbling in the wrong place, all while sitting on a pile of gold. In its forecast of hiring practices for 2013, Forbes puts critical thinking at the top of the list. In fact, philosophical habits of mind dominate the list: complex problem-solving, judgment and decision-making, and active listening round out the top four.
Forward-thinking business leaders have been singing this song for years: technical know-how is more downloadable than the supple habits of mind needed to deal with ambiguity and complexity, integrate concepts, perspectives, and data across domains, and see the bigger picture. As Dov Seidman has argued, in today’s new economy, it doesn’t just matter what you can do, but how you do it, and philosophy is uniquely-suited to help us navigate the new normal of hyper-complexity, hyper-connectedness, and hyper-transparency:
Philosophy can help us address the (literally) existential challenges the world currently confronts, but only if we take it off the back burner and apply it as a burning platform in business. Philosophy explores the deepest, broadest questions of life—why we exist, how society should organize itself, how institutions should relate to society, and the purpose of human endeavor, to name just a few.
Credit, climate, and consumption crises cannot be solved through specialized expertise alone. These problems, like most issues businesses confront in the global marketplace, feature complex interdependencies that require an understanding of how political, financial, environmental, ethical, and social interests influence each other. A philosophical approach connects the dots among competing interests in an effort to create synergy. Linking competing interests requires philosophers to examine areas that modern-day domain experts too often ignore: core beliefs, ethics, and character.
Perhaps we might amend Plato’s dream of the philosopher-king: that the world will limp on until philosophers become CEOs, or CEOs become philosophers. Bodhisattvas must become businessmen.
What leadership looks like:
On Thursday afternoon, on Day 2 of the Council of Graduate Schools’ annual meeting here, Michael F. Bérubé was scheduled to give a plenary address titled “The Future of Graduate Education in the Humanities.”
“There is no way to talk about the future of graduate education in the humanities without talking about everything else involved in the study of the humanities,” he told a rapt audience of about 700 graduate deans, most of whom were not from humanities fields.
Mr. Bérubé opened his remarks by saying that every aspect of graduate education in the humanities is in crisis, from the details of the curriculum to the broadest questions about its purpose. “It is like a seamless garment of crisis, in which, if you pull on any one thread, the entire thing unravels. It is therefore exceptionally difficult to address any one aspect of graduate education in isolation,” he said.
Among the problems he cited were high attrition rates among graduate students, the many years it takes students to get their degrees, the need to revise the content of graduate courses so that students are prepared for jobs outside of academe, whether alternative forms should replace the traditional dissertation, and if some programs should be reduced in size or eliminated altogether.
Mr. Bérubé also noted the glut of Ph.D.’s in the academic-job market and the 1.5 million people now employed as adjuncts, with no hope or expectation of ever getting a tenure-track position.
“For what are we training Ph.D.’s in the humanities to do, other than to take academic positions in their fields?” Mr. Bérubé asked the audience. “What does one do with a Ph.D. in philosophy or history, other than aspire to teach and conduct research in philosophy or history?”
The great task of the current generation of graduate students and early-career academics is to answer that question–together. The university system cannot save them.
Lenny Cassuto offers one answer:
What if we reconceived the guiding assumption that Ph.D.’s are supposed to become professors? As the Versatile Ph.D., a Web site dedicated to alternative careers for Ph.D.’s, pointed out in a comment to me, “Recognizing nonacademic placements as legit communicates a much more positive message about the skills and abilities that are nurtured by graduate education. It affirms the value of the entire enterprise.”
But it also throws a bone to administration. If graduate programs were tricked out with nonacademic job-training programs and workshops; if they forged partnerships with university career-services offices, AltAc alumni, and administrators; talked openly about applying PhD training and skills, rather than relegating these conversations to the shadows; or incorporated internships and service learning into their programs–if they did any or all of these things, then graduate schools would gain a competitive advantage. They could say to prospective students: “We don’t just place our graduates in tenure-track jobs. We prepare them for a whole host of careers in different sectors.” A healthy culture is one capable of criticism, reform, and adaptation–that is how institutional metabolism works. But as Cassuto points out, cultural change can only happen if it starts at the academic equivalent of birth:
That affirmation has to begin at the earliest stage of graduate school. Professors need to shape students’ expectations before they enter graduate school—which means more transparency about their career options. And we need to shape students’ expectations while they’re in school about what’s waiting for them afterward. Most important, we need to alter their training accordingly, to prepare them for the full range of jobs they will be able to get.
The system only gets fixed from the inside, granted. But I worry that Cassuto’s solution is only a rearguard action that eases the passage of the current generation of graduate students but concedes that the war is lost: admissions will be cut and programs will close, and “becoming a professor” will no longer be a legitimate career path.
In any case, if present trends continue, I think we’re likely to see three species of PhDs: the few Elites idling in Ivy Heaven, the many Plebs toiling away in Adjunct Hell, and the plucky, creative NACs who parlay the PhD into something new.
This week, both The Chronicle of Higher Education and the New York Times added to the mounting conversation about the status, role, and nature of college in our present moment–this time from the teenage perspective, proposing two alternatives to the traditional high-school-to-college conveyor belt: later or never.
Roger Steare, an organizational ethics professor in the UK, guides organizations in the private and public sectors: “Ethics is no longer optional, it is absolutely crucial to the sustainability and success of our businesses, our public-sector services and every other institution and enterprise.”
More at his organization, Ethicability.
(Chart by James Lawrence Powell)
In the last two posts, I broached the question of what long-term, structural effects online learning will have on higher education. At Thanksgiving, I spoke a great deal with my two nieces, who are getting ready to go to college next year, and their parents, about the myriad dimensions of the process. Like health care, college has become one of the most complicated, and most anxiety-inducing, pieces in the puzzle of modern life, not least because they are the sectors in which costs make a mockery of inflation. Indeed, with the election over, I’d wager that families discussed these issues more than maybe any others.
As we’ve seen over the last decade, industries we considered staples of life in the modern industrialized world–music, journalism, and retail–were radically disrupted and transformed when the world became Flat. This year, the New York Times has declared 2012 the Year of the MOOC (Massive Open Online Course), with three flagship providers pioneering the new platform:
I want to follow up and throw into the mix two other perspectives I’ve come across in the meantime:
- Robert Koons, a professor of philosophy at the University of Texas at Austin. Though Koons does not explicitly discuss online learning or MOOCs, his scathing, Closing-of-the-American-Mind-ish critique of the modern university–which he considers the most corrupt institution in modern society–casts light on spiritual, intellectual, moral, and economic weaknesses in the status quo that make the university vulnerable to the digital disruption.
- Clay Shirky, NYU new media guru, one of the closest things we have to a public intellectual. Essentially, Shirky seems willing to bet his tenure that early MOOC platforms like Udacity are tantamount to Napster, and that over the long haul online learning will indeed do to higher education something like what the mp3 did to music.
The NY Times’ fascinating report on the rise of the MOOC raises questions about what we might call the “Professors of the Future”:
Udacity courses are designed and produced in-house or with companies like Google and Microsoft. In a poke at its university-based competition, Dr. Stavens says they pick instructors not because of their academic research, as universities do, but because of how they teach. “We reject about 98 percent of faculty who want to teach with us,” he says. “Just because a person is the world’s most famous economist doesn’t mean they are the best person to teach the subject.” Dr. Stavens sees a day when MOOCs will disrupt how faculty are attracted, trained and paid, with the most popular “compensated like a TV actor or a movie actor.” He adds that “students will want to learn from whoever is the best teacher.”
The implications are enormous, and difficult to sift through.
Apparently 2012 brings not only the return of Quetzalcoatl and the Mayan Apocalypse, but, according to The New York Times, the Year of the MOOC (Massive Open Online Course).
The paint is barely dry, yet edX, the nonprofit start-up from Harvard and the Massachusetts Institute of Technology, has 370,000 students this fall in its first official courses. That’s nothing. Coursera, founded just last January, has reached more than 1.7 million — growing “faster than Facebook,” boasts Andrew Ng, on leave from Stanford to run his for-profit MOOC provider.
“This has caught all of us by surprise,” says David Stavens, who formed a company called Udacity with Sebastian Thrun and Michael Sokolsky after more than 150,000 signed up for Dr. Thrun’s “Introduction to Artificial Intelligence” last fall, starting the revolution that has higher education gasping. A year ago, he marvels, “we were three guys in Sebastian’s living room and now we have 40 employees full time.”
“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”
What does the MOOC mean for the future of the traditional university? The $20 million question–or, perhaps more accurately, the $50K/year question–is whether digital technology will do to higher education anything like what it did to the music industry. A decade ago, few would have thought that a computer company would replace the record store; but here we are.
What might a tipping point look like?
As Sandy approached, the media compared the storm, in its scope, rarity, and composition, to the three-headed monster that hit south of Nova Scotia in 1991 and was featured in the best-selling book and feature film, The Perfect Storm. In addition to the multiple meteorological elements, many began almost immediately to speculate about the human element magnifying the storm’s disruptive power: next week’s election. But there is another dimension to the storm that makes the moniker, “perfect,” even more apt: climate change.