

Over at Adjunct Rebellion, a scathing assessment of MOOCs:

While the last 20 years of academia have seen these two destructive practices aimed at the professoriate, it hasn’t been until lately that the threat is driven by the internet — in the case of academia, in the form of MOOCs that are now looming enormous, casting monstrous shadows over the college campus. The MOOC model, from the standpoint of the professoriate, is an entirely exploitative one.  The professor designs a class, has lectures and other media support shot and “canned” — and then the university, or the MOOC itself owns that material.  It OWNS the intellectual property of a professor who has trained for, on average, a decade for advanced degrees, who has taught for years and developed skills and abilities.  And, once that particular area of scholarship is canned — who needs the professor, ANY professor, anymore?

This is an example of the rhetoric of crisis I discussed earlier, and let me be clear that I don’t always think that’s a bad or un-useful thing–it just depends on what your goals are.  If your goal is to wallow, then it works great.  If your goal is to get tenure, you’re barking up the wrong tree (these two goals, incidentally, are espoused by the folks over at the Philosophy Smoker Blog, a nest of nattering nabobs of negativism, which openly admits that its focus is to “bitch about” trying to make it in academic philosophy).  If your goal is to make a living, then just quit and do something else (and you CAN do something else).


Cathy Davidson, a professor at Duke, has a great idea:

In January 2014, I will offer a six-week Coursera class, “The History and Future of Higher Education,” free and open to anyone. I’d like to turn the class’ weekly forums into an opportunity for a massive, global, collaborative, constructive, peer dialogue about how higher education got to its current dilemma. And from there, I hope we can come up with some creative, innovative, and workable ideas to make a better future.

A MOOC about MOOCs seems to make a great deal of sense for a few reasons.

For one, it provides a forum for investigating just what a MOOC is, what it can and cannot be, whether and to what extent it does indeed enhance learning, and whether and to what extent and in what ways this can be measured.  If it turns out that such an experiment yields a more nuanced and useful picture of the ontology and application of the MOOC, then this itself would be evidence that the MOOC is a sound design and delivery mechanism.

Second, as Cathy notes,

In the present mood of high polemic, hyperbolic promise, and hysterical panic, it is almost impossible to sort out the questions, let alone the answers to these questions, on either a national or international level: Is now the time to reject or embrace massive online learning? Do MOOCs yield improved learning and free and open access to those who have been excluded from higher education—or are they yet another cynical attempt to defund the public and extract profits from tax payers and diminish the value of what virtually all universally claim to be the public good of higher education?

Crisis rhetoric is seductive but does not have a great signal-to-noise ratio.  A MOOC that took a, well, academic approach to MOOCs might help to dispel the fervor over the MOOC-ment and help people think clearly about just what it is and what it means.

Third and relatedly, much of the chatter about MOOCs focuses on the “disruption” of the status quo, but the storied history of that status quo is rarely excavated.  An inquiry into MOOCs in the context of the history of higher ed might help us see that the notion of Higher Education enshrined in our social imaginary is a historical anomaly made possible by a set of specific events, notably World War II and the G.I. Bill.  The Chronicle of Higher Ed just ran a piece along these lines (though it is paywalled).

I have finally decided to take the plunge:  I have signed up for Coursera’s “Internet History, Technology, and Security” course.  It’s not quite Christopher Hitchens voluntarily trying out waterboarding in order to do his subject justice, but I figure it only makes sense to walk the walk.  Reports forthcoming.


Robert McGuire has profiled a math MOOC funded by the Gates Foundation and launched at the University of Wisconsin-La Crosse that had an unexpected effect:  though it was offered worldwide, it was widely embraced around the state by high schools, and it led to deeper coordination between high school and college students, teachers, and administrators in order to avoid the remedial math trap and close what we might call the “Preparation Gap.”  From McGuire’s interview with two representatives from the university:

McGuire

The way MOOCs are growing I imagine a lot of graduating high school seniors are thinking about using them this summer, whether they’re being driven to it by the necessity of a placement exam or for enrichment or to stay sharp for college. What would you advise a graduating high school senior who’s thinking about taking a MOOC?

Kosiak

A MOOC can be helpful to show what a college course actually looks like, how it’s done and what to expect in their first year of college.

McHugh

Over summer, taking a MOOC is going to help them learn how to be an independent learner, how to study, how to find that internal motivation, how to seek out resources, recognizing that they do have multiple ways they learn, and they need to find that strategy within themselves.

Students might look at what’s aligned with their discipline of study. If someone’s looking at going into a history major, then they might look for some different history MOOCs. They can use the MOOC as a way to find out, “Is this something I am really passionate about and want to study for the next several years of my life?”

This is proof positive of an idea Noel B. Jackson floated, which I mentioned yesterday:  MOOCs not only expand open access to what, for convenience’s sake, I’ll call the Third World (Globalization); they can also strengthen local and regional communities in the (f/c/s, again) First World.  They not only expand the net to wire more nodes; they also deepen the connections around each node.  MOOCs can potentially have “glocal” impact.  In the case of the MathMOOC at UWL, the connections span vertically across the different levels of the education system.  This might take the teeth out of the objections of MOOC skeptics who dismiss MOOCs as Trojan horses for neoliberalism or digital colonialism.

This “localizing” side-effect of MOOCs targets a serious problem that so many college teachers face:  beset with near illiteracy and/or innumeracy in their students, they find themselves asking, “How did these kids get into college?”  This often happens with writing skills. The college teacher faces a dilemma: should I teach them the content, or teach them how to write? If you just teach the content, then a) they aren’t likely to grasp it as roundly, since you can’t cleanly separate the ability to write clearly and the ability to think clearly, and b) you shirk your responsibility as the “last line of defense” before the students get out into the real world bereft of solid writing skills. If you teach them how to write, you’re not teaching the content. And if you try to split the difference, well, as Lao Tzu says, “if you chase two rabbits, both get away.”

Better coordination between high school and college teachers and administrators could help close the “preparation gap” that frustrates so many teachers and short-changes many students.

By the way, MOOC News and Reviews is a treasure trove of information about the cluster of issues orbiting the MOOC-ment.

(image courtesy of http://www.apartmenttherapy.com)


Noel B. Jackson, a professor of literature at MIT, has a thoughtful and balanced take on MOOCs over at “Sustained Inattentions”–he has the advantage of proximity, since he is essentially at one of the two ground zeros of the MOOC movement (Silicon Valley and Cambridge).  He testifies that, in his time at MIT, no issue has arrested the attention of folks in higher ed as much as the MOOC.  His view on the place of MOOCs in current discourse about higher ed is insightful:

“The MOOC has become a repository for utopian and dystopian narratives about the present and future directions of higher ed.”

The rhetoric of crisis and disruption can inhibit us from thinking clearly and carefully about how best to surf this strange new wave.  The utopian and dystopian narratives are, as Noel points out, the views that MOOCs are either democratizing or corporatizing:  that they are either making the highest quality education available to the world’s poor, or they are merely the latest step in the corporatization of the university that has been underway for decades.

Confessing his ambivalence about MOOCs, he points to a possible benefit of MOOCs that I hadn’t heard of before:

“My interest in MOOCs extends to how the format can be imagined to provide access to a university curriculum to populations that may not have had this kind of access, as this is the population that stands to gain most from them. But in addition to the flat, global learning community ritually invoked as the audience for MOOCs, we could benefit from thinking locally too. How can the online course format make possible new relationships not only with the most far-flung remote corners of the earth but with the neighborhoods and communities nearest to campus? Can we make MOOCs that foster meaningful links with the community or create learning communities that cut across both the university and the online platform?”

This is certainly a pressing need at the university where I teach.  Fordham University’s main campus is an oasis-like bubble plopped in the middle of one of the poorest counties in the country, and few of the students venture past the perimeter of its security-saturated environs.  Anything that could facilitate a deeper engagement–heck, any engagement–with the world beyond the walls would be a very good thing; and perhaps MOOCs and other online approaches might facilitate that, though I’m not sure how.

(image courtesy of http://www.apartmenttherapy.com)


Genius. (Analytic)

My favorite:

“Is fictional construct designed to make you feel superior.”

“Will still do better than you on the job market.”

Not quite as funny, but also kind of genius. (Continental)

(image courtesy of memegenerator.net)


In a recent interview over at The Philosopher’s Magazine, Nigel Warburton, co-presenter of Philosophy Bites, the wildly successful philosophy podcast, riffs on his experiments in public philosophy, the problems plaguing philosophical research, and his recent decision to leave academia.  The success of his podcast is proof positive that there is a hunger for philosophy in public cyberspace.  Excerpts below.

The surprising success of the podcast:

The initial thought was that mainly philosophy students and lecturers might take an interest, but he’s heard from American listeners with time to kill on long drives, people waiting out wildfires in Australia, and soldiers in Afghanistan concerned about ethics. When I ask for details over email, Warburton sends me a list of 40 countries, all with more than 10,000 downloads each, some with vastly many more, millions more in some cases. Just after the usual English-speaking suspects, China checks in at number five. The United Arab Emirates, Argentina, Taiwan, Iran and Indonesia make the list. Several spin off series, two books (and a third in the pipeline), more than 250 interviews and an alarming 16.7 million downloads later, and Philosophy Bites is an international philosophy phenomenon.

Warburton explains that he is leaving his secure position at the Open University largely because of the dominance in academia of what he calls “crossword puzzle philosophy” (essentially, what Daniel Dennett has deemed “chmess“):

“Philosophers today have mostly got their heads down. They’re concerned with writing for a journal which will publish work that takes them two or three years, and only five people will read it. These are people who could be contributing to something that’s incredibly important. Gay marriage is just one example of many. I don’t think philosophers responded particularly well to 9/11. Issues about free expression, all over the world, are not just academic. They’re matters of life and death. There are exceptions, but philosophers are by and large more interested in getting a paper in Mind or Analysis than they are in commenting on the major political events of our time.”

On philosophical “research”:

“I’m not even sure what research means in philosophy. Philosophers are struggling to find ways of describing what they do as having impact as defined by people who don’t seem to appreciate what sort of things they do. This is absurd. Why are you wasting your time? Why aren’t you standing up and saying philosophy’s not like that?…  It’s not the kind of thing that Socrates did or that Hume did or that John Locke did…  Why are you doing this? I’m getting out. For those of you left in, how can you call yourselves philosophers? This isn’t what philosophy’s about.”

One is hard-pressed to disagree.  As someone who has been on the job market for a couple of years, I always inwardly cringe when I am asked to explain my “research” to a search committee or a dean.  In a formal sense, research is something that a scientist does in a lab or in the field:  designing and conducting experiments, collecting and interpreting data, and the like.  In an informal sense, it means doing your homework–gathering relevant information–before a meeting, an interview, etc.  Philosophical writing, for the most part, is not research:  it is reading articles and books, thinking about them and the subjects concerned, and then writing what one thinks about them.  Exceptions could arguably be made for “experimental philosophy” and branches of philosophy in dialogue with the sciences, such as the philosophy of mind or of biology, but for the most part, I think it’s a category mistake to think of the reading and writing of philosophy as “research.”  We might view today’s philosophical “research,” largely a consequence of the rise of analytic philosophy and “science envy,” as a new form of scholasticism, a defensive, conservative crouch destined to be consumed by the coming Avalanche (more on this, Higher Education’s equivalent of the Singularity, later…).  (I hasten to add, however, that analytic thought, at its best, provides a needed check against the scholastic excesses, verbosity, and sheer fictioneering of much Continental thought.)

Despite the coming storm, Warburton is ultimately optimistic about the fate of philosophy:

“Because of changes in online teaching, in the next ten years, the university system will be turned on its head. If Philosophy Bites can make such an impact with two guys with a hard disk recorder and a couple of laptops, think what people who fully understand the new technology, who can write code, who can employ the best philosophical communicators around, think what they could produce. It’s only just starting. We’re going to see dramatic changes to how we learn, teach, do research and share ideas. I think philosophy’s future’s very bright.”

Two days ago I asked what an alternative might look like, in light of Leon Wieseltier’s view that philosophy these days only “tweaks and tinkers.”  Philosophy Bites seems to be a solid step in the right direction.

(image courtesy of Philosophy Bites)


I came across a letter I wrote last year to a friend who had inquired about the philosophy of Ayn Rand, and thought I’d repost excerpts of the philosophical content below:

[I had to laugh when I got your message–I was in church of all places.  Next question: what was I doing looking at my phone in church and committing digital blasphemy? Answer: obnoxiously long Catholic ceremony. The supreme irony is that Rand owes her most recent notoriety in American culture to Paul Ryan–a, well, “severe” Catholic and a big Rand fan.


About Rand. Let me take your questions one at a time, but let me be blunt: I think Rand’s philosophy is ludicrous–it is an attractive and interesting philosophy embraced with zeal by adolescents (including high-school me!) first starting to think for themselves, but when touted as a philosophy of life, or as a serious platform for political economy, it is dangerous, historically uninformed, and morally abhorrent. Hopefully my responses to your questions will convey why I think this.


Leon Wieseltier, literary editor of The New Republic, added another entry to the growing genre of commencement speeches targeting technology.  He worries about the shrinking of the humanities in higher education and the culture at large as technology colonizes more and more corners of our lives.  His piece reminded me of T.S. Eliot:

“Where is the wisdom we have lost in knowledge?  Where is the knowledge we have lost in information?”

Leon is taking aim at the values and worldview of Silicon Valley, an ideology that Evgeny Morozov has dubbed “technological solutionism”–the reduction of all problems to technical problems, the notion that technology can fix all things, and the reduction of knowledge to information:

There are thinkers, reputable ones if you can believe it, who proclaim that the exponential growth in computational ability will soon take us beyond the finitude of our bodies and our minds so that, as one of them puts it, there will no longer be any difference between human and machine. La Mettrie lives in Silicon Valley. This, of course, is not an apotheosis of the human but an abolition of the human; but Google is very excited by it.

He is referring, of course, to Ray Kurzweil, the scientist, inventor, and anointed Philosopher Prophet of Silicon Valley who has just been hired by Google.  Leon’s piece is aimed squarely at Kurzweil’s scientism:  the extension of science from a method to a metaphysics, with claims based not on data but on dogma.  There are some who consider Kurzweil the Most Dangerous Man in America.  While Steve Jobs has been raised up as the Great Man of our age, he may end up being overshadowed by Kurzweil, who is on track to become the Father of AI.  I will be addressing Kurzweil’s worldview–essentially, that technology is the continuation of evolution by other means–in future posts.  For now, see Michael E. Zimmerman’s recent reflection on AI from the perspective of Integral philosophy.

Leon’s is exactly the argument that C.S. Lewis made over half a century ago in The Abolition of Man:  man’s modern conquest of nature is really nature’s conquest of man.  Why?  Because when reason is turned into a tool to satisfy our desires, our desires are running the show–but our desires or instincts largely come from nature.  I will return to Lewis’ argument and its connection to modern nihilism in future posts.

One noteworthy thing Leon mentions is the place of philosophy in all of this:

Philosophy itself has shrunk under the influence of our weakness for instrumentality – modern American philosophy was in fact one of the causes of that weakness – and generally it, too, prefers to tinker and to tweak.

What would it mean to not just “tinker and tweak”?  What would that look like?  Why is it so difficult, not only to do, but to even imagine?

I think philosophy has been assigned one of its great tasks for the present age.  If, as Hegel said, philosophy is its own time comprehended in thought, then the great challenge for thought in our time is this: technology, one of our most important matters, is largely about our future, and its grip on our present makes it so hard to reflect on.


[Reposted from the following discussion thread]

Thinking about this and reading your posts, I am reminded of David Foster Wallace’s Infinite Jest, which is emerging as sort of the Big Novel of our era. The story takes place in the not-too-distant future, and the title refers to a film so addictive that it kills the people who watch it; the film, unsurprisingly, is wildly popular.

Wallace was concerned that, in the words of media critic Neil Postman, we are “amusing ourselves to death”–not literally, of course, but psychologically or spiritually. I find this narrative seductive, but I resist it for that very reason. Part of the problem, I think, is that people just have different dispositions. Humanist folks tend to have a European part of their soul: a melancholic affect, a deep suspicion of the popular, the common, the fashionable, the masses, a reverence for some distant past, a disdain for the practical. But a lot of Americans don’t share this affect or this outlook: they just want to do their work, make their money, and have some fun, however the culture is currently defining and delivering it–”what’s the harm in that? Lighten up!” The Euro-humanist, of course, looks at these people and just cries “false consciousness”–they either don’t know, or won’t admit, their true condition. The Euro-humanist sees most people as trapped in and bespelled by some kind of Cave, and tends to see The Next Big Thing (MOOCs, gamification, Facebook, etc.) as just more distraction, illusion, ideology, etc.  As the inimitable Roger Sterling puts it:

So what I think we’re dealing with here, at some level, is just different sensibilities: the can-do, practical, pragmatic American happiness-pursuer just NEVER WILL see the world quite like the intellectual, Europeanish, theory-minded soul will; for the former, the gamified world is a blast (“awesome!”). This person does not have a problem with just doing their work, whatever it is, and going home and living their life. They don’t see, and they don’t care, that the compulsion to be entertained does any kind of damage to the soul, or makes us less human. Maybe some people can just handle entertainment in a more moderate way. Wallace himself, for instance, had a highly addictive personality, and couldn’t handle fun things because he just found them to be, well, too much fun.

I have grown suspicious over the years of what I’ll call the Office Space Ideology that lots of intellectuals and humanists and liberals adopt: that corporations are evil, that office workers are drones, that it all really is as stupid and wretched and soul-rending as films like Office Space portray it to be. Why? Because most of those people have probably never worked in an office! And yes, they probably would find it to be drudgery. But maybe for people of a different sensibility, that’s not what it is. Maybe they are just better at accepting things for what they are–that, as Matthew Crawford puts it in his thoughtful and important meditation on the value of work, work is necessarily toil and serves someone else’s interests.  And so rather than futilely try to fuse work and play, erect a separation of powers:  work is the realm of necessity, play is the realm of freedom.  And that reminds me of something Wallace said in a different context, when he was interviewing a pro tennis player: “I am almost in awe of his ability to shut down neural pathways that are not to his advantage.”  People who are well adjusted are better at adapting to the reality of American life, which in some important ways overlaps with reality itself.

And let’s face it, the American Pragmatist is sometimes spot on about the Euro-humanist’s posturing, pedantry, and pretentiousness.

Maybe one reason that Euro-humanists disdain things like gamification is that their attachments to an idyllic past and an ideal future create such a sense of loss, longing, disappointment, and frustration that the escape and pleasure provided by games et al. is an irresistible narcotic. The crucial question is, whose sense of reality is more warped?
