You are currently browsing the monthly archive for May 2013.


I came across a letter I wrote last year to a friend who inquired about the philosophy of Ayn Rand, and thought I'd repost excerpts of the philosophical content below:

[I had to laugh when I got your message–I was in church of all places.  Next question: what was I doing looking at my phone in church and committing digital blasphemy? Answer: obnoxiously long Catholic ceremony. The supreme irony is that Rand’s most recent notoriety in American culture is Paul Ryan–a, well, “severe” Catholic–a big Rand fan.


About Rand. Let me take your questions one at a time, but let me be blunt: I think Rand’s philosophy is ludicrous–it is an attractive and interesting philosophy embraced with zeal by adolescents (including high-school me!) first starting to think for themselves, but when touted as a philosophy of life, or as a serious platform for political economy, it is dangerous, historically uninformed, and morally abhorrent. Hopefully my responses to your questions will convey why I think this.


Leon Wieseltier, literary editor of The New Republic, added another entry to the growing genre of commencement speeches targeting technology.  He worries about the shrinking of the humanities in higher education and in the culture at large as technology colonizes more and more corners of our lives.  His piece reminded me of T.S. Eliot:

“Where is the wisdom we have lost in knowledge?  Where is the knowledge we have lost in information?”

Leon is taking aim at the values and worldview of Silicon Valley, an ideology that Evgeny Morozov has dubbed “technological solutionism”, the reduction of all problems to technical problems, the notion that technology can fix all things, and the reduction of knowledge to information:

There are thinkers, reputable ones if you can believe it, who proclaim that the exponential growth in computational ability will soon take us beyond the finitude of our bodies and our minds so that, as one of them puts it, there will no longer be any difference between human and machine. La Mettrie lives in Silicon Valley. This, of course, is not an apotheosis of the human but an abolition of the human; but Google is very excited by it.

He is referring, of course, to Ray Kurzweil, the scientist, inventor, and anointed Philosopher Prophet of Silicon Valley who has just been hired by Google.  Leon’s piece is aimed squarely at Kurzweil’s scientism:  the extension of science from a method to a metaphysics, with claims based not on data but on dogma.  There are some who consider Kurzweil the Most Dangerous Man in America.  While Steve Jobs has been raised up as the Great Man of our age, he may end up being overshadowed by Kurzweil, who is on track to become the Father of AI.  I will be addressing Kurzweil’s worldview–essentially, that technology is the continuation of evolution by other means–in future posts.  For now, see Michael E. Zimmerman’s recent reflection on AI from the perspective of Integral philosophy.

Leon’s is exactly the argument that C.S. Lewis made over half a century ago in The Abolition of Man:  man’s modern conquest of nature is really nature’s conquest of man.  Why?  Because when reason is turned into a tool to satisfy our desires, our desires are running the show–but our desires or instincts largely come from nature.  I will return to Lewis’ argument and its connection to modern nihilism in future posts.

One noteworthy thing Leon mentions is the place of philosophy in all of this:

Philosophy itself has shrunk under the influence of our weakness for instrumentality – modern American philosophy was in fact one of the causes of that weakness — and generally it, too, prefers to tinker and to tweak.

What would it mean to not just “tinker and tweak”?  What would that look like?  Why is it so difficult, not only to do, but to even imagine?

I think philosophy has been assigned one of its great tasks for the present age.  If Hegel was right that philosophy is its own time comprehended in thought, then the great challenge for thought in our time is this: one of its most important matters, technology, is largely about our future, and its grip on our present makes it hard to reflect on at all.


[Reposted from the following discussion thread]

Thinking about this and reading your posts, I am reminded of David Foster Wallace’s Infinite Jest, which is emerging as sort of the Big Novel of our era. The story takes place in the not too distant future, and the title refers to a film that is so addictive that it kills the people who watch it; and the film, unsurprisingly, is wildly popular.

Wallace was concerned that, in the words of media critic Neil Postman, we are "amusing ourselves to death"–not literally, of course, but psychologically or spiritually. I find this narrative seductive, but I resist it for that very reason. Part of the problem, I think, is that people just have different dispositions. Humanist folks tend to have a European part of their soul: a melancholic affect, a deep suspicion of the popular, the common, the fashionable, the masses, a reverence for some distant past, a disdain for the practical. But a lot of Americans don't share this affect or this outlook: they just want to do their work, make their money, and have some fun, however the culture is currently defining and delivering it–"what's the harm in that? Lighten up!" The Euro-humanist, of course, looks at these people and just cries "false consciousness"–they either don't know, or won't admit, their true condition. The Euro-humanist sees most people as trapped in and bespelled by some kind of Cave, and tends to see The Next Big Thing (MOOCs, gamification, Facebook, etc.) as just more distraction, illusion, ideology, and so on–as the inimitable Roger Sterling might put it.

So what I think we're dealing with here, at some level, is just different sensibilities: the can-do, practical, pragmatic, American happiness pursuer just NEVER WILL see the world quite like the intellectual, Europeanish, theory-minded soul will; for the former, the gamified world is a blast ("awesome!"). This person does not have a problem with just doing their work, whatever it is, and going home and living their life. They don't see, and they don't care, that the compulsion to be entertained does any kind of damage to the soul, or makes them any less human. Maybe some people can just handle entertainment in a more moderate way. Wallace himself, for instance, had a highly addictive personality, and couldn't handle fun things because he just found them to be, well, too much fun.

I have grown suspicious over the years of what I’ll call the Office Space Ideology that lots of intellectuals and humanists and liberals adopt: that corporations are evil, that office workers are drones, that it all really is as stupid and wretched and soul-rending as films like Office Space portray it to be. Why? Because most of those people have probably never worked in an office! And yes, they probably would find it to be drudgery. But maybe for people of a different sensibility, that’s not what it is. Maybe they are just better at accepting things for what they are–that, as Matthew Crawford puts it in his thoughtful and important meditation on the value of work, work is necessarily toil and serves someone else’s interests.  And so rather than futilely try to fuse work and play, erect a separation of powers:  work is the realm of necessity, play is the realm of freedom.  And that reminds me of something Wallace said in a different context, when he was interviewing a pro tennis player: “I am almost in awe of his ability to shut down neural pathways that are not to his advantage.”  People who are well adjusted are better at adapting to the reality of American life, which in some important ways overlaps with reality itself.

And let’s face it, the American Pragmatist is sometimes spot on about the EuroHumanist’s posturing, pedantry, and pretentiousness:

Maybe one reason that Euro-humanists disdain things like gamification is that their attachments to an idyllic past and an ideal future create such a sense of loss, longing, disappointment, and frustration that the escape and pleasure provided by games et al. is an irresistible narcotic. The crucial question is, whose sense of reality is more warped?


[Reposted from the following discussion thread]

Discussions about education these days often reference something called “gamification”:  the use of games or game-like structures to enhance learning.

On the one hand, I see the appeal: rather than fight the forces affecting students' behavior outside the classroom, harness them and integrate them into the learning process. "Badges" will replace grades, "competence-based learning" will replace degrees, and so on. Now, earlier iterations of online learning may well fall prey to the diploma-mill problem (a piece of paper saying you can now do what you could already do), but it sounds as though the next generation of online learning tools will be more sophisticated: they will be able to empirically demonstrate that student x has learned skill y to do job z. And they achieve that result through an engaging process that motivates students with gamified learning modules (like a video game) and takes less time (more efficient) than the traditional course/degree model.

But what unsettles me about this, from something like a sociological perspective, is that it turns everything into a “game”–the game of professional advancement and money-making that people will be playing for most of their lives, of competing and achieving and winning, will become seamless with the educational sphere. It feeds into the hyper-competitive culture we are becoming more and more each year.

Moreover, the shift from text-based to image-based learning seems like a kind of surrender to our culture, which has been image-based for a long time. In my view, one of the chief functions of education is to give students the tools to RESIST and challenge and criticize the present culture–to give them a chance to be an individual. And so gamification seems like another stage in the subsumption of education by corporate values: "fun" on the outside (infotainment), soul-eroding on the inside. All to equip students with 21st-century skills so that we can "beat China", or whatever.

But to challenge THAT: video games aren't what they used to be. Many involve sophisticated cognitive tasks. So part of the gamification craze is a challenge to the highbrow, elitist prejudice that only book smarts and book learning are real smarts and real learning. There is a parallel here to the time-lag in critiques of capitalism: I wonder whether Marxist, or Marxish, intellectuals are ragging on a form of capitalism that was, well, creatively destroyed ages ago. Not that capitalism is perfect, but 21st-century capitalism is an importantly different animal. They might retort that it is still the same SPECIES–inherently, structurally unjust and exploitative and dehumanizing, and so on–which is an essential debate to have.

My initial post, and the included infographic from OnlineColleges.net, generated a great discussion thread over at the Unemployed Philosopher’s Blog.  Reposting it here.

 

The Dark Side of MOOCs

Paige Harris has an informative piece over at Online PhD Programs on some best practices for landing an academic job.  Despite one factual error–Paige claims that academia has long been "unscathed" by the vicissitudes of the economy, when in fact the job market, at least in the humanities, has been abysmal since the 1980s–I think it is all sensible advice.  I would only add that the understated tone of the piece may mislead those in, or aspiring to, graduate school.

Paige writes:  “There’s no doubt that building a career in academia is a challenge these days, but it can be done.”  There are challenges, and then there are challenges.  Running a 6-minute mile is a challenge for many, but pretty much anyone can do it if they discipline themselves.  Running a 4-minute mile is nearly impossible; it depends not just on an unusual degree of hard work and determination, but on winning a genetic lottery.  As someone who has just hazarded the punishing fire of the academic job market and lived to tell the tale, while I would not equate landing a tenure-track or secure position in academia with running a 4-minute mile, it’s not far off.  I will recount my harrowing tale–which, I can assure you, has a most happy ending–in a future post, “There and Back Again.”

A couple of things that Paige does not mention (and, to be fair, need not mention, as her subject is simply HOW to get a job) are how “success” is measured in academia, and whether “success” is really as desirable as wide-eyed graduate students tend to believe.  First, reading her post, you might be wondering how, with all of the energy that goes into “packaging” yourself, you ever find time to focus on the ACTUAL job:  teaching, thinking, reading, and writing.  As I recently told a group of young graduate students, the first rule of the job market is also the first point in Rick Warren’s The Purpose Driven Life:  “it’s not about you.”  It’s about a persona that you will create that will, hopefully, be selected in the lottery that we call the academic job market.  This is essential not just for marketing purposes, but for maintaining mental health.

But this marketing does not end once you get a job.  As you will see–through attending conferences, publishing papers, and getting your hands dirty in department politics–academia is a game: things are not as they seem.  This is partly why Frank Donoghue claims, in his book The Last Professors, that today's academic is less an intellectual than a kind of salesman.  I will be blogging on Frank's book–and, hopefully, interviewing him via podcast–in the coming months.

Second, as Paige rightly emphasizes, people need to think very, very carefully about whether they really want it–and what they’re really signing on for, financially, geographically, socially, and professionally.  To paraphrase Ian Malcolm, the eccentric mathematician from Jurassic Park:  “Academics are so focused on whether or not they can get a job, they never stop to think if they should.”

I will be posting about these and related issues in the coming weeks.


David Brooks has, I think, made progress in the discussion about MOOCs and online education.  His central idea is that, given the increasing sophistication and decreasing cost of online learning as a delivery mechanism for technical knowledge and skills, universities can no longer cling to a business model in which they charge a small fortune to impart those same skills.  As Brooks flatly states, "There will be no such thing as a MOOC university."  One thing universities can do–perhaps with a somewhat lower price tag–is specialize in the acquisition and development of practical knowledge and skills–the "Practical University":

So far, most of the talk about online education has been on technology and lectures, but the important challenge is technology and seminars. So far, the discussion is mostly about technical knowledge, but the future of the universities is in practical knowledge.

Practical knowledge is not about what you do, but how you do it. It is the wisdom a great chef possesses that cannot be found in recipe books. Practical knowledge is not the sort of knowledge that can be taught and memorized; it can only be imparted and absorbed. It is not reducible to rules; it only exists in practice.

While Brooks’ notion of “practical knowledge” is a bit thin (column-sized), the point is important.  What makes all of this possible is the “flipped classroom.”  While humanities teachers have generally shaken their heads at and pooh-poohed EdTech, the flipped classroom is a game-changer.  Lectures on Plato, colonialism, and Melville can now be placed online (and software can check to make sure students are watching them), while class time can be used exclusively for seminar-style interactions in which students can develop prized social skills.  As Brooks notes,

Think about Sheryl Sandberg’s recent book, “Lean In.” Put aside the debate about the challenges facing women in society. Focus on the tasks she describes as being important for anybody who wants to rise in this economy: the ability to be assertive in a meeting; to disagree pleasantly; to know when to interrupt and when not to; to understand the flow of discussion and how to change people’s minds; to attract mentors; to understand situations; to discern what can change and what can’t.

Let's face it:  where and when do we deliberately try to develop these "soft," "people" skills?  One might carp at Brooks for drawing his example from a corporate environment–a critic might say that this just makes university seminars into a lab for "behavior modification"–but we can view his point more expansively: universities taking this approach are helping to develop the whole person; in that sense, they could become more congruent with the original liberal arts ideal.

Whereas before professors had to (often awkwardly) balance lecture and discussion, now they can have a clearer division of labor.  I can testify to the challenge of “getting through” lecture–transmitting the ideas, interpretations, facts, etc., that you want to highlight from the reading–to get to what, in my heart, I consider the real business of teaching:  the conversations that you foster and facilitate in the classroom.

Brooks explains how technology might be used to enhance the classroom environment:

The goal should be to use technology to take a free-form seminar and turn it into a deliberate seminar (I’m borrowing Anders Ericsson’s definition of deliberate practice). Seminars could be recorded with video-cameras, and exchanges could be reviewed and analyzed to pick apart how a disagreement was handled and how a debate was conducted. Episodes in one seminar could be replayed for another. Students could be assessed, and their seminar skills could be tracked over time.

In this way, technology can create the space in which a stronger sense of community can take root in the classroom.  Moreover, in reviewing their performance on video, students would be able to see how they appear in public.  This would make them uncomfortable in precisely the way that we want them to feel uncomfortable.

The general sense in these sorts of discussions is that all of this EdTech stuff is bad news for humanists.  However, notice that the technical knowledge sounds like stuff that robots can do; as Kevin Drum details, the long-imagined future of the robot worker is not distant at all.  This might lead to a cruel irony: online learning is maturing–through gamification, analytics, adaptive learning mechanisms, and so on–at around the same time as automation.  What is the sense in equipping the masses with all of these technical skills if robots are just going to perform the jobs to which those skills are suited?  Then, you might say, people should be trained to build the robots and do the programming and engineering.  But the reality is that only so many people will be needed for this kind of work.  All of which raises the question: just what the hell are all of these people going to do for a living?

But this might put humanists in a surprisingly good position.  Daniel Pink, one of the new darlings of the business self-help industry, has argued that Right Brainers will rule the future.  And indeed, Forbes recently listed the Top 10 In Demand Skills in 2013–check out the top four.  What is driving this?  I think it’s the fact that life in our new Technopolis is creating problems and raising questions that are not scientific and technical problems and questions.

My chief concern with Brooks’ proposal is not about substance, but about scale.  It’s easy to imagine something like this going on at Harvard et al.  But at Wannabe University?


One more point the Bloomberg article raises pertains to the plight of adjuncts.  Though SNHU's online program was initially staffed by adjuncts paid the usual pittance, it has generated enough revenue to hire full-timers to do more (and, hopefully, eventually, most) of the teaching.  This may be a way to break the fatal logic of the adjunct dilemma as it exists at solely brick-and-mortar universities.  Not only would schools have the resources to ensure that many, most, or all of their on-site teachers are full-time, but adjuncts could still teach part-time, and do so more comfortably, without having to shuttle from campus to campus–a major drain on time, money, and mental health.

Of course, the true adjunct dilemma is faced by the teachers themselves, not the administrators.  My fellow blogger Dan Mullin recently shared his ambivalence about going back to adjuncting after a hiatus.

Looks like Southern New Hampshire University has devised a nimble business plan.

This is just the kind of "disruptive innovation" that Harvard Business School professor Clayton Christensen has predicted–in John Hechinger's words, "the process by which companies at the bottom of the market use new technologies to displace more established competitors."  The attack comes, not from the front, but from the side:  from the other two raptors you didn't even know were there.  It is exactly the kind of thing that universities wishing to survive will need to do if anything like Christensen's Prophecy comes to pass:  that in 15 years, HALF–that is right, half–of the institutions of higher learning in the United States will be gone.  (More on Christensen's Prophecy–and the coming Avalanche–later…)

This may well be a viable pathway–and the only pathway–for middling universities attempting to surf and survive the volatile seas of the EdTech Era.  Frank Donoghue, whose essential book I’ll be plumbing in upcoming posts, thinks that the lasting mark on higher education left by the first generation of online for-profits will not be the companies themselves, but the selection pressure they exert on traditional institutions of higher learning:

“The real legacy of this industry, I believe, is its lasting and widespread influence on traditional universities.  Whatever the fate of specific campuses of the University of Phoenix, Career Education, or DeVry, these companies have demonstrated that it is possible to operate a university as a business….  The business model for higher education devised by the for-profits has tremendous appeal to administrators and lawmakers in an era of steadily declining public funding and tuition increases that are quickly becoming prohibitive.”

Donoghue thinks that the majority of non-profits will be torn asunder by the cross-pressures of vocational for-profits, which lead to jobs, and elite non-profits, which leverage prestige.  Large state research universities, he thinks, have largely lost their way, unable to decide what their mission and role in society really are, and thus plagued by "mission creep."  This arms race is driven by what Christensen calls the "bigger and better" virus that has infected academic administrative culture.  However, the model of SNHU may well offer them a middle way:  the profits from an online apparatus offering primarily vocational training can be funneled back to the leafy host campus in order to boost its prestige.  The challenge facing universities that take this path is, in part, one of perception, as Hechinger relays:

“Even some of the beneficiaries of Southern New Hampshire’s online push are uneasy. John Wescott, a 19-year-old sophomore at the physical campus, expects to graduate with only $15,000 in student debt thanks to financial aid. Yet he recalls a spirited discussion at a student-government meeting: ‘There was a sense that we were turning into the University of Phoenix and the value of our degree was going down.'”

Thus the “threat to Harvard” I discussed yesterday comes not just from the for-profits themselves, but from the effects they are likely to have–and are already having–on the non-elite, traditional universities.  But, again, let’s be clear:  Harvard feeling “threatened” is like the prom queen who is insecure about her appearance.

Check out this infographic on MOOCs posted over at http://www.onlinecolleges.net:

The Dark Side of MOOCs

I will have more to say about the developing debate over MOOCs later, but at first blink, I have two impressions based on everything I have read:

The Good News:  MOOCs will disseminate the highest quality education to the poorest people.  As I noted in a previous post, and as Thomas Friedman has pointed out, whatever the fate of MOOCs in higher ed in the developed world, one unadulterated good they provide is giving people in the developing world a chance to acquire the knowledge and skills they will need to have a fighting chance in the 21st century economy.

The Bad News:  The new strains of premium MOOCs being devised and piloted by the elite universities–the Big Three players listed in the graphic above–threaten the other players in the higher ed ecosystem:  for-profits, non-profit 2nd- and 3rd-tier private schools, and non-profit state universities.  Harvard et al., fueled by virtually unlimited coffers, can BOTH kick butt in the arms race for prestige, and leverage that prestige to dominate the online landscape, thus further weakening the hand of mainstream, "middle class" universities.  Indeed, (ironically) Harvard economist David J. Collis predicted as much; in The Last Professors:  The Corporate University and the Fate of the Humanities, Frank Donoghue explains Collis' prescient speculation:

“[Collis] speculates that these top universities, made all the richer by capitalizing on their brand names to market “basic lectures and courses” online, could then ‘shift back to the tutorial system to differentiate their on-campus education experience.’  They will, in other words, offer convenience to one market of students and prestige to another.”

They will, in other words, corner the markets for both the Technical University and what David Brooks has recently called the Practical University.  I will treat Brooks' proposal–which seems correct, but salutary only in a depressingly restricted sense–in a separate post.

But one thing to notice is the story behind how Harvard made the decision to MOOC forward.  As Nathan Heller recently reported in the New Yorker,

One day in February, 2012, a social scientist named Gary King visited a gray stone administrative building in Harvard Yard to give a presentation to the Board of Overseers and Harvard administrators. King, though only in his fifties, is a “university professor”—Harvard’s highest academic ranking, letting him work in any school across the university. He directs the university’s Institute for Quantitative Social Science, and he spoke that day about his specialty, which is gathering and analyzing data.

“What’s Harvard’s biggest threat?” King began. He was wearing a black suit with a diagonally striped tie, and he stood a little gawkily, in a room trimmed with oil paintings and the busts of great men. “I think the biggest threat to Harvard by far is the rise of for-profit universities.” The University of Phoenix, he explained, spent a hundred million dollars on research and development for teaching. Meanwhile, seventy per cent of Americans don’t get a college degree. “You might say, ‘Oh, that’s really bad.’ Or you might say, ‘Oh, that’s a different clientele.’ But what it really is is a revenue source. It’s an enormous revenue source for these private corporations.”

HARVARD feels threatened?  Are you serious?  One is reminded of the bizarre phenomenon in recent American politics, in which the RICH plead that they are under attack by the "takers."  Whereas under "normal market conditions," the only class reasonably contemplating any kind of protest and revolt would be the lower and working classes, in today's bizarro world of Gilded Age income inequality, the people at the top are so out of touch with reality, so insecure about their position at the top–perhaps haunted by a kind of "thriver's guilt" fueled by the deep-down knowledge that they did not really earn it, but won a cruel lottery–that they deceive themselves that they are under attack.  It is not enough that Harvard win the prestige game; it is not enough that they be the richest (with an endowment of–take a deep breath, because I guarantee you are not ready for this figure–over $30 billion).  No, they must one-up the "1.0" for-profits (University of Phoenix, et al.) by leveraging their brand name with one hand and undermining the strapped middle-class state universities and struggling 2nd- and 3rd-tier private universities with the other.

This is a seriously incomplete and somewhat ranty account, and there is much more to the story–and, I think, more Good News than what I noted above–but it's a perspective that needs to be laid out on the table and reckoned with.

David E. Storey
