Month: January 2016

The Bigger Short

The markets swoon and swoon, and then precious normalcy seems to return–till we find we're no longer sure what normalcy is supposed to look like. We've so far had the worst 10-day start to a year in the Dow's history, and dark premonitions are already swarming about an incipient downturn even worse than that of 2008.

Of course, plenty of similar warnings have been put out by paranoiacs and scaremongers alike in the years since the worst paroxysms of the financial crisis faded into the drugged calm of this long, strangely infuriating recovery. Alarmism, like any other easily repeatable strategy, has a way of becoming a cottage industry unto itself. And trying to divine the markets' day-by-day zigzag is a fool's errand, whether through Nobel-winning economic models, fancy fractal geometry, deep learning or other statistical crunching. (I notice the widespread awareness of "black swan phenomena" has done almost nothing to raise the detectability of black swans themselves, except maybe in the sense of getting Nassim Taleb on TV a lot more often.)

Faced with this glum reality, we are advised to simply trust in growth–which the market, despite its day-to-day vagaries, must ultimately follow, like a dog that trails you home. Step back to a long enough timescale, and you see the zigzags always head up to new highs. So don’t flinch, don’t get greedy, and you’ll eventually end up (modestly) rich. This is a bit like the investment-class equivalent of the working man’s American Dream–get educated, work hard, be patient and you will make it to the good life.

There actually is one other strategy available in the financial world that doesn’t really have a parallel in the working world, which is to control the system (and the numbers it puts out) so that it proclaims happy days, while delivering an endless payout to its main participants like a broken slot machine. (Too Big to Fail, anyone?) And so, even as pundits declare the fundamentals of the American economy strong and solid, it’s hard not to suspect that the definition of prosperity has been distorted into something that most people wouldn’t connect with that name.

Maybe my cynicism has been turbocharged by seeing "The Big Short" last week, an excellent docudrama about the unraveling of the financial sector in 2008. All in all, it provides a dose of the ever-useful (if existentially discouraging) takeaway that, no matter how much people pride themselves on their respectability and rationality, they are invariably prone to doing massively stupid and venal things–and as often as not, doing them en masse.

I also haven't been exactly cheered of late by listening to progressive radio host Thom Hartmann, who is usually a sober and upbeat voice but is now vividly announcing the "Crash of 2016". He has published a book of the same name on the subject.

So alarmism may be the vice of our times, but I think there’s more to the present atmosphere than people misguidedly projecting their memories of ’08, or trying to make a buck peddling 31 flavors of fear.

My own view is that the crisis of 2008 was really just an apprentice-piece; the master-piece is still on its way. It will be related in its causes to 2008 but far more broad-based, as the true culmination of all the corruption, negligence, and wishful thinking that first allowed 2008 to happen, and which to this day has not only not been addressed, but been allowed to become more entrenched than ever.

One of the key differences between the coming crisis and that of 2008 is that its cause will not be as neatly traceable, and its unfolding may be much more complex and multi-staged. In 2008, there was a single massive problem in one distinct system that could be detected, localized, quantified, and then traced out in its repercussions through a common-sense understanding of the financial system. This afflicted system then did indeed fail, more or less on cue.

For example, in "The Big Short", the whole story gets rolling when Michael Burry, an eccentric hedge-fund manager, begins to look under the lid of a motley collection of mortgage-backed securities and sees that most of the loans comprising them are bound to default once their adjustable rates kick in. It's not long before Burry's insight leaks to a few other traders, each with their own axes to grind against the financial system, and then things are off and rolling.

Burry was not, as the movie implies, the only one to notice the problem early. Also, the movie's attempt to portray him and his fellow short-sellers as anti-establishment folk heroes–instead of vultures making a fortune off the misery of millions–is unconvincing to say the least. Still, his example illustrates that a sharp mind, a careful eye, and a determination to dig below a façade nearly everyone took to be as solid as a rock (and just as boring) were all it took to correctly diagnose the impending 2008 crisis.

Similarly, once the failure of the mortgage system became evident, it could propagate rapidly because it could be traced and understood by anyone using this compelling and simple story: it turns out A is actually worthless, so B, C, D, and E that were based on A are probably worthless, too. The notion of “contagion” itself became a contagion.

That can’t happen so easily this time around, simply because the problem is universally distributed; it is gaseous, diffusing largely unremarked into every corner of the economy. In 2008, the US had a huge bubble in real estate. Today, it’s more like the country itself is a huge bubble made out of other huge bubbles; at the higher levels, nearly everything is corrupt, malfunctioning, or massively in debt. Not only this or that corporation or government department, but entire fields–medicine, science, politics, finance, and education, to name a few–have become disfigured by cronyism, consolidation, and massive money-ginning schemes, all disguised by vast outflows of paper meant to signify “progress”. The result has been declining or stagnating utility for the society they are supposed to serve.

There is a peculiar symmetry about such a widespread problem, which tempts us to mistake it for stability. It reminds me of an old Simpsons episode where Mr. Burns is diagnosed as “the sickest person who ever lived”. He is only alive, the doctor explains, because all of his diseases happen to be in perfect balance, canceling each other out. “You mean I’m indestructible!” exclaims Burns. The doctor demurs, saying that in fact the slightest disturbance could destroy the precarious stalemate, but Burns, no longer listening, just tents his fingers and repeats triumphantly, “In-destructible…”

Similarly, by a combination of wishful thinking and circular logic, we in the US have made our belief in our own indestructibility our main asset, and also its own proof, instead of a conclusion drawn from the solidity of the constituent systems and how well they play together. Just as, in the subprime collapse, statistics were used, perversely, to argue that mortgage-backed securities were safely "diversified" because they were made of subprime loans from many different parts of the country, so now we tacitly assume that the widely distributed nature of the problems we face means that the risk is safely diversified away. We do not notice that the underlying causes are similar in structure, and we certainly don't notice their deepest root–the collision of a faith in unending growth and progress with ever more serious realities of technological and social stagnation.
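To make that statistical sleight-of-hand concrete, here is a toy simulation of my own devising (a minimal sketch with made-up numbers; it has nothing to do with the rating agencies' actual models). The point is simple: pooling a thousand loans only "diversifies" away risk if their defaults are roughly independent, and a single shared driver, say adjustable rates resetting everywhere at once, quietly breaks that assumption.

    # Toy model: a "diversified" pool of loans exposed to one shared, system-wide shock.
    import random

    def pool_loss(n_loans, shared_shock_prob, base_default_prob):
        shock = random.random() < shared_shock_prob   # one system-wide event hits every loan
        p = 0.9 if shock else base_default_prob       # the default rate surges when it does
        return sum(random.random() < p for _ in range(n_loans)) / n_loans

    random.seed(1)
    losses = [pool_loss(1000, 0.05, 0.02) for _ in range(10_000)]
    print(f"median loss: {sorted(losses)[len(losses)//2]:.1%}, worst loss: {max(losses):.1%}")
    # Most years look placid; the bad years still wipe out the pool, 1,000-loan "diversification" or not.

The numbers are arbitrary, but the shape of the result is the whole argument: averaging over many pieces tells you nothing if the pieces all answer to the same underlying cause, which in our case is that creeping stagnation.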

While that stagnation is quietly inscribing itself in a million forms onto the landscape of American life, the ongoing belief in growth ensures we dare not speak its name. But now, as growth flags in China and the Federal Reserve begins to release its seven-year sleeper hold over the economy, we are about to witness an epic experiment to see just how long illusion–or the studied re-framing of terms such as “prosperity”–can keep a machine as massive as the U.S. national economy floating in mid-air.

Impressions From the Med Trip

I'm back from my medical-interview journey. It was in fact a multi-purpose trip, partly fact-finding, and it also involved three other job interviews–two for medical scribing, and one for a potential post-doctoral position. This post will be a wide-ranging and detailed exploration of my impressions, a kind of self-debriefing. (I hope to write something much more focused next time.)

I've been reluctant about the idea of doing a post-doc since long before I finished my PhD, mostly due to the combination of intense day-to-day frustration, diminishing returns for significant discoveries, and inept management that I've found goes with laboratory research. But it's also that I have simply seen what the culmination of an academic career really looks like too many times, and too up-close. Even at the higher levels of success (which involve a great deal of luck), the typical academic scientist's life has become almost purely bureaucratic in character; for me, it summons countless images of lonely graying figures sitting in office buildings, writing grants 70% of the day and dealing with administrative intrigues for the rest, with an occasional conference thrown in. Tenure is nice, but given the drabness of the overall picture and the awful odds of achieving it, it has never seemed like much of a sweetener.

Nevertheless, the thought of discovering new provinces of knowledge or creating technologies that change the way we live does have a stubborn allure, and I have kept up the idea that there might be some place out there where the funding might be secure, the professors engaged and helpful, the subject matter itself exciting, new, and amenable to dreaming big (and the women strong, the men handsome, and all the children above average). Lake Wobegon notwithstanding, when I found a laboratory a few weeks ago that focused on tools for controlling protein interactions using light–a hot subject, and also tantalizingly complementary to my own research–it was enough to make me wonder if such a perfect place might exist after all. Choking down my potential hypocrisy, I contacted the professor about potential openings. To my happy surprise, I was promptly invited to an interview.

I spent almost two weeks reading the publications of the lab, thinking up experiments to propose, outlining the connections between my past work and that of the new lab, and preparing a presentation of my ideas and previous work in case I was asked to speak to the lab members.

There isn't much to tell about how it went, except that it confirmed my native doubts about academia far more than it assuaged them. It was bad enough to find that this prof had not bothered to glance at my CV before inviting me to interview, nor, apparently, had she done so by the time I arrived. What was much more telling was that she did not seem to know the ideas and experiments in her own recent papers. Whenever I would ask about a detail of an experiment or a method developed by the lab, she would peer at me with a baffled or slightly panicked expression and ask me what I meant by that. I'd then refer back to the publication and get a sort of hasty nod: "oh, yes, that…"

In a nutshell, my preparations backfired. The more I referred to details of the professor’s own work, the more I exposed her ignorance. Oops. Later when I commented on this to a friend, she said, “well of course; that professor probably didn’t write any of those manuscripts, and probably hardly read them either.” What is unfortunate is that this is quite likely true, and not atypical. Absolute concentration on funding leads principal investigators to a kind of weary indifference–if not outright obliviousness–to the work being done in their name by an army of postdocs, grad students and technicians, the vast majority of whom will never find faculty positions themselves.

On to the next, the medical scribing interview. "Medical scribing" has existed for decades, but has sprouted into a big business only since the passage of the HITECH Act in 2009, which incentivized nationwide adoption of electronic health records (EHRs). An EHR is an extremely complex document requiring deft handling of a myriad of abbreviations and terms, so a number of companies now offer scribing services to physicians, in order to spare the latter from having to navigate the maze of an EHR while trying to listen to a patient's concerns and figure out a course of treatment. The scribe masters the structure of the EHR as well as medical terminology, and follows the doctor(s) throughout their rounds, taking down every germane piece of information.

Scribing has rapidly been positioned as a kind of rite-of-passage for would-be medical students, and a more intensive substitute for traditional physician shadowing. Although most companies stress the “opportunities for advancement” for their scribes within the company, in reality nearly all base their business model on hiring future med students for short periods at rock-bottom wages: typically just over $8/hour, moving up to $10/hour pending good behavior.

These interviews went extremely well; I joked a lot, talked about my passion for science and hopes for a medical career, and left everyone smiling. It was good to know I can still turn on the charm from time to time. I was even willing to accept the wage and hours–which include weekends, holidays, and overnight shifts. I grew more hesitant, though, on learning that these companies demand a one- or even two-year commitment as a condition of employment. It does seem like chutzpah to demand a contract for what is essentially a low-level clerical position with bad hours, almost no pay, and negligible chances of promotion. Yet this is a reality of the job market everywhere: maximize employee responsibility while minimizing employer risk.

My last day was the big kahuna–the med school interview. I wrote in an earlier post about my growing ambivalence about medical school, especially the serious problems of medical school financing and the bizarre and distorting financial and administrative pressures exerted by the profit motive on the whole healthcare system. I also noted the strange determination of growing numbers of people to pursue allopathic training, no matter the cost in finances or mental health. But I put these heavy thoughts aside for the time being and instead enjoyed strolling around the campus.

The campus was extraordinarily large (almost a half mile long), and sparklingly new. Though impressive, it seemed almost excessive: located so far outside the center of its city, such a huge and lavish medical complex could hardly expect to be used to more than a fraction of its capacity. The grounds were carefully landscaped, with broad tree-lined walkways wending between the dozens of brick-and-glass research and clinical towers, yet they were almost completely empty of people, whereas I have usually known such medical complexes to be filled with busy crowds crisscrossing from one building to another even in the dead of winter. The effect was sort of Potemkin-like. I thought maybe the place was so new they hadn’t been able to move everyone in yet.

I was surprised at the size of our interview group: nearly 50 applicants. We all gathered in the auditorium in our suits and finery for orientation, which went smoothly–as did my first interview, with a kindly old general surgeon. There followed a series of nuts-and-bolts presentations to the group on various programs and deadlines, but it was the one on financial aid that gave me the most pause: $90,000 a year, plus 6.2% interest (if you get the "good" kind of federal loan).

I zoned out and did a quick calculation. Including simple interest, four years of this would put me and my fellow applicants about $406,000 in debt. Nearly half a million bucks underwater from the moment you get out, before earning a cent–and it would keep growing. I wondered: had this huge campus become a boondoggle, which the staff now hoped to cover by shaking down the students with insane fees?
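For anyone who wants to check the napkin math, here is the back-of-the-envelope version. The $90,000-a-year figure and the 6.2% rate come from the presentation; the assumption that each year's loan accrues simple interest from roughly the midpoint of its school year until graduation is mine.

    # Four annual loans of $90,000 at 6.2% simple interest, each assumed to accrue
    # from about the midpoint of its school year until a single graduation date.
    annual_loan = 90_000
    rate = 0.062
    accrual_years = [3.5, 2.5, 1.5, 0.5]   # assumed accrual periods for years 1 through 4

    principal = 4 * annual_loan
    interest = sum(annual_loan * rate * t for t in accrual_years)
    print(round(principal + interest))      # about 404,600 -- in the ballpark of the $406,000 above

Shift the accrual assumptions a little and the total drifts by a few thousand dollars either way, but the order of magnitude does not move.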

I looked at the presenters after that with new eyes. Before, they had seemed matter-of-fact, considered, kindly, interested in our success. But now it felt like there was something almost predatory afoot. The room was filled with anxious and eager youth–all stoked with the dream of one day practicing medicine, of gaining the noble power to liberate others from the slavery of suffering, illness, mortality. Yet we were surrounded by a system that, in exchange for a shot at that dream, proposed to enslave each of us with decades' worth of massive debt–except those of us who were already millionaires, like one of the second-year students, a former hedge fund exec who in his 40s decided medicine would be a wonderful lark.

I kept waiting for one of the administrators to either say “gotcha, just kidding” or apologize shamefacedly for these outrageous prices, but in vain. Instead, faced with these mind-bending numbers, we got (unintentional?) irony in the form of a visit from a staff psychiatrist.

With a practiced pep-talk swagger and a strange blend of inspirational clichés, he dove headfirst into the dire facts facing today's doctors-to-be. "How many of you have been warned by a doctor not to go into medicine?" he asked in a teasing tone. All hands in the room shot up, including mine. (The very person I was staying with, himself a doctor, had just a couple of nights before regaled me with awful stories about the proliferation of administrators in his hospital, and the abandonment of the hospital's founding mission under a new president devoted to increased profit-seeking. This doctor had eventually been forced into retirement. He recounted it all with a dazed look, gripping a glass of scotch.)

The shrink continued. “How many of you have been warned about skyrocketing rates of job dissatisfaction in the medical profession, crippling debt?” More hands. He promptly flashed slides of recent studies indicating that, uniquely among the professions, the more hours doctors work the more job dissatisfaction they report. (Usually the correlation is the reverse; if you like your job more, you spend more time doing it.) The story grew ever grimmer, the numbers more stark. We waited in hushed anticipation for the speaker’s wise advice for coping with this possible future. Finally it came: “I would like to encourage you not to ‘go’ to medical school”, he told us, then paused to let it sink in.

That was the pearl of wisdom: the whole problem centered on our errant usage of the word "go". Somehow, this "go" contained the kernel of a mindset that could explode into motivation-destroying depression, or cynicism, or any of the other negative feelings obviously so common among physicians and medical students these days. (Alas, I still think "go" is a nice, positive-sounding word.)

The psychiatrist did say exactly one thing that rang true, and reminded me of why I was there in the first place. Physicians, he told us, are unique among today’s professionals in being able to be directly engaged in highly meaningful work. This is true: although appointment times have shortened in order to see as many patients as possible, and paperwork has come to take up half or more of physicians’ workday, there is still hardly any human connection more profound, or potentially more satisfying, than guiding someone in their first steps towards healing.

Our world is chockablock with wantonly meaningless work, almost as if meaning were a blight that must be artfully contrived out of existence by precision management. I thought of the professor in the cubicle office writing grant after grant, not knowing about the work coming out of her own lab and basically too tired to care. I thought of the medical scribe, filling out mostly gratuitous paperwork and earning next to nothing, following the doctor around on a ten-hour night shift just on a hope of one day being like him/her. I thought of my own experiences in research–the long stretches of frustration and emptiness, punctuated by moments where it all seemed to make sense.

All in all, I don't know whether to classify the psychiatrist's strange interlude as brilliant marketing, blithe hand-waving, or an incredibly brazen exercise in reverse psychology. I do know that by the end I felt like my brain had been tied in a stevedore's knot, and the feeling of vague mistrust I'd gotten after my little calculation had shifted into high gear. "Yes, you will end up catastrophically in debt", seemed to be the message, "and yes, the system is broken and most of the people who have followed this path have wound up miserable. But lighten up! There's no substitute for being a doctor. You know you want it. And you'll get paid in meaningfulness." I thought of used car salesmen and subprime mortgages. You can pay it off later. Don't be fooled by the naysayers. You know you want it.

It’s interesting how students get hooked on the dream. One of the first-years who was giving us a tour of the grounds began to discuss his reasons for choosing the school. He said, “the one piece of advice I can give you is, go where you love. Like me. I got other acceptances, but I absolutely loved this school and I still love it. Look, I’m probably going to be $300,000 in debt when I leave here. But that doesn’t matter because I am just having such a great time.” He laughed nervously, maybe realizing he had said something he ought not to. I couldn’t help wondering what he would say four years from now, about to enter residency, with that 300k no longer an abstraction but a reality bearing down on his life and plans with its full weight. Would he still be having a great time? Perhaps. As the psychiatrist said, there are things more important than money.

My last interview was with an extraordinarily voluble specialist in neuroinflammatory diseases. An unrepentant technocrat, he seemed far more interested in discussing the nitty-gritty of the Affordable Care Act than in my clinical experiences or general interests. After plowing through the required interview questions, he quickly steered the conversation towards an obscure provision in the 1,500-page legislation that establishes a new research initiative, PCORI. Unlike all previous such initiatives, he avowed, PCORI will be truly indexed to patient-evaluated outcomes, ushering in a tectonic shift in medical practice. At the same time, new initiatives in telemedicine will soon bring specialist care to remote underserved areas. "It will revolutionize healthcare," he declared roundly.

I listened to these pronouncements of impending revolution with my best poker face. For the research institute, I imagined another multi-billion-dollar bureaucracy, filled with more people just like my professor. And while telemedicine can offer real benefits to the underserved, there is something disheartening about yet another initiative that reduces our already diminishing in-person contact with those who are supposed to heal and care for us to one more interaction with a digital screen.

He went on to discuss a model of human cells acting on mouse brains being investigated in one of his labs. When I asked whether modeling a human disease based on the behavior of isolated tissues from two different species might be rather unreliable, he doubled down, reassuring me that “B-cells don’t have MHC receptors, so that can’t be possible”. Winding from topic to topic, I also learned that populations in countries with single-payer health care systems actually hate them, but just haven’t realized it yet.

My head still whirling from the encounter with the technocrat, I headed back home in a state of total exhaustion, managing to get caught in a snowstorm along the way. The dark of the night, the whirlwind of icy crystals, and the slow erasure of the road under a slippery white crust were like a running allegory of my own stewing reactions to these experiences. I checked into a cheap hotel and slept the storm away, getting home the next afternoon.

So long story short, medicine is over. It might have been my path at a different time–but given all I have seen and heard, it’s hard for me to grasp how it could be the sincere choice of anyone who is not either a millionaire or gripped by a romanticized attraction to unalloyed misery. At the very least, that is a terrible frame of mind with which to begin medical school.

I'd more or less come to this point a few days after I got home, but wanted to go see the doctor I'd been shadowing and thank him for his time. Instead there was another MD in his place, one whom I'd met a few times, subbing in for the usual guy. I began to tell him about my experiences interviewing, and it was not long before he was pouring his heart out in frustration at everything from insurance costs to the new ICD-10 rules. All the nightmare tales of dysfunction, malpractice and venality, and all the glum faces and veiled warnings of doctors I had met over the past year, came back to mind. After a certain point I could not listen to any more. I thanked the doc and headed out of the hospital one last time, leaving my ID badge and blazer at the volunteer desk.

A few days later I got a call from one of the scribing companies I’d interviewed with, offering me a job. For an instant I toyed with the idea, purely for the sake of random adventurousness, of moving to a strange city to take a low-paying job whose sole upside was an increased chance at a career in which I was no longer interested. It could have literary value, I imagined, living a bohemian lifestyle by day and scribing like a maniac by night, wooing women in exotic nightclubs with my wicked knowledge of medical terminology.

I politely declined the offer.

It’s been a fascinating several months, but not exactly productive. I’m nearly back where I started: all around I see professional careers whose packaging and general goals seem appealing, but whose daily reality seems void of meaning, if not soul-crushingly miserable.

Lately, as my med school application saga has been winding down, I worry when I look at these choices and judgments I've made that I am falling into the trap warned of in Candide: "those are not the best stomachs that reject, without distinction, all sorts of food." Yet maybe it isn't purely a fault of my character that so many of the options these days seem truly lackluster.

I've also found myself haunted by a line in "Do Androids Dream of Electric Sheep?", where the pseudo-messiah Mercer declares, "it is the basic condition of life, to be required to violate your own identity". I wonder if this is simply a throwaway line, devised by a troubled science-fiction writer who wanted to depict a world where war and mass entertainment had turned all human hopes awry, or something that resonates more deeply with how we now live. Has violating one's own identity always been a sad fact of life? Was I just too green and too insulated to see it?

I tend to answer no: it need not be a fact of life. When I talk with people of older generations, I see that many of them really did not have to violate their own identities in order to make a livelihood–or at least only had to do so a little. All the physicians I have chatted with clearly loved their work at one time–stress, studying and all. They joined the profession with high hopes for a meaningful calling and a chance to make good on their potential, and for a long time, the profession requited those hopes in exchange for earnest work and a dollop of ambition.

The situation in the professions has since become qualitatively different from that, not just in medicine, and maybe not just in the U.S. There is a wider trend here–something to do with a change in the relationship between professional and profession, a move from respect to indentureship. We have a long way to go to get back to the days when a quality education and a can-do spirit could assure reasonable financial freedom, a respectable career, and a chance at the pursuit of happiness–and I cannot help but think it will require social struggle, above and beyond distinctions between public and private sector. There are now doctors' strikes occurring in Britain's NHS over wage cuts and increasing working hours, and speculation is growing that American doctors may not be far behind in following suit–there is talk of a union. For all the attention given to technological novelties like IBM Watson, Fitbit, telemedicine and personalized genetic counseling, pavement-pounding actions like this may have at least as big a part to play in deciding the state of medical practice in the years ahead.

The Force, Mistaken

Of course the world is doing nothing that interesting, besides a possible war in the Persian Gulf, immigration craziness in Europe congealing around the Cologne events, a State of the Union speech juxtaposed against the rise of our country’s first not-so-crypto fascist presidential candidate, and markets tanking from Shanghai to New York. So it’s the perfect time to massively overthink movies.

Yes, I went to see the new Star Wars, finally. This is a true phenomenon, amassing billions in ticket sales in a few days, with the embarrassment of George Lucas’s ungainly prequels all too easily forgotten (his comments about “white slavers” less so). Safe in the hands of Disney Corporation, the trilogy that Joseph Campbell once considered a new hope for the re-discovery of mythological power in everyday life has been hoisted up again like an Imperial battle flag.

Star Wars was very much a product of the '70s, a time marked by downbeats like the deepening Cold War, the Arab oil embargoes, the dismal winding-down of the Vietnam misadventure, and the fall of Bretton Woods and the birth of the neoliberal consensus, but also by an extreme and burgeoning technological hopefulness. That shows in the world of the original Star Wars; sure, there is an Evil Empire, but even in the darkest times there is a sense of dynamism, magic and adventure afoot, and marvels of technology that are inconceivable to us merit as much attention and explanation as we would give to a Cuisinart. But how does this world play with the spirit of our own very different times? Has the old myth come out with new things to say for our own 2015-vintage crises?

For certain, one can't fault "The Force Awakens" on its special effects or atmosphere. Each planet and each scene is rendered with a loving realism combined with a stark surrealism that in places seems to draw on the old 1990s computer game Myst. Every set-piece is meticulously constructed, and praise must go to J. J. Abrams for eschewing CGI wherever possible in favor of the texture and warmth of real-life scenery. This world is alive: monstrous machines purr with obscure energy or soar through landscapes littered with natural and technological debris of cyclopean scale; luminous beams of sheer destructive power surge through space or are suspended in midair; and whole planets explode in expertly constructed tableaus of terror.

But as far as actually being a film goes, it’s hard to come away from “The Force Awakens” feeling any new final frontiers (forgive the mixed metaphor) have been opened up at all. Instead, something about the whole production is studiously deceptive; perfectly executed, gorgeously rendered, yet multi-dimensionally unfulfilling. Probably the mildest way of putting the problem is that “The Force Awakens” features an awful lot of rehash; a less mild way would be to claim self-plagiarism run amok. As one watches, it’s hard to avoid the feeling that someone coldly analyzed a list of the most salient features of the original trilogy, right down to the camera angles and lighting, and then contrived mightily to stuff them all back into a new clown-car script with as little rearrangement or alteration as possible.

The list of rehashes goes on and on: we have the planet-destroying Death Star, again, only bigger, now planet-sized instead of moon-sized, as though making it bigger makes the idea novel again. Seriously, how many times will they try to build that thing? We have the climactic scenes of flea-like X-wing fighters somehow taking out this same planet-sized weapon, which just happens to have one supreme vulnerability–and again, how many times do they have to build in that One Supreme Weak Point, and leave it open to attack no less?

On to the characters. Masked villain, thoroughly Vader-esque down to the clothing style, weapon and voice, complete with later “I am your father/son” complications and a theme of skill with the Force by hereditary descent? Check. Evil Empire, smartly though incorrectly rebranded as the “First Order”, complete with similar iconography and identical uniforms? Check. Emperor or Emperor-substitute, though in this case it looks too much like Gollum and Ian McDiarmid was way better? Check. Rising of a new hero(ine) on an impoverished hot desert world from hardscrabble but mysterious beginnings? Check. Same catchphrases, ships and old jokes?

I’ve got a bad feeling about this.

This kind of abject dependence on past formulae in a saga that has become, for better or worse, a cultural institution–and that was also, in its time, something stunningly fresh and exciting–might not have unsettled Campbell, who spent his career in service to the ultra-reductionist idea that nearly all myths can be reduced to a single set of archetypal themes, laid out in a standard pattern called the "hero's journey". Yet even if it's true that mythology constantly recycles old themes that are deeply woven into the human condition, a vibrant myth also breathes, explores new territory, and creates new visions. In the case of "Force Awakens", though, seeing such a lack of imagination manifested on such an inevitably vast popular-cultural canvas makes it hard to dismiss the result as just a mediocre movie, one so choked with nostalgia and old formulae that there is almost no room for anything surprising to sprout. If "Star Wars" has become one of the core myths of our civilization, then its latest incarnation suggests that while the need for this myth has only grown stronger, the myth's actual vibrancy and creativity have abandoned us; and the failure of a movie of mythic intentions must itself be… mythic.

Yet perhaps the most alarming aspect of "Force Awakens" may stem from what is arguably its only novel twist on the old myth. Put simply: didn't we just destroy the Empire some 30 years ago, with the end of "Return"? How is it that the near carbon-copy First Order could spring up so quickly from the ashes, and with such massive resources? How could our remedy in "Return" have turned out so inadequate? At the end of the first trilogy, the evil Emperor was killed. The Death Star was destroyed. Vader was restored to grace, and Luke finally arose as a wise and balanced master of the Force. It may not have been the best ending, but it looked about as happily ever after as you can get–the threads were tied together satisfyingly enough, with an arc of character development, spiritual progress, and an earnest victory over the evil of the machine.

But "Force Awakens" now implies that all these developments were short-lived, even futile–a mere 30 years after the events of "Return", the Dark Side is already back on its feet, blowing up planets by the dozen. The Light Side, meanwhile, is unmasked as having no better remedy for such evil than perpetual warfare, and indeed seems to thrive on it.

This new perspective gives the impression not of transcendent wisdom or compassion winning out, as the original trilogy had it, but of a kind of sordid moral duopoly between the two sides of the Force, a 1984-like model with imperial powers, all innately ruthless, fighting endlessly and pointlessly over what is left of the universe. (Or perhaps: the original trilogy as WWI, and "Force Awakens" as WWII.) Here, at least, is a vision all too resonant with our current times of rising intercultural strife and curtailed hopes. The moral idealism of the earlier trilogy has been replaced, for 2015 audiences, with a world where perpetual warfare and constant strife are accepted as the best that even the good guys can offer. But even if it is appropriate to our times, I can't say this is a welcome change, and it is even stranger to see it applauded.

This is not to dismiss the efforts of the cast, which is uniformly excellent. Daisy Ridley, who plays Rey, is a revelation; she smartly eludes all clichés, portraying Rey not as a stock "strong woman who can hang tough just like the guys" character, nor as a thinly veiled sex object, but as a genuinely vivid and vivacious personality filled with curiosity and drive. John Boyega portrays Finn with urgency and human warmth, and not a trace of guile or pretense. Harrison Ford approaches Han Solo, the role that launched him into the annals of superstardom, with the same old rascally wit we know, plus a nostalgia that is often more compelling than what the script actually gives him. The rest of the old cast is touching too, including Carrie Fisher as Leia and Mark Hamill in his brief but poignant appearance.

Yet even these excellent efforts don't disguise the lack of a novel story or interpersonal chemistry. There are embers of real emotion in the reunion of Solo and Leia, at least. But even this is at once too superficial and laid on too thick. Rey, for all her appealing pluck, is kind of a blank; we meet her on her desert planet, and learn nothing about who she is or what motivates her (except for a couple of vague allusions to childhood trauma) before all hope of finding out is lost in the compulsive kinetics of the plot.

The same applies to the runaway stormtrooper, Finn–we would like to know a little about who he is or why he, so uniquely, revolts and escapes from the Order, but nothing is forthcoming. Soon we are off racing through special-effects land, carefully aping every tic and stutter of the original trilogy, inexorably pulled towards the encounter with Han Solo by a series of coincidences too contrived to be explained by anything but the force of Harrison Ford's immense salary.

In a recent blog post, Jim Kunstler wrote of his deep weariness of space adventure movies. A number of other writers have also begun to question whether space is even still a valid setting or subject for new fiction, since a) it has been done to death and b) the dread of an impending civilizational backslide makes space stories seem absurdly utopian and out of touch. I consider this an over-reaction, but it still invites the question of what our hunger for space stories is all about. Is it a Faustian thing, a heady craving for infinite expansion and boundless possibility? Or could it be a sign of a deep emptiness we feel but cannot articulate, leading to a nihilistic infatuation with the void? Do we retain a trace of old cosmology in the back of our heads, believing some form of divinity or grace awaits among the stars?

I would propose that our culture produces Star Wars (and similar sci-fi franchises) as a kind of antibody–a formula learned by challenge with a certain kind of existential stress. Just as our bodies must struggle against an infection before hopefully discovering the exact antibody that will bind to the attacking germ and destroy the infection, so the minds that make up a culture must struggle with the problems of life in their time and wait for someone to come up with an answer that makes those problems at least bearable.

In fact, given that "Force Awakens" is for the most part utterly imitative and emotionally empty, I'd submit that it makes more sense to interpret it as an antibody than as a new installment of an old myth. Just like an antibody, "Force Awakens" is a cultural summoning-up of a formula that once helped us fight off an illness resembling the one that now grips us, but that adds little immunity to anything beyond that.

The question then becomes: what does the Star Wars antibody fight, and does it work now? I'd suggest that it fights (or fought) the fear of technological and economic impasse on the one hand and, on the other, the fear of creeping spiritual death at the hands of faceless bureaucracy, amoral capitalism, and scientific materialism, and that today it works a lot less well than we desperately want it to. The original Star Wars succeeded wildly because it combined the yearning for and conviction in the "technological sublime" that prevailed in the '70s with a way, however fictive, of embracing the spiritual and symbolic urges of man that were just then under attack by those same technological advances. By postulating a great non-materialistic "Force" that permeates and connects all living things through all of time, and that is responsive to moral valences like goodness/light and evil/darkness, Star Wars gave modern man a way to have his cake and eat it too: to keep his mythic soul aflame and still go light-speed into the technological fantasy.

Of course, it is a bold move to take a phenomenon like the Star Wars rehash-craze as a readout on the wider attitude of the country or world. A movie is nothing but a money-making enterprise, a warning voice seems to say; do not regard its popular success or artistic shortcomings as indicative of anything further. Yet while financial incentive is important, I believe it alone cannot account for the themes that are in people's minds–writers and moviegoers alike–and it cannot decide the response or the reception.

Movies, I instead believe, are like trial balloons that aim to illuminate the current preoccupations and needs of the populace–their psychological state. If a movie succeeds in hitting those needs, there is a great flash, a synchrony, as huge numbers of people go to watch it and reflect its light, adopt parts of it into their own personal mythos and are brought closer together in the process. If it fails, or finds a need too far ahead of its time, the reaction is deafening silence and obscurity. In our symbolic way, always oblique as most human activity is, we vote with our feet by filling the theaters, seeking relief for impressions that haunt us yet cannot be summoned consciously.

Of course, repetition is a good substitute for creativity only if the needs have not changed at all, if the illness we face is the same one we fought off before. As Michael Hiltzik of the LA Times put it in his review of "Force Awakens": "One can envision Hollywood eventually turning out only two products: 'Star Wars' movies and James Bond movies, each periodically 'rebooted' for a new generation of customers by casting the latest new young stars in new costumes facing the same old perils and uttering the same old quips, with every other vestige of creative originality relegated to the void and forgotten." He left out Fast and Furious, Tolkien fantasies, Bat-man, Super-man, Spider-man, Ant-man, Iron-man, and (Insert-Noun-Here)-man, but the point is taken.

The lamentable problem of the present-day imagination–the failure to discover new antibodies to new spiritual illnesses–may be summed up this way: the entertainment industry has become our sole acknowledged myth-creating system; yet it was not meant to function as such, but to make profits. As long as this is so, our mythology, however compelling and badly needed in our time, will always bend to the demands of franchising.

The Healer’s Quandary

So I'm on the road again. I just popped through Telluride, where the skiing and the views are spectacular: sheer cliffs, 14,000-foot peaks, and pure expanses of sparkling snow, like another planet. Then I drove and drove through the frigid core of the Rockies (2 below at mid-day).

The main reason for the trip is interviews. I’ve applied to medical school, and this is my last one–unless some school I haven’t heard from surprises me with an invitation very late in the application cycle.

As the days roll nail-bitingly by, I find myself poring over the motivations for my decision to go the med route. I originally set out on this path for two main reasons: 1) the inspiring journey of becoming a healer, meeting and helping people and becoming a pillar of the community; and 2) the abject rottenness of the job market for recent PhDs.

But it’s strange the hidden stories some people have. I remember a nurse I met long ago during a check-up, very energetic and unusually bright, who related that she had once attended law school–purely for “ha-ha’s”, as she put it. She ended up leaving and of course wound up in a radically different career.

With no acceptances, only this interview left, and my 30s ticking away, I find myself wondering if in a few years I will tell people, with a practiced nonchalance, the same thing about my graduate school experience and about applying to med school: that I did it for "ha-ha's".

What I have come to see is that medicine is a house of cards in this country. Docs are often miserable, up to their eyeballs in expenses, administrative hassles, and debt, and students and residents are even worse off. In the year or so since setting out to pursue medicine, I have already seen first-hand a series of hideous outcomes due to medical mistakes, while a recent article in Time, about the epidemic of depression and suicide among medical students, threw a serious curve at my grand imagery of re-tooling for a medical career.

Then add in debt. The particular school where I'm interviewing, for example, costs more than $50,000 a year, which puts it slightly ahead of the curve, but only slightly. With living expenses and loan interest near 7%, one can easily come out $300,000 in debt.
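As a rough sanity check on that figure (the roughly $20,000 a year of living expenses is my own guess, borrowed on top of tuition):

    # ~$50k tuition plus ~$20k assumed living expenses, borrowed yearly at ~7% simple interest
    yearly_borrowed = 50_000 + 20_000
    principal = 4 * yearly_borrowed
    interest = sum(yearly_borrowed * 0.07 * t for t in (3.5, 2.5, 1.5, 0.5))
    print(round(principal + interest))   # about 319,000 -- past the $300,000 mark before residency even starts

So the $300,000 figure is, if anything, on the gentle side.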

This extreme level of indebtedness seems to have the ultimate effect (goal?) of pulling the medical profession into a kind of financial servitude, though it afflicts the scions of the well-to-do as much as the underprivileged: read about Ben Bernanke's son, who recently graduated from medical school with $270,000 in debt. According to the AAMC, the average indebtedness of the class of 2014 was $180,000–a number that may well be higher in reality, considering the source, and that in any case is almost guaranteed to increase.

Yet here is the amazing thing: this seems to be no deterrent. A huge, indeed record, number of people are applying to medical schools, filled with a starry-eyed conviction that if they just get in, their lives will be immeasurably more fulfilled. I can't tell whether such people are magnificently idealistic, unbelievably tough, or lamentably drinking too much hero-doctor Kool-Aid past its sell-by date.

Another possibility is desperation, as more and more avenues of advancement for smart and highly educated young people seem to be cut off. Ladders are being thrown away, and oligarchy becomes the order of the day. The United States now ranks among the least economically mobile advanced nations, and though incomes for doctors have declined and continue to do so, medicine remains one of the very few areas where, if you are smart and (extremely) hard-working, you can be fairly assured of a top-1% salary.

As a result of this flood of applicants, admissions and interviews have never been more competitive, even for low-tier schools. Tiny slips or deficiencies can doom you.

A friend of mine, to whom I voiced these concerns–particularly about ending up more than a third of a million dollars in debt after school–simply shook his head and said, "if the money concerns you, then you don't really want to be a doctor". But I think this is too simplistic. It's true that part of becoming a professional is going through struggle and suffering to prove yourself. But any object of enthusiasm can be reduced to an object of ambivalence if the obstacles and sacrifices become too crippling.

As I ponder these prospects I find myself wondering: could medical school be a bubble? So far, it appears not. Despite a huge shortfall of physicians, admissions have been kept low, and the record tide of applicants shows no sign of slackening. Yet I cannot help thinking that at some point, unless things change dramatically, more and more people will begin to realize that a medical career is not the prosperous happily-ever-after that it once was, and is still widely thought to be.

Then I look at the alternatives, which, with my educational background and no industry connections, pretty much means a postdoc: make barely more money than a grad-student stipend, have zero job security, crank out boring papers.

So for all my concerns, when I ask myself what I will do if I don’t get in to any med schools this time around, the answer comes back: apply all over again next June.