
On an episode this past May of the podcast The Incomparable, a great panel discussion show about whatever bit of culture, entertainment, or literature strikes their fancy, the topic was Iron Man 3. Somewhere in the middle of the conversation, Guy English takes the temperature of the group concerning the introduction of Tony Stark’s panic attacks, a symptom of his PTSD following the harrowing events of the previous film, The Avengers. There was general agreement that, yes, what Tony experienced battling the aliens at the climax of The Avengers would definitely fuck one’s shit up, but there seemed to be some ambivalence about whether it was out of place in the context of the film in question.
I want to quickly address what they did not, which is whether it was done well. I watched Iron Man 3 only recently, on my MacBook during a flight a month or so ago, and I found the portrayal of someone suffering from post-traumatic stress disorder shockingly realistic.
Let me qualify: I have only my own experiences to draw from. Read this to get a better idea of how I know anything about the topic — it doesn’t involve aliens. I do not at all want to assert that my experience of PTSD is universal or even common. It’s just what I know.
All that said, I was amazed at how much I related to Stark in his moments of panic. I recognized my own behavior in his when one of his attacks set in. (For the sake of rhetoric, I’m going to use the word “you” even though I’m really talking about “me.”)
Something sets you off — a reference to an event, an association, a physical stimulus, what have you — and an animalistic fight-or-flight instinct takes hold. But it doesn’t necessarily own you entirely; you don’t turn into some werewolf in a waking nightmare. Your conscious mind is aware of what’s happening. You know you’re having an irrational rush of emotions and that your body is now compelled to act with sudden and overwhelming urgency. Maybe you run, maybe you fight, maybe you hide, maybe you scream, etcetera. All the while, you recognize that something not of your neocortex is in control. You may even be able to make jokes about it while it’s happening.
So I was mightily impressed and very much surprised by the way it was handled in this movie. It would have been easy to overdramatize Tony’s episodes, to make them Hulk-like in their violence and intensity, to make Tony unrecognizable in those moments. Instead, they let them be very much Tony’s episodes. We got to see him become aware of something happening to him, see him comment on it, struggle with it, and even try to mitigate it based on circumstance. And yes, he could even have a sense of humor about it in the moment. What was so true to life for me was that Tony never loses all control in those moments, but you do see his whole body carriage change as though a new force were asserting itself on his body’s operation, as though he were the Iron Man suit, and his amygdala now the driver.
We see a lot of troubled superheroes. Too often, though, their traumas exhibit themselves in brooding or vendetta. It was extremely refreshing to see a trauma manifest clinically in a superhero character, in a way I as a fellow-sufferer recognized. Indeed, Downey’s portrayal of PTSD episodes was so real to me, it mildly triggered my own responses, sitting there at over 10,000 feet, in the dark. My heart raced with his. My amygdala called shotgun in my mind for a little while.
I understand why it might have seemed a touch superfluous to the Incomparable cast. There are a lot of ways to tell the story of Tony growing as a character and knowing what it is to have weaknesses and failings. But this way of telling that story was crystal clear to me. For a big, explode-y Hollywood blockbuster, they told that about as well as I imagine anyone could.
A piece in The Economist argues that despite popular fatigue with our country’s countless foreign entanglements, Americans ought to appreciate those entanglements, which enable us to maintain our world primacy, and therefore our ability to enormously influence the workings of the world to our advantage.
I am sympathetic to this position. I take solace that it is currently we who are calling many of the global shots, and not some other superstate, ascendant as it may be.
But when I think this way, when I’m expressing a preference for the primacy of American values around the world, I admit, I’m not thinking about a great deal of what makes up America. I’m not thinking about Texas or Florida or Louisiana. I’m thinking about my version of American values: progressive, with a strong emphasis on human rights. I’m thinking about “Blue America.”
So this gets tricky for me. Here’s one of the qualities the author of the Economist piece in question notes as a key factor in our primacy:
First is geography. Being self-contained makes America secure, whereas all other great powers have had to defend themselves against their neighbours. Even Britain at the height of empire in the 19th century was repeatedly distracted by the need to stop any one country dominating continental Europe. By contrast, America has friends to the north and south and fish to the east and west. Europeans warily eyeing nearby Russia, or Asians fearful of China, can ask Americans for help, safe in the knowledge that they have a home to go back to on the other side of the world.
This is very Guns-Germs-and-Steel-esque, ascribing geopolitical destiny in large part to geography, both in terms of location and land shape. And it makes perfect sense.
But here’s the thing. I have suggested, rather earnestly, that the United States would be far better off if it were not so damned united anymore. Rather, I’ve preferred the idea of smaller North American nations that better suit the increasingly disparate ideologies of the various regions. So, for example, you’d have New England as one country, Texas as its own nation, etcetera, all trading and cooperating, but no longer bound by the same central power, and therefore able to get more done without haggling with polities with few shared interests.
If I got my way, though, there go the fruits of geography. Poor, naive New England or the tiny-yet-dense nation of New Amsterdam (which is what I’m now calling the nation-state of New York City) would be suddenly vulnerable to the potential aggressions of The Old South or what have you. Far-right representatives now commonly sent to Washington have already shown themselves to be more than happy to destabilize the global economy on a whim, and they tend to hail from those gun- and machismo-worshipping regions that might be more inclined to threaten their neighbors. Given the power of an independent nation with a military of its own, who’s to say they wouldn’t behave just as irrationally and dangerously?
So not only is a unified, centrally-governed United States good for geopolitics, but it may also be the only thing standing between a secure New England and an army of ornery Texans marching on Boston. I’m probably exaggerating the potential for all-out armed conflict, but it does throw a wrench in my fantasy.
Here are two cover tunes I cooked up in my almost nonexistent spare time: my own little versions of The Buggles’ “Video Killed the Radio Star” (which I’d always meant to cover, and was spurred to do so when Emily Hauser dissed the song on Twitter), and Toad the Wet Sprocket’s “Walk on the Ocean,” which I used to sing to my son as a lullaby.
And he’s four years old today! I love you, buddy! (You can see him sing the song himself when he was 2 in this video.)
Anyway, here’s the tunes. Share and enjoy.
Update 12/1/2013: Fixed a drum issue with “Walk on the Ocean,” so the file is new.
Update 12/2/2013: Toad the Wet Sprocket, in the person of their Twitter account, just declared my cover of their song “Very cool.” Achievement unlocked.
Twenty days with the iPad Air, and a quick follow-up. I’m still in love, but the sparks of young romance are evolving into a mature and seasoned relationship.
Whoa, that got weird. Anyway, it’s still great, of course, but I will say that despite my delight over the vast reduction in weight, it’s still not really a one-handed-use tablet. The narrowed bezels certainly make for easier thumb typing in portrait orientation, but they also mean I’m never sure if I’m letting my thumb creep too far onto the display for the device’s accidental-touch-sensing voodoo to keep buttons from being tapped and pages from being turned. If I were a little more sure of that, one-handed use would be markedly easier.
I can certainly hold it in one hand for short bursts, but it’s no iPad mini. Some reviews have said that the weight difference between them is almost negligible, and I just don’t think that’s correct.
Ah, the iPad mini with Retina Display. I got a chance to play with one briefly in the Apple Store today, and it is a little wonder of a device. Now that is a fantastic tablet for long-term reading, and yes, holding with one hand.
But it’s also a little too scrunched. I launched the Paper drawing app, and it felt more like a little notepad than a sketch tablet. While text was just fine, and arguably better than on the iPad Air, everything else felt slightly more claustrophobic. Just slightly.
And this is based on only a quick look, but the store model’s display certainly seemed rather dim. I double-checked, and it was indeed turned up to full brightness, and it just didn’t pop. The iPad Air pops like crazy.
So look, I won’t be holding my iPad Air like this no matter what Apple’s marketing images suggest (especially since I haven’t bought my AppleCare yet, and I’d probably drop it), but it’s still a wonderful device to hold, to read with, and to type on (comparatively, of course). It’s still the object I want to use, as opposed to what I have to. I still pine for the unworldly lightness of the Mini, but for the full experience, the Air is still the slab for me.
Alan Jacobs, one of my favorite writers, declares that writing on the iPad, as opposed to a laptop, sucks. Lamenting the device’s frustrating limitations as an editor and formatter of text, he concludes:
I’m typing this post on my MacBook Air, and it’s a real pleasure. It’s lightweight and fits in my lap nicely. It was trivially easy for me to insert all those links into this post, and it’ll also be trivially easy for me to upload what I’ve written to Blogger. When I made mistakes in typing it was simple to correct them. Unless I were compelled by economic or other necessity to use an iPad to write, why would I ever do so?
I have an answer.
I write on my iPad almost every day. I’m doing it right now–and I’m not even using an external keyboard! On-screen typing is one of the key reasons I’ve opted for the 10″ variant over the Mini, which my RSI-addled wrists would greatly prefer. (Dig my opus review of the iPad Air here.)
I also have a MacBook Air. A tiny 11″ one, at that. So why not use that to write whenever the occasion strikes? I think it’s the same reason that you probably use your iPhone camera most of the time, even if you own a nicer, single-purpose, higher-end camera. When you want to take a snapshot, the phone is there with you. The kid is doing something cute right now! That crazy-beautiful sunset is happening right now! The high-end camera, if I own one, is in the house, or in the other room. The iPhone camera is right here. Snap!
So it is with the iPad and writing. I get most of my ideas for posts while sitting and reading, you guessed it, on my iPad. My MacBook Air, small and portable as it is, is hooked up to a monitor, speakers, a microphone, and an external hard drive up in my office. The iPad is here in my hands. If I have an idea I want to write about now, I just turn the iPad to landscape (maybe pop it into my STM Cape case that holds it in a typing position very firmly) and start writing, right where I am.
Of course the MacBook Air is a better writing tool. By far. (And that “by far” is where my analogy to the iPhone as a camera weakens, as the iPhone camera is pretty damn good.) Everything that Jacobs says in his post about how maddening it is to do something as simple as selecting text on an iPad is totally true, and why Apple hasn’t improved this experience one iota is kind of baffling. And yes, inserting links and doing other kinds of formatting are vastly simpler on the MacBook. Which, by the way, is why on the iPad I now write my blog posts in Markdown.
But the MacBook is over there and hooked up to as many wires as a patient in an ICU. I want to write now, while I’m still awake and while the thoughts are still churning. Or when I’m at the coffee shop sans computer, or on the plane when dragging the Mac out of my bag is too much of a hassle. Etcetera.
This is almost cliché by now, but I think it’s worth noting that the mono-tasking that the iPad enforces really does reduce distractions, making it easier to focus on what I’m writing rather than Twitter replies or, um, well, nothing matters more than Twitter replies. (Yes, I have those notifications turned off on the iPad.) It is far easier to approach a state of flow when all I have in my field of vision are the words on the page, and the letters on the keyboard that will make up further words. Writing on the computer often feels like something I am trying to squeeze in, or even sneak in, among all the other calls for my attention.
So, to Jacobs’ question of why one would ever write on an iPad, the answer is clear. Because it’s there.
If you’re deeply into Star Trek, as I am, you’ve wondered what the hell people do all day. Not the folks in Starfleet of course, but, well, everyone else. We are told that within the United Federation of Planets, or at least in Terran society, there is no money, and people labor merely for self-improvement and scientific or cultural advancement.
Rick Webb has written a really fascinating piece trying to suss out the economics of the Star Trek universe, making sense of some seeming contradictions (such as “Federation credits” and human-owned private property) and delineating basic needs that are nearly-infinite (food and wealth on Earth) and those that are perhaps less so (the energy and material required to construct a fleet of starships). This analogy, previewing the meat of his explanation, gives the gist:
Imagine there’s some level of welfare benefits in every country, including America. That’s easy. That’s true. Imagine that, as the economy became more efficient and wealthy, the society could afford to give more money in welfare benefits, and chooses to do so. Next, imagine that this kept happening until society could afford to give the equivalent of something like $10 million US dollars at current value to every man, woman and child. And imagine that, over the time that took to happen, society got its shit together on education, health, and the dignity of labor. Imagine if that self-same society frowned upon the conspicuous display of consumption and there was a large amount of societal pressure, though not laws, on people that evolved them into not being obsessed with wealth. Is any of that so crazy? Is it impossible?
I think that is basically what’s going on on Star Trek.
You should really read the whole thing.
Anyway, Matthew Yglesias doesn’t agree with everything Webb has written (though I don’t think even Webb believes he’s got it all figured out and totally correct). He concentrates on the question posed at the beginning here about what the hell people do all day since they’re not compelled economically to have a job. But then again, we know that some people do have “professions” outside even the straight sciences or the arts, such as Ben Sisko’s dad Joseph who owns a restaurant and Picard’s brother René who owns a vineyard. So, why would they if not to make money? Yglesias writes:
So what do the producers of scarce goods do? Well, presumably they’re giving a lot of stuff away. Friends and family get bottles of wine. Perhaps you send a case or two to some particularly admired athletes or scientists or other heroes. Maybe artisanal wine just isn’t that popular in general. And maybe you barter some bottles for other artisanal goods. Maybe you have a friend who hand-carves furniture. But at its most fundamental level, it’s a gift economy. The point of running your restaurant or your vineyard is essentially to show off your mastery, not accumulate wealth. There may be some more-or-less formal exchanges, but the key point is to get the output into people’s hands and not work so hard as to make yourself miserable.
I think he’s trying too hard. Think about it: what is it I’m doing right now? I’m blogging, for nothing. I try every day to post a new piece, some short and ill-considered, others long and (I hope) worthy of digesting. But I do it all with no expectation of compensation. Now, I would very much like to be compensated (and you can donate to this little enterprise here!), but because I enjoy it, I do it as much as my time and energy allow.
And if money were no consideration at all, I’d do a lot more of it. But here’s the key: it wouldn’t be exclusively for my own private satisfaction. I’d hope that folks would read what I write, engage with it, and discuss it with others. I’d hope that people would listen to my music and podcasts, and that they too would get a shot at making their own tiny dents in the universe. There’s value there, a scarcity, because while “writing” is overabundant, the writing of Paul is scarce. It comes from a single and mortal source, giving it value.
So I’m saying that the economy of Star Trek is a lot like that of the user-generated ecosystem of today’s Web.
Joseph Sisko didn’t open his restaurant so that he could be the only person to eat there. He wants people to come and enjoy themselves, to talk about the place, talk to him, and spread word of what a great place it is. This is important: with ubiquitous teleportation technology, having a brick-and-mortar restaurant is as “placeless” as anything on the Web. Anyone can come on any night from anywhere on Earth, and beyond! On a whim! Joseph Sisko gets to share his work with the literal world.
And so it is for René Picard and his vineyard (before, of course, the events of Star Trek: Generations). René, being a proud and meticulous man, may have even taken pleasure in the simple maintenance of his estate without ever feeling the need to share it with anyone outside his family. But he had access to the whole of the Alpha Quadrant. Had he wished, he could have, as Yglesias puts it, shown off his mastery throughout the galaxy, thanks to transportation technologies of the time. It’s as though one could sample his wine by popping a URL into a browser.
As much as I enjoy writing, and would have almost certainly tried to make a career of it no matter what, it is the advent of the Internet and ubiquitous and free or cheap publishing platforms that have allowed me to do what I do at this site. Whether or not people read and engage with my work is another matter, but the point is that they are able to, from wherever they are, at any time. And I do it without pay. As I said, I’d rather get paid, surely, and it would improve my output in terms of both quality and quantity. But on the Federation’s Earth in the 24th century, I wouldn’t need to worry about that. I could devote my energies as I wished. So would it be in Webb’s analogy of every American getting $10,000,000 from birth. In those situations, some would run restaurants, some would join the military, some would be artists, some would be explorers, some would be scientists, and some, perhaps many, would do nothing at all. And that would be okay, too.
Joe Weisenthal creates out of thin air the first-worldiest of all first-world problems. (And I say this as someone who loathes the “first-world problems” faux guilt-tag.) You know what our problem is? Too many days off:
Far from everyone has a job where they’re truly stimulated, and get to be around people who provide them an invigorating level of social interaction. But for the people who do have that, two days is a long time to totally shut that out. After a day, it’s time to start warming back up and getting into work mode.
For many professionals it seems, Sunday is less a “day off” than it is to do similar things as you might do while “at work” but without the infrastructure and bureaucracy of being “on the job.”
Let’s give the benefit of the doubt, considering his use of the word “professionals,” and presume he’s not talking about folks who physically labor, or work themselves to exhaustion at their jobs.
But even so. Work is work. Even if you’re lucky enough to be intellectually engaged by your job, even if its associated subject matter is something you’re fascinated by regardless of your state of employ, even if you’d do your day-job work for nothing if you had to, it’s still your job. Particularly if you work for someone, you’re doing that meaningful work within a structure and an institution that has its own overarching needs and directives, which always supersede yours.
You will have goals to meet, boxes to check. You have a schedule, deadlines, “working hours” that, presumably, you do not set. When you are “at work,” your time and efforts are not your own. This can even be the case if (and sometimes especially if) you own your own business, and answer to no one. Because it’s work, you’re somewhere, somehow, answering to somebody.
And you need time off from that. And one day doesn’t cut it, not for me anyway. One day off is a fluke, a sick day, an errand day. Two days off is the minimum for what feels like actual time off.
Of course, it’s perfectly fine to choose to continue to engage in things related to work on your days off. It’s even okay if you want to just work! But then, it’s your choice, you’ve decided that you want your leisure time to be filled with more from the universe of your job. Fine.
Me, I can’t do that. I love the organization I work for, and I’m proud every day to be a part of it. I believe in its agenda wholeheartedly, and find its sphere of subject matter fascinating and critical.
But on my off-time, I hardly touch it. I don’t read atheist blogs on the weekends, I don’t listen to the skeptic podcasts, I don’t spend my quiet time reading books about secularism. Because to me, it’s all part of work, and I need to decompress from it almost entirely when I’m not at work. Why wear myself out on it if I don’t have to? Why waste the opportunity to engage in other interests and activities?
I want two days to cleanse the palate. Bare minimum. One day would be akin to an ill-timed nap that leaves you more tired than before. Add to all this that I have two small kids, and that not-work time becomes several times more valuable.
So, no. Let’s not do one-day weekends. Ever.
Now, I could be open to staggered days off, say, Saturday and Wednesday or something. But two. No fewer. Ever.
So there’s that new Noam Scheiber piece in The New Republic that everyone’s talking about, positing that Elizabeth Warren could well be the insurgent force that upends the Hillary Clinton presidential coronation. It’s good stuff, though I think it overstates the favorability of the environment for Warren to succeed or mount a serious threat to Clinton. Scheiber cites some compelling-seeming data on the Democratic electorate’s feelings about Wall Street and banks, and of course they all show how much we liberals hate them richie-riches. But that’s not new, though perhaps the feelings are more intense now. I just find it hard to believe that something as mind-bogglingly complicated as policy concerning financial markets could really define the contours of a national race. I could be wrong.
(I have to admit, my eyes rolled a bit at this passage: “Chris Murphy, the Connecticut senator, estimates that not too long ago, congressional Democrats were split roughly evenly between Wall Street supporters and Wall Street skeptics. Today, he puts the skeptics’ strength at more like two-thirds.” Oh really? One guy guesses it might be “like two-thirds”? Well take that to the bank! Or, since it’s angry populist Democrats, take it to the community credit union.)
But for this post I’m more concerned with the question of whether Warren will or should run for 2016. The answers are maybe and yes.
If the conventional wisdom about a political figure is, “Well they’re pretty hot right now, and they show promise as a candidate in 4/8 years,” it really means they need to run now. Many thought Barack Obama should have waited at the time, being young and relatively inexperienced, but he knew better. He knew his star was brightest in 2008.
But the example need not be a successful one. Sure, Jon Huntsman turned out to be a lousy candidate. But remember, he was something of a darling during Obama’s first term: a popular governor, crossing party lines, representing America in China of all places. That glow was not going to last to 2016, however, at which point he’d be forgotten. He had to run when he did, or never run at all. He failed utterly, but he had to try then, or never.
Chris Christie could afford to sit 2012 out. He’d clearly get another term as governor, he’d continue to make headlines and attract attention as an equal-opportunity ass-chewer, and for God’s sake, the party nearly begged him to run in 2011/12, right in the middle of a primary already well underway. Correctly, Christie determined his moment would come again.
Ted Cruz knows his moment is now. As does Rand Paul. If either of them ever runs, it’s this time. This is not to say they’ll win, or even do well, but as was the case with Huntsman, this is it. (Paul, however, is the iconoclastic type who might make several credible goes of it.)
(Rick Perry should have run in 2012, yes, but started earlier, as I’ve argued previously, because the fashionably late prove they don’t have the fire within them required to go all the way.)
In four years, Elizabeth Warren might still be a liberal hero, and she might still be firing up the base. But that’s a big maybe. Four years is plenty of time for other figures within the party to emerge and suck up more populist oxygen. And eight years? Forget it. She’ll be in her 70s, and decade-old news. (And if it is eight years, that implies two terms of a Democratic president, which might also mean less agitation for a populist candidate.)
This is it, 2016, Hillary or no Hillary. If Warren wants to be president, she is probably smart enough to know that this is her shot. It may not be a good shot, and I am skeptical that it is, but it’s likely the only one she’ll get.
I think Elizabeth Warren might well run for president. I have to assume her chances of success, now or ever, are rather slim. Not because she wouldn’t be an excellent president, as I’d be hard pressed to come up with anyone I’d prefer myself, but because I sense she’s not a sociopath. I don’t think she’s both brilliant and nuts, which I believe is nearly required to capture a party nomination, let alone become president. As Scheiber says, Warren may be touched with a mania, but it’s not a mania for power or self-aggrandizement, but for a policy agenda. I don’t think that’s enough. I think to win this thing, you need to want it for yourself, and want it so bad it hurts.
I just don’t think she hurts that way.
I recently wrote about how I had been compelled to introduce the concept of death to my son when he was a little over 2 years old, and how he was momentarily devastated by the idea that, in this instance, a little moth he had crushed was now “broken” and not coming back to life.
Well, he’s about to turn 4 now. He and I have birthdays only a week or so apart, and here’s an exchange the two of us had two nights ago before bedtime:
Me: On your next birthday, you’re gonna be 4! You know how old I’m gonna be on my next birthday?
Boy: No.
Me: 36! That’s pretty old, huh?
Boy (grinning): That means you’re gonna die.
So I guess he’s come to terms with death. That was fast.
The New York Times rounded up some opinions from authors about the effect of modern technology on one’s ability to write contemporaneously-set fiction, and as you might imagine the perspectives vary widely. These two, however, seemed to represent the poles.
On one hand, fiction is based upon conflict, characters having to overcome something. Marisha Pessl rightly notes that modern technology makes it harder to truly alienate a character:
The trouble with technology is that it eradicates a character’s ability to be lost, and it’s the state of being in the dark and the journey toward understanding that has given rise to the greatest stories ever written.
No argument there. This is of course not to say that one could not contrive to have a character’s gadgets and Internet access confiscated in some way, but it would be just that: an additional contrivance on top of what is already, well, contrived. Indeed, I think this is part of what makes historical fiction compelling: the lack of technological options with which a character can save himself or herself.
Two of my favorite books by Neal Stephenson come immediately to mind. In The Baroque Cycle (which for the purpose of this post I’m considering one book…one 3,000-page book), the most brilliant minds of the 17th century, including Newton himself, are constantly thwarted and put in danger, with only the “cutting-edge” tech of the 1600s to aid them. Watching a large set of extremely smart characters network and bridge divides over continents, cultures, and decades is utterly compelling, in large part because of what they don’t have available to them, and must make up for with their wits.
Meanwhile, in Stephenson’s Anathem, my favorite novel, even though it takes place in a fictional “parallel” universe with futuristic personal technology, much of the action takes place in a kind of monastery-university, where men and women grow their own food, grow paper on trees, and solve complex math problems through choir harmonies. The lack of technology, as well as its sudden incursion into the characters’ lives, creates the drama and conflict.
But don’t worry, iPhones (or, in Anathem, “jeejahs”) don’t spell the end of fiction. On the other pole we have Elliot Holt:
Good fiction depends on longing and subtext — the tension between what people say and what they want. Characters used to wait to receive letters; now they wait for Facebook messages or Twitter mentions. Characters used to wonder about lost loves; now they Google those ex-lovers. But they are still waiting and wondering. They are still aching and yearning, trying to overcome obstacles. Even in this hyper-connected digital age, there is desire and subtext, conflict and loss. So there will always be good stories.
Exactly. This calls to mind a time in my theatre life when I was playing Trinculo in The Tempest and had been awkwardly directed to meticulously remove bits of my fool’s outfit (floppy hat, shoes, etc.) before hiding under/on/around what I did not realize was Caliban. It was at first incredibly awkward, because it took so long and I didn’t have enough text to fill the time. But I was reassured, correctly, by a castmate: it almost doesn’t matter what an actor does on stage; as long as he or she is fully engaged in it, the audience will find it interesting. So I fully engaged in my piecemeal disrobing, developing bits and gags out of it and coming to enjoy the process.
That’s not “drama” in the sense of conflict, but it speaks to the larger point that Holt is making: it doesn’t matter so much what gadgets or crutches or assistive objects a character has at his or her disposal. Because that character is (presumably) human, and will inevitably find conflict–he or she will always want something. The getting of that something might involve the latest social media fad, or it might involve navigating medieval court intrigue. It’s the investment in getting that thing that makes the drama. Fiction will be fine.
And I say this as someone with no successful track record in writing fiction. But I’ve played it on stage!
So this odd thing appeared in my Facebook feed yesterday, originating here, and my first response was rather shockingly visceral, something akin to, “Oh my christ you fucking hipsters I hate that you made that snooty, too-clever, showy-offy, hey-look-I’m-a-maker thing exist, let’s chop it up and burn it before it infects the culture, because now you’ve gone too far!!!” Something like that. I know, I know, but rest assured I’ve had extensive therapy.
But then I considered whether I’d feel differently if, rather than stashing picnicky supplies, it held one’s Dungeons & Dragons manuals, or laptop/tablet gear. And then I calmed down a little. Context is everything, I guess.
(But that little fold-down tray still pisses me off, and no, I don’t know why.)

I love stuff like this, when a stuffy-seeming artistic institution embraces a piece of pop culture with genuine enthusiasm.
However, I feel like I do have to note that there is something a tad menacing about a wall of Russian law enforcement officers boasting how they will be up all night in order to “get lucky.” Yes, boys, I’m sure you will be.
See also: Superheroes staying up all night to get Loki.
Hat tip to Cherry Teresa.

The aliens of Star Trek get a bit of grief for looking suspiciously like Homo sapiens. I can tell he’s a different species because he has very slight ridges on his nose! She’s clearly an extraterrestrial because she’s got dots on her. And of course he’s an alien! His ears point up, and who would wear their hair like that???
So fine, it’s a fair cop. But let’s be fair: TV budgets are not limitless (particularly for shows aimed at the relatively small nerd demographic), and I suspect audiences would have some trouble relating to, say, an amorphous blob or an intelligent jellyfish-type thing. I have always given Star Trek a pass because I know that a big reason the aliens look the way they do is that they can’t be so alienating to viewers that they put too much of a burden on storytelling. Villainous or intentionally bizarre creatures like the Crystalline Entity are of course the exception, in that they are alienating by design.
And Next Generation-era Klingons, Cardassians, Ferengi, and others are really well designed, even if they are a little too humanoid for some.
Fortunately for Trek apologists like myself, there may be some sound justification for the franchise’s aliens looking a whole lot like human beings. George Dvorsky at io9 explores the idea that in order to achieve anything like a technology-wielding civilization, even an extraterrestrial species might do well — and indeed, may even need — to be very much like us.
First of all, they’d likely need to dwell on a planet’s surface; not swimming in the water, and not wafting about in the atmosphere (thus ruling out the whole intelligent jellyfish thing):
[It’s] very unlikely, says [Fermilab physicist Don] Lincoln, that technically advanced civilizations like ours could have developed on a planet without land masses, like a so-called water world. He believes it’s unlikely that intelligent dolphins will ever develop the technology for spaceflight. “There could be alien cavemen underwater,” he says. “But truly, you can’t smelt metal.”
I’d say this is a) a point in favor of Trek-type aliens and b) a big let-down for believers in mer-people. All those metal tridents and whatnot? No way. Sorry, King Triton. You don’t get to exist.
But here’s the kicker, and it has to do with something called convergent evolution:
If [the alien species is] terrestrial, it would likely have to face the same sort of evolutionary pressures that our ancestors did. That doesn’t mean, of course, that all intelligent civs are descended from primates. But they may all take similar paths on their evolutionary journey, a well-documented phenomenon evolutionary biologists refer to as convergent evolution — those cases in which organisms not closely related independently acquire some characteristic or characteristics in common; mutation in evolution may be random, but selection is not.
Examples include physical traits that have evolved independently (e.g. the eye), ecological niches (e.g. pack predators), and even scientific and technological innovations (e.g. language, writing, mathematics, the domestication of plants and animals, and basic tools and weapons). Looking off-world, it’s not unreasonable to think about similar examples of convergent evolution; there may be certain ecological and sociological niches that are not Earth-specific or human-specific and are archetypal throughout the universe.
And only recently, of course, we learned from the Kepler spacecraft that there may be billions of Earth-like worlds in our own galaxy alone. And if they really are quite Earth-y, there’s every reason to believe that their creatures might evolve to use brainpower and technology to dominate their environment. For that, they’ll need things like grasping digits, limbs to carry them from place to place, light and sound-detecting organs, etcetera.
This is not to say they’d be bipedal with two eyes and ears (or speak English or be able to procreate with other alien species), differentiated from humans only by crazy skull protrusions, but it might mean that they would not seem quite as alien as we presume. They might even make for sympathetic characters in a space adventure story.