The post that follows was an unsuccessful submission for the London Library Student Prize, whose winners will be published in The Times later this year. The subject – whether or not gap years are a new form of “colonialism” – is a vast one that is difficult to treat adequately within a limit of 800 words. This is the best I could manage at the time in exactly that number, but I have since read much more, much of it damning, and so should you (start here, h/t Scott Hatch).
When travelling abroad, few things are as disheartening as arriving to find that the serene and inviting accommodation you were promised has been disembowelled and is rotting from the inside out. That is, unless you are volunteering on a gap year. In that case, after carefully selecting one of the deals on offer from your specialist provider, you’ll hope for the most destitute community possible, and you’ll be horrified to find that the people you hoped to serve were doing well enough without you – except when they needed a babysitter.
Experiences like these sound unlikely, but they have become increasingly common with the commodification of gap years, now that around one hundred organisations – many resembling travel agencies – are pitching to students. Gap years are no longer a spontaneous luxury for the elite; they are pre-packaged, widely affordable and sold for a profit. Yet while some students may end up in communities that don’t need help, the communities that do need it may be better off left alone: volunteer work frequently has negligible long-term consequences, and it can even prolong poverty by giving the reassuring impression that people are being cared for when there has been no significant improvement to their living standards.
I don’t watch much TV, and when I do, I mostly watch documentaries and comedies of the kind typified by David Attenborough and Armando Iannucci. I occasionally flirt with fantasy, but I didn’t think I’d be in the mood for a while after giving myself to the big-screen Tolkien adaptations, whose turgid trilogy became a cinematic masterpiece, leaving the lighter and altogether more charming Hobbit – a personal childhood landmark – to suffer the gimmicky ruin of any franchise that goes on too long. But I was avoiding work one day last week when I spotted an On Demand link to HBO’s first two series of Game of Thrones, the TV adaptation of George R. R. Martin’s highly acclaimed and tremendously successful series of books, A Song of Ice and Fire.
My opinion of the series wouldn’t need expressing had you seen me spend the past two and a half days in sedentary silence, binge-watching right up to the start of series three, which, with perfect timing, aired its second episode tonight. I don’t have the credentials of a purist because I’ve come to the epic backwards, starting with the adaptation and not yet having read the books (books that I have just ordered, as I won’t wait years for HBO to show me what happens). But in an ecosystem sustained by millions of fans, what can I possibly say that is new, or that doesn’t alienate people who haven’t read or seen it? Yes, of course Tyrion Lannister is a worthy favourite for his character and cunning; of course the series is made interesting by its unique blend of medieval realism with a sparing touch of magic; and of course we should be glad that it’s kept fresh by murdering the characters we most sympathise with just as they seem to gain the advantage – tell us something we don’t know. Well, there is still something interesting to say more generally about the genre, literature, and culture, and how stories like Game of Thrones distort our ordinary sense of right and wrong.
Tat, for those unfamiliar with the word, is cheap and nasty junk. Not broken or useless or second-hand knock-offs, but things that are tasteless, garish, and flimsy. When thinking of tat, you might think of those heavy bauble earrings worn by the woman across the street, or of the brown and chipped crockery your tenth-best friend owns which never looks clean no matter how much it’s washed. But would you ever think of a diamond engagement ring? Not an obvious fake bought from a market stall, but one with a pristine, $2,000, 18-carat diamond? You wouldn’t, but maybe you should. Rohin Dhar has written an interesting article about the commercial history of diamonds as a wedding gift from a groom to his bride, and it’s aptly called ‘Diamonds are Bullshit’. His argument can be summarised in two choice paragraphs, although you really should read the whole thing:
The next time you look at a diamond, consider this. Nearly every American marriage begins with a diamond because a bunch of rich white men in the 1940s convinced everyone that its size determines your self worth. They created this convention – that unless a man purchases (an intrinsically useless) diamond, his life is a failure – while sitting in a room, racking their brains on how to sell diamonds that no one wanted.
So here is a modest proposal: Let’s agree that diamonds are bullshit and reject their role in the marriage process. Let’s admit that as a society we got tricked for about a century into coveting sparkling pieces of carbon, but it’s time to end the nonsense.
My opinion of Scientific American has fallen considerably over the past year. It has some fairly reliable and interesting bloggers, but much of its mainstream output conforms to the journalistic trope of misleading people with simplistic or even false headlines, often with the so-called ‘caveat in paragraph 19’. An example I came across this morning is its rather late uptake of the discussion of Keith Chen’s working paper about the supposed effects that a language’s grammar can have on how its speakers manage their money. People have been arguing about it for over a year now, but it was popularised in February by a summary of the data given by Chen in a TED talk. There, he explains his hypothesis that the formulation of the future tense in your first language is strongly correlated with economic behaviour. From the outset, let me say that I find it unconvincing, and so do many other linguists – for a critical response, I’d start with Geoffrey K. Pullum, a prominent linguist at the University of Edinburgh. Chen’s argument has been thoroughly assessed by Pullum and others, so I’m not going to do that here – I trust you’ll inform yourself. Instead, let’s go back to Scientific American.
Thinking of space again, one of the reasons I cherish modern astrophysics is that, like evolution, it humbles our egos by connecting us with the surrounding world and teaching us that we are not special; it also gives us a magnificent sense of the scale of the universe we live in. It’s not something we can comprehend easily, but even trying to contemplate it is wonderful. For example, if we were to take a moderately sized atom, such as one of magnesium or zinc, and blow it up to the size of a small apple, then the corresponding enlargement of the apple would make it the size of the earth. We also know that the earth relates in size to the observable universe in roughly the way that a virus relates to our solar system. And of course, we’ve all heard that there are billions and billions of stars out there, many of them hundreds of times larger than our sun, all ingredients in massive galaxies which are unbelievably far away.
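If you’d like to check the analogies rather than take them on trust, rough figures will do. The sizes below are my own order-of-magnitude assumptions (none come from a reference), but they are close enough to show that the two comparisons really do line up:

```python
import math

# Rough characteristic sizes in metres (order-of-magnitude assumptions).
atom = 3e-10          # diameter of a mid-sized atom such as zinc
apple = 7e-2          # a small apple
earth = 1.3e7         # Earth's diameter
virus = 1e-7          # a typical virus
solar_system = 9e12   # roughly the diameter of Neptune's orbit
universe = 8.8e26     # diameter of the observable universe

# Analogy 1: an atom is to an apple as an apple is to the earth.
print(f"apple/atom  ~ 10^{math.log10(apple / atom):.1f}")    # ~10^8.4
print(f"earth/apple ~ 10^{math.log10(earth / apple):.1f}")   # ~10^8.3

# Analogy 2: the earth is to the universe as a virus is to the solar system.
print(f"universe/earth     ~ 10^{math.log10(universe / earth):.1f}")        # ~10^19.8
print(f"solar_system/virus ~ 10^{math.log10(solar_system / virus):.1f}")    # ~10^20.0
```

Both pairs of ratios agree to within a fraction of an order of magnitude, which is as good as analogies of this kind ever get.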
But what do we make of ourselves in the middle of all this? The ‘middle’ is actually a peculiar word to use. Most often, we recognise that there are objects which are stupefyingly tiny, and others which are imposingly gargantuan, and we pull these together into a comfortable frame of reference which positions us in the middle – we are the Goldilocks of the universe; we operate at a normal size, snug between the extremes. This is intuitive, but wrong. One mistake here is to separate the world into groups of objects according to the way that they behave, creating false distinctions which are conveniently human-centric when we might just as well imagine everything existing on a continuous spectrum. Another flaw is to think that because we can describe lots of objects diminishing in size compared to humans, as well as lots of objects which are bigger than humans, then we must therefore be somewhere in the middle. But what if we had a list like this, with just five things on it arranged by size: a pill, a paperclip, a spoon, a shoe, and the African continent? You would be right to put the spoon in the middle, but you would be wrong to say that the spoon is middle-sized relative to everything else, or even that it inhabits a middle-world – instead, the African continent is huge and everything else is barely noticeable. Similarly, the proportions of the objects in the universe mean that the difference between a human and something microscopic is not nearly as big as the difference between a human and something giant.
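The spoon-and-Africa point can be put in numbers too. Counting orders of magnitude (again with rough sizes I’m assuming, not measured values), a human sits far closer to the microscopic end of the scale than to the cosmic one:

```python
import math

# Rough characteristic sizes in metres (order-of-magnitude assumptions).
sizes = {
    "virus": 1e-7,
    "human": 1.7,
    "observable universe": 8.8e26,
}

def orders_apart(a: str, b: str) -> float:
    """Orders of magnitude separating two characteristic sizes."""
    return abs(math.log10(sizes[a] / sizes[b]))

# A human is about 7 orders of magnitude from a virus,
# but about 27 from the observable universe.
print(f"human vs virus:    {orders_apart('human', 'virus'):.0f}")
print(f"human vs universe: {orders_apart('human', 'observable universe'):.0f}")
```

On a logarithmic scale, which is the only sensible scale for comparisons like this, we are nowhere near the middle: like the spoon on the list, we huddle with the barely noticeable things while the universe plays the part of Africa.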