Archive for the ‘Pseudo Academia’ Category

Much of the focus of the debate on global warming has been on the level of carbon dioxide emissions. There is very good reason for this, considering how much CO2 we are pumping into the atmosphere and its proven relationship with average global temperature. Yet carbon dioxide is by no means the only culprit, only the most abundant and significant contributor. Another, far more potent greenhouse gas is methane (CH4), which, depending on whom you listen to, is between twenty and thirty times more potent than CO2. Its presence in the atmosphere is growing rapidly. Indeed, according to recent research headed by Natalia Shakhova, we might be on the brink of a tipping point, the result of a massive environmental feedback in the Arctic.

Firstly, I’d like to say something about climate change prediction models. Most climate models have focussed on the shorter term, namely the 21st century, with few daring to venture into the 22nd or 23rd centuries and beyond to predict global atmospheric and climatic conditions. It goes without saying that such modelling is fraught with uncertainty, especially considering our relatively limited understanding of environmental feedback mechanisms. This inclines most climate models to be overly conservative in their predictions, especially in the case of sea-level rise. The IPCC’s sea-level calculations in its fourth assessment, based largely on melting ice and the thermal expansion of water, did not factor in dynamic processes such as calving ice-sheets and the observed acceleration of ice-loss and melting, effects which are less easy to predict and model. Its most recent figure of an 18-59cm rise in sea-level by 2100 falls short when we use a different measuring stick: average global temperature relative to sea-level.

With the planet currently tracking the highest of the greenhouse gas emission scenarios, and bearing in mind the strong relationship between atmospheric carbon dioxide levels and global temperature, a more likely outcome is a sea-level rise of between 75 and 190cm. It is worth noting that a sea-level rise of one metre would be devastating for low-lying coastal regions such as the Netherlands, Florida, Bangladesh and Shanghai, to name a few. It’s all very well to argue over the numbers, which, at this stage, seem so abstract, yet their manifestation in reality would amount to a vast global crisis, potentially making some of the most populous regions of the planet effectively uninhabitable. Humans will no doubt battle it very effectively initially, but if major cities are subjected to consistent flooding, it will be very difficult to sustain year-round economic activity; industrial output will decline, and coastal infrastructure will deteriorate. It is by no means impossible that major metropolises will eventually have to be abandoned.

There is another fundamental problem with our shorter-term climate modelling. Scientists may talk of a potential sea-level rise by the end of the 21st century, but where will that leave us at the end of the 22nd? On a warmer planet, ice-melt is not about to stop at some arbitrary date that humans see as a convenient cap for current predictions. If, as has so far been observed, the rate of melt increases as the temperature increases, and sea-level rise accelerates towards a worst-case outcome by the year 2100, then what of the subsequent century? Can we expect to add a further two metres, or three perhaps? And what of the very long term?

Of course, the idea is to achieve a zero-carbon global economy by the end of the 21st century. I don’t mean to be overly cynical, but the idea seems, at this stage, so utterly fanciful that it’s quite difficult to accept. Humans will, in all likelihood, continue to use fossil fuels for as long as they can dig them up. Fears of peak oil have been pushed significantly further into the future as the vast reserves trapped in tar sands have been factored in. As is discussed below, there are vast methane reserves in the Arctic. I expect this planet will be very much a going concern in the middle of the next century. When food prices go through the roof, we’ll clear the remaining 82% of the Amazon and plant it all with crops. I hate to say it, but that’s a hell of a lot of good agricultural land. When Chinese capital completes its quasi-colonial infrastructural investment in Africa, the vast forested lands of the Congo basin will be developed and exploited. When the aquifers fail in China and India, they’ll desalinate water from the neighbouring seas. Human industrial society is just beginning; it will, in all likelihood, get a great deal bigger. Inequalities will be vast, but both human and industrial resources will be fully shackled to the task with the eternal bribe of hope.

In the short term, the failure of Europe marks the beginning of a decline in European leadership; Europe will likely become, ultimately, an effete satellite of East Asia. Wealth and power will shift back to India and China, where they resided for the first sixteen centuries of the last two millennia. One thing is for certain: there are going to be a lot of serious hiccups along the way, for, throughout all this, we’ll be pumping out shitloads of carbon.

Presently the level of atmospheric CO2 is roughly 392 parts per million (ppm), up from roughly 315 ppm in 1960. Atmospheric levels are now estimated to be at their highest for twenty million years. There is little likelihood of another ice-age occurring any time soon, put it that way.

Carbon dioxide is now increasing in the atmosphere at roughly 2 ppm each year, a rate which has picked up considerably: forty years ago it was measured at 0.9 ppm per year. We know that during the Eocene epoch, a mere thirty-eight million years ago, atmospheric levels of carbon dioxide sat at around 2000 parts per million and the average global temperature was roughly ten degrees warmer than today. Indeed, the Earth currently sits in a temperature trough, likely the terminal end of an extended cool period that began at the end of the Eocene, after a lengthy hot period that spanned most of the Cretaceous, when dinosaurs still roamed the planet. The hot period peaked in the Eocene, by which time the dinosaurs were long gone, but there was no ice at the poles and both the Arctic region and the continent of Antarctica were forested with tropical plant species. As Antarctica drifted south it gradually lost the warming benefits of tropical ocean currents and began to cool. Its isolation at the bottom of the planet from tropical currents might be sufficient to see it retain its ice, even during a significant rise in average global temperatures, but this is by no means clear. One thing is certain: the poles are warming significantly faster than the tropics, and the eventual loss of Antarctic ice is, if not inevitable, certainly plausible in the long term. It is worth noting that during the Eocene, sea-level was, take note, 170 METRES higher than it is today. Have a look at this map:

This is clearly a worst-case scenario, yet even should it take two thousand years to melt all the planet’s ice, it’s difficult to imagine anything equally catastrophic having occurred in the previous two thousand years of human history. The fall of the Roman Empire, the Crusades, the Black Plague, genocide in South America, the Great Depression and World War II look very mild by comparison. Simply put, we really do not want to return atmospheric conditions to those of the Cretaceous or Eocene. Yet if humans continue to burn fossil fuels far into the future (and in the last year our rate of output was the highest ever recorded, despite depressed global economic conditions), it is by no means impossible that we could push atmospheric carbon dioxide levels towards those seen during those epochs in the very long term. Of course, such a situation is unlikely, especially considering the disruption to industrial and economic activity that would occur should we see even one fifth of the above 170 metre rise in sea-level.

I came here to talk about methane, and the above is clearly off-topic. Yet it serves to demonstrate the degree to which climate models often limit their predictions to currently observable and measurable factors, ignoring many feedbacks that are less easy to measure accurately and sticking to methods that are sufficiently robust to make solid predictions, such as those based on atmospheric carbon dioxide levels. They also tend to tell us about the next hundred years, and not the next thousand, which is equally relevant to the future of humanity and our ability to survive and thrive in a comfortable and stable environment. Such caution is good scientific practice, yet it leaves us with predictions that almost certainly understate the likely consequences of global warming.

One such unpredictable feedback is methane, and methane is a hell of a problem. Pound for pound, methane is roughly twenty-two times worse than carbon dioxide as a greenhouse gas. When we think of methane’s role in global warming, we usually consider the flatulence of cattle. Meat production accounts for roughly 80% of all agricultural emissions globally – a figure bound to get worse as the rapidly expanding middle class, across Asia in particular, demands more protein. Livestock currently contribute roughly 20% of methane output, with the rest coming from rice production, landfill sites, coal mining, and as a by-product of decomposition, particularly from methane-producing bacteria in places such as the Amazon and Congo basins. These are the measurable outputs included in most climate models; what the models do not include is the steady and rapid increase in methane release across the Arctic Circle.

The Arctic Circle is full of methane. Most of it is locked up in permafrost soils and seabed, though the gas has long been escaping through taliks, areas of unfrozen ground surrounded by permafrost. Global warming, however, has seen the most pronounced temperature increases at the poles, with a measured 2.5 degree average increase across the Arctic. As the region warms (current rates suggest a 10 degree temperature spike by the end of the century), as less ice forms, and as ocean temperatures in the region also rise, the until recently frozen seabed, more than 750,000 square miles across this vast region, has slowly but surely begun to thaw. An area of permafrost roughly one third this size, with equally intense concentrations of methane, also exists on land, mostly in far eastern Russia. This too has, in places, begun to thaw.

Lower-end estimates suggest that there are roughly 1400 gigatons of carbon locked up in the Arctic seabed. A release of merely 50 gigatons of methane would increase atmospheric methane levels twelve-fold. As Natalia Shakhova of the International Arctic Research Center states,

“The amount of methane currently coming out of the East Siberian Arctic Shelf is comparable to the amount coming out of the entire world’s oceans. Subsea permafrost is losing its ability to be an impermeable cap.”

Much of the methane released is being absorbed by the ocean. In the area studied, more than 80% of deep water and more than half of the surface water had methane concentrations eight times higher than normal seawater. In some areas concentrations were considerably higher, reaching up to 250 times greater than background levels in summer, and 1400 times higher in winter. In shallower water, the methane has little time to oxidise and hence more of it escapes into the atmosphere.

Offshore drilling has revealed that the seabed in the region is dangerously close to thawing. The temperature of the seafloor was measured at between -1 and -1.5 degrees Celsius within three to twelve miles of the coastline. Paul Overduin, a geophysicist at the Alfred Wegener Institute for Polar and Marine Research (AWI), speaking to Der Spiegel, stated:

“If the Arctic Sea ice continues to recede and the shelf becomes ice-free for extended periods, then the water in these flat areas will get much warmer.”

More research is needed into the process and its possible long-term consequences. A sustained and intense release of methane would indeed have a significant impact on global warming, but at this stage it is difficult to be certain whether or not such will occur.

Natalia Shakhova remains cautious as to whether warming in the region will result in increased gradual emissions, or sudden, large-scale and potentially catastrophic releases of methane.

“No one can say right now whether that will take years, decades or hundreds of years.”

The threat, however, is very real. Previous studies showed that just 2% of global methane came from Arctic latitudes, yet with the recent rise in output, by 2007 the Arctic’s contribution to global methane had risen to 7%. Methane tends to linger in the atmosphere for ten years before reacting with hydroxyl radicals and breaking down into carbon dioxide. Yet in the case of ongoing large releases, the available hydroxyl might be swamped, allowing the methane to hang around for up to fifteen years. Not only would the atmosphere’s ability to break down methane be significantly compromised, but the warming effect of the lingering methane would trigger further warming and thus further methane release. This is a classic case of a potentially dire environmental feedback, and it might be a very long time before we saw the end of such a cycle should it commence. It is especially concerning when we take into account that the prerequisites for triggering such an event might already be in place. Irrespective of how much humans cut their emissions (which, in real terms, they are simply not doing), the trajectory of global temperature increase based on current greenhouse gas emissions is already sufficient to thaw the Arctic seabed eventually.

Still, there are too many variables and too much uncertainty about the scale and pace of this phenomenon and, for this reason, scientists are right to be cautious. Yet, when we consider that something as potent as this is not being included in climate models on account of its unpredictability, it reminds us how conservative and cautious those models really are and how dangerous our flirtation with heating the planet really is.

It would almost be fitting for humans, as decadent, indulgent and superfluous as they are, to drown in flatulence. It would make for an amusingly sarcastic take on history, written at the consequence end of the great and unfunny fart joke that is the Anthropocene epoch. Perhaps a thousand years from now, when humans, with their cockroach-like ability to adapt and survive in almost any environment, outdone for durability only by the bacteria they seem determined to hand the planet back to, have reconstructed their societies in a more sustainable manner on higher ground, they will look back and wonder why they had their priorities so utterly wrong for so long.

P.S. Again, I apologise for the lack of references. If you made it this far, no doubt you can do your own research into the matter. The purpose of this article is to be thought-provoking, not comprehensively informative. Good luck out there!

– P. Rollmops


This is the transcript of a talk I gave on Alain de Botton’s “The Art of Travel” for my creative writing master’s, c. 2005.

Alain de Botton has made a name for himself writing popular philosophy. In a review of de Botton’s best-selling The Consolations of Philosophy in the Independent, Christina Hardyment wrote: “Singlehandedly, de Botton has taken philosophy back to its simplest and most important purpose: helping us to live our lives.” In The Consolations of Philosophy, de Botton considered the works of six great Western philosophers – Socrates, Epicurus, Seneca, Montaigne, Schopenhauer and Nietzsche – and drew from them ideas he found of particular value and relevance to modern life.

This was a theme he had already explored in depth in his earlier publications: Essays in Love, published in 1993 when the author was only twenty-three years old; The Romantic Movement (1994); Kiss and Tell (1995); and the best-selling How Proust Can Change Your Life (1997).

Philosophy, poetry and theory do not normally attract popular attention, owing to a misguided conception that they bear no relation to the practical and exist solely for the gratification of intellectuals commonly derided for their social disjunction. What de Botton succeeds in doing so masterfully is to reveal the simplicity and humanity of much philosophical writing, poetry and theory by putting it into the context of personal experiences familiar to all of us. At the same time, whilst locating material parallels for these ideas in the quotidian, he avoids making them appear mundane or banal. The highbrow becomes palatable once its intimidating veneer is removed, but without cheapening or ridiculing the evident seriousness with which many of these ideas were originally produced, except where they were, to some degree, designed to be amusingly provocative.

In The Art of Travel, de Botton examines themes in the psychology of travel: how we imagine places before we have visited them, how we interpret places upon arrival, and how we shape our recollections of places upon our return.

As is the case with How Proust Can Change Your Life, The Art of Travel is really a collection of essays. The text is divided into five parts under the thematic rubrics of Departure, Motives, Landscape, Art and Return. Each of these parts is further subdivided into chapters with subtitles such as On Anticipation and On Travelling Places. On the title page of each chapter, de Botton provides a sort of itinerary for what is to come: a handy list, accompanied by thumbnail illustrations, not only of the place or places he intends to discuss, but also of the guide or guides through whose eyes or with whose thoughts he will consider them. Thus, in the first chapter, On Anticipation, de Botton considers his own locale, Hammersmith in London, and his impending holiday destination, Barbados, through the eyes of Joris-Karl Huysmans. In his 1884 novel, À Rebours, Huysmans’ “effete and misanthropic hero”, the Duc des Esseintes, attempts a journey to London. He makes it only as far as an English tavern near the Gare St Lazare, where, after a meal of oxtail soup, smoked haddock, roast beef and potatoes, two pints of ale and a chunk of stilton, he is overcome by lassitude.

“He thought how wearing it would be actually to go to London, how he would have to run to the station, fight for a porter, board the train, endure an unfamiliar bed, stand in queues, feel cold and move his fragile frame around the sights that Baedeker had so tersely described – and thus soil his dreams. ‘What was the good of moving when a person could travel so wonderfully sitting in a chair? Wasn’t he already in London, whose smells, weather, citizens, food and even cutlery were all about him? What could he expect over there but fresh disappointments?’”

De Botton applies the lesson of contrast between anticipation and realisation to his own experience of a holiday to Barbados. The promised lures of a travel brochure with its palm trees and spotless beaches are soon darkened by a cloud of anxieties. Shortly after arrival, fretting about concerns he ought to have left behind, de Botton notes that:

“A momentous but until then overlooked fact was making its first appearance: that I had inadvertently brought myself with me to the island.”

He adds:

“My body and mind were to prove temperamental accomplices in the mission of appreciating my destination.”

In the chapters that follow, de Botton continues with this clever interspersing of accounts of real and imagined journeys with personal, anecdotal accounts of his own travel experiences. The tight and entertaining summaries of the thoughts and ideas of his guides make clear and immediate the experience of these writers, artists and thinkers. The anecdotal accounts make even clearer just how quotidian are the concerns of many of his guides, and are further enriched with photographs of his own personal spaces and acquaintances.

He applies the techniques of the anthropologist and ethnographer in examining social artefacts and extrapolating from them about the society they represent. In describing the exotic nature of an overhead sign in Schiphol airport in Amsterdam, he writes:

“A bold archaeologist of national character might have traced the influence of the lettering back to the de Stijl movement of the early twentieth century, the prominence of the English subtitles to the Dutch openness towards foreign influences and the foundation of the East India Company in 1602 and the overall simplicity of the sign to the Calvinist aesthetic that became a part of Holland’s identity during the war between the United Provinces and Spain in the sixteenth century.”

De Botton’s analysis of cultural artefacts extends to an exceptional empathy with the subject matter of artists. His chapter On Travelling Places, which includes a study of places of transit such as service stations, airports and roadside diners, is a masterful combination of art appreciation, focussing primarily on the twentieth-century American artist Edward Hopper, and extrapolation through personal, anecdotally driven musings.

In Chapter 4, On Curiosity, de Botton describes his first experience of Madrid, where he travelled to attend a conference. Having been advised of Madrid’s many attractions, he finds himself overcome with an intense lethargy upon arrival.

“And yet these elements (ie. the sights of Madrid as described in his guide book and assorted brochures) about which I had heard so much and which I knew I was privileged to see, merely provoked in me a combination of listlessness and self-disgust at the contrast between my own indolence and what I imagined to be the eagerness of more normal visitors.”

He contrasts his own lack of enthusiasm with that of his guide for the chapter, the German explorer and botanist Alexander von Humboldt, who was driven by a powerful urge to visit foreign lands. The chapter serves to establish the difference between the known and the unknown – von Humboldt’s explorations take him to uncharted places, whereas de Botton feels overwhelmed by the seemingly meaningless level of detail available to him through his guidebooks. The philosophical point of the chapter is to establish an understanding of what lies at the heart of curiosity and the degree to which it is personal and contextual.

De Botton writes:

“In the end it was the maid who was ultimately responsible for my voyage of exploration around Madrid. Three times she burst into my room with a broom and basket of cleaning fluids and, at the sight of a huddled shape in the sheets, exclaimed with theatrical alarm, ‘ola, perdone!’”

De Botton not only contrasts his attitude to Madrid with von Humboldt’s attitude to South America, but also highlights their respective realms of exploration. Again, the ideas he explores are firmly rooted in highly illustrative personal anecdotes, and the success of his anecdotes lies not merely in the ideas they are designed to illuminate, but in the level of personal detail he provides. He appears wholly honest with us, occasionally pushing the envelope of self-deprecation to the point of humiliation. He informs us of the flavour of a packet of crisps he ate in Madrid, tells us of a hair he found attached to the sideboard of his bed in a hotel in the Lake District, and describes the sound of the timer on a microwave on a train.

In many ways the core of de Botton’s philosophical approach in The Art of Travel can be found in his chapter on Ruskin. He focuses on Ruskin’s ideas about the importance of “seeing” and “appreciating”. Ruskin worked keenly to promote the teaching of drawing in nineteenth-century Britain, believing that drawing would teach people to have an eye for beauty and to appreciate detail, thus making them happier by enriching their everyday experience. For Ruskin, talent was an irrelevance – it is not ability as an artist that matters, merely the attempt to draw, for, Ruskin argues, drawing teaches us to see.

“A man is born an artist as a Hippopotamus is born a hippopotamus; and you can no more make yourself an artist than you can make yourself a giraffe. My efforts are directed not to making a carpenter an artist, but to making him happier as a carpenter.”

His aim was to teach people to spend time appreciating the detail and complexity, or, indeed, simplicity that made something beautiful, and to notice beauty in things that might not be obviously beautiful. Ruskin was fervently opposed to people who travelled and looked, but did not see. He wrote:

“No changing of place at a hundred miles an hour will make us one whit stronger, happier or wiser.”

In many ways de Botton’s intention mirrors that of Ruskin, though he is hardly about to suggest that we take a sketchbook with us on holiday. Rather, he impresses upon us how rewarding it is to appreciate things with the eye of a sketcher. He is equally keen to make us “see”.

“The only way to be happy is to realise how much depends on how you look at things.” Your own viewpoint will fix feelings far more solidly than any vista: “If you have to rank how happiness comes about,” he argues, “beauty is a worryingly weak ingredient, in terms of shifting mood.”

This key injunction to learn to “see” underlies every major idea presented in The Art of Travel.

Despite the apparent variance between the many places and historical figures upon whose thoughts de Botton draws in The Art of Travel, each selection of place or person is so apposite as to seem almost inevitable. His combination of personal, anecdotal detail with equally personal anecdotes from his subjects ensures a specificity and intimacy that engages. It is only in his chapter on Provence, aided by Vincent van Gogh, that one feels his point is rather laboured. It still holds our interest, but lacks the charm and economy of his writing elsewhere in the book.

De Botton’s works have been bestsellers, selling in the many hundreds of thousands across many different territories over the last eleven years. He has written and presented two TV series, based on The Consolations of Philosophy and Status Anxiety. His work has been characterised as ‘popularisation’, yet his books are in fact attempts to develop original ideas (about, for example, friendship, art, envy, desire and inadequacy) with the help of the thoughts of great past thinkers. There is much that is original and, indeed, amusing in his application of the ideas of the people upon whose thoughts he has drawn. As stated above, his “popularisation” does not come at the expense of intellectual integrity, and he thus avoids taking the lowest common denominator as his benchmark. De Botton has been described as a “mass-market metaphysician”, a term which could be misconstrued as a pejorative, but is not intended as such.

For an aspiring writer The Art of Travel is almost as frustratingly neat as it is delightful to read. The end result is a book of theory and philosophy that reads with the ease and accessibility of a travel guide. It comes effectively to constitute a companionable treatise on Romantic aesthetics.

It has been said of de Botton that his musings are akin to an accessible W. G. Sebald, equalling his gravitas, though perhaps falling short of the depth of Sebald’s personal reflections. De Botton’s strength lies not only in the quality of his writing, which, for all its complexity, never becomes impenetrable or flabby, but in the powerful ideas to which we can all easily relate. His scope covers all aspects of travel, from the quotidian journey to the bus-stop, to international flights and expeditions to unknown regions. Essentially, de Botton’s purpose in writing The Art of Travel is to further promote the importance of applied philosophy as a way of enriching life.


I began writing this article in 2000, whilst still researching my PhD at Cambridge. It was largely finished, but with significant holes which I have finally decided to fill in. I originally intended to research it more intensively and submit it for publication to an academic journal, but ultimately the style seemed more journalistic and its prohibitive length ruled out any hope of publication in a newspaper or magazine. So, after all these years, here it is!


The recent release of Ridley Scott’s film Gladiator has once again sparked interest in a genre that seemed doomed never to be revived. Prohibitive costs and questionable appeal were the enduring memories left by the hugely expensive and unsuccessful Cleopatra and the ponderous The Fall of the Roman Empire. After 1964, no one was either rich enough or stupid enough to invest in a project of this scale.


Gladiator, the first Roman epic for almost forty years, whilst receiving mixed reviews from critics, has proven very popular with cinema-goers the world over. The story of Maximus’ fall from the slippery heights of power as a conquering Roman general, through his sale into slavery, to his evolution into a great gladiator, certainly makes for great matinee entertainment. The exotic locations, vast battles, splendid sets and epic scenes are true to the form of the “sword and sandal” epic, and with the assistance of modern technology and greater attention to close detail, Gladiator sets a new benchmark for a raw and “realistic” evocation of the Roman world. Yet what is so frustrating about Gladiator is its lack of contextual historical accuracy.

The Fall of the Roman Empire

The genre to which Gladiator belongs has always been a flawed one. Roman epics have attracted criticism over both their historical accuracy and their dramatic qualities. They aren’t so much historical films as vehicles for other, often anachronistic moral or ideological themes; Italian nationalism and fascism, for example. Otherwise they have tended towards ponderous, opulent romance.

Gladiator is an interesting product in the context of film history, for it picks up almost directly where the Roman epic left off. Gone are the moralising voice-overs which introduce the historical context; gone is the typical demonisation of the Roman Empire; gone is the anachronistic emphasis on modern Christian concepts of ethics and morality. In their place we have a secularised film which does not seem to carry any message whatsoever. This absence of any clear moral purpose behind Gladiator is, in part, what makes it a better Roman epic than many of its predecessors.

Historical films can also have a very powerful effect on an audience, imaginatively and emotionally, often precisely on account of national identity. This is especially the case when the film depicts the actions of a national group, and particularly in the context of an international conflict. The film Braveheart, for example, generated very heated debate about its depiction not just of certain historical personalities, but also of England’s relationship to Scotland. It was not at all well received by the English.


It seems extraordinary that a cinematic interpretation of events which took place almost seven centuries ago could cause such rancour, yet it did. Some film-makers might therefore be wary of alienating potential audiences, which raises the question of whether historical accuracy in the cinema depends upon the degree to which there is a risk of upsetting members of any social group who might identify with the characters and events of the film. Inevitably, where national identities are concerned, someone is bound to be upset, and the director or screenwriter is likely to be forced to justify their portrayal.

The Roman epic, however, occupies a special place in the broad spectrum of historical films. This is because the period it depicts is sufficiently distant in time to avoid arousing the ire of any political or ethnic group by an historically unfair or inaccurate portrayal; thus neutralising any possible social antagonism such as that generated by films such as Braveheart. This might go some way towards explaining the flights of fantasy into which Roman epics are capable of delving. The recent and appalling television production of Cleopatra was a perfect example of the quite extraordinary degree to which history can be manipulated.

Gladiator is another production in which there is very little historical truth. It need only be pointed out that Maximus did not exist, that Commodus had already been co-opted as co-emperor in 177, three years before the death of Marcus Aurelius in 180, and that he ruled until 192, when he was strangled by a professional wrestler as he lay in a drunken sleep, to illustrate the quite ridiculous historical inaccuracy of the film. Can Gladiator therefore rightly be called an historical film?

Gladiator, mounted

On some levels, namely those of costuming and interior design, the makers of Gladiator have made an impressive effort to achieve historical accuracy. It is perhaps counter-productive to quibble about the exact appearance of the Roman urban landscape at the time – which facades loomed, which statues stood where, which aqueducts had been completed – or about the decoration of the interior of the senatorial curia. That neo-classical facades were shot, cut and pasted to create the backdrop of the city of Rome should not trouble us too greatly, for the effect at least succeeds in conveying an impression of the scale, and, it might be said, the “modernity” of Roman development at the height of the Empire’s power. Perhaps more importantly, the attention to detail in military hardware, costumes, furniture, personal effects, and so on, is a considerable advance on previous cinematic depictions of the Roman Empire.

Another positive of the film is that it attempts to create a less anachronistic intellectual, social and cultural context. Often, due to the need to acquaint the audience with the historical context, period films tend to be packed with informative dialogue and exposition, which at times stumbles uncomfortably from the lips of the protagonists. Gladiator is somewhat more successful in contextualising this background and making it incidental to the film.

Still, it is reasonable to wonder why so much effort has been put into minute detail when the broader context in which all that detail is conveyed is almost completely fictional.

Director Ridley Scott provides the best answer to this question. When asked what attracted him to the film, he described his first encounter with the producer Walter Parkes, in which Parkes simply threw down a rolled-up print of Jean Leon Gerome’s famous painting of a gladiator in the Colosseum. “That’s what got me,” said Scott, “It was a totally visceral reaction to the painting.”

Gladiator by Jean Leon Gerome

Gladiator is probably best described as a visceral experience. Rather than being an historical film, Gladiator is a “human” film in a fictive historical context, whose historicity is supported by a careful reconstruction of the appearance of the world being represented. If we were to try to define Gladiator further, then it would be as the story of an individual’s struggle against injustice, and of loyalty to a threatened ideal of enlightened despotism or republican government.

It is tempting, however, to be more cynical and say that considering the lack of regard for the historical narrative, it is essentially a vehicle for great special effects and innovative action sequences. After all, the project began with only the arena in mind. The script, which needed a great deal of work, ran to a mere thirty-five pages and underwent a number of transformations throughout the shoot. Perhaps as a consequence of the simplicity of its original conception, it is difficult to find any serious message in Gladiator. If one were to look for a historical message in it, all one really finds is that Marcus Aurelius was a good man, Commodus was a bad man, life was hard and tenuous, and that Roman Republican government, namely rule by the Senate, was a cherished ideal.


It could also be misconstrued that the principal message of the film is to reveal the horrors of gladiatorial combat, for Gladiator depicts gladiatorial contests with startling realism, although what we see is nothing compared to the vast and elaborate slaughter which often took place in the Colosseum and other arenas around the Empire. The horrors of slavery and the staging of fights to the death resonate strongly with our modern outrage at such “entertainments.” The assertion of the humanity of the slaves and gladiators is deeply moving to us who so greatly value freedom and human life. Yet this is not really the concern of Gladiator. Indeed, if one looks at the film’s web-site, it becomes quite clear that it is more concerned with glorifying the arena than anything else.

This is not necessarily a bad thing, as it is less of an anachronism. Indeed, one of the problems with the film Spartacus is that it makes too much of the slave revolt as a type of ideological movement against an oppressive and evil empire, and establishes Spartacus as a sort of proto-communist revolutionary. We cannot ignore that slavery was intrinsic to the ancient world: the Persians, the Egyptians, the Greeks and the Carthaginians all had slave-based economies, and it would be difficult to say that any of these civilisations was more inclusive, more tolerant, or provided a better system of social infrastructure than Rome. Though we are appalled by slavery, to vilify the Roman Empire for employing it is rather like vilifying a child for adopting the habits of its parents, and of society at large.


Yet whilst Spartacus might be too redolent of Marxist overtones, it is one of the few Roman epic films which attempts to remain true to the understood historical narrative of what it depicts, with the exception of its fabricated conclusion. (Spartacus’ body was never recovered from the battlefield.) It is an excellent, humane and deeply moving film, with a greater “historicity” than many of its predecessors.

When asked why he thought Roman epics had vanished for forty years, Ridley Scott said that: “They reached a saturation point and then they simply went away because every story seemed to have been exhausted.”

This response might go some way to explaining why Gladiator is essentially fiction. Yet, at the same time, it might be the very thing which allows the Roman epic to re-emerge as a genre. No one had ever heard of Maximus before, and the vast majority of the audience will never have heard of Commodus either. This has in no way hindered Gladiator’s success. Not many people outside of the United Kingdom, and probably only a limited number within it, would ever have heard of William Wallace before the release of Braveheart. Roman history is so rich that countless stories could be artfully extracted without much need to change the context. Rather than turning to fiction, the time is now ripe for screen-writers to plough deeply the very rich and extensive soil of Roman history for future epics. Apart from all the smaller, human stories of individuals caught up in the events of Roman history, there is vast scope for movies on a grander scale. The late Roman empire in particular begs attention. Why is there no epic about Constantine, or of Alaric’s sack of Rome in 410? What of Attila’s failed invasion of the ailing western empire in 451 and, in particular, the epic battle of the Catalaunian Plains?

The release of Gladiator is a very exciting and important event in film history. It has the potential to bring about the rebirth of a dead genre and to set a new direction for it. For one of the most promising aspects of Gladiator is that it avoids the polemics against Roman rule which were characteristic of so many of its predecessors. It empathises much more successfully with the period, offering a fairer cross-section of Roman society and ideas. In the opening battle scene, Maximus’ tribune Quintus says with derision: “People should know when they’re conquered.” To which Maximus replies, “Would you, Quintus? Would I?” In conversation with Marcus Aurelius, Maximus acknowledges that the world outside of Rome is dark and forbidding; “Rome is the light,” he says sincerely. The means by which the greater complexity of the Roman world is conveyed is more subtle than in many other epics of this genre, and less dominated by modern political, religious and ideological concerns.


The earliest Roman films were often rooted in a strong ideological agenda. The 1914 Italian film Cabiria, set during the Second Punic War (218–202 BC), was scripted by the ultra-nationalist Gabriele d’Annunzio and released shortly after the Italo-Turkish war, in which Italy conquered the Ottoman provinces of Tripolitania and Cyrenaica in North Africa. Similarly, the 1937 film Scipione l’africano, depicting the life of Scipio Africanus, Rome’s most successful general of the Second Punic War, followed in the wake of Mussolini’s Ethiopian conquest.

Scipione l’Africano

The 1964 Hollywood film The Fall of the Roman Empire reads like a positivist moral essay, striving to put across a more explicit historical argument. Starring Alec Guinness as Marcus Aurelius and Christopher Plummer as Commodus, it has many parallels with Gladiator in that it too focuses on the accession and reign of Commodus. It essentially argues that the reign of Commodus, and what took place immediately afterwards, namely the auction of the Empire to the highest bidder (it ignores the brief reign of Pertinax), marked the beginning of the decline which led to the Empire’s eventual “fall”, though this did not happen in the west for another two hundred and fifty years. This interpretation of the narrative of Roman history dates back to Gibbon, who first identified the reign of Commodus as a significant turning point after the more enlightened rule of Marcus Aurelius.

One of the central themes of The Fall of the Roman Empire, namely the social experiment of settling barbarians as farmers in Roman territory, was a massive oversimplification of an issue which, in fact, was dealt with at a painstakingly academic and philosophical level in the late Roman Empire, the consequences of which were central to the gradual devolution of Roman power in the west in the fifth century.

The Fall of the Roman Empire

It is inevitable that political and social complexities have to be glossed over in an historical film – no audience is going to sit through a film which depicts in arduous detail the mind-boggling intricacy of Roman bureaucracy – yet such complexity can be hinted at through thought-provoking ambiguity rather than spelled out explicitly. Ideally, the Roman context should be incidental to the film, especially where long-established clichés are otherwise the only resort. Typically the Roman Empire has been portrayed as a vicious, cruel organisation run by ruthless madmen. Gladiator at least went some way towards suggesting that Commodus was just one very cruel, weak and over-ambitious megalomaniac in a world of otherwise sane human beings with complex identities.

The 1951 MGM film Quo Vadis, however, opens with a startling and lengthy diatribe against the nature of Roman power, based entirely upon modern, Christian concepts of ethics and morality, which is, to put it mildly, anachronistic in the Empire of the 1st century AD. Such criticisms of Roman power as did exist in the 1st century rarely focussed on the immorality and inhumanity of gladiatorial contests or slavery, but rather upon an antique perception of freedom and self-determination, which, sadly, often translated as the freedom of another aristocracy or religious oligarchy to run its own exclusive autocratic regime.

Indeed, the degree to which the Roman state is vilified in the cinema is probably only paralleled by post-war portrayals of Nazi Germany. Certainly the Roman Empire was a physically coercive entity which encouraged practices we find abhorrent, but considering the context from which it emerged, it was the paragon of the ancient civilised states of the Mediterranean and Near Eastern world. The Roman Empire was an inclusive, not an exclusive system: it encouraged religious freedom (with the exception of certain troublesome dissidents who worshipped a dead carpenter), provided immense and sophisticated public services, sanitation, education and security, championed free trade, and, under the pax Romana, also championed peace.

The great eighteenth century historian Edward Gibbon once wrote:

“If a man were called to fix the period in the history of the world during which the condition of the human race was most happy and prosperous, he would, without hesitation, name that which elapsed from the death of Domitian to the accession of Commodus” (AD 96–180).

During Gibbon’s lifetime such an observation had much greater currency, especially when we consider that the British Empire had not yet abolished slavery by the time of his death. Clearly there is no excusing slavery in any context, but this is a modern sensibility. Even the much vaunted Athenian democracy was heavily dependent on slave-labour, and the Athenians did not offer to extend their citizenship to outsiders as the Romans did.

It is largely for this reason that Gladiator makes a departure from its predecessors. Rather than critiquing the Roman Empire as an entity, it highlights the folly and wickedness of certain individuals. It marks a turning point in the portrayal of Roman history and offers, without being especially cerebral or historically accurate, a less explicitly moralising theme and context. If its success results in the making of further such historical epics, then there might be something of a rebirth of the genre. Either way, and perhaps most importantly, enrolments in ancient history courses at both high school and university have risen dramatically in its wake. If the cinema can still inspire students to take an interest in the very distant history that underlies the culture, identity and institutions of modern western society, then this is surely a positive.


If you had told a teenager playing PONG back in 1972 that one day computer games would be the most profitable entertainment industry on the planet, they might just have believed you – but few others would. That computer games could become so completely entrenched in society, that we should find ourselves discussing the game-ification of life itself, might also have been difficult to fathom some forty years ago. The evidence is clear however. Not only do people love games, and computer games in particular, but they have embraced them on an utterly breathtaking scale.

Computer game and console revenues have shown steady growth globally since the 1970s, with periods of accelerated growth in the mid-to-late 80s, mid 90s and again from 2006. Initially a very expensive luxury item, computers and consoles have become easily affordable and ubiquitous thanks to the speed of technological development, the steady advance of miniaturisation and the vast up-scaling of production.

Much of the growth has been driven in recent years both by emerging markets and the further expansion of already established markets in Asia, Europe and the United States. Even in the developed world, where the inevitable maturation of the market saw a levelling of revenues in the first half of the previous decade, there were strong forecasts for growth and the industry was tipped not only to outstrip both music and movie revenues, but to double them by 2011. Sure enough, in 2006/07 revenues in the United States grew by a staggering 28.4%. Indeed in that year, on average, according to Entertainment & Software Association (ESA) CEO and president Michael D. Gallagher, “an astonishing 9 games were sold every second of every day of the year.”

In 2007, total global sales for consoles and games hit $41.9 billion. Compare this with roughly $30–40 billion for music sales, $27 billion for movies and around $35 billion for books in that same year. Also consider that in 1994 the entire gaming industry generated just $7 billion in revenues, and in 1982, a mere $1.5 billion.

In 2008, Grand Theft Auto IV became the most successful entertainment release in history: in the space of 24 hours, the game had grossed $310 million US in sales, compared to the book Harry Potter and the Deathly Hallows and the film Spiderman 3, which grossed $220 million and $117 million respectively in their first 24 hours. That year, video game revenues smashed all predictions, with the industry selling more than $54 billion worth of games and hardware. In 2009, Guitar Hero 3 became the first single computer game to generate more than $1 billion in sales. Total global computer game revenues, despite the economic downturn and an 8% correction in 2009, are projected to reach as high as $68 billion in 2012. Should the rapid expansion of the middle class in India and East Asia continue, as many predict it will, this will almost certainly ensure further significant growth in the industry.

Even if these projections are not met, the computer games industry is already far and away the fastest growing entertainment industry in history. In both the United States and the UK, video game revenues are already considerably larger than music or movie sales. In the UK, gaming revenues are now greater than music and DVD sales combined, and four times greater than cinema box office takings, with further expansion predicted. The above statistics take into account many different formats and platforms, including gaming consoles, mobile phone and PC games, game rentals and online gaming subscription fees, so the industry is arguably not a single market. Yet what unifies all of these is, inarguably, a single phenomenon: gaming.

These figures might perhaps give a skewed conception of the industry’s scale, and it must be remembered that much of the income derives from the sale of expensive consoles and hardware accessories. It is necessary to look at the scale of participation to get a clearer idea of the industry’s penetration. For online gaming alone, not including gambling, there are an estimated 500 million gamers; a number predicted to grow to 1.5 billion in the next decade. Zynga, the company responsible for social games such as Farmville and Mafia Wars, boasts a total of 266 million active monthly accounts, with Farmville alone having 62 million players at last count.

In the developed world, the average age for video game players is roughly 35, a number which is slowly increasing. This trend is less surprising when one considers the age and maturity of the industry; those who grew up playing arcade, console and PC games in the 80s have largely stayed in touch with newer formats and continue to do so. Globally the average age ranges from mid to late 20s. Just over 20% of gamers in the US are over the age of 50. This is a very cross-generational phenomenon.

The gender distribution of gamers is also now approaching parity. A 2009 study showed that 60% of gamers were male and 40% female, though the distribution varies considerably by format, with almost 80% of female gamers preferring the Nintendo Wii compared to only 41% of males. It is estimated that female gamers now constitute a majority of online social gamers. Gaming has also increasingly become a family activity, especially with the introduction of consoles such as the Wii and Microsoft’s new Kinect controller for the Xbox 360. In the developed world, over 70% of children aged between 8 and 18 have a video game console, not including other platforms such as personal computers and mobile phones.

In the US, where an estimated 67% of households play video games, depending on whose statistics you accept, the average amount of time spent by players is between 8 and 18 hours per week. A 2007 study in the US found that 97% of boys and 94% of girls aged 12 to 17 played video games regularly, with little variation according to ethnic or economic background.

Not only is the video games industry the fastest growing entertainment industry in history, it is also one of the fastest growing cultural phenomena in history – a phenomenon that has been subject to a great deal of stigma, condescension and negativity. Computer gaming has long been derided for the prevalence of violent themes, with all manner of claims being made about its social and personal impact. Yet, social research has repeatedly exploded myths about the influence of computer games on people’s lives, particularly with regard to violent behaviour.

In the United States, a country where almost the entire youth cohort has been exposed to computer gaming, often with violent themes, juvenile crime rates are now at an all-time low. Indeed, violent criminals have been shown to consume less popular media before offending, with causes of crime being more closely linked to parenting, mental illness and economic status. Psychological studies indicate that violent computer games do not turn otherwise non-violent people into violent criminals. Indeed, gaming has been shown to be a highly effective outlet for aggression. Considering that roughly 90% of males play video games, it is dubious at best to cite gaming as a cause of violence without examining broader crime trends, which indicate a reduction in criminal behaviour.

This is all perhaps less surprising when we take into account that studies of primate behaviour suggest that apes are capable of making clear distinctions between play fighting and actual fighting. Children who stage mock sword fights with sticks likewise know the limits of contact and engagement, and almost all such play ends at first blood. Again, both with apes and children, those who fail to make the distinction between play and combat tend to be those who have a psychological predisposition to violence, either through mental illness or traumatic socialisation. As with many such influences, violent movies being paramount, we must ask: do we legislate for the norm, or for those rare exceptions?

The games industry has been notorious for its stereotyping of women, and there has been much valid criticism on this front. Computer games were traditionally a pre-occupation of young men, and designers made often very unsophisticated appeals to their interest in sex and sexual imagery. Gender typing in games tended to reflect chauvinistic attitudes, with two-dimensional characters of exaggerated proportions presented as subservient objects of titillation. This trend has, however, shifted significantly in recent years with the introduction of far more well-developed, powerful and independent female characters. The Tomb Raider series marked an interesting turning point, wherein a strong, intelligent and capable female character not only allowed female gamers to feel empowered, but also provided the requisite titillation to keep male gamers interested. Increasingly, computer games have catered to women, and also to men who prefer more interesting and intellectually appealing female characters. Indeed, in the 2006 release Tomb Raider: Legend, Lara Croft’s breasts were reduced from a DD cup to a C cup. Bioware has long led the way on this front with its more deeply-drawn female characters in games such as Baldur’s Gate 2, Mass Effect 1 & 2, Neverwinter Nights 1 & 2, and Dragon Age: Origins. Many girls who play computer games now cite a sense of empowerment through their online avatars, an empowerment which extends into their everyday lives. Game designers have also recently begun to introduce sympathetic homosexual characters, such as Zevran in Bioware’s Dragon Age: Origins.

Gaming has also been derided as a mindless pre-occupation with little personal or social benefit, yet research increasingly indicates that games are extremely effective educational tools. Gamers have improved hand-eye co-ordination, are better at multi-tasking and are considerably better at processing information from their peripheral vision than non-gamers. In his book Everything Bad Is Good for You, Steven Johnson argues that computer games both demand and reward more than traditional games like Monopoly. Many games serve as a sort of ethical testing ground, with genuine choices and consequences. We can feel deeply guilty about the actions of our avatars, or our treatment of other characters, be they the avatars of other players or computer-controlled bots. The way gamers play often mirrors the way they interact with people in real life, and games where actions and choices have moral consequences offer a chance to learn about social interaction.

There are many different genres of games – shooters, simulators, adventure, action-adventure, role-playing, action role-playing and strategy, to name a few – and a wide variety of goals within them. Some games merely hone our skills at a particular, often meaningless reflex task. Others engage us with stories, sometimes linear, sometimes open-ended. Some games are merely about acquisition, a sort of “cumulomania”; others have more noble goals, such as saving lives, helping the disadvantaged or slaying monsters. Some games have a steep learning curve, others a gentle one. Some games are especially literary, with seemingly endless detail about the game world – its history, culture, politics and landscape – while others simply require shooting as many things as possible. What is almost always present in every game, however, is some form of competition and some form of goal, quest, outcome or reward. This can be played out as PvP (Player vs. Player), PvE (Player vs. Environment), or a player competing against their own standards. It can be a race against time, or a strategic, tactical battle against sophisticated AI. The pace will vary significantly, as will the pressure, but most successful games present challenges that are not beyond the player, yet are, ultimately, difficult to achieve.

Jane McGonigal, a games designer, researcher and author of the book Reality Is Broken: Why Games Make Us Better and How They Can Change the World, has made many strong arguments in favour of computer gaming. She cites four positive factors associated with gaming: Urgent Optimism, Social Fabric, Blissful Productivity and Epic Meaning. That is, gaming involves the desire to tackle difficult obstacles, the willingness to create communities, the joy of working hard to achieve goals, and the sense of a great story or meta-narrative.

She cites the example of World of Warcraft (hereafter WoW), in which over twelve million players have, since its launch in 2004, spent a grand total of nearly six million years playing the game. In WoW, complete strangers from across the globe team up in groups of up to five (with larger groups for raids) and co-operate in solving quests and achieving particular goals and outcomes. Knowing their role, according to the class or profession of their avatar, players join together and help each other in a common cause, communicating through speech or simply by typing in often very basic English, the lingua franca of online gaming. The enjoyment of the exercise and the need to co-operate makes it not only a fun experience – although of course things can go horribly wrong – but also a very social, diplomatic one.

This type of co-operation is significant when we consider just how much people in the developed world and beyond are gaming, especially in MMORPGs (hereafter MMOs). In a 2010 TED talk, Jane McGonigal stated: “The average young person today in a country with a strong gamer culture will have spent 10,000 hours playing online games by the age of 21… the same time spent in school from fifth grade to high school graduation with perfect attendance… what we’re looking at is an entire generation of young people who are virtuoso gamers.”

She sees this as a parallel education creating a virtually unprecedented human resource, and asks the question “what exactly are gamers getting so good at?”

Principally, it seems, energetic and willing co-operation in solving problems in teams with complete strangers from different cultural and geographical backgrounds. If such skills can be harnessed to solving legitimate social, economic and logistical problems, she argues, then this would be of immense benefit to global society as a whole. With this idea in mind, McGonigal has been a driving force behind the development of games designed to mirror global problems and find solutions, such as World Without Oil, a sort of participatory economic and environmental simulation set in a time of peak oil. This type of grand narrative is a commonly recurring theme in computer games and is potentially compelling for all gamers, but particularly so for those engaged more by stories than mere action or accumulation.

In his book, The Study of Games, Brian Sutton-Smith writes, “Each person defines games in his own way — the anthropologists and folklorists in terms of historical origins; the military men, businessmen, and educators in terms of usages; the social scientists in terms of psychological and social functions. There is overwhelming evidence in all this that the meaning of games is, in part, a function of the ideas of those who think about them…”

Games can make us feel proud of ourselves, they can make us feel more capable and more determined. They can also leave us with as intense a recollection of story and experience as any film or book. Already games have become one of the dominant modes for conveying narratives to people of all ages. Their storylines are often old myths and narratives rehashed, but by making the player the protagonist, they achieve a unique level of emotional investment in the story. Just as some books are un-put-downable, or as a movie keeps us glued to the screen, games can be equally mesmerising, often over considerably longer time spans.

There are of course many problems that derive from gaming, largely on account of games being so compelling. This is particularly the case with MMOs such as WoW, though it manifests itself in many ways – be it the obsessive playing of Patience, Bejewelled Blitz or Farmville, or the infamous “just one more turn” syndrome associated with Sid Meier’s Civilization series.

Whilst not officially classified as a psychological disorder, video game addiction displays many of the symptoms of compulsive and impulse control disorders. Players of MMOs are considerably more likely to suffer from addiction or overuse, playing on average two hours a day more than regular gamers. A 2006 poll suggested that roughly 12% of online gamers displayed addictive behaviour. A 2009 survey in Toronto of 9000 students from grades 7 to 12 showed that roughly 10% spent 7 or more hours a day in front of a screen. Other studies have indicated that problematic gaming behaviour affects roughly 4% of regular computer gamers, often corresponding with other underlying mental health issues.

There have been notable cases of addictive gaming leading to death, either indirectly through neglect or directly through derived health problems. In 2009, in a tragically ironic incident, a three-month-old baby died of malnutrition whilst her Korean parents spent hours in an internet café raising a virtual baby in the online game Prius. In 2005, a Korean man suffered a cardiac arrest and died after spending 50 hours playing StarCraft in an internet café.

The reasons for the addictive nature of MMOs are many and complex. The term “grinding” refers to playing continuously, often without pause, and often repeating the same process to achieve a result as quickly as possible or to harvest loot or other items. There are so many possible goals in MMOs, such as levelling, crafting or making money, that players can easily become obsessive about achieving these outcomes to the exclusion of other concerns. Owing to the need to co-operate and participate in parties of players to succeed in quests, many players also see playing as a social obligation to their fellow gamers, particularly those players who are closely involved with a guild. There is also pressure to continue playing in order to stay in touch with other players, some of whom advance very rapidly on account of devoting so much of their spare time to playing. Once a significant level-gap has opened between two characters, it is no longer worthwhile teaming up on quests.

There are also many players who enjoy acting in a deliberately anti-social manner within MMO gameworlds. Different situations can foster different attitudes. There is often a stark contrast between PvP servers and PvE servers, with the former attracting people who can only, based on their in-game behaviour, be classified as psychopaths. The much-vaunted but ill-received and poorly populated Age of Conan MMO became infamous for the behaviour on its PvP servers. It was common for players to camp near area transitions and, in effect, to assassinate travelling players who were unable to defend themselves while they loaded into the new zone. Thankfully, for every “troll” there are usually three or four community-minded gamers, and PvP servers can also bring out the best in people, with powerful characters seeking to defend the weak from the ravages of more bloodthirsty players.

The sheer proliferation of MMOs has created hundreds if not thousands of often tight-knit global communities. Recent statistics indicated up to 12 million registered accounts at World of Warcraft, and roughly 3.5 million for Aion, a game mostly popular in Korea and east Asia. The server populations can vary dramatically, with the science fiction space-trading game EVE Online holding the record of 54,446 players simultaneously active on a single server in 2010.

Players on the same server will often band together to assist each other, and many develop a very community-minded spirit. Educating new players can also be a real pleasure; not only in the technical aspects of the game and various styles of gameplay, but also in the social mores and ethics of the gaming world or particular server. Experienced players will often make the effort to advise players about what will be required in particular scenarios, for it is foolish to assume, when going into an instance, that what one has learned through hard experience is common knowledge. Such assumptions will often result in disappointment and embarrassment. The more investment experienced players make in new players, the more one might expect such consideration in return. It also helps to ensure a better crop of players, especially in PUGs (Pick-up Groups), which can otherwise be a very hit-and-miss experience.

MMOs also benefit those who are shy or who have socio-phobic tendencies. Whether they are uncomfortable with their appearance or live with some form of disability, the internet can provide a suitably anonymous means through which to interact with others successfully. Avatars are more often than not an approximation of how we wish to look, realistically or otherwise, and few people will look as good, or, for that matter, as ugly and formidable in real life as they might in the context of a game.

Video games are also a wonderful vehicle for a sort of “identity tourism”. In video games, players often assume another race or gender. Terry Flew, associate professor of Media and Communications in the Creative Industries at QUT, in Brisbane, suggests that much of the appeal of MMOs lies in the ability to assume the role of someone or something that is not possible in real life, and then to step into a virtual social context. In many cases, the online identity may become more acceptable to the player than their real-life identity. This can even lead to tensions between gamers and the game-creators, the former considering their avatars to be theirs, with the latter considering all content to be the property of the manufacturer. Male and female gamers regularly gender-bend and most experience excitement rather than discomfort at doing so. Negative responses to men playing female characters are generally frowned upon and considered out of step with the game-world’s mores. In games which originate from an Asian context, strongly influenced by Anime styles, male characters often have a feminised appearance, with large, round eyes, soft, pale skin and delicate features.

Another fascinating internal dynamic of MMOs is their economies. Much has been made of players selling virtual goods and services for real money: levelling characters, the sale of rare items, or indeed, the sale of established characters and whole accounts to other players. Such actions are, in almost all cases, a breach of contract and are punished heavily by account suspensions or character deletion. Yet far more fascinating are the workings of the virtual economies in-game, most significantly through the auction houses. Here players can choose to set a starting bid and a buy-out price for literally anything they find or craft in game. It takes some time for economies to get started, but once an MMO has been up and running for some time, market forces take over. Rare items, weapons, armour, clothing, crafting materials, reagents, components, minerals, decorative attire – anything and everything has a potential buyer and prices will fluctuate accordingly. Learning what sells well adds a whole extra dimension to obtaining loot, herbs, minerals, components and what have you. In most MMOs there is a surge of players logged on over weekends – a convenient time for crafting in particular – and the more savvy players will list items required for these processes in anticipation of a buying spree.

The in-game economy is a real economy and mastering it is no mere adjunct to gameplay – it is a practical necessity. To put it bluntly, players who don’t know how to generate income are an underclass. Their inferior weapons and armour, and lack of accessories such as mana potions, salves and healing wands, can often prove costly when a party is stretched to the limit. In Dungeons & Dragons Online (DDO), a cleric who cannot heal is a grave liability. Similarly, tanks and DPS (damage per second) toons in DDO fronting up against, for example, a clay golem, without appropriate weaponry to beat its resistances and damage reduction, will be of next to no use. Learning to make money in game ensures a better playing experience for all involved, and discerning players will blacklist those who are not well enough equipped to perform their role. The learning-curve of an in-game economy is often a significant educational experience in financial management.

Other virtual phenomena have startling parallels in reality. Take for instance the proliferation of psychologists in Second Life. Here one can talk to an accredited analyst whilst sitting on a virtual couch. And outside, in the real world, psychologists are now using virtual simulations to help with phobias by putting people in the virtual presence of situations they fear, whilst providing structured reassurance. Consider also the “Corrupted Blood” plague incident in WoW, possibly the most fascinating glitch in the history of gaming. The Corrupted Blood plague, a debilitating and potentially fatal debuff which was supposed only to affect players in a raid instance, made its way into the game-world through player pets and minions. It was then transmitted from pets and minions to players, who transmitted it to other players and so on. Within hours of the first outbreak, major cities in game were heavily affected because of strong player concentrations, with lower level characters being killed almost instantly. The reactions of players and the rapidity of the disease’s spread have since been studied by epidemiologists.

The debate continues about whether or not computer games can ever be considered to be “art”. One objection is that, because of player involvement, a sort of co-authorship takes place. Yet are not installations often an interactive experience requiring the presence and, occasionally, the participation of an audience? If we are to judge games on the basis of artistic merit, then we must ask whether all cinema, music and painting automatically meets the standard by which we distinguish art from commercial product, or just plain junk. Computer games are another genre, another medium, with many different levels of design and expression. One could focus on the components, such as the art of story-telling, the art of design of both the engine and the skins that clad it, the art of writing, both dialogue and in-game descriptions, or one could focus on the package as a whole.

Computer games have also generated a vast amount of creativity amongst their devotees. Many games that can be customised have large communities of highly skilled, literate and artistic modders. BioWare’s Neverwinter Nights games encouraged people to use the toolset to create adventures. Using the same virtual components – landscapes, buildings, trees, monsters, character models, etc – used in making the original game, players constructed their own complete game-settings and plots. In one collaboration, a Hungarian science-fiction author wrote a module entitled Tortured Hearts, with over 400,000 words of dialogue, and a complex array of possible role-playing interactions in an extensive world. Playing it in its entirety required almost 150 hours of game-time. This was one of many thousands of modules, several of such exceptional quality that they were rated by community members as superior to the original game. These hobbyists are not only giving pleasure to themselves and others, but also honing their skills, and in some cases, finding subsequent employment as game designers.

Whether we like it or not, computer games are already deeply embedded in modern society and will likely become even more so in the future. Theorists, noting the readiness of people to engage with gaming in so many different contexts, have begun to postulate on the gamification of life, where people are encouraged to do public good or improve themselves by game-like reward systems, or via game-like mechanisms. Just what the long-term social implications will be is difficult to predict, but the initial scare of a grossly negative impact appears not to have materialised, and the appeal of gaming has increased dramatically.

The long-antiquated idea that video-gaming is essentially an anti-social pursuit is no longer supportable in the era of MMOs and social network-based games. It was always something of a misconception when one considers the nature of console-gaming – gamers have been competing and cooperating with their friends and family since the days of the first consoles. On a smaller scale, the LAN party, in which players bring their computers to a friend’s house and connect via a local router, is another example of socialising both physically and via an interface.

Perhaps in the not-too-distant future, more affluent houses will contain a room, let’s call it the “iRoom,” where the entire space is utilised for the sake of gaming. A central, ceiling-mounted, 360-degree projector turns the space into a completely immersive environment of interiors and exteriors; speakers embedded flush with the walls provide surround sound, whilst receptors collect both movement and voice data from the player or players who stand in the midst of this space. Such a space will ultimately be a luxury product, but ever since consoles provided steering wheels and handguns, we have been moving towards this level of immersion.

People who treat games lightly and dismiss them as an ugly, crass, superficial and violent form of popular culture, will be disappointed to learn that not only are they not going to go away, but they are on target to become the supreme entertainment format and a dominant cultural phenomenon in the developed and developing worlds. Artists need not fear them, but instead, they should get on board. This vast gravy train is steaming ahead and writers, composers, painters, designers, voice artists and actors will find many opportunities for gainful and satisfying employment in this unstoppable industry of the future. It seems that for video games, the only way is up, and with the diversity of the market, there is, quite literally, something for everyone.


This is an essay written for my Masters in Creative Writing, c. 2005. It is not particularly well researched, but seems relevant and eloquent enough to warrant posting.

Death in Venice

Death in Venice is a brief yet complex novel which ought really to be called a novella.[1] Within its eighty-odd pages, Thomas Mann combines psychology, myth and eroticism with questions of the nature and role of the artist and the value of art. It is a metaphorical and allegorical novel which deals with themes common to German Romanticism, namely the proximity of love and death. That all this takes place within the context of a simple and linear story about an ageing writer’s homoerotic obsession with a fourteen-year-old Polish boy in Venice makes it all the more remarkable.

Two of the major themes I wish to touch on in this discussion are those of Mann’s understanding of and concern with the role of the artist, and the manner in which he has made use of personal experience in his work. I will also examine the way in which this novella developed from its initial conception as a rather different story altogether.

Thomas Mann’s early work focused almost entirely on the problem of art and the role of the artist. Mann was torn between an immense distrust of art as a “decadent evasion” and the elevation of art as “a source and medium of the interpretative critique of life.”[2] His thinking was to a great degree informed by the writings of Friedrich Nietzsche, yet he was certainly not as strictly Nietzschean as many of his contemporaries. In his 1903 work, Tonio Kröger, Mann explored the impact of a devotion to art and a bohemian lifestyle on the ability to live a normal life and retain a normal range of emotions. The character of Tonio Kröger “suffers from the curse of being the ‘Literat’, the writer who stands fastidiously apart from experience precisely because he has seen through it all. His critical, knowing, sceptical stance conflicts with his craving for ordinary, unproblematic living.”[3]

In a sense Mann established a sort of artistic manifesto through the character of Tonio who concludes that his art must be “an art in which formal control does not become bloodless schematism, but is, rather, able to achieve a lyrical – almost ballad-like – intensity and simplicity; an art which combines a precise sense of mood, of place with passages of reflection and discursive discussion; an art which is both affectionate yet critical, both immediate yet detached, sustained by a creative eros that has the capacity for formal control, for argument in and through the aesthetic structure.”[4]

Though Tonio Kröger predates Death in Venice by almost ten years, many of the conclusions reached in its composition inform the structure and purpose of his later work.

In Death in Venice, Mann once again displays his focus on questions about the nature of the artist and his art. After introducing his character of Gustave von Aschenbach and providing the inspiration behind his trip to Venice, Mann seems impatient to unload as much character detail as possible. He outlines Aschenbach’s career as a writer with both overt and covert cynicism which pinpoints the ironies inherent in his gradual transition from energetic bohemian to clockwork establishment figure. This dense and often turgid biography acts as a sort of premise to a novella that in many ways constitutes a narrative critique of art and artists and the nature of beauty, to name two of its principal themes.

Thomas Mann makes this plain early on in the following passage:

The new type of hero favoured by Aschenbach, and recurring many times in his works, had early been analysed by a shrewd critic: ‘The conception of an intellectual and virginal manliness, which clenches its teeth and stands in modest defiance of the swords and spears that pierce its side.’ That was beautiful, it was spirituel, it was exact, despite the suggestion of too great passivity it held. Forbearance in the face of fate, beauty constant under torture, are not merely passive. They are a positive achievement, an explicit triumph; and the figure of Sebastian is the most beautiful symbol, if not of art as a whole, yet certainly of the art we speak of here. Within that world of Aschenbach’s creation were exhibited many phases of this theme: there was the aristocratic self-command that is eaten out within and for as long as it conceals its biologic decline from the eyes of the world… [5]

It is no accident that the first theme here mentioned should conform so closely to the tale that is to follow. Mann had long been intrigued by the concept of an older man who has given himself single-mindedly to high achievements, only to be seized, late in life, by love of an inappropriate object who will prove his downfall.[6]

Thomas Mann had never shied away from using his characters and the situations into which he placed them as a forum for self-analysis. As far as he was concerned, “the personal was given its highest value when converted to literature.”[7] This was made nowhere plainer than in his brother Heinrich’s play about their sister Carla’s suicide. Thomas Mann championed the play and ensured it was produced, and he and his brother caused a scandal when they stood up and applauded vigorously on the opening night.

Mann was later to write:

“The personal element is all. Raw material is only the personal.”[8]

One of the most interesting aspects of Death in Venice is the degree to which it is based on real events. Within the context of this class, we have already to some degree addressed the question of how much of ourselves we might incorporate into our works; what elements of our personal experience might we deploy within the context of a piece of writing, and how might we disguise or manipulate these. Death in Venice is an example both of great skill and great good fortune, for almost the entire story derives from real events, described in minute detail with a desire to be faithful to recollection.

In his memoir, A Sketch of my Life, Mann wrote:

Nothing is invented in Death in Venice. The “pilgrim” at the North Cemetery, the dreary Pola boat, the grey-haired rake, the sinister gondolier, Tadzio and his family, the journey interrupted by a mistake about the luggage, the cholera, the upright clerk at the travel bureau, the rascally ballad singer, all that and anything else you like, they were all there. I had only to arrange them when they showed at once and in the oddest way their capacity as elements of composition. Perhaps it has to do with this: that as I worked on the story – as always it was a long-drawn-out job – I had at moments the clearest feelings of transcendence, a sovereign sense of being borne up such as I had never before experienced.[9]

Mann had indeed travelled with his wife and brother to an Adriatic resort, only to find it dull and oppressive, and had then made the decision to move on to Venice. He bought a ticket as described, saw the old fop on the boat as they were setting out and, upon arrival in Venice, he and his family were then transported to the Lido by an unlicensed gondolier who dropped them off and fled without being paid after unloading their luggage.

The Polish family were also present and are rendered as faithfully as possible. The accuracy of Mann’s descriptions was later attested in anecdotes and photographs provided by Count Wladyslaw Moes, upon whom Tadzio was based and who was tracked down by Mann’s daughter, Erika, in the 1960s. He also acknowledged that the tussle on the beach between Tadzio and Jaschiu had taken place in precisely the way described and even claimed to have been aware of a mysterious man who watched him continually during his stay.[10]

Not only did Mann base the context and characters upon what he witnessed and encountered, but the character of Aschenbach was a combination of himself and Gustav Mahler, who was a close personal friend of Mann and who was, at the time of Mann’s holiday in Venice, on his death-bed. During his stay in Venice, Mann read regular newspaper reports concerning Mahler’s declining health, and this seems to have inspired him to borrow Mahler’s age and appearance for the character of Aschenbach.[11]

On the other hand, Aschenbach’s habits and profession are of an accurate autobiographical nature; his three hours of writing every morning, his midday nap, his tea-time and afternoon walks which are taken precisely where Mann took his, his devoting his evenings to writing letters, and his special interest in prepubescent boys.[12]

While very little of the context and events of the story might be invented, it certainly did not present itself to Mann as a whole already plotted. The prevailing themes of art and beauty in Death in Venice were originally earmarked for a different sort of story altogether.

What I originally wanted to deal with was not anything homoerotic at all. It was the story – seen grotesquely – of the aged Goethe and that little girl in Marienbad whom he was absolutely determined to marry, with the acquiescence of her social-climbing mother and despite the outraged horror of his own family, with the girl not wanting it at all – this story with its terribly comic, shameful, awesomely ridiculous situations, the embarrassing, touching, and grandiose story is one which I may someday write after all. What was added to the amalgam at the time was a personal, lyrical travel experience that determined me to carry things to an extreme by introducing the motif of “forbidden” love.[13]

Mann’s great achievement with Death in Venice was to find so strong, if simple, a narrative strain within an otherwise non-narrative sequence of events, starting from a desire to examine a theme.

One of the paradoxes of Mann’s style in Death in Venice lies in the fact that despite its thorough realism, which derives to a very great degree from his detailed description of personal experiences, the story allows myth and legend to have a very palpable existence. In every regard, Death in Venice is a “highly stylised composition characterised by a tense equilibrium of realism and idealisation.”[14] The story is rich in metaphor, myth and psychology; its very title is unequivocal in establishing its teleological nature.

Nowhere is the palpability of mythical elements more strongly realised than in the figure of the stranger, through whose various manifestations Aschenbach is guided inexorably to his fate. The stranger takes the form of the traveller at the cemetery, the goateed captain of the ship from Pola, the gondolier and finally the musician, all of whom share devilish qualities in their appearance or assume a devilish quality through their actions and context.[15]

The stranger at the cemetery first appears “standing in the portico, above the two apocalyptic beasts.”[16] The ship’s captain makes the simple act of purchasing a ticket take on the trappings of a magic show through his flourishes.

He made some scrawls on the paper, strewed bluish sand on it out of a box, thereafter letting the sand run off into an earthen vessel, folded the paper with bony yellow fingers, and wrote on the outside… … His copious gestures and empty phrases gave the odd impression that he feared the traveller might alter his mind.[17]

The process becomes more akin to the signing of a devil’s contract and once again, Aschenbach is being drawn towards his fate. When the Gondolier rows him across to the Lido, it is as though he is being taken across the Styx by Charon in a coffin. Finally he encounters the musician who reeks of death and who further acts to ensure that Aschenbach is not inclined to leave Venice by maintaining the deception regarding the outbreak of cholera.[18]

Metaphor and suggestion are continually present. The graveyard at the very beginning has a chapel in the Byzantine style – uncommon and therefore distinct in Bavaria – which surely acts as a metaphor for Venice, with its Byzantine cathedral of San Marco, thus creating another link between Venice and death.[19]

Aschenbach’s initial vision of faraway places, a vision of a “tropical marshland beneath a reeking sky, steaming, monstrous rank – a kind of primeval wilderness-world of islands, morasses and alluvial channels,” describes both the point of origin of the cholera and the unpleasant aspect which Venice assumes.[20] Indeed the cholera is merely the embodiment of a metaphysical process taking place within Aschenbach.

Nothing about the writing in this work is coincidental: the chair in the gondola is “coffin black,” and the foppish man with the dyed moustache and goatee, the wig and rouge, heralds the fate awaiting Aschenbach.

In Death in Venice Mann uses contrast and counterpoint, combining modernity and myth, realism and fantasy, to make an otherwise minimalist and linear plot engaging.[21]

Metaphorically the story is that of the “tragedy of the creative artist whose destiny is to be betrayed by the values he has worshipped, to be summoned and destroyed by the vengeful deities of Eros, Dionysus and Death.” At a realistic level it is more a sombre parable about the physical and moral degradation of an ageing artist who relaxes his discipline.[22]

Death in Venice also functions as a series of philosophical reflections on the nature of beauty. The descriptions of Tadzio are variations on a sort of formulaic theme – that of him being representative of beauty’s very essence. At first Aschenbach’s obsession is portrayed as a realistic, psychological infatuation, just as his fantasies are initially sublimated and artistic, likening Tadzio to works of art. As his fantasies become gradually more erotic, however, the language becomes increasingly baroque and mythological. As Aschenbach’s behaviour becomes increasingly inappropriate in his infatuated pursuit, culminating in his cosmetic attempt to look younger, so the language of his infatuation becomes more fantastical and ludicrous. By the end of the story the language has become as decadent and unrestrained as Aschenbach’s behaviour.[23]

It is made clear at the start that Aschenbach is a writer whose style shows “an almost exaggerated sense of beauty, a lofty purity, symmetry and simplicity” and whose work shows a “stamp of the classical.” Apart from allowing Mann more easily to locate the discussion of beauty and art within the context of Platonic philosophy, it has been argued that through allusions to antiquity and its different moral standards, he was attempting to soften the blow of the prevailing theme of homosexuality.[24]

Tadzio is initially like one of the many youths for whom the Olympian gods “conceived a fondness,” being likened to Ganymede, Hyacinthus, and eventually Eros and Hermes. He is paradoxically both an inspiration and a challenge to the artist’s creative urge, and its nemesis. He combines both Apollonian and Dionysian qualities, an inspiration to work and a lure to dissipation, stupor and the final disintegration of body and mind.[25] In a work that closely explores the spirit and mentality of the artist, Tadzio embodies everything that threatens to undermine discipline and the sacrifices that are required to produce great work.

With the exception of its rather ponderous beginning, Death in Venice is a masterful combination of fantasy and realism within a novella that at times reads like an essay or philosophical tract. It is a very deliberate work by a writer who felt that art ought to have a purpose even if it was to undermine itself by debunking myths about its necessity and usefulness.

What makes Death in Venice so remarkable is that even with all of this contrivance and artifice, it moves forward with such a meticulously sustained level of psychological realism that its mythical and metaphorical trappings seem ideally coincidental rather than artificially contrived. Mann achieves this through intensive detail derived from recent and fresh personal experiences and through exploration of the extremities of his own psychological predilections. Keeping the degree of autobiographical material in mind, it is tempting to conclude that Mann has achieved a daring and self-effacing exploration of his innermost feelings within the context of a speculative projection of one of his possible futures. On the other hand it could equally be said that Mann merely used elements of himself to give more truth to a scathing caricature of the German literary establishment. Either way, Death in Venice is an imaginative and intense piece of writing which raises important questions about the nature of beauty and the nature of the artist, and whilst it provides no clear answers, it offers very telling insights.


Mann, Thomas, Der Tod in Venedig, 1912; trans. H. T. Lowe-Porter, Death in Venice, Penguin, 1928.

_____, Pariser Rechenschaft, Berlin, 1926.

_____, A Sketch of my Life, New York, 1960.

Feuerlicht, Ignace, Thomas Mann, Twayne Publishers, New York, 1968.

Hollingdale, R. J., Thomas Mann: a Critical Study, Rupert Hart-Davis, London, 1971.

Swales, Martin, Thomas Mann: a Study, Heinemann, London, 1980.

Von Gronicka, André, “Myth plus Psychology: a Stylistic Analysis of Death in Venice,” in Henry Hatfield, ed. Thomas Mann: a Collection of Critical Essays, Prentice-Hall, New Jersey, 1964, pp. 46-61.

Winston, Richard, Thomas Mann. The Making of an Artist 1875-1911, Constable, London, 1982.

[1] Thomas Mann, Der Tod in Venedig, 1912. I have used the 1928 translation of H. T. Lowe-Porter, reprinted in Death in Venice, Penguin, 1955.

[2] Martin Swales, Thomas Mann: a Study, Heinemann, London, 1980, p. 29.

[3] Swales, Thomas Mann, pp. 29-33.

[4] Swales, Thomas Mann, p. 33.

[5] Mann, Death in Venice, pp. 11-12.

[6] Richard Winston, Thomas Mann. The Making of an Artist 1875-1911, Constable, London, 1982, p. 269.

[7] Winston, Thomas Mann, p. 276.

[8] Mann, Pariser Rechenschaft, Berlin, 1926, p. 119; André Von Gronicka, “Myth plus Psychology: a Stylistic Analysis of Death in Venice,” in Henry Hatfield, ed. Thomas Mann: a Collection of Critical Essays, Prentice-Hall, New Jersey, 1964, pp. 46-61; p. 49.

[9] Thomas Mann, A Sketch of my Life, New York, 1960.

[10] Winston, Thomas Mann, pp. 267-70.

[11] Winston, Thomas Mann, pp. 267-8.

[12] Winston, Thomas Mann, pp. 268-9.

[13] Mann, Sketch of my Life; Winston, Thomas Mann, pp. 269-70.

[14] Von Gronicka, “Myth plus Psychology,” pp. 50-3.

[15] Swales, Thomas Mann, pp. 38-39.

[16] Mann, Death in Venice, p. 4.

[17] Mann, Death in Venice, p. 17.

[18] Von Gronicka, “Myth plus Psychology,” pp. 53-5; Ignace Feuerlicht, Thomas Mann, Twayne Publishers, New York, 1968, pp. 121-4.

[19] Mann, Death in Venice, p. 4.

[20] Mann, Death in Venice, pp. 5-6.

[21] Von Gronicka, “Myth plus Psychology,” p. 51.

[22] Swales, Thomas Mann, p. 41.

[23] Von Gronicka, “Myth plus Psychology,” pp. 51-3.

[24] Feuerlicht, Thomas Mann, pp. 118-24.

[25] Von Gronicka, “Myth plus Psychology,” p. 55.
