Wednesday, May 24, 2017

Priestly Rowan Atkinson

Friday, May 19, 2017

What is the Antikythera Mechanism and how was it discovered 115 years ago?

On 17 May 1902, Greek archaeologist Valerios Stais found a corroded chunk of metal which turned out to be part of the world's first computer and became known as the Antikythera Mechanism.

The Google Doodle of May 17, 2017, commemorates the 115th anniversary of the device's discovery, illustrating "how a rusty remnant can open up a skyful of knowledge and inspiration".

Here is what you need to know about the device.

What is the Antikythera Mechanism?

The Antikythera mechanism is the world's first analogue computer, used by ancient Greeks to chart the movement of the sun, moon and planets, predict lunar and solar eclipses and even signal the next Olympic Games.

The 2,000-year-old astronomical calculator could also add, multiply, divide and subtract. It was also able to align the number of lunar months with years and display where the sun and the moon were in the zodiac.

The device is an intricate system of more than 30 sophisticated bronze gears housed in a wooden and bronze case the size of a shoebox built around the end of the 2nd Century BC.

A 2016 study found that the device may also have had a fortune telling function.

How was it discovered?

In 1902 Valerios Stais was sifting through some artefacts from a wrecked Roman cargo ship found off Antikythera, which had been discovered two years earlier, when he noticed an intriguing bit of bronze among the treasures.

The chunk of bronze, which looked like it might be a gear or wheel, turned out to be part of the Antikythera Mechanism.

What do we know about it?

Researchers say the device was probably made on the island of Rhodes. While there was only one device ever found, they do not think it was unique.

There are minute inscriptions on the remaining fragments of its outer surfaces which point to at least two people being involved in its construction, and there could have been more people making its gears, according to Mike Edmunds, an astrophysics professor at Cardiff University in Wales who has been studying the device for over a decade.

More than a dozen pieces of classical literature, stretching over a period from about 300 BC to AD 500, make reference to devices such as the one found at Antikythera.

How old is it?

The instrument was long dated to about 85 BC, but recent studies suggest it may be even older (circa 150 BC).

The cargo ship in which it was found is believed to have sunk around 60 BC.

How advanced is it?

The crank-powered device was far ahead of its time; its components are as intricate as those of some 18th-century clocks.

It is unclear how that technology came to be lost. Its mechanical complexity would remain unrivalled for at least another 1,000 years, until the appearance of medieval clocks in European cathedrals.

Where can I see it?

The surviving fragments of the ancient computing device, along with modern reconstructions, are on display at the National Archaeological Museum in Athens, Greece.

The Telegraph

Tuesday, May 16, 2017

Laying down the law

Ending Catholicism in England and introducing the Reformation was a messy and conflicted process

The English were surprisingly divided about the break with Rome

Just a day after the English Book of Common Prayer was first used in Sampford Courtenay, Devon, on Whitsunday in 1549, an angry mob appeared at the church door. They demanded that the elderly rector reconsider using the new liturgy. Somewhat sheepishly, one imagines, he decided to don his popish vestments and revert to saying the Latin mass.

That village protest was the first of a series of English uprisings in Norfolk, Oxfordshire and the south-west, which led to perhaps 10,000 deaths as King Edward VI’s regime suppressed dissent. It would be a mistake to think that the English Reformation was mostly peaceful, with beheadings and burnings confined to a small and fervent elite.

The historiography of Tudor England usually focuses on the monarchs’ Reformation: how the state imposed religious change on the nation. Shelves groan with royal histories, but new accounts of how ordinary English people felt about, objected to and absorbed it all are much scarcer. On the 500th anniversary of Martin Luther’s Reformation, Peter Marshall has written a fine history of a momentous time as seen from the bottom up, drawing on a wide range of primary sources and his evident scholarship.

Mr Marshall has two contentions. First, that the English did not meekly comply with religious change. In the cities they were enthused by it, but many others resisted, especially in the rural and conservative north and west of the country. Second, that though royal supremacy was the aim, the state ultimately lost control as Christian pluralism flowered. In places the King’s majesty was questioned, as some began to think afresh about monarchy and church government. England ended with a less united religion than it had at the start of the 16th century.

The central story will be familiar. Henry VIII wanted to cut financial and legal ties to the Catholic church, in order to achieve national sovereignty and marry whom he liked. He was keen to shut down monasteries, rivals to kingly power for nearly 1,000 years, but he was never a zealous advocate of radical new ideas, about the meaning of the communion service, for example.

Henry’s attempts to please opposing court factions left England with a vague, incoherent set of tenets for a church without a pope, thinks Mr Marshall. Confusion about the national religion led more people to define and investigate their faith for themselves. Under Henry’s children, Edward VI and Mary, state zealotry fuelled outrage and enthusiasm. Edward’s ministers set out to destroy idolatry in church, including saints’ paintings, church silver, inappropriate altars and glitzy vestments. Mary returned sovereignty to Rome and launched a campaign of burning heretics.

In St Paul’s Cathedral hung a rood, a grand figure of Christ on the cross, the centre of the medieval churchgoer’s attention and piety, which provided a political bellwether through these years. The rood was ordered to come down under Edward. It crashed to the floor, killing two labourers beneath: perhaps not a great omen. The rood was ordered up again in Mary’s reign. A man rose from his pew to deliver a mocking encomium to “your Mastership”, the ascendant rood. It soon came down again under Elizabeth I.

What became known as the Elizabethan settlement—a return to Protestantism—far from settled the matter. The queen’s bishops wanted to go further than Edward VI; some in England wished to ban bishops altogether, looking to John Calvin in Geneva for inspiration. Elizabeth’s bishops despaired of her liking for icons and vestments, but defended her nonetheless.

Mr Marshall provides convincing evidence that Catholicism survived well into Elizabeth’s reign. At least 800 clergymen were deprived of their posts or removed themselves for reasons of conscience, including as many as a quarter of the clergy in one diocese, Rochester, not far from Canterbury. Only 21 out of 90 senior clergy in northern England assented to the settlement, and 36 openly disagreed. Dissent among middle-ranking clergy was even higher. Of those not carried off by the 1559 flu epidemic, fewer than half wished to continue.

A rebellion reckoned to be 7,000-strong in favour of the pope in 1569 was brutally suppressed. Many followers of the old religion simply conformed and dissembled. It is hard to understand how the people coped through these years. Tombs were vandalised; vicars protested at funerals. One village curate was known to shave his Protestant beard every time a change in religion was rumoured. However the English survived the Reformation, they did so as a nation divided.

Whig histories typically focus on the progress that the state and evangelicals made in forging a Church of England: a history of the winners. Mr Marshall’s contribution is a riveting account of the losers as well, the English zealots and cynics who wanted a better world or an unchanging one. The resulting story is of a Henrician supremacy that failed and an Elizabethan unity that never was.

Saturday, May 13, 2017

Thursday, May 11, 2017

Who was Ferdinand Monoyer?

The method for testing how well we are able to see without the help of glasses hasn't changed a great deal in over a century.

It follows a basic principle: read from rows of gradually shrinking letters until you're unable to distinguish the shapes any more. From this, ophthalmologists can determine people's clarity of vision.

The best-known test is the Snellen chart. It was introduced around the same time as a rival test called the Monoyer chart.

Named after its creator Ferdinand Monoyer, the Monoyer chart was designed more than 100 years ago and was the first eye test to use a decimal system.

May 9 would have been Monoyer's 181st birthday, and he has been honoured with a Google Doodle.

Who was Ferdinand Monoyer? 
Monoyer, who was born in France in 1836, pioneered the way we measure eyesight. He grew up in Lyon before moving to the University of Strasbourg in 1871. He eventually returned to Lyon, where he died in 1912, aged 76.

He is best known for creating the Monoyer chart, as well as for introducing the dioptre, the unit used to measure the refractive power of lenses.

"He developed the dioptre, the unit of measurement for vision that's still used today," said Google. "The dioptre measures the distance you'd have to be from text to read it. Most notably, Monoyer devised an eye chart where every row represents a different dioptre, from smallest to largest."
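To make the unit concrete: a dioptre expresses the power of a lens as the reciprocal of its focal length in metres (D = 1/f), which is why a chart reading distance maps directly onto a dioptre value. Here is a minimal sketch of that relationship; the function name is our own illustration, not anything from Monoyer or Google:

```python
def dioptres(focal_length_m: float) -> float:
    """Optical power in dioptres: the reciprocal of focal length in metres."""
    return 1.0 / focal_length_m

# A lens that focuses at half a metre is twice as powerful
# as one that focuses at a full metre.
print(dioptres(0.5))  # 2.0 dioptres
print(dioptres(1.0))  # 1.0 dioptre
print(dioptres(2.0))  # 0.5 dioptres
```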

11 things you didn't know about your eyes
1. A human eye weighs approximately 28g, is 2.5cm wide and has six muscles
2. They hardly grow from the time when you're born
3. Eyes can only see the colours red, blue and green. They make all others from these three
4. They can spot around 50,000 shades of grey
5. The average blink takes one tenth of a second
6. People blink around 12 to 17 times per minute
7. It takes an eye 48 hours to heal a scratch
8. There are around 137 million rod and cone cells in your eyes
9. The muscle that controls the eye is the most active in the body
10. Images arrive at the eye upside down, split in half and distorted
11. Peripheral vision is low quality and almost black-and-white


Wednesday, May 10, 2017

To what extent should travellers adjust their dress when abroad?

There seems to have been a mini-spate of Americans worried by Middle Eastern-looking travellers recently. Towards the end of last year, there were reports of Arabic-speaking Americans not being allowed on a plane because their fellow flyers were scared of them, and of a group of Middle Eastern-looking men being ejected from a flight because they had asked to sit together.

This week, Ahmed Al Menhali, a businessman from the United Arab Emirates, dressed in traditional thobe and ghutra, was forced to the floor and handcuffed by police (see picture) after they received a call saying he had been talking on a mobile phone outside an Ohio hotel “pledging allegiance to ISIS”. The details are a little unclear, but reports suggest that the receptionist at the hotel panicked at the sight of the Arab man talking on a mobile phone. She then hid at the back of the hotel and contacted her sister who, it seems, called 911 and added the embellishment about jihadism. Mr Al Menhali, who has a history of health problems, collapsed shortly after the police uncuffed him after realising he was doing nothing untoward.

Following the incident, the UAE foreign ministry warned its citizens against wearing traditional dress while in America. That is surely an overreaction; still, it is a sad state of affairs when travellers from different cultures feel they can’t go about their business in foreign-looking clothes.

Mr Al Menhali’s mode of dress was innocent enough. But such matters are not always straightforward. Many of us, when we travel abroad, adapt what we wear. Women, for example, know not to bare too much flesh when they go to certain parts of the Middle East or Asia. And that seems fair: there is a balance to be drawn between being culturally sensitive and not offending your hosts, and the freedom to dress as you would at home.

But where would, for example, having your tattoo on show in Japan fall on that spectrum? Should inked Westerners feel they should cover them up, because they are frowned upon in the country? What of the Maori woman who was barred from a Japanese resort because of her elaborately painted face?

And then there is the question of how to deal with a dress code when it forms part of the law. Following the Ohio affair, the UAE also advised women to abide by rules not to wear the veil in countries such as France, where it is forbidden. In some nations the opposite is true. Gulliver remembers interviewing two women who were doing business in Saudi Arabia who had, against all their principles, reluctantly agreed to wear headscarves. Compared with blazing a trail for their sex in a patriarchal country, they thought that the lesser of two evils. That was a matter for their conscience. But what of Air France stewardesses, who were compelled by their bosses to wear headscarves when the airline resumed flying to Iran?

Gulliver wishes he could think of a hard-and-fast rule for how to behave in such circumstances. The best that he can come up with is that if you have chosen to visit or do business somewhere then you should generally accept the cultural norm. But if your conscience doesn’t allow for that, then be strong and make a stand. And culture, although important, doesn’t trump everything, particularly if it is simply a veil for repression. Naked prejudice, as seems to have been the case with Mr Al Menhali, should not be tolerated wherever you are in the world.

Tuesday, May 09, 2017

The Handmaid's Tale

“The Handmaid’s Tale”, Hulu’s TV adaptation of Margaret Atwood’s dystopian novel from 1985, is without question the bleakest, most pitiless television drama for years. Narrated by a woman called Offred (“of Fred”, after the man to whom she now belongs), it imagines a world in which, after a mass fertility crisis, a fundamentalist Christian movement has taken part of America by force, named it Gilead and made the Bible the rule of law. Offred, whose husband has been shot and whose child has been taken from her, is one of the Handmaids: fertile women forcibly impregnated by the junta’s top brass. Each episode has been extraordinarily frightening, with every few minutes throwing up another peep-through-yer-fingers twist: women blinded for talking back, women circumcised for having same-sex affairs, women everywhere disenfranchised of their minds and bodies.

Atwood’s story feels especially relevant now. The anachronistic sexism of some of Donald Trump’s campaign performances raised the possibility of a regression to earlier mores; and the power of this series lies in its depiction of the slipperiness of the slope from bad to worse.

The TV show makes one striking change from the source material. In the novel, Offred (Elisabeth Moss) tells the story in flashback, switching between the present where she is enslaved as a Handmaid and a past which was a hyper-sexualised dystopia, where men and women visit “Pornomarts”, “Pornycorners” and “Feels on Wheels vans” (the playful branding is a consistent tic of Atwood’s science fiction). In this adaptation the past is closer to our world, but is an era of creeping transition. When Moss’s character and her friend are out for a jog in vests that reveal a flash of cleavage, they get a horrified look from a woman in a high-necked top; later, in their local coffee shop, the male barman calls them “fucking sluts” with the air of one seizing a newly granted privilege. They laugh with disbelief, which is also their reaction when they get home to find that their bank accounts have been suspended. “Nothing changes instantaneously,” Offred observes. “In a gradually heating bathtub you’d be boiled to death before you knew it.”

This series is particularly good at portraying the incredulity – the sense that it can’t happen here – that ensues as the bathwater heats up. The revolution-era Georgiana of its sets and costumes (troupes of handmaidens in scarlet and white, the winding riverscapes dotted with hanging corpses) is a triumph of production design, and fleshes out the cod-historical roots of the Christian nationalism it portrays. But dystopias like this, however compelling, have a distancing effect of their own. While Gilead’s stylised world is horrifying, it’s also comfortingly removed from reality. Its nightmares may harrow us, but we may also think that even if we have not yet achieved the final goals of feminism, at least we’re not dressing women up in pilgrim bonnets and making them rape-toys for Abe Lincoln impersonators.

By contrast, the series’ contemporary sequences concentrate on the tipping points, when we realise that something uninspected has suddenly gone much too far. The moment a policeman opens fire on a protest march. The moment armed men enter a publisher’s office and order all the women to leave. The moment you find you’re not allowed to own property any more. And reactions – particularly from the male cast – are similarly well caught. The fourth episode portrays the day of the coup, when Moss’s character is escorted from her workplace and returns home to find that all her property has been transferred by law to her husband. “You know I’ll take care of you,” he says. But men “taking care” of women is the very thought-pattern on which the new regime will be built.

These little moments and others – the handwringing male boss who complains that he “doesn’t have a choice”, the colleagues who won’t meet Moss’s eye as she is escorted from the building – help to bridge the gap between the complacency of the present and the wild, imagined future. “This may not seem ordinary to you right now,” says Aunt Lydia, the cattle-prod-wielding guardian and trainer to the Handmaids. “But after a time it will. This will become ordinary.” This series shows how that nasty little bit of mind-magic can – and, heaven forbid, could – still work.

Friday, May 05, 2017

The kids are alright

Worries about the damage the internet may be doing to young people have produced a mountain of books—a suitably old technology in which to express concerns about the new. Robert Bly claims that, thanks to the internet, the “neo-cortex is finally eating itself”. Today's youth may be web-savvy, but they also stand accused of being unread, bad at communicating, socially inept, shameless, dishonest, work-shy, narcissistic and indifferent to the needs of others.

The man who christened the “net generation” in his 1997 bestseller, “Growing Up Digital”, has no time for such views. In the past two years, Don Tapscott has overseen a $4.5m study of nearly 8,000 people in 12 countries born between 1978 and 1994. In “Grown Up Digital” he uses the results to paint a portrait of this generation that is entertaining, optimistic and convincing. The problem, he suspects, is not the net generation but befuddled baby-boomers, who once sang along with Bob Dylan that “something is happening here, but you don't know what it is”, yet now find that they are clueless about the revolutionary changes taking place among the young.

“As the first global generation ever, the Net Geners are smarter, quicker and more tolerant of diversity than their predecessors,” Mr Tapscott argues. “These empowered young people are beginning to transform every institution of modern life.” They care strongly about justice, and are actively trying to improve society—witness their role in the recent Obama campaign, in which they organised themselves through the internet and mobile phones and campaigned on YouTube. Mr Tapscott's prescient chapter on “The Net Generation and Democracy: Obama, Social Networks and Citizen Engagement” alone should ensure his book a wide readership.

Contrary to the claims that video games, Facebook and constant text-messaging have robbed today's young of the ability to think, Mr Tapscott believes that “Net Geners” are the “smartest generation ever”. The experience of parents who grew up watching television is misleading when it comes to judging the 20,000 hours on the internet and 10,000 hours playing video games already spent by a typical 20-year-old American today. “The Net Generation is in many ways the antithesis of the TV generation,” he argues. One-way broadcasting via television created passive couch potatoes, whereas the net is interactive, and, he says, stimulates and improves the brain.

There is growing neuroscientific support for this claim. People who play video games, for example, have been found to process complex visual information more quickly. They may also be better at multi-tasking than earlier generations, which equips them better for the modern world.

Mr Tapscott identifies eight norms that define Net Geners, which he believes everyone should take on board to avoid being swept away by the sort of generational tsunami that helped Barack Obama beat John McCain. Net Geners value freedom and choice in everything they do. They love to customise and personalise. They scrutinise everything. They demand integrity and openness, including when deciding what to buy and where to work. They want entertainment and play in their work and education, as well as their social life. They love to collaborate. They expect everything to happen fast. And they expect constant innovation.

These patterns have important implications for the workplace. Employers who ban the use of Facebook in the office—the equivalent of forbidding older staff to use their rolodexes—show clear signs of being out of touch, he argues. Two out of three Net Geners feel that “working and having fun can and should be the same thing”. That does not mean they want to play games all day, but that they want the work itself to be enjoyable. They also expect collaboration, constant feedback and rapid career advancement based on merit. How they will react to being fired en masse as the downturn worsens remains to be seen, but Mr Tapscott suspects they will take it in their stride.

Two things do worry Mr Tapscott. One is the inadequacy of the education system in many countries; while two-thirds of Net Geners will be the smartest generation ever, the other third is failing to achieve its potential. Here the fault lies with education, not the internet, which he thinks should be given a much bigger role in classrooms (real and virtual). The second is the net generation's lack of any regard for personal privacy, which Mr Tapscott says is a “serious mistake, and most of them don't realise it.” Already, posting pictures of alcohol-fuelled parties, let alone mentioning drug use or other intimate matters, is causing a growing number of job applicants to fail the “reference test” as employers trawl Facebook and MySpace for clues about the character and behaviour of potential employees.

More optimistically, the Net Geners are much more positive than their predecessors about their family. Half of those interviewed regard at least one parent as their “hero”. Mr Tapscott believes the internet is producing an improved, more collaborative version of family life, which he calls the “open family”. Parents increasingly recognise that their youngsters have digital expertise they lack but want to tap, and also that their best defence against their children falling foul of the dark side of the internet, such as online sexual predators, is to win their children's trust through honest conversation. Ironically, Mr Tapscott's recommended “platform” for this essential social networking could hardly be more old tech: the family dinner table.

The Economist

Wednesday, May 03, 2017

Tuesday, April 25, 2017

Different side of London

Different angle #London #visitbritain

A post shared by Maria Dellaporta (@dellaportamaria)

Tuesday, April 18, 2017

Explaining Britain’s immigration paradox

Migration is good for the economy. So why are the places with the biggest influxes doing so badly?

“The Golden Cross Welcomes you to Redditch!” The greeting, on the wall of a pub outside the town’s railway station, is valiant. But the dingy wire fence and mossy concrete beneath it let down the enthusiasm of the sign’s welcome. Redditch is struggling. In recent years, wages have fallen. It has also seen a rapid rise in the number of migrants, in particular those from eastern Europe. Perhaps linking these two phenomena, the people of Redditch voted 62:38 to leave the European Union in the referendum last June.

Immigration is a boon for Britain. The 9m-odd foreign-born people living there bring with them skills and attitudes that make the country more productive. Younger and better educated than natives, immigrants pay more in tax than they use in the way of public services. For some institutions they are indispensable: perhaps 30% of doctors in Britain are non-British.

Even so, Britain is unenthusiastic about immigration. Surveys find that roughly half of people would like it reduced “a lot” and fewer than 5% want it to go up. Many politicians interpret the vote for Brexit as a plea to reduce the number of new arrivals. Although the government has recently hinted that net migration may not fall by much after Britain leaves the EU, a group called Leave Means Leave, backed by two dozen MPs, is calling for it to be slashed to a sixth of its current level.

To understand this antipathy to immigration, we examined the ten local authorities that saw the largest proportional increase in foreign-born folk in the ten years from 2005 to 2015 (we excluded Northern Ireland, because of differences in its data). Whereas big cities such as London have the greatest share of immigrants among their populations, the places that have experienced the sharpest rises are mostly smaller towns, which until recently had seen little immigration.

Top of the list is Boston, in Lincolnshire, where in 2005-15 the number of foreign-born residents rose from about 1,000 to 16,000. In 2005 immigrants were about one in 50 of the local population. They are now one in four. All ten areas we looked at saw at least a doubling in the share of the population that was born outside Britain.

These ten areas—call them Migrantland—voted about 60:40 in favour of leaving the EU, compared with 52:48 across Britain. Boston went for Brexit by 76:24, the highest margin of any local authority. And whereas it has often been noted that there was no link between the size of a place’s migrant population and local enthusiasm for Brexit (consider London, both cosmopolitan and heavily for Remain), we found some link between the increase in the number of migrants and the likelihood to vote Leave (see chart). London boroughs such as Hackney and Newham have welcomed large numbers of foreigners for centuries. People in those places have got used to newcomers, suggests Tony Travers of the London School of Economics. “But when your local population of migrants goes from 10% to 15% in a decade, that’s where you get the bite.”

Jacqui Smith, a former MP for Redditch and Labour home secretary in 2007-09, sees his point. “I know there’s racism in London, but people have largely become used to diverse communities...The transitional impact in Redditch is much greater,” she says. Redditch has in recent years acquired a couple of Polish supermarkets. Those who are well-off, mobile and confident find those sorts of developments interesting—“You think, ‘I’ll be able to get some Polish sausage’,” says Ms Smith. But those who lack housing or work worry about what such changes represent. The staff at an employment agency in Redditch attest to such fears. Most of the workers they place in jobs are from eastern Europe. “They’re brilliant, we love them,” smiles one member of staff. But when locals come looking for work and see how many foreign names are on the agency’s register, there is some resentment, she says.

The wrong place at the wrong time

It is tempting to conclude that such attitudes are motivated by prejudice. Yet a closer look at the economy and public services in Migrantland makes clear that its residents have plenty to be angry about—even if the migrants are not the culprits.

Places where living is cheap and jobs plentiful are attractive to newcomers. In 2005 the average house in Migrantland cost around £140,000 (then $255,000), compared with more than £150,000 across Britain. Unemployment was lower than average. Low-skill jobs blossomed. Migrantland seems to be more dependent on agriculture than the rest of the country. The big change in Boston, says Paul Gleeson, a local Labour councillor, is that previously-seasonal work, such as fruit- and veg-picking, has become permanent as technology and new crop varieties have lengthened the agricultural season. This means the people doing that work now live there permanently, too. Manufacturing centres are nearby: food processing, for instance, is a big employer in Boston and Mansfield.

Given the nature of the jobs on offer, it is unsurprising that the new arrivals are often young and not particularly well educated or Anglophone. We estimate that whereas over 40% of the Poles living in London have a higher-education qualification, only about a quarter do in the East Midlands, where three of our ten areas are. One in 20 people in Boston cannot speak English well or at all, according to the 2011 census. Small wonder that integration is hard. Many landlords do not allow tenants to drink or smoke inside, so people sit out on benches, having a drink and a cigarette. “Because they’re young, not because they’re foreign, they might not put their tins in the bin,” says Mr Gleeson.

What’s more, the places that have seen the greatest surges in migration have become poorer. In 2005-15 real wages in Migrantland fell by a tenth, much faster than the decline in the rest of Britain. On an “index of multiple deprivation”, a government measure that takes into account factors such as income, health and education, the area appears to have become relatively poorer over the past decade.

Are the newcomers to blame? Immigration may have heightened competition for some jobs, pushing pay down. But the effect is small. A House of Lords report in 2008 suggested that every 1% increase in the ratio of immigrants to natives in the working-age population leads to a 0.5% fall in wages for the lowest 10% of earners (and a similar rise for the top 10%). Since Migrantland relies on low-paid work, it probably suffered more than most.

But more powerful factors are at play. Because the area is disproportionately dependent on manufacturing, it has suffered from the industry’s decline. And since 2010 Conservative-led governments have slashed the number of civil servants, in a bid to right the public finances. The axe has fallen hard on the administrative jobs that are prevalent in unglamorous parts of the country. Migrantland’s public-sector jobs have disappeared 50% faster than those in Britain as a whole. In the Forest of Dean they have dropped by over a third. Meanwhile, cuts to working-age benefits have sucked away spending power.

Even before austerity, it had long been the case that poor places had the most threadbare public services. Medical staff, for instance, prefer to live in prosperous areas. Our analysis suggests that Migrantland is relatively deprived of general practitioners. Doctors for the East Midlands are trained in Nottingham and Leicester, but fewer people want to study there than in London, for instance. After training there, half go elsewhere. In 2014 there were 12 places for trainee doctors in Boston; only four were filled.

Follow the money

What can be done? In places where public spending has not yet caught up with a rapidly enlarged population, the government could target extra funding in the short term. The previous Labour government ran a “migration impacts fund”, introduced by Ms Smith. She acknowledges that the amounts involved were small (the budget was just £35m per year) but argues that the point was to reassure people that the government understood fears that immigration can make things tough for a time. The current government has launched a similar initiative, though it is no better funded.

And although Britons dislike immigration, they do not feel the same resentment towards immigrants themselves. Once they have been placed in jobs alongside each other, locals and migrants tend to rub along, says the Redditch recruitment agency. A music festival was recently held in the town to raise money for children’s hospital wards in Poland. Local Poles took part in the Holocaust commemoration this year, says Bill Hartnett, leader of the council.

All that may be encouraging, but it does not provide a way to improve conditions in the left-behind places to which migrants have rushed. To many people, Brexit may appear to be just such a policy. They have been told a story that leaving the EU will make things better in their area, says Mr Gleeson. “It won’t.”

The Economist

Sunday, April 16, 2017

When is it OK to shoot a child soldier?

Canada writes rules for troops who face armed nine-year-olds

ONE of the worst dilemmas soldiers face is what to do when they confront armed children. International law and most military codes treat underage combatants mainly as innocent victims. They offer guidance on their legal rights and on how to interrogate and demobilise them. They have little to say about a soul-destroying question, which must typically be answered in a split second: when a kid points a Kalashnikov at you, do you shoot him? Last month Canada became the first country to incorporate a detailed answer into its military doctrine. If you must, it says, shoot first.

Such encounters are not rare. Child soldiers fight in at least 17 conflicts, including in Mali, Iraq and the Philippines. Soldiers in Western armies, sometimes acting as peacekeepers, have encountered fighters as young as six on land and at sea. More than 115,000 young combatants have been demobilised since 2000, according to the UN. For the warlords who employ them, children offer many advantages: they are cheap, obedient, expendable, fearless when drugged and put opponents at a moral disadvantage. Some rebel armies are mostly underage.

In 2000 a group of British peacekeepers in Sierra Leone who refused to fire on children armed with AK-47s were taken hostage by them. One paratrooper died and 11 others were injured in their rescue. Soldiers who have shot children sometimes suffer from crippling psychological wounds. A Canadian who protected convoys in Afghanistan from attack by young suicide-bombers has not been able to hug his own children since he came home four years ago. Some soldiers have committed suicide. “We always thought it was the ambush or the accident that was the hardest point” of a war, said Roméo Dallaire, a retired Canadian general, in testimony before a parliamentary hearing on military suicides in March. In fact, the “hardest one is the moral dilemma and the moral destruction of having to face children.”

The Geneva Convention and other international accords prohibit attacking schools, abducting children and other practices that harm them. But they do not tell soldiers what to do when they confront children as combatants, making self-defence feel like a war crime. On March 2nd Canada adopted a military doctrine that explicitly acknowledges soldiers’ right to use force to protect themselves, even when the threat comes from children. “A child soldier with a rifle or grenade launcher can present as much of a threat as an adult soldier carrying the same armament,” it says. It is based in part on research by the Child Soldiers Initiative, an institute founded by Mr Dallaire that works towards ending the use of children as fighters.

The new doctrine goes well beyond the moment of confrontation. Intelligence officers, it says, should report on the presence of child soldiers and how they are being used. Soldiers deployed in areas with child fighters should be prepared psychologically, trained to handle confrontations with kids and assessed by psychologists when they return. The instruction suggests ways to ensure that killing children is a last resort. It recommends shooting their adult commanders to shatter discipline and prompt the youngsters to flee or surrender. It warns against the use of lightly armed units, which are vulnerable to “human-wave” attacks by children.

The authors of the new directive seem to be aware that a policy to shoot child soldiers even in self-defence could provoke outrage. So far, human-rights groups have expressed understanding. Canada is trying to strike a balance between treating children as innocents and recognising them as battlefield threats, says Jo Becker, a children’s-rights specialist at Human Rights Watch in New York. Britain is considering guidelines of its own, and other countries may follow. Canada may soon put its doctrine to the test. Its government has promised to send 600 troops on a three-year peace mission to Africa. It has not yet revealed exactly where they will go. Wherever it is, they are likely to meet gun-toting children. By acknowledging their right to defend themselves, Canada’s government may lessen the trauma of those forced to fight the youngest warriors.

The Economist

Tuesday, April 11, 2017

The voices in our heads


“Talking to your yogurt again,” my wife, Pam, said. “And what does the yogurt say?”

She had caught me silently talking to myself as we ate breakfast. A conversation was playing in my mind, with a research colleague who questioned whether we had sufficient data to go ahead and publish. Did the experiments in the second graph need to be repeated? The results were already solid, I answered. But then, on reflection, I agreed that repetition could make the statistics more compelling.

I often have discussions with myself—tilting my head, raising my eyebrows, pursing my lips—and not only about my work. I converse with friends and family members, tell myself jokes, replay dialogue from the past. I’ve never considered why I talk to myself, and I’ve never mentioned it to anyone, except Pam. She very rarely has inner conversations; the one instance is when she reminds herself to do something, like change her e-mail password. She deliberately translates the thought into an external command, saying out loud, “Remember, change your password today.”

Verbal rehearsal of material—the shopping list you recite as you walk the aisles of a supermarket—is part of our working memory system. But for some of us talking to ourselves goes much further: it’s an essential part of the way we think. Others experience auditory hallucinations, verbal promptings from voices that are not theirs but those of loved ones, long-departed mentors, unidentified influencers, their conscience, or even God.

Charles Fernyhough, a British professor of psychology at Durham University, in England, studies such “inner speech.” At the start of “The Voices Within” (Basic), he also identifies himself as a voluble self-speaker, relating an incident where, in a crowded train on the London Underground, he suddenly became self-conscious at having just laughed out loud at a nonsensical sentence that was playing in his mind. He goes through life hearing a wide variety of voices: “My ‘voices’ often have accent and pitch; they are private and only audible to me, and yet they frequently sound like real people.”

Fernyhough has based his research on the hunch that talking to ourselves and hearing voices—phenomena that he sees as related—are not mere quirks, and that they have a deeper function. His book offers a chatty, somewhat inconclusive tour of the subject, making a case for the role of inner speech in memory, sports performance, religious revelation, psychotherapy, and literary fiction. He even coins a term, “dialogic thinking,” to describe his belief that thought itself may be considered “a voice, or voices, in the head.”

Discussing experimental work on voice-hearing, Fernyhough describes a protocol devised by Russell Hurlburt, a psychologist at the University of Nevada, Las Vegas. A subject wears an earpiece and a beeper sounds at random intervals. As soon as the person hears the beep, she jots notes about what was in her mind at that moment. People in a variety of studies have reported a range of perceptions: many have experienced “inner speech,” though Fernyhough doesn’t specify what proportion. For some, it was a full back-and-forth conversation, for others a more condensed script of short phrases or keywords. The results of another study suggest that, on average, about twenty to twenty-five per cent of the waking day is spent in self-talk. But some people never experienced inner speech at all.

In his work at Durham, Fernyhough participated in an experiment in which he had an inner conversation with an old teacher of his while his brain was imaged by fMRI scanning. Naturally, the scan showed activity in parts of the left hemisphere associated with language. Among the other brain regions that were activated, however, were some associated with our interactions with other people. Fernyhough concludes that “dialogic inner speech must therefore involve some capacity to represent the thoughts, feelings, and attitudes of the people with whom we share our world.” This raises the fascinating possibility that when we talk to ourselves a kind of split takes place, and we become in some sense multiple: it’s not a monologue but a real dialogue.

Early in Fernyhough’s career, his mentors told him that studying inner speech would be fruitless. Experimental psychology focusses on things that can be studied in laboratory situations and can yield clear, reproducible results. Our perceptions of what goes on in our heads are too subjective to quantify, and experimental psychologists tend to steer clear of the area.

Fernyhough’s protocols go some way toward working around this difficulty, though the results can’t be considered dispositive. Being prompted to enter into an inner dialogue in an fMRI machine is not the same as spontaneously debating with oneself at the kitchen table. And, given that subjects in the beeper protocol could express their experience only in words, it’s not surprising that many of them ascribed a linguistic quality to their thinking. Fernyhough acknowledges this; in a paper published last year in Psychological Bulletin, he wrote that the interview process may both “shape and change the experiences participants report.”

More fundamentally, neither experiment can do more than provide a rough phenomenology of inner speech—a sense of where we experience inner speech neurologically and how it may operate. The experiments don’t tell us what it is. This hard truth harks back to William James, who concluded that such “introspective analysis” was like “trying to turn up the gas quickly enough to see how the darkness looks.”

Nonetheless, Fernyhough has built up an interesting picture of inner speech and its functions. It certainly seems to be important in memory, and not merely the mnemonic recitation of lists, to which my wife and many others resort. I sometimes replay childhood conversations with my father, long deceased. I conjure his voice and respond to it, preserving his presence in my life. Inner speech may participate in reasoning about right and wrong by constructing point-counterpoint situations in our minds. Fernyhough writes that his most elaborate inner conversations occur when he is dealing with an ethical dilemma.

Inner speech could also serve as a safety mechanism. Negative emotions may be easier to cope with when channelled into words spoken to ourselves. In the case of people who hear alien voices, Fernyhough links the phenomenon to past trauma; people who live through horrific events often describe themselves “dissociating” during the episodes. “Splitting itself into separate parts is one of the most powerful of the mind’s defense mechanisms,” he writes. Given that his fMRI study suggested that some kind of split occurred during self-speech, the idea of a connection between these two mental processes doesn’t seem implausible. Indeed, a mainstream strategy in cognitive behavioral therapy involves purposefully articulating thoughts to oneself in order to diminish pernicious habits of mind. There is robust scientific evidence demonstrating the value of the method in coping with O.C.D., phobias, and other anxiety disorders.

Cognitive behavioral therapy also harnesses the effectiveness of verbalizing positive thoughts. Many athletes talk to themselves as a way of enhancing performance; Andy Murray yells at himself during tennis matches. The potential benefits of this have some experimental support. In 2008, Greek researchers randomly assigned tennis players to one of two groups. The first was trained in motivational and instructional self-talk (for instance, “Go,” “I can,” “Shoulder, low”). The second group got a tactical lecture on the use of particular shots. The group trained to use self-talk showed improved play and reported increased self-confidence and decreased anxiety, whereas no significant improvements were seen in the other group.

Sometimes the voices people hear are not their own, and instead are attributed to a celestial source. God’s voice figures prominently early in the Hebrew Bible. He speaks individually to Adam, Eve, Cain, Noah, and Abraham. At Mt. Sinai, God’s voice, in midrash, was heard communally, but was so overwhelming that only the first letter, aleph, was sounded. But in later prophetic books the divine voice grows quieter. Elijah, on Mt. Horeb, is addressed by God (after a whirlwind, a fire, and an earthquake) in what the King James Bible called a “still small voice,” and which, in the original Hebrew (kol demamah dakah), is even more suggestive—literally, “the sound of a slender silence.” By the time we reach the Book of Esther, God’s voice is absent.

In Christianity, however, divine speech continues through the Gospels—the apostle Paul converts after hearing Jesus admonish him. Especially in evangelical traditions, it has persisted. Martin Luther King, Jr., recounted an experience of it in the early days of the bus boycott in Montgomery, in 1956. After receiving a threatening anonymous phone call, he went in despair into his kitchen and prayed. He became aware of “the quiet assurance of an inner voice” and “heard the voice of Jesus saying still to fight on.”

Fernyhough relates some arresting instances of conversations with God and other celestial powers that occurred during the Middle Ages. In fifteenth-century France, Joan of Arc testified to hearing angels and saints tell her to lead the French Army in rescuing her country from English domination. A more intimate example is that of the famous mystic Margery Kempe, a well-to-do Englishwoman with a husband and family, who, in the early fifteenth century, reported that Christ spoke to her from a short distance, in a “sweet and gentle” voice. In “The Book of Margery Kempe,” a narrative she dictated, which is often considered the first autobiography in English, she relates how a series of domestic crises, including an episode of what she describes as madness, led her to embark on a life of pilgrimage, celibacy, and extreme fasting. The voice of Jesus gave her advice for negotiating a deal with her frustrated and worried husband. (She agreed to eat; he accepted her chastity.) Fernyhough writes imaginatively about the various registers of voice she hears. “One kind of sound she hears is like a pair of bellows blowing in her ear: it is the susurrus of the Holy Spirit. When He chooses, our Lord changes that sound into the voice of a dove, and then into a robin redbreast, tweeting merrily in her ear.”

Forty years ago, Julian Jaynes, a psychologist at Princeton, published a landmark book, “The Origin of Consciousness in the Breakdown of the Bicameral Mind,” in which he proposed a biological basis for the hearing of divine voices. He argued that several thousand years ago, at the time the Iliad was written, our brains were “bicameral,” composed of two distinct chambers. The left hemisphere contained language areas, just as it does now, but the right hemisphere contributed a unique function, recruiting language-making structures that “spoke” in times of stress. People perceived the utterances of the right hemisphere as being external to them and attributed them to gods. In the tumult of attacking Troy, Jaynes believed, Achilles would have heard speech from his right hemisphere and attributed it to voices from Mt. Olympus:

The characters of the Iliad do not sit down and think out what to do. They have no conscious minds such as we say we have, and certainly no introspections. When Agamemnon, king of men, robs Achilles of his mistress, it is a god that grabs Achilles by his yellow hair and warns him not to strike Agamemnon. It is a god who then rises out of the gray sea and consoles him in his tears of wrath on the beach by his black ships. . . . It is one god who makes Achilles promise not to go into battle, another who urges him to go, and another who then clothes him in a golden fire reaching up to heaven and screams through his throat across the bloodied trench at the Trojans, rousing in them ungovernable panic. In fact, the gods take the place of consciousness.

Jaynes believed that the development of nerve fibres connecting the two hemispheres gradually integrated brain function. Following a theory of Homeric authorship that assumed the Odyssey to have been composed at least a century after the Iliad, he pointed out that Odysseus, who is constantly reflecting and planning, manifests a self-consciousness of mind. The poem’s emphasis on Odysseus’ cunning starts to seem like the celebration of the emergence of a new kind of consciousness. For Jaynes, hearing the voice of God was a vestige of our past neuroanatomy.

Jaynes’s book was hugely influential in its day, one of those rare specialist works whose ideas enter the culture at large. (Bicamerality is an important plot point in HBO’s “Westworld”: Dolores, an android played by Evan Rachel Wood, is led to understand that a voice she hears, which has urged her to kill other android “hosts” at the park, comes from her own head.) But Jaynes’s thesis does not stand up to what we now know about the development of our species. In evolutionary time, the few thousand years that separate us from Achilles are a blink of an eye, far too short to allow for such radical structural changes in the brain. Contemporary neurologists offer alternative explanations for hearing celestial speech. Some speculate that it represents temporal-lobe epilepsy, others schizophrenia; auditory hallucinations are common in both conditions. They are also a feature of degenerative neurological diseases. An elderly relative with Alzheimer’s recently told me that God talks to her. “Do you actually hear His voice?” I asked. She said that she does, and knows it is God because He said so.

Remarkably, Fernyhough is reluctant to call such voices hallucinations. He views the term as pejorative, and he is notably skeptical about the value of psychiatric diagnosis in voice-hearing cases:

It is no more meaningful to attempt to diagnose . . . English mystics (nor others, like Joan, from the tradition to which they belong) than it is to call Socrates a schizophrenic. . . . If Joan wasn’t schizophrenic, she had “idiopathic partial epilepsy with auditory features.” Margery’s compulsive weeping and roaring, combined with her voice-hearing, might also have been signs of temporal lobe epilepsy. The white spots that flew around her vision (and were interpreted by her as sightings of angels) could have been symptoms of migraine. . . . The medieval literary scholar Corinne Saunders points out that Margery’s experiences were strange then, in the early fifteenth century, and they seem even stranger now, when we are so distant from the interpretive framework in which Margery received them. That doesn’t make them signs of madness or neurological disease any more than similar experiences in the modern era should be automatically pathologized.

In his unwillingness to draw a clear line between normal perceptions and delusions, Fernyhough follows ideas popularized by a range of groups that have emerged in the past three decades known as the Hearing Voices Movement. In 1987, a Dutch psychiatrist, Marius Romme, was treating a patient named Patsy Hage, who heard malign voices. Romme’s initial diagnosis was that the voices were symptoms of a biomedical illness. But Hage insisted that her voice-hearing was a valid mode of thought. Not coincidentally, she was familiar with the work of Julian Jaynes. “I’m not a schizophrenic,” she told Romme. “I’m an ancient Greek!”

Romme came to sympathize with her point of view, and decided that it was vital to engage seriously with the actual content of what patients’ voices said. The pair started to publicize the condition, asking other voice-hearers to be in touch. The movement grew from there. It currently has networks in twenty-four countries, with more than a hundred and eighty groups in the United Kingdom alone, and its membership is growing in the United States. It holds meetings and conferences in which voice-hearers discuss their experiences, and it campaigns to increase public awareness of the phenomenon.

The movement’s followers reject the idea that hearing voices is a sign of mental illness. They want it to be seen as a normal variation in human nature. Their arguments are in part about who controls the interpretation of such experiences. Fernyhough quotes an advocate who says, “It is about power, and it’s about who’s got the expertise, and the authority.” The advocate characterizes cognitive behavioral therapy as “an expert doing something to” a patient, whereas the movement’s approach disrupts that hierarchy. “People with lived experience have a lot to say about it, know a lot about what it’s like to experience it, to live with it, to cope with it,” she says. “If we want to learn anything about extreme human experience, we have to listen to the people who experience it.”

Like other movements that seek to challenge the authority of psychiatry’s diagnostic categories, the Hearing Voices Movement is controversial. Critics point out that, while depathologizing voice-hearing may feel liberating for some, it entails a risk that people with serious mental illnesses will not receive appropriate care. Fernyhough does not spend much time on these criticisms, though in a footnote he does concede the scant evidentiary basis of the movement’s claims. He mentions a psychotherapist sympathetic to the Hearing Voices Movement who says that, in contrast to the ample experimental evidence for the efficacy of cognitive behavioral therapy, “the organic nature of hearing voices groups” makes it hard to conduct randomized controlled trials.

Fernyhough is not only a psychologist; he also writes fiction, and in describing this work he emphasizes the role of hearing voices. “I never mistake these fictional characters for real people, but I do hear them speaking,” he writes in “The Voices Within.” “I have to get their voices right—transcribe them accurately—or they will not seem real to the people who are reading their stories.” He notes that this kind of conjuring is widespread among novelists, and cites examples including Charles Dickens, Joseph Conrad, Virginia Woolf, and Hilary Mantel.

Fernyhough and his colleagues have tried to quantify this phenomenon. Ninety-one writers attending the 2014 Edinburgh International Book Festival responded to a questionnaire; seventy per cent said that they heard characters speak. Several writers linked the speech of their characters to inner dialogues that continued even when they were not actively writing. As for plot, some writers asserted that their characters “don’t agree with me, sometimes demand that I change things in the story arc of whatever I’m writing.”

The importance of voice-hearing to many writers might seem to validate the Hearing Voices Movement’s approach. If the result is great literature, it would be perverse to judge hearing voices an aberration requiring treatment rather than a precious gift. It’s not that simple, however. As Fernyhough writes, “Studies have shown a particularly high prevalence of psychiatric disorders (particularly mood disorders) in those of proven creativity.” Even leaving aside the fact that most people with mood disorders are not creative geniuses, many writers find their creative talent psychologically troublesome, and even prize an idea of themselves as, in some sense, abnormal. The novelist Jeanette Winterson has heard voices that she says put her “in the crazy category,” and the idea has a long history: Plato’s “mad poet,” Aristotle’s “melancholic genius,” and John Dryden’s dictum that “great wits are sure to madness near allied.” But, in cases where talent is accompanied by real psychological disturbance, do the creative benefits really outweigh the costs to the individual?

On a frigid night in January, 1977, while working as a young resident at Massachusetts General Hospital, I was paged to the emergency room. A patient had arrived by ambulance from McLean Hospital, a famous psychiatric institution in nearby Belmont. Sitting bolt upright, laboring to breathe, was the poet Robert Lowell. I introduced myself and performed a physical examination. Lowell was in congestive heart failure, his lungs filling with fluid. I administered diuretics and fitted an oxygen tube to his nostrils. Soon he was breathing comfortably. He seemed sullen and, to distract him from his predicament, I asked about a medallion that hung from a chain around his neck. “Achilles,” he replied, with a fleeting smile.

I’ve no idea if Lowell knew of Jaynes’s book, which had come out the year before, but Achilles was a figure of lifelong importance to him, one of many historical and mythical figures—Alexander the Great, Dante, T. S. Eliot, Christ—with whom he identified in moments of delusional grandiosity. In Achilles, Lowell seemed to find a heroic reflection of his own mental volatility. Achilles’ defining attribute—it’s the first word of the Iliad—is mēnin, usually translated as “wrath” or “rage.” But in a forthcoming book, “Robert Lowell, Setting the River on Fire: A Study of Genius, Mania, and Character,” the psychiatry professor Kay Redfield Jamison points out that Lowell’s translation of the passage renders mēnin as “mania.” As it happens, mania was Lowell’s most enduring diagnosis in his many years as a psychiatric patient.

In her account of Lowell’s hospitalization, Jamison cites my case notes and those of his cardiologist in the Phillips House, a wing of Mass General where wealthy Boston Brahmin patients were typically housed. Lowell wrote a poem about his stay, “Phillips House Revisited,” in which he overlays impressions of the medical crisis I had witnessed (“I cannot entirely get my breath, / as if I were muffled in snow”) with memories of his grandfather, who had died in the same hospital, forty years earlier.

There was a long history of mental illness in Lowell’s family. Jamison digs up the records of his great-great-grandmother, who was admitted to McLean in 1845, and who, doctors noted, was “afflicted with false hearing.” Lowell, too, suffered from auditory hallucinations. Sometimes, before sleep, he would talk to the heroes from Hawthorne’s “Greek Myths.” During a hospitalization in 1954, he often chatted to Ezra Pound, who was a friend—but not actually there. Among his contemporaries, recognition of Lowell’s mental instability was inextricably bound up with awe of his talent. The intertwining of madness and genius remains an essential part of his posthumous legend, and Lowell himself saw the two as related. Jamison quotes a report by one of his doctors:

Patient’s strong emotional ties with his manic phase were very evident. Besides the feeling of well-being which was present at that time, patient felt that, “my senses were more keen than they had ever been before, and that’s what a writer needs.”

But Jamison also shows that Lowell sometimes saw his episodes of manic inspiration in a more coldly medical light. After a period of intense religious revelation, he wrote, “The mystical experiences and explosions turned out to be pathological.” Splitting the difference, Jamison suggests that his mania and his imagination were welded into great art by the discipline he exerted between his manic episodes.

Lowell was discharged from Mass General on February 9th. Jamison quotes a note that one of my colleagues wrote to the doctors at McLean: “Thank you for referring Mr. Lowell to me. He proved to be just as interesting a person and a patient as you suggested he might be.” Later that month, Lowell had recovered sufficiently to travel to New York and do a reading with Allen Ginsberg. He read “Phillips House Revisited.” That September, he died.

The New Yorker

Sunday, April 09, 2017

Patchwork politics

Tony Judt decided to write “Postwar” while changing trains at Vienna's Westbahnhof terminus in December 1989. One historical era was ending and another was about to begin. Mr Judt, a Londoner who was educated at Cambridge and in Paris, and who is now professor of European Studies at New York University, believes that 1989 marked the end of the legacy of the second world war. The 44 years that followed the war were, in a sense, an “interim age: a post-war parenthesis, the unfinished business of a conflict that ended in 1945 but whose epilogue had lasted for another half century.”

If the first world war destroyed old Europe, the second, Mr Judt believes, created the conditions for a new, non-ideological Europe. The grand ideas which had shaken the continent since the French Revolution were now dead. All that was left was “the promise of liberty”, a promise fulfilled in western Europe in 1945, but which the rest of Europe had to wait for until 1989.

Europe had proved unable to liberate itself from National Socialism; nor could it keep Communism at bay unaided. It relied for its freedom and security upon the benevolence and goodwill of America. The movement for European unity, Mr Judt believes, “was grounded in weakness, not strength”. It was because Europe's influence was declining that it began to unite; and it was because Britain did not, in the 1950s, see its position in that light, that it did not join the European movement.

General de Gaulle famously had a “certain idea of France”. Mr Judt has a “certain idea of Europe”, a community of values whose system of inter-state relations is a model to be copied, rather than, as in the past, a warning to be avoided. For Europe shows the world that nationalism is obsolete. Mr Judt does not face up to the problem that Europe's virtue is bought largely at the price of loss of influence. “How many divisions has the pope?” Stalin once asked. The same question may be asked of Europe.

Mr Judt also argues that the new Europe, with the significant exceptions of the Soviet Union and Yugoslavia, was ethnically homogeneous. The peace settlement after the first world war, based as it was on Woodrow Wilson's principle of self-determination, had created states in central and eastern Europe with large minority populations. Post-second world war Europe was built out of the rubble of Nazism and Communism. “Hitler and Stalin between them had blasted flat the demographic heath upon which the foundations of a new and less complicated continent were then laid.”

Europe is not the same pre-war melting pot, but is composed instead of “hermetic national enclaves”. Minorities have been either expelled, as with the Germans from Poland and Czechoslovakia after 1945, or, as with the Jews, murdered. Mr Judt is particularly good on the centrality of the Holocaust to the new Europe. In a moving epilogue, entitled “From the House of the Dead”, he declares that “the recovered memory of Europe's dead Jews has become the very definition and guarantee of the continent's recovered humanity.” The new Europe remains mortgaged to its terrible past. That is why, he concludes, the European Union “may be an answer to history, but it can never be a substitute.”

Yet, as Mr Judt himself shows, the nations of western Europe have become far less hermetically sealed or ethnically homogeneous than in 1945. Immigration and asylum have given rise to new and acute cultural cleavages; and it is precisely because the nations of Europe have failed to become genuine melting pots that so much of European politics now revolves around issues of multiculturalism.

Mr Judt deals with grand and important themes. But, after announcing them in a powerful introduction, he proceeds to tell us at great length mainly what we know already. His discussion is chronological, not thematic, and the main ideas get lost in what is now a familiar story, although the story is told with some skill. Few books of nearly 1,000 pages justify their length, and “Postwar” is no exception. When Lord Beaverbrook was sent a 700-page biography of his fellow press magnate, Lord Northcliffe, he dispatched it unread to the University of New Brunswick, saying, “It weighs too much.” Sadly, “Postwar” is likely to suffer the same fate.

The Economist

Friday, March 31, 2017

Some Saudi women are secretly deserting their country

Women are fed up with being treated like children

Can Saudi Arabia keep its women? Last month’s appointment of women to head two big banks and Tadawul, the kingdom’s stock exchange, offers hope that the path to a fulfilling career is not completely blocked. But the restrictions of Saudi life remain so irksome that covertly, silently, many women are finding ways out.

On family trips abroad, some jump ship. Some, having been sent to Western universities at the government’s expense, postpone their return indefinitely. Others avail themselves of clandestine online services offering marriages of convenience to men willing to whisk them abroad. Iman, an administrator at a private hospital in Riyadh, has found a package deal for $4,000 offering an Australian honeymoon during which she plans to scarper.

Propelling the flight is the kingdom’s wilaya, or guardianship, law. Although it has received less publicity than the world’s only sex-specific driving ban, it imposes harsher curbs on female mobility. To travel, work or study abroad, receive hospital treatment or an ID card, or even leave prison once a sentence is served, women need the consent of a male wali, or guardian. From birth to death, they are handed from one wali to the next—father, husband and, if both of those die, the nearest male relative. Sometimes that might be a teenage son or brother, because although boys are treated as adults from puberty, women are treated as minors all their lives.

Iman, a divorcee, is subject to the guardianship of her brother, who at 17 is barely half her age. He lets her work as a manager at a hospital, but pockets her earnings. She says she is kept like a chattel, while he spends her money on drugs and weekends in massage parlours in neighbouring Bahrain. Her ex-husband refuses to let her see their children. Her brother prevents her from completing her studies in Europe. If she protests, he threatens to beat her.

She tried going to court to have the guardianship transferred to a more sympathetic elder brother, but the judge dismissed the case, she says, while talking on his phone. Though she dressed demurely in a full veil, she suspects the judge objected to her presenting her own case. Social services offer poor refuge, since hostels for abused women resemble prisons where the windows are barred and visitors banned. When she hears other women say that their brothers don’t beat them, Iman assumes they are lying “because they are scared of social housing”.

Estimates of the number of “runaway girls”, to use the Saudi term, are imprecise, but, says Mansour al-Askar, a sociologist at Imam Muhammad ibn Saud University in Riyadh, the rate is rising. By his estimates, over a thousand flee the kingdom every year, while more escape Riyadh for Jeddah, the kingdom’s more liberal coastal metropolis.

Dissenting Saudi scholars insist that the guardianship laws stem not from Islam, but the Bedouin customs that still hold sway in much of Arabia’s hinterland. Khadija, the Prophet Muhammad’s first wife, was a merchant who sponsored her husband. His subsequent wives moved between Medina and Mecca without him. “Islam freed women from the wilaya,” says Hassan al-Maliki, a theologian in Riyadh who has sometimes been jailed for free-thinking. “A woman can choose whom she marries.” But the clerics who man the judiciary maintain that guardians protect the vulnerable and keep families and, by extension, society together. Last December the courts sentenced a man caught denouncing the wilaya on social media to a year in jail. Another Saudi study, at a university in Mecca, acknowledged that some runaways might be fleeing physical abuse, but said that most had been influenced by the “misuse of social media, copying other cultures and weak beliefs”.

Economists note that the guardianship system makes Saudi Arabia poorer. More than a quarter of the 150,000 students the kingdom sends abroad every year are women. Given that many defer their return or choose to remain in more liberal places like Dubai, much of the $5bn the government spends on their studies each year is going to waste. “Saudi Arabia is losing the battle to keep its talent,” says Najah al-Osaimi, a female Saudi academic who has settled in Britain.

Awkwardly for reformers, some of the most tenacious advocates of the wilaya are women, particularly in obscurantist southern provinces like Asir. Despite such beguiling hashtags as #StopEnslavingSaudiWomen and #IAmMyOwnGuardian, a social-media campaign to end the wilaya system attracted just 14,000 signatures.

Use them or lose them

Saudi Arabia’s leaders acknowledge the need to make the kingdom more women-friendly. Already, more women attend Saudi universities than men. And although some men still send their own photographs when they apply for jobs for their wives (and even attend their interviews), in 2012 the kingdom waived the need for women to have their guardians’ approval for four types of work, including clothes-shop assistants, chefs and amusement-park attendants.

In upmarket malls, women can be seen selling aftershave, boldly spraying samples onto male hands. Broadminded men can give their female wards five-year permits to move unaccompanied (though they get updates by text message whenever their charges travel abroad). Countrywide, the dress code has relaxed a bit. In big cities, women have added streaks of colour and patterns to the black abayas or cloaks that the state requires them to wear. Even in Burayda, the bastion of Saudi Arabia’s puritanical right, women have cut slits for their eyes in veils that hitherto fully covered their faces, and let their abayas slip from their heads to their shoulders.

Nonetheless, many women seethe with frustration. On social media, footage of women riding motorbikes has gone viral. So too has a female silhouette, whisky bottle in hand, dancing on her car roof. A female pop group, clad in black, sings songs of protest from dodgems, toy cars, skateboards, roller-skates and other wheeled vehicles that they can legally drive. Unless the system adapts, warns Mr al-Askar, the sociologist, it risks crumbling. Judges and the police should work together to strip oppressive men of their right to be walis, he says. But for Iman, the hospital manager, reform can’t come soon enough. An Australian honeymoon awaits.

The Economist