Wednesday, September 29, 2010


Here. My "new home" post (explaining some of the whys) is here, and my first "real" new post is here. I'll try to keep blogging there; if you'd like to keep reading, please update your feedreader. This blog is closed.

Friday, September 17, 2010

The Indus argument continues

Last year much excitement and noise occurred, including on this blog [1, 2, 3], when a group of scientists (led by Rajesh Rao at the University of Washington, and including my colleague Ronojoy Adhikari) published a brief paper in Science supplying evidence, on statistical grounds, that the Indus symbols constituted a writing system. In their words, they "present evidence for the linguistic hypothesis by showing that the script’s conditional entropy is closer to those of natural languages than various types of nonlinguistic systems."

This rather modest claim outraged Steve Farmer, Richard Sproat and (presumably) Michael Witzel (FSW), who had previously "proved" that the Harappan civilization was not literate (the paper was subtitled "The myth of a literate Harappan civilization"). In a series of online screeds, they attacked the work of Rao et al: for reviews, see this previous post, and links and comments therein.

Now Richard Sproat has published his latest attack on Rao et al. in the journal Computational Linguistics. Rao et al have a rejoinder, as do another set of researchers, and Sproat has a further response to both groups (but primarily to Rao et al); all these rejoinders will appear in the December issue of Computational Linguistics.

To summarise quickly, the way I see it: Sproat claims (as he previously did on the internet) that Rao et al.'s use of "conditional entropy" is useless in distinguishing scripts from non-scripts, because one can construct non-scripts with the same conditional entropy, and because their extreme ("type 1" and "type 2") non-linguistic systems are artificial examples. Rao et al. respond that that is a mischaracterisation of what they did, observe that Sproat entirely fails to mention the second figure from the same paper or the more recent "block entropy" results, and repeat (in case it wasn't obvious) that they don't claim to prove anything, only offer evidence. They give inductive and Bayesian arguments for why the mass of evidence, including their own, should increase our belief that the Indus symbols were a script.
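(For non-specialists who want "conditional entropy" made concrete: here is a toy sketch, entirely my own and much cruder than Rao et al.'s analysis -- no smoothing, no real corpora -- of how one estimates the bigram conditional entropy of a symbol sequence.)

```python
import math
from collections import Counter

def conditional_entropy(seq):
    """Estimate H(X_{n+1} | X_n) in bits from the bigrams of a sequence."""
    pairs = list(zip(seq, seq[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(a for a, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (a, b), c in pair_counts.items():
        p_ab = c / n                       # joint P(a, b)
        p_b_given_a = c / first_counts[a]  # conditional P(b | a)
        h -= p_ab * math.log2(p_b_given_a)
    return h

# A rigidly ordered sequence has zero conditional entropy:
# each symbol fully determines the next...
print(conditional_entropy("abababababababab"))
# ...while a more mixed sequence scores higher.
print(conditional_entropy("aabbabbaabababba"))
```

The point of the Science paper was that natural languages occupy an intermediate band on this kind of measure: neither rigid nor maximally random.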

In connection with the Bayesian arguments, Rao et al. do me the honour of citing my blog post on the matter, thus giving this humble blog its first scholarly citation. My argument was as follows: Given prior degrees of belief, $P(S)$ for the script hypothesis and $P(NS)$ for the non-script hypothesis, and given "likelihoods" of the data under each hypothesis, $P(D|S)$ and $P(D|NS)$, Bayes' theorem tells us how to calculate our posterior degrees of belief in each hypothesis given the data:
$P(S|D) = \frac{P(D|S)P(S)}{P(D|S)P(S) + P(D|NS)P(NS)}$
We can crudely estimate P(D|NS) by looking at the "spread" of the language band in figure 1A of their Science paper and asking how likely it is that a generic non-language sequence would fall in that band: assuming that it can fall anywhere between the two extreme limits that they plot, we can eyeball it as 0.1 (the band occupies 10% of the total spread) [Update 17/09/2010: See the plot below, which is identical to the one in Science, except for the addition of Fortran (blue squares, to be ignored here).] Let us say a new language is very likely (say 0.9) to fall in the same band. Then $P(D|NS) = 0.1$, $P(D|S) = 0.9$. If we were initially neutrally placed between the hypotheses ($P(NS) = P(S) = 0.5$), then we get $P(S|D) = 0.9$: that is, after seeing these data we should be 90% convinced of the script hypothesis. Even if we started out rather strongly skeptical of the script hypothesis ($P(S) = 0.2$, $P(NS) = 0.8$), the Bayesian formula tells us that, after seeing the data, we would be almost 70% convinced ($P(S|D) = 0.69$).
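For anyone who wants to check the arithmetic, the calculation takes a few lines (the likelihoods 0.9 and 0.1 are the eyeballed values above):

```python
def posterior_script(prior_s, p_d_given_s=0.9, p_d_given_ns=0.1):
    """Posterior belief in the script hypothesis, by Bayes' theorem."""
    prior_ns = 1.0 - prior_s
    num = p_d_given_s * prior_s
    return num / (num + p_d_given_ns * prior_ns)

print(round(posterior_script(0.5), 2))  # 0.9: starting from a neutral prior
print(round(posterior_script(0.2), 2))  # 0.69: starting out skeptical
```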

We can quibble with these numbers, but the general point is that this is how science works: we adjust our degrees of belief in hypotheses based on the data we have and the extent to which the hypotheses explain those data.

Sproat apparently disagrees with this "inductive" approach, and accuses Rao et al of lack of clarity in their goals. On the first page, he clarifies that he was talking only of the Science paper and has not carefully analysed [correction 17/09/10] the more recent papers by Rao and colleagues; he says those works do not affect questions about the previous paper, writing,

'To give a stark example, if someone should eventually demonstrate rigorously that cottontop tamarins are capable of learning “regular” grammars, that would have no bearing on the questions currently surrounding Marc Hauser’s 2002 publication in Cognition.'

In this way Sproat succeeds in insinuating, without saying it, that the work of Rao et al. may have been fraudulent. (Link to Hauser case coverage)

A little later, on the claim that the arguments of FSW "had been accepted by many archaeologists and linguists", he offers this belated evidence that such people do exist:

Perhaps they do not exist? But they do: Andrew Lawler, a science reporter who in 2004 interviewed a large number of people on both sides of the debate notes that “many others are convinced that Farmer, Witzel, and Sproat have found a way to move away from sterile discussions of decipherment, and they find few flaws in their arguments” (Lawler 2004, page 2029), and quotes the Sanskrit scholar George Thompson and University of Pennsylvania Professor Emeritus of Indian studies Frank Southworth.

Having thus convincingly cited a science reporter to prove that the academic community widely accepts FSW's thesis, he proceeds to the actual claims about the symbols; after a few pages of nitpicks not very different from the above, he addresses a point which he had previously raised in this comment: why does figure 1A in the Science paper not include Fortran? He suspects that Fortran's curve would have overlapped significantly with the languages, "compromising the visual aspect of the plot". I actually find that explanation credible(*), and I was not comfortable with the manner of presentation of the data in the Science paper: but I view this as a problem with the "system" rather than the authors. Enormous prestige is attached to publication in journals like Science. To allow more authors to publish, Science has a one-page "brevia" format (which Rao et al. used) that allows essential conclusions to be presented on that printed page, while the substance of the paper is in supplementary material online. Rao et al. can argue, correctly, that they hid nothing in their full paper (including the supplementary material); but obviously what was shown in the main "brevia" format was selected for maximum instantaneous visual impact. And they are not the only ones to do this. I'd argue that formats like "brevia" are designed to encourage this sort of thing, and the blame goes to journals like Science. It is annoying, but to compare it with the Hauser fraud is odious.

Sproat's response doesn't improve in the subsequent pages. He distinguishes between his preferred "deductive" way of interpreting data and the "inductive" approach preferred by Rao et al; he complains that they did not clarify this in their original paper (though I would have thought the language was clear enough, that they nowhere claimed to be "deducing" anything, only offering "evidence"); he nitpicks (as I would have expected) with the Bayesian arguments. Overall, for all his combativeness, he is notably vaguer in his assertions than previously. He ends on this petulant note:

I end by noting that Rao et al., in particular, might feel grateful that they were given an opportunity to respond in this forum. My colleagues and I were not so lucky: when we wrote a letter to Science outlining our objections to the original paper, the magazine refused to publish our letter, citing “space limitations”. Fortunately Computational Linguistics is still open for the exchange of critical discussion.

The openness of CL is to be applauded, but I can think of some additional explanations for why Computational Linguistics allowed the response while Science did not. One is that the Science paper by Rao et al. was not a vicious personal attack on another set of researchers, and as such, did not merit a "rejoinder" unless it could be shown that the paper was wrong. Another may have been the quality of Rao et al's response on this occasion (Sproat could, if he liked, offer us a basis for comparison by linking his rejected letter to Science) [update 17/09/10: here].

I don't expect this exchange in a scholarly journal to end the argument, but perhaps the participants can take a break now.

(*) UPDATE 17/09/2010: Rajesh Rao writes:

By the way, the reason that Fortran was included in Fig 1B rather than 1A is quite mundane: a reviewer asked us to compare DNA, proteins, and Fortran, and we included these in a separate plot in the revised version of the paper. Just to prove we didn't have any nefarious designs, I have attached a plot that Nisha Yadav created that includes Fortran in the Science Fig 1A plot. The result is what we expect.

The plot is below (click to enlarge); the blue squares are the Fortran symbols.

Rajesh also remarks that the Bayesian posterior probability estimates -- that I derived from the bigram graph in the Science paper -- can probably be sharpened from the newer block entropy results. However, since Sproat makes it clear that he is only addressing the Science paper and is unwilling to let later work influence his perception, I think it's worth pointing out that the data in the Science paper are already rather convincing.

Tuesday, August 17, 2010

How to distinguish fake coin tosses...

Dilip posted an interesting problem the other day: if you were a professor teaching probability theory, and asked your students to toss a coin 100 times and write down the sequence of heads and tails that they obtained, and some of them cheated and simply made up a sequence of heads and tails, how could you tell?

It is interesting because, generically, very few sequences are truly random; and the human mind is certainly incapable of randomness. But what signs of non-randomness could you look for? The answer that Dilip, apparently, had in mind is that true random sequences will usually contain long "runs" of heads or tails (say, 6 or more heads, or 6 or more tails, in succession in 100 coin tosses). However, individuals generating random sequences will perceive such runs as non-random and correct for them. But this is not a very reliable answer by itself: the probability of a run of 6 or more (heads or tails) in 100 tosses is about 80%, so a truly random sequence will fail this test about a fifth of the time, while a smart student will probably throw in such runs. I argued that if one combined this with various other tests, one should be able to tell quite reliably. But at the moment I am unsure whether 100 tosses are enough for this. Certainly nobody could say for sure whether a sequence of 5 tosses was generated by a coin or a human. Meanwhile, I strongly doubt that any human could generate a sequence of 1000 random symbols (coin-tosses, numbers, whatever) that would fool the statistical tests. But can one reliably tell which of the following two is "random" and which isn't?

Sequence 1:


Sequence 2:

Neither of these was generated by tossing a coin. One was made by me, by pressing "h" or "t" "randomly" on a keyboard (ie, I, a human being aware of the usual "pitfalls", was trying to generate a random sequence, fairly rapidly without thinking much about it). The other was made using the pseudorandom number generator in Python, which is based on the Mersenne twister. I would guess that the Mersenne twister is more random than I am: what I would be interested in knowing, from any experts reading this, is whether one of the above sequences can be demonstrated, statistically, to be so non-random that the chances are very high it was generated by me and not by the program. I am, moreover, interested in the method and not the answer (which you have a 50% chance of getting right randomly). If you confidently identify the Mersenne twister-generated sequence, it is safe to say that the problem is with your test and not with the Mersenne twister.
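(For the curious, one standard test of the sort I have in mind is the Wald-Wolfowitz runs test, sketched below; the example sequences are my own, not the ones above. A human trying to look random tends to alternate too often, which inflates the number of runs and produces a large positive z-score. But no single test on a single 100-symbol sequence can be conclusive.)

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z-score for a two-symbol sequence.

    A "run" here is a maximal stretch of identical symbols.  Too many
    runs (over-alternation) gives a large positive z; too few, negative.
    """
    n1 = sum(1 for s in seq if s == seq[0])
    n2 = len(seq) - n1
    runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1           # expected number of runs
    var = (mu - 1) * (mu - 2) / (n1 + n2 - 1)  # its variance
    return (runs - mu) / math.sqrt(var)

print(runs_test_z("ht" * 50))    # strongly positive: far too many runs
print(runs_test_z("hhtt" * 25))  # near zero: run count looks plausible
```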

The "bonus question" that came up in Dilip's blog is, what is the probability of observing a run of 6 or more heads or tails (let's call them 6-runs) in 100 coin tosses?
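A quick Monte Carlo sketch (the seed and trial count are arbitrary) pins the answer near 0.8 before any analysis:

```python
import random

def has_run(tosses, k=6):
    """True if the sequence contains a run of k or more identical tosses."""
    run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        if run >= k:
            return True
    return False

def estimate_run_prob(n_tosses=100, k=6, trials=20_000, seed=1):
    rng = random.Random(seed)
    hits = sum(
        has_run([rng.randrange(2) for _ in range(n_tosses)], k)
        for _ in range(trials)
    )
    return hits / trials

print(estimate_run_prob())  # close to 0.8
```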

Kovendhan gave an approximate answer which seemed to work well in practice, but it turns out that he made two errors and a corrected calculation does poorly. The probability of a particular choice of 6 tosses (let's call it a 6-mer) being all heads or all tails is (1/2)^5 (it is (1/2)^6 for all heads, and the same for all tails). The probability that it is not all-heads or all-tails is therefore 1 - (1/2)^5. There are 95 ways to choose 6 successive tosses in 100. The probability that none of these 95 is all-heads or all-tails is (1 - (1/2)^5)^95 = about 0.049. The probability of at least one stretch of 6 identical tosses (all heads or all tails) existing would then seem to be 0.951 -- pretty near certain. The approximation consists of neglecting the fact that adjacent 6-mers here are not independent: e.g., if your chain of tosses is HHTHTT, not only does this fail to give a 6-run, but it will also fail to do so on any of the next 3 tosses at least.

Meanwhile, Kovendhan used (1/2)^6 instead of (1/2)^5 for the individual 6-run, and 94 for the number of 6-mers, which yields 0.772, but he reported 0.781 -- I'm not sure how he got that. My numerical experiments suggested the true number is a little above 0.8, which is close to Kovendhan's fortuitously incorrect calculation of his approximate method, but quite far from what his approximation should really give.
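Both versions of the approximation are easy to reproduce; they differ only in the per-6-mer probability and the count of 6-mers:

```python
# Corrected approximation: per-6-mer probability (1/2)**5, and
# 95 (overlapping, hence not really independent) 6-mers in 100 tosses.
p_corrected = 1 - (1 - 0.5**5) ** 95
print(round(p_corrected, 3))  # 0.951

# Kovendhan's version: (1/2)**6 per 6-mer, and 94 6-mers.
p_kovendhan = 1 - (1 - 0.5**6) ** 94
print(round(p_kovendhan, 3))  # 0.772
```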

The moral is, be aware of approximations and, if possible, have an estimate of their effect. Kovendhan's approximation is in fact very similar to Linus Pauling's when he calculated the zero-temperature entropy of ice. In the crystal structure of ice, each oxygen atom has four neighbouring oxygen atoms, in a locally tetrahedral arrangement. Along each of these O-O bonds is a hydrogen atom, but not centrally located: two H atoms must be closer to the central O atom, and two must be closer to the neighbouring O atoms. Globally, there are many ways to satisfy this; to count the ways, Pauling essentially assumed that the configurations of the local "tetrahedra" could be counted independently. This is like Kovendhan's assumption about 6-mers; unfortunately, while two tetrahedra in ice share at most one corner, two 6-mers in the toss sequence can share up to 5 tosses, which makes the 6-mers much less independent than the tetrahedra in ice.

I attempted an answer which I give below. A commenter observed that a formula exists for the probability of a run of 6 heads, and the same formula gives the probability of a run of 6 tails. However, the probability of six heads or tails is trickier.

My approach (which will be recognisable by physicists, computer scientists and others) was this: supposing we can calculate the probability P(N) that there are no 6-runs in N tosses, how do we calculate P(N+5), the probability that there are no 6-runs in N+5 tosses? If we can do this, we can start from P(5) = 1 (there are no 6-runs in 5 tosses, obviously) and build it up from there.

Naively, a 6-run can be built up at any of the five tosses between N and N+5: for example, if the previous five tosses up until N were all heads, then tossing heads again will give a 6-run. So we must consider all possibilities for both the five tosses from N-4 to N, and the five tosses for N+1 to N+5: ten tosses in total. There are 2^10 = 1024 possibilities for these 10 tosses, so it looks like a counting problem. The number of possibilities of "successful" runs in these 10 tosses can be enumerated as follows (where "N" stands for "any", and one can replace H with T in all these examples):
HHHHH HNNNN (32 possibilities: 2^4 = 16 for the last 4 tosses, times 2 for replacing H with T)
THHHH HHNNN (16 possibilities)
NTHHH HHHNN (16 possibilities)
NNTHH HHHHN (16 possibilities)
NNNTH HHHHH (16 possibilities)
for a total of 96 possibilities. There are then 1024-96 = 928 cases where there is no run of 6 heads or tails. So the "naive" answer is P(N+5) = P(N)*928/1024. If we want P(100), we can use this to go all the way back to P(5) = 1:
P(100) = P(95) (928/1024) = P(90) (928/1024)^2 = ...
to get
P(100) = P(5) (928/1024)^19 = 0.154 roughly
so the probability of at least one 6-run in 100 tosses is about 0.846.
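In code, the naive recursion looks like this:

```python
def p_no_6run_naive(n_tosses=100):
    """Naive recursion P(N+5) = P(N) * (928/1024), starting from P(5) = 1."""
    p = 1.0
    for _ in range((n_tosses - 5) // 5):  # 19 steps of 5 tosses each
        p *= 928 / 1024
    return p

p = p_no_6run_naive()
print(round(p, 3), round(1 - p, 3))  # 0.154 0.846
```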

Unfortunately, this is still not correct, because the 1024 possibilities -- and the 96 possibilities with runs -- are not all equally probable: they are conditional on the premise that there is no 6-run up until the N-5th toss. So, for example, the case "HHHHH TNNNN" should be weighted by the fact that it would be disallowed for half the possible sequences prior to this (the ones that ended with H); the case THHHH HNNNN would disallow far fewer sequences (the ones that end in TTTTT, which is only 1 in 32 sequences).

Therefore, I considered as prior possibilities the five tosses numbered N-9 to N-5 (that is, the five tosses preceding the current ten-mer). There are the following ten possibilities that should be distinguished, classified by the length of the trailing run of identical tosses:
NNNTH, NNNHT, NNTHH, NNHTT, NTHHH, NHTTT, THHHH, HTTTT, HHHHH, TTTTT
and each of these has a "prior probability" (1/4 for NNNHT, 1/32 for HTTTT, etc), and each disallows a fraction of the 1024 10-mers we are considering, as well as a fraction of the 96 10-mers that contain 6-runs. If you do the calculation separately for each of these 10 prior possibilities, and then weight the average by their prior probabilities, you end up with
P(no 6-run in 100 tosses) = (1481219/1614480)^19 = 0.195 roughly,
and P(at least 1 6-run) = 0.805 roughly.

This, as it turns out, is in excellent agreement with what I had previously obtained numerically.

Final question for anyone who has read this far: is this the exact answer? (I posted on Dilip's blog that I think it is, but don't go by that.) It is, however, "good enough" in my opinion, by two measures: the remaining error is small; and the issues have (I think) been illustrated well enough that the answer could be (laboriously) improved, if need be.

I'd be fascinated, however, if there is an exact answer on the lines of the "run of heads" answer linked above.
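(Postscript: an exact count is in fact possible by dynamic programming, since binary strings with no run of 6 are counted by a pentanacci-style recurrence. The sketch below is my own, not anything from the comment thread; running it gives a value a little above 0.8, which can be compared with the weighted estimate above.)

```python
def p_run_exact(n=100, k=6):
    """Exact P(at least one run of >= k identical faces in n fair tosses).

    Binary strings of length n whose runs are all shorter than k number
    2 * c[n], where c[n] counts compositions of n into parts 1..k-1
    (a (k-1)-step Fibonacci recurrence; pentanacci for k = 6).
    """
    c = [0] * (n + 1)
    c[0] = 1
    for i in range(1, n + 1):
        c[i] = sum(c[i - j] for j in range(1, k) if i - j >= 0)
    return 1 - 2 * c[n] / 2**n

print(p_run_exact(6))    # 0.03125: only HHHHHH and TTTTTT out of 64
print(p_run_exact(100))  # a little above 0.8
```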

Friday, August 06, 2010

Hand over the master keys, or else...

I find it comical that India's security agencies (now joined by several other countries) are demanding the "encryption keys" to BlackBerry devices. Can our government's security experts be ignorant of basic cryptography?

BlackBerry's encryption methods are not new, not novel, not unique, not even unusual. The technology to encrypt e-mail has existed since the early 1990s, and is called OpenPGP (after PGP or Pretty Good Privacy, the first program to implement it). It is usable on pretty much all e-mail systems and is built into Blackberries. There are no "master keys" here: each user has a public key and a private key, and messages can be encrypted with the public key but decrypted only with the private key. (Conversely, messages can be digitally "signed" with the private key and the signature can be verified with the public key). If A wants to send an encrypted message to B, A encrypts it with B's public key -- which A should have a copy of. The public key is meant to be public, and it is common for people to display it on their personal webpages and elsewhere. But B's private key is needed to decrypt it, and only B has (or should have) that key. Wikipedia has a good description of public key cryptography.
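(To see how the public/private asymmetry works in practice, here is textbook RSA -- one of the algorithms used for OpenPGP keys -- with the classic toy parameters. The primes below are hopelessly small and purely illustrative; real keys run to thousands of bits and use proper padding.)

```python
# Textbook RSA with the classic toy parameters p = 61, q = 53.
p, q = 61, 53
n = p * q                  # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: (e, n) is the public key
d = pow(e, -1, phi)        # private exponent: (d, n) is the private key

message = 65
ciphertext = pow(message, e, n)    # anyone with the public key can do this
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it
print(ciphertext, recovered)       # 2790 65
```

Knowing (e, n) is no help in decrypting: recovering d requires factoring n, which is easy for 3233 and infeasible for real key sizes.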

As far as I can tell, BlackBerry's "enterprise security" is a somewhat different system to secure communication between BlackBerry's servers and the customer's device, but it too is key-based cryptography (3DES or AES) that requires a private key for each device. RIM, the makers of BlackBerry, say they do not possess copies of customers' private keys, and indeed it would be alarming if they did. They are not being pioneers here (except, perhaps, in bringing it to wide use among their customers): this is standard practice in cryptography.

The government can ban BlackBerries, but then it would have to ban e-mail too: all email can be encrypted, using a method that dates back to 1991. And in fact it's easier than that: webmail providers such as Google Mail allow the entire session to be encrypted, and it is trivial to do this by clicking a few checkboxes (even my GMail app on my non-BlackBerry phone does this) -- so no agency can snoop without accessing Google's own servers. Perhaps our security agencies will next demand the root password for Google's data servers.

Alternatively, our government can try addressing our real security problems, and their underlying causes.

Friday, July 30, 2010

Yet more thoughts on Apple

It's been several months since we got our Mac Mini [1, 2]. Previously my wife used a Linux laptop. It worked well, except when it didn't, and I had to help out.

The Mac is just the same, except that when it works well, it works beautifully. Steve Jobs values aesthetics above everything else. But when it doesn't work...

So a few days ago she calls me to say the computer is not booting. I go over to look. Not only is she quite correct, but there's no way of telling what the problem is: all Apple gives you is a white screen with an Apple logo and an endlessly-spinning counter.

I go online with my laptop, and find that there are ways to boot differently by holding down various key combinations on boot. First I try "safe mode". It boots, and all seems well; but when I try the regular boot it fails again. And now "safe mode" doesn't work either.

Then I try "verbose" boot. This gives a scrolling screen of boot messages, of the kind familiar to Unix/Linux users. I see some messages about the filesystem but I don't understand them. The boot gets stuck at a point that I can't make sense of.

Then I try "single user". This time, I get a boot prompt that helpfully tells me to "fsck -fy". I do so, and after some churning, it tells me "filesystem cannot be repaired." I think, huh? I have seen serious filesystem errors on linux and unix, which can be repaired only at the cost of losing data: but I have never seen a filesystem that could not be repaired.

Googling gives me the dubious advice that repeatedly trying fsck should fix the problem, but it does not. I try the disk repair tool that comes from Apple's install DVD, but that too refuses to repair the filesystem.

Finally, "backup and reinstall" is the only way to go. I get a USB hard drive, use my unix skills to mount it and format it with the HFS+ filesystem in single-user mode, and back up all my wife's data (only a couple of unimportant files failed to get copied, luckily). And I reformat and reinstall, as any good Windows sysadmin would do.


  • This has never happened to me on linux, which I've been using on my own computers for 10 years now, and on other computers for even longer. A couple of times the filesystem was sufficiently corrupted that some important system files got lost, but all I had to do was copy them over from another machine or reinstall the affected package.

  • Linux, like OS X, typically uses a "journalled" filesystem (usually ext3 or ext4 on linux, HFS+ on Mac). This means that, after an "unclean" shutdown, the filesystem need not be thoroughly checked. But even when every shutdown is clean, Linux systems are usually set up to check the filesystem automatically once every 30 days, or once every 100 mounts (reboots), or thereabouts. This is just a precaution: hardware and software errors can always cause problems. As far as I can tell, the Mac is not set up to do this. In fact, as far as we know, the machine was not shut off "uncleanly" at any time recently: what probably happened was that undetected filesystem errors grew until they became unrecoverable. Why does Mac OS X not schedule a periodic filesystem check? Is it because Jobs thinks users will get frustrated at that informationless, spinning progress indicator? If so, why not just tell the user that the filesystem is being checked? I'm sure most users won't mind.

  • My wife -- and other non-techie users -- could not have recovered the computer on her own. From all accounts, Apple's customer service is good and very likely they'd have done exactly what I did, but they would have taken a few days rather than a few hours.

  • We should have taken backups, and got away very lightly considering we didn't. After this incident, we bought a new USB hard drive and set up Apple's "Time Machine" on it. This, like all Apple software, is slick and shiny; how well it works remains to be seen, or hopefully will not need to be seen for a while.

  • I strongly suspect that the "filesystem could not be recovered" message was not the truth, but an example of Apple's control-freakishness. The filesystem could perhaps be recovered only by losing a few files (a common-enough situation). Rather than let the user make that choice, Apple wants you to call customer service at the slightest sign of trouble -- by escalating that trouble, and also by hiding all useful information from the user, making it available only via arcane key combinations at boot time.

So if anyone out there is thinking of buying Apple: it's slick hardware and software, but in times of trouble, it's probably much harder to fix than Windows. And harder than other Unix-like systems, because it hides so much of its Unixness on the grounds of being user-friendly, or something. Still, for many people, the slickness probably makes up for anything else.

Friday, July 23, 2010

Giant steps

Madhav Chari, jazz pianist, performed with an all-Chennai trio -- consisting of himself, Naveen Kumar (bass) and Jeoraj George (drums) -- yesterday at the Museum Theatre in Chennai. I have written about Madhav before, when he performed with a French rhythm section [1,2] (who also back him on his recent CD, "Parisian thoroughfares"); and had previewed the concert here. Suffice it to say that it lived up to its prior billing. In an e-mailed announcement Madhav had declared it to be "absolutely the very first international standard jazz group from India since the inception of jazz in the country in 1927." It was. He said "We play jazz music: that's what we do." That's what they did. Over half of the programme was of Madhav's own compositions, beginning with "Tales of the south" (a reference, he said, both to New Orleans and to Chennai) and ending with "Blues for Havana". In addition they threw in pieces by Charlie Parker, Thelonious Monk, John Coltrane, Cole Porter, and Sherwin/Maschwitz's "A nightingale sang in Berkeley square" (which Madhav played unaccompanied). They nailed all of them. Jeoraj took several drum solos, while Naveen played extended bass solos on Madhav's "Rejoice" and "Blues for Havana".

Madhav repeatedly said that the band is still feeling its way and is not really a mature outfit, which is why they chose not to play Ellington. But if there were flubs, I did not notice. The Parker was taken at breakneck speed, Porter's "Love for Sale" and Madhav's "Tango sentimental" were rhythmically very complex, and the chord changes in Coltrane's "Giant Steps" are a challenge for the best musicians. The band sailed through all of them.

But almost equally entertaining was Madhav's patter before the songs. He declared Chennai the most advanced city for percussion in Asia (previously he had said that though Chennai audiences may not understand jazz, they understand music better than anyone). He has a dim view of what has long passed for "jazz" in this country (perpetrated by people like Louis Banks), and took several potshots at the elites of Mumbai, Kolkata and Delhi; he challenged anyone from those cities to measure themselves against Naveen and Jeoraj; he conceded that the sizeable audience yesterday (well over 400) may be achievable for jazz in Kolkata, but declared that there is no jazz drummer in that city who can keep time, so Chennai is ahead on that count.

Towards the end, he recounted a lady at a recent party asking him why he blew his own trumpet so much, and asked the audience (to resounding cheers): "Well, if I have the greatest jazz band in the history of India, am I supposed to keep quiet about it?"

Indeed, a few years ago I marvelled that there was a jazz pianist in this "conservative" city who was the equal of the best in New York. Now I find that there is an entire world-class jazz piano trio in this city -- but it now seems exciting rather than surprising. My opinion is that Madhav really does not need to blow his own trumpet. His piano, and his new rhythm section, are eloquent enough.

Wednesday, July 21, 2010

Should one pray for Hitch? - continued

Christopher Hitchens' own answer to the question is here, along with much other interesting stuff. In Hitch's words,

Well look, I mean, I think that prayer and holy water, and things like that are all fine. They don’t do any good, but they don’t necessarily do any harm. It’s touching to be thought of in that way. It makes up for those who tell me that I’ve got my just desserts... I have to say there’s some extremely nice people, including people known to you [interviewer Hugh Hewitt], have said that I’m in their prayers, and I can only say that I’m touched by the thought.

Yesterday I received my copy of his new memoirs, Hitch-22. The immediately striking thing is that he has chosen to be photographed smoking a cigarette for its cover. This was before the cancer diagnosis, and he does like to be considered a contrarian, but if he were superstitious I wonder whether he would now think of it as tempting fate. Hitchens is also known for his prodigious consumption of alcohol (I am surprised that the book cover does not portray him holding a glass of Scotch); and smoking and drinking are both significant risk factors for oesophageal cancer, especially in combination in large quantities.

If I were religious, I'd pray for him. As it is, I (like millions of other strangers) offer him my best wishes: I hope that he recovers fully and, meanwhile and afterwards, suppresses his contrarian urges sufficiently to obey his doctors when they ask him to stop poisoning his body in this way.

As for the material between the covers of his book: I have only read as far as the beginning of the third chapter (on his father). The "prologue with premonitions" is not his most memorable piece of writing, but that is only because his standards are so high. It is, however, sprinkled (as one would expect) with interesting anecdotes and thoughts. His portrait, in the next chapter, of his mother Yvonne -- her life, her death, his relationship with her, and his thoughts on her after she died -- is stunning and harrowing: if the book maintains that sort of intensity, it would be a life-altering experience for any reader, I would think. I have a large and growing pile of books that are only partially read, but despite the considerable bulk of this book, I will not be surprised if I finish it sooner than many other recent purchases.

Sunday, July 18, 2010

Reading comprehension in Open magazine

Today I read this article in Open magazine, on allegations that Sharad Pawar's daughter, Supriya Sule, is a citizen of Singapore and therefore should have her Indian citizenship revoked. The article unquestioningly quotes Mrunalini Kakade, who lost the election to Sule in 2009.

However, nowhere in the article is there evidence that she is a citizen of Singapore: the phrase used, consistently, is "Permanent Resident" which is a status for non-nationals, short of citizenship (Singapore Government web site, Wikipedia; links produced by a few seconds on google). What Open's rather breathless article says is

According to [Kakade's] petition, Supriya Sule holds 'Singapore citizenship'--Permanent Resident Identification Number S 69726251--in addition to her Indian one. This is against domestic rules that do not permit dual citizenship.

The giveaway, as Mrunalini Kakade tells Open, was Supriya Sule’s disclosure that she owns property in Singapore. Under the law of that country, only a permanent resident of Singapore is allowed to purchase property there...

"Besides, she is also the director of Laguna International Pvt Ltd. In this context, her nationality is shown as a 'Singapore Permanent Resident'... "

So, all the evidence that Kakade has supplied, at least as quoted by Open Magazine, suggests that Sule is a "permanent resident" of Singapore -- not a citizen -- just as thousands of Indian citizens are permanent residents of the United States. There is nothing in India's laws that prohibits citizens from permanent residency of another country.

What should we make of a news magazine that writes a 1300+ word article on this issue without addressing this point, or asking Kakade to clarify?

Friday, July 09, 2010


Cross-border terrorism is almost dead. Pakistan is engulfed in its own problems. So why does the Kashmir problem not die too?

Could it be because ordinary people do not like living in a police state? And, when they protest, they do not like being treated as terrorists and fired upon?

The local media are prevented from doing their jobs, and the "national" media (ignorant of Kashmiri, and broadcasting to those who are ignorant of Kashmiri) are free to lie. (Link via Shivam)

We shoot down unarmed protestors. Which incites more protest, and we shoot them down too. (Even unarmed motorcycles are not spared.) We ban the media. We quash civil liberties. And all this is "legalised" by the draconian Armed Forces Special Powers Act (which was originally framed for the north-east, and extended to Kashmir in 1990). Our "law" allows the army to fire on protestors, invade people's homes, search them, take people away without warrant, and be immune from prosecution for all this. That's the law that has ruled the north-east for over 50 years, and Jammu and Kashmir for 20.

Now, why do we call ourselves a democracy? Why do we pretend that we have a free press? And why do we expect the people of those states to be grateful for these things?

Tuesday, July 06, 2010

Should one pray for Hitch? And should he know?

The question is engaging the religious. Christopher Hitchens has been diagnosed with cancer. Given his well-known atheism, should a religious-minded well-wisher pray for him?

On the religious side, Rabbi David Wolpe, who has debated Hitchens frequently on religion, puts it very well (as quoted on Goldblog) in my opinion: "I would say it is appropriate and even mandatory to do what one can for another who is sick; and if you believe that praying helps, to pray. It is in any case an expression of one's deep hopes. So yes, I will pray for him, but I will not insult him by asking or implying that he should be grateful for my prayers."

I wish all religious leaders were so open-minded: too often, religious impositions are accompanied by the implication that one should be grateful for the favour, or the threat that one is condemned if one is not grateful.

A scientist on the Dish goes a bit further, asserting that one should not even inform Hitchens (let alone demand his gratitude) that one is praying for him: to do so would be "malicious". In support, he links to this randomized trial on the effect of prayer on patients who had undergone coronary artery bypass graft surgery. The study found that prayer had no effect on patients who did not know whether or not they were being prayed for; but patients who knew with certainty that they were being prayed for did significantly worse (they exhibited more complications within 30 days of the procedure).

So there you have it. Pray if you like, but don't tell.

(Actually, I'd be surprised if those results were replicable with other ailments: the only explanation I can think of is that patients who know they are being prayed for believe that their prognosis is particularly poor, and are therefore under more stress -- which is particularly relevant here, since they are heart patients. Specifically, patients were told, via messages in envelopes, either that they "may or may not be prayed for" or that they "will be prayed for". Perhaps the latter statement was truly frightening to many of the patients. I'm unconvinced that the study was ethical: at the minimum, the researchers could have chosen a different ailment, on which stress would not have such a direct and obvious effect.)