Comments on "E's flat, ah's flat too: How to distinguish fake coin tosses..." (Rahul Siddharthan) -- 13 comments

Matrixw (2016-05-13):

I did the gzip test. I randomly generated 10000 different 100-character strings and zipped them; the zipped lengths follow a normal distribution with mean = 43.7683 and std = 1.852. The first string's zipped length is 39 (pdf = 0.00783, and 1.66% of the test results) and the second's is 43 (pdf = 0.19765, and about 17.44% of the test results).

I also did the full test of P(X_n | X_{n-1}). The result, in the format [[N(t|t), N(h|t)], [N(t|h), N(h|h)]] (transition counts), is:

for string 1: [[38, 22], [22, 17]]
for string 2: [[14, 31], [31, 23]]

Further, I propose another way to test randomness. First, if we interpret the sequence as a sequence of 1-d points, we can measure the density over all possible 1-d points. For example, the sequence [0,0,1,1] has two distinct 1-d points: 0 occurs twice and 1 occurs twice, so the density is uniform over all possible 1-d points.

What if we instead interpret the sequence as higher-dimensional points? For example, we can treat [0,0,1,1] as two 2-d points, (0,0) and (1,1), and then the density is not uniform over all 2-d points [(0,0), (0,1), (1,0), (1,1)].

If a sequence is truly random, it should look uniform under every dimensional interpretation. So I ran the 2-d test on your sequences, in the format [P(0,0), P(0,1), P(1,0), P(1,1)]:

first sequence: [0.5, 0.16, 0.18, 0.16], std = 0.14
second sequence: [0.22, 0.3, 0.3, 0.18], std = 0.052

From these results, I think the second sequence is more likely to be "truly random".

Rahul Siddharthan (2014-03-21):

Anonymous: nice point, and I am wondering how you came across this blog post after so many years!

Anonymous (2014-03-21):

Another test of randomness, one I think humans are particularly likely to fail, is that the next result should be independent of the previous one. Humans will have a hard time making this true: P(X_n = H | X_{n-1} = H) will almost certainly not equal P(H). In your data set, sequence 2 repeats the previous result only 39 times out of 99, much less than the ideal 49.5/99; sequence 1, on the other hand, repeats the previous result 55 times out of 99, much closer to the ideal. A quick simulation suggests that only 456 of 10000 pseudorandom runs have a repeat count as far from ideal as 39, whereas 3090 of 10000 are as extreme as 55.

The implied p-value for sequence 1 being non-random is 0.31, whereas the implied p-value for sequence 2 is 0.046, just low enough to reject sequence 2's randomness at the standard 0.05 cutoff. A longer sequence would of course narrow the spread for truly random sequences, while probably not changing much for the human, so this method should do better and better as sequences get longer. It should also do better on more naive humans, who will tend even more strongly to avoid runs. It looks like 100 flips is a borderline case for sophisticated humans.
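For readers who want to reproduce Anonymous's repeat-count test, here is a rough Python sketch (not the commenter's actual code; the two sequences live in the original post, so only their reported repeat counts, 55 and 39, are plugged in):

```python
import random

def repeats(seq):
    """Number of flips that equal the previous flip."""
    return sum(a == b for a, b in zip(seq, seq[1:]))

def repeat_p_value(observed, n=100, trials=10000, seed=0):
    """Fraction of random n-flip sequences whose repeat count is at
    least as far from the ideal (n-1)/2 as the observed count."""
    rng = random.Random(seed)
    ideal = (n - 1) / 2
    dev = abs(observed - ideal)
    extreme = 0
    for _ in range(trials):
        s = [rng.randrange(2) for _ in range(n)]
        if abs(repeats(s) - ideal) >= dev:
            extreme += 1
    return extreme / trials

# The post's sequence 1 repeats 55 of 99 times, sequence 2 only 39.
print(repeat_p_value(55))  # roughly 0.3: consistent with randomness
print(repeat_p_value(39))  # about 0.04: randomness is suspect
```

The repeat count for a truly random 100-flip sequence is Binomial(99, 1/2), with mean 49.5 and standard deviation just under 5, which is why 39 is about two standard deviations out while 55 is only about one.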
p (2010-08-21):

Rahul: thanks. I was hoping I would be right, or I might have looked quite silly with that answer :)

For the algorithm: collect a large number of human-generated sequences and an equal number of computer-generated sequences in a database (the more the better). Then use Bayesian methods (which I am not yet very familiar with) to estimate the probability that a given input sequence is random.

Rahul Siddharthan (2010-08-21):

Prithwiraj: nice answer, and you're right.

The next question is: can one write a computer program to do the job? The program should take an input sequence and return the probability that it was generated by a human (or other non-random process), and its confidence should increase with the size of the input.
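A minimal sketch of the kind of program Rahul asks for, along the Bayesian lines p suggests. It compares just two models, and the "human" repeat probability of 0.4 is purely an illustrative assumption, not a value fitted to real human-generated data:

```python
def p_human(seq, p_repeat_human=0.4, prior_human=0.5):
    """Posterior probability that seq was written by a human, comparing
    a fair-coin model (each flip repeats the previous one with
    probability 0.5) against a toy 'human' model that repeats with
    probability p_repeat_human.  The 0.4 default is an illustrative
    guess, not an estimate from real data."""
    r = sum(a == b for a, b in zip(seq, seq[1:]))  # observed repeats
    n = len(seq) - 1                               # total transitions
    like_random = 0.5 ** n
    like_human = p_repeat_human ** r * (1 - p_repeat_human) ** (n - r)
    return (like_human * prior_human /
            (like_human * prior_human + like_random * (1 - prior_human)))

# Confidence grows with input size, as Rahul asks: a run-avoiding
# pattern looks more and more human as it gets longer.
print(p_human("HT" * 5))    # about 0.84
print(p_human("HT" * 50))   # very close to 1
```

With a database of real human sequences, as p proposes, the repeat probability (and other features) would be estimated from the data instead of assumed.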
p (2010-08-21):

Crude answer: sequence 1 has 60 H and 40 T and starts with T; sequence 2 has 54 H and 46 T and starts with H.

Very likely you did sequence 2 by hand, and subconsciously
1. tried to get close to 50% tails (in reality, a near 50-50 split is only to be expected in much larger samples);
2. began with H.

as (2010-08-19):

Thanks, I completely missed the distinction between HHHH... and THHH...

Rahul Siddharthan (2010-08-18, 11:15):

km - ps: about Deolalikar, I do find it disappointing that, after having obtained a huge amount of feedback from eminent people (including at least two Fields Medallists) on his earlier manuscript, he has chosen to restrict his newer manuscript to a "small number of researchers", and, at least on his web page (http://www.hpl.hp.com/personal/Vinay_Deolalikar/), fails to acknowledge the usefulness of any of the previous feedback.

If he is really on to something, I would think the open method would be best.

Rahul Siddharthan (2010-08-18, 11:09):

km: it's a bit beyond me, and others are doing a great job, in particular Richard Lipton (http://rjlipton.wordpress.com/). The current consensus (http://rjlipton.wordpress.com/2010/08/15/the-p%E2%89%A0np-proof-is-one-week-old/) seems to be that the proof is flawed, probably fatally so, and the remaining question is whether any interesting results can still be salvaged.

Anonymous: well, that article also talks specifically about heads, not "heads or tails". But the approach is similar to what I used, except that I build it up 5 at a time rather than 1 at a time, and get a formula rather than writing a computer program.

Kapil: I'll hold off commenting on whether you are right, but, as you know, a number like 62 or 48 is not very significant without knowing how much it varies. If, for example, you generated 1000 random 100-long H/T sequences, gzipped them all, calculated the mean and variance of the zipped sizes, and then found that one of my sequences was well within the expected range and the other well outside it, that would be pretty convincing...

as: no, that is why I am writing sequences like THHHHHHNNN: to avoid counting examples that were already counted as HHHHHHNNNN.

as (2010-08-18):

I just looked up Feller (Vol. 1, Sec. XIII.8), and he discusses the case of runs of consecutive H or T and writes down an explicit formula for the mean time before such a run occurs. I am confident a formula for the probability exists too, hidden in a mid-term question heap in the Princeton basement.

Thank you for putting up a solution. One question, though: I was wondering if you were over-counting a little. When looking from N-5 to N+5, how many times would you count solutions like HHHHHHHHHH (ten H's)? Maybe I missed something.

Kapil Paranjape (2010-08-18):

A fairly standard test for randomness is Maurer's test, which checks how compressible the sequence is. The problem with that test is that it takes too long on data sets that are large enough, and it is not significant on data sets that are small! Still, why not use it?

Anyway, I took your seq1 and seq2 and applied 'gzip -9' to them. Here is what I got:

  58 /tmp/seq1.gz
  62 /tmp/seq2.gz

I then applied 'lzma --best' to both:

  47 seq1.lzma
  48 seq2.lzma

This seems to indicate that the first sequence is less random than the second.

Anonymous (2010-08-18):

I begin to lose you in the third section, but how about this for a precise answer?
http://www.win.tue.nl/~iadan/blockq/rows.pdf

km (2010-08-17):

I didn't see you blog about Deolalikar's P ≠ NP proof? Any thoughts?