Shared posts

12 Mar 05:47

[异闻观止] Yuan Guiren Responds on "Textbooks with Western Values Must Not Enter the Classroom"

by 无可奉告

At 3:00 p.m. on March 10, the press center of the Fourth Session of the Twelfth National People's Congress held a press conference in the multi-function hall of the Media Center, inviting Minister of Education Yuan Guiren to answer questions from Chinese and foreign reporters on "education reform and development."

Wall Street Journal reporter: Minister, I'd like to ask: last year you said that textbooks promoting Western values are not suitable for the classroom. Could you explain what exactly "Western values" refers to? After all, Marxism itself seems to be a Western concept. And how will the Ministry of Education handle textbooks that contain Western values?

Yuan Guiren: Thank you for the question. As you know, Marx was not Chinese. That we take Marxism as our guiding ideology reflects the open spirit of the Communist Party of China. Our Party combined this scientific theory with China's actual conditions and established it as our guiding ideology, and once established, we hold to it unwaveringly.

The Marxism we speak of is a constantly innovating Marxism combined with China's national conditions; on this we are unwavering. The values we speak of are the values advocated by Marxism and organically combined with traditional Chinese culture. I think that is the surface meaning of your question. What you actually want to know is how the Chinese government, or the education authorities, advance students' moral character and ideological-political education — in other words, what we usually call moral education in schools.

I can put it this way: China has a fine tradition of valuing moral education, including the values you just mentioned. "Teach and cultivate people, put cultivation first; among moral, intellectual, physical, and aesthetic education, put morality first" — this is the purpose and foundation of educational work in Chinese schools and among Chinese teachers. We have always put moral education first. As for how we strengthen moral education, including the ideological-political and moral education you mentioned, there are three main aspects:

In content, we emphasize upholding and strengthening ideals and convictions, strengthening education in the core values, in fine traditional culture, and in China's revolutionary tradition. This has been our consistent policy: to make our students qualified builders and reliable successors of socialism with Chinese characteristics. That is the purpose and direction of our schools.

In method, we emphasize integration and continuity. Integration means weaving our guiding ideology, principles, and purpose into courses of every kind; continuity means carrying them through teaching activities from universities down to primary schools and kindergartens. We also place special emphasis on practice and experience, on volunteer service and labor education, so that students absorb our values, our fine traditional culture, and our revolutionary culture through the combination of theory and practice.

In function, we particularly stress ideological-political and moral character education, and we stress that school education, family education, and social education must advance together. We therefore believe that family conduct, school ethos, government conduct, professional conduct, and the conduct of the Communist Party as the ruling party all exert critical influence on young students. We believe that teachers setting an example, parents leading by their own conduct, and the model behavior of civil servants and public figures — including the journalists present here, who appear in public often and whom many people recognize — are extremely important. So we stress that in Chinese education, moral education must be taken very seriously and continuously improved.

Finally, I want to say that China's younger generation is a generation with ideals, a sense of responsibility, and ambition, a generation that can and should accomplish much. They will make their due contribution to China's prosperity and strength and to building a community of shared destiny for the world. Thank you.



30 Jul 16:58

Wonderfully Creative Cat-Embroidery Shirts

by 皮卡啾

These exquisitely embroidered kitten shirts are the work of Hiroko Kubota, a mother of five from Nara, Japan. Cats in all sorts of poses and expressions climb over the shirt pockets, as if a kitten were tucked inside, giving an otherwise plain white shirt a playful, lively touch.

Hiroko said in an interview that she has always enjoyed handicrafts. Her son's dissatisfaction with store-bought clothes gave her the idea of making them herself, and so she began embroidering all kinds of cats onto shirts.


Everything from the cat's coloring to the glint in its pupils takes enormous patience, stitch by stitch. What she did not expect was the strong reaction the photos of her embroidery drew online. After she posted pictures of the pieces, the shirts, priced at $200-300, sold out almost instantly.


Her collected works have now been published in a book full of photos of the cat shirts, a feast for the eyes for anyone who loves her work.

(via)

27 Feb 20:53

[Surviving Spring Festival] Why Does Your Mom Pressure You to Get Married?

by 科学家种太阳

As the saying goes, misfortunes never come singly: Valentine's Day runs straight into the Spring Festival, and single people urgently need to use the hands they have already "chopped off" shopping online to order a few spare pairs of knees. Before submitting to the emotional marriage-pressure offensive from parents and relatives, why not bring out the scientific weapon of reason and analyze what lies behind the pressure? At the very least you'll... well... know what hit you.

Among the many psychological theories that analyze and describe individual behavior, the Theory of Planned Behavior (TPB) is one of the more influential and relatively complete models. Researchers proposed that behavior is driven mainly by internal and external motivation — the former expressed as personal attitude, the latter as subjective social norms — and that the two together determine a person's final behavioral decision. This was the basis of the Theory of Reasoned Action (TRA).

Using this framework, we can try to work out, from the three psychological angles of behavioral attitude, subjective norms, and perceived behavioral control, why your mom pressures you to get married.
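To make the TPB structure just described a bit more concrete, here is a toy sketch in code; the weights and example inputs are purely illustrative assumptions, not parameters estimated from any study.

    # Toy rendering of the TPB idea: attitude, subjective norm, and perceived
    # behavioral control jointly drive the intention to act.
    def behavioral_intention(attitude, subjective_norm, perceived_control,
                             w_att=0.4, w_norm=0.4, w_pbc=0.2):
        """Each input is scored in [0, 1]; returns a 0-1 'intention' score."""
        return w_att * attitude + w_norm * subjective_norm + w_pbc * perceived_control

    # Parents who view marriage positively (attitude), feel everyone around them
    # expects it (subjective norm), and believe they can make you comply (control):
    print(behavioral_intention(attitude=0.9, subjective_norm=0.8, perceived_control=0.7))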

If the article feels too long, just glance back at this nine-square grid whenever you like; it is extremely practical, solves many of life's problems, and the folks at Guokr have personally verified that it works.

I. Behavioral attitude

In terms of internal motivation, parents' attitudes toward marriage fall, from positive to negative, into three cases: optimistic about marriage, noncommittal, and negative.

Optimistic about marriage: "When I was your age I was long since married, and things turned out fine."

These parents sincerely believe marriage is a good thing and want you to be just as happy. But one person's meat is another's poison, so why must parents impose their preferences on you? You love Guokr from the bottom of your heart, yet you don't force your parents' heads toward the screen to read this article.

The culprit is the parent-child reference effect, an extension of the self-reference effect. In memory tasks involving orthographic and semantic processing, words associated with the self are remembered better — that is the self-reference effect at work. In fact, words associated with people close to us are also remembered better. Researchers believe this is because, by forming close relationships, we incorporate those others into our representation of the self.

The parent-child reference effect refers to parents and children each including the other in their own self-concept. Some researchers in China argue that for Chinese people, who define themselves more through social relationships than through self-positioning, the parent-child relationship is among the closest of all social ties, and so its influence on self-awareness and self-concept is enormous. It is only natural, then, for parents to impose their preferences on their children.

Noncommittal: "I've been married all these years, and I got through it, didn't I?"

These parents don't find marriage especially wonderful; it is more a matter of habit. Having grown used to the idea that everyone should be in a marriage, they naturally find your unmarried state a bit odd. Habit becomes second nature, and the more natural, the better.

In the 1960s, psychologist Robert Zajonc ran a series of laboratory experiments showing that simply exposing subjects repeatedly to an unfamiliar stimulus made them rate it more highly than similar stimuli they had never seen. This is the mere exposure effect: the more familiar something is, the more we like it. The reason is simple: the brain has to work harder to process something unfamiliar, which is tiring, whereas processing the relatively familiar is much easier and feels pleasant. So when parents see something that doesn't match their life experience, they conclude that something may be wrong with you, and urge you to hurry up and get married.

Negative: "My life hasn't gone well; now it's all up to you."

These parents' own marriages may be deeply unhappy; they may regard marriage as outright suffering. Yet they still watch you jump in, or even shut their eyes and push you into the fire pit. Are you really their own child? What is going on here? In short, two things:

1. A selfish compensation mechanism

Compensation, first described by Adler, is a psychological adaptation mechanism: to make up for a felt inferiority in one area, a person strives for success in another. The compensation plays out not only across different aspects of one person's life but also extends to others who are psychologically close — parents who never attained social status or achievement of their own, for example, pin ever greater hopes on their children's success. Likewise, the unhappier some parents' marriages are, the more fervently they hope their children will quickly find a happy marriage, compensating for their own emotional regrets.

2. Plain old envy

We have probably all had the experience: when we fail to get something, we hope others won't get it either, to preserve our psychological balance. If someone close to us does get it, we suffer all the more. Suppose your crush ignores you on Valentine's Day or even takes up with someone new — painful enough already — and the new flame turns out to be your best friend. That is probably unbearable, and you may well cut the "traitor" off for good. Why?

Envy. Parents whose marriages are unhappy yet who pressure their children to marry very likely carry some measure of it. In the repressive era the older generation lived through, even free courtship was a luxury, let alone choosing to stay single. Having settled into family and career and passed forty, that generation looks back on a road of unfair times and hard fates; the resentment, humiliation, pain, and indignation, suppressed in the name of political correctness and the nobler side of human nature, cannot be released and instead erupt like a volcano through whatever outlet looks legitimate, scorching or even consuming the people they love most. Even if your single life is going well, some parents want you to come live with them inside this besieged city of struggle.

Getting through the New Year is not easy! Image source: epochtimes.com

II. Subjective norms

In terms of external motivation, parents' views of social norms again fall, from positive to negative, into three cases: positively oriented, to be obeyed, and compelled to measure up.

Positively oriented: "Look, other people's kids are all married. Isn't that nice?"

These parents feel that being like everyone else is what's normal, and that what's normal is what's happy. In fact, normal is not necessarily happy, and happiness does not require being like everyone else — you love reading Guokr articles and can't see eye to eye with the relatives who only ever share rumors on social media, and agreeing to disagree works fine. So where does the parents' illusion come from?

From confirmation bias, a form of decision bias first described by the British psychologist Peter Wason. People consciously or unconsciously seek out evidence that supports their views, ignore opinions that do not, interpret ambiguous information in the direction favorable to their own position, and even spend time and resources disparaging opposing views. Based on their own life experience, your parents' generation starts from the assumption that only marriage brings happiness, then remembers only the happy married people and the unhappy singles, reinforcing the view until they end up pressuring you to marry.

To be obeyed: "Going with the crowd can't be wrong. Everyone else is married, so you should marry too."

These parents feel that whatever exists is reasonable: since everyone gets married, you should too. But no two leaves in the world are identical, and other people's choices may not suit you. Why do parents want you to line up with the majority?

Because of social identity, which shows up in group behavior as in-group favoritism and out-group derogation. The social psychologist Henri Tajfel defined social identity as the individual's awareness, through self-perception, of belonging to a particular social group, together with the value and emotional significance that membership carries. The main purpose of seeking social identity is to reduce subjective uncertainty, obtain positive self-evaluation, and raise self-esteem. Identifying with traditional values like "a grown son should marry, a grown daughter should wed," parents regard the married as "normal people" and those who never marry as "outliers." Naturally they don't want you to become one of the outliers, so they hope you will marry soon.

Huh? What was that, sir? Image source: youtube.com

Compelled to measure up: "Everyone else is married. Don't embarrass your parents."

These parents don't actually care whether marriage is good or bad; they simply feel that with everyone else's children married and you still single, they can't hold their heads up. But you are their own flesh and blood — not a lottery prize or a free gift with a phone top-up — so why do other people's opinions matter more than what you think?

Because of social comparison, first proposed by the social psychologist Leon Festinger. He argued that people have an inner drive to compare their opinions and abilities with others in order to evaluate them: lateral comparison with similar others to evaluate the self more accurately, downward comparison with those worse off to enhance the self, and upward comparison with those better off to improve the self. Because human cognition gives more weight to negative information, however, social comparison tends to lower self-evaluation and heighten the sense of threat. As the saying goes, comparing yourself with others will be the death of you: it is because you, unworthy offspring, can't even manage to get married that your parents feel they cannot hold up their heads among their peers. The trouble is, other people are over there reading Guokr — couldn't you compare yourselves with them on something good like that?

III. Perceived behavioral control

From internal and external motivation, you now understand why parents want you to marry soon. But wanting is one thing; many parents still respect their children's own choices and never resort to verbal pressure or outright interference. So why do some parents ultimately play the marriage-pressure card? By the immediate cause of the pressuring behavior, there are three cases: social responsibility, self-evaluation, and authority and control.

Social responsibility: "I only push you to marry for your own good."

These parents feel the pressure is for your benefit: it is a parent's duty to raise you and look after you until you have a family and career of your own. But children are independent people with their own ideas; on a lifelong matter like marriage, why can't parents listen to what their children think?

Because of a sense of social responsibility. Researchers hold that in paired roles, the closer the kinship and the more familiar and intimate the two sides are, the more responsibility they bear for each other. However tall or old you actually are, deep down in your parents' hearts you are still the infant who could only open its mouth to nurse and lift its bottom to be changed. The world is still as full of dangers as it was then: the table corner that once bruised you has become life's setbacks, the stone that tripped you has become trouble at work. You are so frail, so helpless — how could your parents bear to let go and just watch you strut and "fly"? On the great question of marriage especially, they feel a considerable obligation to decide for you.

Self-evaluation: "Listen to me and you can't go wrong."

You have grown into a relatively independent person; why do your parents still want to control your every move? This has to do with self-evaluation.

Self-evaluation is a person's judgment and appraisal of their own thoughts, desires, behavior, and personality, and one of the main components of self-awareness. We come to know and evaluate ourselves through feedback from the outside world, which helps us maintain internal consistency, shapes how we interpret experience, and sets our expectations. First, as they age, your parents' generation has gradually been pushed from the core to the margins of the workplace, losing its voice rapidly amid the information explosion and a swelling labor force. Second, constrained by the social environment of their era, that generation generally has few hobbies with which to fill an increasingly empty daily life. Third, parents long ago grew used to a young child who existed as a kind of appendage or "pet," obedient and compliant in everything. As the grown child tries to escape parental control and seek independence, parents who have always managed everything cannot face the inner emptiness of the "empty nest." Under these combined blows, their self-evaluation inevitably drops, and they are anxious to raise it. If they can no longer ride the wave of the times or sound the keynote of reform, they can at least discipline their children. So if not you, whom would they pressure to marry? If not marriage, what would they pressure you about? And if you won't marry, what exactly are you up to?

So you won't marry. What exactly are you planning? Image source: enet.com.cn

Authority and control: "Are you going to listen to your mother or not?!"

Compared with whether you actually marry, what these parents really care about is whether you will do as they say and marry at once. Compared with your not marrying, what angers them more is that you dare to disobey and not marry at once. The two may look like the same thing, but they are not at all: the former is puzzlement over a fact, the latter a challenge to authority.

Questioning authority is not actually as encouraged as people like to claim, especially against a traditional high-power-distance cultural background like China's. Power distance, proposed by the Dutch cross-cultural and management scholar Geert Hofstede, describes the degree to which the less powerful members of a country's institutions and organizations (families, schools, communities, workplaces) expect and accept that power is distributed unequally. Hofstede uses the Power Distance Index (PDI), scored between 0 and 100, to describe differences between countries. The higher the power distance, the more individuals accept social inequalities of power, even seeing them as a good thing, and the lower social mobility tends to be; this pattern is common in collectivist cultures across Asia, Africa, Eastern Europe, the Middle East, and Latin America, and China's PDI score is 80. Conversely, the lower the power distance, the less tolerance society has for power inequality and the more attention is paid to whether individuals have relatively equal opportunities; this pattern is common in individualist countries such as the United States, Canada, the United Kingdom, Australia, and New Zealand, and the US PDI score is 40.

Authority exists only where there is obedience. When children, whom society expects to obey, actually begin to resist their parents, parental authority grounded in high power distance is directly challenged; the parents feel their position threatened and find it necessary to defend it. At that point whether you marry no longer matters; what matters is who is in charge in this family. Will you marry or not? Will you listen to me or not? Do you still acknowledge me as your mother? Do you even count as a person? Are you trying to be the death of me? Every thrust draws blood; every step closes in. War is peace, freedom is slavery, ignorance is strength — and your mom is watching you. One more word and she will explode. In fact, this is much what happens every time you take a Guokr rumor-debunking article and try to explain to your parents where they went wrong.

Summary

We have now used one analytic framework to take apart the question "why does your mom pressure you to get married." The same taxonomy can be used to analyze all sorts of behavior — for instance, why your mom makes you wear long underwear.

So, did your mom pressure you to get married this Spring Festival? (Editor: 球藻怪)

References:

  1. Adler, A. (1917). Study of organ inferiority and its psychical compensation: A contribution to clinical medicine. In E. Jelliffe & W. A. White (Eds.). Nervous and mental disease monograph series (Vol. 24). New York: Nervous and Mental Disease Publishing Company.
  2. Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In: Kuhl J, Beckman J, (Eds.), Action control: from cognition to behavior (pp. 11-39). Heidelberg, Germany: Springer.
  3. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2): 179-211.
  4. Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of other in the self scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596–612.
  5. Aron, A., McLaughlin-Volpe, T., Mashek, D., Lewandowski, G., Wright, S. C., & Aron, E. N. (2004). Including others in the self. European Review of Social Psychology, 15(1), 101–132.
  6. Burns, R. (1982). Self-concept Development and Education. Dorchester, UK: Henry Ling Ltd.
  7. 丁冬苗. (2013). 亲子参照效应及其影响因素. 硕士学位论文, 浙江师范大学.
  8. Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2): 117-140.
  9. Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
  10. Gilbert, D. T., Giesler, R. B., Morris, K. A. (1995). When comparisons arise. Journal of Personality and Social Psychology, 69(2): 227.
  11. Hofstede, G. (2005). Cultures and Organizations: Software of the Mind. London: McGraw-Hill.
  12. 孔繁昌, 张妍, 陈红. (2010). 自我-他人表征: 共享表征还是特异表征. 心理科学进展, 18(8), 1263-1268.
  13. Morse, S., Gergen, K. J. (1970). Social comparison, self consistency, and the concept of self. Journal of Personality and Social Psychology, 16(1): 118.
  14. Mussweiler, T., Ruter, K., Epstude, K. (2004). The man who wasn’t there: Subliminal social comparison standards influence self evaluation. Journal of Experimental Social Psychology, 40(5): 689-696.
  15. Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977). Self-reference and the encoding of personal information. Journal of Personality and Social Psychology, 35(9), 677–688.
  16. Suls, J., Martin, R., & Wheeler, L. (2002). Social comparison: Why, with whom, and with what effect? Current Directions in Psychological Science, 11(5): 159-163.
  17. Tajfel, H. & Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & Austin (Eds.). Psychology of intergroup relations (pp. 7–24). Chicago: Nelson-Hall.
  18. Titchener, E. B. (1910). Textbook of psychology. New York: Macmillan.
  19. Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129-140.
  20. Wason, P. C. (1966). Reasoning. In New Horizons in Psychology. Harmondsworth, UK: Penguin.
  21. Yan, Y. X. (1996). The Culture of Guanxi in a North China Village. The China Journal, 35, 1-25.
  22. 杨国枢, 余安邦. (1993). 中国人的心理与行为——理念及方法篇 (pp. 87-142). 台北: 桂冠图书公司.
  23. 杨帅, 黄希庭, 傅于玲. (2012). 内侧前额叶皮质——“自我”的神经基础. 心理科学进展, 20(6), 853–862.
  24. 杨帅, 黄希庭, 陈有国等. (2014). 人际距离调节自我-他人的神经表征: 来自oFRN的证据. 心理学报, 46(5), 666–676.
  25. 尹娣. (2012). 亲子依恋对亲子参照效应的影响. 硕士学位论文, 浙江师范大学.
  26. Zajonc, R. B. (1968). Attitudinal Effects of Mere Exposure. Journal of Personality and Social Psychology, 9, 1-27.

Title image: subaonet.com

Bratty kids, relatives pushing marriage, deafening firecrackers, red underwear for your zodiac year, dubious health supplements bought by relatives — how do you get through this Spring Festival? Come to the Guokr forum and brainstorm with everyone!

25 Feb 21:43

Cracking the Confusion: Encryption Decision Tree

This is the final post in this series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post, and find the other posts under “related posts” in full article view.

Choosing the Best Option

There is no way to fully cover all the myriad factors in picking a specific encryption option in a (relatively) short paper like this, so we compiled a visual decision tree to at least get you into the right bucket.

Here are a few notes on the decision tree.

  • This isn’t exhaustive but should get you looking at the right set of technologies.
  • In all cases you will want secure external key management.
  • In general, for discrete data you want to encrypt as high in the stack as possible. When you don’t need as much separation of duties, encrypting lower may be easier and more cost effective.
  • For both database and cloud encryption, in a few cases we recommend you encrypt in the application instead.
  • When we list multiple options the order of preference is top to bottom.
  • As you use this tree keep the Three Laws in mind, since they help guide the security value of your decision.

Encryption Decision Tree
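The decision-tree graphic itself does not survive in this text version, so here is a minimal sketch, in code, of the kind of branching the notes above describe; the category names and orderings are rough approximations for illustration, not Securosis' actual tree.

    # Rough, illustrative approximation of top-level branching in an encryption
    # decision tree. Returns candidate options ordered most-preferred first; pair
    # any choice with secure external key management.
    def suggest_encryption_layer(where, needs_separation_of_duties, discrete_fields):
        if where == "cloud":
            # In a few cloud cases the guidance is to encrypt in the application instead.
            return ["application-layer encryption", "provider/volume encryption"]
        if where == "database":
            if discrete_fields and needs_separation_of_duties:
                return ["application-layer encryption", "database field/column encryption"]
            return ["transparent database encryption (TDE)", "file/volume encryption"]
        if where == "file_server":
            return ["file encryption", "volume encryption"]
        # Lower in the stack is easier and cheaper when separation of duties matters less.
        return ["volume encryption"]

    print(suggest_encryption_layer("database", needs_separation_of_duties=True,
                                   discrete_fields=True))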

Once you understand how encryption systems work, the different layers where you can encrypt, and how they combine to improve security (or not), it’s usually relatively easy to pick the right approach.

The hard part is to then architect and implement the encryption technology and integrate it into your data center, application, or cloud service. That’s where our other encryption research can be valuable, and the following reports should help:

- Rich
14 Feb 23:44

Astonishing Portraits! Owls Up Close!

by 勇敢的阿黛拉女王陛下


Photographer Brad Wilson has captured close-up portraits of owls of different species. Each bird is set against a pure black background that brings out the owls' brilliant colors, and the results are strikingly unlike ordinary photographs: the fine feathers, the short beak, the bright, eager eyes looking straight into the camera.

Taking these photos was anything but simple. Wilson would spend hours a day with a single owl, and still the birds remained unmoved; getting them to face the camera obediently, even for an instant, was something they refused to do. "It's very hard to get an animal to look into the lens the way a person would," Wilson said.

Wilson's aim was to make the owls look solemn and noble, but some had injured wings or were heavily dependent on humans. So he kept the people outside the frame, letting each owl appear alone in the photograph, dignified and powerful.


via

15 Jul 15:48

Risks of Not Understanding a One-Way Function

by Bruce Schneier

New York City officials anonymized license plate data by hashing the individual plate numbers with MD5. (I know, they shouldn't have used MD5, but ignore that for a moment.) Because they didn't attach long random strings to the plate numbers -- i.e., salt -- it was trivially easy to hash all valid license plate numbers and deanonymize all the data.
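To see how trivial the attack is, here is a minimal sketch in Python; the plate format below is a simplifying assumption, not New York's actual scheme.

    # Unsalted MD5 over a small keyspace: just enumerate every possible plate and
    # compare hashes. The AAA1234 format is an assumption for illustration.
    import hashlib
    from itertools import product
    from string import ascii_uppercase, digits

    def md5_hex(s):
        return hashlib.md5(s.encode()).hexdigest()

    def deanonymize(target_hash):
        # ~1.8e8 candidates: minutes of CPU time for an offline attacker.
        for letters in product(ascii_uppercase, repeat=3):
            for nums in product(digits, repeat=4):
                plate = "".join(letters) + "".join(nums)
                if md5_hex(plate) == target_hash:
                    return plate
        return None

    print(deanonymize(md5_hex("ABC1234")))   # recovers "ABC1234"

Note that even a public per-record salt only multiplies the attacker's work by the number of records when the keyspace is this small; a keyed construction (such as an HMAC with a secret key) or purely random identifiers would be the more robust fix.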

Of course, this technique is not news.

ArsTechnica article. Hacker News thread.

15 Jul 15:40

This Common Home Appliance Can Compromise Your Entire Security

by Bruce Schneier

LIFX is a smart light bulb that can be controlled with your smart phone via your home's Wi-Fi network. Turns out that anyone within range can obtain the Wi-Fi password from the light bulb. It's a problem with the communications protocol.

16 Jun 21:38

The NSA is Not Made of Magic

by Bruce Schneier

I am regularly asked what is the most surprising thing about the Snowden NSA documents. It's this: the NSA is not made of magic. Its tools are no different from what we have in our world, it's just better-funded. X-KEYSCORE is Bro plus memory. FOXACID is Metasploit with a budget. QUANTUM is AirPwn with a seriously privileged position on the backbone. The NSA breaks crypto not with super-secret cryptanalysis, but by using standard hacking tricks such as exploiting weak implementations and default keys. Its TAO implants are straightforward enhancements of attack tools developed by researchers, academics, and hackers; here's a computer the size of a grain of rice, if you want to make your own such tools. The NSA's collection and analysis tools are basically what you'd expect if you thought about it for a while.

That, fundamentally, is surprising. If you gave a super-secret Internet exploitation organization $10 billion annually, you'd expect some magic. And my guess is that there is some, around the edges, that has not become public yet. But that we haven't seen any yet is cause for optimism.

15 May 19:41

What happens when an unstoppable PR force hits an NP-hard problem? The answer’s getting clearer

by Scott

Update (Jan. 23): Daniel Lidar, one of the authors of the “Defining and detecting…” paper, was kind enough to email me his reactions to this post.  While he thought the post was generally a “very nice summary” of their paper, he pointed out one important oversight in my discussion.  Ironically, this oversight arose from my desire to bend over backwards to be generous to D-Wave!  Specifically, I claimed that there were maybe ~10% of randomly-chosen 512-qubit problem instances on which the D-Wave Two slightly outperformed the simulated annealing solver (compared to ~75% where simulated annealing outperformed the D-Wave Two), while also listing several reasons (such as the minimum annealing time, and the lack of any characterization of the “good” instances) why that “speedup” is likely to be entirely an artifact.  I obtained the ~10% and ~75% figures by eyeballing Figure 7 in the paper, and looking at which quantiles were just above and just below the 100 line when N=512.

However, I neglected to mention that even the slight “speedup” on ~10% of instances, only appears when one looks at the “quantiles of ratio”: in other words, when one plots the probability distribution of [Simulated annealing time / D-Wave time] over all instances, and then looks at (say) the ~10% of the distribution that’s best for the D-Wave machine.  The slight speedup disappears when one looks at the “ratio of quantiles”: that is, when one (say) divides the amount of time that simulated annealing needs to solve its best 10% of instances, by the amount of time that the D-Wave machine needs to solve its best 10%.  And Rønnow et al. give arguments in their paper that ratio of quantiles is probably the more relevant performance comparison than quantiles of ratio.  (Incidentally, the slight speedup on a few instances also only appears for certain values of the parameter r, which controls how many possible settings there are for each coupling.  Apparently it appears for r=1, but disappears for r=3 and r=7—thereby heightening one’s suspicion that we’re dealing with an artifact of the minimum annealing time or something like that, rather than a genuine speedup.)
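For readers who want the two summaries side by side, here is a toy numerical illustration with made-up numbers (not the paper's data):

    # Toy illustration of "quantiles of ratio" vs. "ratio of quantiles".
    import numpy as np
    rng = np.random.default_rng(0)

    # Hypothetical per-instance solve times for two solvers on the same instances;
    # the second solver has higher variance.
    sa_time = rng.lognormal(mean=0.0, sigma=1.0, size=1000)   # stand-in for simulated annealing
    dw_time = rng.lognormal(mean=0.5, sigma=1.5, size=1000)   # stand-in for the D-Wave machine

    # Quantiles of ratio: the tail of per-instance ratios most favorable to the second solver.
    ratio = sa_time / dw_time
    print("quantile of ratio (90th percentile):", np.quantile(ratio, 0.90))

    # Ratio of quantiles: time each solver needs for its own easiest ~10% of instances.
    print("ratio of quantiles (10th percentiles):",
          np.quantile(sa_time, 0.10) / np.quantile(dw_time, 0.10))

A high-variance solver can look impressive in the favorable tail of the per-instance ratio distribution even when it is slower on the typical instance, which is why the two summaries can tell quite different stories.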

There’s one other important point in the paper that I didn’t mention: namely, all the ratios of simulated annealing time to D-Wave time are normalized by 512/N, where N is the number of spins in the instance being tested.  In this way, one eliminates the advantages of the D-Wave machine that come purely from its parallelism (which has nothing whatsoever to do with “quantumness,” and which could easily skew things in D-Wave’s favor if not controlled for), while still not penalizing the D-Wave machine in absolute terms.


A few days ago, a group of nine authors (Troels Rønnow, Zhihui Wang, Joshua Job, Sergio Boixo, Sergei Isakov, David Wecker, John Martinis, Daniel Lidar, and Matthias Troyer) released their long-awaited arXiv preprint Defining and detecting quantum speedup, which contains the most thorough performance analysis of the D-Wave devices to date, and which seems to me to set a new standard of care for any future analyses along these lines.  Notable aspects of the paper: it uses data from the 512-qubit machine (a previous comparison had been dismissed by D-Wave’s supporters because it studied the 128-qubit model only); it concentrates explicitly from the beginning on comparisons of scaling behavior between the D-Wave devices and comparable classical algorithms, rather than getting “sidetracked” by other issues; and it includes authors from both USC and Google’s Quantum AI Lab, two places that have made large investments in D-Wave’s machines and have every reason to want to see them succeed.

Let me quote the abstract in full:

The development of small-scale digital and analog quantum devices raises the question of how to fairly assess and compare the computational power of classical and quantum devices, and of how to detect quantum speedup. Here we show how to define and measure quantum speedup in various scenarios, and how to avoid pitfalls that might mask or fake quantum speedup. We illustrate our discussion with data from a randomized benchmark test on a D-Wave Two device with up to 503 qubits. Comparing the performance of the device on random spin glass instances with limited precision to simulated classical and quantum annealers, we find no evidence of quantum speedup when the entire data set is considered, and obtain inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results for one particular benchmark do not rule out the possibility of speedup for other classes of problems and illustrate that quantum speedup is elusive and can depend on the question posed.

Since the paper is exceedingly well-written, and since I have maybe an hour before I’m called back to baby duty, my inclination is simply to ask people to RTFP rather than writing yet another long blog post.  But maybe there are four points worth calling attention to:

  1. The paper finds, empirically, that the time needed to solve random size-N instances of the quadratic binary optimization (QUBO) problem on D-Wave’s Chimera constraint graph seems to scale like exp(c√N) for some constant c—and that this is true regardless of whether one attacks the problem using the D-Wave Two, quantum Monte Carlo (i.e., a classical algorithm that tries to mimic the native physics of the machine), or an optimized classical simulated annealing code.  Notably, exp(c√N) is just what one would have predicted from theoretical arguments based on treewidth; and the constant c doesn’t appear to be better for the D-Wave Two than for simulated annealing.  (A toy sketch of a simulated-annealing solver appears after this list.)
  2. The last sentence of the abstract (“Our results … do not rule out the possibility of speedup for other classes of problems”) is, of course, the reed on which D-Wave’s supporters will now have to hang their hopes.  But note that it’s unclear what experimental results could ever “rule out the possibility of speedup for other classes of problems.”  (No matter how many wrong predictions a psychic has made, the possibility remains that she’d be flawless at predicting the results of Croatian ping-pong tournaments…)  Furthermore, like with previous experiments, the instances tested all involved finding ground states for random coupling configurations of the D-Wave machine’s own architecture.  In other words, this was a set of instances where one might have thought, a priori, that the D-Wave machine would have an immense home-field advantage.  Thus, one really needs to look more closely, to see whether there’s any positive evidence for an asymptotic speedup by the D-Wave machine.
  3. Here, for D-Wave supporters, the biggest crumb the paper throws is that, if one considers only the ~10% of instances on which the D-Wave machine does best, then the machine does do slightly better on those instances than simulated annealing does.  (Conversely, simulated annealing does better than the D-Wave machine on the ~75% of instances on which it does best.)  Unfortunately, no one seems to know how to characterize the instances on which the D-Wave machine will do best: one just has to try it and see what happens!  And of course, it’s extremely rare that two heuristic algorithms will succeed or fail on exactly the same set of instances: it’s much more likely that their performances will be correlated, but imperfectly.  So it’s unclear, at least to me, whether this finding represents anything other than the “noise” that would inevitably occur even if one classical algorithm were pitted against another one.
  4. As the paper points out, there’s also a systematic effect that biases results in the D-Wave Two’s favor, if one isn’t careful.  Namely, the D-Wave Two has a minimum annealing time of 20 microseconds, which is often greater than the optimum annealing time, particularly for small instance sizes.  The effect of that is artificially to increase the D-Wave Two’s running time for small instances, and thereby make its scaling behavior look better than it really is.  The authors say they don’t know whether even the D-Wave Two’s apparent advantage for its “top 10% of instances” will persist after this effect is fully accounted for.
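As a point of reference for item 1, here is a minimal, unoptimized simulated-annealing sketch for a random Ising-type instance; it uses dense couplings rather than the Chimera graph and is nothing like the optimized benchmark code used in the paper.

    # Metropolis-style simulated annealing on a random spin-glass instance.
    import math, random

    def random_instance(n, r=1, seed=0):
        rnd = random.Random(seed)
        # couplings J_ij drawn from {-r, ..., -1, 1, ..., r}
        choices = [c for c in range(-r, r + 1) if c != 0]
        return {(i, j): rnd.choice(choices) for i in range(n) for j in range(i + 1, n)}

    def energy(spins, J):
        return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

    def simulated_annealing(J, n, sweeps=2000, T0=3.0, T1=0.05, seed=1):
        rnd = random.Random(seed)
        spins = [rnd.choice([-1, 1]) for _ in range(n)]
        best = energy(spins, J)
        for s in range(sweeps):
            T = T0 * (T1 / T0) ** (s / (sweeps - 1))        # geometric cooling schedule
            for i in range(n):
                # energy change from flipping spin i
                dE = -2 * spins[i] * sum(J.get((min(i, j), max(i, j)), 0) * spins[j]
                                         for j in range(n) if j != i)
                if dE <= 0 or rnd.random() < math.exp(-dE / T):
                    spins[i] = -spins[i]
            best = min(best, energy(spins, J))
        return best

    J = random_instance(n=32, r=1)
    print("best energy found:", simulated_annealing(J, n=32))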

Those seeking something less technical might want to check out an excellent recent article in Inc. by Will Bourne, entitled “D-Wave’s dream machine” (“D-Wave thinks it has built the first commercial quantum computer.  Mother Nature has other ideas”).  Wisely, Bourne chose not to mention me at all in this piece.  Instead, he gradually builds a skeptical case almost entirely on quotes from people like Seth Lloyd and Daniel Lidar, who one might have thought would be more open to D-Wave’s claims.  Bourne’s piece illustrates that it is possible for the mainstream press to get the D-Wave story pretty much right, and that you don’t even need a physics background to do so: all you need is a willingness to commit journalism.

Oh.  I’d be remiss not to mention that, in the few days between the appearance of this paper and my having a chance to write this post, two other preprints of likely interest to the Shtetl-Optimized commentariat showed up on quant-ph.  The first, by a large list of authors mostly from D-Wave, is called Entanglement in a quantum annealing processor.  This paper presents evidence for a point that many skeptics (including me) had been willing to grant for some time: namely, that the states generated by the D-Wave machines contain some nonzero amount of entanglement.  (Note that, because of a technical property called “stoquasticity,” such entanglement is entirely compatible with the machines continuing to be efficiently simulable on a classical computer using Quantum Monte Carlo.)  While it doesn’t address the performance question at all, this paper seems like a perfectly fine piece of science.

From the opposite side of the (eigen)spectrum comes the latest preprint by QC skeptic Michel Dyakonov, entitled Prospects for quantum computing: Extremely doubtful.  Ironically, Dyakonov and D-Wave seem to agree completely about the irrelevance of fault-tolerance and other insights from quantum computing theory.  It’s just that D-Wave thinks QC can work even without the theoretical insights, whereas Dyakonov thinks that QC can’t work even with the insights.  Unless I missed it, there’s no new scientific content in Dyakonov’s article.  It’s basically a summary of some simple facts about QC and quantum fault-tolerance, accompanied by sneering asides about how complicated and implausible it all sounds, and how detached from reality the theorists are.

And as for the obvious comparisons to previous “complicated and implausible” technologies, like (say) classical computing, or heavier-than-air flight, or controlled nuclear fission?  Dyakonov says that such comparisons are invalid, because they ignore the many technologies proposed in previous eras that didn’t work.  What’s striking is how little he seems to care about why the previous technologies failed: was it because they violated clearly-articulated laws of physics?  Or because there turned out to be better ways to do the same things?  Or because the technologies were simply too hard, too expensive, or too far ahead of their time?  Supposing QC to be impossible, which of those is the reason for the impossibility?  Since we’re not asking about something “arbitrary” here (like teaching a donkey to read), but rather about the computational power of Nature itself, isn’t it of immense scientific interest to know the reason for QC’s impossibility?  How does Dyakonov propose to learn the reason, assuming he concedes that he doesn’t already know it?

(As I’ve said many times, I’d support even the experiments that D-Wave was doing, if D-Wave and its supporters would only call them for what they were: experiments.  Forays into the unknown.  Attempts to find out what happens when a particular speculative approach is thrown at NP-hard optimization problems.  It’s only when people obfuscate the results of those experiments, in order to claim something as “commercially useful” that quite obviously isn’t yet, that they leave the realm of science, and indeed walk straight into the eager jaws of skeptics like Dyakonov.)

Anyway, since we seem to have circled back to D-Wave, I’d like to end this post by announcing my second retirement as Chief D-Wave Skeptic.  The first time I retired, it was because I mistakenly thought that D-Wave had fundamentally changed, and would put science ahead of PR from that point forward.  (The truth seems to be that there were, and are, individuals at D-Wave committed to science, but others who remain PR-focused.)  This time, I’m retiring for a different reason: because scientists like the authors of the “Defining and detecting” preprint, and journalists like Will Bourne, are doing my job better than I ever did it.  If the D-Wave debate were the American Civil War, then my role would be that of the frothy-mouthed abolitionist pamphleteer: someone who repeats over and over points that are fundamentally true, but in a strident manner that serves only to alienate fence-sitters and allies.  As I played my ineffective broken record, the Wave Power simply moved from one triumph to another, expanding its reach to Google, NASA, Lockheed Martin, and beyond.  I must have looked like a lonely loon on the wrong side of history.

But today the situation is different.  Today Honest Abe and his generals (Honest Matthias and his coauthors?) are meeting the Wave Power on the battlefield of careful performance comparisons against Quantum Monte Carlo and simulated annealing.  And while the battles might continue all the way to 2000 qubits or beyond, the results so far are not looking great for the Wave Power.  The intractability of NP-complete problems—that which we useless, ivory-tower theorists had prophesied years ago, to much derision and laughter—would seem to be rearing its head.  So, now that the bombs are bursting and the spins decohering in midair, what is there for a gun-shy pamphleteer like myself to do but sit back and watch it all play out?

Well, and maybe blog about it occasionally.  But not as “Chief Skeptic,” just as another interested observer.

06 Mar 06:02

Why Would a Foundation for Preventing Trisomy 21 Block an Award to the Discoverer of Its Cause?

by Ent

Author: Ent

Marthe Gautier

On January 31, 2014, a scheduled item at the Seventh Congress of Human and Medical Genetics — presenting an award to 88-year-old Marthe Gautier for her contribution to discovering the cause of trisomy 21 — was cancelled under outside pressure. The pressure came from an unexpected direction: the Jérôme Lejeune Foundation, an influential organization dedicated to preventing trisomy 21 and other birth defects.

A story that began with scientific plagiarism

In the so-called "official history," it was Jérôme Lejeune himself who discovered the cause of Down syndrome. In fact, all of the experiments were done by Marthe Gautier. Historians of science agree that Gautier was the principal contributor and that Lejeune simply appropriated her results.

A male scientist "appropriating" a female scientist's work is nothing new in the history of science. Watson and Crick used Rosalind Franklin's data without her knowledge in discovering the DNA double helix; Jocelyn Bell discovered pulsars through her observations, but the Nobel Prize went to her supervisor, Hewish. So far, the former have offered halfhearted apologies and the latter long ago made peace. But openly and publicly blocking the other party from receiving an award, as the Jérôme Lejeune Foundation did, seems to be a first — and this with Jérôme Lejeune himself long dead.

Why has this episode blown up so badly? Because it touches on abortion, the fiercest battleground between medicine and religion in modern society. And the story begins with Gautier and Lejeune's first meeting in Paris.

Gautier, Lejeune, and trisomy 21

In 1862, the English physician John Langdon Down noticed that among infants then labeled "congenital idiots," some children shared common features: a broad face and small, upward-slanting eyes. By the early twentieth century it was recognized as a relatively common congenital condition, with an incidence of roughly one in a thousand. In many countries at the time, affected children were usually placed in institutions, with hardly any real treatment. Most died young; few lived to twenty, and quality of life was out of the question.

In 1956, a young physician named Marthe Gautier returned to Paris after a year of pediatric training at Harvard and took a clinical position at a local hospital. The head of pediatrics, Raymond Turpin, had long been interested in Down syndrome; years earlier he had speculated that it might involve a chromosomal abnormality, but he had no time to pursue it. One day Turpin grumbled that nobody was paying attention to his conjecture. Gautier, remembering the relevant training she had received at Harvard, volunteered to take on the project.

The hospital gave her an abandoned laboratory containing a refrigerator, a centrifuge, and a poor-quality microscope. There was no funding. She paid for glassware out of her own pocket, kept a rooster as a source of serum, and when human blood samples were needed, used her own.

By the end of 1957, all the work on normal cells was on track, and the 46 chromosomes in a cell were clearly visible. (A remarkable achievement at the time: only two years earlier, the human chromosome count was still believed to be 48.) She obtained patient tissue samples from Turpin, and the comparison was unambiguous: the patients had 47 chromosomes, with an extra copy of chromosome 21. But her microscope was too poor to take photographs, and photographs were essential for publication.

Throughout this period Turpin himself never set foot in her laboratory, but one of his students, Jérôme Lejeune, visited often. One day Gautier mentioned her trouble with photography, and Lejeune offered to take her slides to another laboratory to have them photographed. Gautier never saw the slides again — until two months later, at the International Congress of Genetics in Montreal, when Lejeune announced to the world that he had discovered the cause of Down syndrome, using photographs of exactly those slides. On the submitted paper, Lejeune was first author, Turpin was corresponding author, and Gautier, who had known nothing about it, was placed in an inconspicuous middle position; according to the paper, her contribution was mainly "bringing back a new tissue-culture technique from the United States."

Devastated, Gautier decided to leave research and return to clinical work and teaching. Lejeune's star, meanwhile, rose rapidly — not only because he had "discovered" that the cause of Down syndrome was an extra chromosome 21, but because this pointed to a means of prevention in principle: before a baby is born you cannot know its face or intelligence, but you can sample its chromosomes. If genetic diagnosis early in pregnancy reveals a problem with chromosome 21, then terminating the pregnancy and trying again would avert the tragedy.

But Lejeune was a Catholic, and the Catholic Church opposes abortion.

Lejeune's anti-abortion campaign

Today the anti-abortion movement more commonly goes by the name "pro-life" (its counterpart, which supports access to abortion, is "pro-choice"). Lejeune himself called abortion "chromosomal racism." In his view, even a child with Down syndrome has a right to life from the moment of conception; medicine may only improve his or her quality of life, and even when it cannot, abortion must not be used to "solve" the problem.

Abortion, however, is not a simple ethical question; it is entangled with countless political and religious factors. According to statistics from the US National Abortion Federation, since 1977 at least 8 staff members of abortion clinics in the United States and Canada have been murdered, and clinics have suffered 41 bombings, 173 arsons, 91 attempted bombings or arsons, 619 bomb threats, 665 anthrax threats, 1,264 acts of vandalism, 1,630 trespasses, and 100 butyric acid ("stink bomb") attacks. In so fraught a situation, debate on paper is plainly not enough; each side needs idols, role models, heroes to speak for it.

And Jérôme Lejeune — world-famous scientist, devout Catholic, on close terms with Pope John Paul II, the man who had "discovered" the cause of a major congenital disease and devoted himself to treating it, and an opponent of abortion — could hardly have been a more suitable hero.

Lejeune died of lung cancer in 1994, and the Jérôme Lejeune Foundation was established in his memory. The foundation has a threefold mission: research, care, and advocacy. The first two are admirable, but the third devotes a great deal of energy to opposing abortion. While Lejeune was alive he downplayed Gautier's importance, but at least he never denied that she had played a role. The foundation, however, having made Lejeune its spiritual figurehead, cannot tolerate any such "denigration." The biography of Lejeune on its website credits every achievement to him and does not mention Gautier at all. The "learn more" link at the bottom of the page points to an organization called the Friends of Professor Lejeune, whose aim is to persuade the Catholic Church to beatify him — beatification requires at least one certified miracle and is only one step short of canonization. A foundation that regards Lejeune as all but a saint naturally could not tolerate an international congress giving an award to the true discoverer.

Hence the farce described at the start: an organization nominally devoted to helping children with trisomy 21 pressured an international academic congress into cancelling an award to the person who discovered the disease's cause. In this day and age, such behavior can only backfire.

Three days after the cancellation, on February 3, the French site sciences.blogs.liberation.fr reported the incident; on February 5 a French student forwarded the report to his Coursera course "Useful Genetics," where it caught the attention of the instructor, Rosie Redfield. Thanks to Redfield's publicity, the conclusions of historians of science finally began to reach the scientific community and the wider public, and Gautier now has her own English Wikipedia page. For an 88-year-old, justice came rather late — but it came.

References

Gautier, Marthe; Harper, Peter S. (2009). "Fiftieth anniversary of trisomy 21: returning to a discovery". Human Genetics 126 (2): 317–324. doi:10.1007/s00439-009-0690-1

27 Nov 04:09

Theorems From Physics?

by Pip


Can information physics extract proofs from reality?

[Photo of Rolf Landauer. Image source: IEEE]
Rolf Landauer was a physicist and computer engineer who spent most of his career at IBM north of New York City, becoming an IBM Fellow in 1969. According to his longtime colleague Charles Bennett, Landauer “did more than anyone else to establish the physics of information processing as a serious subject for scientific inquiry.” One such contribution was his discovery and formulation of a principle connecting non-reversible computation steps and thermodynamic entropy, which according to Wikipedia’s article is widely accepted as a physical law.

Today Ken and I want to talk about the possible role of physical laws in generating proofs of complexity assertions, even {\mathsf{P \neq NP}} itself.

Recently we have heard of interest connecting {\mathsf{P \neq NP}}, plus the related topic of one-way functions, to Landauer’s principle itself. According to Bennett again, Landauer’s principle states:

[A]ny logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information bearing degrees of freedom of the information processing apparatus or its environment.

Bennett has defended this assertion against various objections. Last year a team from the École Normale Supérieure de Lyon, the University of Augsburg, and the University of Kaiserslautern gave empirical support by measuring the tiny amount of energy released as heat when an individual bit is erased. See their article in Nature. Real computers today are said to operate within three orders of magnitude of the energy-efficiency limit which Landauer’s bound imposes. So we can envision a new kind of impossible physical machine: one that can erase bits more coolly. The question is whether any complexity assertion has a side—true or false—that would enable a violation of this bound, thus ‘proving’ the other side.
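As a quick back-of-the-envelope check on the bound being discussed (the per-bit switching energy below is an assumed, illustrative figure, not a measurement):

    # Landauer's limit: k_B * T * ln 2 joules of heat per irreversibly erased bit.
    import math

    k_B = 1.380649e-23                       # Boltzmann constant, J/K
    T = 300.0                                # room temperature, K
    landauer = k_B * T * math.log(2)
    print(f"Landauer limit at 300 K: {landauer:.2e} J per bit")      # ~2.9e-21 J

    # An assumed, illustrative switching energy of 1e-18 J/bit sits roughly three
    # orders of magnitude above the limit.
    assumed_energy = 1e-18
    print(f"ratio to the limit: {assumed_energy / landauer:.0f}x")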

Physics and Math

The philosopher in us recoils dogmatically at the notion of such a “physical proof.” Complexity theory is part of mathematics, and mathematical theorems are not supposed to be contingent. This is a fancy philosophical term for propositions that are “true in some possible worlds and false in others.” In particular, the truth of a mathematical proposition is not supposed to depend on any empirical fact about our particular world.

Of course physical observations can sometimes aid in discovering proofs. They can help one guess which side—true or false—to try to prove. What we mean is something stronger: whether appeals to physical observations or laws can constitute a proof by themselves—or part of a proof, or some kind of certificate of a proof.

Our openness begins by coupling something Landauer himself was noted for saying,

Information is inevitably physical.

—with the following translation of what John Wheeler meant by “it from bit”:

Physics is ultimately informational.

Information is what we “do”—with theorems and proofs—and mathematical theorems underlie the most fundamental physical models. If information and physics are as tightly bound as they say, we ought to expect some “flow” in the other direction. The question remains, how?

We will not try here to evaluate particular papers we have seen or heard about, such as this or this by Alexandre de Castro of Brazil, or this on energy complexity by Feng Pan, Heng-liang Zhang, and Jie Qi of China. This paper by Yuriy Zayko of Russia reaches the opposite conclusion about {\mathsf{P = NP}} from Landauer’s work, so they can’t all be right, while this by Armando Matos also treats Landauer’s principle and factoring. We invite comments from better-versed readers. Rather, we wish to examine the larger issue: can physics be used to prove mathematical theorems? Indeed.

Physical Proofs?

Imagine that someone shows the following: If {\mathsf{P=NP}}, then some physical principle is violated. Most likely this would be in the form of a Gedankenexperiment, but nevertheless it would be quite interesting. Yet I am at a loss to say what it would mean. Indeed the question is: “Is this a proof or not?”

Let’s call an argument showing that if some mathematical statement {X} is true, then some physical principle {Y} is incorrect, a Physics Proof—say a PP for short. And let’s call a usual math proof an MP. Can we prove something about the relationship between PP’s and MP’s? Can we, for example, prove statements in math via PP’s? Even statements that we already know are correct? Can there exist a PP that shows that set theory is consistent? Does this violate the famous Incompleteness Theorem of Kurt Gödel?

We have discussed this in an earlier post, in which we also referenced papers by Scott Aaronson and Bennett himself. All this has left us still quite interested in the possibility that PP could exist, and we will try to give some new illustrations of the possibilities.

A Trivial Example

Let’s look at a PP that shows that multiplication of natural numbers is commutative. Suppose that {n} and {m} are such numbers greater than zero. Consider a box of unit squares, {n=3} by {m=4}:

* * * *
* * * *
* * * *

Then its area is clearly {nm}. Now rotate the box:

* * *
* * *
* * *
* * *

The area is invariant under rotation—this is the physical principle that we are using. But now the area is {mn}. So we conclude that

\displaystyle  nm = mn.

Wow, what a surprise—if we were doing standup comedy we would not expect much more than tomatoes from this. But an interesting post with this example by Peter Cameron goes on to show that the formal proofs in Peano arithmetic and set theory also have their downsides. Our understanding is that this method is used in some grade schools as a way to help students understand multiplication.
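For contrast with the PP, the corresponding MP can even be machine-checked; here is a minimal Lean 4 snippet (assuming a standard Lean 4 toolchain), where the fact is already a library theorem provable by induction with no appeal to physical invariance:

    -- The formal counterpart of the rotation argument: commutativity of
    -- natural-number multiplication, proved without any physics.
    example (n m : Nat) : n * m = m * n := Nat.mul_comm n m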

Relationship to Relativity

We have previously told the history of the “no-cloning” theorem in quantum mechanics. A paper purporting to demonstrate faster-than-light (FTL) communication was found to rely on the assumption that an arbitrary pure binary quantum state can be duplicated by a quantum process. If one takes FTL communication to be a violation of physical law, this could be said to constitute a proof of the negation of the assumption. To be sure, a proof was soon found by several people using elementary linear algebra. Hence the no-cloning theorem itself is just a piece of mathematics. The question is whether it could have been said to be “already proved” by physics before the simple proof was found.

In quantum communication theory, it seems to be legitimate to argue a theorem based on its negation implying FTL communication. We keep intending to post in greater detail about a paper by Ulvi Yurtsever which we read as showing that if one could gain a nontrivial probabilistic prediction advantage against an unbiased quantum random source, then local causality would be violated—in particular, FTL communication would be possible. There are other potential examples on Philip Gibbs’ FTL page, currently maintained by physicist John Baez at UC Riverside.

The Information Ratchet

Ken has thought about this recently upon reading some reviews of the book Life’s Ratchet by Peter Hoffman of Wayne State University. According to a review in Nature:

[The book] engagingly tells the story of how science has begun to realize the potential for matter to spontaneously construct complex processes, such as those inherent to living systems.

Our question is,

Can we possibly judge the differential impact on the speed of this ratchet between the truth and the falsity of {\mathsf{P = NP}}, or of more-concrete algorithmic assertions?

If so, that is if we can quantify the impact, then by observing the speed of generative life processes in the lab, coupled with mapping out the early history of life on Earth, we might ascertain the (un)availability of certain concretely feasible approximative or exact algorithms empirically, in advance of possibly proving it.

Relationship To Time Travel

For an even more speculative notion, perhaps an unfair one, suppose that we want to factor a large number. Assume it would take {1,000} years on a laptop. Here is what we could do: We create a computer that can run for a thousand years. This would no doubt require some clever engineering: the machine probably would need to do self-repair and also use a renewable source of power. While it would be a difficult piece of engineering, it does not seem to violate any physical principles. Start it running on the factoring problem. Then jump into your time machine, go into the future, get the answer, and return. This means we could do a huge computation in seconds. Does this mean that we can “prove”:

If time-travel is possible, then many concrete “hard” factoring instances are easy?

I am very confused. I hope you are too, even before looking up literature on “closed timelike curves” and algorithms such as in section 8 of Scott’s survey. Note, taking the contrapositive yields that if you believe certain concrete instances of factoring are yea-hard in reality, then you’re saying time travel is impossible.

Open Problems

What do you think? If someone found a PP of {\mathsf{P \neq NP}} would that prove they are not equal? Would this win the Clay Prize? What would it really mean?

We note that Landauer’s principle did inspire theorems about reversible computation by Bennett and others; this and other stories are told in a lovely memoir by Bennett with Alan Fowler written in 2009 for the National Academy of Sciences. The ETOPIM Association awards an annual medal in Landauer’s honor.


26 Nov 19:19

Surveillance as a Business Model

by Bruce Schneier

Google recently announced that it would start including individual users' names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.

These changes come on the heels of Google's move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.

More generally, lots of companies are evading the "Do Not Track" rules, meant to give users a say in whether companies track them. Turns out the whole "Do Not Track" legislation has been a sham.

It shouldn't come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.

If these features don't sound particularly beneficial to you, it's because you're not the customer of any of these companies. You're the product, and you're being improved for their actual customers: their advertisers.

This is nothing new. For years, these sites and others have systematically improved their "product" by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.

The "Do Not Track" law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn't have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.

Really: if you tell a "Do Not Track"-enabled company that you don't want to be tracked, it will stop showing you personalized ads. But your activity will be tracked -- and your personal information collected, sold and used -- just like everyone else's. It's best to think of it as a "track me in secret" law.

Of course, people don't think of it that way. Most people aren't fully aware of how much of their data is collected by these sites. And, as the "Do Not Track" story illustrates, Internet companies are doing their best to keep it that way.

The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I'm thinking of than my wife does. But that's not far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.

As the Edward Snowden revelations continue to expose the full extent of the National Security Agency's eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world's existing eavesdropping on the Internet.

The public/private surveillance partnership is fraying, but it's largely alive and well. The NSA didn't build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.

There are a lot of reasons why Internet surveillance is so prevalent and pervasive.

One, users like free things, and don't realize how much value they're giving away to get it. We know that "free" is a special price that confuses people's thinking.

Google's 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.

Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don't think about how much personal information you're revealing to the company; you're chatting with your friends. When you wake up in the morning, you don't think about how you're going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.

And three, the Internet's winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn't track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.

There are two types of changes that need to happen in order to fix this. First, there's the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that's not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.

Surveillance is the business model of the Internet -- Al Gore recently called it a "stalker economy." All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.

This essay previously appeared on CNN.com.

12 Nov 00:40

Reading Group at Harvard Law School

by Bruce Schneier

In Spring Semester, I'm running a reading group -- which seems to be a formal variant of a study group -- at Harvard Law School on "Security, Power, and the Internet." I would like a good mix of people, so non-law students and non-Harvard students are both welcome to sign up.

12 Nov 00:27

Another Snowden Lesson: People Are the Weak Security Link

by Bruce Schneier

As they always are.

There's a story that Edward Snowden successfully socially engineered other NSA employees into giving him their passwords.

12 Nov 00:27

Risk-Based Authentication

by Bruce Schneier
Yao: I think similar mechanisms have been implemented by Google and banks.

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user's typical log-in and usage profile, taking into account their device and geographic location, the system they're trying to access, the time of day they typically log in, their device's IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user's access attempt exceeds the system's risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, it may be rejected outright.
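A minimal sketch of the scoring idea described in the quoted passage; the features, weights, and thresholds are illustrative assumptions, not any vendor's actual model.

    # Toy risk-based authentication: score a login attempt against the user's
    # usual profile, then decide how much authentication to require.
    def risk_score(attempt, profile):
        score = 0.0
        if attempt["device_id"] not in profile["known_devices"]:
            score += 0.35
        if attempt["country"] != profile["usual_country"]:
            score += 0.30
        if attempt["ip"] not in profile["usual_ips"]:
            score += 0.15
        if abs(attempt["hour"] - profile["usual_hour"]) > 4:              # unusual time of day
            score += 0.10
        if abs(attempt["typing_speed"] - profile["typing_speed"]) > 30:   # chars/min
            score += 0.10
        return score

    def required_auth(score, system_threshold):
        if score > system_threshold + 0.4:
            return "reject"
        if score > system_threshold:
            return "password + second factor (PIN/token)"
        return "password only"

    profile = {"known_devices": {"laptop-1"}, "usual_country": "US",
               "usual_ips": {"203.0.113.7"}, "usual_hour": 9, "typing_speed": 220}
    attempt = {"device_id": "tablet-9", "country": "ID", "ip": "198.51.100.2",
               "hour": 23, "typing_speed": 150}
    # A sensitive system (e.g. finance) gets a low threshold.
    print(required_auth(risk_score(attempt, profile), system_threshold=0.3))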

07 Nov 22:07

The Story of the Bomb Squad at the Boston Marathon

by Bruce Schneier

This is interesting reading, but I'm left wanting more. What are the lessons here? How can we do this better next time? Clearly we won't be able to anticipate bombings; even Israel can't do that. We have to get better at responding.

Several years after 9/11, I conducted training with a military bomb unit charged with guarding Washington, DC. Our final exam was a nightmare scenario -- a homemade nuke at the Super Bowl. Our job was to defuse it while the fans were still in the stands, there being no way to quickly and safely clear out 80,000 people. That scenario made two fundamental assumptions that are no longer valid: that there would be one large device and that we would find it before it detonated.

Boston showed that there's another threat, one that looks a lot different. "We used to train for one box in a doorway. We went into a slower and less aggressive mode, meticulous, surgical. Now we're transitioning to a high-speed attack, more maneuverable gear, no bomb suit until the situation has stabilized," Gutzmer says. "We're not looking for one bomber who places a device and leaves. We're looking for an active bomber with multiple bombs, and we need to attack fast."

A post-Boston final exam will soon look a lot different. Instead of a nuke at the Super Bowl, how about this: Six small bombs have already detonated, and now your job is to find seven more -- among thousands of bags -- while the bomber hides among a crowd of the fleeing, responding, wounded, and dead. Meanwhile the entire city overwhelms your backup with false alarms. Welcome to the new era of bomb work.

04 Oct 03:22

Will Keccak = SHA-3?

by Bruce Schneier

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.
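The changes were widely reported as, in essence, a reduction of the sponge's capacity parameter; here is a rough sketch of the arithmetic, with the capacity values treated as assumptions for illustration.

    # Keccak is a sponge over a 1600-bit state; capacity c is the hidden part of
    # the state, rate r = 1600 - c is absorbed per block, and generic sponge
    # security is on the order of c/2 bits.
    STATE_BITS = 1600

    def sponge_params(capacity):
        return {"rate": STATE_BITS - capacity,
                "capacity": capacity,
                "generic_security_bits": capacity // 2}

    print("256-bit Keccak as submitted:", sponge_params(512))   # ~256-bit generic security
    print("with a reduced capacity:    ", sponge_params(256))   # ~128-bit generic security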

Normally, this wouldn't be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

27 May 18:52

Sex, Lies, And Quantum Computers

by rjlipton


Okay, no sex, but a discussion about quantum computers.


Steven Soderbergh directed the famous movie: Sex, Lies, and Videotape. This 1989 movie won the Palme d’Or at the 1989 Cannes Film Festival, and is now in the United States Library of Congress’ National Film Registry for being “culturally, historically, or aesthetically significant.”

Today I want to explore claims about quantum computation.

With all due apologies to Soderbergh his title seems perfect for our discussion. There may be no sex, but there is plenty of sizzle surrounding quantum computation. We just finished a thorough and wonderful debate here on the feasibility of quantum computers–see this for part of the debate. It was ably moderated by Ken, and the two advocates were Gil Kalai, against, and Aram Harrow, for.

While there are still many interesting issues to add to the debate, our discussion is not about whether quantum computers are feasible or not. We will stipulate that they can and will be built, eventually in some future time. The issue is about the present:

What has been proved about them so far?

And sadly we will discuss: what is believed to be proved but has no proof.

Claims

There are many claims in the literature on quantum computers that I would like to address here. Some are right, some are wrong, and some are at best misleading. Let’s start.

Quantum Computers have been proved to be more powerful than classical. Wrong.

This has been repeated many times and is often claimed in the literature. But there is no mathematical proof that a quantum computer that runs in polynomial quantum time cannot be simulated in polynomial classic time. None. Let me repeat that. There is no such proof. It is an open problem whether

\displaystyle  \mathsf{P} = \mathsf{PSPACE}.

If this is true, then quantum polynomial time equals polynomial time. Okay, most do not believe that this is true, but we are talking about what is proved. Nor is there any speedup theorem about improving the exponent {k} in a general {n^k}-time algorithm when you have {n} qubits vis-à-vis {n} bits. Thus from a mathematical view point it is clear that there is no proof that quantum is better than classical. None. Zero.

Quantum Computers can harness exponential parallelism, trying every possible solution at once. At best half-true.

Quantum computers can create superpositions of {2^n}-many basis vectors, each representing a string in {\{0,1\}^n} that can be a trial solution. However, the best-known quantum algorithm for finding a single solution still has exponential running time, order-of {2^{n/2}}, and this is tight in black-box models (see below). The allowed linear algebra operations restrict the way this parallelism can be exploited. Scott Aaronson in particular has expended much effort debunking claims that quantum computers are imminently effective at solving certain NP-hard problems.

Quantum Computers can factor integers faster than classical ones are known to. Right.

But misleading, especially when the “are known to” part is sloughed off. There is no proof that factoring cannot be done classically in polynomial time. None. The best factoring algorithms are indeed super-polynomial, but there is no mathematical proof that they are optimal. So tomorrow, or next week, or secretly already?, there could be a classical polynomial time factoring algorithm. Peter Sarnak, for example, is on the record as believing this. I am too. But beliefs aside, there certainly could be such an algorithm.

Quantum Computers have been proved to be more powerful than classical in the black box model. Right. But this is at best misleading; at worst {\dots} There are proofs that quantum machines can out-perform classical in this model. But the model is unfair. The classic machine gets only oracle access to some black box function, say {f(x)}. Its job is to find some {s} so that {f(s)} has some property. As with oracle results in complexity theory, it is often easy to show that this requires a huge exponential search.

What about the quantum machines? They can zip along and solve the problem in polynomial time. So this proves they are more powerful—right? No. The difficulty is that the quantum machine needs to have the function {f} encoded as part of its quantum circuit. So the quantum computation gets the circuit representation of the black box, or the box is not black. The box is a white box—we can see the wires and gates that implement it.

This seems unfair, and is at best misleading. The fair comparison is to allow the classic machine the same ability to see the circuit representation of the box. The trouble now is the proof disappears. Without the black box restriction there is no mathematical proof that a classic machine cannot unravel what the box does and cheat in some way. So this is at best misleading. We also tried to see what happens if we open the box for the classical algorithm, here and here.

Quantum Computers cannot be efficiently simulated by classical, even for fifty qubits. Wrong.

I heard this the other day, and also it is stated on the web. For instance, Neil Gershenfeld was quoted by Julian Brown in his 2002 book Minds, Machines, and the Multiverse: The Quest for the Quantum Computer as saying,

“Round about fifty qubits is where you begin to beat classical computers. What that means is that with custom hardware tuned for computation with spectroscopy, you could just begin to graze the point where classical computers run out.”

Yes this was over a decade ago, but petabyte-scale computing was on the horizon then. Note that the interest in this question is quite reasonable, even though fifty qubits are way too few to implement any interesting quantum algorithm, certainly with the overhead of current fault-tolerant coding. The thought goes that fifty qubits may, however, be sufficient to do something interesting. Perhaps they can solve some physical question about a real system? A very interesting question. Let’s turn to discuss this question and scale in more detail.

The Fifty Bit Problem

The challenge is to figure out how we can simulate on a classical computer a quantum computation that uses at most fifty qubits. This is not nearly enough qubits to do anything interesting for cryptography, but makes for a nice question. The obvious ways to simulate such a quantum computation is not impossible for a classical machine, but is still not a simple computation. Thus the challenge: can we do this classically? Can we do a fifty quantum qubit problem on a laptop? Or on a server? Or in the cloud?

The obvious solution is to write down the initial vector of size {N=2^{50}} and start applying the quantum gates to the vector. This immediately hits a problem, since such a vector is really large. But it is not that large. The size is “just” a petabyte—or actually 1/8 of a petabyte. The MapReduce cloud framework has already been used to carry out some basic algorithms on petabytes of data.

Quantum operations are notionally sparse, each affecting only a small constant number of qubits, generally up to three. Under the standard encoding, each qubit splits the long vector into a measurement set for a {1} value and a set for the {0} value, each of size {N/2}. However, under different encoding schemes the operations could be local. In particular, many important quantum states, before and after some common quantum circuit operations, are succinct. There is scope for doing simulations analogous to what happens with homomorphic encryption, whereby operators might work directly on the succinct functional representations.
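A minimal sketch of the "obvious" simulation described above, assuming NumPy; each single-qubit gate touches the amplitudes in pairs that differ only in the target bit, which is exactly the sparsity just noted.

    # Store all 2^n amplitudes and apply a 2x2 gate to one qubit by contracting
    # along that qubit's axis of the reshaped state tensor.
    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n):
        """state: complex vector of length 2**n; gate: 2x2 unitary; target: qubit index."""
        state = state.reshape([2] * n)
        state = np.tensordot(gate, state, axes=([1], [target]))   # contract on target axis
        state = np.moveaxis(state, 0, target)                     # put target axis back
        return state.reshape(-1)

    n = 20                                   # 2^20 amplitudes here; 2^50 is petabyte-scale
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                             # |00...0>
    psi = apply_single_qubit_gate(psi, H, target=3, n=n)
    print(np.count_nonzero(psi))             # 2 nonzero amplitudes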

It strikes us that simulating a fifty-qubit algorithm classically is a concrete large-data kind of problem, one that may be interesting for its own sake. This stands apart from recent work on general “de-quantization” of circuits and algorithms, for which Maarten van den Nest’s ArXiV page is a good place to start.

Open Problems

What is really going on with the information structures that are acted on by quantum computers? That they are {2^n}-long bit-vectors in any sense we have come to doubt. Is there a better general way to represent them?

Even with the {2^n}-vector representation, for {n = 50} qubits, can we make classical simulations concretely efficient?


04 May 11:47

Children Can't Count? Don't Mistake Kindness for Ignorance

by 比喻是个好东西

Do children lack the concept of conservation?

Developmental psychology is the branch of psychology that studies how children grow up. Jean Piaget was a pioneer of the field and carried out much groundbreaking research. One line of work asked when children come to understand the conservation of number. A simple example: six bottles and six jars are lined up neatly in two rows, one-to-one (Figure 1). The researcher asks the child, "Are there more bottles or more jars?" The child answers, "The same." Then the researcher spreads the jars farther apart (Figure 2). As an adult, you know this does nothing to change the number of jars. But when the researcher asks the same question again, most children answer, "More jars."

Piaget concluded that four- and five-year-olds do not yet know that moving objects around cannot change how many there are. This 1950s experiment became a classic of developmental psychology.
 

Figure 1

Figure 2

Conservation of number is considered the foundation of arithmetic, which is why Piaget's conclusion shaped our entire educational system: few children begin systematic arithmetic before the age of six or seven.

Piaget was wrong!

But were we really so "dumb" as small children that we didn't know the number of jars doesn't change with how they are arranged? Some unexpected evidence cast doubt on Piaget's finding: clever experiments showed that animals as lowly as rats and pigeons have a concept of number, and chimpanzees can easily judge which of two quantities is larger. Why would humans, of all animals, lag behind them in arithmetic at the age of four or five? Was something wrong with Piaget's experiment?

Yes, something was. In 1967, Jacques Mehler and Thomas Bever of MIT published a study in Science. They first ran a strengthened version of Piaget's experiment: even when there were actually fewer jars than bottles, as long as the jars were spread out (Figure 3), most children still said there were more jars. So where exactly did Piaget go wrong? In a second experiment, the researchers simply replaced all the bottles and jars with M&M's. This time, instead of asking "which row has more," they let the children choose to take either the top row or the bottom row — and the overwhelming majority instantly pounced on the top row (the one with more chocolate!), their inner foodie on full display.

Figure 3

When a math problem is treated as a word game

So young children are perfectly capable of judging which quantity is larger. Then where exactly was Piaget's problem? Were the children uncooperative because no chocolate was on offer? Quite the opposite: they were too cooperative. In fact, Mehler and Bever also found that two- and three-year-olds answered correctly regardless of arrangement, whether with bottles and jars or with M&M's. Four- and five-year-olds got the bottles-and-jars question wrong not because they couldn't count, but because they had begun to grasp conversational norms.

Conversational norms are a concept introduced by the philosopher of language Paul Grice. He held that conversation between people obeys a number of maxims. One of them, the maxim of relation, says that every message a speaker conveys is meant to be relevant: in an exchange of information, each utterance relates to the situation and context of that moment. Even if the words are the same, it is never mere repetition.

There is a joke about this. A boss, unable to find programmer A anywhere, texts programmer B: "Do you have A's phone number?" B replies: "Yes!" The boss, swallowing a bellyful of punctuation marks, patiently types: "Then could you please give it to me?" The answer comes back: "No problem!"

If you laughed, you saw the point: the boss's first message should be understood not semantically but pragmatically — he wants the number! In many cases, "reading between the lines" is a textbook application of conversational norms.

It is at four or five that we slowly become aware of these norms and begin to practice them in our social lives. That is why two- and three-year-olds don't "miscount" the jars: in this respect they are still straight talkers who call one one and two two, while four- and five-year-olds have grown an extra layer of tact. In Piaget's bottles-and-jars experiment, a four- or five-year-old thinks: "These grown-ups. It's the same few bottles and jars; you already asked me which there are more of, and now you're asking again. The only thing that changed in between is the length of the row of jars, so the new question must be about that length, even if it sounds like a question about number. Fine, I'll say there are more jars."

A study by the University of Edinburgh psychologists James McGarrigle and Margaret Donaldson, published in Cognition in 1974, confirmed this inference. They first replicated Piaget's original experiment, and Piaget's finding held up: after the researcher spread the jars apart, five out of six children said there were more jars. In another condition, however, the researcher deliberately left the room and arranged for someone dressed as a teddy bear to spread the jars apart in the same way. After the bear left, the researcher came back and said anxiously, "Oh no, that naughty teddy has been making mischief again! Now, are there more bottles or more jars?" This time, nearly two thirds of the children reported that they were the same! They told the researcher, "Don't worry, the teddy only rearranged the jars; he didn't take any away!" They understood that the teddy's "mischief" gave the second question an obvious point.

Piaget's trap

Children understand far more than adults give them credit for. They kindly try to guess what adults mean, and when they can't find the key, adults blindly label them "not yet fully developed." The world in a child's eyes is simple and good. They carefully guard the conversational norms they have only just grasped; to them, a question has always been a tool for going from not knowing to knowing. How could they imagine that in this world there are "trap" questions asked by people who already know the answer? And that trap, in the end, becomes a punishment for good faith.

Many people's understanding of growing up comes from looking back on their own childhood. The studies described here read like an academic dissection, showing where that whole world of kindness and goodness begins, bit by bit, to wither. Piaget's books say that children's "mastery" of number conservation usually comes only after they start school — from then on they give the same correct answer to a repeated question. I suspect that is mostly because, once they reach primary school and sit their first exam, every child learns that the world is full of traps.

Editor's note: all those years of schooling have taught us that the most useful skill is still figuring out what the person who wrote the question wants.


References

Dehaene, S. (1997). The number sense: How the mind creates mathematics. New York:Oxford University Press.

Grice, H. Paul (1975), “Logic and Conversation,” in Syntax and Semantics, Vol. 3, Speech Acts, ed. Peter Cole and Jerry L. Morgan, New York: Academic Press, 41–58.

Images: shutterstock