Observing the human condition via MITBBS (买买提)
This page is an excerpt and archive of the corresponding 未名空间 (MITBBS) thread.
Military board - Scholarship in China is really impetuous
n********t (posts: 21)
#1
When Hinton spent decades persisting with neural networks, none of these people persisted alongside him. Back then they were all SVM and kernel experts; the moment deep learning suddenly took off, every one of them wanted a share, and overnight there were that many deep learning experts.
If Chinese scientists do not change this bandwagon-chasing, trend-following attitude, the country is still a very long way from being a scientific power. China never lacks academicians who win the first-prize State Natural Science Award only to be exposed as having produced nothing of substance, nor scientists who win a Nobel Prize yet still fail to be elected academician. How can a system that hands out awards by counting papers be expected to encourage original innovation?
Look at his publication list: 3-4 years ago he had not yet started on deep learning, and his work applying deep learning to computer vision only began in the last 2-3 years, yet by 2015 he had already churned out 101 published papers. Even granting that he has a large group of master's and PhD students, that rate is startling (101 papers a year is one every 3-4 days); the acceptance notifications alone would flood his inbox. What time is left to actually research deep learning? If 2-3 years is all it takes to become an expert, expertise comes awfully cheap.
National grievances aside, these Chinese researchers really should learn from Japanese scientists, who can endure obscurity and do genuinely original work, rather than chasing whatever happens to be hot.
Update:
@Naiyan Wang points out that many of the papers on Google Scholar are not actually his; his homepage lists 35 (not as alarming, though I still think it is a lot). For comparison, look at how many papers the father of deep learning publishes each year:
Geoffrey E. Hinton's Publications: in reverse chronological order
Some commenters feel my criticism is too sweeping. I don't deny that Yan Shuicheng's (颜水成) work in CV is solid; I just dislike the hype. Phrases like "baby-like", "true AI", and "the best tools and models" are better left unsaid. AI has already been through several cycles of mania followed by disappointment.
Reposting Yann LeCun on the relationship between deep learning and the human brain:
Yann LeCun
October 24, 2014 ·
In a recent post on Oct 21, I linked to Michael Jordan's interview in
IEEE Spectrum, in which he comments on various topics including neural nets
and deep learning.
Mike was somewhat unhappy about the way his opinions were expressed in the
interview and felt compelled to write a long comment to my post to clarify
some of his positions.
Some of his comments could be construed as being largely dismissive of "neural nets" and, by extension, of deep learning. In fact, he does not
criticize deep learning. He has said nice things about the recent practical
success of deep learning and convolutional nets in his recent Reddit AMA.
What he does criticize is the hype that surrounds some works that claim to
be neurally inspired or to "work like the brain". Let me say, as
forcefully as I can, that he and I totally agree on that.
Mike and I have been friends since we met at the first Connectionist Summer
School at CMU in 1986 (which was co-organized by Geoff Hinton). We have a
lot of common interests, even if our favorite topics of research have seemed
orthogonal for many years. In a way, it was inevitable that our paths would
diverge. Mike's research direction tends to take radical turns every 5
years or so, from cognitive psychology, to neural nets, to motor control, to
probabilistic approaches, graphical models, variational methods, Bayesian
non-parametrics, etc. Mike is the "Miles Davis of Machine Learning",
who reinvents himself periodically and sometimes leaves fans scratching
their heads after he changes direction.
Here are a few things Mike and I agree on regarding deep learning, neural
nets and such (he will comment if he disagrees):
1. There is nothing wrong with deep learning as a topic of investigation,
and there is definitely nothing wrong with models that work well, such as
convolutional nets.
2. There is nothing wrong with getting a bit of inspiration from
neuroscience. Old-style neural nets, convnets, SIFT, HoG and many other
successful methods have all been inspired by neuroscience to some degree.
3. The neural inspiration in models like convolutional nets is very tenuous. That's why I call them "convolutional nets" not "convolutional neural nets", and why we call the nodes "units" and not "neurons". As Mike says in his interview, our units are very
simple cartoonish elements, when compared to real neurons. Yes, most of the
ideas behind some of the most successful deep learning models have been
around since the 80's. That doesn't make them less useful.
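The "simple cartoonish elements" LeCun contrasts with real neurons can be made concrete in a few lines of Python. This is a sketch with illustrative names, not anyone's actual implementation: a unit is just a weighted sum plus a bias passed through a fixed nonlinearity, and a convolutional unit is that same computation slid across positions with one shared set of weights.

```python
# A single "unit" in a net: a weighted sum plus a bias, passed through a
# fixed nonlinearity -- far simpler than a biological neuron.

def unit(inputs, weights, bias):
    # Weighted sum of inputs (the "dendritic" part of the cartoon).
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ReLU nonlinearity (the "firing" part of the cartoon).
    return max(0.0, s)

# A convolutional "unit" is the same computation slid across positions,
# sharing one small set of weights (a filter).
def conv1d(signal, kernel, bias):
    k = len(kernel)
    return [unit(signal[i:i + k], kernel, bias)
            for i in range(len(signal) - k + 1)]

print(unit([1.0, -2.0, 0.5], [0.3, 0.1, 0.4], 0.0))   # ~0.3
print(conv1d([0, 1, 2, 3, 4], [-1, 1], 0.0))          # a difference filter
```

That the whole computation fits in a dozen lines is LeCun's point: the connection to real neurons is a loose analogy, not a model of biology.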
4. There is something very wrong with claiming that a model is good just because it is inspired by the brain. Several efforts have attracted the attention of the press and have increased the hype level by claiming to "work like the brain" or to be "cortical". There is quite a bit of hype around "brain-like chips", "brain-scale simulations", "spiking this", and "cortical that". Many of these claims are unsubstantiated and are not backed by real and believable results. Hype has killed AI several times in the past. We don't want that to happen again.
5. Serious research in deep learning and computational neuroscience should
not be conflated with over-hyped work on brain-like systems. The fact that
an organization receives 10^7 or 10^9 Dollars or Euros in investment or in
research funding does not make it "serious". Real results and the
recognition of the research community make it serious.
6. Among serious researchers, there are four kinds of people. (1) People who
want to explain/understand learning (and perhaps intelligence) at the
fundamental/theoretical level. (2) People who want to solve practical
problems and have no interest in neuroscience. (3) People who want to
understand intelligence, build intelligent machines, and have a side
interest in understanding how the brain works. (4) People whose primary
interest is to understand how the brain works, but feel they need to build
computer models that actually work in order to do so. There is nothing wrong
with any of these approaches to research.
7. People whose primary interest is to understand how the brain works will
be driven to work on models that are biologically plausible. They will
occasionally come up with (or work on) methods that are useful but not
particularly plausible biologically. Our dear friend Geoff Hinton falls into
this category.
8. Trying to figure out a few principles that could be the basis of how the brain works (through mathematics and computer models) is a perfectly valid topic of investigation. How does the brain solve the "credit assignment problem"? How does the brain build representations of the perceptual world? These are important questions that must be researched.
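For readers who haven't seen the term: in artificial networks, the credit assignment problem — deciding how much each individual weight contributed to an output error — is what backpropagation solves via the chain rule. A minimal sketch on a toy two-weight network (all names illustrative; how the brain does this is exactly the open question LeCun raises):

```python
# Credit assignment in an artificial net: backpropagation uses the chain
# rule to attribute the output error to each individual weight.

def forward(x, w1, w2):
    h = max(0.0, w1 * x)        # hidden unit (ReLU)
    y = w2 * h                  # linear output unit
    return h, y

def backward(x, w1, w2, target, lr=0.1):
    h, y = forward(x, w1, w2)
    dy = y - target                          # error at the output
    # Chain rule assigns credit/blame to each weight:
    dw2 = dy * h                             # how much w2 contributed
    dh = dy * w2                             # error flowing back to the hidden unit
    dw1 = dh * (x if w1 * x > 0 else 0.0)    # ReLU gates the gradient
    return w1 - lr * dw1, w2 - lr * dw2

w1, w2 = 0.5, 0.5
for _ in range(100):
    w1, w2 = backward(1.0, w1, w2, target=1.0)
_, y = forward(1.0, w1, w2)
print(round(y, 3))   # converges to the target, 1.0
```

Backprop needs exact, symmetric weight transport between forward and backward passes, which is not biologically plausible — hence the question of what the brain does instead.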
Edited 2015-10-19
n********g (posts: 6504)
#2
Most people keep a low profile; it's just that the show-offs are the ones who get noticed.
Ten years to grind one sword — for many it takes more than ten.
Chen Jingrun was like that; Zhang Yitang was like that; the Hinton you mention, whom I believe I once sat and shared a meal with, was roughly the same. I say "believe" only because he was so low-key back then that I've forgotten my dinner companion's name — I only remember what he worked on.
Getting other people to accept your proof within ten years counts as fast. I've been flatly cursed as an idiot countless times by the CS crowd — I was threatening their rice bowls, after all. Hinton surely got no shortage of that either.
【In reply to n********t's post above】
g******t (posts: 11249)
#3
You think foreigners don't churn out papers too?
Without funding you can't hire that many students.
【In reply to n********t's post above】
b********n (posts: 1)
#4
You really are something.
【In reply to n********g's post above】