a*****g posts: 19398 | 1 By Roland Moore-Colyer
Mon Nov 16 2015, 07:20
http://www.theinquirer.net/inquirer/feature/2434242/facebook-s-
FACEBOOK IS A COMPANY known primarily for its social feed of emotional
statuses, endless emojis, pictures of 'hols with the ladz', and, of course,
a big blue thumbs-up.
Normally associated with tech giants like IBM, Google and Apple, or some
disruptive Tech City startup, artificial intelligence (AI) is not the first
thing to spring to mind when pondering Zuckerberg's 1.5 billion-strong
social network.
Yet alongside solar-powered drones, virtual reality headsets and a wealth of
coding tech, Facebook is also building its own deep learning and cognitive
computing AI.
"The core goal here is to build systems that can better
understand and perceive the world the way we as people do, so it can help us
manage that world," said Mike Schroepfer, Facebook's chief
technology officer.
This may sound like Facebook is just making another virtual assistant with a
few smart moves and dry witticisms. But the social networking giant
actually appears to be pushing the boundaries of AI tech that could leave
Siri and Cortana scratching their digital heads.
Memory Networks, the moniker Facebook has given its AI technology, is what
Schroepfer sees as the key to unlocking the door that separates deep
learning machines which need to be taught, from intelligent systems that
learn by themselves.
"[Memory] is in my opinion a fundamentally missing component of AI; there's
no way we could view AI systems that can do the sorts of things we want
without the capability to learn and retain new facts that they've never seen
before," he said.
"One of the challenges with AI systems is many of the existing systems are
dumb pattern matchers; you ask it a question, it gives you an answer, it
doesn't learn as it goes.
"So one of the challenges with Memory Networks is can we take a neural net,
this thing that you train, and can we attach a short-term memory to it so
that it can take in data and answer questions based on that data."
Schroepfer described standard deep learning systems as just ways to create a
black box of data for pattern matching after lengthy training.
But where Memory Networks differs is in its ability to ingest new data and use
machine learning to get incrementally smarter over time, rather than
relying on being taught by a human.
In practice, this works by having a deep learning neural network to act as a
‘reasoning' system, which uses logical techniques like deduction, applied
to data in a separate memory to turn it into knowledge that can be used to
answer questions.
Through a form of associative memory - the ability to learn and remember the
relationship between unrelated items such as the name of a place and its
appearance - Memory Networks can then store and retrieve internal answers,
observations and knowledge, thus getting smarter.
Schroepfer used the example of feeding Memory Networks the basic script of a
film. Through natural language comprehension, Memory Networks can reason
the movie's events and timeline to answer general questions without being
specifically taught to answer exact queries.
In effect, the neural network is applying logic and reasoning to its
memories, learning from knowledge and experience, not unlike our own fleshy
human brains.
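The "short-term memory attached to a neural net" idea can be caricatured in a few lines. The sketch below is a hypothetical toy, not Facebook's actual Memory Networks (which learn the retrieval and response steps with trained neural components): facts are written into a memory store, and a question is answered by retrieving the best-matching fact by word overlap.

```python
# Toy sketch of attaching a short-term memory to a QA system (hypothetical,
# not Facebook's implementation). Facts are stored as they arrive; a question
# retrieves the memory with the largest word overlap and answers with the
# words the question did not already contain.

def tokens(text):
    return set(text.lower().rstrip(".?").split())

class ToyMemoryQA:
    def __init__(self):
        self.memory = []          # short-term store of observed facts

    def observe(self, fact):
        self.memory.append(fact)  # ingest new data without retraining

    def answer(self, question):
        q = tokens(question)
        # retrieve the memory sharing the most words with the question
        best = max(self.memory, key=lambda m: len(tokens(m) & q))
        # respond with the words in that memory the question didn't mention
        leftover = [w for w in best.rstrip(".").split() if w.lower() not in q]
        return " ".join(leftover)

qa = ToyMemoryQA()
qa.observe("Bilbo is in the Shire.")
qa.observe("Frodo is in Mordor.")
print(qa.answer("Where is Frodo?"))  # -> "in Mordor"
```

Swapping the word-overlap scorer for a learned embedding match is, roughly, the step that turns this toy into a trainable memory network.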
Photographic memory
Applying Memory Networks to natural text-based language is only one half of
Facebook's AI research.
Schroepfer explained that adding image recognition into the mix is the way
to help Memory Networks better perceive the world, or most likely pictures
uploaded onto social media.
Facebook's image recognition tech analyses photos at a pixel level, and has
been trained to recognise patterns among the pixels so that it can distinguish
separate objects in a photo even when they overlap. This process of
segmentation then allows the AI to identify each object in the picture.
Schroepfer noted Facebook's image recognition system can do this 30 percent
faster than most other systems and through using 10 times less training data.
But the magic happens when image recognition is combined with Memory
Networks. This produces Visual Q&A, an AI system that answers questions
posed to it via manual or voice input by people with impaired vision who
want to know what a picture is composed of and what is happening in it.
Think smart, look sharp
Schroepfer highlighted how the company's AI research was exploring how it
can use image recognition to teach neural networks to perceive whether
something is going to happen from observation, rather than have an innate
understanding of the situation. This is similar to how children work out
when something is going to fall without understanding the physics behind it.
"That's how people learn; they learn by messing around with the world and
seeing what happens. And we have computer systems now that are brilliant on
a lot of things but don't understand basic physics and don't understand
operations of the world because they haven't been able to observe it," he
explained.
"So one of the other things we're trying to do is to teach computers some
basic common sense about the world, and one way we are doing this is by
stacking blocks together and showing an image of that to a computer and
asking it to determine is this stack of blocks going to fall in this case or
stand up."
According to Schroepfer, Facebook's engineers were able to build a classifier
that is over 90 percent accurate at identifying when a given stack of blocks is
going to fall, and it can in fact beat most humans at the task.
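The block-stack task is easy to state in code. The toy below is not Facebook's learned classifier (which predicts stability from raw images alone); it is the physics rule such a classifier implicitly approximates, with blocks idealised as equal-mass rectangles stacked bottom-up.

```python
# Toy stability check for a stack of blocks (illustrative physics rule, not
# Facebook's image-based classifier). A stack is a list of (x_center, width)
# pairs, bottom block first; it stands only if, at every interface, the centre
# of mass of all blocks above lies over the block below.

def stack_stands(blocks):
    """blocks: list of (x_center, width), bottom block first."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        com = sum(x for x, _ in above) / len(above)  # equal-mass blocks
        x, w = blocks[i]
        if not (x - w / 2 <= com <= x + w / 2):
            return False                             # overhang: stack falls
    return True

print(stack_stands([(0.0, 1.0), (0.2, 1.0), (0.4, 1.0)]))  # -> True
print(stack_stands([(0.0, 1.0), (0.6, 1.0), (1.2, 1.0)]))  # -> False
```

The learned system's job is harder: it must infer something like this rule purely from labelled images, without ever being given the physics.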
"It's one of many different ways we're trying to help systems understand
what's going to happen in the future and help us think about not just
reacting to what's happened but helping me plan things in the future,"
Schroepfer added. He noted how elements of this AI research were being
added to the M assistant to make it more capable of understanding complex
requests.
Beating the human
You could question why Facebook is effectively looking to create an AI with
'common sense' rather than relying on strict logic systems, which can often
work their way through all possible outcomes to come up with the right
answer.
As ever, Schroepfer provided a compelling reason, backed up by an example:
in this case, pitting a computer against a human at the Chinese board game
Go, a game at which people consistently beat computers.
This is because, unlike chess, where a computer can trump humans by working
through all the possible board configurations, Go has vastly
more moves. For example, after the first two moves in chess there are 400
possible positions; in Go there are about 130,000. This is too much information
for an AI to crunch without bursting into a silicon sweat.
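The arithmetic behind those numbers is simple board counting:

```python
# Rough branching-factor arithmetic behind the chess-vs-Go comparison
# (board sizes only; ignores illegal and symmetric positions).
chess_first_two = 20 * 20    # 20 legal white openings x 20 black replies
go_first_two = 361 * 360     # 361 first points x 360 replies on a 19x19 board
print(chess_first_two)       # -> 400
print(go_first_two)          # -> 129960, the "about 130,000" in the text
```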
So Facebook combined traditional game-playing AI techniques with deep
learning methods for Go, and connected this with image recognition.
Schroepfer said that this approach gave the AI the ability to work out from
patterns on the board what's a good move, instead of crunching thousands
upon thousands of potential moves. In short, Facebook created a Go AI that
has intuition.
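That "intuition" can be sketched as prior-based pruning: rather than expanding every legal move, the search keeps only the k moves a policy network rates highest. The priors below are a made-up stub, not real model outputs.

```python
import heapq

# Sketch of policy-guided pruning (illustrative, not Facebook's actual system).
# A real policy network would return a probability per move from the board
# image; here a stub assigns decaying scores so the example is runnable.

def policy_priors(position, legal_moves):
    # stand-in for a trained network's per-move probabilities
    return {m: 1.0 / (i + 1) for i, m in enumerate(legal_moves)}

def candidate_moves(position, legal_moves, k=3):
    # keep only the k moves the policy rates highest, instead of
    # crunching through every legal move
    priors = policy_priors(position, legal_moves)
    return heapq.nlargest(k, legal_moves, key=priors.get)

moves = ["D4", "Q16", "K10", "A1", "T19"]
print(candidate_moves(None, moves))  # -> ['D4', 'Q16', 'K10']
```

The search then explores only those candidates, which is how pattern recognition substitutes for brute-force enumeration.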
"We've built up some of the image recognition technology and connected that
together to some deep learning systems about possible good moves, and
basically in a short number of months we've built a Go AI that can beat some
of the AIs that were designed specifically for the purpose of playing Go,
and it's as good as a very good amateur player," said Schroepfer, without
sounding smug.
While the idea of beating humans may send a chill up some people's spines
and send others running deep into the internet-free zones of Wales while
screaming ‘Skynet', an army of Facebook-branded remembering, reasoning and
predicting AIs is some way off; 10 years or more, according to Schroepfer.
But the social media company is now very much a major technology industry
player, and its AI research is a good indication of how neural networks and
smart systems will be developed in the near future.
"The lesson really here is that by combining the different technologies, you
could very rapidly build something that was better than the thing that
people have been working on for many, many years, and I think this will be
one of the many ways we will see advances in AI in the future," said
Schroepfer.
He concluded with Facebook's ultimate AI destiny: "When these AI systems get
good enough, we can afford to scale it to the entire planet; it's a super
power we can give to every person on the planet."
We only hope everyone remembers that with great power comes great
responsibility. | o*****p posts: 2977 | 2 An article by the author of the Go algorithm. Ambitious stuff.
http://zhuanlan.zhihu.com/yuandong/20364622
A new approach to Go AI
Tian Yuandong (田渊栋)
After two years, I have rediscovered the feeling of rushing to finish a paper.
On the 17th I flew from California back to Pittsburgh, still running
experiments and revising the paper at 10,000 metres. The in-flight network was
dreadful: connected through VPN to a company machine, every keystroke took half
a second to echo. Luckily I had written a workable distributed framework the
night before, and with the help of collaborator @Yan Zhu, changing a few lines
of script was really all it took to launch the experiments I wanted on a
hundred machines. What is productivity? This is productivity.
The effort finally paid off: our paper on deep learning for Go is now public on arXiv at http://arxiv.org/abs/1511.06410, and has also been submitted to ICLR 2016. The central idea is to train a deep learning model to predict the next move; this alone brings the AI to a 1d-2d level on the KGS Go Server, considerably better than the results the Google DeepMind team published last year (see http://arxiv.org/abs/1412.6564). Adding traditional Monte Carlo tree search raises the strength further still, though of course we remain some distance behind the best programs (such as Zen and Silver Star).
Before this, a series of media reports on the project appeared in early
November ("Facebook enters computer Go, aims to build a new Deep Blue to defeat
humans"). But our goal is not, as the media put it, to beat top professional
players; rather, we use this platform to study where traditional AI (such as
search) and deep learning meet. Becoming a "new Deep Blue" sounds grand, but it
easily becomes a trap of getting stronger for strength's sake, piling on
patches for tiny performance gains, which is not what we want. A good research
topic should surpass previous methods in its very conception, not grind out a
few extra points over everyone else.
We always say Go is hard because it has an astronomical number of possible
games, so search gets nowhere. But that is not the only reason Go is hard.
Many problems have equally astronomical brute-force complexity, yet because
their evaluation functions are structurally simple, greedy methods or dynamic
programming can find optimal solutions without exhausting the solution space.
Go is not like that: its board evaluation function is extremely complex, an
intuition that players accumulate through long experience and that cannot be
summarised by one or two simple principles. Earlier Go programs typically
distilled such intuitions into rules and formulas to guide search, which
required the programmer to be both a fairly strong player and a good
abstractor. Even then, human effort has its limits: once the number of rules
grows, tuning the weights between them becomes exhausting work, and after an
enormous amount of time one is often still far from real intuition (if you look
at the code of the open-source Go programs Pachi or Fuego, the sheer number of
parameters is beyond imagining). The arrival of deep convolutional networks
makes it possible to learn this intuition directly from large amounts of data,
tearing an opening in an otherwise intractable problem that can be neither
searched nor evaluated.
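The contrast drawn here can be made concrete. The toy problem below has an astronomically large solution space (C(2n, n) monotone grid paths), yet dynamic programming solves it exactly in O(n²) because a cell's optimal value decomposes cleanly into its two neighbours; Go's board evaluation admits no such decomposition. (Illustrative example, not from the paper.)

```python
# A problem with a huge search space but a simple value structure: the
# maximum-sum monotone path (moving only right or down) through an n x n
# grid. Enumerating all C(2n, n) paths is infeasible for large n, but DP
# finds the optimum because best[i][j] depends only on two neighbours.

def max_path_sum(grid):
    n = len(grid)
    best = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prev = max(best[i - 1][j] if i else 0,
                       best[i][j - 1] if j else 0)
            best[i][j] = grid[i][j] + prev
    return best[n - 1][n - 1]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(max_path_sum(grid))  # -> 29, via the path 1, 4, 7, 8, 9
```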
Of course, between "tearing an opening" and "solving the problem" lies an
enormous distance. Current deep learning frameworks need huge amounts of
training data, so they do poorly in rare situations. Our network, although
described by players on KGS as "feels like playing a real person" and
"excellent whole-board judgment", is still sometimes at a loss in ladder
positions that even beginners handle correctly (laughs). Local fighting also
leaves plenty of room for improvement. We could of course add some simple rules
to cover most ladders and some local fights (covering them all would be very
hard), and the program would certainly get stronger, but that would betray our
intent to solve the problem with a relatively clean model. Countless historical
examples show that the clean approach goes further in the long run.
Going forward, we hope that by analysing the network we can understand how it
learns and decides: how it distils large amounts of experience and applies them
to produce the next move. Once we understand how it learns, we hope to find
further improvements. For example, a person can learn a great deal from
life-and-death problems and apply the lessons flexibly in real games; a person
can watch a few games, see where the problems are, and change their own
responses accordingly. Can a neural network do the same? Also, although a human
cannot search on the massive scale a machine can, when facing an unfamiliar
position a person still reasons a few moves ahead, recombining past experience
"in some way" to handle the current situation; even at shallow depth, this is
far more efficient than blind brute-force search. Long-term pattern recognition
learned from experience, combined with efficient online reasoning (search), may
be a key to small-sample problems. Reasoning itself also implies logic: a
logical person's intuitive inferences are usually correct, and they can use
logic to verify their own reasoning and find and fix the holes; an illogical
person, when reasoning, just gropes blindly for the next step and drifts
wherever it leads. Whether deep learning can achieve this deserves further
study.
Finally, if you are interested, you can go to the KGS Go Server and play a few
games against our two AIs, named darkforest and darkfores1 (laughs); the
stronger darkfores2 and an improved version with Monte Carlo tree search will
be released later.
Thanks for your attention.
==============================
Updates and answers to some questions:
1. All three AIs are now up on KGS. darkforest is 1d, darkfores1 is 2d, and the
newly added darkfores2 fluctuates between 2d and 3d, and once reached 4d.
Overall these AIs are streaky: they can take down a 5d and also lose to a 4k,
and without search their weaknesses in certain positions are obvious. Even so,
players on KGS agree that for an AI that uses no search at all and plays purely
by "feel", this is unprecedented.
2. So far our method is much better than the existing results from the
University of Edinburgh and from DeepMind; more importantly, we opened a public
interface for games and actually played against human players online, which
makes the results in our paper far more solid and credible.
3. The source code will not be released for now. All the datasets were
collected from the web and are completely public; everyone is welcome to
reproduce the methods in our paper. I believe better papers will follow.
4. The name darkforest comes from my being a fan of The Three-Body Problem;
The Dark Forest is my personal favourite of the trilogy.
5. Two of us in the group work on Go, and I am the only one full-time. The main
algorithms and over ninety percent of the code, including the fast move
generation, the neural network training, and the Monte Carlo tree search, I
wrote myself.
| r****y posts: 26819 | 3 Not convincing. If it really had learning ability, then we should see what
level it reaches after some number of games, and what level after more games;
put differently, the variance should shrink as the number of games grows.
This target looks fixed and one-shot, with no way to keep improving.
| o*****p posts: 2977 | 4 Ordinary people are the same: they learn up to a few dan and then stall; they
don't turn professional just by playing more. The computer merely learns faster
than a human and hits its plateau sooner. As for the progression you describe:
with fewer game records as input, its strength is necessarily lower, which
seems inevitable. At present the algorithm's ceiling is about 2-3d.
Considering the author only started the project this year, I find that quite
impressive.
| r****y posts: 26819 | 5 That's not what I mean. The neural network here is only used to tune some
parameters; it cannot keep learning and improving. The author has already used
140,000 game records, but the rating on KGS has basically plateaued, and it
won't move without manually re-tuning the parameters.
The somewhat opportunistic part is pairing it with MCTS, with the neural
network merely along for the ride tuning parameters. In fact, most of the time
MCTS does not adopt the neural network's output: between moves 300 and 400 the
rate at which MCTS adopts the network's suggestion is 30.6%, and before that it
is around 23%. By move 300 the game is usually well into the endgame; if each
endgame move follows the network only 30% of the time, isn't this network just
a bystander throwing out random suggestions?
Reaching 1d in a few months is decent, but how do you tell whether that 1d is
the work of MCTS or of the neural network? If the author pulled the MCTS part
out as a pure-MCTS program, I'd guess it would be close to 1d even on its own.
In other words, the neural network and the learning story lack persuasive
evidence; the paper itself admits that the pure neural network is far weaker
than the MCTS-plus-network hybrid, which suggests the network is a gimmick and
MCTS does the real work. Other people's MCTS programs passed 1d long ago, so
where is the value of this paper and project? Since they compared against a
pure neural network program, why not dare to compare against their own
pure-MCTS version as well?
| o*****p posts: 2977 | 6 From the summary it looks like the neural network alone reaches 1-2d, and adding MCTS goes higher.
http://www.lanke.cc/forum/portal.php?mod=view&aid=387