CS board - Vapnik wins the 2008 ACM Paris Kanellakis Theory and Practice Award
B****x
posts: 17
1
The ACM Paris Kanellakis Theory and Practice Award honors theoretical
discoveries that have had a significant impact on the practice of computing.
http://awards.acm.org/kanellakis/
2008
Cortes, Corinna
Vapnik, Vladimir
Paris Kanellakis Theory and Practice Award
2008 – Vladimir Vapnik
Fellow of NEC Laboratories/Columbia University (2008)
Citation
For the development of Support Vector Machines, a highly effective
algorithm for classification and related machine learning problems.
Full Citation
In their 1995 paper titled "Support-Vector Networks," Cortes and Vapnik
introduced support vector machines (SVMs).
In typical computational learning problems, the designer first provides
training data by labeling data as positive and negative. The data, for
instance, can be profiles of typical customers, and is modeled as a
point in a high-dimensional geometric space, where each dimension
corresponds to a user attribute such as age.
The label for each data point can record, for instance, whether the
corresponding customer likes or dislikes a particular product. The
learning algorithm then computes a decision boundary that separates
positive examples from the negative, so that in the future, this
boundary can be used to predict whether a new customer will like or
dislike the product based on her profile. Prior to the publication of
this paper, this method could be applied only when a decision boundary
specifiable by a hyperplane (that is, a linear constraint over attribute
variables) was guaranteed to exist for the training data.
Unfortunately, this is rarely the case for real-world training data, which
contains many errors. The paper by Cortes and Vapnik revolutionized the
field by providing for the first time an accurate algorithm that
converges in polynomial time even when the training data is not
separable by a hyperplane.
This is achieved by mapping training data points to a higher-dimensional
space, and then determining a hyperplane that minimizes the error on the
training set, while maximizing the distance, or the margin, between the
positive and negative examples. The margin-maximizing solution is
obtained by solving a quadratic optimization problem, and works well in
practice.
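The margin-maximizing idea described above can be sketched in a few lines. The following is a minimal soft-margin linear SVM trained by subgradient descent on the hinge loss — a modern simplification, not the quadratic-programming procedure of the 1995 paper; the toy data, function names, and parameter values are all illustrative.

```python
def train_svm(points, labels, lam=0.01, lr=0.1, epochs=200):
    """Soft-margin linear SVM via subgradient descent on the hinge loss.

    Minimizes lam/2 * ||w||^2 + average of max(0, 1 - y * (w.x + b)).
    points: list of (x1, x2) pairs; labels: +1 or -1.
    """
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:
                # Inside the margin or misclassified: the hinge term pushes the
                # boundary away from this point; the lam term keeps the margin wide.
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # Correctly classified with room to spare: only regularization acts.
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

def predict(w, b, p):
    return 1 if w[0] * p[0] + w[1] * p[1] + b >= 0 else -1

# Two well-separated toy "customer profile" clusters (dislike = -1, like = +1).
pts = [(-2.0, -1.5), (-1.5, -2.0), (-2.5, -2.5), (2.0, 1.5), (1.5, 2.0), (2.5, 2.5)]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_svm(pts, ys)
print([predict(w, b, p) for p in pts])  # prints [-1, -1, -1, 1, 1, 1]
```

Because the hinge loss tolerates points inside the margin, this same loop runs unchanged on data that no hyperplane separates perfectly — which is exactly the robustness the citation credits.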
With this algorithm, the machine learning community was armed with a
general technique that could be used in noisy real-world scenarios. The
theoretical foundations articulated in this paper have inspired
extensive research making SVM one of the most frequently used algorithms
in the machine learning literature. The algorithm and its variants have
been used successfully in a wide variety of practical applications such
as handwriting recognition, speech synthesis, medical diagnosis, protein
structure prediction, face detection, weather forecasting, intrusion
detection, and text mining.
p*********g
posts: 226
2
Basically, Vapnik laid the foundations first, and then
Corinna Cortes extended it to kernels (Wahba's work doesn't really count).
Bennett and Mangasarian proposed the soft-margin loss in 1992 (though not in an SVM context).
Even earlier, RKHS theory was developed by Aronszajn and coworkers in the 1950s.
Only after kernels + soft margins came SVM's roughly 10-15 years of glory, though
by now it has given way to sparse/low-rank methods.
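The kernel idea the reply credits can be illustrated with an even simpler learner. The sketch below (not from the post) uses a kernel perceptron — a lightweight relative of the kernel SVM — with an RBF kernel to classify XOR-style data that no hyperplane in the original 2-D space can separate; all names and parameter values are illustrative.

```python
import math

def rbf(a, b, gamma=1.0):
    """RBF kernel: an inner product in an implicit infinite-dimensional space."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def kernel_perceptron(points, labels, kernel, epochs=20):
    """Perceptron in kernel form: only dual counts alpha[i] are stored."""
    alpha = [0] * len(points)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(points, labels)):
            s = sum(a * yl * kernel(p, x)
                    for a, yl, p in zip(alpha, labels, points))
            if y * s <= 0:          # mistake: add this point to the expansion
                alpha[i] += 1
    return alpha

def predict(alpha, points, labels, kernel, x):
    s = sum(a * y * kernel(p, x) for a, y, p in zip(alpha, labels, points))
    return 1 if s >= 0 else -1

# XOR labeling: provably not separable by any hyperplane over (x1, x2).
pts = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
ys = [1, 1, -1, -1]
alpha = kernel_perceptron(pts, ys, rbf)
print([predict(alpha, pts, ys, rbf, p) for p in pts])  # prints [1, 1, -1, -1]
```

The same dual trick — replacing every inner product with a kernel evaluation — is what turns the linear SVM into the kernel SVM; the soft-margin loss then handles the noise, which is the combination the reply says powered the field for a decade.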