References:
https://en.wikipedia.org/wiki/Inductive_bias
http://blog.sina.com.cn/s/blog_616684a90100emkd.html
Tom M. Mitchell, Machine Learning
I have marked in red what I consider the key content below:
mokuram (mokuram) on Tue Jan 4 05:22:24 2005 wrote:
It is the bias a learner carries into learning. (That phrasing is not very precise.) Take decision tree classifiers: many decision tree learners adopt an inductive bias such as the Occam's razor principle, i.e., among all the decision trees that solve the problem, prefer the simplest one. For a detailed discussion, see Tom's MACHINE LEARNING; the Chinese edition is available domestically.

faiut (繁星满天) on Tue Jan 4 10:25:09 2005 wrote:
I always find this concept fuzzy; I still cannot describe it in my own words.

jueww (觉·Hayek) on Tue Jan 4 13:02:01 2005 wrote:
I prefer the word "preference" (偏好). It roughly corresponds to model complexity and the like.

mokuram (mokuram) on Tue Jan 4 13:53:03 2005 wrote:
归纳偏置 is the standard term; the English is "inductive bias".

jueww (觉·Hayek) on Tue Jan 4 20:16:58 2005 wrote:
But translating it into Chinese as 偏置 does not seem right... In bias-and-variance analysis, 偏置 or 偏离 is roughly appropriate.

mokuram (mokuram) on Wed Jan 5 00:46:37 2005 wrote:
That is exactly how Mr. Zeng Huajun translated it in the Chinese edition of Tom's MACHINE LEARNING. MACHINE LEARNING is a very famous textbook abroad, and Mr. Zeng's translation is quite good.

jueww (觉·Hayek) on Wed Jan 5 10:05:22 2005 wrote:
Either way it is a matter of personal preference; nobody takes papers written in Chinese seriously anyway. Translating a term really requires a deep command of both the Chinese and the English of the field. 偏置 is an electronics term, so it easily causes misunderstanding. "Bias" has more than one meaning in ML; it is a bit muddled even in English, otherwise you would not have this doubt. Wouldn't it be better if Chinese could express the two meanings with different characters? Back to the topic: I have not read the textbook as carefully as you have. I never found Tom's book very useful, so I did not read it closely; I have been guessing the word's meaning entirely from the contexts in which it appears in the literature. Substituting "model complexity" or "representation ability" for "bias" generally seems fine to me, but now that you ask, I realize I do not actually know what this thing means... I just looked it up online, and it suddenly became clear, heh:

Informally speaking, the inductive bias of a machine learning algorithm refers to additional assumptions that the learner will use to predict correct outputs for situations that have not been encountered so far. In machine learning one aims at the construction of algorithms that are able to learn to predict a certain target output. For this the learner will be presented a limited number of training examples that demonstrate the intended relation of input and output values. After successful learning, the learner is supposed to approximate the correct output, even for examples that have not been shown during training. Without any additional assumptions, this task cannot be solved, since unseen situations might have an arbitrary output value. The kind of necessary assumptions about the nature of the target function are subsumed in the term inductive bias. A classical example of an inductive bias is Occam's razor, assuming that the simplest consistent hypothesis about the target function is actually the best.
Here consistent means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm. Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. Unfortunately, this strict formalism fails in many practical cases, where the inductive bias can only be given as a rough description (e.g. in the case of neural networks).

Basically the same as what I had guessed...

NeuroNetwork (刮开有奖:=>███████) on Wed Jan 5 13:30:58 2005 wrote:
These two "bias"es are not the same thing at all.

NeuroNetwork (刮开有奖:=>███████) on Wed Jan 5 14:26:11 2005 wrote:
DT's bias is first of all disjunctive probability similarity, and only then "the shorter the better".

ihappy (人似秋鸿来有信) on Thu Jan 6 10:13:19 2005 wrote:
This actually got marked? Isn't that misleading people? The English passage is correct, and "translating a term really requires a deep command of both the Chinese and the English of the field" is also right, but everything else is wrong: bias and model complexity / representation ability are completely different things.

jueww (觉·Hayek) on Thu Jan 6 13:06:18 2005 wrote:
They are indeed different. But I still think they are roughly the same thing. At bottom, both are about a model's ability to generalize; they are the same thing expressed from different angles. It is just that bias can be stated more precisely when tied to a concrete classification algorithm. When bias is discussed in the abstract, though, I genuinely do not see what bias adds beyond model representation ability; please enlighten me. "The inductive bias of a machine learning algorithm refers to additional assumptions that the learner will use to predict correct outputs for situations that have not been encountered so far." This "additional assumption" I read as the model's representation ability; it is just that bias is stated with respect to the learning algorithm, while representation is stated with respect to the classification model. Mitchell and Dietterich like to say bias, while Vapnik likes to say model complexity.

faiut (繁星满天) on Thu Jan 6 22:21:10 2005 wrote:
The concept used to be fuzzy to me; after reading your explanation it suddenly became clear. Thanks.

jueww (觉·Hayek) on Thu Jan 6 22:28:13 2005 wrote:
Heh, we help each other out, why not. Besides, anyone who has really worked on something runs into the same issues that most books never mention... you can only work them out for yourself. That is how development goes, and presumably so-called research as well.

ihappy (人似秋鸿来有信) on Fri Jan 7 01:04:02 2005 wrote:
Actually, Mitchell's book covers this part very well. First, he gives an example showing that any bias-free learner is fruitless: it cannot classify any unseen sample. In other words, a learner without bias has no generalizability at all. This is different from model complexity: if you choose an inappropriate model complexity, generalization merely gets worse, but it still exists. So this so-called inductive bias is your PRIOR assumption about the learner. The English word "bias" is appropriate here; as for what the Chinese translation should be, I have not found anything suitable myself, and of the options I know, 偏置 is usable. Second, inductive bias is closely related to Occam's razor, because the usual prior assumption, i.e., the inductive bias, is to choose Occam's razor, or in other words to choose a suitably low-complexity model; but the two are not equivalent. For example, the inductive bias of candidate elimination is that a solution exists (i.e., the version space is non-empty); the inductive bias of decision trees is shorter trees (which is close to model complexity) plus placing high-information-gain attributes higher up (which is not model complexity). Third, inductive bias is mainly a concept with poor practical utility: apart from a handful of simple learners, one can hardly state what the inductive bias of other learners is, and it offers little guidance for real applications. But for machine learning researchers, this concept, and its difference from model complexity, must be understood.

jueww (觉·Hayek) on Fri Jan 7 01:23:55 2005 wrote:
Point taken. But I still do not get it, nor do I feel the need to... When the literature uses bias to refer to different classifiers and compare them, I read the intent as comparing their complexity, representation ability, and generalization ability, whereas you hold that these examples are not about comparison across classifiers. Yet the reality is that the literature does use bias loosely across all kinds of classifiers. Below are the title and abstract of one paper. If it were a prior, could it still be controlled? If you substitute model complexity instead, it becomes easy to understand.

Control of inductive bias in supervised learning using evolutionary computation: a wrapper-based approach
Source: Data Mining: Opportunities and Challenges, pages 27-54, 2003. ISBN 1-59140-051-1
Author: William H. Hsu, Kansas State University

In this chapter, I discuss the problem of feature subset selection for supervised inductive learning approaches to knowledge discovery in databases (KDD), and examine this and related problems in the context of controlling inductive bias. I survey several combinatorial search and optimization approaches to this problem, focusing on data-driven, validation-based techniques. In particular, I present a wrapper approach that uses genetic algorithms for the search component, using a validation criterion based upon model accuracy and problem complexity as the fitness measure. Next, I focus on design and configuration of high-level optimization systems (wrappers) for relevance determination and constructive induction, and on integrating these wrappers with elicited knowledge on attribute relevance and synthesis. I then discuss the relationship between this model selection criterion and those from the minimum description length (MDL) family of learning criteria. I then present results on several synthetic problems on task-decomposable machine learning and on two large-scale commercial data-mining and decision-support projects: crop condition monitoring, and loss prediction for insurance pricing. Finally, I report experiments using the Machine Learning in Java (MLJ) and Data to Knowledge (D2K) Java-based visual programming systems for data mining and information visualization, and several commercial and research tools. Test set accuracy using a genetic wrapper is significantly higher than that of decision tree inducers alone and is comparable to that of the best extant search-space based wrappers.
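ihappy's central point, that a bias-free learner cannot classify any unseen sample while a biased learner can generalize, can be sketched in a few lines. This is a toy illustration only: the rote learner, the threshold learner, the cut-point rule, and the data are all hypothetical, not from the thread or from Mitchell's book, and the threshold learner assumes a one-dimensional target where negatives lie below positives.

```python
# Toy sketch: a "bias-free" rote learner vs. a learner with an inductive bias.

def rote_learner(training_data):
    """Bias-free: memorizes examples, makes no assumption about unseen inputs."""
    table = dict(training_data)
    def predict(x):
        return table.get(x)  # None for any input not seen during training
    return predict

def threshold_learner(training_data):
    """Biased: assumes the target is a threshold function of x -- an
    'additional assumption' in the sense of the quoted definition.
    (Assumes all negative examples lie below all positive ones.)"""
    positives = [x for x, y in training_data if y == 1]
    negatives = [x for x, y in training_data if y == 0]
    # Simplest consistent hypothesis (Occam's razor): a single cut point.
    cut = (max(negatives) + min(positives)) / 2
    def predict(x):
        return 1 if x > cut else 0
    return predict

train = [(1, 0), (2, 0), (8, 1), (9, 1)]
rote = rote_learner(train)
biased = threshold_learner(train)

print(rote(2), rote(7))      # 0 None -> no answer at all for the unseen input 7
print(biased(2), biased(7))  # 0 1    -> the bias makes generalization possible
```

The rote learner is "consistent" on the training set yet says nothing about unseen inputs, which is ihappy's distinction from model complexity: a model with poorly chosen complexity generalizes badly, but a learner with no bias does not generalize at all.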