Artificial Intelligence - Machine Learning (Lecture Slides)

What is Learning?
- Herbert Simon: "Learning is any process by which a system improves performance from experience."
- What is the task?
  - Classification
  - Problem solving / planning / control

Classification
- Assign an object or event to one of a given finite set of categories. Examples:
  - Medical diagnosis
  - Credit card applications or transactions
  - Fraud detection in e-commerce
  - Worm detection in network packets
  - Spam filtering in email
  - Recommended articles in a newspaper
  - Recommended books, movies, music, or jokes
  - Financial investments
  - DNA sequences
  - Spoken words
  - Handwritten letters
  - Astronomical images

Problem Solving / Planning / Control
- Performing actions in an environment in order to achieve a goal. Examples:
  - Solving calculus problems
  - Playing checkers, chess, or backgammon
  - Balancing a pole
  - Driving a car or a jeep
  - Flying a plane, helicopter, or rocket
  - Controlling an elevator
  - Controlling a character in a video game
  - Controlling a mobile robot

Sample Category Learning Problem
- Instance language: <size, color, shape>
  - size: small, medium, large
  - color: red, blue, green
  - shape: square, circle, triangle
- Categories: C = {positive, negative}
- Training data D:

  Example  Size   Color  Shape     Category
  1        small  red    circle    positive
  2        large  red    circle    positive
  3        small  red    triangle  negative
  4        large  blue   circle    negative

Hypothesis Selection
- Many hypotheses are usually consistent with the training data, for example:
  - red & circle
  - (small & circle) or (large & red)
  - (small & red & circle) or (large & red & circle)
  - not ((red & triangle) or (blue & circle))
  - not ((small & red & triangle) or (large & blue & circle))
- Bias: any criterion other than consistency with the training data that is used to select a hypothesis.

Generalization
- Hypotheses must generalize to correctly classify instances not in the training data.
- Simply memorizing the training examples is a consistent hypothesis that does not generalize.
- Occam's razor: finding a simple hypothesis helps ensure generalization.

Hypothesis Space
- Restrict learned functions a priori to a given hypothesis space H of functions h(x) that can be considered as definitions of c(x).
- For learning concepts on instances described by n discrete-valued features, consider the space of conjunctive hypotheses represented by a vector of n constraints <c1, c2, ..., cn>, where each ci is either:
  - "?", a wildcard indicating no constraint on the i-th feature;
  - a specific value from the domain of the i-th feature; or
  - the empty constraint, indicating that no value is acceptable.
- Sample conjunctive hypotheses:
  - <?, ?, ?> (the most general hypothesis)
  - the all-empty vector (the most specific hypothesis)

Inductive Learning Hypothesis
- Any function that is found to approximate the target concept well on a sufficiently large set of training examples will also approximate the target function well on unobserved examples.
- Assumes that the training and test examples are drawn independently from the same underlying distribution.
- This is fundamentally unprovable unless additional assumptions are made about the target concept and the notion of "approximating the target function well on unobserved examples" is defined appropriately (cf. computational learning theory).

Evaluation of Classification Learning
- Classification accuracy (% of instances classified correctly), measured on an independent test set.
- Training time (efficiency of the training algorithm).
- Testing time (efficiency of subsequent classification).

Category Learning as Search
- Category learning can be viewed as searching the hypothesis space for one (or more) hypotheses that are consistent with the training data.
- Consider an instance space consisting of n binary features, which therefore has 2^n instances.
- For conjunctive hypotheses there are 4 choices for each feature (empty, T, F, ?), so there are 4^n syntactically distinct hypotheses. However, all hypotheses with one or more empty constraints are equivalent, so there are 3^n + 1 semantically distinct hypotheses.
- The target binary categorization function could in principle be any of the 2^(2^n) possible functions on n input bits.
- Conjunctive hypotheses are therefore a small subset of the space of possible functions, but both spaces are intractably large.
- All reasonable hypothesis spaces are intractably large or even infinite.
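To make the conjunctive-hypothesis representation and the consistency test concrete, the following short Python sketch (not part of the original slides; the names DATA, matches, and consistent are illustrative) encodes hypotheses over the <size, color, shape> language above and checks them against the four training examples. Filtering an enumeration of all such hypotheses with consistent() is exactly the brute-force search described in the Learning by Enumeration slide that follows.

    # Minimal sketch: conjunctive hypotheses over <size, color, shape>.
    # "?" means "any value"; all names here are illustrative, not from the slides.

    DATA = [  # (size, color, shape), category
        (("small", "red", "circle"), True),
        (("large", "red", "circle"), True),
        (("small", "red", "triangle"), False),
        (("large", "blue", "circle"), False),
    ]

    def matches(hypothesis, instance):
        # A conjunctive hypothesis matches an instance if every constraint
        # is either the wildcard "?" or equals the instance's feature value.
        return all(c == "?" or c == v for c, v in zip(hypothesis, instance))

    def consistent(hypothesis, data):
        # Consistent = predicts positive on exactly the positive examples.
        return all(matches(hypothesis, x) == label for x, label in data)

    print(consistent(("?", "red", "circle"), DATA))   # True: red & circle fits D
    print(consistent(("?", "red", "?"), DATA))        # False: matches negative example 3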
Learning by Enumeration
- For any finite or countably infinite hypothesis space, one can simply enumerate and test hypotheses one at a time until a consistent one is found:
    For each h in H:
        if h is consistent with the training data D, then terminate and return h.
- This algorithm is guaranteed to terminate with a consistent hypothesis if one exists; however, it is computationally intractable for almost any practical problem.

Efficient Learning
- Is there a way to learn conjunctive concepts without enumerating them?
- How do human subjects learn conjunctive concepts?
- Is there a way to efficiently find an unconstrained Boolean function consistent with a set of discrete-valued training instances? If so, is it a useful, practical algorithm?

Conjunctive Rule Learning
- Conjunctive descriptions are easily learned by finding all commonalities shared by all positive examples.
- Must check consistency with the negative examples; if inconsistent, no conjunctive rule exists.

  Example  Size   Color  Shape     Category
  1        small  red    circle    positive
  2        large  red    circle    positive
  3        small  red    triangle  negative
  4        large  blue   circle    negative

- Learned rule: red & circle -> positive

Limitations of Conjunctive Rules
- If a concept does not have a single set of necessary and sufficient conditions, conjunctive learning fails.

  Example  Size    Color  Shape     Category
  1        small   red    circle    positive
  2        large   red    circle    positive
  3        small   red    triangle  negative
  4        large   blue   circle    negative
  5        medium  red    circle    negative

- Learned rule: red & circle -> positive, which is inconsistent with negative example 5!

Decision Trees
- Tree-based classifiers for instances represented as feature vectors. Internal nodes test features, there is one branch for each value of the feature, and leaves specify the category.
- Can represent arbitrary conjunctions and disjunctions, and therefore any classification function over discrete feature vectors.
- Can be rewritten as a set of rules, i.e. in disjunctive normal form (DNF), for example:
  - red & circle -> pos
  - red & square -> neg
  - red & triangle -> neg
  - blue -> neg
  - green -> pos
- (Figure: two example decision trees, one splitting on color and then shape as in the rules above, and one for a Boolean function of A, B, and C.)

Properties of Decision Tree Learning
- Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g. length < 3 and length >= 3).
- Classification trees have discrete class labels at the leaves; regression trees allow real-valued outputs at the leaves.
- Algorithms for finding consistent trees are efficient enough to process large amounts of training data for data-mining tasks.
- Methods have been developed for handling noisy training data (both class and feature noise) and for handling missing feature values.

Top-Down Decision Tree Induction
- Recursively build a tree top-down by divide and conquer: split the examples on one feature (e.g. color), then recursively split each impure subset on another feature (e.g. shape) until the subsets are pure.
- (Figure: the positive and negative examples are partitioned first by color and then by shape, yielding leaves labeled pos and neg.)

Decision Tree Induction Pseudocode
    DTree(examples, features) returns a tree:
        If all examples are in one category,
            return a leaf node with that category label.
        Else if the set of features is empty,
            return a leaf node with the category label that is most common in examples.
        Else:
            Pick a feature F and create a node R for it.
            For each possible value vi of F:
                Let examples_i be the subset of examples that have value vi for F.
                Add an outgoing edge E to node R labeled with the value vi.
                If examples_i is empty,
                    attach a leaf node to edge E labeled with the category that is most common in examples.
                Else
                    call DTree(examples_i, features - {F}) and attach the resulting tree as the subtree under edge E.
            Return the subtree rooted at R.
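As a rough illustration of how the pseudocode above behaves on the small data sets from these slides, here is a minimal Python sketch (not from the slides; dtree, majority, DATA, and the dictionary-based tree representation are all illustrative choices). It naively picks the first unused feature at each node; the slides that follow replace that choice with an information-gain heuristic.

    # Minimal sketch of the DTree pseudocode; the split feature is chosen naively.
    from collections import Counter

    FEATURES = ["size", "color", "shape"]

    DATA = [  # illustrative encoding of the five-example data set above
        ({"size": "small", "color": "red", "shape": "circle"}, "positive"),
        ({"size": "large", "color": "red", "shape": "circle"}, "positive"),
        ({"size": "small", "color": "red", "shape": "triangle"}, "negative"),
        ({"size": "large", "color": "blue", "shape": "circle"}, "negative"),
        ({"size": "medium", "color": "red", "shape": "circle"}, "negative"),
    ]

    VALUES = {f: sorted({x[f] for x, _ in DATA}) for f in FEATURES}

    def majority(examples):
        return Counter(label for _, label in examples).most_common(1)[0][0]

    def dtree(examples, features):
        labels = {label for _, label in examples}
        if len(labels) == 1:                  # all examples in one category
            return labels.pop()
        if not features:                      # no features left: majority leaf
            return majority(examples)
        f = features[0]                       # naive choice of split feature
        node = {"feature": f, "branches": {}}
        for v in VALUES[f]:
            subset = [(x, y) for x, y in examples if x[f] == v]
            if not subset:                    # empty branch: majority of parent
                node["branches"][v] = majority(examples)
            else:
                node["branches"][v] = dtree(subset, features[1:])
        return node

    print(dtree(DATA, FEATURES))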
Picking a Good Split Feature
- The goal is for the resulting tree to be as small as possible, per Occam's razor.
- Finding a minimal decision tree (in nodes, leaves, or depth) is an NP-hard optimization problem.
- The top-down divide-and-conquer method does a greedy search for a simple tree but is not guaranteed to find the smallest. (General lesson in ML: "greed is good.")
- We want to pick a feature that creates subsets of examples that are relatively "pure" in a single class, so they are "closer" to being leaf nodes.
- There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with Quinlan's ID3 system (1979).

Entropy
- The entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is

      Entropy(S) = -p1 log2(p1) - p0 log2(p0)

  where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.
- If all examples are in one category, entropy is zero (we define 0 log(0) = 0).
- If the examples are equally mixed (p1 = p0 = 0.5), entropy is at its maximum of 1.
- Entropy can be viewed as the number of bits required on average to encode the class of an example in S, where data compression (e.g. Huffman coding) is used to give shorter codes to more likely cases.
- For multi-class problems with c categories, entropy generalizes to

      Entropy(S) = -(sum over i = 1..c of pi log2(pi)).

Entropy Plot for Binary Classification
- (Figure: entropy as a function of p1, zero at p1 = 0 and p1 = 1 and maximal at p1 = 0.5.)

Information Gain
- The information gain of a feature F is the expected reduction in entropy resulting from splitting on this feature:

      Gain(S, F) = Entropy(S) - sum over v in Values(F) of (|Sv| / |S|) * Entropy(Sv)

  where Sv is the subset of S having value v for feature F; i.e. the entropy of each resulting subset is weighted by its relative size.
- Example, starting from a set with 2 positive and 2 negative examples (E = 1):
  - Split on size: big -> 1+, 1- (E = 1); small -> 1+, 1- (E = 1); Gain = 1 - (0.5*1 + 0.5*1) = 0.
  - Split on color: red -> 2+, 1- (E = 0.918); blue -> 0+, 1- (E = 0); Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311.
  - Split on shape: circle -> 2+, 1- (E = 0.918); square -> 0+, 1- (E = 0); Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311.
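The entropy and gain figures in the example above (0.918, 0.311) can be checked with a few lines of Python. This sketch is illustrative and not part of the slides; the names entropy and information_gain are arbitrary.

    # Illustrative sketch: entropy and information gain for a binary classification.
    from math import log2

    def entropy(pos, neg):
        # Binary entropy of a set with pos positive and neg negative examples.
        total = pos + neg
        e = 0.0
        for count in (pos, neg):
            p = count / total
            if p > 0:                 # define 0 * log2(0) = 0
                e -= p * log2(p)
        return e

    def information_gain(parent, subsets):
        # parent and each subset are (pos, neg) counts; subsets partition parent.
        total = sum(parent)
        weighted = sum((p + n) / total * entropy(p, n) for p, n in subsets)
        return entropy(*parent) - weighted

    # The color split from the slide: 2+/2- parent, red -> 2+/1-, blue -> 0+/1-.
    print(round(information_gain((2, 2), [(2, 1), (0, 1)]), 3))   # 0.311
    print(round(information_gain((2, 2), [(1, 1), (1, 1)]), 3))   # size split: 0.0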
Hypothesis Space Search
- Performs batch learning that processes all training instances at once, rather than incremental learning that updates a hypothesis after each example.
- Performs hill climbing (greedy search) that may only find a locally optimal solution.
- Guaranteed to find a tree consistent with any conflict-free training set (i.e. one in which identical feature vectors are always assigned the same class), but not necessarily the simplest such tree.
- Finds a single discrete hypothesis, so there is no way to provide confidences or create useful queries.

Another Red-Circle Data Set

  Example  Size   Color  Shape    Category
  1        small  red    circle   positive
  2        large  red    circle   positive
  3        small  red    square   negative
  4        large  blue   circle   negative

Weka J48 Trace 1

  data> java weka.classifiers.trees.J48 -t figure.arff -T figure.arff -U -M 1

  Options: -U -M 1

  J48 unpruned tree
  ------------------
  color = blue: negative (1.0)
  color = red
  |   shape = circle: positive (2.0)
  |   shape = square: negative (1.0)
  |   shape = triangle: positive (0.0)
  color = green: positive (0.0)

  Number of Leaves  : 5
  Size of the tree  : 7

  Time taken to build model: 0.03 seconds
  Time taken to test model on training data: 0 seconds

A Data Set Requiring Disjunction

  Example  Size   Color  Shape     Category
  1        small  red    circle    positive
  2        large  red    circle    positive
  3        small  red    triangle  negative
  4        large  blue   circle    negative
  5        small  green  circle    positive

Weka J48 Trace 2

  data> java weka.classifiers.trees.J48 -t figure3.arff -T figure3.arff -U -M 1

  Options: -U -M 1

  J48 unpruned tree
  ------------------
  shape = circle
  |   color = blue: negative (1.0)
  |   color = red: positive (2.0)
  |   color = green: positive (1.0)
  shape = square: positive (0.0)
  shape = triangle: negative (1.0)

  Number of Leaves  : 5
  Size of the tree  : 7

  Time taken to build model: 0.02 seconds
  Time taken to test model on training data: 0 seconds

Weka J48 Trace 3

  data> java weka.classifiers.trees.J48 -t contact-lenses.arff

  J48 pruned tree
  ------------------
  tear-prod-rate = reduced: none (12.0)
  tear-prod-rate = normal
  |   astigmatism = no: soft (6.0/1.0)
  |   astigmatism = yes
  |   |   spectacle-prescrip = myope: hard (3.0)
  |   |   spectacle-prescrip = hypermetrope: none (3.0/1.0)

  Number of Leaves  : 4
  Size of the tree  : 7

  Time taken to build model: 0.03 seconds
  Time taken to test model on training data: 0 seconds

  === Error on training data ===
  Correctly Classified Instances     22    91.6667 %
  Incorrectly Classified Instances    2     8.3333 %
  Kappa statistic                     0.8447
  Mean absolute error                 0.0833
  Root mean squared error             0.2041
  Relative absolute error            22.6257 %
  Root relative squared error        48.1223 %
  Total Number of Instances          24

  === Confusion Matrix ===
   a  b  c   <-- classified as
   5  0  0 |  a = soft
   0  3  1 |  b = hard
   1  0 14 |  c = none

  === Stratified cross-validation ===
  Correctly Classified Instances     20    83.3333 %
  Incorrectly Classified Instances    4    16.6667 %
  Kappa statistic                     0.71
  Mean absolute error                 0.15
  Root mean squared error             0.3249
  Relative absolute error            39.7059 %
  Root relative squared error        74.3898 %
  Total Number of Instances          24

  === Confusion Matrix ===
   a  b  c   <-- classified as
   5  0  0 |  a = soft
   0  3  1 |  b = hard
   1  2 12 |  c = none

Evaluating Inductive Hypotheses
- Accuracy on the training data is obviously a biased estimate, since the hypothesis was constructed to fit this data.
- Accuracy must be evaluated on an independent (usually disjoint) test set.
- Average over multiple train/test splits to get an accurate measure of accuracy.
- K-fold cross-validation averages over k trials, using each example exactly once as a test case.

K-Fold Cross Validation
- Randomly partition the data D into k disjoint, equal-sized subsets P1 ... Pk.
- For i from 1 to k:
  - Use Pi as the test set and the remaining data as the training set Si = D - Pi.
  - h = L(Si)
  - delta_i = error_Pi(h)
- Return the average error over the k trials: (1/k) * (sum of the delta_i).

K-Fold Cross Validation Comments
- Every example gets used as a test example once and as a training example k-1 times.
- All test sets are independent; however, the training sets overlap significantly.
- Measures the accuracy of hypotheses generated from (k-1)/k * |D| training examples.
- The standard method is 10-fold. If k is low, there are not enough train/test trials; if k is high, the test sets are small, test variance is high, and run time increases.
- If k = |D|, the method is called leave-one-out cross-validation.

Learning Curves
- Plot accuracy vs. the size of the training set.
- Has the maximum (Bayes optimal) accuracy nearly been reached, or will more examples help?
- Is one system better than another when training data is limited?
- Most learners eventually converge to the Bayes optimal accuracy given sufficient training examples.
- (Figure: test accuracy plotted against the number of training examples, rising from random guessing toward the Bayes optimal level.)

Cross-Validation Learning Curves
- Split the data into k equal partitions.
- For trial i = 1 to k:
  - Use partition i for testing and the union of all other partitions for training.
  - For each desired point p on the learning curve, and for each learning system L:
    - Train L on the first p examples of the training set and record training time, training accuracy, and learned-concept complexity.
    - Test L on the test set, recording testing time and test accuracy.
- Compute the average of each performance statistic across the k trials.
- Plot curves of any desired performance statistic versus training-set size.
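As a compact illustration of the k-fold procedure just described, here is a self-contained Python sketch (not from the slides). The learner is a trivial majority-class predictor standing in for L, and the names majority_learner, k_fold_cv, and the toy data set are all illustrative.

    # Illustrative k-fold cross-validation, following the slide's procedure:
    # partition D into k folds, train on D - Pi, measure error on Pi, average.
    import random
    from collections import Counter

    def majority_learner(train):
        # Placeholder learner L: always predicts the most common training label.
        label = Counter(y for _, y in train).most_common(1)[0][0]
        return lambda x: label

    def k_fold_cv(data, k, learner, seed=0):
        data = data[:]                      # don't disturb the caller's list
        random.Random(seed).shuffle(data)   # random partition into k folds
        folds = [data[i::k] for i in range(k)]
        errors = []
        for i in range(k):
            test = folds[i]
            train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
            h = learner(train)
            err = sum(h(x) != y for x, y in test) / len(test)   # error_Pi(h)
            errors.append(err)
        return sum(errors) / k              # average error over the k trials

    # Toy data: 24 examples with a 2:1 label split, evaluated with 4-fold CV.
    toy = [((i,), "a" if i % 3 else "b") for i in range(24)]
    print(k_fold_cv(toy, 4, majority_learner))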