School of Computer Science and Technology, Dalian University of Technology

Clustering
School of Computer Science and Technology, Dalian University of Technology
Spring 2010

Google News
- They didn't pick all 3,400,217 related articles by hand
- Or Amazon.com
- Or Netflix

Others
- Hospital records
- Scientific imaging: related genes, related stars, related sequences
- Market research: segmenting markets, product positioning
- Social network analysis
- Data mining
- Image segmentation

What is clustering?
- Clustering: the process of grouping a set of objects into classes of similar objects
- Documents within a cluster should be similar
- Documents from different clusters should be dissimilar

[Figure: a data set with clear cluster structure]
How would you design an algorithm for finding the three clusters in this case?

Google News: automatic clustering gives an effective news presentation metaphor.

For improving search recall
- Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs
- Therefore, to improve search recall:
  - Cluster the docs in the corpus a priori
  - When a query matches a doc D, also return other docs in the cluster containing D
- The hope: the query "car" will also return docs containing "automobile", because clustering grouped the docs containing "car" together with those containing "automobile"

Issues for clustering
- Representation for clustering
  - Document representation: vector space? Normalization? (Centroids aren't length normalized)
  - Need a notion of similarity/distance
- How many clusters?
  - Fixed a priori, or completely data driven?
  - Avoid "trivial" clusters, whether too large or too small. In an application, if a cluster is too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.

The distance measure
How the similarity of two elements in a set is determined, e.g.:
- Euclidean distance
- Manhattan distance
- Inner product space
- Maximum norm
- Or any metric you define over the space

Types of algorithms: hierarchical vs. partitional clustering
- Hierarchical clustering: builds or breaks up a hierarchy of clusters
- Partitional clustering: partitions the set into all clusters simultaneously

K-means clustering
A simple partitional clustering algorithm:
- Choose the number of clusters, k
- Choose k points to be the cluster centers
- Then iterate (a code sketch appears at the end of this section, after the canopy discussion):
  - Compute the distance from all points to all k centers
  - Assign each point to the nearest k-center
  - Compute the average of all points assigned to each k-center
  - Replace the k-centers with the new averages

But!
- The complexity is pretty high: k * n * O(distance metric) * num(iterations)
- Moreover, it can be necessary to send tons of data to each mapper node. Depending on your bandwidth and memory available, this could be impossible.

Furthermore, there are three big ways a data set can be large:
- There are a large number of elements in the set
- Each element can have many features
- There can be many clusters to discover
Conclusion: clustering can be huge, even when you distribute it.

Canopy clustering
- A preliminary step to help parallelize the computation
- Clusters the data into overlapping canopies using a super-cheap distance metric
- Efficient
- Accurate

The canopy algorithm (also sketched in code at the end of this section):
While there are unmarked points:
- Pick a point which is not strongly marked; call it a canopy center
- Mark all points within some threshold of it as in its canopy
- Strongly mark all points within some stronger (tighter) threshold

After the canopy clustering
- Resume hierarchical or partitional clustering as usual
- Treat objects in separate canopies as being at infinite distances
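To make the k-means loop above concrete, here is a minimal single-machine sketch in Python. Everything in it (the function names, random initialization, and fixed iteration count) is illustrative rather than taken from the slides:

```python
import random

def euclidean_sq(a, b):
    # Squared Euclidean distance; enough for nearest-center comparisons.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iterations=20):
    # Choose k initial centers (here: k random points from the data set).
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to the nearest of the k centers.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: euclidean_sq(p, centers[i]))
            clusters[nearest].append(p)
        # Replace each center with the average of its assigned points.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(dim) / len(members)
                                   for dim in zip(*members))
    return centers
```

Each assignment pass performs k * n distance evaluations, which is exactly the k * n * O(distance metric) * num(iterations) cost quoted above; the MapReduce implementation later in the lecture exists to spread that work out.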
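And a matching sketch of the canopy procedure. Here t1 > t2 correspond to the "some threshold" and "stronger threshold" on the slide, and cheap_distance stands in for whatever super-cheap metric the application supplies; all names are assumptions:

```python
def canopy_clustering(points, cheap_distance, t1, t2):
    # t1: loose "in this canopy" threshold; t2 < t1: tight "strongly marked" one.
    assert t2 < t1
    canopies = []            # (center index, member indices); canopies may overlap
    strongly_marked = set()  # points that may no longer become canopy centers
    candidates = list(range(len(points)))
    while candidates:
        center = candidates.pop(0)
        if center in strongly_marked:
            continue  # already strongly marked: cannot become a canopy center
        members = []
        for i, p in enumerate(points):
            d = cheap_distance(points[center], p)
            if d < t1:
                members.append(i)        # within the loose threshold: in this canopy
            if d < t2:
                strongly_marked.add(i)   # within the tight threshold: never a center
        canopies.append((center, members))
    return canopies
```

Because canopies overlap, every point lands in at least one canopy, and the expensive clustering that follows only needs to compare points sharing a canopy; everything else is treated as infinitely far apart.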
MapReduce implementation: the problem
Efficiently partition a large data set (say, movies with user ratings!) into a fixed number of clusters using canopy clustering, k-means clustering, and a Euclidean distance measure.

The distance metrics
- The canopy metric ($, cheap)
- The k-means metric ($$$, expensive)

Steps!
1. Get the data into a form you can use (MR)
2. Pick canopy centers (MR)
3. Assign data points to canopies (MR)
4. Pick k-means cluster centers, run the k-means algorithm (MR), and iterate!

Step 1 (Netflix): raw data (sketch below)
- Movie recommendation data: movieID, userID, rating, dateRated
- The mapper should parse each line of input data and map each movieID to a (userID, rating) pair
- The reducer should create movieID -> list of (userID, rating) pairs

Step 2: picking canopy centers (Netflix) (sketch below)
- The mapper maintains a list of the canopy centers generated so far
- If the current movie is within some "near" threshold of an existing canopy center, do nothing
- Otherwise, emit it as an intermediate value and add it to the already-created list
- The reducer does the same thing; use a single reducer so that two canopy centers are not generated on top of each other
- Distance measure: the number of userIDs that two rated movies have in common

Step 3: assign movies to canopies (Netflix) (sketch below)
- Each mapper needs to load the set of canopy centers generated in step 2

Step 4: k-means iteration (Netflix) (sketch below)
- The mapper receives a movie, its (userID, rating) pairs, and its canopies, and emits the movie's data keyed by its chosen k-center
- The reducer receives a k-center and all movies bound to that k-center, and calculates the new position of the k-center

Elbow criterion
- A heuristic for choosing k: plot the within-cluster variance against k and pick the value of k where the curve bends
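Here is what step 1 might look like, written as plain Python generators that simulate the MapReduce contract locally; a real job would run the mapper and reducer under Hadoop, and the toy run_map_reduce driver and all names here are assumptions for illustration:

```python
from collections import defaultdict

def run_map_reduce(lines, mapper, reducer):
    # Toy local stand-in for MapReduce: map every record, group by key
    # (the "shuffle"), then reduce each group.
    grouped = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            grouped[key].append(value)
    results = {}
    for key, values in grouped.items():
        for out_key, out_value in reducer(key, values):
            results[out_key] = out_value
    return results

def parse_mapper(line):
    # "movieID,userID,rating,dateRated" -> (movieID, (userID, rating))
    movie_id, user_id, rating, _date_rated = line.strip().split(",")
    yield movie_id, (user_id, float(rating))

def parse_reducer(movie_id, pairs):
    # movieID -> list of (userID, rating) pairs
    yield movie_id, list(pairs)
```

Feeding the raw rating lines (say, from a hypothetical ratings.txt) through run_map_reduce(lines, parse_mapper, parse_reducer) produces the movieID -> [(userID, rating), ...] table that the later steps consume.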
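Step 2's cheap metric counts shared raters, and center picking then might look like the following; NEAR_THRESHOLD and every other name here are illustrative assumptions, and note how the single reducer re-applies the mapper's filter:

```python
NEAR_THRESHOLD = 10  # illustrative: "near" = at least this many raters in common

def common_users(users_a, users_b):
    # The cheap canopy metric from the slides: userIDs two movies share.
    return len(set(users_a) & set(users_b))

def canopy_center_mapper(movies):
    # movies: iterable of (movieID, [userID, ...]) built from step 1 output.
    centers = []
    for movie_id, users in movies:
        near_existing = any(common_users(users, c_users) >= NEAR_THRESHOLD
                            for _c_id, c_users in centers)
        if not near_existing:
            centers.append((movie_id, users))
            # Single constant key so every candidate reaches the one reducer.
            yield "canopy-centers", (movie_id, users)

def canopy_center_reducer(key, candidates):
    # Must run as a single reducer: two mappers can each emit centers that
    # are "near" each other, so the same filter is applied once more here.
    centers = []
    for movie_id, users in candidates:
        if not any(common_users(users, c_users) >= NEAR_THRESHOLD
                   for _c_id, c_users in centers):
            centers.append((movie_id, users))
    yield key, centers
```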
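Step 3 is then a map-only pass where each mapper loads the step-2 centers before it starts; again a local sketch with assumed names, reusing common_users and NEAR_THRESHOLD from above:

```python
def canopy_assignment_mapper(movies, canopy_centers):
    # canopy_centers: the (movieID, [userID, ...]) list produced in step 2,
    # loaded by every mapper up front.
    for movie_id, users in movies:
        canopies = [c_id for c_id, c_users in canopy_centers
                    if common_users(users, c_users) >= NEAR_THRESHOLD]
        # A movie can fall into several overlapping canopies.
        yield movie_id, (users, canopies)
```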
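Finally, one k-means iteration. This sketch assumes each movie is a sparse {userID: rating} vector and that each k-center carries the list of canopies it belongs to, so a movie is only compared against centers sharing one of its canopies and every other center is treated as infinitely far away. All of these data structures and names are assumptions:

```python
def sparse_euclidean(ratings_a, ratings_b):
    # Euclidean distance between two sparse {userID: rating} vectors.
    users = set(ratings_a) | set(ratings_b)
    return sum((ratings_a.get(u, 0.0) - ratings_b.get(u, 0.0)) ** 2
               for u in users) ** 0.5

def kmeans_mapper(movie_id, ratings, canopies, k_centers):
    # k_centers: list of (center_id, {userID: rating}, [canopy ids]).
    candidates = [(c_id, c_ratings)
                  for c_id, c_ratings, c_canopies in k_centers
                  if set(canopies) & set(c_canopies)]
    if candidates:
        best_id, _ = min(candidates,
                         key=lambda c: sparse_euclidean(ratings, c[1]))
        yield best_id, (movie_id, ratings)

def kmeans_reducer(center_id, movies):
    # New center position: per-user average rating over the bound movies.
    totals, counts = {}, {}
    for _movie_id, ratings in movies:
        for user, r in ratings.items():
            totals[user] = totals.get(user, 0.0) + r
            counts[user] = counts.get(user, 0) + 1
    yield center_id, {u: totals[u] / counts[u] for u in totals}
```

Each full mapper-plus-reducer pass is one k-means iteration; the driver re-runs the job with the updated centers until they stop moving, which is the "iterate!" in the step list above.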