Chapter 20: Parallel Databases

Database System Concepts (Silberschatz, Korth and Sudarshan)

- Introduction
- I/O Parallelism
- Interquery Parallelism
- Intraquery Parallelism
- Intraoperation Parallelism
- Interoperation Parallelism
- Design of Parallel Systems

Introduction

- Parallel machines are becoming quite common and affordable
  - prices of microprocessors, memory, and disks have dropped sharply
- Databases are growing increasingly large
  - large volumes of transaction data are collected and stored for later analysis
  - multimedia objects like images are increasingly stored in databases
- Large-scale parallel database systems are increasingly used for:
  - storing large volumes of data
  - processing time-consuming decision-support queries
  - providing high throughput for transaction processing

Parallelism in Databases

- Data can be partitioned across multiple disks for parallel I/O.
- Individual relational operations (e.g., sort, join, aggregation) can be executed in parallel
  - data can be partitioned and each processor can work independently on its own partition.
- Queries are expressed in a high-level language (SQL, translated to relational algebra), which makes parallelization easier.
- Different queries can be run in parallel with each other; concurrency control takes care of conflicts.
- Thus, databases naturally lend themselves to parallelism.

I/O Parallelism

- Reduce the time required to retrieve relations from disk by partitioning the relations over multiple disks.
- Horizontal partitioning: the tuples of a relation are divided among many disks such that each tuple resides on one disk.
- Partitioning techniques (number of disks = n):
  - Round-robin: send the i-th tuple inserted into the relation to disk i mod n.
  - Hash partitioning:
    - choose one or more attributes as the partitioning attributes;
    - choose a hash function h with range 0 ... n - 1;
    - let i denote the result of h applied to the partitioning attribute value of a tuple; send the tuple to disk i.

I/O Parallelism (Cont.)

- Partitioning techniques (cont.):
  - Range partitioning:
    - choose an attribute as the partitioning attribute;
    - choose a partitioning vector [v0, v1, ..., vn-2];
    - let v be the partitioning attribute value of a tuple: tuples with vi ≤ v < vi+1 go to disk i + 1, tuples with v < v0 go to disk 0, and tuples with v ≥ vn-2 go to disk n - 1.
    - E.g., with partitioning vector [5, 11], a tuple with partitioning attribute value 2 goes to disk 0, a tuple with value 8 goes to disk 1, and a tuple with value 20 goes to disk 2.

Comparison of Partitioning Techniques

- Evaluate how well partitioning techniques support the following types of data access (routing rules for the three techniques are sketched in code below):
  1. Scanning the entire relation.
  2. Locating a tuple associatively (point queries), e.g., r.A = 25.
  3. Locating all tuples such that the value of a given attribute lies within a specified range (range queries), e.g., 10 ≤ r.A < 25.
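To make the three routing rules concrete, here is a minimal Python sketch of round-robin, hash, and range partitioning as just described. The function names are illustrative, and Python's built-in hash stands in for the hash function h; this is a sketch under those assumptions, not the book's code.

```python
import bisect

def route_round_robin(insert_seq: int, n: int) -> int:
    """Round-robin: send the i-th tuple inserted into the relation to disk i mod n."""
    return insert_seq % n

def route_hash(partition_attr_value, n: int) -> int:
    """Hash partitioning: a hash function with range 0..n-1 picks the disk."""
    return hash(partition_attr_value) % n

def route_range(v, partition_vector: list) -> int:
    """Range partitioning with vector [v0, ..., vn-2]:
    v < v0 -> disk 0; vi <= v < vi+1 -> disk i+1; v >= vn-2 -> disk n-1."""
    return bisect.bisect_right(partition_vector, v)

# The example from the slide: with vector [5, 11], value 2 goes to disk 0,
# value 8 to disk 1, and value 20 to disk 2.
assert route_range(2, [5, 11]) == 0
assert route_range(8, [5, 11]) == 1
assert route_range(20, [5, 11]) == 2
```

Note that bisect_right implements the range rule directly: it returns how many vector entries are ≤ v, which is exactly the disk index the slide's inequalities prescribe.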
Fragment-and-Replicate Join

- Partitioning is not applicable for some join conditions, e.g., non-equijoin conditions such as r.A > s.B.
- For joins where partitioning is not applicable, parallelization can be accomplished by the fragment-and-replicate technique (depicted on the next slide).
- Special case: asymmetric fragment-and-replicate.
  - One of the relations, say r, is partitioned; any partitioning technique can be used.
  - The other relation, s, is replicated across all the processors.
  - Processor Pi then locally computes the join of ri with all of s, using any join technique.

Depiction of Fragment-and-Replicate Joins

[Figure: (a) asymmetric fragment-and-replicate; (b) general fragment-and-replicate.]

Fragment-and-Replicate Join (Cont.)

- General case: reduces the sizes of the relations at each processor.
  - r is partitioned into n partitions r0, r1, ..., rn-1; s is partitioned into m partitions s0, s1, ..., sm-1.
  - Any partitioning technique may be used.
  - There must be at least m * n processors.
  - Label the processors as P0,0, P0,1, ..., P0,m-1, P1,0, ..., Pn-1,m-1.
  - Pi,j computes the join of ri with sj. In order to do so, ri is replicated to Pi,0, Pi,1, ..., Pi,m-1, while si is replicated to P0,i, P1,i, ..., Pn-1,i.
  - Any join technique can be used at each processor Pi,j.

Fragment-and-Replicate Join (Cont.)

- Both versions of fragment-and-replicate work with any join condition, since every tuple in r can be tested with every tuple in s.
- Usually has a higher cost than partitioning, since one of the relations (for asymmetric fragment-and-replicate) or both relations (for general fragment-and-replicate) have to be replicated.
- Sometimes asymmetric fragment-and-replicate is preferable even though partitioning could be used.
  - E.g., if s is small and r is large and already partitioned, it may be cheaper to replicate s across all processors than to repartition r and s on the join attributes.

Partitioned Parallel Hash-Join

Parallelizing the partitioned hash join (a code sketch follows these two slides):

- Assume s is smaller than r, so s is chosen as the build relation.
- A hash function h1 takes the join-attribute value of each tuple in s and maps the tuple to one of the n processors.
- Each processor Pi reads the tuples of s that are on its disk Di and sends each tuple to the appropriate processor based on hash function h1. Let si denote the tuples of relation s that are sent to processor Pi.
- As tuples of relation s are received at the destination processors, they are partitioned further using another hash function, h2, which is used to compute the hash join locally.

Partitioned Parallel Hash-Join (Cont.)

- Once the tuples of s have been distributed, the larger relation r is redistributed across the n processors using the hash function h1.
  - Let ri denote the tuples of relation r that are sent to processor Pi.
- As the r tuples are received at the destination processors, they are repartitioned using the function h2 (just as the probe relation is partitioned in the sequential hash-join algorithm).
- Each processor Pi executes the build and probe phases of the hash-join algorithm on the local partitions ri and si of r and s to produce a partition of the final result of the hash join.
- Note: hash-join optimizations can be applied to the parallel case, e.g., the hybrid hash-join algorithm can be used to cache some of the incoming tuples in memory, avoiding the cost of writing them out and reading them back in.
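The redistribute-then-join-locally structure just described can be sketched in a single process as follows. The modular h1, the dictionary-based build/probe standing in for h2, and all names are illustrative assumptions, not the book's algorithm verbatim.

```python
from collections import defaultdict

def h1(key, n):
    # Routes a tuple to one of the n processors by its join-attribute value.
    return hash(key) % n

def partitioned_parallel_hash_join(r, s, n):
    """Simulate n processors; r and s are lists of (join_key, payload) pairs,
    with s assumed to be the smaller (build) relation."""
    # Redistribution phase: matching join keys land on the same "processor".
    s_parts = [[] for _ in range(n)]
    r_parts = [[] for _ in range(n)]
    for key, payload in s:
        s_parts[h1(key, n)].append((key, payload))
    for key, payload in r:
        r_parts[h1(key, n)].append((key, payload))

    result = []
    for i in range(n):
        # Local build phase on si (the dict's own hashing plays the role of h2),
        # then probe with ri, as in the sequential hash join.
        build = defaultdict(list)
        for key, s_payload in s_parts[i]:
            build[key].append(s_payload)
        for key, r_payload in r_parts[i]:
            for s_payload in build.get(key, []):
                result.append((key, r_payload, s_payload))
    return result
```

Because h1 sends equal join keys to the same processor index, each local build/probe pass produces a disjoint slice of the final join, so concatenating the per-processor results yields the complete answer.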
Parallel Nested-Loop Join

- Assume that:
  - relation s is much smaller than relation r, and r is stored by partitioning;
  - there is an index on a join attribute of relation r at each of the partitions of relation r.
- Use asymmetric fragment-and-replicate, with relation s being replicated and the existing partitioning of relation r retained.
- Each processor Pj where a partition of relation s is stored reads the tuples of relation s stored in Dj and replicates the tuples to every other processor Pi.
  - At the end of this phase, relation s is replicated at all sites that store tuples of relation r.
- Each processor Pi performs an indexed nested-loop join of relation s with the i-th partition of relation r.

Other Relational Operations

- Selection σθ(r):
  - If θ is of the form ai = v, where ai is an attribute and v a value, and r is partitioned on ai, the selection is performed at a single processor.
  - If θ is of the form l ≤ ai ≤ u (i.e., θ is a range selection) and the relation has been range-partitioned on ai, the selection is performed at each processor whose partition overlaps with the specified range of values.
  - In all other cases, the selection is performed in parallel at all the processors.

Other Relational Operations (Cont.)

- Duplicate elimination:
  - perform it using either of the parallel sort techniques, eliminating duplicates as soon as they are found during sorting;
  - alternatively, partition the tuples (using either range or hash partitioning) and perform duplicate elimination locally at each processor.
- Projection:
  - projection without duplicate elimination can be performed as tuples are read in from disk in parallel;
  - if duplicate elimination is required, any of the above duplicate-elimination techniques can be used.

Grouping/Aggregation

- Partition the relation on the grouping attributes and then compute the aggregate values locally at each processor.
- The cost of transferring tuples during partitioning can be reduced by partly computing aggregate values before partitioning.
- Consider the sum aggregation operation (sketched in code at the end of this section):
  - perform the aggregation at each processor Pi on those tuples stored on disk Di, which yields tuples with partial sums at each processor;
  - the result of the local aggregation is partitioned on the grouping attributes, and the aggregation is performed again at each processor Pi to get the final result.
- Fewer tuples need to be sent to other processors during partitioning.

Cost of Parallel Evaluation of Operations

- If there is no skew in the partitioning and no overhead due to the parallel evaluation, each operation is expected to run in 1/n of the sequential time (a speed-up of n).
- If skew and overheads are also taken into account, the time taken by a parallel operation can be estimated as

  Tpart + Tasm + max(T0, T1, ..., Tn-1)

  where Tpart is the time for partitioning the relations, Tasm is the time for assembling the results, and Ti is the time taken for the operation at processor Pi; Ti must be estimated taking into account the skew and the time wasted in contention.

Interoperation Parallelism

- Pipelined parallelism:
  - consider a join of four relations r1 ⋈ r2 ⋈ r3 ⋈ r4;
  - set up a pipeline that computes the three joins in parallel:
    - let P1 be assigned the computation of temp1 = r1 ⋈ r2,
    - P2 the computation of temp2 = temp1 ⋈ r3,
    - and P3 the computation of temp2 ⋈ r4;
  - each of these operations can execute in parallel, sending the result tuples it computes to the next operation even as it is computing further results,
    - provided a pipelineable join evaluation algorithm (e.g., indexed nested-loops join) is used.

Factors Limiting Utility of Pipeline Parallelism

- Pipeline parallelism is useful since it avoids writing intermediate results to disk.
- It is useful with a small number of processors but does not scale up well with more processors; one reason is that pipeline chains do not attain sufficient length.
- Operators that do not produce output until all inputs have been accessed (e.g., aggregate and sort) cannot be pipelined.
- Little speed-up is obtained in the frequent cases of skew in which one operator's execution cost is much higher than the others'.

Independent Parallelism

- Independent parallelism:
  - consider a join of four relations r1 ⋈ r2 ⋈ r3 ⋈ r4;
  - let P1 be assigned the computation of temp1 = r1 ⋈ r2,
  - and P2 be assigned the computation of temp2 = r3 ⋈ r4.
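As promised under Grouping/Aggregation above, here is a minimal sketch of the two-phase sum: local partial sums are computed first, and only the (much smaller) partial results are repartitioned on the grouping key. The processor layout, hash routing, and all names are illustrative assumptions.

```python
from collections import defaultdict

def local_partial_sums(local_tuples):
    """Phase 1 at processor Pi: aggregate the (group_key, value) tuples
    stored on its local disk Di into partial sums."""
    partial = defaultdict(int)
    for key, value in local_tuples:
        partial[key] += value
    return partial

def parallel_sum(partitions, n):
    """Two-phase parallel sum over per-processor partitions of a relation."""
    # Phase 1: local pre-aggregation; at most one tuple per group survives
    # at each processor, so far fewer tuples cross the interconnect.
    partials = [local_partial_sums(p) for p in partitions]
    # Phase 2: repartition the partial sums on the grouping attribute
    # (hash partitioning here) and merge them at the destination processor.
    final = [defaultdict(int) for _ in range(n)]
    for partial in partials:
        for key, subtotal in partial.items():
            final[hash(key) % n][key] += subtotal
    return final

# Example: 2 processors, tuples as (group_key, value).
parts = [[("a", 1), ("b", 2), ("a", 3)], [("a", 4), ("b", 5)]]
print(parallel_sum(parts, 2))  # group "a" sums to 8, group "b" to 7
```

The same two-phase shape works for any aggregate that can be computed from partial results (count, min, max); averages need the partial sums and counts carried together.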