An Introduction to Parallel Programming: Chapter 1 Slides

Chapter 1: Why Parallel Computing?
An Introduction to Parallel Programming, Peter Pacheco
(Copyright 2010, Elsevier Inc. All rights reserved.)

Roadmap
- Why we need ever-increasing performance.
- Why we're building parallel systems.
- Why we need to write parallel programs.
- How do we write parallel programs?
- What we'll be doing.
- Concurrent, parallel, distributed!

Changing times
- From 1986 to 2002, microprocessors were speeding like a rocket, increasing in performance an average of 50% per year.
- Since then, it's dropped to about a 20% increase per year.

An intelligent solution
- Instead of designing and building faster microprocessors, put multiple processors on a single integrated circuit.

Now it's up to the programmers
- Adding more processors doesn't help much if programmers aren't aware of them, or don't know how to use them.
- Serial programs don't benefit from this approach (in most cases).

Why we need ever-increasing performance
- Computational power is increasing, but so are our computation problems and needs.
- Problems we never dreamed of have been solved because of past increases, such as decoding the human genome.
- More complex problems are still waiting to be solved: climate modeling, protein folding, drug discovery, energy research, data analysis. (The original slides illustrate each of these with a figure.)

Why we're building parallel systems
- Up to now, performance increases have been attributable to increasing density of transistors.
- But there are inherent problems.

A little physics lesson
- Smaller transistors = faster processors.
- Faster processors = increased power consumption.
- Increased power consumption = increased heat.
- Increased heat = unreliable processors.

Solution
- Move away from single-core systems to multicore processors.
- "Core" = central processing unit (CPU).
- Introducing parallelism!

Why we need to write parallel programs
- Running multiple instances of a serial program often isn't very useful.
- Think of running multiple instances of your favorite game.
- What you really want is for it to run faster.

Approaches to the serial problem
- Rewrite serial programs so that they're parallel.
- Write translation programs that automatically convert serial programs into parallel programs.
- This is very difficult to do, and success has been limited.

More problems
- Some coding constructs can be recognized by an automatic program generator and converted to a parallel construct.
- However, it's likely that the result will be a very inefficient program.
- Sometimes the best parallel solution is to step back and devise an entirely new algorithm.

Example
- Compute n values and add them together.
- Serial solution: a single accumulation loop (sketched below).

Example (cont.)
- We have p cores, p much smaller than n.
- Each core performs a partial sum of approximately n/p values.
- Each core uses its own private variable and executes this block of code independently of the other cores (also sketched below).
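The two code blocks referenced above appear as figures on the original slides. The following is a minimal C sketch of both, simulating the cores one after another; Compute_next_value is a hypothetical stand-in, since the slides leave it unspecified.

    #include <stdio.h>

    #define P 8   /* number of cores, matching the slides' example */
    #define N 24  /* number of values */

    /* Hypothetical stand-in: the slides leave Compute_next_value
       unspecified, so any function producing the i-th value will do. */
    int Compute_next_value(int i) {
        return i % 10;
    }

    /* Serial solution: a single core computes and adds all n values. */
    int serial_sum(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            int x = Compute_next_value(i);
            sum += x;
        }
        return sum;
    }

    /* Per-core block: core my_rank of p cores accumulates its slice of
       roughly n/p values into its private variable my_sum. */
    int partial_sum(int my_rank, int p, int n) {
        int my_n = n / p;                 /* assumes p divides n evenly */
        int my_first_i = my_rank * my_n;
        int my_last_i = my_first_i + my_n;
        int my_sum = 0;
        for (int my_i = my_first_i; my_i < my_last_i; my_i++) {
            int my_x = Compute_next_value(my_i);
            my_sum += my_x;
        }
        return my_sum;
    }

    int main(void) {
        /* Simulate the p cores one after another and cross-check. */
        int total = 0;
        for (int rank = 0; rank < P; rank++)
            total += partial_sum(rank, P, N);
        printf("serial = %d, sum of partial sums = %d\n",
               serial_sum(N), total);
        return 0;
    }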
Example (cont.)
- After each core completes execution of the code, its private variable my_sum contains the sum of the values computed by its calls to Compute_next_value.
- E.g., with 8 cores and n = 24, suppose the calls to Compute_next_value return:
  1, 4, 3,   9, 2, 8,   5, 1, 1,   6, 2, 7,   2, 5, 0,   4, 1, 8,   6, 5, 1,   2, 3, 9

Example (cont.)
- Once all the cores are done computing their private my_sum, they form a global sum by sending their results to a designated "master" core, which adds up the final result.

Example (cont.)
(The original slide shows the master-core summation code as a figure.)

Example (cont.)

  Core:     0    1    2    3    4    5    6    7
  my_sum:   8   19    7   15    7   13   12   14

  Global sum: 8 + 19 + 7 + 15 + 7 + 13 + 12 + 14 = 95

  After the master adds:

  Core:     0    1    2    3    4    5    6    7
  my_sum:  95   19    7   15    7   13   12   14

But wait! There's a much better way to compute the global sum.

Better parallel algorithm
- Don't make the master core do all the work; share it among the other cores.
- Pair the cores so that core 0 adds its result with core 1's result, core 2 adds its result with core 3's result, etc.
- Work with odd- and even-numbered pairs of cores.

Better parallel algorithm (cont.)
- Repeat the process, now with only the even-ranked cores: core 0 adds the result from core 2, core 4 adds the result from core 6, etc.
- Then the cores divisible by 4 repeat the process, and so forth, until core 0 has the final result.

Multiple cores forming a global sum
(The original slide shows a figure: a binary tree of cores combining their partial sums. A code sketch of this reduction follows the analysis below.)

Analysis
- In the first example, the master core performs 7 receives and 7 additions.
- In the second example, the master core performs 3 receives and 3 additions.
- The improvement is more than a factor of 2!

Analysis (cont.)
- The difference is more dramatic with a larger number of cores.
- If we have 1000 cores, the first example requires the master to perform 999 receives and 999 additions, while the second requires only 10 (about log2(1000)).
- That's an improvement of almost a factor of 100!
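The tree above appears only as a figure on the original slide. Below is a minimal C sketch of the tree-structured reduction it depicts, again simulating the cores sequentially, with an array standing in for each core's private my_sum; the pairing logic follows the slides' description, but the code itself is an illustration, not the book's.

    #include <stdio.h>

    #define P 8   /* number of cores; a power of two keeps the pairing simple */
    #define N 24  /* number of values */

    /* Hypothetical stand-in for the unspecified Compute_next_value. */
    int Compute_next_value(int i) {
        return i % 10;
    }

    int main(void) {
        int my_sum[P];  /* my_sum[r] stands in for core r's private variable */

        /* Phase 1: each core's partial sum, simulated sequentially. */
        for (int rank = 0; rank < P; rank++) {
            my_sum[rank] = 0;
            for (int i = rank * (N / P); i < (rank + 1) * (N / P); i++)
                my_sum[rank] += Compute_next_value(i);
        }

        /* Phase 2: tree-structured global sum.  In each round, a core
           whose rank is a multiple of 2*stride adds the result of the
           core stride places above it.  After log2(P) rounds core 0
           holds the total: 3 rounds for 8 cores, about 10 for 1000,
           versus P - 1 receives and additions when the master core
           does all the work. */
        for (int stride = 1; stride < P; stride *= 2)
            for (int rank = 0; rank + stride < P; rank += 2 * stride)
                my_sum[rank] += my_sum[rank + stride];

        printf("global sum = %d\n", my_sum[0]);
        return 0;
    }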
How do we write parallel programs?
- Task parallelism: partition the various tasks carried out in solving the problem among the cores.
- Data parallelism: partition the data used in solving the problem among the cores; each core carries out similar operations on its part of the data.

Professor P
- 15 questions, 300 exams.

Professor P's grading assistants
- TA #1, TA #2, TA #3.

Division of work: data parallelism
- TA #1: 100 exams. TA #2: 100 exams. TA #3: 100 exams.

Division of work: task parallelism
- TA #1: questions 1-5. TA #2: questions 6-10. TA #3: questions 11-15.

Division of work in the global sum example: data parallelism
(The original slide shows this as a figure.)

Division of work in the global sum example: task parallelism
- The tasks: 1) receiving, 2) addition.

Coordination
- Cores usually need to coordinate their work.
- Communication: one or more cores send their current partial sums to another core.
- Load balancing: share the work evenly among the cores so that no one core is heavily loaded.
- Synchronization: because each core works at its own pace, make sure cores do not get too far ahead of the rest.

What we'll be doing
- Learning to write programs that are explicitly parallel.
- Using the C language.
- Using three different extensions to C: the Message-Passing Interface (MPI), POSIX threads (Pthreads), and OpenMP.

Types of parallel systems
- Shared-memory: the cores can share access to the computer's memory; coordinate the cores by having them examine and update shared memory locations.
- Distributed-memory: each core has its own, private memory; the cores must communicate explicitly by sending messages across a network.
(The original slide illustrates shared-memory and distributed-memory systems with figures.)

Terminology
- Concurrent computing: a program is one in which multiple tasks can be in progress at any instant.
- Parallel computing: a program is one in which multiple tasks cooperate closely to solve a problem.
- Distributed computing: a program may need to cooperate with other programs to solve a problem.

Concluding remarks (1)
- The laws of physics have brought us to the doorstep of multicore technology.
- Serial programs typically don't benefit from multiple cores.
- Automatic parallel program generation from serial program code isn't the most efficient approach to get high performance from multicore computers.

Concluding remarks (2)
- Learning to write parallel programs involves learning how to coordinate the cores.
- Parallel programs are usually very complex, and therefore require sound program techniques and development.
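As a closing preview of the explicitly parallel style described under "What we'll be doing", here is a minimal OpenMP version of the chapter's running global-sum example. This sketch is not from the slides; OpenMP is one of the three C extensions named above, and Compute_next_value remains the hypothetical stand-in used in the earlier sketches.

    #include <stdio.h>

    /* Hypothetical stand-in, as in the earlier sketches. */
    int Compute_next_value(int i) {
        return i % 10;
    }

    int main(void) {
        int n = 24;
        int sum = 0;

        /* OpenMP runs the loop iterations on multiple threads; the
           reduction clause gives each thread a private partial sum and
           combines them at the end, the pattern hand-coded earlier.
           Compile with: gcc -fopenmp sum.c */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += Compute_next_value(i);

        printf("global sum = %d\n", sum);
        return 0;
    }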