UPCRC_Whitepaper.pdf
For many decades, Moore’s law has bestowed a wealth of transistors that hardware designers and compiler writers have converted to usable performance, without changing the sequential programming interface. The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster. This historic shift to multicore processors changes the programming interface by exposing parallelism to the programmer, after decades of sequential computing.
Parallelism has been successfully used in many domains such as high-performance computing (HPC), servers, graphics accelerators, and many embedded systems. The multicore inflection point, however, affects the entire market, particularly the client space, where parallelism has not previously been widespread. Programs with millions of lines of code must be converted or rewritten to take advantage of parallelism; yet, as practiced today, parallel programming for the client is a difficult task performed by few programmers. Commonly used programming models are prone to subtle, hard-to-reproduce bugs, and parallel programs are notoriously hard to test due to data races, non-deterministic interleavings, and complex memory models. Mapping a parallel application to parallel hardware is also difficult given the large number of degrees of freedom (how many cores to use, whether to use special instructions or accelerators, etc.), and traditional parallel environments have done a poor job of virtualizing the hardware for the programmer. As a result, only skilled programmers seeking the highest performance have been exposed to parallel computing, resulting in little investment in development environments and a lack of trained manpower. There is a risk that while hardware races ahead to ever-larger numbers of cores, software will lag behind and few applications will leverage the potential hardware performance.
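To make the testing difficulty concrete, here is a minimal, hypothetical sketch (not from the whitepaper) of the classic data race: two threads perform an unsynchronized read-modify-write on a shared counter. Whether updates are actually lost varies with the language runtime, interpreter version, and scheduling, which is exactly why such bugs are hard to reproduce.

```python
import threading

counter = 0
N = 100_000

def work():
    global counter
    for _ in range(N):
        # counter += 1 is a read-modify-write sequence, not an atomic step;
        # a thread switch between the read and the write loses an update.
        counter += 1

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "expected" result is 2 * N, but the actual value is not guaranteed
# and can differ from run to run.
print(f"expected {2 * N}, got {counter}")
```

The only safe statement about the outcome is a bound (the count cannot exceed 2 * N); the exact value is timing-dependent, which is precisely the non-determinism that defeats conventional testing.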
Moving forward, if every computer will be a parallel computer, most programs must execute in parallel and most programming teams must be able to develop parallel programs, a daunting goal given the above problems. Illinois has a rich history in parallel computing, dating from the genesis of the field, and continues a broad research program in the area today [1]. This program includes the Universal Parallel Computing Research Center (UPCRC), established at Illinois by Intel and Microsoft, together with a sibling center established at Berkeley. These two centers are focused on the problems of multicore computing, especially in the client and mobile domains.
This paper describes the research vision and agenda for client and mobile computing research at Illinois, focusing on the activities at UPCRC (some of which preceded UPCRC).
Given the long history of parallel computing, it is natural to ask whether the challenges we face today differ from those of the past. Compared to the HPC and server markets, the traditional focus of parallel computing research, the client market brings new difficulties, but it also brings opportunities. Table 1 summarizes some of the key differences.