1975 — 1978
Wise, David; Friedman, Daniel
Structured Recursion: Its Properties, Translation, and Implementation |

1978 — 1982
Wise, David
Aspects of Applicative Programming For Systems |

1982 — 1984
Wise, David; Friedman, Daniel
Applicative Programming For Indeterminate Systems (Computer Research) |

1984 — 1988
Johnson, Steven; Wise, David
Methods and Architectures For Applicative Programming |

1990 — 1993
Wise, David
A Hierarchy of Memory For Parallel Functional Programming
Computer memory is structured in layers, for example (moving from inner to outer) registers, "main" memory/RAM, and disk. Each layer is slower but larger than those within it. These layers are reflected in the design of conventional programming languages (respectively, by local variables, non-local variables and arrays, and files), and programs are designed to take best advantage of their relative speed, size, and persistence. Functional or applicative programming languages are most promising for parallel processing, but they do not yet deal with this layering of memory. Rather, present practice is to treat all memory as RAM, structured as a "heap" for linked data structures, even though this unilevel model restricts their utility. Furthermore, although linked structures are very attractive for partitioning problems among processors, parallel heap management is an open problem. This project explores methods of reconciling the necessary layering of physical memory with the practice of purely functional programming. One goal is to demonstrate the performance of reference-counting memory (RCM) hardware, which can manage a heap shared by many processors. Reckoning at the memory address, remotely from any processor, it recovers most unused memory without any processor synchronization and with little additional communication. Another goal is to implement a persistent file system within a purely functional language. Persistence requires that files survive certain unpredictable failures. Therefore, the system must retain the current state of the files, even though the concept of state is forbidden in functional programming.
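Below is a minimal software analogue, in C, of the reference-counting idea behind RCM; the cell layout and the names rc_cell, rc_retain, and rc_release are illustrative assumptions, not the project's hardware interface. RCM keeps the counts at the memory itself and needs no processor synchronization; this sketch is sequential and only shows how a cell is reclaimed once its last reference disappears.

#include <stdlib.h>

/* Illustrative software analogue of reference-counted heap cells;
   the real RCM maintains these counts in hardware, at the memory. */
typedef struct rc_cell {
    int refcount;            /* number of live references to this cell */
    struct rc_cell *head;    /* linked-structure fields, as in a */
    struct rc_cell *tail;    /* cons-cell heap                   */
} rc_cell;

/* Allocate a cell holding references to head and tail; the caller
   transfers ownership of both arguments to the new cell. */
static rc_cell *rc_cons(rc_cell *head, rc_cell *tail) {
    rc_cell *c = malloc(sizeof *c);
    c->refcount = 1;
    c->head = head;
    c->tail = tail;
    return c;
}

static void rc_retain(rc_cell *c) { if (c) c->refcount++; }

static void rc_release(rc_cell *c) {
    if (c && --c->refcount == 0) {   /* last reference is gone */
        rc_release(c->head);         /* reclaiming a cell drops its */
        rc_release(c->tail);         /* references, transitively    */
        free(c);
    }
}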

1993 — 1999
Wise, David
An Infrastructure For Conceptualization and Visualization
Award 9303189. This award provides infrastructure for the support of Conceptualization and Visualization of Computation. The equipment supported includes high-performance graphical workstations, a parallel computer, and high-speed networking facilities. The faculty involved in the project are drawn from the Department of Computer Science but have substantial collaborations with computational scientists and engineers at Indiana University. The research supported by this infrastructure includes automated theorem proving, circuit validation, parallel functional programming, scientific visualization, visualization of Monte Carlo methods, visualization of processor utilization on scalable architectures, visual programming, and visual performance monitoring and analysis.

1997 — 2000
Leone, Mark (co-PI); Wise, David; Dybvig, R. Kent
A Staged Compiler Architecture For Dynamic Program Optimization
This project focuses on the design and implementation of a compiler architecture for dynamically optimizing stand-alone programs. Dynamic optimization uses run-time values and invariants to improve code as it executes, yielding performance superior to that of statically optimal code. Dynamic optimization incurs run-time overhead: time spent optimizing and compiling code at run time detracts from any resulting improvement. This overhead will be reduced using a staged compilation model, which permits static compilation to be suspended after reaching a high-level, mid-level, or low-level intermediate representation. This technique supports lightweight optimizations, which are fast but typically yield minor improvements, as well as heavyweight optimizations, which are time-consuming but often provide substantial performance benefits. The proposed system will incorporate automatic program analyses and profiling tools to uncover opportunities for dynamic optimization in ordinary code, and a rich set of optimization directives will give the programmer fine-grained control over dynamic optimization when necessary.
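As a concrete analogue of using run-time values to improve code, the C sketch below "specializes" exponentiation once its exponent becomes known, precomputing a plan that later calls merely replay; a real dynamic optimizer would emit machine code instead, and every name here (pow_plan, specialize_pow, run_pow) is hypothetical rather than part of the proposed system.

#include <stdio.h>

/* "Stage" pow(x, n) on the run-time value n: flatten the loop over
   n's bits into a plan once, then replay the plan on every call. */
typedef struct {
    int steps[32];   /* at each squaring step: 1 = multiply into result */
    int nsteps;
} pow_plan;

static pow_plan specialize_pow(unsigned n) {
    pow_plan p = { {0}, 0 };
    while (n) {                        /* binary exponentiation, unrolled */
        p.steps[p.nsteps++] = (int)(n & 1);
        n >>= 1;
    }
    return p;
}

static double run_pow(const pow_plan *p, double x) {
    double acc = 1.0, sq = x;
    for (int i = 0; i < p->nsteps; i++) {
        if (p->steps[i]) acc *= sq;
        sq *= sq;
    }
    return acc;
}

int main(void) {
    pow_plan p = specialize_pow(10);      /* exponent arrives at run time */
    printf("%g\n", run_pow(&p, 2.0));     /* prints 1024 */
    return 0;
}

The trade-off the project targets is visible even in this toy: building the plan is overhead paid at run time, replaying it is cheap, and the work only pays off if the specialized code is reused enough.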

2000 — 2004
Wise, David
Compiler Support For Morton-Order Matrices
Two problems that constrain big computations, locality of reference and the decomposition of work among processors, appear to relax in the face of a different representation for matrices and their constituent blocks. Morton (or Z) order represents the elements of a matrix in computer memory consecutively by larger and larger blocks. This locality contrasts with row-major (or column-major) order, which stores only elements in the same row (column) close to one another.
This project explores compilers for Morton order as the default representation for arrays in higher-level programming languages. It develops techniques to recompile existing programs into code that uses Morton order internally and respects modern memory hierarchies and superscalar processors. These will interface cleanly with future programs that use Ahnentafel indices, a generalization of Morton order, to control divide-and-conquer algorithms that descend the blocks of a matrix recursively. It also supports the design of parallel algorithms that decompose a computation into disjoint, memory-local processes.
Morton order, and the block algorithms that use it, implicitly improve access patterns into hierarchical memory: from registers, through caches, to RAM, to disk swapping. That is, Morton order enhances the locality of reference of one process, while it helps to schedule multiple processes that do not interfere with each other.
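As a sketch of how Morton order arises, the following C fragment interleaves the bits of the row and column indices; assuming a square matrix whose side is a power of two, this places each 2x2, 4x4, 8x8, ... block contiguously in memory. The function names are illustrative, not from the project's compiler.

#include <stdint.h>
#include <stdio.h>

/* Spread the low 16 bits of x onto the even bit positions. */
static uint32_t spread(uint32_t x) {
    x &= 0xFFFFu;
    x = (x | (x << 8)) & 0x00FF00FFu;
    x = (x | (x << 4)) & 0x0F0F0F0Fu;
    x = (x | (x << 2)) & 0x33333333u;
    x = (x | (x << 1)) & 0x55555555u;
    return x;
}

/* Morton index: row bits on the odd positions, column bits on the even. */
static uint32_t morton(uint32_t row, uint32_t col) {
    return (spread(row) << 1) | spread(col);
}

int main(void) {
    /* In row-major order a[1][0] lies N elements past a[0][0];
       in Morton order the whole 2x2 block is adjacent. */
    printf("%u %u %u %u\n",
           morton(0, 0), morton(0, 1), morton(1, 0), morton(1, 1));
    /* prints: 0 1 2 3 */
    return 0;
}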

2002 — 2006
Wise, David; Lumsdaine, Andrew (co-PI)
ITR: A Paradigm of Parallel Programming For Morton-Ordered Matrices
We propose a novel marriage of new fundamentals of data structures and algorithms with the productive tools of generic programming. Together they promise tools that could make high-performance computing available to any scientist. The data structure is the Morton-order representation of matrices (and arrays in general), which admits row, column, and quadrant decompositions alike. Its benefits are grounded in computer architecture, where it solves problems of locality at all levels of a memory hierarchy. The favored algorithms use divide-and-conquer recursion, a style associated with functional programming, because they address the problems of balanced scheduling and communication in distributed computing and multiprocessing. Generic programming also contributes to this higher-level, algebraic view of program construction. The anticipated result is a style of parallel programming that requires explicit choreography neither of memory nor of processors. That is, the programmer can specify computations, and eventually new algorithms, in a high-level language closer to mathematics. We propose to develop the tools and the techniques to carry these programs to an efficient parallel implementation.
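A minimal C sketch of the quadrant decomposition this style rests on: with Morton-ordered storage (square matrix, power-of-two side assumed), each quadrant occupies one contiguous quarter of its array, so divide-and-conquer needs only a base pointer and a length. The code is illustrative, not the project's library; matrix multiplication follows the same shape with eight recursive calls per level.

/* Elementwise addition c = a + b over Morton-ordered matrices of
   len = n*n elements.  The four recursive calls touch disjoint,
   contiguous quarters, so a parallel scheduler could hand each
   quadrant to a different processor with no further bookkeeping. */
static void madd(double *c, const double *a, const double *b, int len) {
    if (len == 1) { *c = *a + *b; return; }
    int q = len / 4;                          /* elements per quadrant */
    madd(c,       a,       b,       q);       /* north-west */
    madd(c + q,   a + q,   b + q,   q);       /* north-east */
    madd(c + 2*q, a + 2*q, b + 2*q, q);       /* south-west */
    madd(c + 3*q, a + 3*q, b + 3*q, q);       /* south-east */
}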
Products will include a new programming style, new efficiencies for parallel and Grid computing, tools to carry old codes into this environment, tools to support the new generation of programs, support for teaching a new generation of programmers, and new algorithms that can only be inspired by this high-level perspective on programming.

2002 — 2008
Fox, Geoffrey (co-PI); Bramley, Randall (co-PI); Lumsdaine, Andrew (co-PI); Wise, David; Plale, Beth (co-PI)
CISE Research Infrastructure: A Research Infrastructure For Collaborative, High-Performance Grid Applications
Award 0202048; Wise, David S.; Indiana University - Bloomington
This project, developing an experimental infrastructure for distributed high-performance computing, supports ten research projects extending the location-transparency that the Grid provides for computation resources to the full spectrum of activities which end-users require. Services being explored include software development, parallel code middleware, distributed software components for scientific computing, security for parallel remote method invocation, managing large-scale data streams, and collaboration methodologies. The research builds on and extends the institution's collaborations with several national Grid research teams. In contrast to existing national and university infrastructure available through production machines, this research requires an environment tolerant of experimental network protocols, temporary middleware, and other system-level changes. The infrastructure will contribute to the following research projects:
a. Opie: basic work on parallel matrix algorithms that achieve high efficiency across many architectural platforms
b. LAM: middleware MPI implementations supporting hierarchical and fault-tolerant parallel computing
c. dQUOB: application of SQL queries to live data streams
d. RMI Security: basic research into security mechanisms for remote method invocation, allowing security to be traded off against efficiency
e. HPJ: High Performance Java, creating a language platform for portable high-performance coding
f. Grid Broker: reliable, robust publish/subscribe service for introducing fault tolerance into the distributed Grid environment
g. Community Grids Collaboratory: advanced collaboration capabilities with applications to both distance education and distributed communities
h. Xports: design of methodologies for remote instrument access and data management of the resulting extremely large data sets
i. Software Components: a distributed software component model designed for applications that use parallel computing "nodes" in wide-area Grid environments
j. Science Portals: a set of tools that allow programmers to build Grid-distributed applications accessed and controlled from desktop environments and web browsers
Major improvements to the infrastructure supporting all these projects include a 16-node cycle server and a large-scale file server, as well as network upgrades to and within the building.

2006 — 2009
Wise, David
A Paradigm of Parallel Programming For Morton-Ordered Matrices
Background
Writing portable parallel programs in high-level languages on modern computing engines is heavily constrained by locality of reference; such programs run fast only if they reuse data already close to the processor, and if they can compute much more than they must communicate.
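The constraint is easy to exhibit; in the C fragment below (illustrative, with an assumed array size N) both loop nests touch the same elements of a row-major array, but the first walks memory consecutively while the second strides by N doubles on every access, so it reuses almost nothing the cache has already fetched.

#include <stdio.h>
#define N 1024

static double a[N][N];   /* row-major, as C lays it out */

int main(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)         /* consecutive: good locality */
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    for (int j = 0; j < N; j++)         /* strided: poor locality */
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    printf("%g\n", sum);
    return 0;
}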
Intellectual Merit
The proposed research addresses this problem with a new style of programming that organizes computation so that locality is handled implicitly. "Divide-and-conquer" applies at once to recursive control, to data sharing, to broad communication, and to machine organization. Programmers will again be able to solve important problems without much knowledge of the memory hierarchy or network communication. The new style will deliver high-performance computing to ordinary scientists, and allow generations of students to learn it as part of their first programming language. Success will advance computing, and all the sciences and engineering that depend upon it.
Broader Impacts
Products include a new programming style, new efficiencies for parallel and Grid computing, tools to carry old codes into this environment, tools to support the new generation of programs, support for teaching a new generation of programmers, and new algorithms that can only be inspired by this high-level perspective on programming.
The project offers huge payoffs, both in the future growth of useful applications of parallel computing and in delivering access to this technology to many more students and future scientists and engineers. The results of this research can simplify the teaching and learning of large-scale computing. Results will be disseminated via the usual channels of conference and journal papers, and programming tools will be delivered over the net.