Expertise
The equipment supported includes high performance graphical workstations, a parallel computer, and high speed networking facilities. The faculty involved in the project are drawn from the Department of Computer Science but have substantial collaborations with computational scientists and engineers at Indiana University. The research supported by this infrastructure includes automated theorem proving, circuit validation, parallel functional programming, scientific visualization, visualization of Monte Carlo methods, visualization of processor utilization on scalable architectures, visual programming, and visual performance monitoring and analysis.

Computer memory is structured in layers (moving from inner to outer: registers, "main" memory/RAM, and disk). Each layer is slower but larger than those within it. These layers are reflected in the design of conventional programming languages (respectively, by local variables, non-local variables and arrays, and files), and programs are designed to take best advantage of their relative speed, size, and persistency. Functional or applicative programming languages are among the most promising for parallel processing, but they do not yet deal with this layering of memory. Rather, present practice is to treat all memory as RAM, structured as a "heap" for linked data structures, even though this unilevel model restricts their utility. Furthermore, although linked structures are very attractive for partitioning problems among processors, parallel heap management is an open problem. This project explores methods of reconciling the necessary layering of physical memory with the practice of purely functional programming. One goal is to demonstrate the performance of reference-counting memory (RCM) hardware, which can manage a heap shared by many processors. Reckoning at the memory itself, remote from any processor, it recovers most unused memory without any processor synchronization and with little additional communication. Another goal is to implement a persistent file system within a purely functional language. Persistency requires that files survive certain unpredictable failures; the system must therefore retain the current state of the files, even though the concept of state is forbidden in functional programming.
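The reference-counting idea behind RCM can be illustrated in software. The sketch below is a toy model, not the RCM hardware design: it keeps a count in each heap cell and reclaims a cell (and, transitively, its children) the moment its count reaches zero. The key property it relies on is that a purely functional heap is acyclic, so reference counting recovers all garbage; all names here are illustrative.

```python
class Heap:
    """Toy heap of cons cells managed by reference counts.

    A cell is [refcount, head, tail]; a field is either an atom
    (any non-int value) or the int address of another cell.
    """

    def __init__(self, size):
        self.cells = [None] * size
        self.free = list(range(size))   # free list of cell addresses

    def cons(self, head, tail):
        """Allocate a cell, bumping the count of any cell it points to."""
        addr = self.free.pop()
        for field in (head, tail):
            if isinstance(field, int):
                self.cells[field][0] += 1
        self.cells[addr] = [1, head, tail]
        return addr

    def release(self, addr):
        """Drop one reference; reclaim the cell and, transitively, its
        children when the count hits zero. Because a purely functional
        heap is acyclic, this recovers every unreachable cell."""
        cell = self.cells[addr]
        cell[0] -= 1
        if cell[0] == 0:
            for field in (cell[1], cell[2]):
                if isinstance(field, int):
                    self.release(field)
            self.cells[addr] = None
            self.free.append(addr)


# Build the list (a b), then drop the root reference.
h = Heap(8)
b = h.cons("b", None)
a = h.cons("a", b)    # now b is referenced by a as well
h.release(b)          # drop our local handle; b survives via a
h.release(a)          # root gone: both cells return to the free list
assert len(h.free) == 8
```

Note that no global pause or processor synchronization appears anywhere: each `release` touches only counts stored with the cells themselves, which is the property RCM exploits by performing the reckoning at the memory rather than at a processor.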
Canada, Computer Architecture, Computer Programming Languages, Computer Software, Computer Theory, Formal Semantics, France, Linear Programming, Parallel Algorithms, Parallel Programming, Program Verification, United Kingdom (Great Britain & Northern Ireland), United States
Degrees
PhD, University of Wisconsin, 1971
Keywords
Canada, France, United States, computer science, software, computer theory, computer architecture, formal semantics, program verification, linear programming, parallel algorithms, parallel programming