The Implementation of Computation group, under the direction of
Prof. André DeHon,
studies how we physically implement computations. Our efforts span from
algorithms and problem descriptions, through compute models,
architectures, and runtime systems, down to physical substrates,
including work on design mapping between these levels. We attempt to
systematically understand the design space for programmable computing
devices and the impact that both substrate costs and mapping technology
have on that design space. Currently, we are focusing on Programmable
System-on-a-Chip designs (what is the organization and architectural
model for the integrated, heterogeneous, large-capacity ICs we will soon
be able to build?), interconnect (what are the fundamental interconnect
requirements of a design? how do we systematically design interconnect?
how do we map designs onto interconnect substrates?), and
``messy'' computing (how do we guarantee correct or adequate
behavior when fabrication is stochastic, devices fail both
transiently and permanently during operation, and programs
contain bugs?).
Research Vectors
Our goal is to understand how we physically implement computations:
Given a computation (or domain of computations) to perform, how do we
design and build an efficient device (minimum resources, maximum
performance, minimum energy) out of our physical building blocks
(e.g., contemporary CMOS VLSI, molecular substrates)? How do the
relative costs of the physical substrate affect which solutions are most
efficient (e.g., the relative costs of switches versus wires)? How do
we describe computations? How do we characterize the requirements of a
computation? How do we algorithmically map from a high-level
specification to a substrate, automatically filling in the necessary
implementation details? What tradeoffs do we face between algorithmic
complexity and optimality guarantees when performing such mappings?
What abstractions do we use to manage the complexity of these designs so
that we can fully exploit the computational capabilities provided by
modern and future substrates while minimizing the human effort required
to exploit them?
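To make one of these questions concrete, consider the classical way of
characterizing the interconnect requirements of a computation, Rent's rule
(offered here only as an illustrative example of a requirement
characterization, not a summary of our results): empirically, a typical
subcircuit containing N gates of a larger design needs roughly

    IO = c * N^p

external connections, where c is the average I/O per gate and p, the Rent
exponent, commonly falls between 0.5 and 0.75 for real designs. Because the
bisection wiring implied by this relation grows as N^p while the
cross-section of a two-dimensional layout grows only as N^0.5, designs with
p > 0.5 become wire-dominated as they scale; this is one reason the relative
costs of wires and switches shape which interconnect organizations are
efficient.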
After 60 years of building computing machines, we have a large body of
knowledge that addresses many pieces of these questions. However, we do
not have a systematic understanding of these issues, nor do we organize
and teach this material to students in a systematic way. Too much of
what we know is anecdotal, historical, and relies on substrate cost
assumptions that have changed, or will change, dramatically. Further, after 40
years of Moore's law, the size of the systems we can physically build today
is enormous. As a result, human conceptual complexity is the key
limiter on both the systems we can build and the efficiency of the
systems we do build. Today's design task is simply too large and too
important to tackle in an ad hoc manner.