CCC @ FCRC

Under an agreement with the National Science Foundation, the Computing Research Association (CRA) has established the Computing Community Consortium (CCC) to engage the computing research community in articulating and pursuing longer-term research visions - visions that will capture the imagination of our community and of the public at large.

The CCC invites your engagement in this process! At FCRC, five special talks will sketch the possibilities. These talks are intended to be inspirational, motivational, and accessible. Please join us!

Introducing the Computing Community Consortium (pdf slides)

Monday June 11, 6-7 p.m., Grand Exhibit Hall

Christos Papadimitriou, UC Berkeley

The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Tuesday June 12, 6-7 p.m., Grand Exhibit Hall

Bob Colwell, Independent Consultant

Computer Architecture Futures 2007

Wednesday June 13, 6-7 p.m., Grand Exhibit Hall

Randal Bryant, Carnegie Mellon University

Data-Intensive Super Computing: Taking Google-Style Computing Beyond Web Search

Thursday June 14, 6-7 p.m., Grand Exhibit Hall

Scott Shenker, UC Berkeley

We Dream of GENI: Exploring Radical Network Designs

Friday June 15, 11:30 a.m. - 12:30 p.m., Town and Country Ballroom (FCRC Keynote Talk)

Ed Lazowska, University of Washington and Chair, Computing Community Consortium

Computer Science: Past, Present, and Future


Introducing the Computing Community Consortium (pdf slides)


Monday June 11, 6-7 p.m., Grand Exhibit Hall
Christos Papadimitriou, UC Berkeley
The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Computational research transforms the sciences (physical, mathematical, life or social) not just by empowering them analytically, but mainly by providing a novel and powerful perspective which often leads to unforeseen insights. Examples abound: quantum computation provides the right forum for questioning and testing some of the most basic tenets of quantum physics, while statistical mechanics has found in the efficiency of randomized algorithms a powerful metaphor for phase transitions. In mathematics, the P vs. NP problem has joined the list of the most profound and consequential problems, and in economics considerations of computational complexity revise predictions of economic behavior and affect the design of economic mechanisms such as auctions. Finally, in biology some of the most fundamental problems, such as understanding the brain and evolution, can be productively recast in computational terms.

Slides (pdf)


Tuesday June 12, 6-7 p.m., Grand Exhibit Hall
Bob Colwell, Independent Consultant
Computer Architecture Futures 2007

For 50 years the computer architecture community has successfully found ways to organize each new generation of improved implementation technology. This synergy has been particularly striking since the late 1970s, as microprocessor system capability climbed in lockstep with the improving implementation technology afforded by Moore's Law.

It was a great ride while it lasted, but it's over: excessive thermal dissipation, design complexity, accumulated legacy effects, and worsening silicon process artifacts have combined to end the era of the single-core general-purpose processor. To try to keep the existing architectural franchises alive, the industry has launched precipitously down the multicore route, despite lack of applications, lack of software expertise, lack of tools, and lack of evidence that this road leads to a place where anyone wants to go. This is a gamble of planetary proportions.

Meanwhile, alternative implementation technologies are receiving new scrutiny (reconfigurable fabrics, nanoelectronics, quantum), but so far, CMOS's successor has not clearly emerged. It's a rough time. Coherent caches and ILP tricks no longer provide the illusion to software writers that their systems are simply faster. Now the software must directly comprehend the system architecture. If you had previously been of the opinion that most software was of unnecessarily high quality, this prospect may not upset you, but as for the rest of us, something between alarm and panic seems appropriate. Computer architecture must now wage a campaign on multiple fronts: continue to play out the single-thread time-to-solution traditional vein of gold to its end; have effective architectures ready if one or more alternative implementation technologies become ready for deployment; ready new architectural solutions to worsening future CMOS electricals; and work out new covenants with software at the operating system, compiler, and application levels, for the new era where architecture tricks can no longer hide the limitations of the hardware.

Slides (pdf)


Wednesday June 13, 6-7 p.m., Grand Exhibit Hall
Randal Bryant, Carnegie Mellon University
Data-Intensive Super Computing: Taking Google-Style Computing Beyond Web Search

Web search engines have become fixtures in our society, but few people realize that they are actually publicly accessible supercomputing systems, where a single query can unleash the power of several hundred processors operating on a data set of over 200 terabytes. With Internet search, computing has risen to entirely new levels of scale, especially in terms of the sizes of the data sets involved. Google and its competitors have created a new class of large-scale computer systems, which we label "Data-Intensive Super Computer" (DISC) systems. DISC systems differ from conventional supercomputers in their focus on data: they acquire and maintain continually changing data sets, in addition to performing large-scale computations over the data.

With the massive amounts of data arising from such diverse sources as telescope imagery, medical records, online transaction records, and web pages, DISC systems have the potential to achieve major advances in science, health care, business, and information access. DISC opens up many important research topics in system design, resource management, programming models, parallel algorithms, and applications. By engaging the academic research community in these issues, we can explore fundamental aspects of a societally important style of computing more systematically and in a more open forum. Recent papers on parallel programming by researchers at Google (OSDI '04) and Microsoft (EuroSys '07) present the results of using up to 1800 processors to perform computations accessing up to 10 terabytes of data. Operating at this scale requires fundamentally new approaches to scheduling, load balancing, and fault tolerance. The academic research community must start working at these scales to have an impact on the future of computing and to ensure the relevance of its educational programs.
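
The OSDI '04 paper referenced above describes Google's MapReduce programming model, and the EuroSys '07 paper describes Microsoft's Dryad. As a rough illustration of the style of data-parallel programming involved (a toy, single-machine sketch, not either system's actual API or anything presented in this talk), the Python fragment below runs a word count through explicit map, shuffle, and reduce phases; in a DISC system each phase would be partitioned across thousands of processors, with scheduling, load balancing, and fault tolerance handled by the runtime.

# Toy, single-process sketch of the MapReduce programming style; names are
# illustrative only. Real DISC systems distribute each phase across thousands
# of machines and add scheduling, load balancing, and fault tolerance,
# none of which is modeled here.
from collections import defaultdict
from typing import Dict, Iterable, Iterator, List, Tuple

def map_phase(documents: Iterable[str]) -> Iterator[Tuple[str, int]]:
    # Map: emit an intermediate (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Shuffle: group intermediate values by key (done by the framework in a real system).
    grouped: Dict[str, List[int]] = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped: Dict[str, List[int]]) -> Dict[str, int]:
    # Reduce: combine the list of values for each key into a single result.
    return {key: sum(values) for key, values in grouped.items()}

if __name__ == "__main__":
    docs = [
        "data intensive computing at scale",
        "computing on continually changing data",
    ]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # prints: {'data': 2, 'intensive': 1, 'computing': 2, 'at': 1, 'scale': 1,
    #          'on': 1, 'continually': 1, 'changing': 1}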

Slides (pdf)


Thursday June 14, 6-7 p.m., Grand Exhibit Hall
Scott Shenker, UC Berkeley
We Dream of GENI: Exploring Radical Network Designs

Despite the great successes of the Internet-focused networking research agenda, there are many important problems that have not been solved through incrementally deployable modifications to the existing architecture. For instance, despite our best efforts, the current Internet is still not sufficiently reliable, manageable, or secure. Moreover, the current Internet architecture may not be fully capable of leveraging future technical advances (in optics and wireless, for example) or meeting future challenges (such as a world full of networked sensors). To address these problems, researchers are now entertaining a wide variety of radical approaches, some of which challenge the Internet's dearest architectural principles. Unfortunately, we have no way of validating these designs: analysis and simulation are informative but inadequate, and we cannot experimentally deploy designs on a large enough scale to know how they'd perform in practice. The Global Environment for Network Innovations (GENI) is a proposal for building an experimental facility that would support such large-scale deployments. This talk will discuss some of the technical challenges we face, the proposals being discussed, and the role GENI might play in furthering our research agenda.

Slides (pdf)


Friday June 15, 11:30 a.m.-12:30 p.m., Town and Country Ballroom (FCRC Keynote Talk)
Ed Lazowska, University of Washington and Chair, Computing Community Consortium
Computer Science: Past, Present, and Future

Computing research has made remarkable advances, but there's much more to be accomplished. The next ten years of advances should be even more significant, and even more interesting, than the past ten. The talks by Papadimitriou, Colwell, Bryant, and Shenker earlier in the week provide important examples of the opportunities that lie ahead of us. I will sketch others.

The National Science Foundation has created the Computing Community Consortium to engage computing researchers in an ongoing process of visioning -- of imagining what we might contribute to the world, in terms that we and the world might both appreciate. The CCC is "of, by, and for" the computing research community. It is just beginning. We have a mandate from NSF to help articulate the longer-term research visions for the field, and we are now creating the mechanisms to do so. I will take this opportunity to engage you as members of the field, inviting your contributions and suggestions.

Slides (pdf) / Opportunities (pdf) (ppt) / Myths (pdf) (ppt)


For further information on the Computing Community Consortium: http://www.cra.org/ccc/.
The "Creating Visions for Computing Research" RFP is available (pdf).