TechTalks from event: EduPar 2011

EduPar-11 Keynote

NSF/TCPP Curriculum Report and Panel Discussion

  • NSF/TCPP Curriculum Report and Panel Discussion. Coordinator: Sushil Prasad (Georgia State)
    Committee Members and Panelists: Chtchelkanova, Almadena (NSF), Das, Sajal (University of Texas at Arlington, NSF), Das, Chita (Penn State, NSF), Dehne, Frank (Carleton University, Canada), Gouda, Mohamed (University of Texas, Austin, NSF), Gupta, Anshul (IBM T.J. Watson Research Center), Jaja, Joseph (University of Maryland), Kant, Krishna (NSF, Intel), La Salle, Anita (NSF), LeBlanc, Richard (Seattle University), Lumsdaine, Andrew (Indiana University), Padua, David (University of Illinois at Urbana-Champaign), Parashar, Manish (Rutgers, NSF), Patt, Yale (UT Austin), Prasad, Sushil (Georgia State University), Prasanna, Viktor (University of Southern California), Robert, Yves (INRIA, France), Rosenberg, Arnold (Colorado State University), Sahni, Sartaj (University of Florida), Shirazi, Behrooz (Washington State University), Sussman, Alan (University of Maryland), Weems, Chip (University of Massachusetts), and Wu, Jie (Temple University)

Early Adopter Session

  • Integrating Parallel and Distributed Computing Into Undergraduate Courses at All Levels Authors: Steven Bogaerts (Wittenberg University), Kyle Burke (Wittenberg University) and Eric Stahlberg (National Cancer Institute)
  • Experiences of an Undergraduate Parallel Computing Course Authors: Bo Hong (Georgia Tech)
    This article presents experiences in teaching an undergraduate parallel computing course, a new course (Introduction to Parallel Computing) for senior ECE students at Georgia Tech. It marks the first time that parallel computing has been systematically taught to undergraduate ECE students at Georgia Tech. Students arriving in the course are prepared with knowledge of programming and computer architecture, but that preparation is almost exclusively limited to uniprocessors and sequential programming. To provide a comprehensive overview of the field, the course spans a broad range of topics, including parallel computer architectures, parallel programming models, and parallel algorithms. The gap between the students’ background and the course requirements makes the course challenging for both the students and the instructor. This article presents my teaching experience and my thoughts on how to make parallel computing friendly and accessible for undergraduate students.
  • Early Adopter: Integrating Concepts from Parallel and Distributed Computing into the Undergraduate Curriculum Authors: Eileen Kraemer (University of Georgia)
  • Integration of Parallel Topics in the Undergraduate Curriculum, 2011 Authors: Joel Adams (Calvin College)
    Since 1989, all Calvin College computer science students have learned about concurrency constructs and distributed systems, and they have had the option of learning about parallelism since 1997. In 2006, manufacturers began releasing processors with multiple cores instead of faster clock speeds, making knowledge of shared-memory parallelism a necessity for all computer science students. In 2008, the department began integrating shared-memory parallel topics into its Data Structures course (aka CS2) and the Operating Systems and Networking course. Thanks to the NSF/IEEE TCPP 2011 Early Adopters Program, additional parallel topics are now being integrated into the Algorithms and Data Structures course, the Intro to Computer Architecture course, the Programming Language Concepts course, and the High Performance Computing course. This work provides an overview of the department’s curriculum, and the precise courses in which specific parallel topics and technologies are covered.
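    The shared-memory parallel topics mentioned in this abstract typically begin with mutual exclusion on shared state. A minimal sketch of that idea (not taken from the course materials; the function and variable names are illustrative) uses Python threads and a lock to prevent lost updates to a shared counter:

    ```python
    import threading

    def increment(counter, lock, n):
        # Each thread adds n to the shared counter; the lock protects the
        # read-modify-write sequence so concurrent updates are not lost.
        for _ in range(n):
            with lock:
                counter[0] += 1

    counter = [0]              # shared mutable state
    lock = threading.Lock()
    threads = [threading.Thread(target=increment, args=(counter, lock, 10000))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter[0])  # 40000: with the lock, the result is deterministic
    ```

    Removing the lock makes the final count nondeterministic, which is the standard classroom demonstration of a data race in a CS2-level setting.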
  • ASU and Intel Collaboration in Parallel and Distributed Computation Authors: Violet Syrotiuk, Yinong Chen, Eric Kostelich, Yann-Hang Lee, Alex Mahalov and Gil Speyer (Arizona State)
    Arizona State University (ASU) is working with Intel and the Intel Academic Community to integrate topics in Parallel and Distributed Computing (PDC) into the Computer Science (CS), the Computer Systems Engineering (CSE), and the Mathematics and Statistical Sciences (MAT) programs at the Undergraduate and Master's degree levels, leveraging ASU’s initiative in high performance computing (HPC) in the Advanced Computing Center.
  • Parallel Computing: Keys to a Future in Computing Authors: Stephen Providence (Hampton University)
    The triumvirate of parallel programming, parallel architecture, and parallel algorithms is the mantra for education and research in parallel computing [2]. These three are necessary subareas that will prepare an undergraduate student to pursue studies in computational science or to develop software systems that exploit parallelism to solve NP-hard problems. We have been developing a curriculum that takes undergraduate students from their freshman year up through the master's level in parallel computing. The goal is to prepare students for the future of computing and eventually to institute a doctoral program in computational science or computer science.