Final Projects
Project topics
You will complete one of the parallel programming and analysis projects below. For all project topics, you must address or satisfy all of the following.
- Combine two different parallel programming models from among: distributed memory (e.g., MPI), shared memory (e.g., OpenMP), and GPUs (e.g., CUDA, Kokkos, or OpenMP offloading).
- Explore different parallelization strategies (e.g., domain decomposition, task-based parallelism).
- Develop a verification test to ensure the correctness of your solution, and confirm that the solution does not change with the number of parallel tasks.
- Address load balancing and strategies for maintaining balance as the number of tasks increases.
- Address memory usage and how it scales with the number of tasks for your problem.
- Perform extensive scaling studies (weak scaling, strong scaling, and thread-to-thread speedup). Your scaling studies should extend to as large a task count as your problem and resources allow.
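To make the verification requirement concrete, here is a minimal Python sketch for one explicit step of a 1D heat equation. The chunked version stands in for MPI ranks (the chunks are processed serially here, purely for illustration), and the checks assert that the answer is bitwise identical for 1, 2, and 4 tasks. Your own test should follow the same idea for your problem and your actual parallel code.

```python
# Sketch: verify that a domain-decomposed update is independent of task count.
# The "tasks" here are serial chunks standing in for MPI ranks.

def heat_step(u, alpha=0.1):
    """One explicit finite-difference step; boundary values are held fixed."""
    return [u[0]] + [u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

def heat_step_decomposed(u, ntasks, alpha=0.1):
    """Same step, computed chunk by chunk with one-cell halo reads."""
    n = len(u)
    out = list(u)
    bounds = [1 + (n - 2) * t // ntasks for t in range(ntasks + 1)]
    for lo, hi in zip(bounds, bounds[1:]):       # one chunk per "task"
        for i in range(lo, hi):
            out[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
    return out

u0 = [0.0] * 8 + [1.0] + [0.0] * 8               # point source in the middle
ref = heat_step(u0)
for p in (1, 2, 4):
    assert heat_step_decomposed(u0, p) == ref    # must not depend on task count
```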
Note that parallel code for many of these project topics can easily be found online. You must develop your own original code to address your problem. Researching your problem on the web is expected and encouraged, but I recommend you avoid looking directly at someone else's code for inspiration.
Final project reports will be graded based on this rubric.
1. Heat Equation
See Section 31.3 of HPSC.
2. Poisson Equation
See Section 4.2.2 of HPSC.
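The Poisson projects center on an iterative solver. As a flavor of what is involved, here is a minimal Jacobi sketch for the 1D model problem -u'' = f with zero boundary values; this 1D setup is a simplification chosen only for illustration, and the grid size and iteration count are arbitrary.

```python
# Sketch: Jacobi iteration for -u'' = f on [0, 1] with u(0) = u(1) = 0.
# Each sweep reads only the previous iterate, so it parallelizes naturally.

def jacobi_poisson(f, n, iters):
    """Solve on n interior points with mesh width h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                      # includes the two boundary zeros
    for _ in range(iters):
        u = [0.0] + [0.5 * (u[i-1] + u[i+1] + h*h * f[i-1])
                     for i in range(1, n + 1)] + [0.0]
    return u

# Constant load f = 2 has the exact solution u(x) = x(1 - x).
n = 15
u = jacobi_poisson([2.0] * n, n, iters=2000)
h = 1.0 / (n + 1)
assert all(abs(u[i] - (i*h) * (1 - i*h)) < 1e-6 for i in range(n + 2))
```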
3. Conjugate Gradient
See Section 5.5.11 of HPSC.
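For the conjugate gradient project, the operations you would parallelize are the matrix-vector product and the dot products. A minimal serial sketch of unpreconditioned CG, using the 1D Laplacian stencil as an example SPD matrix (the matrix and sizes are illustrative assumptions, not part of the assignment):

```python
# Sketch: unpreconditioned conjugate gradient on a small SPD system.

def matvec(x):
    """y = A x for the tridiagonal [-1, 2, -1] stencil (example matrix)."""
    n = len(x)
    return [2*x[i] - (x[i-1] if i > 0 else 0.0) - (x[i+1] if i < n-1 else 0.0)
            for i in range(n)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cg(b, tol=1e-10, maxit=200):
    x = [0.0] * len(b)
    r = list(b)                      # residual of the zero initial guess
    p = list(r)
    rr = dot(r, r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol * tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

b = [1.0] * 10
x = cg(b)
res = [bi - axi for bi, axi in zip(b, matvec(x))]
assert dot(res, res) ** 0.5 < 1e-8   # residual check
```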
4. Gaussian Elimination
See Section 5.1 of HPSC.
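As a baseline for the Gaussian elimination project, here is a small serial sketch with partial pivoting and back substitution; a parallel version must decide how to distribute rows or columns and how to handle the pivot search across tasks.

```python
# Sketch: Gaussian elimination with partial pivoting, then back substitution.

def solve(A, b):
    n = len(A)
    A = [row[:] for row in A]                # work on copies
    b = b[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(A[i][k]))  # partial pivoting
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for i in range(k + 1, n):            # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                            # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
assert abs(x[0] - 0.8) < 1e-12 and abs(x[1] - 1.4) < 1e-12
```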
5. Molecular Dynamics
See Chapter 7 of HPSC.
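At the core of a molecular dynamics code is a pairwise interaction loop; parallelizing it (and avoiding the O(n^2) cost with cell or neighbor lists) is where the project effort goes. A minimal sketch, assuming a Lennard-Jones potential in reduced units (epsilon = sigma = 1, an illustrative choice):

```python
# Sketch: Lennard-Jones potential energy, O(n^2) over all pairs (1D positions).

def lj_energy(pos):
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = (pos[i] - pos[j]) ** 2
            inv6 = 1.0 / r2**3               # r^-6
            e += 4.0 * (inv6 * inv6 - inv6)  # 4 (r^-12 - r^-6)
    return e

# Two particles at the minimum-energy separation r = 2^(1/6) give E = -1.
assert abs(lj_energy([0.0, 2 ** (1 / 6)]) - (-1.0)) < 1e-9
```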
6. Sorting and Combinatorics
See Chapter 8 of HPSC.
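For the sorting topic, one classic starting point is odd-even transposition sort: within each phase, the compare-exchange pairs are disjoint, so they can all run concurrently. A serial sketch of the algorithm (the parallel mapping of pairs to tasks is up to you):

```python
# Sketch: odd-even transposition sort; n phases guarantee a sorted result.

def odd_even_sort(a):
    a = a[:]
    n = len(a)
    for phase in range(n):
        start = phase % 2
        for i in range(start, n - 1, 2):     # disjoint pairs: parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

assert odd_even_sort([5, 3, 8, 1, 9, 2]) == [1, 2, 3, 5, 8, 9]
```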
7. Graph Analytics
See Chapter 9 of HPSC.
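A common kernel for parallel graph analytics is level-synchronous breadth-first search: all vertices in the current frontier can be expanded concurrently, one level at a time. A minimal serial sketch on a toy adjacency-list graph (the graph is an illustrative example):

```python
from collections import deque

# Sketch: BFS distances; in a parallel version each frontier level is
# expanded by many tasks, followed by a synchronization.

def bfs_levels(adj, src):
    """Return distance from src for each vertex (-1 if unreachable)."""
    dist = {v: -1 for v in adj}
    dist[src] = 0
    frontier = deque([src])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if dist[w] == -1:
                dist[w] = dist[v] + 1
                frontier.append(w)
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2], 4: []}
assert bfs_levels(adj, 0) == {0: 0, 1: 1, 2: 1, 3: 2, 4: -1}
```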
8. N-body Simulation
See Chapter 10 of HPSC.
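The baseline an N-body project parallelizes (or accelerates with a tree method) is the direct O(n^2) force sum. A minimal 2D sketch, assuming unit masses, G = 1, and a small softening parameter, all illustrative choices:

```python
# Sketch: direct-sum accelerations for n bodies in 2D.

def accelerations(pos, eps=1e-3):
    n = len(pos)
    acc = []
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx*dx + dy*dy + eps*eps     # softening avoids divide-by-zero
            inv_r3 = r2 ** -1.5
            ax += dx * inv_r3
            ay += dy * inv_r3
        acc.append((ax, ay))
    return acc

a = accelerations([(0.0, 0.0), (1.0, 0.0)])
assert a[0][0] > 0 and a[1][0] < 0         # the bodies attract each other
assert abs(a[0][0] + a[1][0]) < 1e-12      # Newton's third law: forces cancel
```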
9. Monte Carlo Transport
See Chapter 11 of HPSC.
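Monte Carlo methods are embarrassingly parallel: independent particle histories (or samples) are computed per task and the tallies are reduced at the end. A toy sketch of that structure, estimating pi rather than simulating transport, with per-task seeds standing in for independent random streams:

```python
import random

# Sketch: each "task" runs its own seeded stream; partial results are reduced.

def estimate_pi(samples, seed):
    rng = random.Random(seed)            # per-task seed: reproducible streams
    hits = sum(1 for _ in range(samples)
               if rng.random()**2 + rng.random()**2 < 1.0)
    return 4.0 * hits / samples

# "Reduce" over 4 hypothetical tasks, each with its own seed.
parts = [estimate_pi(50_000, seed) for seed in range(4)]
pi_hat = sum(parts) / len(parts)
assert abs(pi_hat - 3.14159) < 0.05
```

For reproducibility in your verification test, note that per-task seeding makes the tally depend on the number of tasks; a counter-based or leapfrogged generator is one way to keep results task-count independent.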
10. Machine Learning
Open topic! Any reasonable parallel implementation of a machine learning algorithm that satisfies the criteria laid out above is acceptable.
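One common pattern for this topic is data parallelism: each task computes a gradient on its shard of the data, and the gradients are averaged (an allreduce in MPI terms). A minimal sketch for least-squares regression, with the shards processed serially here purely for illustration:

```python
# Sketch: data-parallel gradient descent for the model y = w * x.

def shard_gradient(w, xs, ys):
    """Gradient of mean squared error over one data shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [float(i) for i in range(1, 9)]
ys = [3.0 * x for x in xs]                          # ground truth: w = 3
shards = [(xs[i::4], ys[i::4]) for i in range(4)]   # 4 hypothetical tasks

w = 0.0
for _ in range(200):
    grads = [shard_gradient(w, sx, sy) for sx, sy in shards]
    w -= 0.01 * sum(grads) / len(grads)             # averaged ("allreduce")

assert abs(w - 3.0) < 1e-3
```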
Project Reports
You will prepare and submit a report detailing your project, code, and results. The reports will be graded according to this rubric.