This is part of DFG Priority Programme (Schwerpunkt) 1307, Algorithm Engineering.
“I would rather have today’s algorithms on yesterday’s computers than vice versa.” Philippe Toint
Modern computers deviate significantly from the uniform cost model traditionally used in algorithms research. This is particularly true for computing with large data on flash memory. Originally used in small portable devices, this block-based solid-state storage technology is on its way to becoming a new standard level in the PC memory hierarchy, partially even replacing hard disks. Unfortunately, the read/write/erase performance of flash memory depends heavily on access patterns, fill level, and optimizations that protect the devices against early wear-out. Therefore, even cache-efficient implementations of most classic algorithms may fail to exploit the benefits of flash.
After appropriately modeling flash memory, we aim at the design, analysis, implementation, and experimental evaluation of graph algorithms tailored to these models. We will cover both fundamental questions and applied problems on massive graphs stored on flash memory. Additionally, we will explore connections to parallelism, energy efficiency, and resilient computing. The best solutions will be added to libraries like STXXL. A potential benefit of this project is significantly improved methods for processing large-scale graphs such as the web graph or social network graphs.
We also continue to contribute to STXXL, the Standard Template Library for Extra Large Data Sets. Version 1.3.1 was released on March 10, 2011.