BEYONDMOORE addresses the timely research challenge of solving the software side of the post-Moore crisis. The techno-economic model of computing known as Moore's Law has enabled an exceptionally productive era for humanity and numerous scientific discoveries over the past 50+ years. However, due to fundamental limits in chip manufacturing, we are approaching the end of Moore's Law and entering a new era of computing in which continued performance improvement will likely come from extreme heterogeneity. Future systems are expected to combine a diverse set of hardware accelerators and memory technologies.
BEYONDMOORE has the ambitious goal of developing a software framework that performs static and dynamic optimizations, issues accelerator-initiated data transfers, and reasons about parallel execution strategies that exploit both processor and memory heterogeneity.
Budget: 1.5 Million Euros
The SparCity project aims to create a supercomputing framework that will provide efficient algorithms and coherent tools specifically designed to maximise the performance and energy efficiency of sparse computations on emerging HPC systems, while also opening up new usage areas for sparse computations in data analytics and deep learning.
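The sparse computations at the heart of the project can be illustrated with sparse matrix–vector multiplication (SpMV) in the compressed sparse row (CSR) format. The sketch below is purely illustrative (it is not project code, and the matrix values are made up); it shows why such kernels are irregular and memory-bound.

```python
# Minimal CSR sparse matrix-vector multiply (SpMV), the archetypal
# sparse kernel: irregular, memory-bound, and dominated by data movement.

def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]   # indirect access -> poor locality
        y[i] = acc
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The indirect access through `col_idx` is what makes SpMV performance depend so strongly on the sparsity pattern, which is exactly the kind of structure SparCity's algorithms and tools are designed to exploit.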
Overall, SparCity is a forward-looking project that makes a significant contribution to building Europe's strengths in the application of HPC and related software tools, in the adoption of low-energy processing technologies, and in the development of advanced software and services for its citizens.
For details, visit the project website: http://sparcity.eu/
Budget: 2.6 Million Euros
VERA aims to develop diagnostic tools for data movement, the main source of performance and energy inefficiency in parallel software. Technological advances and big data have increased the importance of data, and data movement has become more critical than computation for both the energy consumption and the performance of software. There is therefore a need for performance tools that automatically detect and measure data movement within the memory hierarchy and between cores.
VERA will develop data-movement tools that are substantially faster, more comprehensive, more scalable, and more accurate than prior approaches to tracking and analyzing data in parallel programs.
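The kind of behaviour such tools measure (in practice via hardware performance counters) can be mimicked with a toy cache model. The sketch below is my own illustration, not VERA code; the cache geometry is an arbitrary assumption. It counts cache-line fetches for two traversals of the same row-major 64x64 matrix, showing how access order alone changes data movement by an order of magnitude.

```python
# Toy direct-mapped cache model: counts how many cache lines must be
# fetched from memory for a given sequence of element accesses.
# Illustrative only -- real tools read hardware performance counters.

LINE_ELEMS = 8      # elements per cache line (assumed geometry)
NUM_SETS   = 64     # direct-mapped: one resident line per set

def count_line_fetches(accesses):
    cache = [None] * NUM_SETS            # line id currently held by each set
    fetches = 0
    for idx in accesses:
        line = idx // LINE_ELEMS         # cache line the element lives in
        s = line % NUM_SETS
        if cache[s] != line:             # miss: fetch the line from memory
            cache[s] = line
            fetches += 1
    return fetches

N = 64  # 64x64 matrix stored row-major as a flat array
row_major = [i * N + j for i in range(N) for j in range(N)]  # unit stride
col_major = [i * N + j for j in range(N) for i in range(N)]  # stride N

print(count_line_fetches(row_major))  # 512  (one fetch per line)
print(count_line_fetches(col_major))  # 4096 (every access misses)
```

The strided traversal touches a new cache line on every access and thrashes the small direct-mapped cache, so all 4096 accesses fetch from memory, versus 512 fetches for the unit-stride walk.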
Collaborators: Paul Kelly (Imperial College London)
The project attempts to increase the efficiency of solving the task assignment problem by enhancing a classical population-based metaheuristic with recently introduced quantum annealing devices. The stochastic nature of the quantum annealing process provides an extra source of diversification, essential for a thorough exploration of the search space, and it rapidly produces a large number of candidate solutions. The classical component of the algorithm, in turn, guides the search, ensuring the validity of the solutions and scaling the quality of the assignment at the cost of CPU time and/or the number of quantum annealer queries.
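Quantum annealers sample low-energy states of a quadratic binary objective (QUBO), so the assignment problem must first be cast in that form. The sketch below is an illustrative toy (the cost matrix and penalty weight are invented, and the annealer is replaced by exhaustive enumeration): binary variable x[i][j] = 1 means task i runs on machine j, and one-hot penalty terms encode the assignment constraints that, in the hybrid scheme, the classical component would enforce and refine.

```python
from itertools import product

# Tiny task-assignment problem cast as a QUBO-style energy function.
# A quantum annealer would sample low-energy bit strings; here we
# enumerate all 2^(n*n) states instead. Values are illustrative.

C = [[9, 5, 1],     # C[i][j]: cost of running task i on machine j
     [4, 2, 6],
     [3, 8, 7]]
n = len(C)
P = 20              # penalty weight, chosen larger than any single cost

def energy(x):
    cost = sum(C[i][j] * x[i][j] for i in range(n) for j in range(n))
    # one-hot penalties: each task on exactly one machine, and vice versa
    rows = sum((sum(x[i]) - 1) ** 2 for i in range(n))
    cols = sum((sum(x[i][j] for i in range(n)) - 1) ** 2 for j in range(n))
    return cost + P * (rows + cols)

best = min(
    ([list(bits[k * n:(k + 1) * n]) for k in range(n)]
     for bits in product((0, 1), repeat=n * n)),
    key=energy,
)
assignment = [row.index(1) for row in best]   # machine chosen for each task
print(assignment, energy(best))               # [2, 1, 0] 6
```

Because P dominates the costs, every invalid bit string pays at least one penalty of P and the minimum-energy state is a valid permutation; in the hybrid algorithm, many such samples seed and diversify the classical population instead of being enumerated exhaustively.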
Collaborators: Anastasiia Butko (Berkeley Lab)