The ASPIDE project aims to provide programming models that assist developers in building data-intensive applications for Exascale systems, while meeting the requested data-management and performance requirements.

Objective 1. Design and develop a new Exascale programming model for extreme data applications.
The ASPIDE team is committed to designing and developing a unified programming model that supports the implementation of scalable algorithms and applications on top of Exascale computing systems, driven by the need for scalable and massive data analysis applications. The solution will provide a novel DSL abstraction aimed at facilitating the implementation and execution of data-centric computing applications while addressing the interoperability and composability of all layers and components of Exascale computing systems. ASPIDE's vision is to coordinate all these components through a convergence of traditional HPC programming models, mainly based on MPI, with emerging technologies based on data-intensive paradigms, using efficient runtime mechanisms that provide novel approaches to configurability, intelligent data placement and transfer techniques, data locality, and locality-aware scheduling.
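To make the idea of a DSL abstraction for data-centric tasks concrete, the sketch below shows a minimal decorator-based task model in Python. All names here (`task`, `run_pipeline`, the `locality` hint) are hypothetical illustrations of the general approach, not the actual ASPIDE API.

```python
# Illustrative sketch of a decorator-based DSL for data-centric tasks.
# The names and the locality hints are hypothetical, not ASPIDE's API.
from functools import wraps

REGISTRY = []  # registered tasks, kept in declaration order


def task(locality=None):
    """Mark a function as a data-centric task with an optional locality hint.

    A real runtime could use the hint for data placement and
    locality-aware scheduling; here it is only stored on the task.
    """
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapper.locality = locality
        REGISTRY.append(wrapper)
        return wrapper
    return decorate


@task(locality="node-local")
def load(chunk):
    # Toy "load and transform" stage: double every element.
    return [x * 2 for x in chunk]


@task(locality="memory")
def reduce_sum(values):
    # Toy reduction stage.
    return sum(values)


def run_pipeline(data):
    """Run registered tasks in order, feeding each task's output to the next."""
    for t in REGISTRY:
        data = t(data)
    return data


if __name__ == "__main__":
    print(run_pipeline([1, 2, 3]))  # doubles to [2, 4, 6], then sums to 12
```

In a real system the runtime behind `run_pipeline` would consult the locality hints when mapping tasks to nodes; the sketch only shows how such hints can be attached declaratively at the programming-model level.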

Objective 2. Build new tools for monitoring extreme data analytics algorithms and applications.
The ASPIDE team is committed to designing and developing scalable monitoring and data analysis tools that include system and application data collection, data mining, and data-centric on-line performance analysis at Exascale level. Beyond the use of performance metrics, efficiency will be targeted through advanced modelling and correlation heuristics aimed at understanding their inter-relationships, which is essential for on-line optimisation and auto-tuning. An API and tools will be provided for coordinating both data-intensive applications and the infrastructure, with the aim of efficiently orchestrating monitoring data with runtimes and schedulers, taking into account the massive process allocation of Exascale systems and the required dynamic load balancing.
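The combination of metric collection and correlation heuristics described above can be sketched as follows. This is a minimal in-memory collector with a Pearson correlation between two metric streams; the class and method names are assumptions for illustration, not the ASPIDE monitoring API.

```python
# Minimal sketch of metric collection plus a correlation heuristic,
# assuming a simple in-memory collector; names are illustrative only.
from collections import defaultdict
import math


class MetricCollector:
    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, name, value):
        """Append one sample for the named metric."""
        self.samples[name].append(float(value))

    def correlation(self, a, b):
        """Pearson correlation between two metrics, truncated to equal length."""
        xs, ys = self.samples[a], self.samples[b]
        n = min(len(xs), len(ys))
        xs, ys = xs[:n], ys[:n]
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)


collector = MetricCollector()
for t in range(10):
    collector.record("io_wait", t)            # steadily rising I/O wait
    collector.record("throughput", 100 - t)   # steadily falling throughput

# Perfectly anti-correlated synthetic streams give -1.0.
print(round(collector.correlation("io_wait", "throughput"), 3))
```

An on-line optimiser or scheduler could watch such correlations to detect, for instance, that throughput degrades as I/O wait grows, and react by relocating data or rebalancing load.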

Objective 3. Adapt data management techniques to extreme-scale applications.
The ASPIDE team is committed to providing high-performance and reliable support for extreme data applications. The ASPIDE project will provide an integrated framework for efficient real-time and in-memory data analytics on large-scale HPC infrastructures that focus on data-intensive computation.
The main target is emerging data-centric architectures, which are based on large memory hierarchies. The project focuses on optimising the in-memory data layouts present in both traditional HPC and data-intensive applications. Supporting these memory layouts is key to the data management of large data-analytics applications.
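The in-memory layout question can be illustrated by the classic contrast between a row-oriented (array-of-structs) layout and a columnar (struct-of-arrays) layout, which analytics workloads often prefer because field-wise scans touch only the data they need. This is a generic pure-Python sketch of the layout idea, not an ASPIDE component.

```python
# Sketch contrasting row-oriented (array-of-structs, AoS) and columnar
# (struct-of-arrays, SoA) in-memory layouts; illustrative only.

# AoS: one record object per entity, fields interleaved in memory.
records = [{"id": i, "value": i * 0.5} for i in range(4)]

# SoA: one contiguous sequence per field, better suited to field-wise
# scans and vectorised analytics over a single column.
soa = {
    "id": [r["id"] for r in records],
    "value": [r["value"] for r in records],
}


def column_sum(layout, field):
    """Scan a single field without touching the other columns."""
    return sum(layout[field])


print(column_sum(soa, "value"))  # 0.0 + 0.5 + 1.0 + 1.5 = 3.0
```

In lower-level languages the same distinction governs cache and memory-hierarchy behaviour: a columnar layout keeps each field contiguous, so a scan over one field streams through memory instead of skipping over unused fields.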

Objective 4. Validate the usefulness of the concepts and tools through extreme data applications.
The goal of this objective is to validate the ASPIDE approach using applications from different domains. Specifically, we have selected applications from our industrial beneficiaries that combine concerns of performance, energy efficiency, and real-time response. The chosen applications are used in massive medical image processing, on-line urban analytics, and fast monitoring of content on social networks. These applications will be used to study the effectiveness of the various steps in the ASPIDE vision. Successful achievement of this objective will result in a global evaluation of the project with real applications, demonstrating the usefulness of the developed technologies for exploitation.