WP2: Unified programming model for data-intensive applications
The first task of Work Package 2 is Task 2.1, whose goal is to design the Data-centric Programming Model for Exascale Systems (DCEx), based on data-aware basic operations for data-intensive applications that support the scalable use of a massive number of processing elements, and to develop a prototype API based on that model. The DCEx programming model uses private data structures and limits the amount of data shared among parallel threads. The basic idea of DCEx is to structure programs into data-parallel blocks. Blocks are the units of shared- and distributed-memory parallel computation, communication, and migration in the memory/storage hierarchy. Computation threads execute close to the data, using near-data synchronization. The DCEx model exploits three main types of parallelism: data parallelism, data-driven task parallelism, and SPMD (Single-Program Multiple-Data) parallelism.
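To make the block-structured model concrete, the following is a minimal, hypothetical sketch in Python of the data-parallel style described above. The names `DataBlock`, `partition`, and `run_data_parallel` are illustrative inventions, not the actual DCEx API: each block holds a private copy of its partition, and each worker thread computes only on its own block, mimicking near-data execution with no shared state.

```python
from concurrent.futures import ThreadPoolExecutor

class DataBlock:
    """A data-parallel block: the unit of computation and communication.
    Each block owns its data privately; no state is shared between blocks."""
    def __init__(self, block_id, data):
        self.block_id = block_id
        self.data = list(data)  # private copy, never aliased by other blocks

    def map(self, fn):
        # Computation runs "close to the data": the worker operates
        # only on this block's private partition.
        self.data = [fn(x) for x in self.data]
        return self

def partition(dataset, n_blocks):
    """Split a dataset into private data-parallel blocks."""
    size = (len(dataset) + n_blocks - 1) // n_blocks
    return [DataBlock(i, dataset[i * size:(i + 1) * size])
            for i in range(n_blocks)]

def run_data_parallel(blocks, fn):
    """Data parallelism: each block is processed by its own worker."""
    with ThreadPoolExecutor(max_workers=len(blocks)) as pool:
        return list(pool.map(lambda b: b.map(fn), blocks))

blocks = partition(range(8), n_blocks=4)
results = run_data_parallel(blocks, lambda x: x * x)
merged = [x for b in results for x in b.data]
```

Because blocks never share data, the per-block `map` needs no locks; synchronization would only be required at the points where blocks communicate or migrate, which is the property the model exploits at scale.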
This document discusses libraries and software tools for extreme data processing. It is structured as follows: Section 2 covers the state of the art of existing libraries and tools for both data-intensive and data-analytics applications; Section 3 presents a prototype of a parallel programming framework for data-intensive applications based on task parallelism and workflow execution.
This deliverable covers the description of the explored data collection and mining methodologies for improving data placement in data-intensive applications.
This deliverable describes the main concepts of the integration of the DCEx programming model into our parallel pattern programming approach. In particular, the main entities of DCEx are presented in more detail, and a new DCEx execution model is proposed.
This deliverable describes the current prototypes of the tools developed in WP2. These include the DCEx programming model and its integration with the parallel pattern programming approach. We describe the application deployment software, as well as a tool developed for the induction of events and anomalies in Exascale application deployments, including examples on various testbeds. Furthermore, we show how this induction can be used to validate both Exascale applications and the monitoring and analysis tools developed in WP3.
We present here how to design programs using the DCEx prototype and how some use cases are implemented with the advanced version of the DCEx prototype. Moreover, we discuss a scheduler prototype implemented for the management of data and task locality. The deliverable also describes the deployment of the prototype on parallel machines, as well as tools developed for autotuning, for the management of events and anomalies, and for interaction with the DCEx scheduler. Furthermore, we report on the status of the prototypes of both the monitoring and the analysis tools.
This deliverable reports the final status of the ASPIDE Integrated Development Environment (AIDE) framework, the software framework prototype developed using the WP2 concepts and tools designed for programming Exascale systems. In particular, the deliverable provides information about accessing and using the AIDE prototype framework, developed starting from the programming features and the runtime mechanisms of the DCEx programming model. We illustrate how to install the AIDE software components and how to design and execute programs using the DCEx prototype. We also discuss some aspects of use case implementation with the advanced version of the DCEx prototype. Domain-specific language examples related to the use cases are also discussed, showing how the DCEx model can be exploited in specific application domains. Furthermore, we report on the prototypes of the monitoring and anomaly induction tools.
WP3: Scalable Monitoring and Auto-tuning
Deliverable 3.1 serves as a scientific summary of the research activities conducted within the first half of Task 3.1. More concretely, the deliverable describes multiple concepts pertaining to the development of a general monitoring model capable of supporting Exascale architectures. The initial activities conducted within Task 3.1 included a requirements analysis for the monitoring system in relation to the use cases. The results of this analysis were then used to define the monitoring approach, which has been purposely tuned for data-intensive applications, thus offering fine-grained performance information with low overhead at the Exascale level. The approach defined during the modeling stage will later be developed into a monitoring tool and integrated within the Exascale monitoring system as defined in Task 3.2.
This deliverable reports on the activities performed in WP3 for the initial definition of the Extreme Scale Monitoring Architecture. The work presented here is the result of T3.1 and T3.2. We present a comprehensive background and related work focusing on those components which can offer benefits to our Extreme Scale Monitoring Architecture.
Deliverable 3.3 serves as a scientific summary of the research activities conducted within the second half of Task 3.1 and their relation to Tasks 3.2 and 3.3. More concretely, the deliverable describes multiple concepts pertaining to the development of the analytical and monitoring system capable of collecting and mining system and application performance data in Exascale environments. After the requirements analysis and the research on the initial version of the algorithm and tool for assigning monitoring agents and aggregators, the next activities conducted within Task 3.1 included the exploration of data collection methods for the monitoring system in relation to the use cases. After collecting the data, pre-processing algorithms were explored to reduce its dimensionality. Thereafter, machine learning and data mining techniques were explored to extract reliable and meaningful information from the monitoring data, purposely tuned for the auto-tuning of data-intensive applications, thus offering fine-grained performance information with low overhead at the Exascale level.
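As a rough illustration of the dimensionality-reduction step described above, the sketch below applies plain principal component analysis (PCA) to a small matrix of synthetic node metrics. This is only an assumed example of the class of pre-processing technique mentioned; the function name `reduce_monitoring_dims` and the synthetic data are hypothetical, not the project's actual algorithm.

```python
import numpy as np

def reduce_monitoring_dims(samples, k):
    """Project high-dimensional monitoring samples onto their top-k
    principal components (PCA via covariance eigendecomposition)."""
    X = np.asarray(samples, dtype=float)
    Xc = X - X.mean(axis=0)                 # center each metric column
    cov = np.cov(Xc, rowvar=False)          # metric-by-metric covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top                         # reduced representation

# Hypothetical input: 6 samples of 4 node metrics, where two metric
# columns are linearly redundant copies of the other two.
rng = np.random.default_rng(0)
base = rng.normal(size=(6, 2))
samples = np.hstack([base, base * 2.0 + 0.1])
reduced = reduce_monitoring_dims(samples, k=2)
```

Because two of the four columns are exact linear functions of the others, the 6x4 input compresses to a 6x2 representation with essentially no information loss, which is the behaviour one wants when shipping monitoring data off-node at scale.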
Deliverable 3.4 serves as a scientific summary of the latest research activities conducted within Task 3.3 and their relation to Tasks 3.2 and 3.4. More concretely, the deliverable describes event and anomaly detection concepts pertaining to the constraining of the auto-tuning process and the analysis of application behaviour. After the requirements analysis and the research on the initial version of the event detection engine (presented in D3.3), the next activities conducted within Task 3.3 included the exploration of methods for collecting data from various monitoring systems and methods for application behaviour analysis related to auto-tuning and application scheduling. We therefore present in this deliverable the latest machine learning and data mining techniques for extracting reliable and meaningful information from the monitoring data, purposely tuned for data-intensive applications, thus enabling efficient auto-tuning.
This deliverable reports on the activities performed in WP3 for the final definition of the Extreme Scale Monitoring Architecture. The work presented here is the result of T3.1 and T3.2. It is based on and extends D3.2, which contained a comprehensive summary of background and related work and requirements of extreme scale monitoring.
This deliverable reports on the results achieved by Tasks T3.3 and T3.4 of ASPIDE. The main activity of Task T3.4 was to support the DCEx unified programming model for the auto-tuning of high-performance, automatically adaptable, and tunable data-intensive Exascale applications. The autotuner encompasses the infrastructure and the runtime system that provide the required parameter transformations and the experimentation, analysis, and tuning support. The autotuning solution assists developers in understanding the non-functional properties of their codes by making it easy to analyze and experiment with parameter variations. The autotuner provides an environment that makes it easy to reveal tunable parameters to the compiler and runtime system, independently of the optimization objectives. The event detection engine, defined in T3.3, is also used to support the autotuning of Exascale applications. In this document, we present our research activities related to the development of the autotuner, provide an extensive evaluation of its performance, and publish a user manual with a link to the source code of the autotuner and event detection engine.
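The idea of revealing tunable parameters to the runtime, independently of the objective, can be sketched as a small registry that the application populates and a search loop that the runtime drives. This is a hypothetical illustration, not the project's autotuner API: the `Tunable` class, the parameter names, and the toy objective are all invented for the example, and a real autotuner would measure actual execution time rather than exhaustively enumerating configurations.

```python
import itertools

class Tunable:
    """Registry through which an application reveals its tunable
    parameters (name plus candidate values) to the runtime,
    without committing to any particular optimization objective."""
    def __init__(self):
        self.params = {}

    def register(self, name, candidates):
        self.params[name] = list(candidates)

    def search(self, objective):
        """Evaluate every parameter combination and return the
        configuration that minimises the objective function."""
        names = list(self.params)
        best_cfg, best_score = None, float("inf")
        for combo in itertools.product(*(self.params[n] for n in names)):
            cfg = dict(zip(names, combo))
            score = objective(cfg)
            if score < best_score:
                best_cfg, best_score = cfg, score
        return best_cfg, best_score

tuner = Tunable()
tuner.register("block_size", [64, 128, 256])
tuner.register("n_threads", [2, 4])

# Toy objective standing in for a measured execution time.
def fake_runtime(cfg):
    return abs(cfg["block_size"] - 128) + abs(cfg["n_threads"] - 4)

best, score = tuner.search(fake_runtime)
```

Separating registration from search is what keeps the application code objective-agnostic: the same registered parameters can later be optimised for time, energy, or memory footprint by swapping the objective function.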
WP4: Exascale data management
This deliverable reports on the activities carried out during the first half of Task T4.1. The main activity of Task T4.1 has been to develop a methodology for profiling and analyzing data-intensive applications in order to identify opportunities for exploiting data locality. This methodology has been used in WP5 to trace data operations in the ASPIDE use cases. From those data and the previous experience of the project partners, the dynamics of data movement and layout throughout the whole storage I/O data path, from the back-end storage up to the application, have been traced and studied to design mechanisms for exposing and exploiting data locality. Finally, in this task, techniques for providing dynamic configurations of the I/O system have been developed to enhance the data life-cycle by reflecting the applications' I/O patterns and needs.
This deliverable reports on the activities carried out during Task T4.2 in ASPIDE. The main activity of Task T4.2 has been to provide solutions that leverage cross-layer data management for resilience and performance in Exascale systems. This work is based on the methodologies proposed in D4.1, resulting from Task 4.1.
This deliverable reports on the results achieved by Task T4.3 of ASPIDE. The main activity of Task T4.3 has been to define a data mining-based optimization system for the Intelligent Data Manager (IDM) module of the ASPIDE architecture. The goal of the proposed system is to optimize the in-memory execution of data-intensive workflows, taking into account the memory requirements of the workflow tasks, so as to avoid or reduce main memory saturation events, which may occur when multiple tasks are executed concurrently on the same computing node. Swapping or spilling to disk caused by main memory saturation may result in significant time overhead. The proposed IDM optimization system is aimed at reducing this overhead, which would otherwise be particularly costly when running workflows involving very large datasets and/or complex tasks to be processed in-memory.
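The memory-aware admission idea described above can be sketched as a greedy packing rule: a node accepts concurrently runnable tasks only while their combined predicted memory stays within capacity, deferring the rest rather than risking swap or spill overhead. This is an illustrative sketch of the general technique, not the actual IDM algorithm; the task names and memory estimates are hypothetical.

```python
def schedule_wave(ready_tasks, node_mem_gb):
    """Greedily pack runnable tasks onto one node under a memory cap.

    ready_tasks: list of (task_name, predicted_peak_mem_gb) pairs.
    Returns (wave, deferred): tasks admitted to run concurrently now,
    and tasks pushed to a later wave to avoid memory saturation."""
    wave, deferred, used = [], [], 0.0
    # Place the most memory-hungry tasks first (first-fit decreasing).
    for name, mem in sorted(ready_tasks, key=lambda t: -t[1]):
        if used + mem <= node_mem_gb:
            wave.append(name)
            used += mem
        else:
            deferred.append(name)
    return wave, deferred

# Hypothetical workflow tasks with predicted peak memory footprints.
tasks = [("join", 20.0), ("filter", 6.0), ("train", 48.0), ("agg", 10.0)]
wave, deferred = schedule_wave(tasks, node_mem_gb=64.0)
```

Here the four tasks together would need 84 GB, exceeding the 64 GB node; deferring one task keeps the running wave within physical memory, trading a little concurrency for the avoidance of the much larger swap/spill penalty the text describes.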
This deliverable reports on the results achieved by Task T4.4 of ASPIDE. The main activity of Task T4.4 was to use an in-memory, data-centric system as a scalable I/O architecture for temporary data generated during the execution of workflows. The aim was to achieve scalability along the four main aspects that contribute to the file I/O bottleneck illustrated as motivation above: metadata scalability, data scalability, locality exploitation, and file system server scalability. This has been achieved by developing an I/O optimisation framework supported by a specifically tailored auto-tuning system and an event detection engine that take the memory footprint into account.
WP5: Validation through applications
The first activity of Task T5.1 has been the definition of the strategy for use-case requirements collection and analysis. The strategy includes guidelines for identifying and formalizing these requirements. Requirements from all use cases have been gathered in a repository that will be used during the project for follow-up activities and the evaluation of their results. A sharing and negotiation process among the partners has been carried out to ensure that all project members share the same understanding and prioritization of the requirements. The second activity of the task has been to organize, sort, and cluster requirements that are common to all the use cases. Common requirements are addressed with priority, as they may have a major impact on the project. The set of use cases (urban computing, opinion mining, biomedicine, and deep learning) has been selected to provide a good landscape for understanding the needs of data-intensive applications on Exascale systems.
This report comprises detailed descriptions of the applications, the list of WP2-WP4 concepts used, their modes of integration with the applications, recommendations and APIs that will be taken into account in the redesign process, as well as the key performance indicators for the future evaluation of the performance improvements achieved through re-design and re-implementation.
This document presents the final validation and evaluation of project outcomes through applications. The report comprises final descriptions of the applications developed or ported within the ASPIDE project, the list of WP2-WP4 concepts applied to or integrated with the applications, as well as an evaluation, using the key performance indicators, of the performance improvements achieved through re-design and re-implementation. Results are presented for four use cases representing data-intensive applications from various domains, such as medicine, smart cities, and transport. D5.5 complements the previous D5.3 deliverable with results of experiments that integrate methods and tools developed by ASPIDE, adding also some new applications. The software used in the D5.5 evaluations was provided in D5.4.