The objective of the project is to expose multiple datasets through a single unified API that can be used easily by people without informatics training. To achieve this, we are developing generic conversion tools that transform any type of dataset (SQL database, TSV, XML) into standardized data models defined by domain experts.
While data are becoming increasingly easy to find and access on the Web, significant effort and skill are still required to process the volume and diversity of available data into convenient formats. Consequently, scientists and developers duplicate effort and are ultimately less productive in achieving their objectives. Here, we propose Data2Services, a new architecture to semi-automatically process diverse data into standardized data formats, databases, and services. Data2Services uses Docker to execute data transformation pipelines easily and reproducibly. These pipelines involve the automated conversion of target data into a semantic knowledge graph, which can be further refined to fit a particular data standard. The data can be loaded into a number of databases and made accessible through native and auto-generated APIs.
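To illustrate the kind of generic conversion such a pipeline automates, the sketch below (hypothetical; the function, column names, and base URI are illustrative and not part of Data2Services) turns TSV rows into subject-predicate-object triples, the building blocks of a semantic knowledge graph that experts can later map onto a standard data model:

```python
import csv
import io

def tsv_to_triples(tsv_text, subject_column, base_uri="http://example.org/"):
    """Convert each TSV row into (subject, predicate, object) triples.

    The value in subject_column becomes the subject URI; every other
    non-empty column becomes a predicate-object pair. This yields a
    generic graph that can be refined to fit a particular data standard.
    """
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    triples = []
    for row in reader:
        subject = base_uri + row[subject_column]
        for column, value in row.items():
            if column != subject_column and value:
                triples.append((subject, base_uri + column, value))
    return triples

# Example: a two-column record becomes two triples about one subject.
tsv = "id\tname\tspecies\nP1\tinsulin\thuman\n"
for triple in tsv_to_triples(tsv, "id"):
    print(triple)
# → ('http://example.org/P1', 'http://example.org/name', 'insulin')
# → ('http://example.org/P1', 'http://example.org/species', 'human')
```

In a real deployment the predicates would come from an expert-defined vocabulary rather than raw column names; the point here is only that the transformation is generic over the input schema.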