Background
This fourth edition of the ENVRI FAIR School represents another unmissable opportunity to learn about FAIRness in the framework of Research Infrastructures. After dealing with Data FAIRness and Data Management during previous editions, this edition of the school focuses on Services for FAIRness, from their design to their development and publication.
 
Learning Objectives
It is expected that, by the end of the course, participants will be better positioned to:
1. Examine approaches and methodologies for webservices for data
2. Map tools and their related characteristics
3. Illustrate potential security threats and related solutions
4. Design (step by step) data services 
5. Develop and publish a data service
 
Contents and Structure
Training Sessions:
1. Overview on the ENVRI project and goals
2. Design webservices for data
3. Develop webservices for data
4. Publish and secure data services by means of APIs
5. Best practices and lessons learned (experts panel)
 
Target Audience
The course is aimed at ENVRI data centre staff, RI representatives, IT experts,
researchers, PhD candidates and individuals interested in developing webservices.

Background 

The ENVRI Community International Winter School 2021 'ENVRI-FAIR Resources: Access & Discoverability' focuses on Data FAIRness and covers semantic navigation, Jupyter environments for visualisation and data discovery, resource access tools and cloud computing. Aiming to support end users in making the best use of the data, the School establishes the end-user perspective as a crucial element in developing good user interfaces and services for interacting with data.


Learning Objectives

It is expected that, by the end of the course, participants will be better positioned to:

1. Discuss basic concepts of semantics, presenting how they can enrich data resources, enhance FAIRness and foster their discoverability

2. Examine the full life cycle of an 'on demand' model run and results visualization for the creation of a new data product.

3. Illustrate how resources (datasets, services, workflows) can be created, published and accessed on a metadata catalogue (LifeWatch ERIC Metadata Catalogue).

4. Demonstrate the basic steps to run a legacy application in the cloud, develop native cloud applications, automate application deployment and auto-scale a runtime application.


Content and Structure

Training Sessions:

1. Semantics

2. VREs, Data analysis and Visualization

3. Resource Access Tools

4. Cloud computing for developing and operating data management services 


Target Audience

The course is aimed at data centre staff, RI representatives, IT experts and individuals interested in data access and discoverability.

Background 

In recent years, one of the major challenges in the Environmental and Earth Sciences has been managing and searching ever larger volumes of complex data, collected across multiple disciplines. Many different standards, technologies and common practices have been developed to support each phase of the Data Lifecycle. This course focuses on the creation and reuse of FAIR data and services in the Environmental and Earth sciences.


Learning Objectives

It is expected that, by the end of the course, participants will be better positioned to:

1. Discuss basic concepts of semantics, presenting how they can enrich data resources, enhance FAIRness and foster their discoverability

2. Examine the full life cycle of an 'on demand' model run and results visualization for the creation of a new data product.

3. Illustrate how resources (datasets, services, workflows) can be created, published and accessed on a metadata catalogue (LifeWatch ERIC Metadata Catalogue).

4. Demonstrate the basic steps to run a legacy application in the cloud, develop native cloud applications, automate application deployment and auto-scale a runtime application.


Content and Structure

Training Sessions:

1. Semantics

2. VREs, Data analysis and Visualization

3. Resource Access Tools

4. Cloud computing for developing and operating data management services 


Target Audience

The course is aimed at data centre staff, RI representatives, IT experts and individuals interested in data access and discoverability.



A tutorial to guide users on the use of the LifeWatch ERIC Metadata Catalogue.

This is a study-lab course in which topics are presented through short texts, practical sessions are introduced and explained, and assignments are completed by students. It is made up of seven units: three theoretical units, each followed by a practical unit. In order to exemplify the use of the proposed methods and models, a final unit is devoted to a case study on benthic macroinvertebrates in the Po River delta, providing the data, the R code and some interpretative comments. Practical units are based on the use of the R software, with purpose-specific libraries and functions. Quizzes are given after every unit, together with practical assignments to be addressed with the R software.

After this introductory unit, the first practical unit will be devoted to a brief introduction to the R software. Biodiversity partitioning will be the subject of the next two units, where methodology and software for γ, α and β diversity profiling will be described and applied to a sample data set. The theory behind mixed-effects modeling will be sketched and applied to investigate the variation of biodiversity measures. The last practical unit exemplifies the use of the R implementation of mixed-effects modeling routines with data from ecological surveys. Finally, we will summarize and exemplify the proposed methods with the complete analysis of a case study.
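To give a flavour of these practical units, the sketch below (illustrative only, not course material) partitions Shannon diversity into α, β and γ components with the 'vegan' package on a simulated site-by-species matrix; the commented lines at the end hint at how a mixed-effects model (e.g. with 'lme4') might then be used to study variation in such measures. The data, package choices and model formula are assumptions, not taken from the course.

library(vegan)

# Hypothetical site-by-species abundance matrix (5 sites, 10 species)
set.seed(1)
sites <- matrix(rpois(5 * 10, lambda = 3), nrow = 5,
                dimnames = list(paste0("site", 1:5), paste0("sp", 1:10)))

alpha <- mean(diversity(sites, index = "shannon"))     # mean within-site (alpha) diversity
gamma <- diversity(colSums(sites), index = "shannon")  # diversity of the pooled community (gamma)
beta  <- gamma - alpha                                 # additive beta component

round(c(alpha = alpha, beta = beta, gamma = gamma), 3)

# Variation in a diversity measure across sites nested in, e.g., sampling campaigns
# could then be explored with a mixed-effects model (hypothetical 'survey_data'):
# library(lme4)
# fit <- lmer(shannon ~ habitat + (1 | campaign), data = survey_data)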

This tutorial is a practical guide to Species Distribution Modelling (SDM). The tutorial uses the "dismo" package of "R" and provides guidelines on how to model species distributions starting from occurrence data (obtained from the internet, or supplied by users) and a set of bioclimatic variables.
After a brief introduction to SDM, with some examples of biological applications, the tutorial presents a guide (split into 5 main sections, or modules) that leads the user step by step to obtain a distribution map. Afterwards, there are some suggestions on how to deal with common issues and statistical errors that may affect the analysis, and on how to properly evaluate the output.
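For orientation, a minimal sketch of such a workflow is shown below. It fits a simple climate-envelope (Bioclim) model using the example data shipped with the 'dismo' package rather than downloaded occurrence records; it is an illustration under those assumptions, not the tutorial's own code.

library(dismo)
library(raster)

# Predictor rasters bundled with the dismo package
files <- list.files(file.path(system.file(package = "dismo"), "ex"),
                    pattern = "grd", full.names = TRUE)
predictors <- dropLayer(stack(files), "biome")   # keep only the continuous predictors

# Occurrence records (longitude/latitude) bundled with the package
occ <- read.csv(file.path(system.file(package = "dismo"), "ex", "bradypus.csv"))[, -1]

# Fit a Bioclim climate-envelope model and predict a suitability map
bc   <- bioclim(predictors, occ)
suit <- predict(predictors, bc)
plot(suit, main = "Predicted suitability (illustrative)")
points(occ, pch = 20, cex = 0.3)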

The Incidence Function Model (IFM) describes the presence/absence of a species in the patches of a highly fragmented landscape at discrete time intervals (years) as the result of colonization and extinction processes. The IFM ignores local dynamics since they are faster than metapopulation dynamics in producing changes in the size of local populations (Hanski, 1994).
In the IFM, the process of occupancy of patch i is described by a first-order Markov chain with two states, {0, 1} (empty and occupied, respectively). The extinction probability of a population in a patch is constant in time and is assumed to decrease with increasing patch area, while the colonization probability is assumed to be a sigmoidal function increasing with connectivity. The IFM is the best-known spatially explicit metapopulation model in the literature.
This model has been applied to conservation problems and to area-wide pest management.
First, a short introduction to discrete-time, finite-state, homogeneous Markov chains will be provided, aiming at understanding the basic mathematics of the IFM. Then, the IFM will be discussed in depth, considering (a) the role of the parameters and how they affect metapopulation dynamics; and (b) variations to the basic model (rescue effect, time-dependent colonization probabilities). Finally, we will move on to the use of the free software R for simulation and parameter estimation, as sketched below.
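To illustrate the kind of simulation the course works toward, the sketch below implements the basic IFM in base R, using the standard Hanski (1994) forms for the extinction and colonization probabilities; the patch geometry, parameter values and initial occupancy are illustrative assumptions, not course data.

# Basic IFM: extinction E_i = min(1, e / A_i^x); colonization C_i = S_i^2 / (S_i^2 + y^2),
# with connectivity S_i = sum_j p_j * exp(-alpha * d_ij) * A_j (no rescue effect).
set.seed(42)
n      <- 25                                     # number of patches
coords <- matrix(runif(2 * n, 0, 10), ncol = 2)  # patch coordinates (arbitrary units)
A      <- rlnorm(n)                              # patch areas
d      <- as.matrix(dist(coords))                # inter-patch distances

e <- 0.2; x <- 1; y <- 1; alpha <- 1             # illustrative parameter values
p <- rbinom(n, 1, 0.5)                           # initial occupancy (1 = occupied)

n_years   <- 100
occupancy <- numeric(n_years)
for (t in seq_len(n_years)) {
  S <- sapply(seq_len(n), function(i) sum(p[-i] * exp(-alpha * d[i, -i]) * A[-i]))
  C <- S^2 / (S^2 + y^2)        # colonization probability of an empty patch
  E <- pmin(1, e / A^x)         # extinction probability of an occupied patch
  p <- ifelse(p == 1, rbinom(n, 1, 1 - E), rbinom(n, 1, C))
  occupancy[t] <- mean(p)
}
plot(occupancy, type = "l", xlab = "Year", ylab = "Fraction of patches occupied")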