2019 FOSS4G Bucharest Talks speaker: Enrique Soriano
Talks
Struggle with WebGL to render vector data
The handling of spatial information has evolved from centralizing and publishing it in a single repository, through standards such as WMS, to serving the data as-is so that it can be processed directly in the browser, through WFS.
However, the WFS protocol has performance shortcomings related to the formats in which it serves data, giving way to more efficient formats for serving vector information, such as .pbf.
This format allows large amounts of data to be transmitted to the client browser.
Rendering these ever-larger datasets requires specific technologies such as WebGL.
This talk presents a state of the art of the existing WebGL libraries and a real test bed on which the data were evaluated, showing the results obtained and the best-performing solution.
The following frameworks have been considered for rendering large amounts of data:
* OpenLayers
* Mapbox GL JS
* deck.gl
* kepler.gl
Based on the tests executed to render large amounts of data, Mapbox GL JS proved to be the most flexible tool in terms of performance and capabilities.
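As an illustration of how clients such as Mapbox GL JS typically request .pbf vector tiles, the sketch below computes the z/x/y tile index containing a point at a given zoom level. It assumes the standard Web Mercator (slippy map) tiling scheme and a `/{z}/{x}/{y}.pbf` URL pattern, which the abstract does not spell out.

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int, int]:
    """Return the (z, x, y) Web Mercator tile index containing a point.

    Vector tile clients fetch .pbf tiles at URLs like /tiles/{z}/{x}/{y}.pbf,
    so the browser only downloads data for the visible map area at the
    current zoom level -- this is what keeps large datasets tractable.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return zoom, x, y

# Bucharest (approx. 26.10 E, 44.43 N) at zoom 10:
print(lonlat_to_tile(26.10, 44.43, 10))
```

At zoom 10 the world is split into a 1024 x 1024 grid, so panning the map only ever touches a handful of tiles at a time.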
Serverless infrastructure to manage vector and tiff data: pbf and COGs
Classical spatial information architectures have required a server to disseminate vector and raster spatial information.
With the evolution of technologies and the growing capabilities of browsers, new alternatives for disseminating spatial information in vector and raster formats have appeared.
Thus, it is now possible to build a serverless architecture that publishes spatial information based on the following standards and technologies:
* Vector information in .pbf format using the STAC and WFS3 architectures.
* Serverless raster information using COGs (Cloud Optimised GeoTIFFs).
* Rendering of large amounts of information via WebGL in the browser.
This talk offers an overview of a fully serverless architecture based on open source technologies.
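To make the COG side concrete, the sketch below parses the fixed 8-byte TIFF header that a client obtains with its first HTTP Range request against a static file. The header bytes here are synthetic; the parsing follows the TIFF 6.0 layout (byte-order mark, magic number 42, offset of the first IFD), which is what makes range-based access possible without any server-side processing.

```python
import struct

def parse_tiff_header(first_bytes: bytes) -> tuple[str, int]:
    """Parse the fixed 8-byte TIFF header: byte order, magic 42, first IFD offset.

    A Cloud Optimised GeoTIFF keeps its IFDs (the tile index) near the start
    of the file, so a browser or cloud function can learn the tile layout
    from one small HTTP Range request (e.g. 'Range: bytes=0-16383') and then
    fetch only the raster tiles that intersect the area of interest.
    """
    order = first_bytes[:2]
    if order == b"II":
        endian = "<"   # little-endian (Intel byte order)
    elif order == b"MM":
        endian = ">"   # big-endian (Motorola byte order)
    else:
        raise ValueError("not a TIFF file")
    magic, ifd_offset = struct.unpack(endian + "HI", first_bytes[2:8])
    if magic != 42:
        raise ValueError("bad TIFF magic number")
    return endian, ifd_offset

# Synthetic little-endian header whose first IFD starts at byte 8:
header = b"II" + struct.pack("<HI", 42, 8)
print(parse_tiff_header(header))  # ('<', 8)
```

In a real deployment the file sits on plain object storage (e.g. an S3-style bucket), and all the "server" logic above runs in the client.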
INSPIRE Reference Validator: status and next steps
The INSPIRE Reference Validator is the tool adopted by EU Member States to validate the resources of their Spatial Data Infrastructures. It is an implementation of the ETF, an open source testing framework based on ISO and OGC standards, which performs tests organized into Executable Test Suites (ETS) using SoapUI, BaseX and TEAM Engine. The ETF can be used via either a web application or a REST API; a Docker container is also available for quick deployment. The INSPIRE Reference Validator, recently deployed on the cloud, offers many open source ETS to test data sets, metadata, View Services (WMS, WMTS), Download Services (Atom, WFS, WCS, SOS) and Discovery Services (CSW) against the interoperability requirements set by the INSPIRE Technical Guidance documents. Starting from the context of INSPIRE Action 2017.4 on Validation and conformity testing, the talk will present the latest developments of the ETS and ETF (including the governance of the software project) and describe the future plans.
Publication of INSPIRE Datasets as Linked Data
In order to increase interoperability and facilitate the reuse of geospatial data, a methodology is proposed for publishing INSPIRE-compliant datasets as Linked Data, using the RDF format and various ontologies, such as those derived from ARE3NA for the Annex I themes or GeoSPARQL from the OGC. This methodology covers the whole process: generating the RDF triples from GML sources, setting up a triple store to persist the information, and issuing SPARQL queries against the exposed endpoint. A working example will be presented using the Spanish CNIG endpoint, where several datasets from Annex I are hosted. A series of queries joining external information from other endpoints, such as DBpedia or GeoNames, will then be used as a means to demonstrate the interoperability capabilities and the potential applications for enriching spatial data, extracting meaningful insights from it, and using it to support information systems.
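A minimal sketch of the kind of federated query the talk describes: selecting features with their GeoSPARQL WKT geometry from a local endpoint and enriching each one with a DBpedia abstract via a SERVICE clause. The local endpoint URL, the `owl:sameAs` linking pattern, and the variable names are illustrative placeholders, not the actual CNIG data model.

```python
# Placeholder URL -- the real CNIG SPARQL endpoint is not reproduced here.
LOCAL_ENDPOINT = "https://example.org/sparql"

# Federated SPARQL query: local GeoSPARQL geometries joined with DBpedia.
QUERY = """
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?feature ?wkt ?abstract WHERE {
  # Local triples: each feature carries a WKT geometry literal.
  ?feature geo:hasGeometry/geo:asWKT ?wkt ;
           owl:sameAs ?dbpediaResource .
  # Remote join: fetch the English abstract from the public DBpedia endpoint.
  SERVICE <https://dbpedia.org/sparql> {
    ?dbpediaResource dbo:abstract ?abstract .
    FILTER (lang(?abstract) = "en")
  }
}
LIMIT 10
"""

# A client would POST QUERY to LOCAL_ENDPOINT with
# 'Accept: application/sparql-results+json' and iterate over the bindings.
print("SERVICE" in QUERY and "geo:asWKT" in QUERY)  # True
```

The SERVICE clause is what makes the enrichment work without copying DBpedia locally: the triple store evaluates the inner pattern against the remote endpoint at query time.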