ISO 19157:2013 and 3D data show that quality control standards might need their own quality control

A part of my research involves analysing and describing the propagation of errors in 3D GIS. This is a new field, first tackled in my paper presented at the ISPRS conference in Toronto (October 2014). The work done so far is limited to 2/2.5D, and the increased production of 3D city models brings new aspects that do not exist in 2/2.5D, for instance the level of detail, one of the principal characteristics of a 3D geo-dataset.

The quality of geo-data has been an important topic for a long time, and it has been formalised by the geo-information technical committee ISO/TC 211 in their standards. ISO 19157:2013 Geographic information — Data quality is the principal standard for describing the quality of geo-data, and has superseded ISO 19113:2002. Among other things, the standard deals with the evaluation and description of positional and thematic errors, and with the completeness of the data.

However, it seems that the standard falls short when dealing with 3D data, and it raises many questions:

  • It does not provide a way to describe the quantity of invalid solids in a dataset, e.g. that 5% of the solids in the 3D dataset are not valid. The Logical Consistency / Topological consistency element does not cover this, and I see no other (or a generic) element that would fit this quality aspect.
  • It is not clear how to specify that the dataset contains data in the wrong level of detail, e.g. that the metadata says LOD2, but the dataset is actually LOD1. Is this something that can be stored in the usability element, the mysterious sixth quality element (see more about it below)?
  • The same question applies to the geometric reference of the model. E.g. the metadata states that the walls of the buildings were reconstructed as projections from roof edges (the photogrammetric way), but the walls in the dataset were actually extruded from building footprints (cadastral data). How could one record that such metadata is wrong?
  • The positional error in height, in the way it is described in the standard, seems to be geared towards 2.5D data, i.e. digital terrain models, and it is not suited for 3D city models.
  • What exactly is the sixth quality element—the usability element? Are there any examples from practice, especially for 3D data? The standard seems to be vague about it.
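To make the first point concrete, the kind of quantitative result the standard lacks an element for is straightforward to compute. The sketch below reports the share of invalid solids as a dataset-level percentage; note that `is_valid_solid()` is a hypothetical stand-in for illustration only—real validation of solids (e.g. against the ISO 19107 rules for closed shells, planarity, self-intersection, etc.) is far more involved.

```python
def is_valid_solid(solid):
    """Hypothetical validity predicate (illustration only): a real
    validator would check closedness, orientation, planarity, and
    self-intersections. Here we only require at least four faces,
    each with at least three vertices."""
    faces = solid.get("faces", [])
    return len(faces) >= 4 and all(len(f) >= 3 for f in faces)

def invalid_solid_rate(dataset):
    """Return the percentage of invalid solids in the dataset."""
    if not dataset:
        return 0.0
    invalid = sum(1 for solid in dataset if not is_valid_solid(solid))
    return 100.0 * invalid / len(dataset)

# Toy dataset: one degenerate 'solid' among three
dataset = [
    {"faces": [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]},  # valid
    {"faces": [[0, 1, 2]]},                                    # invalid
    {"faces": [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]},  # valid
]
print(round(invalid_solid_rate(dataset), 1))  # 33.3
```

The open question is not how to obtain such a number, but which ISO 19157 quality element it should be reported under.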

I am really curious how national mapping agencies describe the quality of their 3D data, since they usually rely on this standard, which, as shown above, does not accommodate 3D.

I have contacted a few people about these ambiguities, and the questions remain unanswered. If you have any insight into this topic, please let me know. The title of this post might be too bold, but I find it hard to believe that a geo-information standard released in 2013 does not take 3D geo-information into account. This should be a priority in the next revision of the standard.

This might be an interesting MSc topic, so feel free to contact me if you are interested in investigating and solving these issues.

One comment

  1. Interesting point Filip. When it comes to the quality of geospatial data, there are still lots of missing and unclear building blocks that need to be solved, including error propagation in 3D models. Plus, you are right about the ISO standard: when it comes to 3D, reading the ISO standard for the quality of geo-information is a real disappointment. Does OGC have some kind of standard for the quality of geodata? Or perhaps a working group tackling this issue? If not, you could raise awareness and perhaps initiate such a working group 😉