USGIF GotGeoint Blog

USGIF promotes geospatial intelligence tradecraft and a stronger community of interest between government, industry, academia, professional organizations and individuals focused on the development and application of geospatial intelligence to address national security objectives.

October 25, 2016

Introduction

Building Information Models (BIM) are comprehensive digital representations of buildings that include 3D geometry and associated data. They are widely used for designing, engineering, and constructing buildings (vertical BIM) and infrastructure (horizontal BIM), and owners are beginning to see value in BIM models and data for operating buildings. A BIM model includes not just the 3D geometry of all building elements and the enclosed spaces, but also semantic information representing the types of the elements, the relationships between them (defined by the data model), and the data associated with the geometric elements. With BIM mandated in countries like the UK, Finland and Singapore, with France and Russia to follow in the near future, and with increasing adoption of BIM by the private sector in the U.S. and other countries, the volume of BIM data is growing rapidly. When models from different disciplines are merged into common virtual models, the amount of information stored in such multidomain repositories quickly becomes large and complex, even for average-sized projects. Centralized or distributed repositories enable the creation, integration and maintenance of large quantities of data from the various stakeholders involved in the planning and building process and from the wide range of specialized software tools these stakeholders use. Many ways of creating, manipulating and visualizing BIM and CAD models have been developed, including CAD tools, Solibri, Navisworks, GTPPM, mvdXML, PQML, EQL, SQL, XSLT/XQuery, SPARQL, Gremlin, Linq, SDAI, and generic programming languages. However, the development of a general, intuitive query language for BIM models, including geometric and related information, remains a challenge.

The information needs of individual stakeholders in different parts of the construction process are very different. For example, a structural engineer does not require all the information available in the overall building model, such as the detailed specifications of a suspended ceiling. This makes the extraction of partial model subsets, or domain-specific views on large models, necessary. In addition, regulations, energy performance modeling, and building maintenance require finding specific facilities among the building assets.

In the last few years research has focused on the development of an open, domain-specific query language for BIM. Most of this research has focused on querying, updating and deleting data stored in Industry Foundation Classes (IFC) models. IFC is a vendor-neutral, open model that defines more than 650 entities and a few thousand attributes. Most industry BIM software tools are able to export IFC models, though there is typically some information loss. The bimserver.org platform is an open repository of IFC models and already provides a Java API to extract partial building information models from a repository.

Open query languages

Many existing database applications use the Structured Query Language (SQL), an ANSI/ISO standard with spatial extensions standardized by the Open Geospatial Consortium (OGC), as the standard language. SQL makes it possible to create, read, update and delete records in a Relational Database Management System (RDBMS). But the applicability of SQL in a BIM context is limited. SQL is based on operations on data stored in tables. Applying it to IFC, as a BIM example, would require storing the more than 650 entities, along with their thousands of attributes, in individual tables. Severe performance and scalability issues are associated with using SQL for BIM, and IFC-based SQL queries would rapidly become too complex.

The W3C Data Activity has defined a collection of technologies comprising a “web of data” where information is easily shared and reused across applications. Key parts of this technology stack are the RDF (Resource Description Framework) data model, the OWL Web Ontology Language and the SPARQL protocol and RDF query language, which has been standardized by the W3C. The Resource Description Framework (RDF) is a directed, labeled graph data format for representing information on the Web. RDF and SPARQL enable interoperability among distributed heterogeneous information systems, making possible queries that span several repositories and link information from different sources. Specifically, in a BIM context they enable ad hoc queries of sub-models that reside in repositories maintained by the different stakeholders in a building project.
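To make the RDF and SPARQL idea concrete, the sketch below implements the core of SPARQL, matching a triple pattern against a graph of (subject, predicate, object) triples, in a few lines of Python. The entity names are invented for illustration and are not real IFC or ifcOWL identifiers.

```python
# A minimal sketch of the core of SPARQL: matching a triple pattern
# against an RDF-style graph of (subject, predicate, object) triples.
# Entity names below are invented for illustration.

triples = {
    ("wall_1", "type", "IfcWall"),
    ("window_1", "type", "IfcWindow"),
    ("window_2", "type", "IfcWindow"),
    ("window_1", "hostedBy", "wall_1"),
}

def match(pattern, graph):
    """Return variable bindings for one triple pattern.

    Pattern terms starting with '?' are variables; any other term
    must match the corresponding triple term exactly.
    """
    results = []
    for triple in graph:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break
        else:
            results.append(binding)
    return results

# The miniature equivalent of: SELECT ?w WHERE { ?w rdf:type IfcWindow }
windows = sorted(b["?w"] for b in match(("?w", "type", "IfcWindow"), triples))
print(windows)  # ['window_1', 'window_2']
```

A full SPARQL engine adds joins over multiple patterns, filters and federation across endpoints, but the pattern-matching step above is the heart of it.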

Spatial queries

BIM data models can be simple or complex. A complex data model includes logical relationships between building elements, such as windows and walls, walls and storeys, and so on. These relationships are created at design time. A simple model may not include any explicit relationships. For example, there may not be any way to determine which windows are associated with a wall except visually. Spatial queries enable relationships to be determined after the creation of the BIM model. For example, a spatial query can find all the windows in a wall, all the walls of a storey, or all the HVAC and plumbing facilities in a storey.

The efficient implementation of spatial BIM queries requires 3D spatial indexes, such as octrees and multi-dimensional R-trees, together with algorithms that exploit them. It has long been known that spatial indexing is a key technology enabling efficient GIS systems, and it was realized very early in the development of spatial indexes that they are domain specific. Several GIS systems support an extended spatial SQL based on quadtree or R-tree indexes. These systems evolved historically from a 2D geometric representation approach. However, the functionality for processing 3D geometry in many GIS systems is restricted, and the performance of the available tools is unsuitable for complex building data sets. Therefore, it has been argued that a specialized tool is required to handle queries against elaborate BIM models containing vast numbers of elements and detailed geometry representations. Such a query language enables the spatial analysis of BIM models and the extraction of partial models that fulfill certain spatial constraints. Among other features, it includes operators that reflect the topological relationships between 3D spatial objects, including "within", "contain", "touch", "overlap", "disjoint" and "equal" in 3D space. Both octrees and multi-dimensional R-tree indexes have been investigated as a basis for an efficient BIM query language.
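As a rough illustration of such topological operators, the sketch below implements "disjoint", "contain" and "touch" tests on 3D axis-aligned bounding boxes in Python. A real engine evaluates exact element geometry behind an octree or R-tree index; the boxes and element names here are invented stand-ins.

```python
# A sketch of 3D topological predicates on axis-aligned bounding boxes
# (AABBs). Real BIM query engines test exact geometry behind an octree
# or R-tree index; the boxes below are invented stand-ins for elements.

from typing import NamedTuple, Tuple

class Box(NamedTuple):
    lo: Tuple[float, float, float]  # minimum corner (x, y, z)
    hi: Tuple[float, float, float]  # maximum corner (x, y, z)

def disjoint(a: Box, b: Box) -> bool:
    return any(a.hi[i] < b.lo[i] or b.hi[i] < a.lo[i] for i in range(3))

def contains(a: Box, b: Box) -> bool:
    """True if box a fully contains box b."""
    return all(a.lo[i] <= b.lo[i] and b.hi[i] <= a.hi[i] for i in range(3))

def touches(a: Box, b: Box) -> bool:
    """Boxes meet at a face, edge or corner without overlapping in volume."""
    if disjoint(a, b):
        return False
    overlap = [min(a.hi[i], b.hi[i]) - max(a.lo[i], b.lo[i]) for i in range(3)]
    return min(overlap) == 0  # zero extent on at least one axis

storey = Box((0, 0, 0), (20, 10, 3))
wall = Box((0, 0, 0), (20, 0.3, 3))
window = Box((5, 0, 1), (6, 0.3, 2))

print(contains(storey, wall), contains(wall, window))  # True True
```

An index such as an octree or R-tree would be used to avoid evaluating these predicates against every pair of elements in a large model.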

Open BIM query language

A BIM query language, dubbed BIMQL, has been proposed as an extendable, open, domain-specific query language for BIM models. It can be used to select and update partial aspects of building information models. A limited implementation of the language has been built on top of the bimserver.org platform. It is not complete (in particular, it does not support spatial queries), but it has provided a useful vantage point for future research, discussion and development of end-user interfaces to complex BIM models, as well as of service-oriented architectures.

Unlike traditional data modeling approaches, which are limited by the scope of their underlying schemas, RDF and related technologies can provide an open and common environment for sharing, integrating and linking data from different domains and data sources. In comparison with domain-specific query languages, SPARQL is especially applicable in scenarios where data from multiple sources are needed. Some important scenarios include:

All building objects should be classified according to the local building code

Where are the locations of the companies that produce the materials used in the walls of the hall?

The interesting thing about these scenarios is that they require the BIM geometry and data plus external data from other sources. This is the type of query that the W3C semantic web technologies are designed to support.
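As a toy illustration of such a cross-source query, the Python sketch below joins invented BIM triples (which wall uses which material) with equally invented external data (which company produces a material, and where it is located). In practice this join would be a federated SPARQL query across repositories.

```python
# A toy version of the second scenario above: joining invented BIM data
# (which walls use which material) with invented external data (which
# company produces a material, and where that company is located).

bim_triples = [
    ("wall_hall_1", "hasMaterial", "brick_type_A"),
    ("wall_hall_2", "hasMaterial", "concrete_B"),
]

# External, non-BIM data sources:
producer_of = {"brick_type_A": "Acme Bricks", "concrete_B": "BetonCo"}
location_of = {"Acme Bricks": "Rotterdam", "BetonCo": "Utrecht"}

# "Where are the companies that produce the materials used in these walls?"
answer = {wall: location_of[producer_of[material]]
          for wall, _, material in bim_triples}
print(answer)  # {'wall_hall_1': 'Rotterdam', 'wall_hall_2': 'Utrecht'}
```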

In the AEC domain, a typical use case is data integrity validation against BIM standards or building code requirements. For example, the Dutch BIM standard requires that every building element be associated with the appropriate building storey. Provided that the building model is represented in the standardized ifcOWL ontology, it is possible to check this spatial containment relationship using an off-the-shelf SPARQL implementation.
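A minimal sketch of this kind of validation check, with invented element and storey names: flag every element that lacks a containment link, which in SPARQL would typically be expressed with FILTER NOT EXISTS.

```python
# A minimal sketch of the storey-containment check: flag every building
# element that is not linked to a storey. In SPARQL over ifcOWL this
# would typically use FILTER NOT EXISTS; all names below are invented.

elements = {"wall_1", "wall_2", "door_1"}
contained_in = {("wall_1", "storey_1"), ("door_1", "storey_1")}

assigned = {element for element, _ in contained_in}
violations = sorted(elements - assigned)
print(violations)  # ['wall_2']
```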

Currently SPARQL does not contain the vocabularies or functions required by many use cases in the construction domain. Recently researchers have proposed extending SPARQL with a set of domain-specific functions for the construction world. There are precedents for this strategy. In 2011 the Open Geospatial Consortium (OGC) released GeoSPARQL, a geographic query language for RDF data, which extended SPARQL with geospatial functions that enabled spatial queries. The advantage of this approach is that the extended AEC functions are not only compatible with existing SPARQL environments, but can also be used to query building data combined with data from other fields.

But standard SPARQL can only crawl the data graph according to the structure of the BIM data model. There are many use cases that depend on geometric relationships that cannot be handled by off-the-shelf SPARQL. To address this, SPARQL has been extended to include the spatial operators "touches", "intersects", "disjoints", and "contains", as well as other domain-specific functions.

An example of a simple spatial query using a spatial extension is finding all walls which contain or adjoin doors. This requires the spatial function "touches" to find doors that are in or near walls.

As in the case of geospatial data, additional domain-specific functions are required to provide the necessary functionality. For example, functions were added for different ways of measuring distance (spt: denotes the spatial extensions namespace):

spt:distance is a function to return the shortest distance between two products in 3D space.

spt:distanceZ is a function to return the shortest vertical distance between two products.

spt:distanceXY is a function to return the shortest distance between the projections of two products on the XY plane.

Another example is finding all windows that are close (less than 300 units) to a column.
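The following Python sketch illustrates the intent of these distance functions and the proximity query, treating each product as a single reference point for simplicity (a real implementation measures between geometries); all coordinates and element names are invented.

```python
# A sketch of the three distance measures and the proximity query,
# treating each product as a single reference point for simplicity
# (real implementations measure between geometries). Coordinates and
# element names are invented; the 300-unit threshold is from the text.

import math

def distance(p, q):
    """spt:distance - shortest distance in 3D space."""
    return math.dist(p, q)

def distance_z(p, q):
    """spt:distanceZ - shortest vertical distance."""
    return abs(p[2] - q[2])

def distance_xy(p, q):
    """spt:distanceXY - distance between projections on the XY plane."""
    return math.dist(p[:2], q[:2])

column = (0, 0, 0)
windows = {"win_1": (100, 200, 150), "win_2": (900, 50, 10)}

# Find all windows closer than 300 units to the column.
near = sorted(w for w, pt in windows.items() if distance(pt, column) < 300)
print(near)  # ['win_1']
```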

Implementation and testing

As a proof of concept, the extensions were implemented based on the open source Jena ARQ query engine. An objective was to minimize low-level coding to make the function implementations more portable and more transparent for public review.

As a practical example, a case study of building code compliance checking using the International Building Code 705.8.4 (IBC 2006) was tested successfully using a BIM model of a building with 4 floors.

The researchers report that the main limitations of the open SPARQL-based approach they have proposed relate to the way it was implemented. The implementation is not completely portable, and performance could be improved, for example by integrating existing geometric libraries. Future research will address these issues and investigate more use cases, extending functions for specific domains and combining more data from different sources in the construction industry.

May 02, 2016

In the future, the deliverables at the end of a construction project will not only include the completed building itself, but also a digital "reality model" of the structure. The digital model will include digital as-builts composed of georeferenced point clouds captured by laser scanning, or meshes generated from images captured with a smartphone camera. That was the prediction of Bhupinder Singh of Bentley at the GeoBuiz 2016 Summit in Bethesda. This vision is shared by other professionals in the construction industry, including Ron Singh, Chief Surveyor at the Oregon Department of Transportation, who sees it as dramatically changing the construction process, from surveying through design and construction to operations and maintenance.

Consumer devices and professionals

Consumer mobile devices will increasingly be the platform for many professional activities which currently require specialized devices and a PC computing platform. Bryn Fosburgh of Trimble pointed out that while the location accuracy of GPS-equipped smartphones is currently 10 meters, centimeter precision is surely coming, because all that is required is a relatively simple antenna redesign. Bryn sees a continuum in these devices between consumer and professional applications. But while consumers tend to blindly accept whatever is available on their smartphones, professionals using these devices will have to be aware of the limits of the hardware: what it can be used for and what it cannot. For example, while smartphones in the future will be capable of centimeter accuracy in ideal locations, the accuracy will be lower where there is a lot of scatter in the RF signals from the GPS satellites. To ensure reliable results, a professional surveyor relies on multiple sources, including GPS linked to a base station and traditional total station measurements. Increasingly, other professionals will be required as well: Bryn reported a rising demand for photogrammetrists as a result of the growing volume and availability of high-resolution imagery, which needs professionals to help interpret it.

Platform as a service

For virtually all of the world's big IT companies, including IBM, Oracle, SAP, and Microsoft, and the world's big content companies such as Digital Globe, the cloud is central to their vision of the future. Together with others in the IT industry, Xavier Lopez of Oracle sees the future in platform as a service (PaaS), but there are challenges. A major challenge is integrating data across different databases, including traditional SQL databases from Oracle and Microsoft and non-SQL platforms such as Hadoop. Another challenge is semantics: frequently, different disciplines use different terms for the same thing. I have blogged about the CB-NL project in the Netherlands, aimed at solving the semantics problem in the construction industry. Another big issue that Xavier identified is data fusion without co-location: how do you enable efficient processing of a logically federated database when the data resides in different locations?

Monetizing the cloud

But perhaps the biggest challenge is finding an appropriate business model. It is not completely transparent how the cloud is going to be successfully monetized, though the trend is toward some form of subscription or pay-per-use. An interesting example of the successful application of the cloud in the construction industry is Textura, just acquired by Oracle. Textura’s cloud serves as a place where all of the participants in a project, from funding, through design, to construction, can come together and access the shared assets necessary to complete the project. Textura’s network includes about 85,000 general contractors and subcontractors who use Textura to collaborate on construction projects. Textura handles on average about 6,000 projects each month. The company’s shared database, which hosts project budgets, contracts, invoices, payments, change orders and compliance documents, resides in the cloud and is accessible to contract administrators, contractors, developers, subcontractors, consultants, title companies, and banks.

Big data and analytics

One of the big drivers for the cloud is enabling the extraction of information from the huge volumes of data (most of which include location) that are now available. I have blogged about the nearly 100 petabytes of earth observation imagery available from Digital Globe. According to Dr. Richard W. Spinrad, Chief Scientist at the National Oceanic and Atmospheric Administration (NOAA), NOAA expects to have 160 petabytes of archived geospatial data by 2020.

As another example, Jeff Jonas of IBM described a publicly available database which includes 600 billion transactions per day collected by telephone companies. This data, which includes location, can be mined (after being anonymized) for very interesting and commercially valuable information. Jeff Jonas described some examples - using it to analyze where people spend most of their time (their "pattern of life"), how many people go into a store, and how long they remain in a store.

Dipanshu Sharma, founder and CEO of xAd, gave a live illustration of some of the things his company, which is in the location-based marketing business, does with this type of data. xAd can customize ads to target people who are inside a store ("There's a great deal on aisle 5"), just outside one ("Come on in, we have great deals on umbrellas") or close to a competitor's store ("Check out XXXX, our prices are lower than YYYY") - the words are mine.

In all of these examples, geospatial data and technology are key to the applications. Jeff Jonas foresees that the apps of the future on your smartphone will require geolocation and will be so compelling that just about everyone will enable GPS location on their smartphones.

November 10, 2015

At the Year in Infrastructure conference in London, the Singapore Land Authority (SLA) won the annual Be Inspired Award for Innovation in Government. With the entire nation comprising only 700 square kilometers, land is a valuable and critical national resource for Singapore, and the Singapore Land Authority is responsible for making the most of it.

Specifically, SLA is responsible for the national land management system and for the management of all geographic information (GIS). At the Be Inspired Awards at the Year in Infrastructure 2015 conference in London, I had an opportunity to chat with Victor Khoo and Kean Huat Soon. Victor Khoo is Deputy Director Land Survey at the SLA and Kean Huat Soon is a Senior Surveyor and the technical lead for the project.

Singapore intends to be the world's first "smart nation". Part of this initiative involves developing a virtual Singapore that is intended to be the source of authoritative information about Singapore for use by government. The current priority is a 3D model of buildings and below and above ground infrastructure.

The Mapping Singapore in 3D project is financed 50% by the Government of Singapore and 50% by agencies, primarily the Civil Aviation Authority of Singapore (CAAS) and the Public Utilities Board (PUB). Several government agencies including the Building and Construction Authority (BCA) and the Housing Development Board (HDB) are already using it. CAAS finds it essential for maintaining aviation glide paths over the city as does the PUB for assessing the impact of flooding. HDB uses it for planning purposes.

The project involves capturing large amounts of data using multiple rapid mapping technologies including oblique imagery, airborne laser scanning, mobile laser scanning, and terrestrial scanning. The data is compiled into a 3D city model in a single database repository. The database includes geometry, topology, semantics and appearance. It relies on CityGML, a standard managed by the Open Geospatial Consortium (OGC), for the database schema and for data exchange. The project works at CityGML Levels of Detail (LOD) 1-3. The data is stored in a single geospatially enabled relational database (Oracle Spatial). The total volume of data is more than 50 terabytes. The most challenging part of the project has been the development of business processes and technologies for ensuring the data remains current. The database is open and accessible to all government agencies.

3D Cadastre

Victor Khoo is also responsible for developing a 3D cadastre for Singapore, a separate project at SLA. A 3D cadastre is especially critical for Singapore for several reasons: 85% of the population lives in public high-rise buildings; underground space is heavily used and underground density is increasing; and land developers are being very creative, packaging complex volumetric parcels that include both underground and above-ground space.

A 3D cadastre has data, legal, system and process dimensions. The legislation required for the legal aspect is already in place. The system and processes are intended to be completed by the end of 2016. With respect to data, a 3D GIS is being populated at SLA, starting with 2D land parcel data.

Land surveyors are an essential part of the 3D cadastre project. As a first step, surveyors are required to use electronic data exchange instead of paper (Singapore mandated electronic submissions for building permitting over a decade ago). Singapore has decided to support ePlan LandXML as an open standard for the exchange of survey data. In 2013 Singapore joined the ePlan working group, which is focused on developing a digital protocol for the transfer of cadastral data between the surveying industry and government. The working group was originally made up of technical experts in the management of cadastral data from the eight states and territories of Australia and from New Zealand.

An outstanding issue is the integration of BIM models into the Singapore city model. A number of groups are working on a way of integrating CityGML and the BIM Industry Foundation Classes (IFC) standard. An example is the Dutch 3D standard.

Energy modeling for cities

The European SUNSHINE (Smart Urban Services for Higher Energy Efficiency) project has found a data model within the INSPIRE standard that is adequate for energy performance modeling for cities. CityGML has strongly influenced the development of the INSPIRE BU model, both for the 2D and for the 3D profiles. Many use cases that were considered for INSPIRE BU require a three-dimensional representation of buildings, such as a building information model (BIM). Examples are noise emission simulation and mapping, solar radiation computation, and the design of an infrastructure project. To allow for that, the building representations in Levels of Detail LoD1-LoD4 of CityGML have been added to the INSPIRE BU model as a core 3D profile. For large-scale energy performance modeling at the urban level, the SUNSHINE team concluded that detailed interior elements of each building are not required. It is possible to work at Level of Detail 3 (LOD3) and just include elements like roofs, envelope walls, and windows.

March 05, 2014

The UK Government's goals for the construction industry are to reduce the CAPEX of public construction (design and build) projects and to reduce the UK's carbon intensity in line with its EU carbon commitments. To reach these goals the UK Government has undertaken several initiatives, one of which is a commitment to embrace Building Information Modelling (BIM) in Government projects over a five-year time frame, to encourage industry to participate in this effort, and to position the UK to become a world leader in BIM.

The UK Government has explicitly targeted Level 2 BIM in the maturity ramp, defined as “file based collaboration and library management.” Level 2 BIM is a series of domain specific models (e.g. architectural, structural, services etc) where structured data can be shared based on COBie UK 2012.

The initial focus is on the design/build part of the lifecycle, but the government has said that "the 20% saving refers to CapEx cost savings however we know that the largest prize for BIM lies in the operational stages of the project life-cycle".

BIM Level 3

The UK Government is projecting that Level 3 BIM will be introduced in 2018. BIM Level 3 is not yet clearly defined, but according to Peter Hansford it could include big data, support for intelligent infrastructure including the smart grid, and enabling smart city models. It could go beyond the several subsystem models of BIM Level 2 to a single model. Most interestingly, according to Peter Hansford, it could provide the basis for whole-lifecycle modeling, allowing a building design to be optimized for whole-life cost and whole-life carbon footprint.

According to Paul Morrell, former Chief Construction Adviser, UK, although some companies claim to use BIM Level 3, no one is delivering the full potential of this technology. A recent poll of 410 live webinar participants found that no one is operating at BIM Level 3 at present.

Whole Lifecycle

In a panel discussion on BIM Level 3, Rich Saxon, the UK Government's BIM Ambassador for Growth, pointed out that some aspects of BIM Level 2 are incomplete, which makes talking about adoption of BIM Level 3 extremely ambitious. In agreement with Peter Hansford, he sees an important goal of BIM Level 3 as going beyond projects to the whole lifecycle. In the future he foresees that firms will offer whole-life projects including design, build, and maintain. For example, Philips will design, build and maintain building lighting which is leased to the building owner. He warned that the emerging Internet of Things (IoT) is poised to be disruptive: we can imagine buildings with sensors and analytics able to monitor, analyse and dynamically respond to building situational awareness without human intervention.

Interoperability across the construction lifecycle and the need for standards

Ian Chapman, Director of the National BIM Library, said that part of BIM Level 3 should be standards, because these make it possible for software to interoperate, which enables data integration across the construction lifecycle (plan, design, build, O&M). It also makes it possible to find efficiencies by directly comparing different projects and project outcomes. Software interoperability requires a BIM geometry standard like IFC, a classification standard like Uniclass 2, an information delivery manual (IDM), and a data dictionary (DD). For example, ISO 29481-1:2010 specifies a methodology and format for the development of an information delivery manual (IDM); it is intended to facilitate interoperability between software applications used in the construction process, to promote digital collaboration between actors in the construction process, and to provide a basis for reliable information exchange. The data dictionary as Ian Chapman presented it appears to be very close to the concept library (CB-NL) sponsored by the Dutch Council on Building Information (BIR), which maps the same objects with different names across the construction lifecycle.

Enabling the right people to make decisions

Anne Kemp, Director of BIM at Atkins Global, argued that there needs to be a conversation at the national level to decide what we really want from BIM Level 3. Her perspective is that the most important goal of BIM Level 3 is that it needs to enable the right people to make decisions. For example, in 2007 Tewkesbury was flooded, but not in the most recent flooding, because changes were made: the right people were enabled to make the right decisions.

December 04, 2013

At the Second Annual Summit on Data Analytics for Utilities in Toronto, Mathieu Viau, a researcher at Hydro Quebec's IREQ research centre, gave a presentation on how he has been using a semantic layer, built on tools such as RDF, OWL and SPARQL, to implement semantic interoperability across real utility data from operational GIS, MDMS, ERP, DMS, and other systems. The implementation is inherently geospatial because it is based on Oracle Spatial and Graph.

In this example Mathieu mapped data from GIS, MDMS, DMS, ERP and other systems onto the IEC 61968 Common Information Model (CIM), a widely used data model standard for distribution networks. Data quality has become an increasingly important issue because greater grid reliability is one of the important objectives of the smart grid, especially in North America. The semantic layer allowed SPARQL queries against a variety of operational data, much of which includes location. Examples that Mathieu gave included finding all fuses that are more than 12 meters away from a conductor and mapping transformer voltage levels to identify overloaded equipment.
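As a rough illustration of the first of those queries, the Python sketch below filters fuses by their distance to the nearest conductor point, assuming the records from the different systems have already been mapped onto common identifiers; all of the data is invented.

```python
# A toy version of one query from the talk: once records from GIS, DMS
# and other systems are mapped onto common CIM-style identifiers, find
# fuses more than 12 meters from the nearest conductor point. All data
# below is invented for illustration.

import math

fuses = {"fuse_A": (0.0, 0.0), "fuse_B": (40.0, 5.0)}      # e.g. from GIS
conductor_points = [(0.0, 1.0), (10.0, 1.0), (20.0, 1.0)]  # e.g. from DMS

def nearest_conductor_distance(point):
    """Distance from a point to the closest conductor vertex."""
    return min(math.dist(point, c) for c in conductor_points)

suspect = sorted(f for f, pt in fuses.items()
                 if nearest_conductor_distance(pt) > 12.0)
print(suspect)  # ['fuse_B']
```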

July 30, 2013

Since Google Earth's release in June 2005, national mapping agencies and other government organizations involved with geospatial data and technology have been reassessing their role. At the Geospatial World Forum (GWF) in Amsterdam last year, Paul Cheung of the United Nations Initiative on Global Geospatial Information Management (GGIM) voiced the concern about their role that many national mapping agencies and other government organizations with responsibility for geospatial information have had since the advent of Google Earth, private data companies like Digital Globe, Geoeye, TeleAtlas and Navteq, crowd-sourced geospatial data like OpenStreetMap, and open data policies adopted by many governments around the world.

The UN-GGIM has just published "Future trends in geospatial information management: the five to ten year vision". A number of experts and visionaries from across the geospatial community, including data collection, academia, major users of geospatial information, the private sector, and the crowdsourced/volunteered geographic information movement, were invited to contribute their views on the emerging trends in the geospatial world.

Private sector participation

With respect to the private sector, the list of contributors includes senior people at Trimble, Hexagon, ESRI, and Digital Globe/GeoEye, but does not include Google, Microsoft, Nokia/Navteq, or TomTom (TeleAtlas). I also didn't see anyone on the list from OpenStreetMap. However, Google, Microsoft Bing, and OpenStreetMap are discussed in the report.

Trends

Given the large number of people contributing to this report, it is not surprising that it includes a comprehensive set of trends that are impacting national mapping and cadastral agencies (NMCAs), government bodies responsible for the provision of authoritative geospatial information within a country.

Big data - with an estimated 2.5 exabytes of data generated every day, huge volumes of data are expected to increase demand for geospatial technology to help make sense of it all

Linked data - connecting data to other pieces of data on the Web, providing context and adding value to existing data

Semantic technologies - the semantic web has been discussed for years, but there are signs within some vertical industries, such as construction, that something practical may be emerging

Internet of things - a hyper‑connected network of intelligent objects; I have seen estimates of a trillion objects in the near future

Cloud - infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS) and data as a service (DaaS)

Open‑source - expected to continue to gain momentum; a number of NMCAs have already adopted open‑source solutions in some of their services, and OGC reference implementations are open source

Open standards - in many ways the geospatial community is a model industry in the development and maintenance of open standards, led and coordinated by OGC in partnership with many organisations

3D data - the world is moving from two dimensional (2D) to three dimensional (3D) mapping; many municipalities are developing realistic and intelligent city models including buildings and infrastructure; the report projects that 3D is becoming an intrinsic part of core geospatial data, rather than an add‑on as it is now

3D visualization - 3D visualization technology, much of it arising in the gaming industry, makes it possible to convey complex concepts, projects, and analytical results to non-technical audiences such as politicians and the general public

4D - used widely in the construction industry, for example, linking Primavera to a BIM model helps with construction scheduling, 5D is also being used; the report also foresees 4D increasing over the coming five to ten years in geographic information systems (GIS), not only to understand change that has already taken place, but to also enable predictive modelling of future trends

Imagery - the report forecasts centimeter precision in many areas of the globe; the focus is likely to be performance and analytics

Unmanned aerial vehicles (UAVs) - UAVs are already seeing increasing civilian use across a number of sectors

GNSS - by 2015, there will be over 100 GNSS satellites in orbit. This will enable faster data collection in very challenging environments, with higher accuracy and greater integrity

Satellite gravimetry - some nations are already taking the step to move away from traditional schemes defined using large‑scale terrestrial observations and base the national vertical reference system on purely gravimetric geoids instead

Coordinate systems - are becoming more accurately defined as technology and techniques improve; this is critical for large-scale construction projects, especially in the transportation sector

Indoor positioning - an emerging frontier, but one that still represents major challenges

Passive crowdsourcing - anyone with a smart phone, with or without a GPS, is a sensor; this is already being used by private companies to derive information about traffic, shopping patterns, and other behaviour; social media make this an even richer source of information

Active crowdsourcing - OpenStreetMap (OSM), which grew by providing an alternative to an NMCA, is the best-known example; some NMCAs still find the concept of crowd-sourced data a challenge

Funding models - funding the collection of geospatial data remains a controversial topic in some countries; one of the major challenges of the next five to ten years for governments will be demonstrating the value of geospatial data to the national economy, as has been done in Australia and New Zealand and as is currently underway in Canada

For some vertical industries the report says that "governments will also play a role in many countries in driving, or at least supporting, cross-sector collaboration in areas such as building, construction and public safety, through building information modelling (BIM)..." This is already occurring in the U.S., U.K., Finland, Netherlands, Denmark, Norway, Hong Kong, and Singapore, where national governments are mandating, or at least strongly encouraging, the adoption of BIM for public projects. It is being driven primarily by efficiency and by reducing the cost of infrastructure.

The report argues that "governments have a key role to play in bringing
all actors together to ensure that our future society is a sustainable,
location-enabled one, underpinned by the sustainable provision and
effective management of reliable and trusted geospatial information."

The challenge for NMCAs in the area of traditional mapping is that in parts of the world, Google, Bing, OpenStreetMap (see here to compare these data sources), or a local crowd-sourced alternative like Malsingmaps provides more comprehensive, accurate, current and accessible data with less restrictive licensing and at a lower cost than the NMCA. There are those who argue that in the area of traditional mapping, private and crowd-sourced organizations do a better job at lower cost, and that government needs to find a new area of focus where it can provide a service that it would be difficult for anyone else to provide. Examples are managing authoritative data such as the parcel fabric, or mapping undeveloped areas such as Northern Canada or the Amazon interior that may be uneconomic for a commercial concern.

Infrastructure Mapping

An area where government may be uniquely qualified is mapping national infrastructure, much of which has a national security dimension. In many parts of the world the location of underground infrastructure including electric power, water and wastewater, gas, district heating, and other networks is unknown or poorly known.

According to national statistics, in the United States an underground utility line is hit on average every 60 seconds. The total cost to the national economy is estimated to be in the billions of dollars. The problem is that in most municipalities in North America, for years underground utility lines have been put in the ground not according to plan but wherever it has been easiest and cheapest to build them. In addition, 2D as-builts of underground infrastructure are unreliable. The result is that in most municipalities the location of underground utilities is very poorly known.

In the U.S., access to infrastructure data, including location, is restricted under the Protected Critical Infrastructure Information (PCII) Program. In addition, utilities, telecommunications companies and other organizations responsible for operating infrastructure networks are very often reluctant to share information about their networks for competitive and other reasons. There are successful examples around the world where government has helped enable a shared utility network database.

The Victorian Spatial Council has developed a framework for sharing based on licenses and metadata, not only within the Government of
Victoria, but also among utilities, telecommunications firms, National
Government, local governments, quasi-government, and non-government
organizations.

As a practical example, the City of Las Vegas has successfully completed
a pilot to develop a 3D model of the City's underground and
aboveground infrastructure and is expanding the project to cover more of
the City.

There is a growing body of information that indicates that the return on
investment (ROI) for investing in accurately geolocating underground
infrastructure is positive and substantial. A pilot in northern Italy
estimated a ROI of €16 for every euro invested in
improving the reliability of information about underground infrastructure.
For comparison the ROI for investing in improving information about underground infrastructure in the United States has been estimated
to range from $3 to $21 for every dollar invested.

May 23, 2013

At the Oracle Spatial and Graph User Meeting, the folks from Oracle outlined some of the things to expect in the next version of Oracle 12c. I was very impressed. All of the below is my interpretation of what will be in Oracle 12c.

Parametric curve support

The really big news for the architecture, engineering and construction (AEC) community who use CAD and BIM applications is that Oracle 12c is expected to support NURBS (non-uniform rational basis splines), a type of parametric curve widely used in the design space. According to Siva Ravada, the support will be very general and will be able to handle different ways of representing NURBS. Up to now the only parametric curve supported by Oracle was the circular arc.

To me it looks like Oracle is taking a major step toward being able to store natively and manipulate (in some cases by stroking) the parametrized curves used by CAD and BIM applications. There is a lot of CAD and BIM data out there and being able to store it and manipulate it in Oracle Spatial and Graph is going to be a game changer.

Orthorectification

Oracle is providing support for orthorectification embedded directly in the database. This will have important implications for the industry. Up to now, you had to pull the image out of wherever it was stored, transfer it over the network to someone's application, often from an image processing company, to orthorectify it, and then transfer it back to store the image or stream it to the end user. With 12c you will be able to orthorectify images in the database. This is in line with Oracle's objective of bringing processing to the data, rather than the data to processing.

Raster algebra

Oracle 12c will also have support for raster algebra, again in the database. For example, you will be able to average, in the database, 12 months of national temperature data stored in monthly raster files.
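To make the raster algebra idea concrete, here is a minimal sketch of the same averaging operation. This is not Oracle syntax; it just illustrates the cell-by-cell computation using NumPy, with tiny made-up 2x2 grids standing in for national temperature rasters.

```python
import numpy as np

# Illustration of the raster-algebra concept (not Oracle syntax):
# average twelve monthly temperature grids, cell by cell.
# Each "raster" here is a small 2x2 grid of temperatures in degrees C,
# filled with the month number purely for demonstration.
monthly_rasters = [np.full((2, 2), float(month)) for month in range(1, 13)]

# Stack the twelve grids into a (12, 2, 2) array and average along the
# time axis, producing one annual-mean raster of the same shape.
annual_mean = np.mean(np.stack(monthly_rasters), axis=0)

print(annual_mean)  # every cell is (1 + 2 + ... + 12) / 12 = 6.5
```

The point of doing this inside the database is that only the small result raster, not twelve full national grids, ever crosses the network.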

GDAL and PDAL support

Oracle supports the open source GDAL raster libraries, the industry standard developed by Frank Warmerdam (now with Google) and used by just about everyone for accessing raster images including ESRI. PDAL is a similar type of open source library for point clouds developed by Howard Butler and Michael Gerlek.

3D

Oracle supports three types of 3D data: 3D vectors, point clouds and terrain (surfaces). Oracle 12c will provide full support for 3D geodetic data: longitude, latitude and elevation in meters or feet.

Performance

Oracle has developed a new engine for handling massive
point clouds. Dan Geringer showed impressive results for a
2.8-billion-point point cloud (a 286 gigabyte file).

Processing vector geospatial
data is reported to be 50 to 100 times faster in 12c than in 11g.

Raster
operations have been parallelized which can dramatically reduce the time it takes for these operations.

Engineered Database

Since acquiring Sun, Oracle has become a hardware and software company and is offering integrated hardware and software solutions that offer extremely good performance characteristics. For example, processing LiDAR data benefits from hardware acceleration on engineered machines such as Exadata boxes.

Bundled data

Oracle is planning to bundle vector data from Nokia (Navteq), TomTom and others.

Oracle Spatial and Graph

The name of the product is changing to Oracle Spatial and Graph, partly because it has already had graph capabilities (network and RDF semantic) for several years, but probably more importantly because the demand for graph capabilities is accelerating, especially RDF semantic graphs for linked data, text mining, and social media analytics, according to Xavier Lopez. Even Oracle's NoSQL database (aka BerkeleyDB) is apparently getting graph capabilities.

May 21, 2013

One of the things that most impressed me at the Geospatial World Forum 2013
(GWF 2013) conference in Rotterdam is the degree to which, in the Netherlands, building information modeling (BIM) and geospatial are perceived to be tightly linked. In my previous post I gave an overview of a presentation by Bram Mommers, who works for the large private engineering company ARCADIS, on why integrating geospatial into the construction process is important.

Jaap Bakker, who is with Rijkswaterstaat, the national water agency in the Netherlands, presented more details about the Concept Library (CB-NL) initiative.

It is supported by the Dutch Council on Building Information (BIR), a joint industry and government council created to foster the development of building information modeling (BIM) in the Netherlands. It includes government agencies such as Rijkswaterstaat, private construction contractors, and engineering and architectural firms. Government funds it, but most of its expertise is seconded from industry.

Construction processes in the Netherlands

In the Netherlands many government projects are public-private partnerships (P3), where a private sector firm or consortium is responsible for the design-build-finance-maintain phases of the lifecycle and the government, as the owner, is responsible for operation.

Many AEC firms are adopting BIM because it is cheaper, reduces the risk of budget and schedule overruns, and results in fewer change orders. They may be motivated to adopt BIM to increase their margin, or because they are required to by the owner, which is often a governmental organization such as Rijkswaterstaat. Once construction is complete, at commissioning, the owner is handed a large volume of facilities data and as-builts. But this data is often unusable by the owner because it is incompatible or non-interoperable with the owner's asset management, GIS, and other systems. The primary objective of the Concept Library (CB-NL) is to create standards that enable re-use of design and construction data for operations. This is one example of the data impedance problem, where at every handover (design to construction, construction to operations) data is lost and has to be recreated.

Geospatial in the construction process

Marcel Reuvers, Manager of Geo-standards at Geonovum, gave an overview of some of the critical roles that geospatial plays in the construction lifecycle:

Planning / preparation phase

Asset management / maintenance

Managing as-builts

I would add sustainable design, which always requires geospatial information about local prevailing weather patterns and the location and orientation of neighbouring structures for right-to-light, wind, solar heating, natural lighting, solar PV generation potential, and other analyses.

I blogged previously about some fundamental changes to the construction process that will make geospatial central to it. What has been proposed is that a post-construction survey would become the critical source of reliable asset information, in the form of a 3D intelligent model maintained in a geospatially-enabled asset database. When a new project is initiated, 80-90% of the necessary information would already be available in the database, making a complete resurvey, as is the current construction practice, unnecessary. All that is required before design can begin is minimal due diligence to validate the as-builts. The new process also implies that there is a reliable geospatially-enabled asset database that is maintained throughout the operations and maintenance phase of the lifecycle.

Concept Library (CB-NL)

In the Netherlands there is already a standard decomposition for buildings called COINS. The idea is to build on this to create a general approach for decomposing infrastructure as well as buildings, and to make it sufficiently general to include geospatial.

The Concept Library is intended to map different terminology across domains: design, engineering, architecture, construction, asset management, facilities management, and geospatial. For example, it interrelates terms like arch bridge, rail bridge, spanning structure, viaduct, and crossing, each of which may be used by a different domain to refer to the same structure. The business benefit is that it would reduce the data impedance problem, where at every handover in the construction lifecycle, designer to contractor or contractor to owner, data is lost and has to be recreated.
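The cross-domain term mapping can be sketched as a simple lookup from (domain, term) pairs to a shared concept. This is a toy illustration of the idea only; the terms, domains, and structure here are hypothetical and bear no relation to the actual CB-NL ontology or its API.

```python
# Toy sketch of concept mapping across domains (hypothetical data;
# not the actual CB-NL ontology). Each domain-specific term points
# to one shared concept that all stakeholders can agree on.
TERM_TO_CONCEPT = {
    ("architecture", "arch bridge"): "bridge",
    ("rail", "rail bridge"): "bridge",
    ("engineering", "spanning structure"): "bridge",
    ("roads", "viaduct"): "bridge",
    ("geospatial", "crossing"): "bridge",
}

def shared_concept(domain: str, term: str) -> str:
    """Map a domain-specific term to the common concept, if known."""
    return TERM_TO_CONCEPT.get((domain, term.lower()), "unknown")

print(shared_concept("rail", "Rail bridge"))  # bridge
print(shared_concept("roads", "roundabout"))  # unknown
```

A real concept library would replace the dictionary with an ontology (e.g. in OWL) so that relationships between concepts, not just synonymy, can be expressed and queried.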

The vision is that the Concept Library is an open database based on a standard ontology that is searchable and has an open API so that vendors such as Autodesk, Bentley or ESRI can develop interfaces to it for their products.

CB-NL and geospatial

CB-NL was initially focused on BIM, but is being extended to include geospatial. CB-NL enables designers, contractors, asset managers and GIS staff to share a common dataset. It also makes it possible to automate the process of populating asset and facilities management applications and GIS databases for the operations and maintenance phase of the lifecycle.

Concept Library (CB-NL)

A two-year project to develop the Concept Library, supported by a large number of government and private organizations, was initiated in January 2013. It is using the OWL ontology language, which has been endorsed by the W3C, and buildingSmart's semantic constructs and tools for inputting and editing semantic content. Real-world pilots, for example a Rijkswaterstaat water services project and the Schiphol-Amsterdam-Almere ring road project, will be used to demonstrate the practicality of the approach.

According to Marcel Reuvers, there are a number of standards that relate
to the CB-NL project from a number of standards bodies, including the
OGC, buildingSmart, ISO/TC 211, and LandXML. Jaap Bakker said that the project team has been in touch with the Open Geospatial Consortium (OGC), in addition to the buildingSmart Alliance, about the CB-NL project.

March 13, 2013

The Semantic Web is an effort of the World Wide Web Consortium (W3C). The idea of the Semantic Web is to provide a common framework that allows data to be shared and reused across application, enterprise, and community boundaries; put another way, a web of data that can be processed directly and indirectly by machines. Converting the existing web to a semantic web has turned out to be a major challenge, and it remains largely unrealized.

However, as a first step, ways are being found to add some level of semantic content to existing data and services. As an important example, the OGC has announced the adoption of "Semantic annotations in OGC standards" as an OGC Best Practice. Annotation of web services (or data) compliant with OGC standards such as WMS, WFS and GML refers to attaching meaningful (semantic) descriptions to the service.

The authors give an example from a GML document of an XML schema that defines a class or feature type exploitationsponctualsproduction. They point out that from the information in this schema you can't tell what the data described by this schema represents. The name is confusing and the attributes don't provide any additional information. Whose name is the value of name, what does the attribute year refer to, and how is the allowedproduction measured?
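A minimal sketch of what such an annotation looks like in practice: attach a SAWSDL-style `modelReference` attribute (the W3C mechanism for pointing at a domain concept) to an otherwise opaque schema element. The concept URI below is hypothetical, and this is only an illustration of the annotation pattern, not the Best Practice's exact markup.

```python
import xml.etree.ElementTree as ET

# SAWSDL is the W3C mechanism for linking schema elements to concepts.
SAWSDL_NS = "http://www.w3.org/ns/sawsdl"
ET.register_namespace("sawsdl", SAWSDL_NS)

# The cryptic feature-type declaration from the authors' example.
schema = ET.fromstring('<element name="exploitationsponctualsproduction"/>')

# Annotate it with a pointer to a real-world concept so that people
# and machines can tell what the data represents. The URI is made up
# purely for illustration.
schema.set(
    f"{{{SAWSDL_NS}}}modelReference",
    "http://example.org/concepts#WaterExtractionSite",  # hypothetical URI
)

print(ET.tostring(schema, encoding="unicode"))
```

The annotated element now carries machine-readable meaning without changing the underlying data or breaking existing clients, which is why this approach works as a first step toward the Semantic Web.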

An OGC Best Practice means a recommended way to add semantic content to OGC web services based on existing standards. The authors discuss how to do this using not only OGC standards but also standards from the W3C and ISO/TC211.

OGC standards cover the functional dimension of a web service, but they don’t tell us much about what the data represents in the real world. For example, in the case of WMS or WFS the application knows how to load and visualize the data on a map, but this provides very little information to a user in a specific domain, such as engineering or construction, about how to read the displayed map.

Put another way, semantic annotations can convert geospatial data into infrastructure data. Semantic annotations are a way for data providers to connect geospatial data to real world domain models, to provide information about data and web services that is meaningful to an engineer or construction contractor. Semantic annotations not only make data meaningful to a much broader audience of consumers, but make it discoverable by a much broader audience through standard web search engines.