The use of optical and radar imagery is set to expand Tesera’s expertise in creating high-resolution forest inventories, which up to now has been based on a combination of airborne Lidar and colour-infrared imagery for the Canadian market. The predictive performance resulting from combinations of different satellite sensors is an expected outcome of this project. Visiona and Tesera intend to build the means to optimally combine different imagery and generate High Resolution Inventory data and analytics according to customer specifications. The participation of some customers in the R&D project is intended to help design a product that effectively meets their needs.
Visiona is a joint venture between the Embraer and Telebras groups devoted to the integration of spatial systems. Created in May 2012 as a result of a Brazilian government initiative to meet the goals of the National Program of Spatial Activities (PNAE), the company spearheaded the Brazilian Government's Geostationary Defense and Strategic Communications Satellite System, SGDC, and is a leader in the Brazilian satellite remote sensing market.
Founded in 1997, Tesera Systems is an employee-owned company that specializes in systems integration and application development. It utilises open source platforms, cloud computing and machine learning techniques, applied to remote sensing and ground plot data, to predict variables of forestry interest, which are used to create operational high-resolution forest inventories for use by the forest sector.
The ‘smart city’ concept entirely relies on a permanent stream of massive amounts of data acquired by a great variety of sensors distributed throughout the city. Smart use of all this data requires integration with 3D city maps for which point clouds, acquired by laser scanning or photogrammetry, are the main sources. The author of this article identifies the abilities of point clouds to support the smart city concept.
The benefits of point clouds for monitoring urban processes were recognised quite early on. Back in the early 1990s, airborne Lidar could already provide the height component which is so important for many urban needs, and not only the geometric accuracy but also the point density appeared to be amazingly high. Those needs include the design and inspection of utilities such as water mains, sewer systems, tunnels, bridges, roads, railways and power lines, and the creation of 3D city maps in which the shapes of buildings and other objects have been reconstructed with high spatial detail. One prerequisite is the availability of geodata which represents the environment in its full spatial and temporal dimensions at a highly detailed level. Before being ready for use, geodata has to be acquired. This seems trivial, but it is far from it. Thanks to tools like Google Maps, too many laymen take the existence of geodata for granted. The desire of many local authorities to implement the smart city concept will result in a strong need for increased acquisition of geodata. Since geodata acquisition is time-consuming, labour-intensive and associated with a number of technological issues and costs, it may readily be envisaged that the creation of smart cities will be associated with a bothersome burden.
People continue to move to cities. Population growth within the limited boundaries of a megapolis increases traffic, housing, utility density and the risk of calamities, and puts a strain on scarce resources. As a result, the economic, social and environmental impacts are tremendous. Added to this, nearly one billion people live in cities which may be hit by floods, droughts, cyclones, earthquakes or other natural disasters. Humans, power, water, gas, wastewater, bits and bytes – they all are transported through a labyrinth of roads, tunnels, pipes and cables. Flood, fire or other shocks can disrupt these arteries and veins of the city. To cope with these threats and optimise the use of scarce resources, many city authorities want to exploit the opportunities offered by today’s technology; they want their city to become ‘smart’.
In the smart city concept, information and communication technology (ICT) is fully exploited to increase the efficiency of mobility, retail and delivery services, to minimise energy consumption and air pollution, and to reduce the response time to casualties and hazard sites. The goals are often formulated in abstract terms such as liveability, economic progress, quality of life, sustainable development or attractiveness. In the smart city concept, decisions are not made ad hoc or based on political beliefs or preoccupations. Instead, decision-making is informed, based on the plethora of data obtained from sensors and analysed using smart algorithms – possibly based on machine-learning principles – running on high-performance computers. A city cannot be smart without the permanent availability of data collected by sensors mounted on static or moving objects. To illustrate this, the importance of point clouds for water management is discussed below.
Flood threats are increasing due to rising sea levels and rapid urbanisation. Flooding is caused by an accumulation of water which may result from heavy rainfall, swollen rivers or dike breaches. Flooding causes degradation of assets, economic losses, injuries and health risks. Renovation of dikes, creation of water overflow areas and protection of vulnerable dunes are essential for urbanised lowlands such as the Mississippi River area and the Rhine/Meuse Delta. For these purposes, detailed and accurate knowledge is required on the variations in elevation of the river bed and the surrounding area, height of dikes, flood waves, along-track and across-track slope of the river and the water resistance associated with land use. The use of a digital elevation model (DEM) or, even better, a 3D map derived from point clouds as input for flood models provides insight into areas at a high risk of inundation. The development of a new urbanised area will affect the flow of water. Is the sewerage system still able to drain off the water after heavy rainfall? Does the rainwater still flow in the direction of the river or reservoirs? Should the drainage layout be improved? These are all questions a smart city should be able to answer, and the solutions lie in the expert analysis of 3D geodata by geomatics professionals.
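As a minimal illustration of how a DEM feeds a flood analysis, the "bathtub" sketch below simply flags all cells lying below a given water level. The function name and the DEM values are hypothetical, and a real flood model would additionally account for hydraulic connectivity, drainage and flow resistance:

```python
import numpy as np

def inundation_mask(dem, water_level):
    """Return a boolean mask of DEM cells lying below the given water level.

    dem: 2D array of terrain elevations (metres above datum).
    water_level: flood stage to test (same datum).
    """
    return dem < water_level

# Hypothetical 4x4 DEM (metres); the values are illustrative only.
dem = np.array([
    [2.0, 2.5, 3.0, 3.5],
    [1.5, 2.0, 2.8, 3.2],
    [1.0, 1.2, 2.1, 3.0],
    [0.8, 1.0, 1.5, 2.6],
])

mask = inundation_mask(dem, water_level=1.6)
print(mask.sum())  # number of cells at risk at this stage
```

Raising `water_level` in steps and recounting the masked cells gives a first impression of how sensitive an area is to increasing flood stages.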
Essential in the smart city concept is the availability and continuous production of digital data, which may be collected dynamically or statically, continuously or intermittently. Smartphones and social media channels such as Twitter deliver real-time data on human movements and how people use the city’s space, either by public transport or their own methods, and other facilities. Static sensors distributed over the city produce data on particulate matter, noise pollution, temperature, groundwater level and so on. Cameras can detect traffic accidents and enable rescue workers to assess injuries while still rushing to the scene of a disaster. The permanent stream of real-time data captured by a dense sensor network needs to be processed, analysed, used in decision-making and disseminated to stakeholders. For this, the location of each sensor, whether static or dynamic (such as a smartphone in a cyclist’s pocket), must be known in a uniform coordinate system. To guarantee the smart use of all sensor data, the skeleton of a smart city consists of cadastral maps, master plans, utility maps (containing the location of underground pipelines, cables, sewerage systems, manholes and much more), as-built plans, maps outlining buildings and roads, and 3D maps at the necessary level of detail.
3D maps include the height dimension which is essential for many smart city applications such as flood risk monitoring, emergency response, viewpoint analysis and calculation of the solar energy potential of parcels and roofs. Making a city really smart starts with the creation of an accurate 3D map covering the entire city. Of course, such a 3D city map should be precise, detailed and up to date. Positional or thematic inaccuracies and outdatedness will impair proper decision-making. The very nature of geoinformation means that damage originating from substandard products may not be immediately recognisable. Mistakes made today may – a decade or two later – cause society significant losses, not only moneywise, but also in terms of injuries, accidents, demolished houses, and human suffering. Accurate 3D mapping is therefore an important, even essential, asset underpinning the smart city concept. Capturing and representing the 3D space of a city is challenging because of the level of detail required. 3D mapping requires the right data, the right software tools and geomatics professionals with the right knowledge.
The advantages of 3D city maps were first recognised many centuries ago. Armed with only pencil and paper, surveyors and cartographers in the 16th and 17th centuries exploited their craftsmanship to produce bird’s-eye views which combined a 2D map with perspective views of buildings (Figure 1). Hundreds of years later, aerial images processed with digital photogrammetric workstations enabled extraction of 3D coordinates of corners of buildings and other key points, albeit in a labour-intensive process. Apart from various types of imagery (satellite, aerial and terrestrial), other data sources are available today for creating 3D city maps, including airborne, mobile and terrestrial Lidar as well as topographic and cadastral 2D maps. Using a mix of these data sources allows a 3D city model to be created in a largely automated manner (Figure 2). As airborne Lidar – and, later on, mobile laser scanning (MLS) systems (see Figure 3) – became more mainstream, research into automated 3D mapping received a boost. For example, airborne Lidar enabled Rotterdam to generate a 3D map of buildings which was made openly available in 2011. The basic data source was a detailed map with footprints of buildings. The airborne Lidar point cloud was acquired by Fugro’s Fli-Map system in 2010. The high accuracy and point density of 30 points/m² enabled the automatic extrusion of the footprints, resulting in a Level of Detail 1 (LoD1) representation of Rotterdam in CityGML (Figure 4). Manual editing was required to eliminate anomalies, some caused by time shifts between the acquisition of the map and the point cloud. Since this model is now outdated, Rotterdam is currently at an advanced stage of releasing a much more detailed 3D map, developed in close consultation with a broad spectrum of prospective users. Regular updating is a necessity.
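The extrusion step behind an LoD1 model can be sketched in a few lines. This is a hypothetical illustration, not Rotterdam's actual workflow: the footprint comes from the 2D map, and a robust roof height (here the median of the Lidar returns inside the outline, which suppresses outliers) is used to raise the outline to a prism:

```python
import numpy as np

def extrude_footprint(footprint, point_heights, ground=0.0):
    """LoD1 block model: extrude a 2D footprint to a prism.

    footprint: list of (x, y) vertices of the building outline.
    point_heights: z-values of Lidar returns falling inside the footprint.
    The median is a simple robust choice for the roof height.
    """
    roof = float(np.median(point_heights))
    base_ring = [(x, y, ground) for x, y in footprint]
    roof_ring = [(x, y, roof) for x, y in footprint]
    return base_ring, roof_ring  # walls connect corresponding vertices

footprint = [(0, 0), (10, 0), (10, 6), (0, 6)]        # hypothetical outline
heights = np.array([8.9, 9.1, 9.0, 9.2, 8.8, 25.0])   # one outlier return
base, roof = extrude_footprint(footprint, heights)
print(roof[0][2])  # roof height estimated from the point cloud
```

Note how the single 25 m outlier (a crane or a bird, say) barely affects the median, whereas it would badly skew a mean-based height.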
Indeed, public authorities often become enthusiastic about the smart city concept initiated by one or more city departments and release financial resources, but underestimate the costs of the annual or biannual updating necessary to preserve the value of the 3D maps. Despite all the research into photogrammetric methods and laser scanning, fully automated 3D mapping has not yet materialised; today’s detailed 3D city maps are created manually. For example, AccuCities, based in London, UK, commercially offers detailed 3D maps based on the manual processing of high-resolution aerial images (Figure 5).
Since 2010, the acquisition capabilities of both photogrammetric and laser scanning sensors have improved tremendously, but so too have the geodata processing methods, thanks to innovative advancements such as dense image matching (DIM) and simultaneous localisation and mapping (SLAM). DIM enables the creation of dense point clouds from overlapping images. Meanwhile, SLAM has been one of the most prominent successes in robotics and enables a robot to move autonomously through an unknown space. No map is needed, and no GNSS; the solution is based on ‘guessing’ the position by exploiting all sensor data. The guesses are iteratively refined using data collected while the robot is moving and the iterative closest point (ICP) algorithm, which aims to minimise the difference between successive scans. Landmarks – features which are distinct from the background – are central in the SLAM approach. SLAM solutions are also beneficial for 3D mapping of indoor and subsurface spaces using trolleys, hand-held sticks (Figures 6 and 7) or backpacks (Figure 8). On trolleys, the main positioning and orientation sensors are odometers, inertial navigation sensors and lasers. Odometers count wheel revolutions, enabling computation of the speed and distance covered. When mounted on two wheels, odometers also indicate directional changes. SLAM is not one particular algorithm but rather an approach in which a diversity of solutions is employed depending on the on-board sensors, the nature of the environment and the possibilities to connect the trajectories with ground control points.
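The ICP idea behind many scan-matching pipelines can be sketched in a few lines: repeatedly pair each point of one scan with its nearest neighbour in the other, solve for the best rigid transform (rotation plus translation) in the least-squares sense, apply it, and iterate. The toy 2D example below, with synthetic points and brute-force matching, is a bare-bones illustration rather than a production SLAM component:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Toy 2D ICP: pair each source point with its nearest target point,
    solve for the rigid transform, apply it and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for toy point counts)
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Hypothetical 'scans': a target cloud and a slightly rotated/shifted copy.
rng = np.random.default_rng(0)
dst = rng.uniform(0, 10, size=(50, 2))
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = dst @ R0.T + np.array([0.3, -0.2])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())    # residual after alignment
```

In a real SLAM system the initial guess comes from odometry or inertial sensors, which keeps the nearest-neighbour pairing from locking onto wrong correspondences.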
Today’s photogrammetric software enables high automation of the chain, from flight planning, self-calibration of consumer-grade cameras and aerotriangulation to the creation of DEMs and orthomosaics as well as their confluence: 3D landscape and city models. However, the creation of detailed 3D maps of cities still requires considerable human intervention. DIM is one of the technological innovations contributing to the automation of the process. DIM is based on semi-global matching, the basics and potentials of which have been developed and demonstrated by Hirschmüller (2008). DIM enables the computation of a height value for each pixel, thus producing high-resolution digital surface models (DSMs) and – by filtering out points reflected on buildings and vegetation – DEMs in (semi-)automatic workflows. To demonstrate the capabilities of DIM implemented in commercial software, the section below presents a project comparing feature-based matching software, developed by students, with commercial software.
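The core of semi-global matching is a dynamic-programming recurrence that aggregates per-pixel matching costs along image paths, penalising small disparity changes with P1 and larger jumps with P2. The sketch below aggregates along a single left-to-right path over a tiny synthetic cost slice; full SGM sums eight or sixteen such paths before selecting the disparity per pixel:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=4.0):
    """One path of semi-global matching along a scanline.

    cost: (width, ndisp) matching-cost slice C(p, d).
    P1 penalises disparity changes of +/-1 pixel, P2 larger jumps;
    subtracting min_k L(p-1, k) keeps values bounded (Hirschmueller, 2008).
    """
    w, ndisp = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        m = prev.min()
        shift_minus = np.r_[np.inf, prev[:-1]]   # L(p-1, d-1)
        shift_plus = np.r_[prev[1:], np.inf]     # L(p-1, d+1)
        L[x] = cost[x] + np.minimum.reduce(
            [prev, shift_minus + P1, shift_plus + P1, np.full(ndisp, m + P2)]
        ) - m
    return L

# Tiny synthetic cost slice: the true disparity is 1 everywhere, except at
# a noisy pixel x=2 where disparity 3 spuriously looks cheapest.
C = np.full((5, 4), 5.0)
C[:, 1] = 1.0
C[2, 1], C[2, 3] = 4.0, 0.5
L = aggregate_left_to_right(C)
print(L.argmin(axis=1))  # smoothed disparities along the line
```

A per-pixel winner-takes-all on the raw costs would pick the spurious disparity at the noisy pixel; the path aggregation recovers the consistent value, which is exactly the smoothing effect that makes SGM-derived DSMs usable.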
During a project carried out in the last quarter of their first year, students at the Delft University of Technology (TU Delft) working towards a master’s in geomatics developed the ‘NARUX3D’ software to automatically extract textured mesh models from images captured by unmanned aerial systems (UASs). The software features three main modules which run in succession: image matching, sparse and dense point-cloud generation, and mesh generation. The scale-invariant feature transform (SIFT) was used to detect tie points in overlaps for co-registration of images, resulting in the camera orientations and an initial sparse point cloud using structure from motion (SfM). The sparse point cloud was then densified using a multi-view stereo (MVS) algorithm. To reconstruct a 3D mesh from the dense point cloud, two methods were developed. The first – Poisson surface reconstruction – worked well for smooth surfaces. The second method – PolyFit – was effective for creating compact mesh models from piece-wise planar objects. The performance of NARUX3D was tested at two sites located on the TU Delft campus. During a UAS flight, images of a building (see Figure 9) were taken every two seconds, resulting in 99 images.
Provided that the overlaps were sufficiently high, the software was able to create point clouds even when the facades consisted of a variety of materials with complex textures. The resulting dense point cloud and mesh shown in Figure 10 (on the left) were compared with the results from the commercial software Pix4Dmapper (Figure 10, on the right). In general, the point clouds generated by the latter were much denser, while the meshes were less noisy and had a more realistic appearance. The SIFT approach uses feature-based image matching: distinctive points are detected in the overlaps of images and assigned descriptors, and corresponding points are then sought based on similarities between descriptors. When little or no texture is present, no reliable matches can be found. This was the case at the second TU Delft site, a white cottage (Figure 11). The roof and facades were coated white, and Pix4Dmapper dealt well with that. However, NARUX3D also managed to extract points on corners, ribbing and other locations with sufficient contrast. Considering the price of commercial software, the results the students obtained in just two months were quite impressive. Semi-global matching is now a regular part of many commercial photogrammetric mapping software solutions, including Pix4Dmapper, SimActive’s Correlator3D, Racurs’ Photomod, Bentley’s ContextCapture, SURE of nFrames, Trimble’s Inpho and others.
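The descriptor-matching step described above is commonly implemented with Lowe's ratio test, and it is exactly this mechanism that fails on texture-poor surfaces: when the two best candidates are nearly equally similar, the match is discarded as ambiguous. A minimal sketch with made-up 4-dimensional descriptors (real SIFT descriptors are 128-dimensional):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors between two images with Lowe's ratio test.

    A descriptor in image A is matched to its nearest neighbour in image B
    only if that neighbour is clearly better (by `ratio`) than the
    second-nearest one; ambiguous matches, typical of repetitive or
    texture-poor surfaces, are discarded.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Hypothetical descriptors. The last one in A is ambiguous: two candidates
# in B are almost equally close, so the ratio test rejects it.
A = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0.5, 0.5, 0, 0]], dtype=float)
B = np.array([[1, 0, 0, 0.1], [0, 1, 0.1, 0],
              [0.5, 0.45, 0, 0], [0.45, 0.5, 0, 0]], dtype=float)
print(ratio_test_matches(A, B))
```

On a uniformly white facade most descriptors resemble each other, so nearly every candidate pair fails the ratio test; that is why the point clouds thin out there unless corners or other high-contrast details are present.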
Most local governments do not have enough skilled personnel to implement the smart city concept. The private sector is eager to fill the gap, but companies often lack geomatics expertise too. No geodata or geoICT has much value unless it is in the hands of professionals who have the knowledge and craftsmanship to transform the data into information which supports the local authorities’ goals and needs. And the best-qualified people to do this – experts who can set geodata to work for the benefit of the masses – are the geomatics specialists. Unfortunately, in and around the year 2000, many institutes of higher education experienced a low influx of new geomatics students and some universities abandoned their geomatics programmes altogether. Today, society is confronted with a shortage of skilled, experienced and knowledgeable geomatics professionals. Universities and local governments should join forces to fuel young people’s enthusiasm for a career in geomatics. They are badly needed.
Hirschmüller, H. (2008) Stereo Processing by Semiglobal Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2), pp. 328-341.
McKinsey & Company (2018) Smart Cities: Digital solutions for a more livable future.
In 2017, 11 European public mapping agencies (PMAs) financed a EuroSDR project to explore the economic value of 3D geoinformation. They benefited from being able to share knowledge with one another about the findings and their concerns. For the investigated cases, the cost-benefit ratio was found to be about 1:3. Since the calculated financial benefits were rather circumstantial, however, the academics involved in the project complemented the study with a scientific article in which they attempt to provide more nuance by focusing on the broader public value. They conclude that, in general, the use of 3D geoinformation is increasingly profitable. For PMAs, future research will be not so much about whether to make the transition to 3D, but more about how to do so.
Public mapping agencies (PMAs) are in a difficult position. After 25 years, the production processes for 2D mapping and GIS have finally reached a status of high efficiency, but now the market increasingly needs 3D data and information. New but here-to-stay applications like BIM, smart cities, augmented reality and climate change studies ‘demand’ it. Moreover, high-tech innovations and attractive 3D visualisations are more effective means of communicating data about the physical and built environments. Yet most PMAs are reluctant to invest in anything more than large pilots or individual projects. One of the problems is that the public authority that has to bear the costs is not usually the one that enjoys most of the benefits. The core competence of PMAs is to produce all the highly reliable geographical data of the territory that is needed and to distribute it in as client-friendly a way as possible within their imposed business model. If the clients’ needs evolve to 3D, the PMAs have to follow, otherwise they risk losing their authoritative position – or do they? That is the key question for many European PMAs, and it is why they participated in the EuroSDR project to investigate the economic value of 3D.
For the study – with involvement of the company ConsultingWhere – six application fields were selected: forest management, flood management, 3D cadastre and valuation, civil contingency, asset management, and urban planning. Over the course of six different EuroSDR workshops, attended by representatives of the PMAs and the stakeholders of the application fields, value chain analysis was applied. Improved planning processes were a clear theme in all of the fields. Two application fields were selected for quantification using cost-benefit analysis: flood management, due to the ubiquity of the problem and its high political profile, and urban planning because 3D geoinformation has a significant potential to contribute to the problems of managing urban growth.
In urban planning, the costs and benefits were evaluated in detail by scaling up and comparing real-world cost estimates from Denmark, a country that uses 3D geoinformation in this field, with the Republic of Ireland (using the comparative land areas), which does not. The benefits are based on the financial impacts that are related to processes in the urban planning value chain:
- Local area planning (LAP) revision and the impact on the planning authority
- Visual impact assessment and the lower costs for developers
- Reduced time for citizens to make LAP submissions and major scheme objections
- General improvements to public-sector efficiency.
After the application of some correction factors, the cost-benefit ratio for a ten-year period was calculated as 1:2.1, with a net present value (NPV) of €22 million.
In flood management, the same financial model was applied, but three approaches were taken to ‘triangulate’ the assessment. Firstly, a cost avoidance method used data from Switzerland, one of the few countries in Europe where the PMA made a complete switch to 3D several years ago. Estimating the damage avoided since then thanks to the use of 3D geoinformation resulted in a cost-benefit ratio of 1:3.3 and an NPV after ten years of €8.9 million. Secondly, a case study examined an impact study in The Netherlands, where high-resolution height data was published as open data some years ago. The results of the Dutch study were compared with the cost-benefit potential of a similar high-resolution 3D digital terrain model in Denmark. The third method applied an adapted business case transfer from the USA’s National Enhanced Elevation Assessment to Belgium. The results of the latter two methods were similar to the Swiss one.
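For readers less familiar with the financial terms used here, net present value simply discounts future cash flows back to today at a chosen rate. The sketch below uses purely illustrative numbers, not figures from the EuroSDR study:

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative only (not taken from the study): a 10-million upfront
# investment followed by nine years of net benefits, discounted at 4%.
flows = [-10.0] + [2.5] * 9   # millions of euros
print(round(npv(flows, 0.04), 2))
```

A positive NPV means the discounted benefits exceed the investment; the cost-benefit ratios quoted in the study express the same comparison as a proportion rather than a difference.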
The EuroSDR study concludes: “The cost-benefit analysis in both urban planning and flood management demonstrated that benefits outstrip costs by a multiple of two to three times, even when considering each case in isolation. As further applications of 3D geoinformation are added, additional costs should rise more slowly, whilst benefits should accrue at a similar rate, thereby enhancing the overall rate of return.” The PMAs were very content, both with the study outcome and also with the opportunity to share knowledge about the challenges they face in convincing decision-makers to invest in this innovation.
3D production and processing costs are approaching the same level as for 2D, but 3D innovations also require time and money to invest in technical infrastructure and in transforming business and operating models. In a climate of budgetary constraints, the economic feasibility of 3D innovation is “a point of consideration”, states the report. Accordingly, to further assist public-sector managers making the case for change, two academics involved in the EuroSDR study – Professor Jantien Stoter (TU Delft, The Netherlands, specialised in 3D and 4D geoinformation) and Professor Joep Crompvoets (KU Leuven, Belgium, specialised in national spatial data infrastructures), both of whom have intimate knowledge of PMAs – sought other means to complement the findings. They collaborated with Dr Serene Ho (KU Leuven, specialised in institutional aspects of geospatial innovation) to explore 3D geoinformation innovation from a public value perspective. Such a perspective explores the value of an innovation from the users’ point of view. While acknowledging that traditional innovation ideals of effectiveness and efficiency matter, it draws attention to civic objectives like responsiveness to needs, liberty and participation, citizenship and transparency. Re-examining the data collected for the EuroSDR study, the authors published a qualitative analysis as a scientific article (in Land, May 2018). It reveals that, in their experience, proving economic value is vital, but the creation of public value is an equally or even more significant driving factor for transformative innovation “as this conveys social and political currency”.
The article points out that moving towards a model where 3D geoinformation is the dominant data environment for PMAs will undoubtedly yield public value. 3D geoinformation enhances a government’s ability to protect its citizens’ quality of life by providing advanced analytical abilities that result in better living environments and avoiding damage to property. It improves the safety of emergency responders. It helps in planning and securing vital infrastructure. Also, it engenders greater trust in public organisations by fostering greater transparency, confidence and the ability to communicate decisions. The paper describes many examples from the 11 European countries.
Classification of the feedback from stakeholders demonstrates clear potential in financial and strategic aspects, mainly generated through mechanisms of improving the effectiveness of technical products (and, subsequently, various workflows and applications) and enhancing the data environment in which stakeholders operate. The potential for public value creation for PMAs could be even more significant, given the fundamental nature of cadastral data for all other development-related decision-making and the growing importance of sound urban planning. The authors conclude: “Innovation in 3D geoinformation would therefore likely consolidate and advance the PMAs’ position as authoritative geodata custodians and emphasise the role PMAs play in fostering secure and sustainable development. The move to 3D is a better way to meet the evolving nature and scale of their public mandate”.
“I have no doubt that 3D mapping is the near future in a fast-growing number of applications,” says Jantien Stoter. Joep Crompvoets agrees – albeit a little reluctantly, because he also believes that many so-called new applications function fine, and more simply, with 2D data: “A decision-maker’s heart beats faster when they look at 3D visualisations, and this innovation is a logical evolution. So I agree that we will not get this genie back into the bottle again, although it comes at a price.” In their scientific article they refer to the economic value outcome of the EuroSDR study by stating: “The study found that such innovation was potentially a viable return on investment, perhaps even profitable.” This downplays the cost-benefit ratio of 1:3 actually presented in the study. Both professors explain: “We don’t want to give the impression that the financial benefits for the national or regional mapping organisation are always that manifest, let alone achieve a certain cost-benefit ratio. And it makes a big difference whether the investors’ perspective is aimed at obtaining benefits for the country as a whole, or whether the PMA has to sell the data to be profitable on its own. It certainly seems profitable, but the number of scientific studies is too limited to be very specific. It is easier to prove that 3D geoinformation brings larger effectiveness of policies, processes and operating environments within and between governments and businesses.”
This has echoes of an earlier time, when digital mapping and GIS found their way from the American defence industry to the national mapping authorities worldwide. There were not many convincing cost-benefit studies proving GIS was profitable, yet the PMAs still had to make the switch because everybody could see that it made so many processes more effective and the technology was here to stay. Stoter and Crompvoets recognise many similarities. They add: “An important difference is that back then the transition was made easy because the initial costs of the national large-scale digital mapping and GIS revolution had largely been paid by the utilities sector. Now the demand is so fragmented that there is no focus for a shared business model.” Also, compared to the analogue way of working until then, GIS was a really disruptive technology; the market understood that serious investments and national coordination were needed. “But we passed the point of no return for the transition to 3D about five years ago,” states Jantien Stoter. “Since we can’t calculate the profit of new technology for new and currently unknown possibilities, we have to stop concentrating on the doubts. Look at examples such as Singapore and large cities in China or the PMA of Switzerland – they made the complete switch to 3D geoinformation and no longer want to be without it.” Joep Crompvoets outlines the plans for future research: “Our investigations will concentrate on how – and not whether – to make the change. We think that mapping of base data in 3D is the most efficient for a country when it is centralised, harmonised and quality controlled by the national mapping agencies. Which steps can be taken? Which priorities work best? And how should it be financed?”. His colleague concludes: “It is about the production of correct, up-to-date 3D data at different levels of detail for different applications, without every user group producing and paying for its own snapshot of reality. 
We’d better get going – back to the future!”
EuroSDR is a not-for-profit organisation linking national mapping and cadastral agencies with research institutes and universities in Europe for the purpose of applied research in spatial data provision, management and delivery. Joep Crompvoets is EuroSDR’s secretary-general and chairs the Business Models and Operation Commission. He is a professor of information management in the public sector, senior researcher/consultant and project manager at the Public Governance Institute of KU Leuven, Belgium. Jantien Stoter leads the EuroSDR 3D Special Interest Group. She is a professor of 3D geoinformation at The Netherlands’ TU Delft, Faculty of the Built Environment & Architecture. Prof Stoter also works as an innovation researcher at both Kadaster and Geonovum.