Mapping the UMass Amherst Campus Using UAS Aerial Imagery 2023

By Ryan Wicks | 25 Jun 2023

Introduction

If you’ve visited the UMass Amherst campus occasionally in the last 10 years you may have noticed dozens of new construction projects of various sorts popping up all over the campus: new academic buildings, solar panel parking canopies (https://www.umass.edu/sustainability/climate-change-energy/solar-energy/2017-solar-projects), and geothermal well drilling, just to name a few. To coordinate these efforts effectively, up-to-date information – specifically GIS data – is quite helpful. Not only does the UMass Amherst Physical Plant leverage drones and a Trimble R10 RTK receiver to regularly survey progress on construction projects, but it also hosts a campus-wide GIS working group whose focus is to promote and leverage the power of GIS data, tools, and ancillary equipment to provide highly detailed, highly accurate, current information to departments and decision makers across the UMass network.

While teams from the Physical Plant regularly survey and document progress on specific construction projects, the University has also updated its high-resolution color campus map approximately once every 2-3 years. These maps are constructed from high-resolution aerial images; while the Massachusetts State aerial survey (https://www.mass.gov/info-details/massgis-data-2021-aerial-imagery) might suffice in many cases, a more localized flight conducted by an unpiloted aerial system (UAS) provides slightly higher resolution, slightly better lateral accuracy, and improved vertical accuracy. The last few times the UMass Amherst campus used drone imagery to collect the aerial data it relied on an outside company, but in the last few years our capacity and expertise in surveying techniques with new technologies have grown as a campus community across multiple departments. I am delighted to have led the effort to conduct the most recent UAS aerial survey. Read on and I’ll tell you a little bit about the process.

I want to say a big “thank you” to the collaborators who contributed to this project, either directly or indirectly. Not only would we not have completed this project without them, but beyond that they are all outstanding colleagues that are a genuine pleasure to work with. I’ve named all individuals who contributed in some capacity, whether large or small, at the end of this article. Thank you.

Figure 1 shows the area of interest for the all-campus aerial survey, along with symbology indicating a few relevant features that I want to highlight in this article. For now, let’s start with the extent of the survey region. The area to be surveyed is about 6.487 square kilometers, or about 2.5 square miles. This is a significantly larger region than one would typically survey with a multirotor UAS such as the Mavic 2 Pro and Mavic 3 Pro RTK that we used for conducting the survey. Something of this scale would typically be appropriate for a fixed-wing UAS, such as the SenseFly eBee that was used in the prior survey. Fixed-wing aircraft tend to be much more energy efficient, can generally stay airborne longer, and can usually survey larger areas more quickly than a comparably sized multirotor. We did not, however, have an eBee or any other fixed-wing UAS suitable for surveying in our inventory, nor did we have the budget to purchase something that expensive. This presented us with a few different logistical challenges.
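Those two area figures, and a rough lower bound on how many 600 m subregions it takes to tile the campus, are easy to sanity-check. A quick sketch in Python (the real grid follows the bounding box of the survey area, so the actual cell count is somewhat higher than this lower bound):

```python
import math

AREA_KM2 = 6.487                    # survey extent from Figure 1
KM2_PER_MI2 = 2.589988              # square kilometers per square mile

print(round(AREA_KM2 / KM2_PER_MI2, 2))   # 2.5 mi²

# Lower bound on the number of 600 m x 600 m subregions needed to
# tile the area; the buffered grid yields more cells in practice.
CELL_KM2 = 0.6 * 0.6
print(math.ceil(AREA_KM2 / CELL_KM2))     # 19
```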

Perhaps the most direct consequence of using multirotor aircraft as opposed to fixed-wings is that the flights were shorter and less efficient. The subregions shown and numbered in Figure 1 were sized to be approximately 600 m x 600 m – approximately the region that one of our Mavic 2 Pros could survey on a single battery. The survey region was loaded into QGIS and divided into subregions of the desired size using the “create grid” tool (if I am recalling correctly); these subregions were then buffered by about 40 meters, which corresponds to about the spacing between transects for 75% side overlap for a Mavic 2 Pro flying at 400 feet AGL, and the buffered subregions were then exported as individual *.shp files. These *.shp files were imported into Drone Deploy using the “Import KML or SHP” extension that is a free add-on (https://help.dronedeploy.com/hc/en-us/articles/1500004861201-Shapefile-or-Google-Earth-KML-Flight-Planning). This helped ensure that the images taken for each flight covered the desired area for each subregion. To compensate for the increased time of operations, we divided into two teams when collecting the aerial imagery; each team would place and measure any supplemental GCPs in the areas they were about to survey, and then fly over their respective regions and collect the aerial imagery. The teams coordinated so that they were never flying in adjacent subregions at the same time. This allowed us to collect all of the aerial imagery in just over two days of work.
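The ~40-meter buffer can be derived from the camera geometry. A minimal sketch, assuming the Mavic 2 Pro’s nominal camera parameters (Hasselblad L1D-20c: 1″ sensor, 13.2 mm sensor width, 10.26 mm focal length – values from published specs, not from the survey itself):

```python
# Nominal Mavic 2 Pro camera parameters (assumed from published specs).
SENSOR_WIDTH_MM = 13.2
FOCAL_LENGTH_MM = 10.26

def transect_spacing_m(altitude_m: float, side_overlap: float) -> float:
    """Distance between adjacent flight lines for a given side overlap."""
    # Across-track ground footprint of one image at the given altitude.
    footprint_m = altitude_m * SENSOR_WIDTH_MM / FOCAL_LENGTH_MM
    return footprint_m * (1.0 - side_overlap)

altitude_m = 400 * 0.3048            # 400 ft AGL in meters
print(round(transect_spacing_m(altitude_m, 0.75), 1))  # 39.2
```

The result, roughly 39 meters, agrees with the ~40-meter buffer applied to each subregion.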

Another big issue was safety. Fixed foam-wing designs like the eBee inherently pose a much smaller hazard to the public in the event of a crash. Foam-wing designs are dynamically stable and will tend to glide to the ground even without computer control if the aircraft is balanced well, whereas multirotor UAS typically have no gliding capability, tend to weigh more for a given size class, and in the event of a loss of control tend to fall to the ground more precipitously and with more energy. A multirotor falling from the sky could very easily cause injury or even death if it impacted in a certain way, whereas a fixed-wing unit is very unlikely to cause any serious injury as it glides to the ground. UAS equipment failures are very rare; I’ve flown hundreds of flights professionally over the past 6 years, and I’ve only witnessed one crash that could not readily be attributed to pilot error, and even then fail-safe systems prevented a total loss of the aircraft. As operators we could take multiple extra precautions to help prevent any kind of pilot or equipment error, but given that the survey operation was going to take days to complete, and tens of thousands of individuals cross the campus on foot at densities reminiscent of the busiest city streets, we agreed that the hazard a potential crash would present was significant. Consequently the higher-traffic regions at the core of campus were only flown during a holiday weekend, when foot traffic, and consequently the risks, would be dramatically reduced. The more remote regions over fields did not have any specific restrictions placed on when they could be flown.

Another challenge that segmenting the 2.5-square-mile survey region and using multiple different drones and cameras presents is how to ensure accurate alignment between the different subregions to create a precise whole model and map of the campus. Even if we had used only a single UAS and a single camera, there are tens of thousands of images that are used in the photogrammetry model reconstruction, and the computing resources required for that are extreme even for robust desktop PCs. Instead, if we process the images by subregion or clusters of subregions, we can produce point clouds for those respective regions, then merge the point clouds together and generate an orthomosaic from the merged cloud. In order for this to work well, there needs to be significant overlap between the different subregions, and each subregion needs to have GCPs that the images in that subregion can reference. Figure 3 shows the image locations for subregion 24 as well as the GCP placement relative to the boundaries between the subregions. You may note that the image locations from subregion 24 extend into the adjacent subregions; this is to ensure that the alignment between subregions will work well. Figures 4a – 4e show the different point clouds derived from the images of adjacent subregions and how they can be combined into a comprehensive point cloud. Figure 5 shows the orthomosaic that can be derived from a unified point cloud.
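The merge step itself is conceptually simple once the subregion clouds share one coordinate system. A minimal sketch in Python (the actual processing used photogrammetry software; the function name, voxel size, and synthetic data here are illustrative – the voxel pass just thins duplicate points in the overlap zones):

```python
import numpy as np

def merge_clouds(clouds, voxel_m=0.05):
    """Concatenate per-subregion point clouds that share one CRS,
    keeping a single point per voxel so overlap zones between
    subregions are not double-counted."""
    points = np.vstack(clouds)                          # stack all (N, 3) XYZ arrays
    keys = np.floor(points / voxel_m).astype(np.int64)  # voxel index per point
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

# Tiny synthetic example: two "subregions" that share one point.
sub_a = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.1]])
sub_b = np.array([[0.0, 0.0, 10.0], [2.0, 0.0, 10.2]])
merged = merge_clouds([sub_a, sub_b])
print(len(merged))  # 3 – the shared point is kept once
```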

If one looks closely, Figures 4a-4e show a slight curvature in the point cloud representing the ground. These images were developed for demonstration purposes and have not had GCPs incorporated in their generation, nor have the camera settings been calibrated correctly, so the accuracy and precision are a bit off. These images also highlight the concern of accuracy and precision, which are especially challenging to maintain over a region that is 2.5 square miles in area. We care about the overall accuracy and precision of the model for the entire survey area, but in order to achieve high accuracy we need to pay attention to the accuracy of the point cloud in each subregion.

The core campus regions – subregions 17-19 and 23-26 – have the highest concentration of critical human-built infrastructure, so we decided to hold them to higher accuracy standards than the outlying regions. As shown in Figures 1 and 3, those subregions have GCPs spaced about every 200-300 meters, which corresponds to no more than 3 degrees of separation between images and GCPs on the ground, with an expected value of around 2 (given the planned flight altitude and lens parameters of the camera on a typical Mavic 2 Pro). Given this GCP spacing and the accuracy at which the GCPs were measured, we expected to achieve a horizontal accuracy of less than 0.1 m and a vertical accuracy of just over 0.2 m. For the non-core campus regions we set a standard of a maximum of roughly 4 degrees of separation, which corresponds, coincidentally, to spacing the GCPs about 600 meters apart in a square grid pattern for flight plans with the Mavic 2 Pro. We would expect this to yield a lateral accuracy of just over 0.1 meters and a vertical accuracy of somewhere between 0.4 meters and 1 meter. These estimates are informed in part by my own experience doing aerial surveys and accuracy tests, as well as by peer-reviewed papers such as “Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points” by Martínez-Carricondo et al. (https://doi.org/10.1016/j.jag.2018.05.015); the full reference is at the end of the article. At the time of writing, the photogrammetry processing of the images, GCPs, and models for the whole campus is not yet complete, so I don’t yet have accuracy or precision statistics to share for the entire survey area.
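One way to read the spacing-to-separation relationship – and this is my interpretation, not something stated explicitly above – is that one “degree of separation” corresponds roughly to one across-track image footprint on the ground. Under that assumption, and using nominal Mavic 2 Pro camera specs, the numbers line up:

```python
# Hedged sketch: relate GCP spacing to "degrees of separation",
# assuming one degree ≈ one across-track image footprint on the
# ground. Camera values are nominal Mavic 2 Pro specs (assumed).
SENSOR_WIDTH_MM = 13.2
FOCAL_LENGTH_MM = 10.26
ALTITUDE_M = 400 * 0.3048   # 400 ft AGL

footprint_m = ALTITUDE_M * SENSOR_WIDTH_MM / FOCAL_LENGTH_MM  # ≈ 157 m

for spacing_m in (200, 300, 600):
    print(spacing_m, round(spacing_m / footprint_m, 1))
```

Under this reading, the 200-300 m core-campus spacing comes out between roughly 1 and 2 footprints, matching the expected value of about 2, and the 600 m spacing comes out near 4, matching the non-core maximum.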

For part of the image collection we actually did use a Mavic 3 with an RTK module; this unit was on loan to us from Greenman-Pedersen Inc. (GPI) and was flown with David Price as the Pilot in Command (PIC), and so it was not available to support the entire data collection. While not a fixed-wing unit, the RTK module in theory reduces the number of GCPs required to achieve the same level of accuracy; even with no GCPs, an RTK-capable drone that encodes corrected position information into each image’s metadata should bound the positional error of each image at a level several orders of magnitude smaller than without RTK corrections. For a project of this scale, having an RTK-capable drone would help us reduce the human-hours ultimately required to achieve an accurate model by perhaps about 50% based on our estimates, because deploying and recovering GCPs takes time, as does tagging GCPs in images during the model reconstruction process. Having a fixed-wing system with RTK capability would also improve flight efficiency, and a lightweight foam wing would reduce the potential hazards posed to pedestrians during flight operations. While I feel that we did a good job and followed appropriate safety precautions and protocols fairly well, these few hardware changes could have a significant impact on our operational performance in the future.

I was delighted to have so much cooperation between faculty, students, and staff on this project. A very kind thank you to the following contributors to this project:

Trimble Tech Used

  • Trimble R10

Faculty

  • David Price – UMass Faculty/Staff; Chief UAS Pilot, GPI
  • Alexander (Sasha) Stepanov – GIS/IT Manager, A&F Information Technology
  • Michael Knodler – William W. Boyer Endowed Prof, College of Engineering
  • Charlie Schweik – Professor, Environmental Conservation, School of Public Policy
  • Alexander Schreyer – Senior Lecturer II, Environmental Conservation, Architecture Department

Staff

  • Niels la Cour – Senior Physical Planner, Facilities & Campus Services
  • Carl Larson – Surveyor/GIS Administrator, Facilities & Campus Services
  • Helena Koszewski – Departmental Assistant, Environmental Conservation, UMass Transportation Center
  • Kris Stetson – Admin Manager, UMTC/Business Mgr, RTIC, UMass Transportation Center
  • Kassandra Rounds – Clerk, UMass Transportation Center
  • Daniel Warner – Accountant, Environmental Conservation

Students

  • Benjamin Leinfelder – Architecture (M.Arch)
  • Justin Weiner – Operations & Info Mngmt (BBA)
  • Jahiya Clark – Environmental Conservation (MS)
  • Noah Green – Civil Engineering (BS)
  • Yuhan Liu – Landscape Architecture (BS)

References

Patricio Martínez-Carricondo, Francisco Agüera-Vega, Fernando Carvajal-Ramírez, Francisco-Javier Mesas-Carrascosa, Alfonso García-Ferrer, Fernando-Juan Pérez-Porras. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. International Journal of Applied Earth Observation and Geoinformation. Volume 72. 2018. Pages 1-10. ISSN 1569-8432. https://doi.org/10.1016/j.jag.2018.05.015. (https://www.sciencedirect.com/science/article/pii/S0303243418301545)