Today's post is a dive into an ArcGIS Pro demo, which can be found here. The demo teaches how to create a permeable/impermeable surface map from an aerial photograph, using a few different tools covered in the methods section. ArcGIS Pro is the updated successor to ArcMap and ArcScene, and within a few years Esri will be moving fully to ArcGIS Pro. The main difference is a ribbon-style interface in place of the many separate toolbars.
Methods
The first step is to open an already processed aerial image in ArcGIS Pro. The image used for this demo has a 6-inch resolution and contains 3 bands. In the project's Tasks pane there is a Calculate Surface Imperviousness task, which is used in conjunction with the bands to determine where the impervious surfaces are (Figure 1). The next tool, "group similar pixels into segments," simplifies the image so that broad land-use types can be classified more accurately (Figure 2). A rough scripted version of this segmentation step is sketched after Figure 2.
Figure 1: On the right, the highlighted item in the Tasks pane; on the left, the pane the task opens into, which allows the surface imperviousness task to be run.
Figure 2: The simplified (segmented) image, which shows the smoothed edges well.
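For readers who prefer scripting, here is a minimal sketch of the segmentation step using arcpy's Segment Mean Shift function. The file paths and detail/segment-size values are placeholders, not the demo's exact settings.

```python
# Minimal arcpy sketch of the segmentation step (paths and parameters are assumptions).
import arcpy
from arcpy.sa import SegmentMeanShift

arcpy.CheckOutExtension("Spatial")  # Spatial Analyst is required

# Input 3-band, 6-inch aerial image (placeholder path)
input_raster = r"C:\Demo\Imperviousness\Neighborhood.tif"

# Group similar pixels into segments; higher spectral/spatial detail keeps more
# distinct features, min_segment_size drops tiny speckle segments.
segmented = SegmentMeanShift(
    input_raster,
    spectral_detail=15.5,
    spatial_detail=15,
    min_segment_size=20,
    band_indexes="1 2 3",
)

segmented.save(r"C:\Demo\Imperviousness\Neighborhood_segmented.tif")
```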
The second lesson has the user open ArcMap to create a training sample, which currently cannot be made in ArcGIS Pro; a shapefile will therefore be made in ArcMap and opened in ArcGIS Pro. First, open the neighborhood and segmented images and turn on the Image Classification toolbar. To use the Image Classification toolbar, the Spatial Analyst extension needs to be turned on in the Extensions window. Next, on the classification toolbar, click the drop-down on the draw polygon tool and select the draw rectangle tool. Start drawing rectangles on the roofs of the houses found in the cul-de-sac, then open the Training Sample Manager (Figure 3). Highlight all of the training IDs and merge them into one class. Repeat this step for roads, driveways, bare earth, grass, water, and shadows (Figure 4). Finally, open the resulting training sample shapefile in ArcGIS Pro.
Figure 3: Rectangles drawn on the roofs, and where the class appears in the Training Sample Manager.
Figure 4: What the Training Sample Manager should look like once all classes are created.
Now, back in ArcGIS Pro, open the Train the Classifier task, which opens the parameters window. Input the raster and the training sample file, then make sure the classifier definition is saved to the correct location with the .ecd extension (Figure 5; a scripted sketch follows Figure 5).
Figure 5: Train the classifier parameters window.
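The demo drives this through the task GUI; as one possible arcpy equivalent, the sketch below uses the Support Vector Machine trainer. The paths are placeholders and the exact classifier the demo uses may differ.

```python
# Hypothetical arcpy sketch: train a classifier and save the .ecd definition file.
import arcpy
from arcpy.sa import TrainSupportVectorMachineClassifier

arcpy.CheckOutExtension("Spatial")

segmented_raster = r"C:\Demo\Imperviousness\Neighborhood_segmented.tif"
training_samples = r"C:\Demo\Imperviousness\training_samples.shp"  # made in ArcMap
classifier_definition = r"C:\Demo\Imperviousness\Neighborhood.ecd"

# The training samples and the raster are combined to build the classifier
# definition (.ecd) used by the classification step.
TrainSupportVectorMachineClassifier(
    segmented_raster,
    training_samples,
    classifier_definition,
    max_samples_per_class=1000,
)
```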
The next step is to select the Classify the Imagery task from the Tasks pane. Enter the correct inputs and run the tool. It should create an image colored by class, similar to Figure 6 (see the sketch after Figure 6).
Figure 6: The classified image of the original, with classes assigned based on pixel color.
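Again as a hedged sketch rather than the demo's exact settings, the classification step could be scripted roughly like this:

```python
# Hypothetical arcpy sketch: apply the trained classifier to the imagery.
import arcpy
from arcpy.sa import ClassifyRaster

arcpy.CheckOutExtension("Spatial")

segmented_raster = r"C:\Demo\Imperviousness\Neighborhood_segmented.tif"
classifier_definition = r"C:\Demo\Imperviousness\Neighborhood.ecd"

# ClassifyRaster returns a raster where each segment is assigned a class
# (roofs, roads, driveways, bare earth, grass, water, shadows).
classified = ClassifyRaster(segmented_raster, classifier_definition)
classified.save(r"C:\Demo\Imperviousness\Neighborhood_classified.tif")
```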
The next step is the Reclassify tool, which allows the user to change the value of each class. For this demo, change gray roofs, driveways, and roads to 0, and change the rest of the classes to 1. Running the reclassify produces an image like Figure 7 (a scripted sketch follows Figure 7).
Figure 7: The final reclassified image, distinguishing roofs, driveways, and roads from bare/permeable ground.
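A minimal arcpy version of this reclassification might look like the following; the numeric class values are assumptions standing in for whatever values the classified raster actually carries.

```python
# Hypothetical arcpy sketch: reclassify classes into impervious (0) / pervious (1).
import arcpy
from arcpy.sa import Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")

classified_raster = r"C:\Demo\Imperviousness\Neighborhood_classified.tif"

# Map each class value to 0 (impervious: roofs, driveways, roads)
# or 1 (pervious: bare earth, grass, water, shadows). Class values are assumed.
remap = RemapValue([
    [1, 0],  # gray roofs
    [2, 0],  # roads
    [3, 0],  # driveways
    [4, 1],  # bare earth
    [5, 1],  # grass
    [6, 1],  # water
    [7, 1],  # shadows
])

reclassified = Reclassify(classified_raster, "Value", remap)
reclassified.save(r"C:\Demo\Imperviousness\Neighborhood_impervious.tif")
```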
Finally, the last lesson is to calculate the impervious surface area. To start, click Create Accuracy Assessment Points in the Tasks pane, enter the information described in the tutorial, and let it run. It creates a map like Figure 8, with all of the accuracy points added on top of the classified image (a scripted sketch follows Figure 8).
Figure 8: Accuracy points added to the map.
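If this step were scripted instead of run from the task, a rough arcpy equivalent (with assumed paths and an assumed stratified random sample of 100 points) could be:

```python
# Hypothetical arcpy sketch: generate accuracy assessment points.
import arcpy
from arcpy.sa import CreateAccuracyAssessmentPoints

arcpy.CheckOutExtension("Spatial")

reclassified_raster = r"C:\Demo\Imperviousness\Neighborhood_impervious.tif"
assessment_points = r"C:\Demo\Imperviousness\accuracy_points.shp"

# Random points are placed across the classified raster; the GrndTruth field
# is edited by hand afterward to record what is really on the ground.
CreateAccuracyAssessmentPoints(
    reclassified_raster,
    assessment_points,
    "CLASSIFIED",
    num_random_points=100,
    sampling="STRATIFIED_RANDOM",
)
```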
The next step is to open the accuracy points attribute table from the Contents pane so the GrndTruth column can be modified. A value of 1 means permeable and 0 means impermeable. The user must go through each point, determine which it is, and change the value as needed (Figure 9). Then return to the task, hit Next, and Run to finish the accuracy assessment points.
Figure 9: Editing the GrndTruth field.
The next step is to compute a confusion matrix by selecting Compute Confusion Matrix in the Tasks pane. Enter the accuracy points as the input and set an output table. This gives an estimate of how accurate the classification is; in Figure 10, the result shows an accuracy of 92% under the Kappa heading (a scripted sketch follows Figure 10).
Figure 10: Confusion matrix results.
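For completeness, the confusion matrix step also has an arcpy counterpart; the paths below are assumptions.

```python
# Hypothetical arcpy sketch: compute the confusion matrix from the edited points.
import arcpy
from arcpy.sa import ComputeConfusionMatrix

arcpy.CheckOutExtension("Spatial")

assessment_points = r"C:\Demo\Imperviousness\accuracy_points.shp"
confusion_matrix = r"C:\Demo\Imperviousness\confusion_matrix.dbf"

# Compares the Classified and GrndTruth fields and reports user's/producer's
# accuracy along with the overall Kappa statistic.
ComputeConfusionMatrix(assessment_points, confusion_matrix)
```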
Next, fill out the Tabulate Area parameters with the information shown in Figure 11 (a scripted sketch follows Figure 11).
Figure 11: Tabulate Area parameters.
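Tabulate Area sums the area of each reclassified value within each parcel. A hedged arcpy sketch, with assumed layer and field names, is:

```python
# Hypothetical arcpy sketch: tabulate impervious area per parcel.
import arcpy
from arcpy.sa import TabulateArea

arcpy.CheckOutExtension("Spatial")

parcels = r"C:\Demo\Imperviousness\Parcels.shp"                 # zone features
impervious = r"C:\Demo\Imperviousness\Neighborhood_impervious.tif"
area_table = r"C:\Demo\Imperviousness\impervious_area.dbf"

# For every parcel (zone), sum the area falling in each reclassified value
# (0 = impervious, 1 = pervious). "Parcel_ID" is a placeholder zone field.
TabulateArea(parcels, "Parcel_ID", impervious, "Value", area_table)
```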
After creating the new table, a table join tool opens next, which can be filled out exactly like Figure 12. This joins the two tables together (a scripted sketch follows Figure 12).
Figure 12: Table join.
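The join could equivalently be scripted; the sketch below uses Join Field with placeholder field names (the demo's exact join tool and fields may differ).

```python
# Hypothetical arcpy sketch: join the tabulated areas back onto the parcels.
import arcpy

parcels = r"C:\Demo\Imperviousness\Parcels.shp"
area_table = r"C:\Demo\Imperviousness\impervious_area.dbf"

# Permanently attach the area fields (VALUE_0 = impervious area,
# VALUE_1 = pervious area) to each parcel by matching on a shared parcel ID.
# "Parcel_ID" and the VALUE_* field names are assumptions.
arcpy.management.JoinField(
    parcels,
    "Parcel_ID",
    area_table,
    "Parcel_ID",
    ["VALUE_0", "VALUE_1"],
)
```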
Finally, the last set of steps is to symbolize the parcels. First, select Clean Up the Table and Symbolize the Data in the Tasks pane. This involves deleting a field and renaming a few of the fields to better suit their purpose (Figure 13).
Figure 13: Attribute table being edited.
After modifying the attribute table, go to the Symbology pane and change the symbology to graduated colors with 7 class breaks (Figure 14; a scripted sketch follows Figure 14).
Figure 14: Graduated color map showing the impermeability of the land classification with 7 different classes.
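As a rough illustration of the symbology step, the sketch below uses the arcpy.mp symbology API from within an open ArcGIS Pro project; the layer name and classification field are placeholders, not the demo's actual names.

```python
# Hypothetical arcpy.mp sketch: symbolize parcels with graduated colors, 7 classes.
import arcpy

aprx = arcpy.mp.ArcGISProject("CURRENT")        # run from the open project
m = aprx.listMaps()[0]
lyr = m.listLayers("Parcels")[0]                # placeholder layer name

sym = lyr.symbology
sym.updateRenderer("GraduatedColorsRenderer")
sym.renderer.classificationField = "Pct_Impervious"   # placeholder field name
sym.renderer.breakCount = 7
lyr.symbology = sym                             # apply the renderer to the layer
```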
There are a few different patterns visible in the image above. First, there is noticeably more yellow (permeable) on the outside of the neighborhood than in the middle, which is due to the houses surrounding the pond. Most of the water filling the pond may therefore be coming from outside the neighborhood rather than from within it, where runoff most likely just flows off the impermeable surfaces. Also, the pond itself is not the darkest red (impermeable) because of its ability to take water into the ground.
This data set can be compared with data collected by a UAS using the same type of imagery, looking for similar spectral patterns. A UAS platform may even provide a more accurate representation of the area by generating higher-resolution imagery (more pixels over the same area). This type of map can be used as a first look at the data, and a UAS should be able to capture more detail if needed. A UAS could also produce this product faster using the same method.
Pix4D is a useful piece of software for processing the many images taken by UAS platforms. It can stitch images together relatively quickly (depending on file size) and create professional-looking products. Once the data is processed, it can be brought into Esri's ArcMap or ArcGIS Pro to create maps displaying the relevant information. The program is relatively easy to understand, and everything needed is laid out well. Many questions can also be answered through the program's help directory and other forums found on the internet.
Some questions that should be answered before using the program in order to generate good results:
o Look at Step 1 (before starting a project). What is the overlap needed for Pix4D to process imagery?
75% frontal overlap with 60% side overlap
o What if the user is flying over sand/snow, or uniform fields?
85% frontal and 70% side overlap is needed for these areas.
o What is Rapid Check?
Rapid Check is used mostly in the field as a quick check that coverage and processing are going correctly for the map.
o Can Pix4D process multiple flights? What does the pilot need to maintain if so?
Yes, it can process multiple flights, and the pilot needs to maintain altitude.
o Can Pix4D process oblique images? What type of data do you need if so?
Yes, it can process oblique images; it just needs the angle at which each image was taken.
o Are GCPs necessary for Pix4D? When are they highly recommended?
They are not necessary for Pix4D, but they are highly recommended when trying to reduce the noise and make the image more accurate.
o What is the quality report?
The quality report is created after each processing step to give feedback on how the step went. It alerts the user to any fixes that may need to be made and gives a quick check on how the data is starting to look. It also provides statistics and other information needed to judge how good the data set is.
Methods:
Two unprocessed flight data sets were given to be used with Pix4D to create an orthomosaic of all the images. The first step is to create a place to save all of the data sets. For this, the best approach was to create a folder named 2017monthday with a flight 1 and a flight 2 folder inside it, which keeps the data separate and easily accessible. The next step was to run a summary report on the data before running the full processing to make sure all of the data is correct and there are no errors; this can catch problems before waiting through long processing times only to start over. Also, when entering the data for the DJI Phantom 3 Advanced, the camera needed to be changed to rolling shutter.
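As a small illustration of that folder convention, a few lines of Python can create the layout; the date and drive letter below are placeholders.

```python
# Hypothetical sketch: create the 2017monthday folder layout for the two flights.
import os

base = r"C:\Pix4D\20170412"   # placeholder date: year + month + day

for flight in ("flight1", "flight2"):
    # exist_ok avoids an error if the folders were already created
    os.makedirs(os.path.join(base, flight), exist_ok=True)
```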
The next step was to run the initial processing, which gives a quality report (Figures 1 & 2).
Figure 1: Summary of the first flight.
Figure 2: Summary of the second flight's initial processing.
Along with the report summary, a coverage map is created. The areas with the most coverage are near the middle, because more pictures overlap there compared to the edges, where fewer photos can be matched (Figures 3 & 4).
Figure 3: Coverage map showing the most coverage being in the middle.
Figure 4: Coverage is highest in the middle and lowest on the outside.
This is important to view because it shows whether any data is missing within the area of interest and how accurate the data will be: the more coverage, the higher the accuracy.
The next step is to run the full processing, which may take some time depending on the data size. This generates an orthomosaic for each set of data, which can then be added into ArcMap to generate a map of the region. Figures 5 and 6 show the data that was processed in Pix4D and brought into ArcMap to be made into elevation maps.
Figure 5: Flight 1 data after being processed in Pix4D
Figure 6: Flight 2 data after being processed in Pix4D
This is a flyby created in Pix4D.
Results
The flight 1 data in Figure 5 (above) shows a few different patterns: there are road-like paths running between the piles of sand, along with some distortion in the southern part of the image. That distortion was most likely caused by not having enough overlap in that region, or by some other problem during the actual flight. In the second flight image (Figure 6) there is a lot of distortion on the southeastern side. This too is most likely caused by not having enough images on the edge, but that is not the area that will be used for volumetrics, length measurements, or other analyses. In both images there appears to be more detail in the center than on the outside.
Conclusion
Pix4D is a powerful tool for anyone looking to process aerial photographs, but if the input data is poor, with not enough coverage, the program will still create an image; it just will not meet accuracy standards. It is also a great program for any type of 3D processing. Pix4D can be a great tool, but the user needs to understand what the process is doing and how the program calculates its products in order to get the highest accuracy.
* Why are proper cartographic skills essential in working with UAS data?
Cartographic skills are essential for working with UAS data because they give the reader a better understanding of the data. A north arrow, scale bar, locator map, watermark, and data sources give the reader everything needed to interpret the map. A watermark also makes it harder for the data to be stolen, and listing sources gives the reader the ability to look up or examine the data that was used.
* What are the fundamentals of turning either a drawing or an aerial image into a map?
The fundamentals of turning a drawing or an aerial image into a map involve a few different steps. The first is to add a north arrow, scale bar, watermark, sources, and a title. The most important are the north arrow and the scale bar, although most of this is not useful without a locator map as well.
* What can spatial patterns of data tell the reader about UAS data? Provide several examples.
* What are the objectives of the lab?
The objectives of the lab include using processed Pix4D data to create a map that meets the criteria for the fundamentals of turning an image into a map. The objectives also include using the hillshade tool and learning the difference between a DSM and a DEM. Finally, this lab includes understanding the patterns found in an orthomosaic, reporting statistics for the DSM and DEM, and understanding how UAS data can be used as a tool to enhance cartography.
Methods
Metadata:
Platform: DJI Phantom 3 Advanced
GPS precision level: sub-meter
Drone Sensor: Sony 16 Megapixel Camera
Altitude of project: 60m
Coordinate system: WGS 1984 UTM Zone 15N
Projection: Wisconsin State Plane
Date: 3-7-2016
The first step is to open the orthomosaic and the DSM in ArcMap, then build pyramids and calculate statistics for each data set. A DSM is a digital surface model, as opposed to a DEM, which is a digital elevation model. A DSM includes elevation values for features such as trees, plants, and power lines, whereas a DEM represents just the ground surface. A georeferenced mosaic uses ground control points to lock down the image, compared to an orthorectified mosaic, in which images from a drone are stitched together based on tie points computed by the software. An orthorectified mosaic is not nearly as accurate as a georeferenced mosaic because of the ground control points. The statistics created from the DSM are important because they show the values of the ground elevation and provide the data needed for presentation. Finally, to apply a hillshade to the DSM, use the Search tool and search for Hillshade; in the Hillshade window, set the DSM as the input (a minimal scripted sketch of this step follows below). The last step is to create a map with all of the fundamentals of map making included. The statistics found within the properties of each layer represent the elevation of the ground at different points.
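As a hedged illustration of the hillshade step (the lab runs it from the Search window; the path and sun-position values below are assumptions):

```python
# Hypothetical arcpy sketch: build a hillshade from the DSM.
import arcpy
from arcpy.sa import Hillshade

arcpy.CheckOutExtension("Spatial")

dsm = r"C:\UAS\Sportfield\sportfield_dsm.tif"   # placeholder DSM path

# Default sun position: azimuth 315 degrees, altitude 45 degrees.
hillshade = Hillshade(dsm, azimuth=315, altitude=45)
hillshade.save(r"C:\UAS\Sportfield\sportfield_hillshade.tif")
```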
Figure 1: A map of Sportfield, Wisconsin, showing the elevation based on the DSM and orthomosaic.
Figure 2: An oblique view of the sports field; the north arrow in the top left corner gives the direction.
Results
* What types of patterns do you notice on the orthomosaic?
On the orthomosaic there are a few different patterns. The first is a straight line of trees, along with a very gradually increasing slope going from south to north. The map is almost split right down the middle, where the surface starts to slope down instead of gaining elevation.
* What patterns are noted on the DSM? How do these patterns align with the DSM descriptive statistics? How do the DSM patterns align with patterns with the orthomosaic?
The pattern noticed on the DSM is that there are bands running across the surface that indicate the elevation. This aligns with the descriptive statistics, since those numbers also describe the elevation. The trees and vegetation can also be seen, which lines up with the orthomosaic.
* Describe the regions you created by combining differences in topography and vegetation.
I created one region for the top area and one for the bottom area. I created two separate regions for the vegetation so it would not interfere with the rest of the map.
* What anomalies or errors are noted in the data sets?
The trees and vegetation could throw off the elevation numbers because they are higher up and appear at uniform heights due to how the images were taken. Another error is poor data in the top left of the image; these errors could be fixed if more pictures were processed in that area. There are also no GCPs to tie any part of the data down to the actual basemap.
* Where is the data quality the best? Where do you note poor data quality? How might this relate to the application?
The data quality is best near the middle of the image, due to how many images are stitched together in that area. The poor data is in the top left part of the image, which may be due to a lack of images there. This relates to the application because that area may not give an exact elevation compared to the other areas.
Conclusion
UAS data is a great tool to complement any GIS user's work. It provides high-quality data that can produce high-quality numbers for solving many problems, and it offers a high level of accuracy. It does have limitations: a UAS cannot fly during most types of bad weather, and it relies on daylight when imagery requires light. When working with the data, the user should understand that even if the program can process the data, that does not mean the resulting numbers are correct. There are many steps in solving a problem, and taking the quick way will most likely mean garbage in, garbage out. This data could be combined with GCPs to tie down the points even better than the platform's GPS alone.
There are many different kinds and styles of unmanned aerial systems created every year, each with its own unique take on the build. Since the beginning of the UAS boom, the companies with the best builds and highest quality have been breaking their way through the market. A few brands that stick out from the pack include Parrot, DJI, Yuneec, and Kespry; these four companies are highly anticipated in the 2017 market. Most drones are built as either fixed-wing or quadcopter platforms, and each has its own strengths and weaknesses. Company XXXX has asked to be consulted on what type of platform to purchase to do volumetrics on a sand mine in western Wisconsin. This can be a daunting task because of the amount of variability out there, so getting the details of the project is the first step. The company wants to be able to use a UAS to determine the volume of a sand pile, which will take roughly 20-30 minutes of flight time. Therefore, this report goes into the details of a low-level/hobbyist drone, a mid-level drone, and an upper-level commercial drone. For this report, a low-level drone costs $500-3,000, a mid-level drone costs $5,000-10,000, and a high-level commercial drone costs $50,000-150,000.
Low Level
The low-level drone picked for this project is the DJI Phantom 3. The DJI Phantom 3 has four different models that can currently be purchased for $500-$999: the Standard, 4K, Advanced, and Professional. The Standard was created for people on a budget and was released after the Advanced and Professional models. The Standard does not have 4K video capability; it records 2.7K video with 720p streaming. It also uses a Panasonic image sensor instead of the Sony Exmor R sensor that comes on the more expensive Advanced, 4K, and Professional models. The controller for the Standard model looks like the one from the DJI Phantom 2 and does not have an easy way to mount an iPad. It also lacks custom buttons and dedicated video playback buttons.
The Phantom 3 4K was released with 4K video capability and has the more expensive controller, allowing it to easily connect to an iPad or iPhone. It also has all the capabilities of the Standard model.
The Phantom 3 Advanced was part of the first release of the DJI Phantom 3 line. This model does not have 4K video and therefore records at the same resolution as the Standard model. The difference between this model and the two listed above is that it uses a technology called Lightbridge, which offers a more dependable video and control signal. With Lightbridge, the DJI Phantom 3 Advanced can travel up to 5,000 m (3.1 miles) from the pilot, which is about 4.1 times farther than the Phantom 3 4K and 5 times farther than the Standard model. So even though this model does not have 4K video, it comes with more impressive technology that allows greater control over the drone. The Advanced also streams 720p playback, compared to 480p on the two models listed above.
The last DJI Phantom 3 model is the Professional. This model includes the 4K camera, 720p streaming playback, and Lightbridge technology. The difference with this model is that it comes with a 100W charger, which allows the Professional to be charged in less than an hour, compared to the other models, which take up to an hour and twenty minutes. This is the most expensive of the models at $999, but it has great reviews in the low-level tier.
All the models allow the user to create a point of interest that the Phantom 3 will continuously face as the user flies, or circle around the object. It can also be programmed to follow the user as a 'personal film crew,' can be programmed with flight paths, and has intelligent orientation control, letting the user pick the easiest mode to fly with. The DJI Phantom 3 also has LED lights that indicate the battery level. Parts are easy to order from the DJI Store, and reviews suggest the Phantom is not hard to fix when parts break.
Figure 1: The video above explains how to set up and get started with the DJI Phantom 3 Professional. It goes over the iPhone app and how to use the controls with the device.
Mid Level
The Freefly Systems ALTA 6 UAV costs $11,995.00, which is slightly over budget for the mid-level category, but it is a very impressive drone. It has 8 motors and supports a 9.1 kg payload, so expensive cameras can be mounted onto the platform. This drone also allows the user to mount the camera on top or on the bottom, depending on the needs of the flight. It comes with Freefly's own flight control system and also works with a range of different transmitters, so when upgrading to this drone there may be no need to buy a new controller.
The flight modes that can be used with this drone include manual, height hold, climb rate control, position hold, ground speed control, return-to-home, and auto-land. This drone also works with first-person video systems. The Freefly ALTA has a very robust hard case; to deploy the drone, just unfold it, put the battery in, and it is ready to go.
The system comes with weather-resistant electronics and has silent-drive technology built into the motors. This drone can fly up to a mile away, or farther depending on the payload weight. The sensor used is up to the user, since it is attached using the gimbal. The flight time is around 40 minutes.
This drone is great for users who want to get unique angles and carry high-end cameras that weigh more than the average drone can carry.
Figure 2: The video above shows the Freefly Systems Alta 6 UAV. This offers some video of the unboxing and review of the system.
High Level
The Penguin B platform is made by UAV System Integrators. This platform currently holds the world record for endurance at 54.4 hours of flying time. It has a fuel-injected engine that allows the Penguin B to fly 20+ hours in normal use. It can carry a 10 kg payload and has an 80W on-board generator. The Penguin B has been sold since 2009 and offers over 140 different customizations. This drone is mostly used by research organizations and universities because of the price and the customization that can be done to fit the research being conducted.
The Penguin B platform is a fixed-wing UAV, which is why it has such a long flight time and the ability to carry such a large payload. The platform is launched using a catapult, car-top, or runway takeoff to give it enough velocity to stay in the air.
This drone comes with free Piccolo autopilot configuration files and a four-day integrator training. The UAV flies at speeds up to 36 m/s, with a general cruise speed of 22 m/s. It has a heated pitot probe to deal with icing, and quick-release joints that allow fast assembly. The Penguin B has optional heavy-duty landing gear for landing on rough terrain. The platform also comes with a portable control station that is compatible with Cloud Cap Piccolo, Procerus Technologies Kestrel, and MicroPilot autopilot systems. It can carry any type of sensor as long as it fits into the universal mount on the drone.
This is a highly advanced UAV that comes with over 140 different customizations, and if needed the company can develop new ones for needs that have not been met yet.
Figure 3: This video shows a take off and landing from a car mount for the Penguin B UAV.
Conclusion
The best option for the project at hand would be the DJI Phantom 3 Advanced model. It was chosen because of its price point and capabilities: it matches the flight time needed, its camera has a high enough resolution for the calculations, and it can be used for other projects as well.