In this assignment, we analyzed 50 points collected at the same location with a handheld GPS device and used them to evaluate the device's precision and accuracy.
I first determined the mean of the collected points ("waypoints") using the Summary Statistics tool, then found the exact coordinates of the "Average Waypoint" with the Absolute X,Y,Z tool. I re-projected and spatially joined the layers. Lastly, I created three new fields to determine the 50th, 68th, and 90th percentile distances.
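The workflow above (averaging the waypoints, then deriving percentile distances) can be sketched in Python. This is a minimal illustration with hypothetical, randomly generated coordinates standing in for the 50 collected fixes; the variable names and the use of NumPy are my own assumptions, not part of the original GIS toolchain.

```python
import numpy as np

# Hypothetical planar coordinates (meters) standing in for the 50 GPS fixes;
# column 0 = X (easting), column 1 = Y (northing).
rng = np.random.default_rng(0)
waypoints = rng.normal(loc=[500000.0, 4500000.0], scale=3.0, size=(50, 2))

# The "Average Waypoint": the mean of all collected fixes.
avg_waypoint = waypoints.mean(axis=0)

# Distance of each fix from the average waypoint.
dists = np.linalg.norm(waypoints - avg_waypoint, axis=1)

# Precision radii at the 50th, 68th, and 90th percentiles,
# mirroring the three fields created in the GIS.
p50, p68, p90 = np.percentile(dists, [50, 68, 90])
```

Each percentile radius is the distance within which that share of fixes falls, so the radii are necessarily non-decreasing (p50 ≤ p68 ≤ p90).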
Map1: GPS datapoint distribution and precision/accuracy analysis
My horizontal precision at the 68th percentile is 4.5 meters, while the distance between the "Average Waypoint" and the true reference point is 3.78 meters. Horizontal precision describes the "consistency of a measurement method" and aims for "tightly packed results" (Bolstad, 2016). Horizontal accuracy, on the other hand, "measures how close a database representation of an object is to the true value" (Bolstad, 2016).
My 68th-percentile precision estimate (4.5 meters) overestimated my horizontal error: it was 0.72 meters larger than the true distance of 3.78 meters between my average waypoint and the reference point. Looking at the mapped percentiles in the map above, the precision analysis also underperformed, as only 56% of the mapped points fell within the 68th-percentile radius. For optimal accuracy, the collected points should have fallen within 3.78 meters rather than 4.5 meters.
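The comparison above (what share of fixes actually fall inside the stated precision radius versus inside the true accuracy distance) is a simple counting exercise. Here is a hedged sketch using hypothetical error distances; the 3.78-meter figure is taken from the write-up, everything else is illustrative.

```python
import numpy as np

# Hypothetical distances (m) of each of the 50 fixes from the average waypoint.
rng = np.random.default_rng(1)
dists = np.abs(rng.normal(0.0, 3.0, size=50))

# Precision radius reported at the 68th percentile.
p68_radius = np.percentile(dists, 68)

# True accuracy: distance from the average waypoint to the reference mark.
true_accuracy = 3.78  # meters, from the analysis above

# Share of fixes that actually fall inside each radius.
share_in_p68 = np.mean(dists <= p68_radius)
share_in_true = np.mean(dists <= true_accuracy)
```

By construction `share_in_p68` is at least 0.68; a mapped share well below that (such as the 56% observed) signals that the reported precision radius does not describe the plotted points well.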
My vertical average was 27.79 meters, while the true reference point elevation was 22.58 meters, giving a vertical error of +5.21 meters. As with the horizontal overestimate noted above, I overestimated the elevation; an additional 5.21 meters is significant and may result in unusable data.
I believe the data collected by the GPS unit can still be used, but a certain level of inaccuracy would need to be accounted for. I would suggest that the company that collected the handheld GPS data re-calibrate its devices and run refresher training with staff to reduce user error. From a GIS analyst's point of view, some of the errors I encountered could stem from formatting issues introduced when we changed the projection; the data could also be old, and the physical marker could have shifted over time.
For the last step in this analysis, I determined the RMSE (root mean square error) and created a cumulative distribution function (CDF) graph, as seen below.
Graph 1: Cumulative Distribution Function graph of Map 1 dataset
The CDF in this graph shows the relationship between RMSE (x-axis) and cumulative percentage (y-axis): it tells me how much RMSE has accumulated at a given cumulative percentage. For example, at the 10th percentile there is approximately 1.2 meters of RMSE. At the median of the distribution (the 50th percentile), the RMSE is about 2.5 meters. If we scale that value against a maximum of 7 meters, the normalized RMSE is about 0.36 (2.5 / 7), which falls into the acceptable range between 0.2 and 0.5. Therefore, our data would have an acceptable level of error, and we could proceed with it.
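The RMSE and CDF calculations described above can be sketched as follows. This is a minimal illustration with hypothetical error values; the 7-meter normalization scale is taken from the write-up, while the data and variable names are assumptions.

```python
import numpy as np

# Hypothetical horizontal errors (m) for each of the 50 fixes.
rng = np.random.default_rng(2)
errors = np.abs(rng.normal(0.0, 3.0, size=50))

# Root mean square error of the sample.
rmse = np.sqrt(np.mean(errors ** 2))

# Empirical CDF: sorted errors paired with cumulative percentages.
sorted_err = np.sort(errors)
cum_pct = np.arange(1, len(sorted_err) + 1) / len(sorted_err) * 100

# Error value at the 50th percentile (the median of the CDF).
median_err = np.interp(50, cum_pct, sorted_err)

# Normalize against a 7-meter scale, as in the analysis above;
# values between 0.2 and 0.5 are treated as acceptable.
normalized = median_err / 7.0
```

Reading the CDF at any percentile is just interpolating along the sorted errors, which is what `np.interp` does here.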
Other values collected for the RMSE and CDF analysis:
Sources:
Bolstad, P. (2016). GIS Fundamentals: A First Text on Geographic Information Systems (5th ed.). Eider Press. ISBN-13: 978-1506695877.