
Positional Accuracy: NSSDA

 In this analysis, I compared the street and road intersection data collected for Albuquerque, NM by the City of Albuquerque and by the application StreetMap. I used an orthophoto base layer as the reference against which to determine the accuracy of both the City and StreetMap layers using NSSDA procedures.

The most difficult part of this analysis for me was determining what 20% per quadrant looks like. Because the reference map was divided into 208 quadrants, I had to decide how to divide all of the quadrants equally into groups of 20%. After multiple trials and errors, I decided to subdivide the entire area (208 sub-quadrants) into 4 equal-area subsections. That way, I could place 5 random intersection test points per subsection, or at least 20% of the points per subsection.

Map 1: City of Albuquerque city map data. 

Map 2: City of Albuquerque StreetMap data

When selecting a random intersection at which to place a point within each subsection, I chose locations that had data from all three layers (StreetMap, City, and reference). The lower-left subsection, for example, had a large area missing from the reference base map and did not meet this criterion. I also made sure the points were separated by at least 10% of the diagonal distance of each of the 4 subsections.
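
To make those selection rules concrete, here is a minimal sketch of the logic: the study area is split into 4 equal-area subsections, and within each subsection candidate intersections are accepted only if they sit at least 10% of that subsection's diagonal away from the points already chosen, until 5 points are found. The extents and candidate coordinates below are placeholders, not my actual data.

```python
import math
import random

def pick_test_points(candidates, extent, n=5, seed=42):
    """candidates: list of (x, y) intersections inside one subsection.
    extent: (xmin, ymin, xmax, ymax) of that subsection."""
    xmin, ymin, xmax, ymax = extent
    # keep points at least 10% of the subsection diagonal apart
    min_spacing = 0.10 * math.hypot(xmax - xmin, ymax - ymin)
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)
    chosen = []
    for pt in pool:
        if all(math.dist(pt, c) >= min_spacing for c in chosen):
            chosen.append(pt)
        if len(chosen) == n:
            break
    return chosen

# 4 equal-area subsections x 5 points each = 20 test points (at least 20% per subsection)
subsections = {
    "NW": (0, 500, 500, 1000), "NE": (500, 500, 1000, 1000),
    "SW": (0, 0, 500, 500),    "SE": (500, 0, 1000, 500),
}
intersections = {k: [(random.uniform(e[0], e[2]), random.uniform(e[1], e[3]))
                     for _ in range(50)] for k, e in subsections.items()}
test_points = {k: pick_test_points(intersections[k], subsections[k]) for k in subsections}
```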

After creating the 20 random points in each of the three layers, I added their respective coordinates with the Add XY Coordinates tool (Data Management). I ran into issues with this tool when adding coordinates to the City feature class: instead of coordinates, I got Null in each cell. I was not able to resolve the issue with this tool on that layer, so I ended up using the Absolute X,Y,Z tool to find the coordinates of each point and manually entered them into the attribute table. [My classmates later gave me some great alternatives: 1) create two new float fields and use Calculate Geometry; 2) make sure a single point is not left selected, as that will cause an error.]
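
For anyone scripting this rather than clicking through the tools, here is a hedged sketch of the classmates' Calculate Geometry alternative using arcpy; the geodatabase path and field names are placeholders, not from my project.

```python
import arcpy

fc = r"C:\data\nssda.gdb\city_test_points"  # placeholder feature class of the 20 test points

# 1) add two float (double) fields to hold the coordinates
arcpy.management.AddField(fc, "X_COORD", "DOUBLE")
arcpy.management.AddField(fc, "Y_COORD", "DOUBLE")

# 2) populate them from the point geometry (the scripted equivalent of Calculate Geometry)
arcpy.management.CalculateGeometryAttributes(
    fc, [["X_COORD", "POINT_X"], ["Y_COORD", "POINT_Y"]])
```

Running the tool against the feature class itself, rather than a layer in the map, should also sidestep the stray-selection problem the second tip warns about.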

Next came the data analysis. This step was a simple plug-and-chug in Excel using the templates in the NSSDA handbook (NSSDA, 1999).
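
For readers who would rather script it than use the spreadsheet, this is roughly the arithmetic the NSSDA template performs for horizontal accuracy; the coordinate pairs shown are made-up placeholders, not my measured values.

```python
import math

# (x_test, y_test, x_ref, y_ref) for each test point, in feet (placeholder values)
points = [
    (1525010.0, 1480004.0, 1525000.0, 1480000.0),
    (1526985.0, 1481502.0, 1527000.0, 1481500.0),
    # ... the remaining test/reference coordinate pairs
]

sum_sq = sum((xt - xr) ** 2 + (yt - yr) ** 2 for xt, yt, xr, yr in points)
rmse_r = math.sqrt(sum_sq / len(points))   # horizontal RMSE
accuracy_r = 1.7308 * rmse_r               # NSSDA statistic, assuming RMSE_x is about equal to RMSE_y
print(f"Tested {accuracy_r:.2f} ft horizontal accuracy at 95% confidence")
```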

Table 1: Positional Accuracy Analysis data using City data


Table 2: Positional Accuracy Analysis data using StreetMap data

In conclusion, utilizing the National Standard for Spatial Data Accuracy (NSSDA) steps, I found:
  1. The City of Albuquerque city data tested 23.93 feet horizontal accuracy at the 95% confidence level.
  2. The StreetMap data tested 232.78 feet horizontal accuracy at the 95% confidence level.
I can confidently say the City data is significantly more accurate than the StreetMap data; specifically, it is 208.85 feet more accurate at the 95% confidence level.

The Positional Accuracy Handbook notes to multiply the RMSE by a factor of 1.7308 to determine the horizontal accuracy at a 95% confidence level, or by 1.9600 to find the vertical accuracy at a 95% confidence level. I thought it was interesting that these factors have been researched further to confirm their continued validity. For example, the Federal Geographic Data Committee further evaluates the positional accuracy of large-scale maps at ground scale (FGDC, 1998).
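
Those two factors fall out of the underlying error distributions: 1.7308 is the 95% circular-error factor 2.4477 divided by the square root of 2 (the handbook's simplifying assumption that RMSE_x = RMSE_y), and 1.9600 is the familiar two-sided 95% point of the normal distribution. A quick numeric check, assuming SciPy is available:

```python
import math
from scipy.stats import chi2, norm

circ_95 = math.sqrt(chi2.ppf(0.95, df=2))   # ~2.4477: 95% radius of circular (bivariate normal) error
print(circ_95, circ_95 / math.sqrt(2))      # ~2.4477 and ~1.7308 (the horizontal factor when RMSE_x = RMSE_y)
print(norm.ppf(0.975))                      # ~1.9600: two-sided 95% point of the normal distribution (vertical factor)
```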

Sources:

Federal Geographic Data Committee. (1998). Geospatial Positioning Accuracy Standards, Part 3: National Standard for Spatial Data Accuracy. https://www.fgdc.gov/standards/projects/FGDC-standards-projects/accuracy/part3/chapter3

National Standard for Spatial Data Accuracy. (October 1999). Positional Accuracy Handbook. https://www.mngeo.state.mn.us/committee/standards/positional_accuracy/positional_accuracy_handbook_nssda.pdf





