
Module 5: Coastal Flooding

For the first part of this module, I compared before-and-after pictures of the New Jersey coast, pre- and post-Hurricane Sandy. I created DEMs from lidar data to compare the two time periods. Below, you can see the magnitude of erosion and accretion on the Jersey coast due to Hurricane Sandy in 2012.
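As a rough sketch of that differencing step (the Spatial Analyst extension is assumed, and the layer names here are hypothetical), subtracting the pre-storm DEM from the post-storm DEM yields a change raster where negative cells indicate erosion and positive cells indicate accretion:

```python
import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# Hypothetical names for the pre- and post-Sandy DEMs built from lidar.
pre_dem = Raster("nj_dem_pre_sandy")
post_dem = Raster("nj_dem_post_sandy")

# Post minus pre: negative values = erosion, positive values = accretion.
change = post_dem - pre_dem
change.save("nj_elevation_change")
```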


In the second portion of this lab, I created a storm surge impact model for Collier County, FL. This was a very educational assignment and I learned a lot. We not only created a storm surge model for the coast of Florida, but we did so using both lidar and USGS DEM data so we could compare their accuracy. I first converted the lidar layer to meters, then reclassified both DEMs to include only cells where the elevation is under 1 meter. I used the Region Group tool to clump the low-lying land along the coast into connected areas, and isolated the single large area touching the water (via Select By Attributes) so that only this connected region was analyzed. Lastly, I converted that area to a polygon for both the lidar and USGS datasets; a minimal sketch of these steps follows.
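This is only a sketch of those preparation steps, assuming the lidar DEM is delivered in feet and using hypothetical layer names throughout:

```python
import arcpy
from arcpy.sa import Raster, Con, RegionGroup

arcpy.CheckOutExtension("Spatial")

# Hypothetical inputs; the lidar DEM is assumed to be in (international) feet.
lidar_m = Raster("lidar_dem_ft") * 0.3048   # convert feet to meters
usgs_m = Raster("usgs_dem_m")

for name, dem in [("lidar", lidar_m), ("usgs", usgs_m)]:
    # Keep only cells below 1 m; everything else becomes NoData.
    flood = Con(dem < 1, 1)

    # Group connected flood cells into discrete regions.
    regions = RegionGroup(flood, "EIGHT", "WITHIN")

    # Convert to polygons; the large connected coastal region can then be
    # isolated with Select By Attributes on its region ID (gridcode).
    arcpy.conversion.RasterToPolygon(
        regions, f"{name}_flood_poly", "NO_SIMPLIFY", "Value")
```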

Here is where the analysis got very tricky and I struggled greatly. From here, I had to spatially join the buildings layer with the lidar and USGS polygons respectively. I accomplished this easily, and was then able to use Select By Attributes to keep only the buildings where Join_Count equaled 1 [i.e., the buildings found to be impacted by the flood] and export them to a new layer, roughly as sketched below.
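A hedged sketch of that join-and-select step for the lidar polygon (the same pattern applies to the USGS polygon; all layer names are hypothetical):

```python
import arcpy

# Join the flood polygon to the buildings so each building gets a
# Join_Count of 1 if it intersects the flood zone, else 0.
arcpy.analysis.SpatialJoin(
    "buildings", "lidar_flood_poly", "buildings_lidar_join",
    "JOIN_ONE_TO_ONE", "KEEP_ALL", match_option="INTERSECT")

# Keep only the impacted buildings.
arcpy.management.MakeFeatureLayer("buildings_lidar_join", "impacted_lyr")
arcpy.management.SelectLayerByAttribute(
    "impacted_lyr", "NEW_SELECTION", "Join_Count = 1")
arcpy.management.CopyFeatures("impacted_lyr", "lidar_impacted_buildings")
```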

What flabbergasted me was how to determine the errors of omission and commission. How could I compare two different layers (lidar vs. USGS)? I tried many tools [Overlay, Intersect, Select By Location] but kept getting errors or failures. After connecting with a peer and comparing our thought processes, I realized I needed to do a spatial join between the lidar and USGS layers of impacted buildings. I spatially joined the impacted lidar layer to the impacted USGS layer, which produced a Join_Count column with 1 where the two overlapped and 0 where they did not. Using Select By Attributes on the table, I could count the numerator of the formula for the error of omission. I then ran the join in the opposite direction (impacted USGS to impacted lidar) and did the same thing to determine the error of commission.
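A sketch of how those counts could be turned into error rates, assuming the lidar result serves as the reference, that Join_Count = 0 marks the non-overlapping buildings, and that the hypothetical layer names from the joins above are reused:

```python
import arcpy

def count_where(fc, where):
    """Count the features in fc that match a SQL where clause."""
    arcpy.management.MakeFeatureLayer(fc, "tmp_lyr")
    arcpy.management.SelectLayerByAttribute("tmp_lyr", "NEW_SELECTION", where)
    n = int(arcpy.management.GetCount("tmp_lyr")[0])
    arcpy.management.Delete("tmp_lyr")
    return n

# Impacted-lidar joined to impacted-USGS: Join_Count = 0 means a building
# the lidar model flooded but the USGS model missed (an omission).
missed = count_where("lidar_join_usgs", "Join_Count = 0")
lidar_total = int(arcpy.management.GetCount("lidar_impacted_buildings")[0])
omission_rate = missed / lidar_total

# The reverse join: Join_Count = 0 means a building the USGS model flooded
# but the lidar model did not (a commission).
extra = count_where("usgs_join_lidar", "Join_Count = 0")
usgs_total = int(arcpy.management.GetCount("usgs_impacted_buildings")[0])
commission_rate = extra / usgs_total

print(f"Omission: {omission_rate:.1%}, Commission: {commission_rate:.1%}")
```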

Below is my final map.


I do not think the assumptions we made in this analysis are accurate. First, we excluded low-lying areas that were not connected to the large polygon touching the water. These are likely to flood as well, and excluding them can cause errors of omission. I think we should also have taken into account the urban planning of the area for water runoff, the ratio of permeable to impermeable surfaces, the efficiency of runoff collection, nearby large bodies of water, and the land's topography. Lastly, we assumed a single uniform water level across the whole area, which is very unlikely, and I believe that is what caused our high errors of commission.


