Module 5: Coastal Flooding

For the first part of this module, I compared before-and-after images of the New Jersey coast, pre- and post-Hurricane Sandy. I created DEMs from lidar data to compare the two time periods. Below, you can see the magnitude of erosion and accretion on the Jersey coast caused by Hurricane Sandy in 2012.
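The erosion/accretion surface itself comes from simple raster differencing of the two DEMs. Below is a minimal ArcPy sketch of that step; the dataset names are hypothetical stand-ins for my actual layers.

```python
# Minimal ArcPy sketch of DEM differencing (paths are hypothetical)
import arcpy
from arcpy.sa import Raster

arcpy.CheckOutExtension("Spatial")

# Pre- and post-Sandy DEMs built from the lidar returns
pre_dem = Raster("nj_coast_pre_sandy_dem")
post_dem = Raster("nj_coast_post_sandy_dem")

# Positive values = accretion (elevation gain), negative = erosion (loss)
change = post_dem - pre_dem
change.save("nj_coast_elevation_change")
```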


In the second portion of this lab, I created a storm surge impact model for Collier County, FL. This was a very educational assignment and I learned a lot. We not only created a storm surge model for the coast of Florida, but we also did so using both lidar and USGS DEM data to compare their accuracy. I first converted the lidar layer to meters, then reclassified both rasters to include only cells where the elevation is under 1 meter. I used the Region Group tool to clump the low-lying land along the coast into connected regions, and isolated the single large connected area touching the water (via Select by Attributes) so that only it would be analyzed. Lastly, I converted that large area to a polygon for both the lidar and USGS data. Here is where the analysis got very tricky and I struggled greatly.
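For reference, that extraction workflow could be scripted roughly as follows. This is a hedged ArcPy sketch under the assumptions above (feet-to-meters conversion, 1 m threshold); the layer names and the region ID are placeholders, not my actual data.

```python
# Hedged ArcPy sketch of the flood-zone extraction (names are placeholders)
import arcpy
from arcpy.sa import Raster, Con, RegionGroup, ExtractByAttributes

arcpy.CheckOutExtension("Spatial")

# 1. Convert the lidar DEM from feet to meters
lidar_m = Raster("collier_lidar_dem_ft") * 0.3048

# 2. Reclassify: keep only cells under 1 meter (value 1), all else NoData
lidar_low = Con(lidar_m < 1, 1)

# 3. Region Group clumps contiguous low cells into numbered regions
lidar_regions = RegionGroup(lidar_low, "EIGHT")

# 4. Isolate the one large connected region along the coast
#    (its ID is read from the attribute table; "1" is a placeholder)
coastal = ExtractByAttributes(lidar_regions, "Value = 1")

# 5. Convert the isolated region to a polygon for the building overlay
arcpy.conversion.RasterToPolygon(coastal, "lidar_flood_poly", "NO_SIMPLIFY")

# The same steps (minus the unit conversion) apply to the USGS DEM.
```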

From here, I had to spatially join the buildings layer with the lidar and USGS polygons respectively. I accomplished this task easily, and was further able to select by attributes only the buildings where Join_Count equaled one [where the building was found to be impacted by the flood] to create a new layer. What flabbergasted me was how to determine the errors of omission and commission. How can I compare two different layers (lidar/USGS)?
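That join-and-select step would look something like this in ArcPy; again a sketch, with hypothetical feature class names.

```python
# Sketch: flag buildings inside the flood polygon (names are hypothetical)
import arcpy

# Join_Count records how many flood polygons each building intersects
# (1 = impacted, 0 = dry)
arcpy.analysis.SpatialJoin(
    "buildings", "lidar_flood_poly", "buildings_lidar_join",
    "JOIN_ONE_TO_ONE", "KEEP_ALL", match_option="INTERSECT")

# Keep only the impacted buildings as a new layer
impacted = arcpy.management.SelectLayerByAttribute(
    "buildings_lidar_join", "NEW_SELECTION", "Join_Count = 1")
arcpy.management.CopyFeatures(impacted, "lidar_impacted_buildings")
```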

I tried many tools [Overlay, Intersect, Select by Location] but I kept getting errors or failures. After connecting with a peer and comparing our thought processes, I realized I needed to do a spatial join of the lidar and USGS layers that contained all buildings. I spatially joined the impacted lidar buildings to the impacted USGS buildings. This resulted in a Join_Count column with 1 where both overlapped and 0 where they did not. Using Select by Attributes in the table, I counted the non-overlapping buildings to get the numerator of the error-of-omission formula. I then spatially joined the lidar layer to the USGS layer and repeated the process to determine the error of commission.
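With the overlap counted, the error rates reduce to simple ratios. Here is a small sketch, treating the lidar result as the reference; the counts below are placeholders, not my actual results.

```python
# Placeholder counts -- substitute the Select by Attributes tallies
lidar_impacted = 1000  # buildings flagged by the lidar model (reference)
usgs_impacted = 1100   # buildings flagged by the USGS model
both = 950             # Join_Count = 1 rows in the impacted-to-impacted join

# Omission: reference (lidar) buildings the USGS model missed
omission = (lidar_impacted - both) / lidar_impacted

# Commission: USGS buildings the lidar model never flagged
commission = (usgs_impacted - both) / usgs_impacted

print(f"error of omission:   {omission:.1%}")
print(f"error of commission: {commission:.1%}")
```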

Below is my final map.


I do not think the assumptions we made in this analysis are accurate. First, we excluded low-lying areas that were not connected to the large polygon touching the water. These are likely to flood as well, and excluding them can cause errors of omission. I think we should also have taken into account the urban planning of the area for water runoff, the ratio of permeable to impermeable surfaces, the efficiency of runoff collection, nearby large bodies of water, and the land's topography. Lastly, we assumed a uniform flood height across the entire area, which is very unlikely, and I believe that is what caused our high errors of commission.


