
Module 4: Crime Analysis

This was an interesting module, as I analyzed Chicago homicide data from 2017 and 2018. I created three different hotspot analysis maps that use the 2017 data to predict where homicides would cluster in Chicago in 2018. After creating each map, I evaluated its accuracy by comparing the size of its predicted hotspot area against the actual 2018 homicide density within that area.

Beginning this portion of the assignment, I first made sure to set the geoprocessing environment settings to the City of Chicago boundary, so that every tool's output would be limited to the study area.
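As a rough illustration, this is how that setup might look in arcpy; the workspace path and boundary layer name are placeholders, not the actual files from the assignment.

```python
import arcpy

# Hypothetical paths; the assignment's actual geodatabase and
# boundary layer names would go here.
arcpy.env.workspace = r"C:\GIS\Module4\CrimeAnalysis.gdb"
arcpy.env.overwriteOutput = True

# Constrain the processing extent and raster mask to the city boundary
boundary = "Chicago_Boundary"
arcpy.env.extent = boundary
arcpy.env.mask = boundary
```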

Below I summarize the technical steps I took to perform each analysis; after the list, I include a rough arcpy sketch of each workflow.

1) Grid Overlay:

   a. Spatial join the Chicago grid and the 2017 homicides
   b. Select the grid cells with a homicide count greater than zero and make a new layer
   c. Select the cells in the top 20% and create a new layer
   d. Dissolve the layer into one polygon
      i. Create a new field in the table
      ii. Use the field calculator to give every row of the new field the same value
      iii. Use the Dissolve tool

2) Kernel Density

   a. Use the Kernel Density tool
   b. Only use data at 3x the mean or above
      i. Change the symbology to only two categories: 3x the mean and the maximum
   c. Convert the raster to a polygon
      i. Reclassify the raster using the values above
      ii. Use the Raster to Polygon tool
   d. Select the features classified as 2 [greater than 3x the mean]

3) Local Moran's I

   a. Spatial join the census tracts and the 2017 homicides
   b. Add a new field to the table and use the field calculator to compute the homicide rate [(# of homicides / housing units) x 1000]
   c. Use the Cluster and Outlier Analysis (Anselin Local Moran's I) tool
   d. Use an SQL query to select the high-high clusters and make a new layer
      i. Dissolve using the Dissolve tool
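Here is a minimal arcpy sketch of the grid-overlay workflow (step 1). The layer names and the percentile logic for selecting the top 20% are my own assumptions; I performed these steps interactively with the geoprocessing tools in ArcGIS Pro.

```python
import arcpy

# Hypothetical inputs; the actual grid and homicide layers would go here.
grid = "Chicago_Grid"
homicides = "Homicides_2017"

# 1a. Spatial join: each grid cell gets a Join_Count of 2017 homicides
arcpy.analysis.SpatialJoin(grid, homicides, "Grid_Joined",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL")

# 1b. Keep only the cells containing at least one homicide
cells = arcpy.management.MakeFeatureLayer(
    "Grid_Joined", "cells_lyr", "Join_Count > 0").getOutput(0)

# 1c. Top 20% of those cells by homicide count: find the count at the
# 80th percentile and select everything at or above it
counts = sorted((row[0] for row in arcpy.da.SearchCursor(cells, ["Join_Count"])),
                reverse=True)
cutoff = counts[max(len(counts) // 5 - 1, 0)]
arcpy.management.SelectLayerByAttribute(cells, "NEW_SELECTION",
                                        f"Join_Count >= {cutoff}")
arcpy.management.CopyFeatures(cells, "Grid_Top20")

# 1d. Dissolve into one polygon: add a constant field, then Dissolve on it
arcpy.management.AddField("Grid_Top20", "DIS", "SHORT")
arcpy.management.CalculateField("Grid_Top20", "DIS", "1", "PYTHON3")
arcpy.management.Dissolve("Grid_Top20", "Grid_Hotspot", "DIS")
```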
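A similar sketch of the kernel density workflow (step 2), with the same caveats: the cell size is a placeholder, and the gridcode field name can differ by output format.

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")  # Kernel Density needs Spatial Analyst

# 2a. Kernel density surface of the 2017 homicides; cell size is a guess
kd = KernelDensity("Homicides_2017", "NONE", cell_size=100)
kd.save("KD_2017")

# 2b. Threshold at 3x the mean density: below becomes class 1,
# 3x mean up to the maximum becomes class 2
mean = float(arcpy.management.GetRasterProperties("KD_2017", "MEAN").getOutput(0))
top = float(arcpy.management.GetRasterProperties("KD_2017", "MAXIMUM").getOutput(0))
reclass = Reclassify("KD_2017", "VALUE",
                     RemapRange([[0, 3 * mean, 1], [3 * mean, top, 2]]))

# 2c. Convert to polygons, then 2d. keep only the hotspot class
# (the output field is "gridcode" in a file geodatabase)
arcpy.conversion.RasterToPolygon(reclass, "KD_Polys", "NO_SIMPLIFY", "Value")
hot = arcpy.management.MakeFeatureLayer(
    "KD_Polys", "kd_lyr", "gridcode = 2").getOutput(0)
arcpy.management.CopyFeatures(hot, "KD_Hotspot")
```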
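And a sketch of the Local Moran's I workflow (step 3); the HOUSING_UNITS field name and the neighborhood settings are assumptions on my part.

```python
import arcpy

# 3a. Join 2017 homicides to census tracts to get a count per tract
arcpy.analysis.SpatialJoin("Census_Tracts", "Homicides_2017", "Tracts_Joined",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL")

# 3b. Homicide rate per 1,000 housing units (assumes a HOUSING_UNITS
# field with no zero values)
arcpy.management.AddField("Tracts_Joined", "HOM_RATE", "DOUBLE")
arcpy.management.CalculateField(
    "Tracts_Joined", "HOM_RATE",
    "!Join_Count! / !HOUSING_UNITS! * 1000", "PYTHON3")

# 3c. Cluster and Outlier Analysis (Anselin Local Moran's I)
arcpy.stats.ClustersOutliers("Tracts_Joined", "HOM_RATE", "Tracts_LMI",
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "ROW")

# 3d. Select the high-high clusters (COType = 'HH') and dissolve them
hh = arcpy.management.MakeFeatureLayer(
    "Tracts_LMI", "hh_lyr", "COType = 'HH'").getOutput(0)
arcpy.management.Dissolve(hh, "LMI_Hotspot")
```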


Below are my three maps. 


As I mentioned, I analyzed the maps using the data below to determine which was most accurate.


I would recommend the kernel density map to the police chief because both its hotspot prediction rate and its crime density were high, meaning it was the most accurate at predicting real future hotspots. The grid overlay showed the highest crime density per unit area, but at just 26% it was far too low at predicting the multiple hotspots that appeared throughout the city. Local Moran's I did the best there at 45%, but lacked accuracy in crime density at 7.72. Essentially, the grid overlay did not anticipate enough hotspot area but did a good job estimating homicides within the area it covered, while Local Moran's I did a good job anticipating hotspot coverage but was not accurate in the density of homicides within that space. Choosing Local Moran's I would also mean allocating far more resources for the police department, as it covered roughly an additional 10 square miles without a correspondingly high crime density compared to the other maps.
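For context, the two numbers behind this comparison can be computed directly for each map: the share of actual 2018 homicides that fell inside the predicted hotspot, and the homicide density (homicides per square mile) within it. A minimal sketch, assuming a hypothetical Homicides_2018 layer:

```python
import arcpy

def hotspot_accuracy(hotspot_fc, homicides_2018="Homicides_2018"):
    # Percent of 2018 homicides that fall inside the predicted hotspot
    total = int(arcpy.management.GetCount(homicides_2018).getOutput(0))
    lyr = arcpy.management.MakeFeatureLayer(homicides_2018, "hom_lyr").getOutput(0)
    arcpy.management.SelectLayerByLocation(lyr, "INTERSECT", hotspot_fc)
    inside = int(arcpy.management.GetCount(lyr).getOutput(0))
    pct_captured = 100.0 * inside / total

    # Hotspot area in square miles, then homicides per square mile
    sq_mi = sum(row[0].getArea("GEODESIC", "SQUAREMILES")
                for row in arcpy.da.SearchCursor(hotspot_fc, ["SHAPE@"]))
    return pct_captured, inside / sq_mi
```

Run against each of the three hotspot polygons, these correspond to the prediction percentages and density values quoted above.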




