
Module 4: Crime Analysis

This was an interesting module, as I analyzed Chicago homicide data from 2017 and 2018. I created three different hotspot analysis maps to anticipate 2018 homicide hotspots in Chicago using 2017 data. After creating each map, I evaluated its accuracy by comparing the size of its predicted hotspot area against the actual 2018 crime density.

Beginning this portion of the assignment, I first made sure to set the geoprocessing environment parameters to the City of Chicago boundary.
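A minimal arcpy sketch of that setup is below. The geodatabase path and boundary layer name are placeholders, not my actual data; it simply illustrates the same idea of constraining all processing to the city limits.

```python
import arcpy

# Hypothetical workspace and boundary layer names, for illustration only.
arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\Chicago.gdb"
chicago_boundary = "Chicago_Boundary"

# Constrain the processing extent and analysis mask to the city limits so that
# every subsequent tool clips its output to Chicago.
arcpy.env.extent = chicago_boundary
arcpy.env.mask = chicago_boundary
arcpy.env.overwriteOutput = True
```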

Below I summarize the technical steps I took for each of the three analyses; a rough arcpy sketch of each workflow follows the list.

1) Grid Overlay:

a. Spatial join the Chicago grid and the 2017 homicides.
b. Select the grid cells with a homicide count greater than zero and make a new layer.
c. Select the cells in the top 20% and create a new layer.
d. Dissolve the layer into one polygon:
   i. Create a new field in the table.
   ii. Use the field calculator to give the entire new field the same number.
   iii. Use the Dissolve tool.

2) Kernel Density:

a. Use the Kernel Density tool.
b. Only use data at 3x the mean or above:
   i. Change the symbology to only two categories: 3 * mean and the maximum.
c. Convert to polygon:
   i. Reclassify using the numbers above.
   ii. Use the Raster to Polygon tool.
d. Select the features classified as 2 (greater than 3 * mean).

3) Local Moran's I:

a. Spatial join the census tracts and the 2017 homicides.
b. Add a new field to the table and use the field calculator to compute [# of homicides / housing units * 1000].
c. Use the Cluster and Outlier Analysis (Anselin Local Moran's I) tool.
d. Use an SQL query to select the high-high clusters and make a new layer:
   i. Dissolve it into one polygon using the Dissolve tool.
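Here is a rough arcpy sketch of the grid overlay workflow (step 1). I ran the equivalent geoprocessing tools interactively in ArcGIS Pro, so the workspace path, feature class names, the reliance on the default Join_Count field, and the way the top-20% cutoff is computed are illustrative assumptions rather than my exact settings.

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\Chicago.gdb"   # placeholder path
arcpy.env.overwriteOutput = True

# Hypothetical inputs: a uniform grid clipped to Chicago and the 2017 homicide points.
grid = "Chicago_Grid"
homicides_2017 = "Homicides_2017"

# a. Spatial join: Join_Count in the output holds the number of 2017 homicides per grid cell.
joined_fc = "Grid_Homicides_2017"
arcpy.analysis.SpatialJoin(grid, homicides_2017, joined_fc)

# b. Keep only the cells with at least one homicide.
arcpy.analysis.Select(joined_fc, "Grid_NonZero", "Join_Count > 0")

# c. Cells in the top 20% by homicide count (ties at the cutoff are kept).
counts = sorted((row[0] for row in arcpy.da.SearchCursor("Grid_NonZero", ["Join_Count"])),
                reverse=True)
cutoff = counts[max(len(counts) // 5 - 1, 0)]   # count at roughly the 80th percentile
arcpy.analysis.Select("Grid_NonZero", "Grid_Top20", f"Join_Count >= {cutoff}")

# d. Dissolve into one hotspot polygon: give every cell the same value, then dissolve on it.
arcpy.management.AddField("Grid_Top20", "DissolveID", "SHORT")
arcpy.management.CalculateField("Grid_Top20", "DissolveID", "1", "PYTHON3")
arcpy.management.Dissolve("Grid_Top20", "Grid_Hotspot", "DissolveID")
```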
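A similar sketch of the kernel density workflow (step 2). The cell size, search radius, and layer names are placeholders; in the assignment the 3x-mean threshold came from the symbology and the Reclassify step listed above, so this script only approximates those choices.

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\Chicago.gdb"   # placeholder path
arcpy.env.overwriteOutput = True
arcpy.CheckOutExtension("Spatial")

homicides_2017 = "Homicides_2017"

# a. Kernel density surface of the 2017 homicides (cell size and search radius are placeholders).
density = KernelDensity(homicides_2017, "NONE", cell_size=100, search_radius=2000)

# b. The cutoff is 3x the mean cell value of the density surface.
mean_val = float(arcpy.management.GetRasterProperties(density, "MEAN").getOutput(0))
max_val = float(arcpy.management.GetRasterProperties(density, "MAXIMUM").getOutput(0))
threshold = 3 * mean_val

# c. Reclassify to two classes (1 = below 3x mean, 2 = 3x mean and above), then convert to polygons.
reclassed = Reclassify(density, "VALUE",
                       RemapRange([[0, threshold, 1], [threshold, max_val, 2]]))
arcpy.conversion.RasterToPolygon(reclassed, "KD_Polygons", "NO_SIMPLIFY", "Value")

# d. Keep only the polygons classified as 2 (at or above 3x the mean) as the hotspot layer.
arcpy.analysis.Select("KD_Polygons", "KD_Hotspot", "gridcode = 2")
```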
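And a sketch of the Local Moran's I workflow (step 3). The tract and field names, and the spatial-relationship parameters passed to the Cluster and Outlier Analysis tool, are assumptions for illustration rather than the settings I used.

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\Chicago.gdb"   # placeholder path
arcpy.env.overwriteOutput = True

# Hypothetical inputs: census tracts with a HousingUnits field, plus the 2017 homicide points.
tracts = "Census_Tracts"
homicides_2017 = "Homicides_2017"

# a. Spatial join: Join_Count in the output holds the number of 2017 homicides per tract.
joined_fc = "Tracts_Homicides_2017"
arcpy.analysis.SpatialJoin(tracts, homicides_2017, joined_fc)

# b. Homicide rate per 1,000 housing units (assumes HousingUnits is populated and nonzero).
arcpy.management.AddField(joined_fc, "HomRate", "DOUBLE")
arcpy.management.CalculateField(joined_fc, "HomRate",
                                "!Join_Count! / !HousingUnits! * 1000", "PYTHON3")

# c. Cluster and Outlier Analysis (Anselin Local Moran's I) on the rate field;
#    the spatial-relationship settings shown are common defaults, not necessarily mine.
arcpy.stats.ClustersOutliers(joined_fc, "HomRate", "Tracts_LMI",
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "ROW")

# d. Select the high-high (HH) clusters from the COType output field and dissolve
#    them into a single hotspot polygon.
arcpy.analysis.Select("Tracts_LMI", "LMI_HighHigh", "COType = 'HH'")
arcpy.management.Dissolve("LMI_HighHigh", "LMI_Hotspot")
```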


Below are my three maps. 


As I mentioned, I analyzed my maps to determine which was most accurate using the data below.


I would recommend the Kernel Density map to the police chief because both its predictive hotspot percentage and its crime density were high, which means it was the most accurate at predicting real future hotspots. The grid overlay showed the highest crime density per area, but at just 26% it was far too low at predicting multiple hotspots throughout the city. Local Moran's I did the best there at 45%, but lacked accuracy in crime density at 7.72. Essentially, the grid overlay did not anticipate enough hotspot area but did a good job estimating homicides within the areas it identified. Local Moran's I, on the other hand, did a good job anticipating hotspot area coverage but was not accurate in the crime density of homicides within that space. Acting on it would also mean allocating far more resources for the police department, as the area it covered was roughly an additional 10 square miles without a correspondingly high crime density compared to the other maps.
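For context, a hypothetical scripted version of that comparison might look like the sketch below. It assumes the three dissolved hotspot layers from the workflows above, a 2018 homicide point layer, and a feet-based projected coordinate system, and it expresses crime density as 2018 homicides per square mile of hotspot; all of those names and units are assumptions, since the actual comparison was done with the map layers and their tables.

```python
import arcpy

arcpy.env.workspace = r"C:\GIS\CrimeAnalysis\Chicago.gdb"   # placeholder path
arcpy.env.overwriteOutput = True

homicides_2018 = "Homicides_2018"
total_2018 = int(arcpy.management.GetCount(homicides_2018).getOutput(0))

def evaluate(hotspot_fc):
    """Return (% of 2018 homicides inside the hotspot, homicides per sq mile of hotspot)."""
    # Count the 2018 homicides that fall inside the hotspot polygon.
    lyr = "h2018_lyr"
    arcpy.management.MakeFeatureLayer(homicides_2018, lyr)
    arcpy.management.SelectLayerByLocation(lyr, "WITHIN", hotspot_fc)
    captured = int(arcpy.management.GetCount(lyr).getOutput(0))
    arcpy.management.Delete(lyr)

    # Hotspot area in square miles (assumes a projection in feet, e.g. State Plane;
    # 5280 ft * 5280 ft = 27,878,400 sq ft per sq mi).
    area_sq_mi = sum(row[0] for row in
                     arcpy.da.SearchCursor(hotspot_fc, ["SHAPE@AREA"])) / 27878400.0

    return 100.0 * captured / total_2018, captured / area_sq_mi

for fc in ("Grid_Hotspot", "KD_Hotspot", "LMI_Hotspot"):
    pct, density = evaluate(fc)
    print(f"{fc}: {pct:.1f}% of 2018 homicides captured, {density:.2f} homicides per sq mi")
```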




