
Module 4: Crime Analysis

This was an interesting module: I analyzed Chicago homicide data from 2017 and 2018. I created three different hotspot analysis maps that use the 2017 data to anticipate where homicides would cluster in Chicago in 2018. After creating each map, I evaluated its accuracy by comparing the size of its hotspot area against the actual 2018 crime density.

Beginning this portion of the assignment, I first set the geoprocessing environment parameters to the City of Chicago boundary.
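For context, a minimal sketch of that setup in arcpy might look like the following; the workspace path and the boundary layer name are placeholders, not my exact inputs.

```python
import arcpy

# Hypothetical workspace and boundary layer names
arcpy.env.workspace = r"C:\GIS\Module4\CrimeAnalysis.gdb"
arcpy.env.overwriteOutput = True

# Constrain the processing extent and raster mask to the city limits
arcpy.env.extent = "Chicago_Boundary"
arcpy.env.mask = "Chicago_Boundary"
```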

Below I summarize the technical steps I took to perform my analysis for each of the three maps; after the list I include rough arcpy sketches of how each workflow might be scripted.

1) Grid Overlay:
   a. Spatial join the Chicago grid and the 2017 homicides.
   b. Select attributes greater than zero and make a new layer.
   c. Select attributes in the top 20% and create a new layer.
   d. Dissolve the layer into one polygon:
      i. Create a new field in the table.
      ii. Use the field calculator to give the entire new field the same number.
      iii. Use the Dissolve tool.

2) Kernel Density:
   a. Use the Kernel Density tool.
   b. Only use data 3x the mean or above:
      i. Change the symbology to only two categories: 3*mean and max.
   c. Convert to polygon:
      i. Reclassify using the numbers above.
      ii. Use the Raster to Polygon tool.
   d. Select the attributes classified as 2 (greater than 3*mean).

3) Local Moran's I:
   a. Spatial join the census tracts and the 2017 homicides.
   b. Add a new field to the table and use the field calculator to find (# of homicides / housing units * 1000).
   c. Use the Cluster and Outlier Analysis (Anselin Local Moran's I) tool.
   d. Use an SQL query to select the high-high clusters and make a new layer:
      i. Dissolve using the Dissolve tool.
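The grid-overlay workflow (step 1) could be scripted roughly as below. The layer names and the top-20% cutoff value are assumptions rather than my exact inputs; the cutoff would come from sorting the join counts.

```python
import arcpy

# 1a. Spatial join: count 2017 homicides per grid cell (a Join_Count field is added automatically)
arcpy.analysis.SpatialJoin("Chicago_Grid", "Homicides_2017", "Grid_Join")

# 1b. Keep only the cells that contain at least one homicide
arcpy.management.MakeFeatureLayer("Grid_Join", "cells_lyr")
arcpy.management.SelectLayerByAttribute("cells_lyr", "NEW_SELECTION", "Join_Count > 0")
arcpy.management.CopyFeatures("cells_lyr", "Grid_NonZero")

# 1c. Keep the top 20% of those cells (the value 3 is a placeholder cutoff)
arcpy.management.MakeFeatureLayer("Grid_NonZero", "nonzero_lyr")
arcpy.management.SelectLayerByAttribute("nonzero_lyr", "NEW_SELECTION", "Join_Count >= 3")
arcpy.management.CopyFeatures("nonzero_lyr", "Grid_Top20")

# 1d. Add a constant field and dissolve all remaining cells into one hotspot polygon
arcpy.management.AddField("Grid_Top20", "DissolveID", "SHORT")
arcpy.management.CalculateField("Grid_Top20", "DissolveID", "1", "PYTHON3")
arcpy.management.Dissolve("Grid_Top20", "GridOverlay_Hotspot", "DissolveID")
```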
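A comparable sketch of the kernel-density workflow (step 2). The cell size is a placeholder, and the reclass break values are read from the raster statistics rather than hard-coded; the output polygon field is typically named gridcode.

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# 2a. Kernel density surface of the 2017 homicide points (100 is a placeholder cell size)
kd = KernelDensity("Homicides_2017", "NONE", 100)
kd.save("KD_2017")

# 2b-2c. Reclassify into two classes: 1 = below 3x the mean, 2 = 3x the mean and above
mean_val = float(arcpy.management.GetRasterProperties("KD_2017", "MEAN").getOutput(0))
max_val = float(arcpy.management.GetRasterProperties("KD_2017", "MAXIMUM").getOutput(0))
reclass = Reclassify("KD_2017", "VALUE",
                     RemapRange([[0, 3 * mean_val, 1],
                                 [3 * mean_val, max_val, 2]]))
arcpy.conversion.RasterToPolygon(reclass, "KD_Polygons", "NO_SIMPLIFY")

# 2d. Keep only the polygons classified as 2 (>= 3x the mean) as the hotspot layer
arcpy.management.MakeFeatureLayer("KD_Polygons", "kd_lyr")
arcpy.management.SelectLayerByAttribute("kd_lyr", "NEW_SELECTION", "gridcode = 2")
arcpy.management.CopyFeatures("kd_lyr", "KernelDensity_Hotspot")
```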
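And a sketch of the Local Moran's I workflow (step 3). The housing-unit field name, the spatial-relationship settings, and the cluster/outlier type field (usually COType in the tool output) are assumptions based on typical census data and tool defaults.

```python
import arcpy

# 3a. Spatial join: count 2017 homicides per census tract
arcpy.analysis.SpatialJoin("Census_Tracts", "Homicides_2017", "Tracts_Join")

# 3b. Homicide rate per 1,000 housing units (HU_TOTAL is a placeholder field name)
arcpy.management.AddField("Tracts_Join", "HomRate", "DOUBLE")
arcpy.management.CalculateField("Tracts_Join", "HomRate",
                                "!Join_Count! / !HU_TOTAL! * 1000", "PYTHON3")

# 3c. Cluster and Outlier Analysis (Anselin Local Moran's I); settings shown are assumed defaults
arcpy.stats.ClustersOutliers("Tracts_Join", "HomRate", "Tracts_LMI",
                             "INVERSE_DISTANCE", "EUCLIDEAN_DISTANCE", "ROW")

# 3d. Select the high-high clusters and dissolve them into one hotspot polygon
arcpy.management.MakeFeatureLayer("Tracts_LMI", "lmi_lyr")
arcpy.management.SelectLayerByAttribute("lmi_lyr", "NEW_SELECTION", "COType = 'HH'")
arcpy.management.CopyFeatures("lmi_lyr", "LocalMorans_HH")
arcpy.management.Dissolve("LocalMorans_HH", "LocalMorans_Hotspot")
```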


Below are my three maps. 


As mentioned above, I then compared the maps against the actual 2018 homicide data, summarized below, to determine which was most accurate.


I would recommend the Kernel Density map to the police chief because both its predictive hotspot percentage and its crime density were high, meaning it was the most accurate at predicting real future hotspots. The grid overlay showed the highest crime density per area, but at just 26% it was far too low at accurately predicting multiple hotspots throughout the city. Local Moran's I did the best there at 45%, but lacked accuracy in crime density at 7.72. Essentially, the grid overlay did not anticipate enough hotspot area but did a good job estimating homicides within the areas it did capture. Local Moran's I, on the other hand, did a good job anticipating hotspot area coverage but was not accurate in the crime density of homicides within that space. It would also mean allocating many more resources for the police department, as the space it covered was roughly an additional 10 square miles without a correspondingly high crime density compared to the other maps.
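As a rough illustration of how those two comparison numbers could be derived for each map, the sketch below counts the 2018 homicides that fall inside a hotspot polygon and computes the percent captured and the crime density per square mile. The layer names are placeholders, and this is an approximation of the comparison rather than my exact workflow.

```python
import arcpy

def hotspot_metrics(hotspot_fc, homicides_2018_fc):
    """Percent of 2018 homicides captured by a hotspot layer, and homicides per square mile."""
    arcpy.management.MakeFeatureLayer(homicides_2018_fc, "hom_lyr")
    total = int(arcpy.management.GetCount("hom_lyr").getOutput(0))

    # Homicides that actually occurred inside the predicted hotspot
    arcpy.management.SelectLayerByLocation("hom_lyr", "INTERSECT", hotspot_fc)
    inside = int(arcpy.management.GetCount("hom_lyr").getOutput(0))

    # Hotspot area in square miles (geodesic, so the map projection does not matter)
    area_sq_mi = sum(row[0].getArea("GEODESIC", "SQUAREMILES")
                     for row in arcpy.da.SearchCursor(hotspot_fc, ["SHAPE@"]))

    return inside / total * 100, inside / area_sq_mi

# e.g. pct_captured, density = hotspot_metrics("KernelDensity_Hotspot", "Homicides_2018")
```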




