Wednesday, July 29, 2020

Storm Surge Analysis in New Jersey and Florida


This week, we looked at elevation data for the New Jersey shoreline from pre- and post-Sandy datasets, and compared two DEMs in Florida to determine the impacts of a 1 m storm surge and how the choice of elevation data affects the results.

Below, you will see the difference in elevation for a stretch of the Jersey Shore from before and after Sandy. The pre-Sandy surface has a more consistent coastline in terms of obvious areas of development. Many areas now look almost like small inlets where housing has clearly been destroyed and beaches eroded. Some areas were hit much harder than others; the destruction is neither consistent nor even. The areas of greatest erosion appear to be along the center-left of the image, which matches my observations of the differences between the two LAS datasets and confirms the areas of greatest destruction. There appear to be some data anomalies farther inland, where the difference between the two layers behaves as if the post-storm layer were subtracting a value of zero. These anomalies sit right next to areas that were apparently built up during the storm. The take-home message is that the destruction was not equal and that more analysis should be done to gain a better understanding of the situation.
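The elevation change itself boils down to a raster subtraction. A minimal sketch, assuming hypothetical DEM rasters "pre_sandy_dem.tif" and "post_sandy_dem.tif" interpolated from the two LAS files:

```python
# Hypothetical file names; positive values = elevation gained (deposition or
# construction), negative values = elevation lost (erosion or destroyed homes).
import arcpy
from arcpy.sa import Minus, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

elev_change = Minus(Raster("post_sandy_dem.tif"), Raster("pre_sandy_dem.tif"))
elev_change.save("sandy_elev_change.tif")
```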

This exercise was valuable because it made me think about the many variables to consider when assessing pre- and post-storm damage. It also helped me see the impact of data that does not perfectly align and the general limitations of analyses such as this.


The map below compares a USGS DEM and a LiDAR DEM to assess the coastal effects of a 1 m storm surge. There are a number of potential issues with how we treated connectedness and assumed a uniform surge, both of which influence which areas appear to be impacted. In reality, areas that do not show a complete connection to open water, or low-lying areas surrounded by higher ground, are quite likely connected and vulnerable enough to be impacted by the combination of high winds, rainfall, and variability in storm surge that is not captured in a study that assumes this level of uniformity. Just because storm surge might max out at a given height on average does not mean the impact is felt equally across all regions; even areas with limited connectivity or partial protection should be considered points of vulnerability. We could adjust our analysis by applying a range of storm surge heights, giving us a measure of uncertainty to work with. We could also review the areas that show limited connectivity and determine what might be causing that limitation. Not all limits to connectivity are created equal, and a deeper understanding of those areas would be very beneficial in designing a study that better models real-world possibilities. Rainfall and inland flooding should also be incorporated as variables in a realistic study.
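One way to start probing the connectivity assumption is to flag everything at or below the surge height and then group it into connected regions. A minimal sketch, assuming a hypothetical "lidar_dem.tif" with elevations in meters:

```python
import arcpy
from arcpy.sa import Con, Raster, RegionGroup

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

dem = Raster("lidar_dem.tif")  # hypothetical DEM in meters

# 1 = cells at or below the 1 m surge height, NoData elsewhere
inundated = Con(dem <= 1.0, 1)
inundated.save("below_1m.tif")

# Group the low-lying cells into connected regions (8-way connectivity).
# Regions that touch the coastline can be treated as hydrologically
# connected; the isolated regions are the "limited connectivity" areas
# discussed above.
regions = RegionGroup(inundated, "EIGHT")
regions.save("below_1m_regions.tif")
```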



This exercise was very beneficial for understanding the impact of DEM accuracy on flood analysis. It also helped me to understand the many assumptions we make when relying on elevation data and treating variables as constant in complex analyses.

Wednesday, July 22, 2020

Crime Analysis

This week, we looked at three different types of analysis for the purpose of predicting crime and thus enhancing the effectiveness of policing efforts. We compared Grid Overlay, Kernel Density, and Local Moran's I analyses of 2017 homicide data in Chicago to predict areas of homicide in 2018 and determine which method offers the most cost-effective solution for future policing efforts.

To complete the analysis, I first set the environments so the extent and mask were equal to the boundary of Chicago. Then, for the Grid Overlay method, I used the Spatial Join tool to join the 2017 homicides to the Chicago grid. I selected all grid cells with a homicide count greater than 0 and created a new layer called Homicide Count. I then selected the top 20% of those cells by count; out of 311 total cells, the top 20% resulted in 62 cells selected. I created another layer from that selection, dissolved it into a single feature, and the first map below was the result.
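As a rough arcpy sketch of that workflow (the layer names "chicago_grid", "homicides_2017", and "chicago_boundary" are hypothetical, and the top-20% selection is just one way to do it):

```python
import arcpy

arcpy.env.overwriteOutput = True
arcpy.env.extent = "chicago_boundary"
arcpy.env.mask = "chicago_boundary"

# Join 2017 homicides to the grid; Join_Count holds the count per cell
arcpy.analysis.SpatialJoin("chicago_grid", "homicides_2017",
                           "grid_homicide_counts",
                           join_operation="JOIN_ONE_TO_ONE")

# Keep only cells that contain at least one homicide
arcpy.management.MakeFeatureLayer("grid_homicide_counts",
                                  "homicide_count_lyr", "Join_Count > 0")

# Rank the cells by count and select the top 20% (62 of 311 in this case)
rows = [(count, oid) for oid, count in
        arcpy.da.SearchCursor("homicide_count_lyr", ["OID@", "Join_Count"])]
rows.sort(reverse=True)
top_oids = [oid for _, oid in rows[:round(len(rows) * 0.20)]]
oid_field = arcpy.Describe("homicide_count_lyr").OIDFieldName
arcpy.management.SelectLayerByAttribute(
    "homicide_count_lyr", "NEW_SELECTION",
    "{} IN ({})".format(oid_field, ",".join(map(str, top_oids))))

# Copy the selection and dissolve it into a single hotspot polygon
arcpy.management.CopyFeatures("homicide_count_lyr", "grid_top20")
arcpy.management.Dissolve("grid_top20", "grid_overlay_hotspot")
```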

For Kernel Density, I ran the Kernel Density tool on the 2017 homicides layer with an output cell size of 100 and a search radius of 2630. I then split the results into two categories: 0 to 3 times the mean, and everything above that. I reclassified the data into those two categories and converted the raster to a polygon. After that, I selected all areas at or above 3 times the mean and created a layer from that selection. The map is also shown below.
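A minimal sketch of the same steps, assuming a hypothetical "homicides_2017" point layer; the 3x-mean threshold is pulled from the raster statistics:

```python
import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

kd = KernelDensity("homicides_2017", "NONE",
                   cell_size=100, search_radius=2630)

# Threshold at 3x the mean density: class 1 below it, class 2 at or above it
threshold = 3 * kd.mean
kd_classes = Reclassify(kd, "VALUE",
                        RemapRange([[0, threshold, 1],
                                    [threshold, kd.maximum, 2]]))

# Convert the classes to polygons and keep only the high-density class
arcpy.conversion.RasterToPolygon(kd_classes, "kd_polygons", "NO_SIMPLIFY")
arcpy.management.MakeFeatureLayer("kd_polygons", "kd_hotspot_lyr",
                                  "gridcode = 2")
arcpy.management.CopyFeatures("kd_hotspot_lyr", "kernel_density_hotspot")
```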

For the Local Moran's I analysis, I again performed a spatial join, this time between the census tracts and the 2017 homicides. I added a new field to the attribute table, calculated the homicide rate per 1,000 homes, and ran the Cluster and Outlier Analysis tool. From that result, I selected the High-High clusters, created a new layer from that selection, and dissolved it into a single feature. The results are also shown below.
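A comparable sketch for this workflow, assuming hypothetical "census_tracts" and "homicides_2017" layers and a hypothetical "HOUSING" field holding the number of homes per tract:

```python
import arcpy

arcpy.env.overwriteOutput = True

# Homicides per tract (Join_Count)
arcpy.analysis.SpatialJoin("census_tracts", "homicides_2017",
                           "tract_homicides",
                           join_operation="JOIN_ONE_TO_ONE")

# Homicide rate per 1,000 homes (tracts with zero homes would need handling)
arcpy.management.AddField("tract_homicides", "HOM_RATE", "DOUBLE")
arcpy.management.CalculateField(
    "tract_homicides", "HOM_RATE",
    "!Join_Count! / !HOUSING! * 1000", "PYTHON3")

# Cluster and Outlier Analysis (Anselin Local Moran's I)
arcpy.stats.ClustersOutliers("tract_homicides", "HOM_RATE",
                             "tract_homicides_lmi")

# Keep the High-High clusters and dissolve them into one hotspot feature
arcpy.management.MakeFeatureLayer("tract_homicides_lmi", "hh_lyr",
                                  "COType = 'HH'")
arcpy.management.Dissolve("hh_lyr", "local_morans_hotspot")
```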
Grid Overlay

 Kernel Density

 Local Moran's I


Hotspot Technique | Total 2017 hotspot area (mi²) | 2018 homicides within 2017 hotspot | % of all 2018 homicides within 2017 hotspot | Crime density (2018 homicides per mi² of 2017 hotspot)
Grid Overlay      | 15.46 | 159 | 27.00% | 10.28
Kernel Density    | 26.67 | 262 | 44.48% |  9.82
Local Moran's I   | 34.05 | 265 | 45.00% |  7.78


The Kernel Density analysis provides the best model for predicting future homicides. It captured nearly the same number of 2018 homicides as the Local Moran's I analysis, but did so in an area 7.38 square miles smaller, which would allow the enforcement effort to be much more concentrated and focused while addressing a similar amount of crime. Kernel Density is also better than the Grid Overlay method because it captures a far greater share of overall homicides, while its density of homicides per square mile is only slightly lower. So you definitely get the most bang for your buck with the Kernel Density analysis.

I also want to highlight what I think is a profound level of short-sightedness in an analysis such as this. While the stated objective and results of such an analysis contain useful information, the desire to treat a single variable as a cure for the problem of homicide creates a real possibility of mismanagement, misallocation of resources, and even the active continuation of the foundational issues that ultimately allow criminal activity to incubate. These analyses can and should be used, but they must be used in ways that incorporate a fundamental understanding of wealth disparity, educational funding and availability, local economic opportunity, and other factors I am likely not mentioning here.

The issue of homicide, and crime in general, must be considered alongside the functioning of society as a whole. If the police chief is asking for this information, we need to ask the police chief who they are partnering with in the community to understand the underlying causes of such violence. Partnerships with the health department, the education department, and the general governing body of a given community are needed to find solutions that really get at the heart of the problem. A proactive and integrated approach can, in turn, create a more prosperous community, which has great potential to reduce the burden on law enforcement and increase its overall effectiveness.

I suggest that we begin looking at these types of crime analysis with a much broader view of community interactions and attempt to limit the idea of a single variable solution.

Monday, July 13, 2020

3D Scenes, Line of Sight and Viewshed Analysis

There are no attached image files for this blog post, but we completed Esri's four-part "Getting Started with Visibility Analysis" exercise.  This course began with an introduction to the 3D landscape and an overview of some of the basic navigation and analysis tools available in ArcGIS Pro.  After the introduction, we performed a line of sight analysis along a parade route and executed a viewshed analysis to help determine ideal lighting locations for a campground.  These were relatively simple exercises, but they helped us to uncover the power of these fundamental tools within ArcGIS Pro.  We finished up the training course with the creation of a 3D scene in Portland, OR.  We created a 3D scene of buildings and trees and subsequently published the buildings layer as a package.  This helped us to understand the basic processes involved in scene creation and publishing and gave an overview of privileges and publishing considerations.  Overall, this four-part course was a great introduction to the power of these 3D analytical tools.
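For reference, the core geoprocessing behind the line of sight and viewshed portions can be sketched roughly as below; the surface and feature class names ("parade_dsm.tif", "observer_points", "target_points", "light_locations") are hypothetical stand-ins for the exercise data:

```python
import arcpy
from arcpy.sa import Viewshed

arcpy.CheckOutExtension("Spatial")
arcpy.CheckOutExtension("3D")
arcpy.env.overwriteOutput = True

# Build sight lines from observers to targets along the parade route,
# then test each line against the surface for obstructions
arcpy.ddd.ConstructSightLines("observer_points", "target_points",
                              "sight_lines")
arcpy.ddd.LineOfSight("parade_dsm.tif", "sight_lines", "los_results")

# Viewshed for candidate campground lighting locations: each output cell
# records how many light positions can "see" that location
lit_area = Viewshed("parade_dsm.tif", "light_locations")
lit_area.save("campground_viewshed.tif")
```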

Monday, July 6, 2020

LiDAR Analysis

In this week's exercise, I used LiDAR data from the state of Virginia to explore various tools and analyses available to us in ArcGIS Pro.  I created four maps displayed in two separate layouts for easier comparison and analysis, plus a third layout to illustrate the study location.  First, I used the LAS Dataset to Raster tool to develop a canopy DEM layer and a ground DEM layer, then used the Minus tool to create a new raster showing canopy height.  Next, I used the LAS to MultiPoint tool to create a ground layer and a vegetation layer as determined from the LiDAR file, applied conditional statements, added the layers, and divided to create a density layer.  The density layer is dimensionless, but is based on a 0-to-1 scale of increasing density.
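A condensed sketch of that workflow is below; the file names, class codes, cell sizes, and interpolation settings are assumptions for illustration, not the exact values used (the two .lasd inputs are assumed to be LAS dataset layers already filtered to first returns and ground returns, respectively):

```python
import arcpy
from arcpy.sa import Con, Float, IsNull, Minus, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

# Canopy (first-return) and ground surfaces from the LAS dataset layers
arcpy.conversion.LasDatasetToRaster("va_firstreturn.lasd", "dsm.tif",
                                    "ELEVATION", "BINNING MAXIMUM LINEAR",
                                    sampling_type="CELLSIZE", sampling_value=5)
arcpy.conversion.LasDatasetToRaster("va_ground.lasd", "dem.tif",
                                    "ELEVATION", "BINNING MINIMUM LINEAR",
                                    sampling_type="CELLSIZE", sampling_value=5)

# Canopy height = surface minus bare earth
canopy_height = Minus(Raster("dsm.tif"), Raster("dem.tif"))
canopy_height.save("canopy_height.tif")

# Multipoint layers for ground (class 2) and vegetation (classes 3-5) returns,
# rasterized as point counts per cell
arcpy.ddd.LASToMultipoint("tile.las", "ground_pts", 5, [2])
arcpy.ddd.LASToMultipoint("tile.las", "veg_pts", 5, [3, 4, 5])
arcpy.conversion.PointToRaster("ground_pts", "OBJECTID", "ground_cnt.tif",
                               "COUNT", cellsize=20)
arcpy.conversion.PointToRaster("veg_pts", "OBJECTID", "veg_cnt.tif",
                               "COUNT", cellsize=20)

# Replace NoData with 0, then density = vegetation / (vegetation + ground)
veg = Con(IsNull(Raster("veg_cnt.tif")), 0, Raster("veg_cnt.tif"))
gnd = Con(IsNull(Raster("ground_cnt.tif")), 0, Raster("ground_cnt.tif"))
total = Float(veg + gnd)
density = Con(total > 0, Float(veg) / total)
density.save("veg_density.tif")
```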



I combined these two final maps into one layout and set them to the same scale and color scheme for easy comparison.  The first thing that jumped out at me is that the areas of greatest density do not necessarily correlate with the areas of tallest canopy.  This suggests that the densest vegetation is likely shrub growth or early- to middle-aged forest.  The tallest canopies indicate the oldest trees, and these appear to suppress undergrowth, keeping it from becoming as dense as in shorter areas of the forest where competition among vegetation types is likely greater.  See the map below for reference:


For the next map, I used the Digital Surface Model (DSM) and displayed it alongside the original LiDAR scene.  I used the DSM because it makes for a more apples-to-apples comparison with the original LiDAR scene, which had not been filtered for ground or canopy points.  I again used the same color scheme, and they look like a 2D model and a 3D model of the same scene, as you would expect.  There are slight differences between the lower and upper limits, but since the DSM is an interpolated surface rather than the raw point cloud, this variation is expected.  See the map below for reference:


I was unable to change the labels of the legend for my LiDAR scene, so the formatting is not as consistent as I would like, but the general information is understandable.



Wednesday, July 1, 2020

Least-Cost Path Corridor

In this analysis, we were required to determine suitability criteria, develop cost rasters, and use a corridor analysis to determine a suitable area for black bears to traverse back and forth between two protected forest habitats.  I started by defining the problem, listing the factors, and developing a flow chart/model to complete the analysis.

I used the above model to reclassify the rasters based on the established criteria.  I then used a weighted overlay to create a cost raster, inverted it using the Raster Calculator, and developed the subsequent model to finish the analysis:
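That second model roughly corresponds to the sketch below; the reclassified raster names, the weights, and the 1-10 suitability scale are assumptions for illustration, and a simple weighted sum stands in for the Weighted Overlay tool:

```python
import arcpy
from arcpy.sa import Corridor, CostDistance, Raster

arcpy.CheckOutExtension("Spatial")
arcpy.env.overwriteOutput = True

# Weighted combination of the reclassified criteria (illustrative weights)
suitability = (0.4 * Raster("landcover_rc.tif") +
               0.3 * Raster("roads_rc.tif") +
               0.3 * Raster("elev_rc.tif"))

# Invert on the 1-10 scale so the most suitable cells get the lowest
# (but still nonzero) travel cost
cost = 11 - suitability
cost.save("bear_cost.tif")

# Accumulated cost from each protected area, then the corridor between them
cd1 = CostDistance("coronado1", cost)
cd2 = CostDistance("coronado2", cost)
corridor = Corridor(cd1, cd2)
corridor.save("bear_corridor.tif")

# Thresholding the corridor raster (e.g., the lowest few percent of summed
# cost values) yields the narrow connection and the wider, less optimal
# buffers shown on the map below.
```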

Using the above model, I generated a corridor and used that output to create the map below.


Because I wanted a theoretically optimal corridor based on the established criteria, I manually adjusted the corridor thresholds so that the smallest possible connection between the two management areas was shown, along with two additional, less optimal buffer zones.  The outcome did not yield a clean result in which I could clearly establish the best possible route, but it did show that an optimal path from Coronado 2 could start from a broad area and would ultimately funnel into a narrow entry point into Coronado 1.  Human population, food availability, and slope could also be considered in an analysis such as this to help develop a firmer result, but this exercise gave me a much better understanding of the tools at our fingertips.