
How We Perceive Safety and How It Can Be Taught to Machines

Joseph Sohm | Shutterstock.com


How does a person judge whether a place looks safe or not? A new MIT study aims to quantify that judgment using cell phone data and machine learning.

Data on How We Perceive Safety

In a paper presented at the Association for Computing Machinery's Multimedia Conference, MIT researchers, in collaboration with the University of Trento and the Bruno Kessler Foundation, studied neighborhood safety scores.

Using cell phone data and machine learning, the researchers compared neighborhood "perceived safety" scores to how often people visited those neighborhoods, referred to as "visitation frequency." The comparison was adjusted for factors like population density and distance from the city center.
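The kind of adjustment described above can be sketched as a two-step regression: first regress visitation frequency on the control variables, then examine how the residual relates to perceived safety. The sketch below is a hypothetical illustration on simulated data, not the study's actual model or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 100 neighborhoods (simulated, not from the study).
n = 100
density = rng.uniform(1_000, 20_000, n)      # residents per sq. km
dist_center = rng.uniform(0.5, 15.0, n)      # km from city center
safety_score = rng.uniform(0.0, 10.0, n)     # perceived-safety score

# Simulated visitation frequency driven by all three factors plus noise.
visits = (0.02 * density - 150.0 * dist_center
          + 300.0 * safety_score + rng.normal(0, 50, n))

# Step 1: regress visits on the control variables only.
controls = np.column_stack([np.ones(n), density, dist_center])
coef, *_ = np.linalg.lstsq(controls, visits, rcond=None)
residual_visits = visits - controls @ coef

# Step 2: correlate the adjusted visits with perceived safety.
r = np.corrcoef(residual_visits, safety_score)[0, 1]
print(f"adjusted correlation with perceived safety: {r:.2f}")
```

With the controls regressed out, whatever correlation remains between visits and safety scores is not attributable to density or centrality alone, which is the logic behind the study's adjustment.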


The researchers first created a massive database of images of different parts of 53 cities and scored them based on their visual characteristics for affluence or poverty. Then, using cell phone data from 1.4 million human volunteers, they calculated how many people visited those areas.

An Eye on the Street

After controlling for population density, the results showed that people tend to visit areas perceived as safe and avoid those perceived as unsafe. However, males under 30 visited the areas others saw as unsafe more often. The researchers also identified the visual characteristics that led to perceptions that an area is safe or unsafe. For instance, well-maintained green spaces boosted the perception of security, but rough, unmaintained parks did the opposite.


The findings support Jane Jacobs' eyes-on-the-street theory, which states that buildings with street-facing windows increase a sense of safety compared to buildings with few or no such windows. To some extent, the findings also supported Oscar Newman's defensible-space theory, which states that the physical design of a structure can enhance or diminish perceived security.

Training Machines to Detect Safe Neighborhoods

To identify which visual features correlate with perceived safety, the MIT researchers created an algorithm that blocks out contiguous image regions with well-defined boundaries, then records how the scores assigned by the machine-learning network change as each region is removed.
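This blocking-out procedure resembles what is now commonly called occlusion sensitivity analysis: mask one region at a time and measure the drop in the model's score. A minimal sketch follows, with a stand-in scoring function in place of the study's trained network (the function name and patch size are illustrative assumptions, not details from the paper):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Slide an occluding patch across the image and record how much
    the model's safety score drops when each region is masked out."""
    h, w = image.shape[:2]
    baseline = score_fn(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # black out one region
            # A large drop means this region mattered for the score.
            heatmap[i // patch, j // patch] = baseline - score_fn(occluded)
    return heatmap

# Stand-in scorer: mean brightness as a fake "perceived safety" score.
fake_score = lambda img: float(img.mean())

img = np.random.default_rng(1).random((32, 32))
hm = occlusion_map(img, fake_score)
print(hm.shape)  # 4x4 grid of per-region score drops
```

Regions whose occlusion causes the largest score drops are the visual features the model relies on, which is how such an analysis can surface cues like greenery or street-facing windows.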

Creating Safer Neighborhoods

The study’s findings indicate that a city can change the visual characteristics of an area to increase the number of people who perceive it as safe.

Luis Valenzuela, an urban planner in Santiago, Chile, is particularly struck by the researchers’ demographically specific results. “That, I would say, is quite a big breakthrough in urban-planning research,” he says. “Urban planning — and there’s a lot of literature about it — has been largely designed from a male perspective. This research gives scientific evidence that women have a specific perception of the appearance of safety in the city.”


Edgy Universe