Machine Learning with Amazon Blink Cameras

This blog post describes a hobby project that uses an Amazon Blink camera to trigger and fetch a picture of a door, and then applies machine vision algorithms to determine whether the door is open or closed.

Amazon Blink Security Cameras

Amazon Blink cameras are wireless smart-home security cameras that provide cloud-based video storage. They can be controlled remotely via a mobile app and offer a host of features such as configurable motion sensitivity, motion detection zones, two-way audio and infrared night vision.

The Blink app uses undocumented APIs with JSON payloads to control and trigger the cameras, and to store and stream video from them via a ‘command hub’.

We can leverage these APIs to trigger an update of the thumbnail image from a given camera. The image obtained can then be fed to machine learning / machine vision algorithms for analysis and prediction.
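A minimal sketch of triggering a thumbnail update is shown below. Since the API is undocumented, the host, path and header name here are assumptions based on community reverse-engineering efforts (such as the blinkpy project) and may change without notice.

```python
BASE_URL = "https://rest-prod.immedia-semi.com"  # assumed host; the regional endpoint may differ

def thumbnail_url(network_id: int, camera_id: int) -> str:
    """Build the URL that asks the command hub to refresh a camera's thumbnail."""
    return f"{BASE_URL}/network/{network_id}/camera/{camera_id}/thumbnail"

def trigger_thumbnail(token: str, network_id: int, camera_id: int):
    """POST to the thumbnail endpoint; the auth token comes from a prior login call."""
    import requests  # imported here so the URL helper stays dependency-free
    # TOKEN_AUTH is the header name used by community clients -- an assumption here
    return requests.post(thumbnail_url(network_id, camera_id),
                         headers={"TOKEN_AUTH": token})
```

Once the hub has refreshed the thumbnail, a corresponding GET request can fetch the updated image for analysis.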

Blink camera image comparison

In our example, a ‘closed door’ is the ‘Desired State’, while an ‘open door’ or even a ‘partially open door’ is an ‘Anomaly State’ that needs to be detected and alerted on by the machine learning algorithm.

Machine Learning using the Python Imaging and NumPy libraries

We can use the Python Imaging Library (PIL) to analyze the RGB channel information at a pixel that is affected by the position of the door.

r, g, b = image.getpixel((555, 600))

In the above example, a pixel of the image (shown in the red circle) has a perceivable difference in its RGB channels depending on the position of the door, and hence can be used with a threshold value (with a degree of error) to determine if the door is open or closed.

import matplotlib.pyplot as plt

image_cropped = image.crop(box=(300, 150, 500, 350))
plt.rcParams["figure.figsize"] = (2, 2)

Since results based on a single pixel are subject to error (the pixel can be influenced by lighting conditions), it is better to consider an area of affected pixels by cropping the image.

import numpy as np
avg = np.average(image_cropped)

The above example shows cropping the part of the image (shown in the red box) affected by the door’s position and then computing the average of the NumPy array representing the cropped image. A threshold of 85 can help us determine whether the door is in an ‘open’ or ‘closed’ position.

if avg > 85:
    print('Door is closed')
else:
    print('Door is OPEN!')
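The fragments above can be combined into one self-contained sketch. Here a synthetic NumPy array stands in for the camera thumbnail; the crop box (300, 150, 500, 350) and the threshold of 85 come from the text, while the brightness values 120 and 40 are invented for illustration.

```python
import numpy as np

def door_state(image_array: np.ndarray, threshold: float = 85) -> str:
    """Classify door position from the mean brightness of the cropped region."""
    # NumPy indexes rows first, so box=(300, 150, 500, 350) becomes [150:350, 300:500]
    crop = image_array[150:350, 300:500]
    return 'closed' if np.average(crop) > threshold else 'OPEN'

# Synthetic stand-ins for the thumbnail: a bright crop region (door closed)
# versus a dark one (door open)
bright = np.full((600, 800), 120, dtype=np.uint8)
dark = np.full((600, 800), 40, dtype=np.uint8)

print(door_state(bright))  # closed
print(door_state(dark))    # OPEN
```

With a real thumbnail, `np.asarray(image.convert('L'))` would produce the grayscale array that this function expects.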

Machine Learning using ‘Edge Detection’ algorithms

‘Edge detection’ is a fundamental tool in image processing, machine vision and computer vision.

Edge detection algorithms like OpenCV’s Canny can be used to detect the presence or absence of the gate and hence provide more reliable results.

Running edge detection directly on Blink camera images produces high noise due to surrounding objects. As a result, the edges of the gate are not clearly defined and it becomes difficult to predict the gate’s position.

Sequentially transforming the Blink camera image by converting it to grayscale, applying a bilateral filter and then running OpenCV’s Canny edge detection gives sharper edges with much less noise.

import cv2

img = cv2.imread(str(image_path), cv2.IMREAD_COLOR)

# Step 1: convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 2: smooth while preserving edges with a bilateral filter
bi = cv2.bilateralFilter(gray, 15, 75, 75)

# Step 3: detect edges with Canny (hysteresis thresholds 100 and 200)
dst = cv2.Canny(bi, 100, 200)

Gate position detection using image transformations and ‘edge detection’ results

Just as in the previous example, we can use the transformed image obtained after the sequential processing and edge detection steps.

We can crop the part of the image (shown in the red box) that is affected by the position of the gate. The average of the NumPy array of the cropped image will show considerably different values based on whether the gate’s edges are present or absent in it. In the above example, a threshold of 2 can help us determine whether the gate is ‘open’ or ‘closed’.

image_cropped = dst[200:360, 250:400]

avg = np.average(image_cropped)
if avg < 2:
    print('Door is closed')
else:
    print('Door is OPEN!')
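This decision rule can be exercised without a camera by building a synthetic Canny-style edge map (a binary image with 255 on edge pixels and 0 elsewhere). The crop coordinates and the threshold of 2 follow the text; the gate edge drawn here is invented for illustration.

```python
import numpy as np

def gate_state(edge_map: np.ndarray, threshold: float = 2) -> str:
    """Classify gate position from the density of edge pixels in the crop."""
    crop = edge_map[200:360, 250:400]
    return 'closed' if np.average(crop) < threshold else 'OPEN'

# Synthetic Canny-style output: 0 everywhere, 255 along detected edges.
no_edges = np.zeros((480, 640), dtype=np.uint8)   # no gate edges in the crop
with_edges = no_edges.copy()
with_edges[250:340, 280:285] = 255                # a vertical gate edge inside the crop

print(gate_state(no_edges))    # closed
print(gate_state(with_edges))  # OPEN
```

Because Canny output is binary, the crop average is just 255 times the fraction of edge pixels, so a small threshold like 2 corresponds to under one percent of the crop being edges.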