Computer Vision – Egg hatching detection

The release of the Raspberry Pi 3, along with simple, low-cost cameras, makes getting into camera motion detection simple and cheap.  For me this meant a handy (and cool) project to monitor my lizard egg incubator.

The Project

I have lizard eggs hatching a couple of times a week.  It is good to know when they hatch, as I like to leave hatchlings in the incubator and take them out after 24 hours.  Normally I check every day or two and then think, “did that just hatch, or did it hatch just after I last checked?”.

I also wanted to trial the OpenCV image processing library as well as brush up on my Python.

 

Overview

I could try to train an algorithm to detect the various species of lizard in the image.  It could be done, and would be interesting, but a more straightforward approach is motion detection.  The hatching process can take hours from start to finish, so frame to frame at 24fps the changes would be tiny.  How can it work?  Easy: take an image every minute and compare it to a reference image, then use the difference between the images to judge what has happened.

Check out a timelapse video I created from the captured images of a gecko hatching from its egg : Timelapse video

Part 1. Hardware

I used a Raspberry Pi 3 to capture the images and do the processing.  It is perfect for the job: cheap, powerful enough for some processing, and with the libraries and ability to run Python to make it all pretty easy.

For the camera, I use a wide-angle IR camera with IR LEDs, as it needs to run day and night, ideally without visible light disturbing the day/night cycle of the lizards.  Make sure you get a wide-angle camera, as the camera-to-object distance is small (~15cm).

Part 2. Image Capture

I won’t go into detail about how to enable the Raspberry Pi camera as there are plenty of guides out there.  This is how I used it.  It can capture live images or run through a list of recorded files.  I record each image to allow for testing and training of the code.

Full code is available on GitHub : IncubatorMonitor code

 import os
 import time
 from time import sleep
 from picamera import PiCamera

 def go_replay(self):

    for f in self.files:
       try:
          (path, ext) = os.path.splitext(f)
          if (os.path.isfile(f) and ext == ".jpg"):
             img = self.process_image(f)
          else:
             print("reject", f, ext)
       # catch exceptions that are likely specific to a single file.
       except (AttributeError, ValueError, TypeError, IndexError) as e:
          print("Caught", e, "while processing", f)

    self.write_results()

 def go_live(self):

    camera = PiCamera()
    camera.start_preview()
    high_res = (640, 480)
    camera.resolution = high_res

    while True:
       for i in range(1,10):
          filename = self.output_path + 'Incubator_'+ time.strftime('%m%d_%H%M%S')+ '.jpg'
          camera.capture(filename)
          img = self.process_image(filename)

          print("Processed ", filename)

          #todo configure interval
          #split sleep to allow quicker interruption
          sleep(10)
          sleep(10)
          sleep(10)
          sleep(10)
          sleep(10)
          sleep(9)

       #write results periodically as it will probably end in tears (i.e. ctrl-C)
       self.write_results()
       print('Updated results')

Part 3. Basic Motion detection

This is where you go, “Oh, is that it?”.  It is surprisingly easy to write basic motion detection code in Python.  The trick is detecting something useful and avoiding false positives.

I use the open source computer vision library OpenCV.  I’m barely scratching the surface of what can be done with it, but if nothing else it is a handy way to deal with images.

The hardest part is building OpenCV on the Raspberry Pi.  Here are instructions on how to do it.

Basic steps involved:

  1. Convert to grayscale.  A single intensity value per pixel is easier to deal with, and given the IR camera provides little real color detail we aren’t losing anything useful.
  2. Image difference.  Subtract one image from the other to detect differences.  The camera and the background are fixed, so any change _should_ be interesting to us.
  3. Threshold the difference.  There is some noise, so the difference is never exactly 0 for every pixel.  We use a threshold to keep only changes of a meaningful level.  For this application a threshold of 15 to 20 is used.
  4. Dilate the thresholded data.  To smooth and merge (a gross simplification) areas above the threshold, perform a dilation step.  The size and shape of the dilation kernel has an impact on how well this works.  I found a 9×9 MORPH_ELLIPSE kernel to work pretty well, but it could be improved on.
  5. Find contours.  Each connected area above the threshold produces a contour, giving a list of objects to analyse.

Here is a snippet of the relevant code.  Again, full code is available here : IncubatorMonitor code


 import cv2
 import imutils

 se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))

 def __init__(self, blah):
...
 r = cv2.imread(self.path)
 self.raw = imutils.resize(r, width=500)
 blur = cv2.GaussianBlur(self.raw, (9,9), 0)
 self.processed = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
...

 def compareToReference(self, ref_obj, threshold):

    assert(self.processed.shape == ref_obj.processed.shape)

    #compare to reference image
    self.delta = cv2.absdiff(self.processed, ref_obj.processed)
    # Check against threshold to find meaningful changes
    self.thresh = cv2.threshold(self.delta, threshold, 255, cv2.THRESH_BINARY)[1]

    # Run dilation to smooth and join nearby detections.
    self.dilate = cv2.dilate(self.thresh, self.se, iterations=4)

    #extract a list of separate detections.
    (_, self.contours, _) = cv2.findContours(self.dilate.copy(), cv2.RETR_EXTERNAL,
                                              cv2.CHAIN_APPROX_SIMPLE)
    self.output = self.raw

    # Create a box for each found movement.
    for c in self.contours:
       self.areas.append(cv2.contourArea(c))
       (x, y, w, h) = cv2.boundingRect(c)

       _ = cv2.rectangle(self.output, (x,y), (x+w, y+h), (0,200,0), 2)

    self.ref = ref_obj

From this we end up with a set of images showing the difference between the reference image and the image being analysed.

The image:

Incubator_0212_110811

1. The raw difference between the image and the reference:

Incubator_0212_110811_delta

2. The thresholded difference:

Incubator_0212_110811_thresh

3. Dilation expands and merges the smaller detections:

Incubator_0212_110811_dilate

4. Look, we found it!

Incubator_0212_110811_box

 

Part 4. Image Classification

In a perfect world, we could just say each of these detections above the threshold is a hatching event.  In practice, there are other things that can trigger a difference between the images: removing the lid which has the camera on it, the light in the room going on and shining through the incubator window, moving the camera, moving the tubs inside, Schrödinger’s lizard, etc.

In this case the easiest thing to do for some of these is to control the environment:

  • Fix the camera to the inside of the incubator.
  • Cover the viewing ports on the incubator to avoid stray light.

To make things more interesting, I decided not to, and instead to make the software “smart” enough to tell the difference.

It needs to classify each image into one of four categories:

  • No detection
  • Found hatching
  • Detected temporary disruption
  • Environment changed, reset reference image.

This is a typical “classification” problem in machine learning terminology.

Start with something simple: some basic logic to separate these cases.  What features can we extract to judge between these categories?  (A sketch of this logic follows the list below.)

  • No detection
    • Very little difference to the reference image
  • Found hatching
    • There is a noticeable change in 1 specific location.
  • Detected temporary disruption
    • There is a broad change between the image and the reference or a large number of localised differences.
  • Environment changed, reset reference image.
    • The images are very different.
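As a rough illustration, here is a minimal sketch of what that rule-based logic could look like, working on the image object from the earlier snippet (it only needs the areas list and the dilate array).  The threshold values and names (min_area, reset_fraction, and so on) are illustrative assumptions, not the exact values used in the project; the idea is simply to map the features above (number and size of detections, fraction of the frame that changed) to the four categories.

# A minimal rule-based classification sketch.  Thresholds and names are
# illustrative assumptions, not the project's actual values.
NONE, HATCHING, DISRUPTION, RESET = 0, 1, 2, 3

def classify(img, min_area=500, max_detections=3, reset_fraction=0.5):
   # Ignore tiny contours that are most likely noise.
   significant = [a for a in img.areas if a > min_area]
   changed_fraction = sum(img.areas) / float(img.dilate.size)

   if not significant:
      return NONE          # very little difference to the reference
   if changed_fraction > reset_fraction:
      return RESET         # the whole scene is different
   if len(significant) > max_detections:
      return DISRUPTION    # broad or scattered changes
   return HATCHING         # a noticeable change in one specific location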

So, how well does it work?

In short, pretty well.  It never misses a hatching and always detects a big reference change.  The problem, as is to be expected, is false positives: hatching detections that should have been classified as a temporary disruption.  This happens once or twice a week.

The solutions:

  1. Keep manually “tuning” (tinkering with) the parameters to tune them out.  This would probably work but doesn’t address the underlying fragility.
  2. Put in some memory.  Temporary changes are temporary and hatching is forever, so if we have an awareness of what has happened in the past then we can separate these two situations (see the sketch below).  I’ve done this and it works, but it offends me slightly.
  3. Use machine learning techniques to make a smarter assessment.  This sounds like more fun, so it is what I will do next.  Logistic regression or an SVM?  A neural network?  We’ll see…
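For what it’s worth, here is a minimal sketch of the memory idea from option 2, under the assumption that a real hatching persists across several consecutive frames while a temporary disruption clears quickly.  The class name and window sizes are illustrative, not taken from the project code.

from collections import deque

class DetectionMemory:

   def __init__(self, window=5, required=4):
      # Remember the last few per-frame classifications.
      self.history = deque(maxlen=window)
      self.required = required

   def update(self, frame_says_hatching):
      self.history.append(bool(frame_says_hatching))
      # Only confirm a hatching once most of the recent frames agree;
      # a one-off disruption never accumulates enough votes.
      return sum(self.history) >= self.required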

Part 5. Notification

For fun, I decided to tweet the notifications to my Twitter account.  This is really straightforward in Python.  It can either send me a DM or post an update.


import tweepy
import json

class IncNotify():

   def __init__(self):
      #
      # twitter_auth.json File Format
      #
      #{
      # "consumer_key" : "blah",
      # "consumer_secret" : "blah",
      # "access_token" : "blah",
      # "access_secret" : "blah"
      #}

      twitter_auth = json.loads(open('twitter_auth.json').read())
      # Configure auth for twitter
      auth = tweepy.OAuthHandler(twitter_auth['consumer_key'], twitter_auth['consumer_secret'])
      auth.set_access_token(twitter_auth['access_token'], twitter_auth['access_secret'])

      self.api = tweepy.API(auth)

      ## TODO handle failure
      print("Twitter output enabled")

   def notify(self, img, level):

      # TODO check img is in the correct format.
      if self.api is None or img is None or level == 0:
         return None

      msg = "The incubator monitor has detected a gecko hatching. Did I get it right?"

      if level <= 1:
         self.api.send_direct_message(user="namezmud", text=msg)
      else:
         path = img.output_path
         if not path:
            path = img.path
         self.api.update_with_media(path, msg)

      print("SEND!!!! " + img.getShortname())

Conclusion (so far)

There are still plenty of TODOs in the code and extensions that could be made but it works for now and I’ll set it up live for next breeding season.  Plenty to work on in what spare time I have.

If you are interested in giving it a go, feel free.  All the code is on github here for public consumption. IncubatorMonitor

The aftermath

Within the space of a week I had my PC stolen in a burglary and snapped the micro SD card in the Raspberry Pi in half.  These were where the images and backups for most of the hatching season were stored, so I lost all my unit tests and most of my training data.  We’ll have to see if I have enough data left to train a machine learning model, or if I have to wait until the end of the year to collect more.  If there is anyone in the northern hemisphere breeding this summer who is interested in setting up a camera, please get in touch!


Yet Another Monitor

It is probably the most obvious and straightforward IoT project there is, but for me it was a chance to try some technology I haven’t had a chance to use – the ol’ ‘monitor the temperature and display some pretty graphs’ project.  Much of what I have used is overkill for the need, but it provides a good base of knowledge for something bigger.

The Project

Monitor the temperature of up to 20 reptile enclosures, logging the data to ensure an optimal environment (particularly during the winter cooling period) and to correlate successful breeding with winter temperatures.

The Solution

Using every buzz-word in one project… IoT, “the cloud”, AWS, Python.

YAM (1).jpg

I’ll go into more detail on each part in future posts; to set the scene, this is how it went.

Part 1. Sensors

I need around 20 sensors spread over two sides of a room, so the DS18B20 sensors in a waterproof housing are a good option.  They are cheap (on eBay) and communicate via a digital signal over a three-wire bus (incl. power & ground), so there is less wiring and none of the noise/voltage-drop issues of an analog sensor.

Part 2. Arduino / Particle Photon

Two of a number of options for wifi-connected microcontrollers.  They read the sensors from the bus on a digital input and broadcast the readings over MQTT.  The use of MQTT means any number of devices can be connected to any number (within reason) of sensors without the upstream processing needing to know or care.  JSON allows the data to be encoded in a format that is human readable and easy to work with in Python.  Sure, the payload is larger than a binary message, but in this case, who cares?

Sneak-peek : the Photon is much better than the Arduino in this context.  The new Raspberry Pi Zero W would also do the job with a 5 line shell script at AUD$15.
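To give a feel for how small the publishing side is, here is a minimal Python sketch of the sensor-to-MQTT step as it might look on a Pi rather than the Photon.  It assumes the paho-mqtt library and the standard DS18B20 1-Wire kernel driver; the topic name and broker hostname are placeholders, not the ones used in this project.

import glob
import json
import time
import paho.mqtt.publish as publish

def read_ds18b20(device_dir):
   # The 1-Wire kernel driver exposes each DS18B20 under /sys/bus/w1/devices/.
   with open(device_dir + '/w1_slave') as f:
      raw = f.read()
   return int(raw.split('t=')[-1]) / 1000.0   # millidegrees C -> degrees C

while True:
   for device_dir in glob.glob('/sys/bus/w1/devices/28-*'):
      payload = json.dumps({
         'sensor': device_dir.split('/')[-1],
         'temp_c': read_ds18b20(device_dir),
         'ts': int(time.time()),
      })
      # Placeholder topic and broker hostname.
      publish.single('sensors/temperature', payload, hostname='broker.local')
   time.sleep(60)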

Side-note: Am I old for realising the Pi Zero W is more powerful than the first $10k I spent on computers?

Part 3. MQTT Broker

A handy gateway to “the cloud”: it down-samples the data, adds the sensor-to-location mapping, and provides security and buffering on the MQTT connection to the cloud.
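As a rough sketch of that gateway role, the snippet below subscribes to the local broker, tags each reading with a location, and republishes it to a cloud broker.  It assumes the paho-mqtt library; the topic names, hostnames, and sensor-to-location mapping are all illustrative, and the down-sampling is only hinted at in a comment.

import json
import paho.mqtt.client as mqtt

# Illustrative sensor-to-location mapping.
SENSOR_LOCATIONS = {'28-000006abcdef': 'enclosure-01'}

def on_message(client, userdata, msg):
   reading = json.loads(msg.payload)
   reading['location'] = SENSOR_LOCATIONS.get(reading['sensor'], 'unknown')
   # Down-sampling / buffering would happen here before forwarding.
   cloud.publish('reptiles/temperature', json.dumps(reading))

local = mqtt.Client()
local.on_message = on_message
local.connect('localhost')
local.subscribe('sensors/temperature')

cloud = mqtt.Client()
cloud.connect('cloud-broker.example.com')   # placeholder cloud endpoint
cloud.loop_start()

local.loop_forever()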

Part 4. Storage

A database, as you would expect.  NoSQL and in the cloud: because I can… and so I can access the data from outside my network without opening up my ports.
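The post doesn’t name the database, so purely as an illustration here is what the write path could look like with DynamoDB (one NoSQL option on AWS) via the boto3 library; the table and attribute names are made up.

import boto3
from decimal import Decimal

# Hypothetical table with 'sensor' as partition key and 'ts' as sort key.
table = boto3.resource('dynamodb').Table('temperature_readings')

def store_reading(reading):
   table.put_item(Item={
      'sensor': reading['sensor'],
      'ts': reading['ts'],
      'location': reading['location'],
      # DynamoDB doesn't accept Python floats, so convert to Decimal.
      'temp_c': Decimal(str(reading['temp_c'])),
   })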

Part 5. Display

On the web, using Python and one of the many charting libraries, hosted in the cloud.  If 100,000 people want to view the temperature I keep my S. ciliaris at, they can, provided I pay for the capacity… and again, I don’t need to open my ports.

This was an opportunity to try the Flask and Bokeh Python libraries for the web framework and charting respectively.  Django and Google Charts would be fine too.
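For a flavour of how little code that takes, here is a minimal Flask + Bokeh sketch.  The get_readings() helper is a stand-in for whatever pulls rows from the database, and the route and labels are illustrative rather than what the live site uses.

from datetime import datetime, timedelta
from flask import Flask
from bokeh.plotting import figure
from bokeh.embed import file_html
from bokeh.resources import CDN

app = Flask(__name__)

def get_readings(sensor_id):
   # Stand-in for the real database query; returns an hour of dummy data.
   now = datetime.now()
   return [now - timedelta(minutes=m) for m in range(60, 0, -1)], [24.0] * 60

@app.route('/enclosure/<sensor_id>')
def chart(sensor_id):
   times, temps = get_readings(sensor_id)
   p = figure(title=sensor_id, x_axis_type='datetime',
              x_axis_label='Time', y_axis_label='Temperature (°C)')
   p.line(times, temps)
   return file_html(p, CDN, title=sensor_id)

if __name__ == '__main__':
   app.run()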

I haven’t put much effort in to the display so far so it is pretty basic and clunky but works as a proof of concept.

Check it out for yourself live : YarraRiverReptiles Live!

screen-shot-2017-05-27-at-3-48-10-pm.png

 

Conclusion

This is still a work in progress, things to clean up and features to add but it is a start.  If you are interested in more details, check out the follow up posts over the next few weeks or get in touch.