Researchers have developed a deep learning algorithm that can automatically identify, count and describe animals in their natural habitats.
A new paper, published in Proceedings of the National Academy of Sciences (PNAS), describes how the cutting-edge artificial intelligence technique uses deep neural networks to automatically describe photographs collected by motion-sensor cameras.
The result is a system that can automate animal identification for up to 99.3 per cent of images while matching the 96.6 per cent accuracy rate of crowd-sourced teams of human volunteers.
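A figure like "automates 99.3 per cent of images" suggests a pipeline in which the network handles the images it classifies confidently and routes the uncertain remainder to human volunteers. The sketch below is a hypothetical illustration of that routing idea, not code from the study; the threshold value and data are invented for the example.

```python
def route_images(predictions, threshold=0.9):
    """Split (label, confidence) predictions into auto-accepted labels
    and indices of images that should be sent for human review.

    Hypothetical sketch: the actual study's thresholds and outputs
    are not described at this level of detail in the article."""
    automated, needs_human = [], []
    for i, (label, conf) in enumerate(predictions):
        if conf >= threshold:
            automated.append((i, label))   # model is confident: accept
        else:
            needs_human.append(i)          # low confidence: human labels it
    return automated, needs_human

# Toy example with made-up predictions
preds = [("zebra", 0.98), ("lion", 0.62), ("elephant", 0.95)]
auto, manual = route_images(preds)
```

Raising the threshold trades automation rate for accuracy: fewer images are auto-labelled, but those that are carry higher-confidence predictions.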
"This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyse the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behaviour into 'big data' sciences," explained Jeff Clune, the senior author of the paper and Harris Associate Professor at the University of Wyoming.
"This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems."
Clune is also a senior research manager at Uber's Artificial Intelligence Labs.
Entitled 'Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning', the paper was co-written by Clune's PhD student Mohammad Sadegh Norouzzadeh, his former PhD student Anh Nguyen, and collaborators Margaret Kosmala, Ali Swanson, Meredith Palmer and Craig Packer.
This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the Zooniverse.org platform.
Snapshot Serengeti has deployed a large number of 'camera traps', also known as motion-sensor cameras, in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants.
The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowd-sourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labelled images produced in this manner by more than 50,000 human volunteers over several years.
"Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present," added Kosmala, another Snapshot Serengeti leader.
"We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects."
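The "more than eight years per 3 million images" estimate can be sanity-checked with back-of-envelope arithmetic. The per-image labelling time below is an assumption for illustration (each image typically receives labels from several volunteers), not a figure from the article.

```python
# Back-of-envelope check of the quoted labelling saving.
# Assumption (not from the article): each image receives roughly
# 20 seconds of aggregate volunteer effort across multiple labellers.
images = 3_000_000
seconds_per_image = 20                     # assumed aggregate labelling time
hours = images * seconds_per_image / 3600  # total volunteer hours
work_years = hours / (40 * 52)             # 40-hour weeks, 52 weeks/year
```

Under these assumed numbers the total comes out to roughly eight working years, consistent with the order of magnitude the researchers quote.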