Google has developed a massive network of computers that can learn to recognise content without human interaction.
The company said that its so-called 'neural network' combined 16,000 CPU cores to create an approximation of a brain. The network was then deployed to analyse still frames taken from YouTube videos.
Researchers said that after one week, the network, which was designed to resemble a newborn brain, was able to recognise particular shapes and figures in the videos.
"Our hypothesis was that it would learn to recognise common objects in those videos. Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of cats," wrote Google fellow Jeff Dean and researcher Andrew Ng.
"Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it 'discovered' what a cat looked like by itself from only unlabelled YouTube stills."
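The learning described in the quotes above is unsupervised: the system is never shown labels, only raw images, and it builds its own feature detectors from them. The sketch below illustrates the general idea with a toy autoencoder in NumPy. It is a minimal illustration only, not Google's system: the actual network was a far larger, sparse deep autoencoder, and the data, sizes, and training settings here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for unlabelled image data: 200 "patches" of 16 pixel values.
X = rng.random((200, 16))

n_hidden = 4                               # a handful of learned feature detectors
lr = 0.5                                   # learning rate (illustrative choice)
W1 = rng.normal(0, 0.1, (16, n_hidden))    # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 16))    # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    # Mean squared error between the input and its reconstruction.
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - X) ** 2)

initial = loss()
for _ in range(500):
    H = sigmoid(X @ W1)            # hidden activations: the learned features
    Y = sigmoid(H @ W2)            # reconstruction of the input
    err = Y - X
    # Backpropagate the reconstruction error (plain gradient descent).
    dY = err * Y * (1 - Y)
    dW2 = H.T @ dY
    dH = dY @ W2.T * H * (1 - H)
    dW1 = X.T @ dH
    W1 -= lr * dW1 / len(X)
    W2 -= lr * dW2 / len(X)

final = loss()
print(final < initial)  # reconstruction error falls as features are learned
```

Because the network is rewarded only for reconstructing its input, any structure it captures in the hidden layer is "discovered" rather than taught, which is the sense in which the Google network found cats on its own.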
The company said that the system, which is housed in one of its datacentre facilities, forms part of a larger effort to construct large-scale networks that approximate the function of the human brain. By that standard, the deployment remains extremely small, containing roughly one billion connections in total.
By comparison, the company estimates that the average adult human brain contains upwards of 100 trillion neural connections.
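Using the two figures quoted above, the gap works out to roughly five orders of magnitude:

```python
network_connections = 1_000_000_000       # ~1 billion, per Google
brain_connections = 100_000_000_000_000   # ~100 trillion, estimated adult brain

# The brain estimate exceeds the network by a factor of about 100,000.
ratio = brain_connections // network_connections
print(ratio)  # → 100000
```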
Google believes that such systems could dramatically improve current image recognition and classification techniques. Additionally, the researchers see the platform as having applications in areas such as speech recognition and language modelling.