IBM and the Massachusetts Institute of Technology (MIT) are to build a joint facility to research and develop vision systems that bring artificial intelligence-based visual and audio cognisance to robotics.
The ultimate aim is to develop AI for computers, robots and other systems that comprehend external stimuli and act accordingly.
IBM Research and MIT will establish the IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C) to work on the project. MIT will put its best brains in the lab, while IBM will contribute scientists and the firm's Watson machine learning platform.
The collaboration will bring together brain, cognitive and computer science specialists at MIT to conduct research in the field of unsupervised machine understanding of audio-visual streams of data, using insights from next-generation models of the brain to inform advances in machine vision.
"In a world where humans and machines are working together in increasingly collaborative relationships, breakthroughs in the field of machine vision will potentially help us live healthier and more productive lives," said Guru Banavar, chief scientist for cognitive computing and vice president at IBM Research.
"By bringing together brain researchers and computer scientists to solve this complex technical challenge we will advance the state-of-the-art in AI with our collaborators at MIT."
BM3C will be led by Professor James DiCarlo, head of the Department of Brain & Cognitive Sciences at MIT, supported by a team that will include graduate students from the Brain & Cognitive Sciences department and the MIT Computer Science and Artificial Intelligence Lab.
The collaboration is one of a number of partnerships that IBM has put together in the field of machine learning and AI, but bringing genuine audio-visual cognisance to AI would be a key breakthrough.
BM3C will address technical challenges in machine vision around pattern recognition and prediction that machines currently cannot accomplish on their own.
IBM and MIT are just two of many organisations bidding to bring AI-based cognisance of sight and sound to robotics. UK online retailer Ocado has talked about the need for such systems to automate the packing of supermarket items for delivery so that potatoes are packed before tomatoes, for example.
Ocado's goal is to blend vision systems with robotics to further automate its warehousing systems with the eventual aim of having them entirely automated.
Such success in the development of AI-based audio and visual systems for computers and robotics ought to herald the long-awaited age of leisure, provided the robots don't use their intelligence to rise up and free themselves from the tyranny of human subjugation.