Is Neuromorphic Computing the Future of Visual Perception in Machines?

By Abhishek Jadhav

Technology companies specializing in compact, energy-efficient edge computing devices are actively working to enable visual perception in machines for applications such as facial recognition. Visual perception typically demands significant computational resources, which makes it difficult to run on remote edge devices, precisely where reduced latency and the ability to make real-time decisions matter most.

Historically, the conventional hardware used for deep learning and machine learning, including GPUs, has proven energy-inefficient, especially when deployed near the network edge where data originates.

In response to this challenge, neuromorphic computing has emerged as a solution: a form of artificial intelligence inspired by the human brain's information-processing methods. It enables companies to build edge devices that stay energy-efficient even while handling demanding AI workloads.
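To make that brain-inspired style of computation concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the textbook building block of the spiking networks that neuromorphic chips implement. It is a generic model, not the design of any product discussed here, and the parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a textbook model of the
# spiking, event-driven computation neuromorphic chips are built around.
# Parameter values are arbitrary, chosen only for illustration.

def lif_neuron(input_current, threshold=1.0, leak=0.95, v_reset=0.0):
    """Integrate input over time, leak charge, and emit a spike (1)
    whenever the membrane potential crosses the threshold."""
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = v_reset           # reset after firing
        else:
            spikes.append(0)
    return spikes

# Example: a constant drive produces a regular spike train.
print(lif_neuron([0.3] * 20))
```

Because networks of such neurons communicate only when spikes occur, rather than on every clock tick, hardware built around them can stay largely idle between events, which is where the energy savings come from.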

BrainChip partners with VVDN to develop Edge Box based on neuromorphic technology

BrainChip, a company known for its neuromorphic processors, has partnered with VVDN Technologies, an electronics engineering and manufacturing firm, to create an Edge Box based on neuromorphic technology. The product is geared toward delivering advanced AI capabilities in diverse domains such as security surveillance, automotive, and industrial use cases.

According to BrainChip, the Edge Box is designed to enable customers to deploy edge artificial intelligence applications in a cost-effective manner. Organizations can leverage the power of AI on edge devices for monitoring and security applications across various industries, offering a significantly more efficient and effective alternative to traditional approaches.

The Edge Box is a compact device that can run AI models for tasks such as video analytics, facial recognition, and object detection. Built around the BrainChip Akida processor, known for its high performance, low power consumption, and scalable architecture, it is well suited to edge AI deployments.

“This portable and compact Edge box is a game-changer that enables customers to deploy AI applications cost-effectively with unprecedented speed and efficiency to proliferate the benefits of intelligent compute,” says Sean Hehir, chief executive officer at BrainChip.

Prophesee develops GenX320 event-based Metavision sensor for always-on area monitoring systems

Prophesee has unveiled the GenX320, an event-based Metavision sensor designed for easy integration into edge-embedded vision systems such as AI accelerators and edge systems-on-chip. Development focused on optimizing event data pre-processing and formatting, ensuring compatibility with standard data interfaces, and enabling low-latency connectivity to a range of processing platforms, including energy-efficient neuromorphic processors.

The company highlights specific use cases for this sensor, such as eye-tracking for human-machine interfaces, safety applications like driver monitoring systems (DMS), and emission detection. Additionally, it offers always-on capabilities for security and safety applications.

Prophesee's Metavision platform uses event-based vision, a departure from conventional methods of acquiring and processing visual data that takes its inspiration from the human visual system. By applying neuromorphic techniques, Prophesee gains efficiency and performance, improving safety, productivity, and the overall user experience across vision-enabled systems in consumer electronics, industrial settings, automotive applications, and more.

In contrast to traditional cameras, event sensors do not use one common acquisition rate (frame rate) for all pixels. Instead, each pixel independently determines when it samples data by reacting to changes in incident light levels. This approach offers several advantages: efficient power use, minimal latency and therefore faster response times, reduced data-processing demands, and high dynamic range. These qualities make event sensors well suited to security applications and beyond.
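To make the contrast with frame-based cameras concrete, the sketch below models a generic event pixel in Python: each pixel remembers the log intensity at its last event and fires only when the change since then exceeds a contrast threshold. This is the textbook event-camera model, not Prophesee's actual pipeline; the threshold and array sizes are illustrative.

```python
import numpy as np

# Generic event-pixel model: a pixel fires an event only when the log
# intensity changes by more than a contrast threshold since its last
# event -- there is no global frame rate. Illustrative only.

def generate_events(prev_log, frame, threshold=0.2):
    """Compare a new frame against each pixel's last-event level and
    return (y, x, polarity) events plus updated reference levels."""
    log_i = np.log(frame.astype(np.float64) + 1e-6)
    diff = log_i - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    prev_log[ys, xs] = log_i[ys, xs]              # reset only fired pixels
    return list(zip(ys.tolist(), xs.tolist(), polarity.tolist())), prev_log

# Example: a static scene emits nothing; one local brightness change
# produces a single event instead of a full frame of redundant pixels.
scene = np.full((4, 4), 100.0)
ref = np.log(scene + 1e-6)
scene[1, 2] = 180.0                               # something moved
events, ref = generate_events(ref, scene)
print(events)                                     # [(1, 2, 1)]
```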

“We have built on our foundation of commercial success in other application areas and developed this new event-based Metavision sensor to address the needs of edge system developers with a sensor that is easy to integrate, configure and optimize for multiple compelling use cases in motion and object detection, presence awareness, gesture recognition, eye tracking, and other high growth areas,” says Luca Verre, chief executive officer and co-founder at Prophesee. These other areas could include face biometrics.

SiLC Technologies introduces Eyeonic Vision Systems for advanced machine vision

SiLC Technologies has developed four distinct versions of its Eyeonic Vision System, each customized to enhance machine visual perception and optimized for applications that demand vision capabilities at different distances.

In contrast to traditional machine vision systems that typically rely on conventional cameras for capturing static images, SiLC’s Eyeonic Vision Systems represent a more comprehensive and dynamic solution.

The Eyeonic Vision System uses FMCW (frequency-modulated continuous-wave) LiDAR, a sensing technique that measures distances with laser light and gathers detailed environmental data, enabling the system to deliver precise information in real time. The company has built the FMCW LiDAR sensor as a chip-integrated solution, which forms the core technology of these vision systems.
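The ranging arithmetic behind FMCW LiDAR is standard: the laser's frequency is swept linearly, and the beat frequency between the outgoing chirp and its returning echo is proportional to the round-trip time. A minimal sketch of that calculation follows; the chirp parameters are illustrative and not taken from SiLC's specifications.

```python
# Standard FMCW ranging arithmetic: range is recovered from the beat
# frequency between the outgoing chirp and its echo.
#   R = c * f_beat * T_chirp / (2 * B)
# Chirp parameters below are illustrative, not SiLC's specifications.

C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_duration_s, bandwidth_hz):
    """Convert a measured beat frequency into a range estimate."""
    round_trip_time = f_beat_hz * chirp_duration_s / bandwidth_hz
    return C * round_trip_time / 2.0  # halve for one-way distance

# Example: a 10 us chirp sweeping 1 GHz; a 667 kHz beat puts the
# target roughly 1 m away.
print(f"{fmcw_range(667e3, 10e-6, 1e9):.2f} m")
```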

“Since the launch of the Eyeonic Vision System, our collaboration with various OEMs revealed distinct vision requirements, necessitating multiple versions of our solution,” says Mehdi Asghari, chief executive officer and founder at SiLC Technologies.

Market growth for neuromorphic technology

According to a report from Gartner, neuromorphic computing is one of four emerging technologies poised to reshape the industry landscape within three to eight years. The report highlights the substantial influence neuromorphic computing will have on existing product offerings and market dynamics.

“The impact is likely to be significant, though, as neuromorphic computing is expected to disrupt many of the current AI technology developments, delivering power savings and performance benefits not achievable with current generations of AI chips,” the report says.

Source: Biometric Update
