July 25, 2018

Talking Driverless Software With Perceptive Automata Co-Founder and CEO Sid Misra

Back in February, we published an article discussing the triumphs, challenges, and future of autonomous driving in Boston. We interviewed all kinds of experts on the issue, including Perceptive Automata Co-Founder and CTO Sam Anthony. While he had great insights on the topic, there wasn’t much the company could say about its driverless software—only that it was aiming to be “the next generation of AI.”

But that’s changed, and the veil has been lifted! On July 10, the Somerville-based company emerged from stealth, announcing $3M in seed funding from First Round Capital and Slow Ventures.

This is of course very exciting news, so we reached out to Co-Founder and CEO Sid Misra to learn more about their now-unveiled technology.

Alex Culafi (AC): Congratulations on exiting stealth mode! How does it feel?

Sid Misra (SM): It’s very exciting that the world finally gets to learn about Perceptive Automata and all that’s to come. Our technology will improve safety and rider experience in self-driving cars, and enable self-driving cars to “act” more like a human would.

They will be able to make judgments about the awareness and intent of pedestrians, cyclists, and other motorists, which will make future interactions between humans and self-driving cars not only possible, but safe and smooth too.

Perceptive Automata Co-Founder and CEO Sid Misra.

AC: What are some of the cool things you've been up to?

SM: The unique interdisciplinary expertise of our team has enabled us to build deep learning models trained using the techniques of behavioral science (including cognitive psychology, neuroscience, and psychophysics) to characterize the way humans evaluate other humans, and to give that information to an autonomous system.
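To make the behavioral-science training idea concrete, here is a minimal, hypothetical sketch: many human raters judge short clips (for example, "does this pedestrian intend to cross?"), and their graded responses are aggregated into soft training targets rather than hard yes/no labels. The function name, the ratings, and the aggregation scheme are all invented for illustration; they are not Perceptive Automata's actual pipeline.

```python
# Hypothetical sketch: turning many human judgments into a soft label.
# All names and numbers here are invented for illustration only.

from statistics import mean

def soft_label(ratings: list[float]) -> float:
    """Average per-rater judgments (each in [0, 1]) into one soft target."""
    clipped = [min(1.0, max(0.0, r)) for r in ratings]
    return mean(clipped)

# Five raters disagree about a borderline clip; the soft label preserves
# that uncertainty instead of forcing a binary answer.
clip_ratings = [0.9, 0.7, 0.8, 0.4, 0.6]
label = soft_label(clip_ratings)  # 0.68
```

Training against graded targets like this is one way a model can learn the ambiguity humans themselves perceive, rather than treating every pedestrian as simply "crossing" or "not crossing."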

We have spent the last year turning an early prototype into a modular, scalable platform that can be deployed on automotive-grade computers and works with sensors that are already on our customers’ vehicles. This progress has enabled us to successfully deploy our platform in our customers’ vehicles.

AC: Your website suggests that your software has the capability to change the game in the autonomous space. Tell me more about your software.

SM: Our software is solving the biggest remaining problem with autonomous vehicles: understanding humans.

If an autonomous vehicle can understand a human’s state of mind, it can predict what that human might do next, and observe how that human’s state of mind changes as the vehicle slows down or changes lanes.

Our system is able to recognize human behavior and calculate a person’s intent and awareness based on subtleties like posture, orientation, and head movement. The car then acts in accordance with these calculations, and this enables self-driving cars to safely interact with humans.
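The cues Misra mentions (posture, orientation, head movement) can be illustrated with a toy scoring function. This is a hedged sketch of the general idea only: the feature names, weights, and thresholds below are made up for illustration, and a real system would learn such a mapping from human judgments rather than hand-code it.

```python
# Hypothetical sketch: scoring a pedestrian's crossing intent from
# simple pose cues. Illustrative only; not Perceptive Automata's model.

from dataclasses import dataclass

@dataclass
class PoseCues:
    head_yaw_deg: float   # 0 = facing the vehicle, 90 = facing away
    body_yaw_deg: float   # 0 = body oriented toward the road
    is_walking: bool      # gait detected vs. standing still

def crossing_intent_score(cues: PoseCues) -> float:
    """Return a score in [0, 1]; higher = more likely to step into the road.

    The weights are invented for this sketch; a learned model would fit
    them to aggregated human judgments of awareness and intent.
    """
    head = max(0.0, 1.0 - cues.head_yaw_deg / 90.0)  # looking toward traffic
    body = max(0.0, 1.0 - cues.body_yaw_deg / 90.0)  # turned toward the road
    gait = 0.3 if cues.is_walking else 0.0
    return min(1.0, 0.4 * head + 0.3 * body + gait)

# A pedestrian facing the road and walking scores higher than one
# facing away and standing still.
approaching = PoseCues(head_yaw_deg=10, body_yaw_deg=5, is_walking=True)
distracted = PoseCues(head_yaw_deg=85, body_yaw_deg=80, is_walking=False)
```

A downstream planner could then condition its behavior (slowing, yielding, proceeding) on a score like this, which is the kind of interaction the interview describes.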

AC: What are some of the problems your software is solving?

SM: Our software is bridging the gap between humans and robotic systems. Any robotic system operating in human environments must understand human behavior and predict what people might do next in order to operate safely and smoothly. The problem is most critical and urgent for autonomous vehicles, the most acute case of large, fast-moving robots in human-dominated areas.

Most importantly, we’ve taken a behavioral science approach to solving this problem. In order for pedestrians and AVs to safely share the road, it is essential that we have a way for AVs to understand human behavior.

AC: How does it compare with other AV software in the space?

SM: Other AV software operates at only a basic machine learning level: the software can recognize when there is a pedestrian, a tree, or another car, but human behavior is much more complicated than that. We’ve come up with a way for AVs to not only recognize humans, but to understand and predict their behavior so the vehicle can safely interact with pedestrians.

AC: What has the testing process been like for you, and how has it been going?

SM: Incredibly exciting, but similar to any startup that’s creating cutting-edge technology: there are many ups and downs.

That said, our team is incredible and we have received very positive feedback from potential partners.

AC: Where do you go from here? What's next for Perceptive Automata?

SM: Currently, we’re very busy working with our customers to deploy our technology into autonomous vehicles and next-generation driver assistance systems for human-driven vehicles. Our longer-term vision is to enable any robotic system to understand and collaborate with humans. There are a lot of other industries where machines are becoming heavily involved today—think warehouses, factories, and agriculture. Software that can read and understand humans will be very helpful in all of these verticals where machines and humans intersect.

We are partnered with and plan to continue partnering with auto makers (OEMs), automotive suppliers, and mobility companies that will integrate our software into their self-driving OS and next-generation driver assistance packages.

We’re also partnering strategically for data sharing with sensor companies and mapping companies.

AC: Anything else you want to add on your company or the future of AV?

SM: This is the first time that an AI has been built using behavioral science. What this means is that the machine learning models underlying our software platform are trained using the techniques of behavioral science (including cognitive psychology, neuroscience, and psychophysics) to characterize the way humans evaluate other humans, and to give that information to an autonomous system. All other approaches to AI and machine learning have taken a geometry- or trajectory-based approach, so the fact that we’ve taken an entirely different approach that’s producing demonstrable and actionable results for our customers is extremely exciting.

Some say that maybe we’ll have to separate cars from pedestrians in the future and we’ll have “pedestrian pathways.” We don’t want to see that version of the world—we like how interactive our cities are.

AC: When do we get to ride in a car with your software?

SM: Hopefully soon, but it’s important to clarify that we’re not building the car, we’re providing a large critical capability around human behavior understanding that will enable our customers to deploy their AVs safely and quickly. That said, we’re open to partnering with companies that are building self-driving cars because they will all need this technology.

Alexander Culafi is a Staff Writer at VentureFizz. He also edits and produces The VentureFizz Podcast. Follow him on Twitter: @culafia.