Cameras steering trucks towards autonomy & improved tech training

Updated Mar 22, 2019

Did you walk through my holographic truck this week at the Technology & Maintenance Council exhibition in Atlanta?

If so, no worries; you obviously didn't realize you were traipsing through cutting-edge training technology courtesy of Design Interactive.

Wearing DI's Augmentor headset and seeing a demo of their new augmented- and virtual-reality mobile app made me think again about the advancements in camera technology and the promises they hold.

During my recent interview with Pronto founders Anthony Levandowski and Ognen Stojanovski, I raised a concern about the ability of autonomous systems in Class 8 trucks to distinguish between human beings and objects, like a log, on the side of the road. (Credit goes to Martin Daum, head of Daimler Trucks, for first bringing that dilemma to my attention a few years ago.)

"That distinction is almost there. We can do that with a very high level of precision and recall," replied Levandowski, whose Level 2 system is camera-based and supported by radar.

Distinguishing between people and objects is not so much the problem, Levandowski said, as determining whether that person is walking along the side of the road or attempting to cross it.

Autonomous truck technology developer TuSimple is also using camera-based systems to analyze driving conditions. Lidar simply can't compete with the scanning range of cameras, and that greater range buys additional time for more precise and critical computational analysis.

And camera technology is advancing quickly, according to Matt Johnston, division head of commercial solutions for Design Interactive. While using his company's new Augmentor app, a technician simply aims his smartphone's camera at a tag (similar to a QR code) that's been affixed to a component on the truck, and the phone will then bring up information specific to that part, including text, audio and video. Diagnostic tips are revealed as well.
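Conceptually, that tag-to-information step is just a keyed lookup. Here's a minimal sketch of the idea; the tag ID, the record fields and the in-memory dictionary standing in for the app's database are all hypothetical, not Augmentor's actual API.

```python
# Hypothetical sketch: a scanned tag ID keys into a parts database.
# "TAG-0042" and PART_DB are illustrative stand-ins, not real Augmentor data.

PART_DB = {
    "TAG-0042": {
        "part": "power steering reservoir",
        "media": ["text", "audio", "video"],
        "diagnostic_tip": "Check fluid level and inspect for cracks near the cap.",
    },
}

def lookup_part(tag_id: str) -> dict:
    """Return the training/diagnostic record for a scanned tag, if known."""
    record = PART_DB.get(tag_id)
    if record is None:
        raise KeyError(f"No record for tag {tag_id!r}")
    return record

print(lookup_part("TAG-0042")["part"])  # power steering reservoir
```

The scan itself would be handled by the phone's camera stack; the point here is only that each tag resolves to one structured record.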

Of course I had to ask Johnston if the camera tech was getting closer to simply recognizing a component by its distinguishing characteristics (a power steering reservoir obviously looks much different from a radiator). He quickly smiled and affirmed my suspicions.

It makes complete sense. Camera-based biometrics has been at work for a while. If cameras can handle facial recognition, it certainly makes sense for the technology to transition to automotive applications, where it could distinguish between an A/C compressor and an alternator and look for more gateway clues on those components, such as model numbers. Those clues would then unlock database access for part specs, diagnostics, removal steps, part availability and part ordering (with the blessing of the service writer, of course).
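That chain — recognize the component, read a model number off it, then hit a parts database — can be sketched as a simple pipeline. Every function below is a labeled stand-in for a real vision model, OCR engine or database, invented here purely to show the flow.

```python
# Hypothetical pipeline from the article: vision classification, then OCR,
# then a database lookup. All three functions are stubs, not real APIs.

def classify_component(image: bytes) -> str:
    """Stand-in for a vision model that tells an A/C compressor
    from an alternator by its distinguishing shape."""
    return "alternator"  # stubbed prediction

def read_model_number(image: bytes) -> str:
    """Stand-in for OCR on the component's data plate."""
    return "ALT-1234"  # stubbed model number

def fetch_part_record(model_number: str) -> dict:
    """Stand-in for a parts-database query keyed on model number."""
    return {
        "model": model_number,
        "specs": "...",
        "diagnostics": "...",
        "removal_steps": "...",
        # ordering would still require service-writer approval
        "availability": "in stock",
    }

frame = b""  # placeholder for a camera frame
component = classify_component(frame)
record = fetch_part_record(read_model_number(frame))
print(component, record["model"])
```

The design point is that the camera only has to produce two strings — a component class and a model number — and everything downstream is conventional database work.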

For how-to YouTube fans like me who frequently comb the site for tips on just about anything, camera-based VR/AR holds amazing promise. Instead of getting greasy fingerprints on my pricey service manuals and keyboard, I could strap on a VR headset and get to work. But there's plenty of work to be done.

When I put on DI's headset to see that holographic truck, I was undeniably impressed. As people on the exhibit floor unwittingly walked through the large, white translucent display, I slowly eased up for a closer look. Components were very rudimentary in design, but you could definitely see the potential. More data-hungry designs could be developed, Johnston said, which would reveal more intricate details.

Nonetheless, camera technology cannot compete just yet with the human eye.

"Martin was right—identifying people on the roadway is difficult, but not so much 'Is it a box?' or 'Is it a person?' but what is that person going to do in the next couple of seconds," Levandowski said.

Such nuances can be difficult enough for human beings to discern, let alone artificial intelligence. Still, I can't help but think about Kasparov and Deep Blue. There finally did come a time, roughly 20 years ago, when the famous chess master was defeated by IBM's celebrated machine. As a young college student at the time, I remember counting that as a strange but revelatory moment that held important implications for the future. Framed in today's discussion about developing AI, however, a chess match assumes an adversarial posture driven by self-preservation and, hopefully, victory. Those goals simply cannot extend to a person walking alongside a busy road. Body language cues cannot always be relied upon either, which brings me back to what one of trucking's best and brightest had to say about Level 5 autonomy.

“I still think we’re a long way away from a truck driving itself unmanned on any kind of road,” Levandowski said.

It’s hard to disagree.