videantis – processors for deep learning, computer vision and video coding

September newsletter

videantis processor platform adopted for TEMPO neuromorphic edge AI chip

In July we announced that TEMPO is adopting our multi-core processor platform and toolflow for its neuromorphic mixed-signal edge AI chip. The development is part of the European TEMPO project and targets several autonomous driving use cases. TEMPO stands for “Technology & hardware for nEuromorphic coMPuting” and is an ECSEL JU innovation project supported by the EU Horizon 2020 programme.

Together with the Fraunhofer Institute for Integrated Circuits IIS, Infineon, Valeo, InnoSenT and other leading European companies and universities, videantis will develop the neuromorphic artificial intelligence ASIC platform and software development tools specifically tailored for energy-efficient edge processing for intelligent autonomous vehicles.

Read full press release

videantis appoints Stephan Janouch as marketing director

Earlier in September we announced the appointment of Stephan Janouch as our new Marketing Director. Janouch, who brings with him more than two decades of diverse experience from various roles in the automotive electronics industry, will be responsible for all marketing activities at videantis. “The appointment of Stephan as our new Marketing Director will help us to increase industry awareness of our advanced processing technology as well as of videantis as a company,” says Dr. Hans-Joachim Stolberg, CEO of videantis GmbH. “It is an exciting time to join videantis. The company is poised for growth with its leading AI processing technology helping the industry to build smarter and more efficient vision systems,” adds Janouch.

Read full press release

Cool AI chips are green

When chips get hot, thermal management quickly becomes difficult. The chips need more complex power grids, more expensive packages, and active cooling fans, for instance. Lots of GPU-based systems even need water cooling. But besides these implications for form factor and design complexity, there’s another reason to keep power consumption low: it’s good for our planet, since most of our energy comes from fossil fuels that release carbon dioxide into the atmosphere. “But chips just consume a few Watts,” I hear you say. In this article, we take a deeper look, do a quick quantitative analysis, and see what the impact really is.

The result? We save at least 1 ton of CO2 per vehicle over its lifetime.

Read the full analysis

Industry news

Deloitte’s 2020 global automotive consumer studies
What trends and disruptive technologies might drive the automotive industry in 2020? Explore the data and insights from the 11th year of Deloitte’s Global Automotive Consumer Study (fielded in fall 2019) and discover how 35,000 consumers in 20 countries are feeling about autonomy, electric and connected vehicles, ridesharing, and more. View the reports

Ship with no crew to sail across the Atlantic
A full-size, fully autonomous research ship is to make one of the world’s first autonomous transatlantic voyages. Promare, a non-profit marine research organization, has worked with IBM and a global consortium of partners and scientific organizations to build the Mayflower Autonomous Ship (MAS). Launching from Plymouth in the UK on 16 September 2020, the ship will travel to Plymouth, Massachusetts, after spending six months gathering data about the state of the ocean. Watch the video

Upcoming events

All our upcoming face-to-face events have been cancelled. Schedule an online meeting with us by sending an email to sales@videantis.com. We’re always interested in discussing your automotive and other sensing solutions and visual compute SoC design ideas and challenges. We look forward to talking with you!

Cool AI chips are green

When chips get hot, thermal management quickly becomes difficult. The chips need more complex power grids, more expensive packages, and active cooling fans, for instance. Lots of GPU-based systems even need water cooling. And even if you can get away with passive cooling, a little heat still greatly affects the housing and mechanical design. There needs to be enough surface material to dissipate the heat, making the final device larger, heavier, and more expensive. In addition, if your power comes from a battery, that battery will need more capacity too, again adding cost and bulk. That’s a lot of reasons to keep power consumption low.

And chips do burn a lot of power, especially when they’re running compute-intensive deep learning workloads. Deep learning algorithms require many tera-ops per second and move massive amounts of data. At videantis, we designed our processor architecture from the ground up to consume as little energy as possible. Optimizing for low power has always been in our DNA, with our history of designing for battery-operated mobile phone applications.

But besides these implications for form factor and design complexity, there’s another reason to keep power consumption low: it’s good for our planet, since most of our energy comes from fossil fuels that release carbon dioxide into the atmosphere. “But chips just consume a few Watts,” I hear you say. “That’s nothing compared to the energy that our heating, our cars, and manufacturing industries use!” Well, let’s take a deeper look, do a quick quantitative analysis, and see what the impact really is. We’ll focus on the automotive use case, since we’ve been doing quite a bit of work there and have millions of vehicles with videantis technology inside already on the road.

Let’s first look at how much gas is needed to run our deep learning algorithms. A typical lifetime for a new vehicle is roughly 300,000 km. At an average speed of 50 km per hour, this means such a car is in operation for 6,000 hours, so every Watt a chip uses adds up to 6 kWh over the lifetime of the vehicle. A liter of gasoline holds 9.5 kWh in theory, but since a typical engine only has a thermal efficiency of about 25%, each liter consumed delivers about 2.4 kWh of useful work. Each Watt a chip burns therefore requires 2.5 liters of gas (6 kWh / 2.4 kWh/l). For a typical GPU-based ADAS system that consumes 250 W, this adds up to about 630 liters of gas.

In addition, the weight of the compute module also contributes to gas usage. Each kilogram the vehicle has to carry around over its lifetime translates into another 12 liters of gas burned; for a typical system that weighs 2.5 kg, this amounts to 30 liters of fuel. In total, these big ADAS systems require about 660 liters of gas over their lifetime in the vehicle. Since each liter of gasoline turns into 2.34 kg of CO2 during combustion, the total comes to about 1.5 tons of CO2 per vehicle.

The vision processing efficiency of the videantis processor solution is much higher than that of the systems on the market today. Benchmarking has shown that systems based on the videantis processor architecture typically provide 50x better deep learning and visual computing performance per Watt, and a weight reduction of about 80%. This results in savings of at least 1 ton of CO2 per vehicle over its lifetime.

There are currently about 300 million vehicles on the road in the EU. If all those vehicles used the videantis processing architecture instead of systems similar to NVIDIA’s or Tesla’s, then all those vehicles combined would save about 300 million tons of CO2 over their lifespan. It takes about 2 million acres of trees to offset such a carbon footprint, a forest the size of a small country. It’s another good reason for our engineers at videantis to optimize for low power.
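
For readers who want to check the arithmetic, here is a minimal Python sketch of the back-of-envelope calculation above. The 250 W and 2.5 kg figures are the illustrative assumptions used in this article, not measurements of any specific product.

```python
# Back-of-envelope CO2 estimate for an in-vehicle compute module.
LIFETIME_KM = 300_000        # typical vehicle lifetime
AVG_SPEED_KMH = 50           # average speed, giving 6,000 hours of operation
KWH_PER_LITER = 9.5          # theoretical energy content of gasoline
ENGINE_EFFICIENCY = 0.25     # thermal efficiency of a typical engine
LITERS_PER_KG = 12           # extra fuel per kilogram carried over the lifetime
CO2_KG_PER_LITER = 2.34      # CO2 released per liter of gasoline burned

def lifetime_co2_kg(power_watts: float, weight_kg: float) -> float:
    hours = LIFETIME_KM / AVG_SPEED_KMH                       # 6,000 h
    kwh = power_watts / 1000 * hours                          # 6 kWh per Watt
    liters_power = kwh / (KWH_PER_LITER * ENGINE_EFFICIENCY)  # fuel for compute
    liters_weight = weight_kg * LITERS_PER_KG                 # fuel for weight
    return (liters_power + liters_weight) * CO2_KG_PER_LITER

# The GPU-based ADAS system assumed above: 250 W and 2.5 kg.
print(f"{lifetime_co2_kg(250, 2.5):.0f} kg CO2")  # ~1550 kg, about 1.5 tons
```

Running the sketch reproduces the roughly 1.5 tons per vehicle quoted above; plugging in a system that is 50x more power-efficient and 80% lighter drops the remaining footprint to well under 100 kg.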

New marketing director at videantis

videantis appoints Stephan Janouch as marketing director

September 17, 2020, Hannover, Germany – videantis GmbH, a leading supplier of deep learning, computer vision and video coding solutions, today announced the appointment of Stephan Janouch as the new Marketing Director.

Janouch, who brings with him more than two decades of diverse experience from various roles in the automotive electronics industry, will be responsible for all marketing activities at videantis.
“The appointment of Stephan as our new Marketing Director will help us to increase industry awareness of our advanced processing technology as well as of videantis as a company,” says Dr. Hans-Joachim Stolberg, CEO of videantis GmbH.
“It is an exciting time to join videantis. The company is poised for growth with its leading AI processing technology helping the industry to build smarter and more efficient vision systems,” adds Janouch.

Before joining videantis GmbH, Janouch held positions in engineering, marketing, and business development, and also worked as an editor-in-chief.

About videantis

Headquartered in Hannover, Germany, videantis is a one-stop deep learning, computer vision and video processor IP provider, delivering flexible computer vision, imaging and multi-standard HW/SW video coding solutions for automotive, mobile, consumer, and embedded markets. Based on a unified processor platform approach that is licensed to chip manufacturers, videantis provides tailored solutions to meet the specific needs of its customers. With deep camera and video application know-how and strong SoC design and system architecture expertise, videantis serves a worldwide customer base with a diverse range of target applications, such as advanced driver assistance systems and autonomous driving, mobile phones, AR/VR, IoT, gesture interfacing, computational photography, in-car infotainment, and over-the-top TV. videantis has been recognized with the Red Herring Award and multiple Deloitte Technology Fast 50 Awards as one of the fastest growing technology companies in Germany.

For more information, please visit www.videantis.com.

For more information please contact:

Stephan Janouch, Marketing Director
stephan.janouch@videantis.com
Phone: +49 (511) 51 522 335

videantis GmbH
Rotermundstraße 11
30165 Hannover
Germany
www.videantis.com

Supporting material

Image Stephan Janouch

videantis processor adopted for TEMPO AI chip

videantis processor platform adopted for TEMPO neuromorphic edge AI chip

July 7, 2020, Hannover, Germany – videantis GmbH, a leading supplier of deep learning, computer vision and video coding solutions, today announced the adoption of its next-generation digital AI multi-core processor platform and toolflow for a neuromorphic mixed-signal edge AI chip. The development is part of the European TEMPO project and targets several autonomous driving use cases. TEMPO stands for “Technology & hardware for nEuromorphic coMPuting” and is an ECSEL JU innovation project supported by the EU Horizon 2020 programme.

Together with the Fraunhofer Institute for Integrated Circuits IIS, Infineon, Valeo, InnoSenT and other leading European companies and universities, videantis will develop a neuromorphic artificial intelligence ASIC platform and software development tools specifically tailored for energy-efficient edge processing for intelligent autonomous vehicles.

To this end, videantis will integrate its highly efficient and high-performance next-generation multi-core processor solution into a neuromorphic AI chip platform that processes LiDAR and radar sensor data for multiple autonomous driving use cases using AI-based methods. The solution combines deep decompression technology with a digital deep neural network (DNN) accelerator that remains software-programmable to easily adapt to different use cases of the chip.

videantis will also support this chip with the v-CNNDesigner tool flow, which automates the distribution and mapping of AI workloads onto the parallel architecture. v-CNNDesigner allows developers to map their neural networks onto the videantis processors without manual intervention, removing the error-prone and complex programming task of finding the best quantization and parallelization strategies, data organization, and synchronization.
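
v-CNNDesigner itself is a proprietary videantis tool, so the sketch below is purely illustrative: it shows symmetric per-tensor int8 weight quantization, one of the choices such a tool automates. The function name and the layer shape are hypothetical and not part of any videantis API.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization (illustrative sketch only).

    Returns quantized weights q and a scale such that weights ~= scale * q.
    """
    scale = np.abs(weights).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

# Hypothetical convolution layer: 64 filters of 3x3 over 3 input channels.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
q, scale = quantize_int8(w)
print(f"max quantization error: {np.abs(w - scale * q.astype(np.float32)).max():.5f}")
```

A production tool would additionally search per-layer or per-channel scales and weigh accuracy loss against memory and bandwidth savings, exactly the kind of trade-off that is error-prone to get right by hand.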

The videantis technology will be integrated and demonstrated together with the latest product innovations from the other research partners.

About TEMPO

TEMPO (Technology & hardware for nEuromorphic coMPuting) is a European innovation project. This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 826655. The JU receives support from the European Union’s Horizon 2020 research and innovation programme, and from Belgium, France, Germany, the Netherlands, and Switzerland. TEMPO was kicked off on the 1st of April 2019 and has a duration of three years. The consortium of this ambitious project consists of no fewer than nineteen members. Imec takes the lead as the sole Belgian consortium partner. The other consortium members are, for France: CEA-LETI, ST-Microelectronics Crolles, ST-Microelectronics Grenoble, Thales Alenia Space and Valeo. For Germany: Bosch, Fraunhofer EMFT, Fraunhofer IIS, Fraunhofer IPMS, Infineon, Innosent, TU Dresden and videantis. For the Netherlands: imec the Netherlands, Philips Electronics and Philips Medical Systems. For Switzerland: aiCTX and the University of Zürich.

For more information, please visit www.ecsel.eu.

About videantis

Headquartered in Hannover, Germany, videantis is a one-stop deep learning, computer vision and video processor IP provider, delivering flexible computer vision, imaging and multi-standard HW/SW video coding solutions for automotive, mobile, consumer, and embedded markets. Based on a unified processor platform approach that is licensed to chip manufacturers, videantis provides tailored solutions to meet the specific needs of its customers. With deep camera and video application know-how and strong SoC design and system architecture expertise, videantis serves a worldwide customer base with a diverse range of target applications, such as advanced driver assistance systems and autonomous driving, mobile phones, AR/VR, IoT, gesture interfacing, computational photography, in-car infotainment, and over-the-top TV. videantis has been recognized with the Red Herring Award and multiple Deloitte Technology Fast 50 Awards as one of the fastest growing technology companies in Germany.

For more information, please visit www.videantis.com.

For more information please contact:

Marco Jacobs, VP Marketing
marco.jacobs@videantis.com
Phone: +49 (511) 51 522 330

Figures are available online; please see www.videantis.com/news.

videantis GmbH
Rotermundstraße 11
30165 Hannover
Germany
www.videantis.com

Supporting material

videantis logo
TEMPO logo

May newsletter

How many cameras do we need? 29 says Waymo.

In 2016 we wrote an article titled “what are all these automotive cameras doing?” about how and where we saw automotive cameras being integrated into consumer vehicles. At the time, we counted 12 cameras in the vehicle. The latest Tesla vehicles actually come pretty close. A new Tesla today has 9 cameras on board: 3 front cameras with different fields of view, 4 on the sides, 1 on the rear, and then there’s a new in-cabin camera just above the rear-view mirror.

Giving a car about 10 eyes seems like it should be enough. After all, humans can safely operate a vehicle with just two eyes. And even if you only have one functioning eye, you can still legally operate a vehicle in most countries without needing extra measures. But recently Waymo announced that their latest vehicles have no less than 29 cameras on board.

Continue reading

Updated overview of ADAS acronyms

To help you navigate the many acronyms that are in use in the automotive industry, we’ve compiled a helpful list of definitions below. We’ve posted an overview of acronyms before, but we’ve extended and refined the list again.
Recently, SAE International and partner organizations such as the AAA, J.D. Power, and Consumer Reports released a document that presents a common naming for ADAS functions. The list is meant to help reduce driver confusion and define the functions of ADAS in a consistent manner. We have labelled those terms in our list with an “(SAE official term)” marker.

Please don’t hesitate to contact us if you have additional ones you’d like to see added to this list.

See the full list

Yes, we are still expanding!

Looking for a position where you can learn a lot and make a big impact at the same time?

We’ve added several people to our team in the past few months, but we’re still looking for stellar hardware and software engineers to join us at our headquarters in Hannover, Germany. Do you think you can keep up with the pace of the rest of our team? Interested in taking on a challenge? Do you have experience in deep learning, embedded processing, low-power parallel architectures, or performance optimization? We’d love to hear from you.

See our open positions and shoot us a message.

Industry news

Continental and Pioneer Partner for a New User Experience
Continental and Pioneer have signed a strategic cooperation agreement. With their integrated infotainment solution, the two partners aim to create a holistic user experience that is specially aimed at the Asian market. As part of the agreement, Continental integrates Pioneer’s entire infotainment subdomain into its high-performance computer for vehicle cockpits.
Read press release

Computer vision research continues
The ECCV and CVPR conferences are going ahead as 100% online events. CVPR, the largest conference of its kind, unveils the latest research spanning the fields of computer vision, deep learning, artificial intelligence, image compression, pattern analysis, and beyond. This year’s conference convenes from 14-19 June. The European Conference on Computer Vision (ECCV), running from 23-28 August, is the top European conference in the image analysis area. More about CVPR and ECCV.

Upcoming events

All our upcoming face-to-face events have been cancelled. Schedule an online meeting with us by sending an email to sales@videantis.com. We’re always interested in discussing your automotive sensing solutions and visual compute SoC design ideas and challenges. We look forward to talking with you!

Was this newsletter forwarded to you and you’d like to subscribe? Click here.

ADAS acronyms (updated)

Overview of ADAS technology acronyms (updated)

The field of automotive advanced driver assistance systems (ADAS) is still growing rapidly. These systems make our vehicles safer and more comfortable to drive. Many are already on the market, several of which include processors and technologies developed by videantis. To help you navigate the many acronyms that are in use in this industry, we’ve compiled a helpful list of definitions below. We’ve posted a list of acronyms before, but we’ve extended and refined the list again.

Recently, SAE International and partner organizations such as the AAA, J.D. Power, and Consumer Reports released a document that presents a common naming for ADAS functions. The list is meant to help reduce driver confusion and define the functions of ADAS in a consistent manner. We have labelled those terms below with an “(SAE official term)” marker.

Please don’t hesitate to contact us if you have additional ones you’d like to see added to this list.

ACC – Adaptive Cruise Control (SAE official term)

Cruise control system that automatically adapts speed to maintain a safe distance from vehicles in front.

ADA – Active Driving Assistance (SAE official term)

Provides steering and brake/acceleration support to the driver at the same time. The driver must constantly supervise this support feature and maintain responsibility for driving.

AD – Autonomous Driving

The situation where a vehicle is self-driving, where there is no human driver required to take control.

AV – Autonomous Vehicle

A self-driving vehicle in which human drivers are never required to take control.

ADAS – Advanced Driver Assistance System

An electronic system that aids the driver for a safer and more comfortable driving experience. Often based on camera technology, but can also include other sensors like radar, laser, or ultrasound.

AEB – Automatic Emergency Braking (SAE official term)
Autonomous Emergency Braking, Active Emergency Braking

Automatic Emergency Braking monitors the proximity of vehicles in front, or behind the vehicle when it is in reverse, detecting situations where a collision is imminent. Braking is then automatically applied to avoid the collision or mitigate its effects.

AES – Autonomous Emergency Steering (SAE official term)

A system that automatically steers the vehicle to help avoid a collision, usually combined with an automatic emergency braking system.

AFL – Adaptive Forward Lighting, AFLS – Adaptive Front Lighting System

System that automatically turns the headlight beam to the right or left depending on the vehicle’s direction in curves.

AHB – Automatic High Beams (SAE official term)
AHBC – Adaptive High Beam Control

Adaptive High Beam Control detects oncoming traffic and vehicles in front, automatically switching the headlamp beam between high and low.

Also known as Adaptive Light Control.

ALC – Adaptive Light Control

Adaptive Light Control detects oncoming traffic and vehicles in front, automatically switching the headlamp beam between high and low.

Also known as Adaptive High Beam Control.

ALK – Autonomous Lane Keeping

Lane Keeping Assist combines a forward-facing camera to detect lane markings with an electric steering system, keeping the vehicle in the center of the lane.

See also Lane Keeping Assist.

ANV – Automotive Night Vision

Automotive Night Vision captures images using a thermal camera or active infrared lighting and presents them on a dashboard display. This increases the driver’s perception and viewing distance during nighttime.

Also known as Night View Assist.

APA – Active Parking Assistance (SAE official term)
APS – Automatic Parking System

Automatic Parking Systems are designed to help a driver park. Some perform the entire job automatically, while others simply provide advice so that the driver knows when to turn the steering wheel and when to stop.

See also Intelligent Parking Assist and Parking Assist.

BSW – Blind Spot Warning (SAE official term)
BSD – Blind Spot Detection, BSA – Blind Spot Assist, BSM – Blind Spot Monitoring, BLIS – Blind spot Indication System, SBSA – Side Blind Spot Alert

Blind Spot Detection systems provide vital information about the vehicle’s blind spots, areas that cannot be seen easily by the driver. Some of these systems will sound an alarm if they sense the presence of an object within a blind spot; others include cameras that transmit images to a display in the dashboard.

See also Lane Change Assist.

BC – Backup Camera (SAE official term)

A camera that’s mounted in the rear of the vehicle, facing backward. The live view that’s captured is presented on a display for the driver in case the vehicle is in reverse.

BOP – Back-over Protection, Back-over Prevention

A back-over protection or prevention system can combine both ultrasonic and rear-view camera technologies to increase safety while backing up, ensuring the driver doesn’t hit a pedestrian, vehicle or other object.

CAS – Collision Avoidance System

Collision avoidance systems use a variety of sensors to determine whether a vehicle is in danger of colliding with another object. These systems sense the proximity of other vehicles, pedestrians, or other objects on the road. When the vehicle is in danger of colliding with another object, the collision avoidance system will warn the driver and take preventive actions, such as precharging the brakes, applying tension to the seat belts, or taking over steering.

Similar to Crash Imminent Braking or Collision Detection Warning.

CDW – Collision Detection Warning

Collision Detection Warning systems use a variety of sensors to determine whether a vehicle is in danger of colliding with another object. These systems sense the proximity of other vehicles, pedestrians, or other objects on the road. When the vehicle is in danger of colliding with another object, the system will warn the driver and take preventive actions, such as precharging the brakes, applying tension to the seat belts, or taking over steering.

Similar to Crash Imminent Braking or Collision Avoidance Systems.

CIB – Crash Imminent Braking, Collision Imminent Braking

CIB systems automatically apply the brakes in a crash imminent situation if the driver does not respond to warnings.

Similar to Collision Detection Warning or Collision Avoidance Systems.

CMS – Camera Monitor System, Camera Mirror System

A system that adds monitors, or displays, to the car, presenting the view of externally mounted cameras. For instance rear view cameras or mirror replacement cameras that remove the need for left, right, or rear-view mirrors, and present a better view of the vehicle’s surroundings.

C-NCAP – China New Car Assessment Programme

Chinese car safety assessment program. It is primarily modeled after safety standards established by Euro NCAP and is run by the China Automotive Technology and Research Center.

CRA – Crossroad Alert

System that lets the driver know about upcoming roads crossing the current road, identified for instance by stop signs.

CTA – Cross-Traffic Alert

These systems let you know if you’re about to run into oncoming cross traffic. Multiple sensors or wide-angle cameras located near the front or rear of the vehicle detect traffic that comes from the side, typically in parking lot situations.

See also Rear Cross-Traffic Alert.

DBS – Dynamic Brake Support

Supplements the driver’s braking input if the driver isn’t applying sufficient braking to avoid a rear-end crash.

See also Active Emergency Braking.

DM – Driver Monitoring (SAE official term)
DDW – Drowsy Driver Warning, DFW – Driver Fatigue Warning, DDD – Driver Drowsiness Detection, DMS – Driver Monitoring System, DAS – Driver Alert System

Driver drowsiness or awareness detection systems use cameras or other sensors to determine if a driver’s attention is still on the road and on operating the vehicle safely. Most systems track eye blinking rates and gaze direction. Some of these systems look for the driver’s head to nod in a telltale motion that indicates sleepiness.

EVWS – Electric Vehicle Warning Sound

A system that makes sounds designed to alert pedestrians to the presence of electric drive vehicles that make very little noise.

EDA – Emergency Driver Assistant

A system that monitors driver behavior. If the system concludes that the driver is no longer able to safely drive the vehicle, the car takes control of the brakes and the steering to bring the vehicle to a stop.

Euro NCAP – New Car Assessment Programme

The European New Car Assessment Programme, the European organization that defines car safety performance assessment programs.

FCW – Forward Collision Warning (SAE official term)
FCWS – Forward Collision Warning System, FCA – Forward Collision Avoidance

Forward Collision Warning systems use a variety of sensors to determine whether a vehicle is in danger of colliding with another object. These systems sense the proximity of other vehicles, pedestrians, or other objects on the road. When the vehicle is in danger of colliding with another object, the system will warn the driver and take preventive actions, such as precharging the brakes, applying tension to the seat belts, or taking over steering.

FOV – Field of View

Describes the angular extent of a given scene that is imaged by a camera, typically in degrees. For instance, a 180 degree FOV camera has a very wide angle lens that can capture the full side of the vehicle.
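
As a rough illustration (assuming an ideal pinhole camera, which very wide-angle automotive fisheye lenses only approximate), the horizontal FOV follows directly from the sensor width and the focal length; the numbers below are made up for the example.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a 6.4 mm wide sensor behind a 1.6 mm lens gives a wide-angle view.
print(f"{horizontal_fov_deg(6.4, 1.6):.0f} degrees")  # about 127 degrees
```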

FuSa – Functional Safety

Functional safety is the part of the overall safety of a system or piece of equipment that depends on automatic protection operating correctly in response to its inputs or failure in a predictable manner (fail-safe). The automatic protection system should be designed to properly handle likely human errors, hardware failures and operational/environmental stress.

GFHB – Glare-free High Beam

The Glare-free High Beam function allows driving with the high beam on at all times. If the camera detects other traffic on the road, the distribution of light from the high beams is adjusted in order to not blind the approaching driver.

See also Head Lamp Assist.

HLA – Head Lamp Assist

The Head Lamp Assist function allows driving with the high beam on at all times. If the camera detects other traffic on the road, the distribution of light from the high beams is adjusted in order to not blind the approaching driver.

See also Glare-free High Beam.

HUD – Head-up Display (SAE official term)

A transparent display that shows information on the front windshield, allowing drivers to keep their eyes on the road, instead of having to look away toward information on the dashboard.

HDC – Hill Descent Control

A system that adjusts speed by applying the brake or shifting to lower gears during descent from a hill.

ISA – Intelligent Speed Adaptation, Intelligent Speed Advice

A system that monitors vehicle speed, warning the driver to adjust their speed in case it is higher than the allowed limit. Typically uses Traffic Sign Recognition and map data to determine the allowed speed limit.

IHBC – Intelligent High Beam Control

Intelligent High Beam Control allows driving with the high beam on at all times. If the camera detects other traffic on the road, the distribution of light from the high beams is adjusted in order to not blind the approaching driver.

See also Glare-free High Beam and Head Lamp Assist.

IPAS – Intelligent Parking Assist System

Intelligent Parking Assist Systems are designed to help a driver park. Some perform the entire job automatically, while others simply provide advice so that the driver knows when to turn the steering wheel and when to stop.

See also Parking Assist and Automatic Parking System.

JNCAP – Japan New Car Assessment Program

JNCAP promotes the use of safer cars by creating the environment in which automobile users can easily select such vehicles and by encouraging automobile manufacturers to develop safer vehicles.

KSI – Killed or Seriously Injured

Number of people killed or seriously injured.

L0 – Level 0 automation of driving

No automation. The driver operates the vehicle at all times. Defined by SAE International, an automotive standardization body, under J3016.

L1 – Level 1 automation of driving

Driver assistance. The driver assistance system controls either the steering or acceleration/deceleration. The human driver monitors at all times and performs all remaining aspects of the driving task. Defined by SAE International, an automotive standardization body, under J3016.

L2 – Level 2 automation of driving

Partial automation. The driver assistance system controls both steering and acceleration/deceleration. The human driver monitors at all times and performs all remaining aspects of the driving task. Defined by SAE International, an automotive standardization body, under J3016.

L3 – Level 3 automation of driving

Conditional automation. The driver assistance system controls both steering and acceleration/deceleration. The human driver has to be ready to intervene and take over at all times. Defined by SAE International, an automotive standardization body, under J3016.

L4 – Level 4 automation of driving

High automation. The driver assistance system controls both steering and acceleration/deceleration. The system continues to operate even if the human driver is not ready to intervene. The system does not support all roads and road conditions. Defined by SAE International, an automotive standardization body, under J3016.

L5 – Level 5 automation of driving

Full automation. The driver assistance system controls both steering and acceleration/deceleration. The system continues to operate under all roadway and environmental conditions that a human driver can manage. All roads and road conditions are supported. Defined by SAE International, an automotive standardization body, under J3016.

LA – Lighting Automation

Lighting Automation allows driving with the high beam on at all times. If the camera detects other traffic on the road, the distribution of light from the high beams is adjusted in order to not blind the approaching driver.

See also Glare-free High Beam, Head Lamp Assist, and Intelligent High Beam Control.

LCA – Lane Change Assist

Lane Change Assist senses a vehicle approaching in a neighboring lane while the driver signals for a lane change. The vehicle can then alert the driver with a flashing indicator in the side mirror.

See also Blind Spot Detection.

LCA – Lane Centering Assist

Lane Centering Assist combines a forward-facing camera to detect lane markings with an electric steering system, keeping the vehicle in the center of the lane.

See also Lane Keeping Assist.

LD – Lane Detection

Using a forward camera to detect lane markings on the road.

LDW – Lane Departure Warning (SAE official term)
LDWS – Lane Departure Warning System

Lane Departure Warning uses a forward-facing camera to detect lane markings, warning the driver in case the vehicle leaves the lane without proper use of the turn signal.

LKA – Lane Keeping Assistance (SAE official term)
LKAS – Lane Keeping Assistance System

Lane Keeping Assist combines a forward-facing camera to detect lane markings with an electric steering system, keeping the vehicle in the center of the lane.

See also Lane Centering Assist.

LSD – Light Source Detection, LSR – Light Source Recognition

Systems that detect light sources from road users or road infrastructure. This information is typically used to control the vehicle’s headlights and dim them. Matrix lights can offer finer control of dimming.

LTA – Left Turn Assist

When crossing a road to turn left, oncoming traffic may be overlooked or its speed may be misjudged. A left turn assist system warns the driver or automatically brakes in case it’s not safe to turn left.

MOD – Moving Object Detection

A system that detects moving objects around the vehicle, typically during parking or slow maneuvering. It usually uses multiple cameras located around the vehicle.

NCAP – New Car Assessment Programme

The European New Car Assessment Programme, the European organization that defines car safety performance assessment programs.

NHTSA – National Highway Traffic Safety Administration

An agency of the U.S. federal government, part of the Department of Transportation, which defines and enforces vehicle safety standards.

NPA – No Passing Assist

Based on traffic sign information, the driver is automatically warned when overtaking is not allowed.

NV – Night Vision (SAE official term)
NVA – Night View Assist

Night View Assist captures images using a thermal camera or active infrared lighting and presents them on a dashboard display. This increases the driver’s perception and viewing distance during nighttime.

Also known as Automotive Night Vision.

OC – Online Calibration

A camera-based system that calibrates itself during start-up of the car, or in real time. This is in contrast to a camera system that needs to be calibrated in the factory or garage.

OD – Object Detection

A computer vision algorithm that detects objects in view of a camera: for example pedestrians, vehicles, animals, or cyclists.

OSD – Optical Surface Dirt

A camera system that automatically detects whether the camera lens is dirty and warns the driver or takes other appropriate action.

PA – Parking Assistance

Parking Assistance systems are designed to help a driver park. Some perform the entire job automatically, while others simply provide advice so that the driver knows when to turn the steering wheel and when to stop.

See also Automatic Parking System and Intelligent Parking Assist.

PD – Pedestrian Detection, PDS – Pedestrian Detection System

A system that detects pedestrians in front or behind the vehicle, typically camera-based.

PAEB – Pedestrian Automatic Emergency Braking

A system that performs automatic braking when a pedestrian is detected in front of the vehicle.

PCW – Parking Collision Warning (SAE official term)

Detects objects close to the vehicle during parking maneuvers and notifies the driver.

PLD – Parking Line Detection

A system that detects markers on the road surface in order to determine the exact location of parking spots.

See also Parking Slot Marking Detection.

PSMD – Parking Slot Marking Detection

A system that detects markers on the road surface in order to determine the exact position of parking spots.

See also Parking Line Detection.

RAEB – Reverse Automatic Emergency Braking (SAE official term)

Detects potential collisions while in reverse gear and automatically brakes to avoid or lessen the severity of impact. Some systems also detect pedestrians or other objects.

RCTW – Rear Cross-Traffic Warning (SAE official term)
RCTA – Rear Cross-Traffic Alert

These systems let you know if you’re about to back into oncoming cross traffic. Multiple sensors or wide-angle cameras located near the rear of the vehicle detect traffic that comes from the side, typically in parking lot situations.

See also Cross-Traffic Alert.

RSA – Road Sign Assist

A Road Sign Recognition system is a camera-based technology that detects and analyzes the traffic signs next to the road. Speed limit signs can, for instance, be used to control the speed of the vehicle. Often the important traffic signs are shown on the dashboard in order to inform the driver.

See also Traffic Sign Assist.

RPA – Remote Parking Assistance (SAE official term)

Without the driver being physically present inside the vehicle, provides steering, braking, accelerating and/or gear selection while moving a vehicle into or out of a parking space. The driver must constantly supervise this support feature and maintain responsibility for parking.

RVC – Rear view camera

A camera that’s mounted in the rear of the vehicle, facing backward. The live view that’s captured is presented on a display for the driver in case the vehicle is in reverse.

SAD – Semi-Autonomous Driving

A driving system that is primarily autonomous, but requires the driver to monitor and take control of the vehicle in case the automated driving system cannot safely operate the vehicle.

SLA – Speed Limit Assist, SLW – Speed Limit Warning

System that, based on traffic sign and map information, informs the driver of the maximum allowed road speed.

SVC – Surround View Camera (SAE official term)
SVS – Surround View System

Multi-camera surround view camera systems capture and display the area surrounding the car in a single integrated view on a display in the dashboard.

See also Surround View Park Assist.

SVPA – Surround View Park Assist

Multi-camera surround view park assist systems capture and display the area surrounding the car in a single integrated view on a display in the dashboard.

See also Surround View Camera.

TA – Trailer Assistance (SAE official term)

Assists the driver with visual guidance while backing towards a trailer or during backing maneuvers with a trailer attached. Some systems may provide additional images while driving or backing with a trailer. Some systems may provide steering assistance during backing maneuvers.

TJA – Traffic Jam Assist

A Traffic Jam Assist system keeps distance, adapts speed, and optionally takes control of steering in lower-speed, dense traffic situations.

TSR – Traffic Sign Recognition

A Traffic Sign Recognition system is a camera-based technology that detects and analyzes the traffic signs next to the road. Speed limit signs can, for instance, be used to control the speed of the vehicle. Often the important traffic signs are shown on the dashboard in order to inform the driver.

See also Road Sign Assist.

TLR – Traffic Light Recognition

A Traffic Light Recognition system is a camera-based technology that detects and analyzes traffic lights, either to inform the driver or to provide information to the vehicle for autonomous driving.

TA – Turning Assistant

The Turning Assistant system monitors opposing traffic when turning at low speeds, even autonomously applying the brakes in unsafe situations.

UPA – Ultrasonic Park Assist

A Parking Assist system that solely uses ultrasonic sensors. Ultrasonic sensors can detect distance, but can’t detect smaller objects well, nor can they find parking spot markers.

See also Parking Assist.

VRU – Vulnerable Road User

Non-motorized road users, such as pedestrians and cyclists, as well as motorcyclists and persons with disabilities or reduced mobility and orientation.

VUT – Vehicle Under Test

Refers to the car that’s being tested, often in the context of a testing protocol.

WWDW – Wrong-Way Driving Warning

A system that warns the driver when the vehicle is traveling in the wrong direction. Typically uses a Traffic Sign Recognition system to detect wrong-way traffic sign indicators.

See also Wrong-Way Driving Alert.

WWA – Wrong Way Alert, WWDA – Wrong-Way Driving Alert

A system that warns the driver when the vehicle is traveling in the wrong direction. Typically uses a Traffic Sign Recognition system to detect wrong-way traffic sign indicators.

See also Wrong-Way Driving Warning.

How many cameras do we need? 29 says Waymo.

In 2016 we wrote an article titled “what are all these automotive cameras doing?” about how and where we saw automotive cameras being integrated into consumer vehicles. These cameras are combined with intelligent visual processing primarily to enhance safety and to relieve the driver from fully operating the vehicle. At the time, we counted 12 cameras in the vehicle: 3 in a front camera, 4 for surround view and parking cameras, 3 cameras for mirror-replacement, and 2 cameras to monitor the driver.

The latest Tesla vehicles actually come pretty close. A new Tesla today has 9 cameras on board: 3 front cameras with different fields of view, 4 on the sides, 1 on the rear, and then there’s a new in-cabin camera just above the rear-view mirror. The in-cabin camera recently got added but is reportedly not in use yet.

Giving a car about 10 eyes seems like it should be enough. After all, humans can safely operate a vehicle with just two eyes. And even if you only have one functioning eye, you can still legally operate a vehicle in most countries without needing extra measures.

But recently Waymo announced that their latest vehicles have no less than 29 cameras on board. This is almost 3x the number of cameras that we had predicted would be enough just 4 years ago! And in addition to the 29 cameras, there are 5 lidars on board, one for long range and another 4 for closer proximity sensing, as well as 6 radars. That’s 40 powerful sensors that Waymo’s designers deemed necessary to make a fully autonomous vehicle sense its surroundings with enough detail and precision to safely and swiftly guide the car to its destination. The cameras have different fields of view, some focused on near-range sensing, while others can spot pedestrians and stop signs as far as 500 meters away. Waymo published a video and an article that give a bit more background, but not many details are available.

Cost also does not seem to be a top priority for Waymo. The average price of a passenger vehicle around the world is $27K. This doesn’t leave much room for 40 sensors and a trunk half full of AI compute electronics, which is what the AI-based computer vision algorithms that fuse and process all this data need.

Waymo has been working on self-driving vehicles for well over a decade; they know what they’re doing. But will every vehicle have 40 sensors on board in a few years?

What we see at videantis is that the automotive industry is adopting intelligent cameras, radar and lidar at an impressive pace. But there’s a wide range of requirements. Some vehicles indeed adopt more than 10 cameras, but front cameras, surround view, rear, mirror-replacement, and driver monitoring are still the key applications for the mainstream vehicle segment. One thing is clear: whether you have 40 sensors, 10, or just a single one in the vehicle, you need a very efficient high-performance AI and vision processor architecture that can handle the wide variety of processing requirements. Whether it’s AI, computer vision, imaging, or data fusion, the unified videantis processor architecture can run it. Just as important, the architecture is scalable and software programmable, resulting in maximum reuse of software and semiconductor technology across an OEM’s vehicle lineup, whether the vehicles are fully autonomous or include just a couple of cameras for safety, and whether it’s for this year’s models or for those in ten years.

March newsletter

We wanted to give you a brief update on how videantis is responding to the COVID-19 pandemic. We are closely following and implementing the recommendations from leading health organizations and the German government. We’re doing everything we can to keep us all safe while continuing to serve our customers. Our employees are all equipped to work from home, continue their work, and provide customer support, resulting in minimal impact on our business. Thank you all for your cooperation, and we wish you all the best.

Blog: help, my algorithms keep changing!

In the past decades that I have been developing electronics I’ve heard this a lot: “The algorithms our engineers developed are way beyond what our competition has.” The annual ImageNet Large Scale Visual Recognition Challenge even makes a competition out of it. The result: algorithms keep getting better and better. In automotive, having better algorithms means the chances of getting into an accident are smaller too.

So, what’s the problem?

One problem is that our vehicles last for at least ten years. In these years, the algorithms will improve a lot, meaning there needs to be a way to upgrade the vehicle with the latest software. But there’s one big assumption here: that the hardware can run these new algorithms.

Read the blog

Demo: AI, deep learning and CNN processing at CES 2020

Tony Picard, our VP of Sales, gave a product demonstration at the January 2020 Consumer Electronics Show, which the Edge AI and Vision Alliance recorded and published online. Specifically, Tony gives an overview of our company and products, and demonstrates our deep learning, computer vision, and video coding technologies with a focus on automotive applications.

Industry news

Waymo’s cars arrive with 29 cameras, 5 lidars and bunch of radars
Over the past several years, hundreds of Waymo engineers have rebuilt most of the company’s self-driving hardware, chiefly the cameras, lidars, and radars that perceive the world around the car. The software may do the thinking, but it’s no good if it can’t rely on good sensing data. The 29 cameras on the I-Pace can handle a wider range of lighting conditions and better withstand extreme temperatures. Read the article

Toyota Invests $400 Million in Self-Driving Startup Pony.ai
Autonomous-vehicle tech startups are a hot commodity in the automotive industry right now, and Toyota is the latest manufacturer to put massive amounts of money into one of the startups. The Japanese automaker announced recently that it was investing $400 million into Pony.ai, a Chinese autonomous vehicle tech company. Read the article

Upcoming events

All our upcoming face-to-face events have been cancelled. Schedule an online meeting with us by sending an email to sales@videantis.com. We’re always interested in discussing your automotive sensing solutions and visual compute SoC design ideas and challenges. We look forward to talking with you!

Was this newsletter forwarded to you and you’d like to subscribe? Click here.

Help, my algorithms keep changing

In the past decades that I have been developing electronics, I’ve heard this a lot: “The algorithms our engineers developed are way beyond what our competition has.” I’ve seen it when developing image enhancement algorithms, I’ve heard it about video codecs and audio quality. And now I’m seeing the same ‘my algorithm is better than yours’ claims in the field of AI. The annual ImageNet Large Scale Visual Recognition Challenge even makes a competition out of it, ranking everyone’s algorithms on their ability to detect and classify images correctly. There are thousands and thousands of papers published in the field of deep learning every year, all of them claiming a new technique that improves some aspect of deep learning. The result: algorithms keep getting better and better. This is great for the consumer as it enables our electronics to get better and provide an ever-improving user experience. In automotive, it’s even more important, since it’s not just about the user experience there, but also about safety. Better algorithms mean the chances of getting into an accident are smaller too.

So, what’s the problem?

One problem is that our vehicles last for at least ten years. In these years, the algorithms will improve a lot, meaning there needs to be a way to upgrade the vehicle with the latest software. Tesla does this well and provides a means to update the car’s software over the air. They not only change the infotainment system’s features or the battery management algorithms overnight, but also the safety-related self-driving algorithms. Other vehicle OEMs are quickly following and adopting the same over-the-air upgrade capabilities.

But there’s one big assumption here: that the hardware can run these new algorithms. And not just run them, but run them just as efficiently as the old algorithms. If the processors aren’t capable of running the new algorithms efficiently, they wouldn’t be able to run in real time, which is key since there can be no delays when controlling a vehicle on the road. However, since deep-learning-based AI algorithms require many teraops of computation, many semiconductor companies have been hardwiring them. Hardwiring an algorithm provides an easy path toward high performance while remaining low-power and cost-efficient. This is crucial to bringing these algorithms to consumer price points, to keeping the systems relatively small, and to avoiding active cooling fans that are prone to break down. However, hardwired implementations give up one key trait: they can’t be upgraded to the latest algorithms. They’re not implemented in software but in fixed electronic circuits in hardware.

At videantis we combine software programmability with efficiency, providing extreme performance and low power with the ability to upgrade the algorithms. Ever since we started the company in 2004, our processor architecture has been fully software programmable. It’s a lot more work to design, develop and optimize such an architecture and the accompanying software development suite of tools, but it pays off. And our customers experience this. We see that videantis-based semiconductors stay in the market longer, are used for a wider range of use cases, and are always upgraded to run the latest algorithms.
