Automotive self-driving technology has made remarkable progress. One factor is the development of information infrastructure, such as high-precision GPS and faster communications (5G, 6G, and successor technologies), but improvements in vehicle-mounted cameras and sensor technologies have also contributed significantly. One of the technologies essential for automated driving is an accurate understanding of the vehicle’s surroundings. Sensors such as LiDAR, radar, and ultrasound detect whether an object is present, but cameras are used to determine what the object is.
Modern cars are equipped with sensors and cameras at various positions. For example, one manufacturer’s Level 3 autonomous car carries no LiDAR because it is not fully automated (Level 5), but it still has four 360° cameras and one infrared camera. Gathering information about this vehicle’s surroundings is a high-speed process: the cameras first determine what kinds of objects are in the surroundings and where they are located, and the sensors then measure the distance to those objects (people, other vehicles, trees, buildings, obstacles, and so on). The cameras correspond to the driver’s eyes, but when humans focus on a single point, they can perceive only that point. For example, suppose you are watching a movie on a 55-inch TV at a distance of 2 meters. The viewing angle from edge to edge of the screen is then approximately 34 degrees horizontally and 19 degrees vertically. While you are reading subtitles, you cannot grasp what is happening elsewhere on the screen. The field of view for most people is said to be about 200 degrees horizontally and 125 degrees vertically, and the stable fixation field, in which you can view objects quickly and stably, is about 60 to 90 degrees horizontally and 45 to 70 degrees vertically. When gazing at a single point, the central field of view is said to be only 1 to 2 degrees.
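The viewing-angle figures can be checked with a little trigonometry. The sketch below assumes a 16:9 screen viewed head-on; for a screen of width w seen from distance L, the full edge-to-edge angle is 2·atan(w / 2L).

```python
import math

def viewing_angles_deg(diagonal_in, distance_m, aspect=(16, 9)):
    """Full edge-to-edge viewing angles for a flat screen viewed head-on."""
    diag_m = diagonal_in * 0.0254              # inches -> meters
    ax, ay = aspect
    k = math.hypot(ax, ay)                     # diagonal in aspect units
    width = diag_m * ax / k
    height = diag_m * ay / k
    h = 2 * math.degrees(math.atan(width / (2 * distance_m)))
    v = 2 * math.degrees(math.atan(height / (2 * distance_m)))
    return h, v

h, v = viewing_angles_deg(55, 2.0)
print(f"{h:.1f} deg x {v:.1f} deg")   # roughly 34 x 19 degrees
```

Even this generous 34° × 19° window is a small fraction of the roughly 200° × 125° human field of view, which is why attention locked on subtitles misses the rest of the scene.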
Similarly, when driving a car, if you concentrate on the door mirrors, you cannot immediately grasp the situation ahead of or behind your car. Autonomous driving can perceive situations in all directions at the same time, reducing the probability of accidents to an extremely low level (the goal, of course, is zero accidents).
AI using deep learning techniques is responsible for recognizing the objects captured by the cameras. That recognition is done in software, but the angle of view captured by the camera often contains large luminance differences; in other words, the scene is a mixture of sunlit and shaded areas.
If the shutter speed is set to match the bright areas, the shaded areas become black and nothing is captured there; this is called “crushed shadows.” Conversely, if the shutter speed is set to match the shaded areas, the sunlit areas become completely white; this is called “blown-out highlights.” In either situation, the vehicle cannot recognize objects. Some time ago, I heard of an accident in which a driver backing up could not tell that the shadows in the rear-view monitor were crushed; a small child was playing there, and the car hit the child. To prevent such accidents, and also for autonomous driving and ADAS, vehicles carry software that can recognize objects in a mixture of sunlit and shaded areas by attenuating blown-out highlights and amplifying crushed shadows. Artificial solar lighting is indispensable for the development of such software.
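As a rough illustration of the attenuate-highlights, amplify-shadows idea (not the actual in-vehicle algorithm, which typically involves HDR capture and local tone mapping), even a simple gamma-style tone curve moves pixel values in the right direction:

```python
import numpy as np

def compress_contrast(img, gamma=0.45):
    """Lift crushed shadows and compress blown highlights.

    img: float array with luminance values in [0, 1]. A gamma below 1
    amplifies dark pixels strongly while barely changing bright ones,
    so the shadow-to-highlight ratio shrinks and detail in both
    regions stays within the usable range.
    """
    return np.clip(img, 0.0, 1.0) ** gamma

# Simulated frame: a deep-shadow region (0.03) next to a sunlit region (0.95)
frame = np.array([[0.03, 0.03],
                  [0.95, 0.95]])
out = compress_contrast(frame)
# Shadow pixels rise from 0.03 to about 0.21 while sunlit pixels stay
# near 0.98, so the luminance ratio drops from ~32:1 to ~5:1.
```

A global curve like this is only a sketch; production software usually applies local operators so that one region’s correction does not wash out another’s, but the direction of the correction is the same.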
The illustration above is an example of artificially creating a mixture of sunlit and shaded areas within the camera’s angle of view. Objects are placed in both the sunlit and shaded areas, and artificial solar lighting is used in the development of software that reduces the contrast of the captured images.
The artificial solar lighting XG-500AFSS is the best choice for the evaluation of vehicle-mounted cameras in such light environments and for the quantitative evaluation of software.
If you want to conduct performance evaluations of vehicle-mounted cameras or develop image-processing software in a stable light environment, please feel free to contact our highly experienced team. We will propose the equipment that best suits your needs.