Fog is one of the most formidable enemies of autonomous driving and advanced driver-assistance systems (ADAS). It distorts light, scatters signals, and erodes the reliability of environmental perception—core capabilities that keep drivers and pedestrians safe. The debate between camera vision and LiDAR (Light Detection and Ranging) has raged for years, but foggy conditions strip away marketing hype and force a focus on fundamental performance: Which technology truly delivers when visibility plummets?
This article goes beyond the typical "hardware vs. software" dichotomy. Instead, we frame the comparison around two distinct "safety philosophies": camera vision’s reliance on algorithmic ingenuity to overcome physical limitations, and LiDAR’s use of hardware redundancy to establish a baseline of reliability. Drawing on 2025’s latest real-world test data, technical breakthroughs, and industry case studies, we’ll answer the critical question: Which works better in fog?
The Core Divide: Two Safety Philosophies Under Fog
To understand why fog exposes the strengths and weaknesses of each technology, we first need to unpack their underlying operating principles—and the safety mindsets that drive their adoption.
Camera vision systems operate like "brain-powered eyes." They rely on high-definition cameras (typically 8-10 in advanced setups) paired with powerful AI chips and massive datasets to mimic human visual perception. The philosophy here is minimalism: use software to compensate for limited hardware, leveraging machine learning to translate 2D visual data into 3D environmental understanding. Tesla and Xpeng are the most prominent advocates of this approach, which shines in clear conditions where abundant visual cues allow algorithms to thrive.
LiDAR, by contrast, is a "hardware-first guardian." It emits millions of laser pulses per second to create a high-precision 3D point cloud of the surrounding environment, measuring distances, shapes, and speeds with exceptional accuracy. The philosophy here is redundancy: use physical sensing capabilities to establish a safety floor, even when environmental conditions obscure visual details. Huawei, BYD, and most luxury ADAS providers embrace this "LiDAR + camera + millimeter-wave radar" trinity, prioritizing consistent performance over cost savings.
Fog disrupts both systems—but in fundamentally different ways. For cameras, fog scatters light, blurs edges, and washes out contrast, depriving algorithms of the visual features they need to identify obstacles. For LiDAR, fog particles scatter laser pulses, creating "point cloud noise" that can obscure real targets or generate false positives. The question isn’t which is "unaffected"—it’s which can recover faster, maintain critical performance metrics, and keep drivers safe when visibility is at its worst.
Real-World Data: How They Perform in Fog (2025 Latest Tests)
The most compelling evidence comes from the 2025 "Intelligent Driving Extreme Scenario Test White Paper," jointly released by the China Automotive Engineering Research Institute (CAERI) and Dongchedi. This landmark study tested 36 mainstream models over 15km of real-road foggy routes and in 216 simulated collision scenarios, quantifying performance gaps with hard data. Let’s break down the key findings by fog severity.
1. Light Fog (Visibility: 200-500m)
In light fog—common in early mornings or coastal areas—both technologies perform adequately, but subtle gaps emerge. Camera vision systems, buoyed by advanced dehazing algorithms, hold their own in basic obstacle recognition. Tesla’s FSD V12.5, for example, achieved a 90% obstacle recognition accuracy rate in light fog, thanks to its raindrop and haze elimination algorithms trained on billions of kilometers of real-world data.
LiDAR systems, meanwhile, maintained near-perfect accuracy (98%+) with minimal noise. The Hesai ATX LiDAR, a newly launched long-range model, demonstrated its ability to filter 99% of fog-related noise at the pixel level, preserving clear point clouds of surrounding vehicles and pedestrians. The gap here is narrow, but LiDAR’s advantage lies in consistency: while camera systems may struggle if fog density fluctuates suddenly, LiDAR’s physical sensing remains stable.
2. Moderate Fog (Visibility: 100-200m)
As visibility drops below 200m, camera vision’s algorithmic limits become apparent. The CAERI test showed that pure camera models experienced a 3x increase in obstacle miss rates compared to LiDAR-equipped vehicles. The Xpeng G6’s pedestrian recognition distance plummeted from 150m in clear weather to just 65m in moderate fog, while the Tesla Model Y’s dropped to 78m. This is a critical shortfall: at highway speeds (100km/h), a 65m detection distance gives the system only 2.3 seconds to react—barely enough for emergency braking.
LiDAR systems, by contrast, maintained effective detection distances above 80m. Huawei’s ADS 3.0, equipped with a 192-line LiDAR, achieved an average pedestrian recognition distance of 126m in moderate fog, providing a 4.5-second reaction window. The difference stems from LiDAR’s ability to penetrate fog using longer wavelengths (1550nm) that scatter less than the visible light used by cameras. Even when scattered, laser pulses retain enough energy to return to the sensor and calculate distances accurately.
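The reaction windows quoted above follow from simple arithmetic: divide the detection distance by the vehicle's speed in meters per second. A minimal sketch using the figures from this section (the function name and structure are ours, not from the test report):

```python
def reaction_window_s(detection_distance_m: float, speed_kmh: float) -> float:
    """Time available to react, given a detection distance and vehicle speed."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return detection_distance_m / speed_ms

# Figures quoted above: 65m (camera, moderate fog) vs. 126m (LiDAR, moderate fog)
for label, dist_m in [("camera @ 65m", 65), ("LiDAR @ 126m", 126)]:
    t = reaction_window_s(dist_m, 100)  # 100km/h highway speed
    print(f"{label}: {t:.1f} s to react")
# camera @ 65m: 2.3 s to react
# LiDAR @ 126m: 4.5 s to react
```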
3. Dense Fog/Advection Fog (Visibility: <100m)
In dense fog—where visibility drops below 100m, or even 50m in extreme cases—the divide becomes a chasm. This is the "make-or-break" scenario for autonomous systems, and the CAERI data is stark: pure camera vision systems suffered a 15% manual takeover rate, with frequent "perception failure" alerts. In conditions where fog obscures lane markers, traffic lights, and even large obstacles, algorithms simply lack sufficient visual information to make safe decisions.
LiDAR-equipped vehicles, however, maintained a takeover rate of just 3%. Huawei’s ADS 3.0 even demonstrated the ability to accurately identify stationary vehicles and complete evasive maneuvers in 30m visibility—conditions where human drivers would struggle to see beyond their headlights. Key to this performance are advanced fog-filtering algorithms, such as those developed by LSLidar. These algorithms analyze the characteristics of fog-scattered laser pulses, separating noise from valid point cloud data to preserve critical obstacle information. The result is a system that doesn’t just "see" through fog—it maintains situational awareness when camera vision fails entirely.
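Neither Huawei nor LSLidar publishes its exact filter, so the following is only a rough sketch of the underlying idea: fog-scattered returns tend to be weak and clustered close to the sensor, while solid obstacles return stronger pulses at their true range. A simple range-and-intensity gate (with invented thresholds) illustrates the separation principle:

```python
import numpy as np

def filter_fog_noise(points: np.ndarray, intensity: np.ndarray,
                     min_range_m: float = 2.0, min_intensity: float = 0.15) -> np.ndarray:
    """Very simplified fog filter: drop weak returns clustered near the sensor.

    `points` is an (N, 3) array of x/y/z coordinates in the sensor frame;
    `intensity` is an (N,) array of return intensities normalized to [0, 1].
    Thresholds here are illustrative assumptions, not values from any vendor.
    """
    ranges = np.linalg.norm(points, axis=1)
    likely_fog = (ranges < min_range_m) | (intensity < min_intensity)
    return points[~likely_fog]
```

Real production filters reason over local point density, multi-echo returns, and temporal consistency rather than fixed thresholds; the gate above only conveys how noise and valid targets can be told apart.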
Technical Breakthroughs: Narrowing the Gap?
While LiDAR holds the upper hand in foggy conditions, both technologies are evolving rapidly. Let’s examine the latest innovations that are reshaping their fog performance.
Camera Vision: Algorithmic Advances
The biggest strides in camera vision’s fog performance come from AI-powered dehazing algorithms and larger, more diverse datasets. Tesla’s FSD V12.5, for example, uses a combination of supervised and unsupervised learning to "reverse-engineer" fog effects, restoring clarity to blurred images. By training on 10 billion kilometers of nighttime and adverse weather data, the system has improved dynamic object tracking speed by 40% in low-visibility conditions.
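Tesla's actual dehazing pipeline is not public, so as a stand-in illustration of how single-image dehazing works in principle, here is a simplified sketch of the classical dark-channel-prior approach (a well-known technique, explicitly not Tesla's method; parameters are illustrative):

```python
import numpy as np
import cv2  # OpenCV

def dehaze_dark_channel(img_bgr: np.ndarray, patch: int = 15, omega: float = 0.95) -> np.ndarray:
    """Simplified dark-channel-prior dehazing, for illustration only.

    Estimates per-pixel haze transmission from the 'dark channel' (minimum over
    color channels and a local patch), then inverts the atmospheric scattering
    model I = J*t + A*(1 - t) to recover an approximation of the clear image J.
    """
    img = img_bgr.astype(np.float32) / 255.0
    kernel = np.ones((patch, patch), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)                            # dark channel
    A = img.reshape(-1, 3)[np.argsort(dark.ravel())[-100:]].max(axis=0)  # airlight estimate
    transmission = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(transmission, 0.1, 1.0)[..., None]
    restored = (img - A) / t + A
    return (np.clip(restored, 0, 1) * 255).astype(np.uint8)
```

The key limitation is visible in the math: the method rescales what little contrast remains, but when the transmission term collapses in dense fog there is almost no signal left to amplify.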
However, these advances have limits. They rely on the presence of some visual features to work with—something that disappears in dense fog. Even the best dehazing algorithm cannot create information that isn’t there, making camera vision’s physical limitations difficult to overcome.
LiDAR: Hardware and Algorithm Synergy
LiDAR’s evolution focuses on enhancing penetration, reducing noise, and lowering costs. One of the most exciting breakthroughs is single-photon LiDAR, a next-generation technology developed by a collaboration of UK and US researchers. This system uses ultra-sensitive superconducting nanowire single-photon detectors (SNSPDs) and 1550nm wavelength lasers to capture high-resolution 3D images through fog—even at distances of 1 kilometer. By detecting individual photons and measuring their flight time with picosecond precision (one trillionth of a second), the system can distinguish between fog particles and real objects with unprecedented accuracy.
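The underlying range measurement is ordinary time-of-flight arithmetic: distance equals the speed of light multiplied by the round-trip time, divided by two. The sketch below (illustrative values only) shows why picosecond timing matters: a 1km target returns a photon after roughly 6.7 microseconds, and a single picosecond of timing error corresponds to only about 0.15mm of range.

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def tof_distance_m(round_trip_time_s: float) -> float:
    """Range from a pulse's round-trip time: the light travels out and back, so halve it."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

print(tof_distance_m(6.671e-6))   # ~1000 m: photon return time for a 1km target
print(tof_distance_m(1e-12))      # ~0.00015 m: range equivalent of 1 picosecond of timing error
```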
Commercial LiDAR systems are also advancing rapidly. LSLidar’s in-house dust/rain/fog filtering algorithm, compatible with all its models (including 1550nm fiber and 905nm hybrid solid-state LiDAR), significantly reduces point cloud noise while maintaining target detection. Hesai’s ATX LiDAR, with a 140° ultra-wide field of view and 300m detection range, can identify and mark fog, exhaust fumes, and water droplets in real time, ensuring clean point cloud data for the system. These innovations are making LiDAR more robust in fog while driving down costs—once a major barrier to adoption—with 2025 prices falling to the $300-$450 range.
Practical Choice: When to Prioritize Which Technology?
The answer to "which works better in fog" depends on your use case and risk tolerance. Here’s a framework for decision-making:
For Consumer Vehicles (ADAS)
If you live in a region with frequent fog (e.g., coastal areas, valleys, or cold climates with temperature inversions), LiDAR is the safer choice. The CAERI data proves that its ability to maintain situational awareness in dense fog provides a critical safety buffer. Even as camera vision improves, LiDAR’s hardware redundancy serves as a "safety net" that algorithms cannot replicate.
For regions with minimal fog, pure camera vision may be sufficient—especially if cost is a primary concern. Models like the Tesla Model Y and Xpeng G6 offer strong ADAS performance in clear and lightly foggy conditions, with OTA updates steadily improving their algorithms.
For Commercial Autonomy (Robotaxis, Trucking)
In commercial applications where safety and reliability are non-negotiable (and regulatory compliance is mandatory), LiDAR is not just preferred—it’s essential. Robotaxis operating in urban areas with unpredictable fog events, or long-haul trucks traveling through fog-prone highways, cannot afford the 15% takeover rate of pure camera systems. LiDAR’s 3% takeover rate in dense fog is the difference between operational viability and safety risks.
The Future: Synergy, Not Competition
The most forward-thinking approach isn’t choosing one technology over the other—it’s integrating them. Modern ADAS systems (like Huawei ADS 3.0) use LiDAR’s reliable 3D point clouds to complement camera vision’s high-resolution visual data. In fog, LiDAR provides core obstacle detection, while cameras help identify details like traffic light colors or pedestrian gestures (when visible). This "sensor fusion" leverages the strengths of both technologies, creating a system that is more robust than either alone.
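Production fusion stacks such as ADS 3.0 are far more elaborate (and proprietary), but a toy confidence-weighted late-fusion sketch conveys the basic idea; the weighting scheme, function names, and numbers below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float     # estimated range to the obstacle
    confidence: float     # sensor's own confidence in [0, 1]

def fuse_range(lidar: Detection, camera: Detection, visibility_m: float) -> float:
    """Toy late fusion: weight each sensor by its confidence, and down-weight the
    camera further as visibility degrades, since fog hurts it more than LiDAR."""
    camera_weight = camera.confidence * min(1.0, visibility_m / 200.0)
    lidar_weight = lidar.confidence
    total = camera_weight + lidar_weight
    return (lidar.distance_m * lidar_weight + camera.distance_m * camera_weight) / total

# In 80m visibility fog, the fused range estimate leans heavily on the LiDAR reading:
print(fuse_range(Detection(92.0, 0.95), Detection(70.0, 0.6), visibility_m=80.0))
```

The design choice this illustrates is the one described above: in clear weather the camera's rich semantic detail gets full weight, while in fog the LiDAR's geometry dominates the safety-critical range estimate.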
Conclusion: LiDAR Leads in Fog, But Camera Vision Isn’t Out
When it comes to foggy conditions, the data is unambiguous: LiDAR outperforms camera vision across all fog severity levels, with a particularly wide gap in dense fog. Its hardware-driven approach to perception—penetrating fog with laser pulses and filtering noise with advanced algorithms—establishes a safety baseline that camera vision’s software-centric model cannot match, at least for now.
That said, camera vision is evolving rapidly. AI dehazing algorithms and larger datasets are improving its performance in light to moderate fog, making it a viable choice for regions with minimal extreme fog events. For most drivers and commercial operators, however, LiDAR’s ability to "see through fog" and reduce manual takeovers is a safety advantage that is hard to ignore.
Ultimately, the future of autonomous perception in fog lies in sensor fusion. By combining LiDAR’s reliability with camera vision’s detail, we can create systems that are safe, efficient, and adaptable to even the harshest weather conditions. For now, if fog safety is your top priority, LiDAR is the clear winner—but don’t count camera vision out as algorithms continue to advance.