• DragonTypeWyvern@midwest.social
    13 days ago

    It’s pretty important to have that “easy” anti-collision problem solved. I’m not quite sure why people think it must be either/or instead of both.

    • weew@lemmy.ca
      12 days ago

      Like I said, the argument is that if AI vision is actually solved, then at that point LIDAR is like carrying a blind cane while walking with perfect vision.

      LIDAR’s true strength, precision, isn’t even that useful for driving at speed. LIDAR is super precise - useful for parking, perhaps - but when driving at 50 km/h or faster, does it really matter whether the object ahead is 30.34 m away or 30.38 m?
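A back-of-envelope check of that precision point, using only the numbers from the example above (nothing vendor-specific):

```python
# Back-of-envelope: how much does 4 cm of range precision matter at speed?
# Numbers taken from the example above: 50 km/h, readings of 30.34 m vs 30.38 m.
speed_ms = 50 / 3.6                    # 50 km/h in m/s (~13.9 m/s)
range_delta = 30.38 - 30.34            # 4 cm difference between the two readings
time_delta_ms = range_delta / speed_ms * 1000
print(f"{time_delta_ms:.1f} ms")       # the 4 cm gap is covered in ~3 ms of travel
```

At that speed the entire precision difference corresponds to a few milliseconds of travel time, far below any control loop’s reaction budget.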

      Also, the main problem with LIDAR is that it doesn’t really see any more than cameras do. It uses light - visible or near-infrared - so it gets blocked by basically the same things that block a camera. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.

      I’d like to see redundancy provided by multiple systems that work differently. Advanced high resolution radar, thermal vision, etc. But it still requires vision and AI 100%: the ability to identify what an object is and its likely actions, not simply measure its size and distance.
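A toy probability sketch of why sensors with different failure modes matter more than a second sensor that fails under the same conditions. The 5% and 1% figures are made-up illustrative assumptions, not measurements:

```python
# Toy model: camera and LIDAR share a failure mode (fog); radar does not.
# All probabilities here are invented for illustration only.
p_fog = 0.05          # assumed fraction of driving time with heavy fog
p_radar_down = 0.01   # assumed independent radar failure rate

# Camera + LIDAR: both optical, so fog takes out both at once.
p_optical_pair_down = p_fog
# Add radar: a total outage now needs fog AND an independent radar failure.
p_all_down = p_fog * p_radar_down

print(f"{p_optical_pair_down:.2%} vs {p_all_down:.2%}")  # 5.00% vs 0.05%
```

Under these made-up numbers, the correlated optical pair is no better than one camera in fog, while one uncorrelated sensor cuts total outage by two orders of magnitude.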

      • GamingChairModel@lemmy.world
        12 days ago

        > Also, the main problem with LIDAR is that it really doesn’t see any more than cameras do. It uses light, or near-visible light, so it basically gets blocked by the same things that a camera gets blocked by. When heavy fog easily fucks up both cameras and LIDAR at the same time, that’s not really redundancy.

        The spinning lidar sensors mechanically shed occlusions like raindrops and dust, too. And one important thing with lidar is that it actively emits laser light, making it a two-way operation - like driving with headlights, rather than passively sensing whatever sunlight provides.

        Waymo’s approach appears to differ in a few key ways:

        • Lidar, as we’ve already been discussing
        • Radar
        • Sensor number and placement: the ugly spinning sensors on the roof get a different vantage point that Tesla simply doesn’t have on its vehicles now, and it does seem that every Waymo vehicle has a lot more sensor coverage (including probably more cameras)
        • Collecting and consulting high resolution 3D mapping data
        • Human staff on standby for interventions as needed

        There’s a school of thought that because many of these would need to be eliminated for true level 5 autonomous driving, Waymo is in danger of walking down a dead end that never gets them to the destination. But another take is that this is akin to scaffolding during construction: it serves an important function while the permanent structure goes up, and can be taken down afterward.

        I suspect that the lidar/radar/ultrasonic/extra cameras will be most useful for training the models needed to reduce reliance on human intervention, and maybe eventually to reduce the number of sensors - not just by adding to the quantity of training data, but by serving as a filtering/screening function that improves the quality of the data fed into training.
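That screening idea could look something like this toy sketch - using lidar range as a reference signal to drop camera frames where a vision model’s depth estimate is badly off. The data, shapes, and the 2 m threshold are all invented for illustration, not anyone’s actual pipeline:

```python
import numpy as np

# Synthetic stand-in data: 10 "frames" of per-point depth estimates.
rng = np.random.default_rng(0)
camera_depth = rng.uniform(5, 50, size=(10, 100))                    # camera model output
lidar_depth = camera_depth + rng.normal(0, 0.5, camera_depth.shape)  # lidar reference
camera_depth[3] -= 10.0  # frame 3: pretend the camera model is badly wrong

# Screening: keep only frames where camera and lidar roughly agree.
frame_error = np.abs(camera_depth - lidar_depth).mean(axis=1)
good_frames = np.where(frame_error < 2.0)[0]
print(good_frames.tolist())  # → [0, 1, 2, 4, 5, 6, 7, 8, 9]
```

Frame 3 never reaches the training set, so the camera model isn’t trained on its own worst mistakes - which is one way extra sensors could improve data quality rather than just quantity.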