Meta’s in-house ChatGPT competitor is being marketed unlike anything that’s ever come out of the social media giant before: a convenient tool for planning airstrikes. “Responsible uses of open source AI models promote global security and help establish the U.S. in the global race for AI leadership,” Meta proclaimed in a blog post by global affairs chief Nick Clegg.

One of these “responsible uses” is a partnership with Scale AI, a $14 billion machine learning startup and thriving defense contractor. Following the policy change, Scale now uses Llama 3.0 to power a chat tool for governmental users who want to “apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities,” according to a press release.

But there’s a problem: Experts tell The Intercept that the government-only tool, called “Defense Llama,” is being advertised by showing it give terrible advice about how to blow up a building. Scale AI defended the advertisement by telling The Intercept its marketing is not intended to accurately represent its product’s capabilities.

In the advertisement, Defense Llama suggests three different Guided Bomb Unit munitions, or GBUs, ranging from 500 to 2,000 pounds, with characteristic chatbot pluck describing one as “an excellent choice for destroying reinforced concrete buildings.” Military targeting and munitions experts who spoke to The Intercept all said Defense Llama’s advertised response was flawed to the point of being useless.

Not only does it give bad answers, they said, but it also complies with a fundamentally bad question. Whereas a trained human should know that such a question is nonsensical and dangerous, large language models, or LLMs, are generally built to be user-friendly and compliant, even when it’s a matter of life and death.

Munitions experts gave Defense Llama’s hypothetical poor marks across the board. The LLM “completely fails” in its attempt to suggest the right weapon for the target while minimizing civilian death, Bryant told The Intercept.

  • Cris@lemmy.world · 24 points · 2 days ago

    The very premise of this existing is an incomprehensibly stupid idea. Like I guess maybe if footsoldiers need to make decisions above their paygrade, and can’t reach an expert? But why the fuck would a given person have the capacity to fire this range of missiles and not the competency to say which ones are the correct choice, and whether it’s being approached the right way.

    It’s just a fucking dumb idea, from the very outset 😅

  • SkyNTP@lemmy.ml · 9 points · 2 days ago

    Back in my day, ensuring a person was behind the decision to kill was the whole god damn point.

  • j4k3@lemmy.world · +4/−1 · 2 days ago

    Models fundamentally turn all of human language into a statistical math problem with a solution. The English teachers lost this war to the Math teachers. You can skip English if you can learn to read tensors and understand rank dimensions beyond our four dimensional Cartesian existence of XYZT. It is only a matter of time until the correct training data is applied. In this space, I’m skeptical of every piece of news information, especially when they imply a limited scope of the technology in private use.

    Any statement of an instance of AI output must include a ton of contextual and mundane details. I can make a model say anything, but I can also set up a prompt that will access sources that are typically hidden and get far deeper into information that no one would believe is there. Information that can be corroborated and factual without leading or deception. If I can do that while using the Transformers library that self declares as nothing more than an incomplete example implementation, and yet is the basis of all publicly available LLM tools, the potential for the technology is much higher for the labs that are training these models and have versions without the OpenAI alignment bias black box.

      • j4k3@lemmy.world · +3/−1 · 2 days ago

        I have never used proprietary AI and will never post anything generated without telling you so. I do not care or have anything to gain. I’m simply here for social human connections that are more than just AI, because I’m physically disabled and in social isolation as a result. Sorry I offend with a reasoned opinion.

  • kersploosh@sh.itjust.works · 2 points · 2 days ago

    Not a weapons expert, but I imagine a 2000 lb guided bomb could fuck up a building pretty well. The tool ignored the whole “while minimizing collateral damage” part, though.