• Midnight@slrpnk.netOPM
    1 month ago

    Impact, an app that describes itself as “AI-powered infrastructure for shaping and managing narratives in the modern world,” is testing a way to organize and activate supporters on social media in order to promote certain political messages. The app aims to summon groups of supporters who will flood social media with AI-written talking points designed to game social media algorithms.

    In video demos and an overview document provided to people interested in using a prototype of the app, both of which have been viewed by 404 Media, Impact shows how it can send push notifications to groups of supporters directing them to a specific social media post and provide them with AI-generated text they can copy and paste in order to flood the replies with counterarguments.

    The overview document describes Impact as “A Volunteer Fire Department For The Digital World,” empowering “masses of ‘good people’” to “fight active fires, stamp out small brush fires, and even do preventative work (prebunking) to stop fires before they have the potential to start.” However, experts say that what Impact proposes could further blur the lines between authentic and inauthentic behavior online, and could lead to a world where only people with the resources to pay for such a service get to “shape reality,” as the Impact overview document puts it. The app also shows another way AI-generated content could continue to flood the internet and distort reality in the same way it has distorted Google search results, books sold on Amazon, and ghost kitchen menus.

    In a section of the overview document titled “Why isn’t Impact ‘bad’/illegal/unethical/etc,” the company explains that “The ‘bad guys’ are doing coordinated inauthentic behavior….Impact empowers coordinated authentic behavior. We need a group of volunteers to help keep online spaces clean. It’s just such a monument [sic] task that without AI and centralized coordination it would be impossible to do at scale.”

    One demo video viewed by 404 Media shows one of the people who created the app, Sean Thielen, logged in as “Stop Anti-Semitism,” a fake organization with a Star of David icon (no affiliation to the real organization with the same name), filling out a “New Action Request” form. Thielen decides which users to send the action to and what he wants them to do, like “reply to this Tweet with a message of support and encouragement” or “Reply to this post calling out the author for sharing misinformation.” The user can also provide a link to direct supporters to, and provide talking points, like “This post is dishonest and does not reflect actual figures and realities,” “The President’s record on the economy speaks for itself,” and “Inflation has decreased [sic] by XX% in the past six months.” The form also includes an “Additional context” box where the user can type additional detail to help the AI target the right supporters, like “Independent young voters on Twitter.” In this case, the demo shows how Impact could direct a group of supporters to a factual tweet about the International Court of Justice opinion critical of Israel’s occupation of the Palestinian territories and flood the replies with AI-generated responses criticizing the court and Hamas and supporting Israel.

    In another section titled “Is it effective?,” the company explains that “It is well-documented that social media algorithms preference [sic] pulses of high-energy activity over pure volume […] A coordinated group of people working together, saying the same thing in different words from all kinds of different places/accounts/etc., can have a massive impact on what is trending etc.”

    • Midnight@slrpnk.netOPM
      1 month ago

      A section of the overview document titled “Team and Timeline: Where We Are Today” says that “The first version of Impact has been built, and we are starting to deploy it with a few pilot initiatives.” That section also states that “We believe it is critical to move quickly, and have assembled a team of world-class technologists and organizers who are committed to working on Impact.”

      However, in an interview, the two people behind the app, Dmitry Shapiro and Thielen, said that Impact is just a prototype at this point, that only eight people have downloaded the app so far, and that while they are showing it to people, no initiatives are currently using it and no AI-generated text from the app has been posted to social media.

      When I asked why the overview document says that the app is starting to deploy with a few pilot initiatives, Shapiro said “I think that’s loose language in a document,” and reiterated that there are currently no active initiatives or paying customers.

      Thielen said that Impact is a response to the misinformation and inauthentic behavior that has taken over social media, and a recognition that platforms like Twitter are not going to properly address those issues. He also said it only took him a couple of weekends to build the Impact prototype, and that it would be easy and cheap for someone else to build it as well.

      “I see this sort of thing [Impact] as inevitable,” Thielen said. “Social media is not getting cleaner and nicer and more representative of reality, it’s only getting worse. Someone is going to have to make some kind of tool that elevates normal people’s voices and allows people to engage collectively in real time to be able to affect any sort of change on here.”

      Shapiro is a former product manager, and Thielen previously founded a company called Koji, which was acquired by Linktree last year. Currently, Shapiro is CEO of MindStudio, a platform for developing AI-powered applications, where Thielen is CTO.

      Becca Lewis, a postdoctoral scholar at the Stanford Department of Communication, said that when discussing bot farms and computational propaganda, researchers often use the term “authenticity” to distinguish posts shared by average human users from posts shared by bots or by people paid to post them. Impact, she said, appears to use “authentic” to refer to posts that seem like they came from real people or accurately reflect what they think, even if those people didn’t write the posts.

      • Midnight@slrpnk.netOPM
        1 month ago

        “But when you conflate those two usages, it becomes dubious, because it’s suggesting that these are posts coming from real humans, when, in fact, it’s maybe getting posted by a real human, but it’s not written by a real human,” Lewis told me. “It’s written and generated by an AI system. The lines start to get really blurry, and that’s where I think ethical questions do come to the foreground. I think that it would be wise for anyone looking to work with them to maybe ask for expanded definitions around what they mean by ‘authentic’ here.”

        In another video demo Impact shows how a fake organization named “Pro-Democracy” can share a video in support of Kamala Harris with users and ask them to share it to TikTok alongside an AI-generated caption.

        “These AI tools are so new that we don’t yet have clear norms surrounding when it’s acceptable to use AI in the democratic process,” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, said when 404 Media showed him the Pro-Democracy demo video. “If AI can help someone articulate a view they truly hold, it could empower people who might not otherwise participate and increase involvement in civic discourse. But there are also risks. People may become overly reliant on AI models and passively share AI-generated content that they haven’t checked themselves.”

        The “Impact platform” has two sides. There’s an app for “supporters (participants),” and a separate app for “coordinators/campaigners/stakeholders/broadcasters (initiatives),” according to the overview document.

        Supporters download the app and provide “onboarding data” which “is used by Impact’s AI to (1) Target and (2) Personalize the action requests” that are sent to them. Supporters connect to initiatives by entering a provided code, and these action requests are sent as push notifications, the document explains.

        “Initiatives,” on the other hand, “have access to an advanced, AI-assisted dashboard for managing supporters and actions.”

        In the Stop Anti-Semitism demo, Thielen directs supporters to this tweet, about a July 19 International Court of Justice Advisory Opinion that Israel’s presence in the occupied Palestinian territories is illegal and should stop, an opinion it also shared in 2004.

        In the Impact demo video Thielen doesn’t instruct supporters to correct any misinformation in the tweet and instead asks supporters to “provide additional context and set the record straight.”

        Specifically, it gives supporters the following “talking points.”

        The ICJ has a known history of anti-semitism
        There are lots of accusations that are not vetted or fact-checked, and a lot of misinformation is damaging public opinion of Israel
        Where is the ICJ ruling on Hamas?
        The ICJ and ICC have zero jurisdiction over Israel or the United States. There [sic] rulings mean absolutely nothing. 
        

        “Think of these as the core substance of the response that you want,” Thielen says in the video, and explains that some of the responses that will be AI-generated based on those talking points may include just one of them, more than one, or a synthesis of several.

        In the “additional context” box Thielen writes that the target audience should be “People who have been seeing a lot of misinformation about Israel and the war online, and find themselves increasingly sympathetic to Gaza. Encourage them to do more research.”

        Impact then generates a “seed” for each supporter. “This is what makes the messages all appear to be coming from different perspectives and angles.”

        An example of one seed shown in the demo reads: “Informative and calm, longer, providing historical context, link to reputable sources.”

        “Frustrated and urgent, medium, highlighting double standards, use caps for emphasis,” reads a seed to another supporter. The demo video also shows what the push notification each supporter would get is based on the seed, as well as the “Draft message” Impact is asking them to share. According to the video, the push notification this supporter would get would read: “Dana, respond to the tweet about the ICJ ruling on Israel. Add context and correct any misinformation.”

        The draft message for this user reads:

        “Where’s the ICJ ruling on Hamas? The court’s history of anti-Semitism is CLEAR. So much misinformation out there is warping public opinion. Before jumping to conclusions, DO YOUR RESEARCH. The ICJ has ZERO jurisdiction over Israel anyway!”

        “Meme-like, very short, pointing out hypocrisy, include trending hashtag,” another seed says. The generated draft message based on that seed is: “ICJ ruling on Israel but silent on Hamas? 🤔 Make it make sense. #DoubleStandards.”

        “The goal is to create a well-rounded yet consistent narrative in a way that makes it easy for your supporters to just tap ‘copy,’ paste this in, and then they’re good to go,” Thielen says in the video.
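        The seed mechanism described above can be sketched roughly as follows. This is a hypothetical reconstruction, not Impact’s actual code: the style axes, function names, and prompt format are all assumptions based on the seeds shown in the demo (“Informative and calm, longer, …”), and a real system would pass the assembled prompt to a language model rather than just printing it.

```python
import random

# Style axes inferred from the seeds shown in the demo video.
TONES = ["Informative and calm", "Frustrated and urgent", "Meme-like"]
LENGTHS = ["very short", "medium", "longer"]
FLOURISHES = [
    "link to reputable sources",
    "use caps for emphasis",
    "include trending hashtag",
]

def make_seed(rng: random.Random) -> str:
    """Compose a per-supporter style 'seed' by sampling each style axis."""
    return ", ".join([rng.choice(TONES), rng.choice(LENGTHS), rng.choice(FLOURISHES)])

def build_prompt(talking_points: list[str], seed: str, context: str) -> str:
    """Assemble the text that would be sent to a language model,
    combining shared talking points with a per-supporter seed."""
    points = "\n".join(f"- {p}" for p in talking_points)
    return (
        "Write a social media reply.\n"
        f"Style: {seed}\n"
        f"Audience: {context}\n"
        f"Talking points:\n{points}"
    )

rng = random.Random(0)  # fixed RNG so the sketch is reproducible
prompt = build_prompt(["Point A", "Point B"], make_seed(rng), "young voters")
print(prompt)  # only the "Style:" line varies between supporters
```

        Because every supporter shares the same talking points but gets a different seed, the generated replies carry one consistent narrative while appearing to come “from different perspectives and angles,” as the demo puts it.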

        When I asked Thielen why the demo showed Impact directing users to flood a factual tweet with replies trying to undermine it, he said that he did not give the specifics of the demo a lot of thought.

        “That was just me being lazy,” he told me. “I just typed ‘Israel’ into Twitter search and clicked on the top thing without looking at it.”

        Twitter’s “platform manipulation and spam policy” states that “You may not use X’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience or platform manipulation defenses on X.” Twitter also says that prohibited behavior includes “coordinated activity, that attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting.” However, it’s unclear if what Impact proposes would violate Twitter’s policy, which also states that “coordinating with others to express ideas, viewpoints, support, or opposition towards a cause,” is not a violation of this policy.

        “Coordinated groups of people can show up and help, or coordinated groups of people can show up and harass,” Shapiro said. “We don’t think coordination is in any way a bad thing. We think it’s a great thing, because you can get stuff done, and if you’re doing good, truthful things, then I don’t see any problems.”

        Twitter did not respond to a request for comment.

        “If social media users aren’t transparent about their own AI use, others may lose trust in online forums as it becomes harder to distinguish human writing from synthetic prose,” Goldstein said in response to the Pro-Democracy demo video.

        “I think astroturfing is a great way of phrasing it, and brigading as well,” Lewis said. “It also shows it’s going to continue to siphon off who has the ability to use these types of tools by who is able to pay for them. The people with the ability to actually generate this seemingly organic content are ironically the people with the most money. So I can see the discourse shifting towards the people with the money to shift it in a specific direction.”

  • Monkey With A Shell@lemmy.socdojo.com
    1 month ago

    Who the ‘bad guys’ are is going to depend on who’s paying for the service. This thing sounds like an atrocity waiting to be unleashed. I might have less issue with it as just a ‘rally call’ tool to bring attention to a post, but the provisioning of generated content for these users to post is just going to turn the whole concept sour.

    We already have some idea of what happens when AI starts feeding on its own output: minor errors that nobody corrected get ingested and mutated/amplified further in a self-defeating cycle of rot. Granted, humans are pretty susceptible to the same behavior, but at least with us it’s not done at machine speed.