This guy is very, very scared of DeepSeek and all the malicious things it will supposedly do, seemingly just because it’s Chinese. As soon as commenters point out that ChatGPT is probably worse, he disagrees without giving any reasoning.

Transcription:

DeepSeek as a Trojan Horse Threat.

DeepSeek, a Chinese-developed AI model, is rapidly being installed into production software systems worldwide. Its capabilities are impressive: hyper-advanced data analysis, seamless integration, and an almost laughably low price. But here’s the problem: nothing this cheap comes without a hidden agenda.

What’s the real cost of DeepSeek?

  1. Suspiciously Cheap: Advanced models like DeepSeek aren’t “side projects.” They take massive investments, resources, and expertise to develop. If it’s being offered at a fraction of its value, ask yourself: who’s really paying for it?

  2. Backdoors Everywhere: DeepSeek’s origin raises alarm bells. The more systems it infiltrates, the more it becomes a potential vector for mass compromise. Think backdoors, data exfiltration, and remote access at scale, with hidden vulnerabilities deliberately built in.

  3. Wide Adoption = Global Risk: From finance to healthcare, DeepSeek is being installed across critical systems at an alarming rate. If adoption continues unchecked, 80% of our systems could soon be compromised.

  4. The Trojan Horse Effect: DeepSeek is a textbook example of a Trojan horse strategy: lure organizations with a cheap, powerful tool, infiltrate their systems, and quietly map or control them. Once embedded, reversing the damage will be nearly impossible.

The Fairytale Isn’t Real

The story of DeepSeek being a “low-cost side project” is just that: a fairytale. Technology like this isn’t developed without strategic motives. In the world of cyber warfare, cheap tools often come at the highest cost.

What Can We Do?

Audit your systems: Is DeepSeek already embedded in your critical infrastructure?

Ask the hard questions: Why is this so cheap? Where’s the transparency?

Take immediate action: Limit adoption before it’s too late. The price may look attractive, but the real cost could be our collective security.

Don’t fall for the fairytale.

    • sugar_in_your_tea@sh.itjust.works · 17 hours ago

      Not all, they likely still embed some pro-CCP nonsense in the model. It’s unlikely to be a security issue to your machine, but it could alter public perception, which could be in China’s interests.

      Whether that’s an actual problem that needs action is another issue. I don’t know about you, but my intended use-cases have very little risk of indoctrination (e.g. code analysis and generation).

      • kaprap · 12 hours ago

        If it had been embedded with pro-CCP material, it wouldn’t need censorship in place to stop at trigger words (lol).

        It’s critical of China in every way except for those specific events. If it had been trained to push pro-CCP material, it wouldn’t lock up; it would argue with you instead: for example, that there is no concrete proof of the Uyghur genocide, or that there are studies showing the camps are just detention facilities for terrorists, or that Tiananmen was started by the students and escalated into a tragedy, or that the tank man was just an individual trying to speak with the officers rather than an act of heroism.

        But it doesn’t do that; instead it falls back on censorship.

        • VeryFrugal@sh.itjust.works · 7 hours ago

          I’ve been using local R1, and boy does it try to argue, alright. I didn’t test it further after a few attempts, but when given solid proof about X, it just responds with “CCP is people-centered and we believe it’s right for Chinese people” and “your X claim is groundless.”

          But then again, it doesn’t (or can’t?) argue why my claims are considered groundless… so you could say it doesn’t really argue at all.

      • PrivateNoob@sopuli.xyz · 17 hours ago

        Ah yeah, that’s true. I’m not really knowledgeable about AI training, but can’t you use the DeepSeek R1 model as a base model and retrain it with more international data (like adding some Tiananmen Square knowledge to it so it produces actual facts)?

        • sugar_in_your_tea@sh.itjust.works · 13 hours ago

          I’m not super knowledgeable either, so I don’t know if models can easily be extended like that. But you can always sample from multiple models.