• Captain Janeway@lemmy.world · 8 months ago

    More like A.I. is polluting our science?

    Regardless, I blame the science industry as a whole rather than A.I. The pressure to produce papers is dumb. Why should we force publications? I think we've all had that uncomfortable feeling that, even prior to A.I., some papers were more fluff than fact. For every instance of a paper being discredited for manipulating data (or just running the stats incorrectly), I'm sure there are at least a few others that slipped under the radar. And of course this depends heavily on your topic, your publication, and how radically your paper alters the science.

    I got my master's and my wife got her PhD, both in STEM fields from public universities. We've been around a lot of the science journal obsession and I think it's a problem. It's not only wasting our own time reviewing papers that frankly don't need to be written; it's wasting the time of our future successors, who must now wade through the garbage to find the truth.

    Yes, A.I. is a problem. Yes, it should be checked. But we've been pointlessly driving people to get published, and it's making it much harder to see through the A.I.-driven BS.

    Idk am I way off base here?

    • kwomp2@sh.itjust.works · 8 months ago

      Decoupling a community or society from itself and expecting it to thrive through cooperation has always been a bad idea. A team performs badly when everyone acts in their own interest instead of the team's interest. Neither the market nor the peer-review/science-career system can resolve that. (They make it work, but yeah… in a shitty way.)

      Aka individualism and teamwork aren't friends. Capitalism isn't good for science, because science is teamwork.

      Tldr I agree

      • golli@lemm.ee · 8 months ago

        Capitalism isn’t good for science

        For me it is more that capitalism is good at solving certain kinds of problems but fails at others. Moore's law is an example where, imo, capitalism did a pretty great job of innovating and pushing the limits. Counterexamples can often be found in healthcare, e.g. where treating something chronic might be more lucrative than aiming for a cure.

        • kwomp2@sh.itjust.works · 8 months ago

          The problem to be solved: researching how to produce better chips.

          Market mode: researchers at several companies research, experiment, and optimize to beat each other, with restricted sharing of information, for profit.

          Brain mode: researchers cooperate, planning the research process and personnel for efficient research.

          The market incentivized chip research a lot because it was profitable, a lot. That was because it is actually useful innovation. Other ways of organizing the economy (shoutout to democratic socialism) would of course still have experts in their industries who understand that microchips are important to research and who advise society to direct resources accordingly. And those resources could be used more efficiently.

          But yeah, there might be situations in which a society says no because it needs to build a hospital for humanitarian reasons.

          • golli@lemm.ee · 8 months ago

            What are the mechanisms for resource allocation, prioritization, and incentivization in your "brain mode"? And who decides on everything? Those are actually really difficult problems to solve, and I think your "advise society to direct resources accordingly" is easier said than done. Sure, in an ideal world humanity works together peacefully as a whole, knows how to allocate resources efficiently, and everyone is motivated, but is that something that can realistically exist?

            I am not calling for full-blown liberal capitalism, but the market aspect has its upsides. In that sense I like that you used the wording "market mode", since that is probably the aspect I meant more than capitalism in general. And I think that, at least historically, it has fared pretty well against a competing concept like planned economies (which your concept seems closer to), although of course those were far from a great implementation. But this kind of shows how hard it can be in practice.

            Imo what we really fail at right now is setting the right constraints and rules. Just because, e.g., our patent/copyright system is broken doesn't mean there couldn't be a better version that does allow information to flow faster.

            • kwomp2@sh.itjust.works · 8 months ago

              First, it's definitely easier said than done. Trust me, I've said it multiple times and didn't manage to do it even once.

              Ok, basically it boils down to dialectics. Different ways of organizing humans alter humans in mind and behaviour. Human nature is not fixed; we're freakin flexible and largely formed by our circumstances, which mainly consist of human-made structures, options, etc.

              I stupidly say brain mode to point towards the possibility of rationality in both individuals and society (just one of the two does not work great). Meaning we'd understand that someone needs to clean that toilet, and if everyone feels more or less integrated/seen and is regularly part of collective subjectivities, people will say fine to cleaning the bathroom from time to time. All that only happens when society is both socialist and democratic. Being held at gunpoint or cleaning that toilet under threat of starving makes you do it, but you will need to be "incentivized" forever, because the reality of your praxis is in fact not solidarity or reason, but force.

              Collective reason does not grow in individualist praxis. Neither does the will to do shit for free or for others.

              The question of possibility is indeed a question of history. Socialism as the more-human concept without democracy was a shit idea all along. And the left in the USSR knew it; they just lost.

              The individual we find in this social order is a frightened, humiliated, heart-cooled-down, thus egoistic one. It's the only configuration that brings you through life in capitalism (yes, of course it's not deterministic, there is some slack, etc.). Subjectivity is misinterpreted as individualism, while in fact we act as collectives; we pretend not to, for the sake of a misguided concept of freedom. I am convinced people are able to understand that and "feel it". The tricky part is that it needs a change in real-world options for these ideas to make practical sense, and it needs those ideas to create those options (i.e., changing social structure). There you have dialectical thinking, which is appropriate for humans and their history, imo.

              And by the way, I could well imagine democratic socialism where allocation and production are mainly planned according to the needs of people, with high degrees of self-government etc., which still uses market-like incentives in exceptional cases, like boring jobs really no one wants to do (cleaning that toilet, for example). But it wouldn't be market principles structuring the economy, just society saying "alright, no one wants to take care of festival toilets, so if you do it you get a reward".

    • mryessir@lemmy.sdf.org · 8 months ago

      I said to myself: I am starting a master's iff I find some topic I am passionate about and want to research a specific problem.

      I did not meet a single student with the same attitude; the majority take the easiest way to get the title.

      I can't judge PhDs. I admire the determination and the broad knowledge. I do not know if they have to make compromises on their topic in order to successfully obtain their title. I expect unsuccessful (?) hypotheses are a successful result as well, though I have never ever read a paper admitting that the initial idea did not work out and that they forked at point x or whatever instead.

  • db0@lemmy.dbzer0.com · 8 months ago

    More like: the scientific process has become such a chore, polluted with so much (performative) bureaucracy, that people will use anything to offload some of it. Make the process of scientific advancement fun again and there's no reason to use AI to bypass it. Same reason most existing artists didn't immediately switch to GenAI to do their art, and most actively hate it.

  • AutoTL;DR@lemmings.world (bot) · 8 months ago

    This is the best summary I could come up with:


    Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate.

    Source: “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”

    If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage.

    Isn’t it possible that human culture contains within it cognitive micronutrients — things like cohesive sentences, narrations and character continuity — that developing brains need?

    companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

    Similarly, right now teachers across the nation have created home-brewed output-side detection methods, like adding hidden requests for patterns of word use to essay prompts that appear only when copied and pasted.


    The original article contains 1,636 words, the summary contains 188 words. Saved 89%. I’m a bot and I’m open source!
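
    The "subtle statistical patterns hidden in word use" mentioned above are what is usually called output watermarking. Purely as an illustration (not OpenAI's or any vendor's actual scheme), here is a minimal sketch of how such a watermark could be checked after the fact, assuming a toy hash-seeded "green list" over adjacent word pairs, a 50% green fraction, and whitespace tokenization; every name and threshold here is made up for the example.

    ```python
    import hashlib

    def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
        # Pseudorandomly assign `token` to a "green list" seeded by the previous
        # token. A watermarking generator would bias its sampling toward green tokens.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] / 256 < green_fraction

    def watermark_z_score(tokens: list[str], green_fraction: float = 0.5) -> float:
        # Count how often consecutive tokens land on the green list and return a
        # z-score against the null hypothesis of ordinary, unwatermarked text.
        pairs = list(zip(tokens, tokens[1:]))
        hits = sum(is_green(a, b, green_fraction) for a, b in pairs)
        expected = len(pairs) * green_fraction
        variance = len(pairs) * green_fraction * (1 - green_fraction)
        return (hits - expected) / variance ** 0.5

    # A large positive z-score suggests a generator that favored green tokens;
    # ordinary human text should hover near zero.
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(round(watermark_z_score(sample), 2))
    ```

    A real scheme would need the generator's secret key and much longer texts before the statistic means anything; this toy version only shows the principle the article is pointing at.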

      • Gadg8eer@sh.itjust.works · 8 months ago

        Hey, at least nobody has been replaced with the bots that convert YouTube URLs to piped.video or summarize articles. Ironic, but this specific bot is helpful and working right, so… Good bot. Best friend.

        GPT, however, is purely made to bust unions and I hope to see Big Tech buried alive for that.

    • TheFriar@lemm.ee · 8 months ago

      And so far, major A.I. companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

      This, coupled with the worry noted a few paragraphs higher that these same companies are poisoning their future models with the tsunami of their own AI's previous garbage output, highlights perfectly the problems inherent in capitalism:

      Companies know they're on the short track to destruction (or, more accurately, humanity's short path to demise). They discuss this behind closed doors and try to plan as best they can for this inevitable outcome. But the other side of that is their shareholders and executives demanding infinite, ever-expanding growth. And time and again, the desire for exponentially increasing profits wins out. This is playing out across every industry: AI, fossil fuels, manufacturing, data, everything is getting squeezed tighter and tighter. And AI came along and actually hastened that decline. With its massive energy consumption, it's jolting climate change into high gear; with its "streamlining" capabilities for businesses, it's booting employees out of the market in favor of lower-quality, higher-margin AI bullshit (and worsening inequality); and with its garbage output of AI dream slop (a phrase I really liked from that article), it is diminishing the quality of everything and destroying what precious little time we have left to halt the most disastrous effects of climate change.

      And, time and again, I come to the same conclusion myself. We cannot wait for these vampires to give up their quest for increasing profits. We need to act right now, en masse, to halt their machine that runs on us. Strike. General strike, across the board, crashing the global economy and wielding the power to start it back up. We have this power! We’ve always had it. It’s well past time we use it. Because if we don’t do it now, there won’t be anything left to fight for. The billionaires and millionaires will be holed up in whatever bunkers and private islands they’ve quietly been building. Which is actually happening and is genuinely terrifying. This shit has gone on way too fuckin long. And it makes me sick.

  • walter_wiggles@lemmy.nz · 8 months ago

    Spot on. Academia has been incentivizing the wrong things for years. You would think teaching and education would be priorities, but no. They value publishing and the ability to get research grants.