• Carrolade@lemmy.world

      This almost makes me think they’re trying to fully automate their publishing process. So, no editor in that case.

      Editors are expensive.

      • YAMAPIKARIYA@lemmyfi.com

        If they really want to do it, they can just run a local language model trained to proofread stuff like this. That would be way better.

          • YAMAPIKARIYA@lemmyfi.com

            I don’t think so. They are using AI from a third party. If they train their own specialized version, things will be better.

            • FiniteBanjo@lemmy.today

              Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.

                • FiniteBanjo@lemmy.today

                  You can literally use tools that check grammar perfectly without using AI. What an LLM does is predict which word comes next in a sequence, and when it’s wrong, as it often is, you’ve just attempted to publish a paper full of hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
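The “predict the next word” point can be illustrated with a toy bigram model. This is a deliberately minimal sketch, nothing like how production LLMs actually work, but it shows the core move: pick a likely continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation, or None if the word is unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_bigram(
    "the patient was stable . the patient was discharged . the wound healed"
)
print(predict_next(model, "patient"))  # the word seen most often after 'patient'
```

Note that the model happily emits a fluent continuation regardless of the facts, which is exactly the hallucination problem scaled down.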

                • BearGun@ttrpg.network

                  Proofreading involves more than just checking grammar, and AIs aren’t perfect. I would never put my name on something to get published publicly like this without reading it through at least once myself.

    • TheFarm@lemmy.world

      This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.

      It does make you wonder how many people are successfully putting AI-generated garbage out there if they’re careful enough to remove obviously AI-generated sentences.

      • BluJay320@lemmy.blahaj.zone

        I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.

        Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.

        Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.

        EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.

  • magnetosphere@fedia.io

    To me, this is a major ethical issue. If any actual humans submitted this “paper”, they should be severely disciplined by their ethics board.

  • shadowtofu@discuss.tchncs.de

    This article has been removed at the request of the Editors-in-Chief and the authors because informed patient consent was not obtained by the authors in accordance with journal policy prior to publication. The authors sincerely apologize for this oversight.

    In addition, the authors have used a generative AI source in the writing process of the paper without disclosure, which, although not being the reason for the article removal, is a breach of journal policy. The journal regrets that this issue was not detected during the manuscript screening and evaluation process and apologies are offered to readers of the journal.

    “The journal regrets”. Sure, the journal. Nobody assuming responsibility …

    • Taako_Tuesday@lemmy.ca

      What, nobody read it before it was published? Whenever I’ve tried to publish anything, it gets picked over with a fine-toothed comb. But somehow they missed an entire paragraph of the AI equivalent of that joke from Parks and Rec: “I googled your symptoms and it looks like you have ‘network connectivity issues’”

    • Patrizsche@lemmy.ca

      Daaaaamn, they didn’t even get consent from the patient 😱😱😱 That’s even worse

      • Frenchy@aussie.zone

        I mean, holy shit, you’re right: the lack of patient consent is a much bigger issue than getting lazy writing the discussion.

      • exscape@kbin.social

        The entire abstract is AI. Even without the explicit mention in one sentence, the rest of the text should’ve been rejected as nonspecific nonsense.

        • canihasaccount@lemmy.world

          That’s not actually the abstract; it’s a piece from the discussion that someone pasted nicely with the first page in order to name and shame the authors. I looked at it in depth when I saw this circulate a little while ago.

    • dustyData@lemmy.world

      We are in top dystopia mode right now. Students have AI write articles that are proofread and edited by AI, submitted to automated systems that are AI vetted for publishing, then posted to platforms where no one ever reads the articles posted but AI is used to scrape them to find answers or train all the other AIs.

    • hydroptic@sopuli.xyz

      It’s Elsevier, so this probably isn’t even the lowest quality article they’ve published

  • repungnant_canary@lemmy.world

    Maybe, if reviewers were paid for their job they could actually focus on reading the paper and those things wouldn’t slide. But then Elsevier shareholders could only buy one yacht a year instead of two and that would be a nightmare…

    • adenoid@lemmy.world

      Elsevier pays its reviewers very well! In fact, in exchange for my last review, I received a free month of ScienceDirect and Scopus…

      … Which my institution already pays for. Honestly it’s almost more insulting than getting nothing.

      I try to provide thorough reviews for about twice as many articles as I publish in an effort to sort of repay the scientific community for taking the time to review my own articles, but in academia reviewing is rewarded far less than publishing. Paid reviews sound good but I’d be concerned that some would abuse this system for easy cash and review quality would decrease (not that it helped in this case). If full open access publishing is not available across the board (it should be), I would love it if I could earn open access credits for my publications in exchange for providing reviews.

      • Ragdoll X@lemmy.world

        I’ve always wondered if some sort of decentralized, community-led system would be better than the current peer review process.

        That is, someone can submit their paper and it’s publicly available for all to read, then people with expertise in fields relevant to that paper could review and rate its quality.

        Now that I think about it, it’s conceptually similar to Twitter’s community notes, where anyone with enough reputation can write a note and if others rate it as helpful it’s shown to everyone. Though unlike Twitter there would obviously need to be some kind of vetting process so that it’s not just random people submitting and rating papers.
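One minimal way to sketch the reputation-gated rating idea. Everything here is invented for illustration (the class names, the threshold, the weighting scheme); a real system would need far more careful identity vetting and scoring.

```python
from dataclasses import dataclass, field

MIN_REPUTATION = 10  # arbitrary threshold: who may rate a paper at all

@dataclass
class Reviewer:
    name: str
    reputation: int = 0

@dataclass
class Paper:
    title: str
    ratings: list = field(default_factory=list)  # entries: (reviewer, rating 1..5)

    def rate(self, reviewer: Reviewer, rating: int) -> bool:
        """Accept a rating only from sufficiently reputable reviewers."""
        if reviewer.reputation < MIN_REPUTATION or not 1 <= rating <= 5:
            return False
        self.ratings.append((reviewer, rating))
        return True

    def score(self) -> float:
        """Reputation-weighted mean rating, community-notes style."""
        if not self.ratings:
            return 0.0
        total = sum(r.reputation * s for r, s in self.ratings)
        weight = sum(r.reputation for r, _ in self.ratings)
        return total / weight

paper = Paper("Example case report")
alice = Reviewer("alice", reputation=50)
mallory = Reviewer("mallory", reputation=0)
paper.rate(alice, 2)
paper.rate(mallory, 5)  # rejected: below the reputation threshold
print(paper.score())    # only alice's rating counts
```

The weighting means a high-reputation expert’s rating moves the score more than a newcomer’s, which is the same trade-off community notes makes.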

          • fossilesque@mander.xyzOPM

            I feel like I’ve seen this model before; I know I’ve heard it. There are better ways to do it than your suggestion, but it’s there in spirit. Science is a conversation, and it would be a really cool idea to make room for things like this. In the meantime, check out PubPeer; it has extensions for browsers. Super useful, and you have to attach your ORCID to be verified. Everyone can read it, though.

  • Nobody@lemmy.world

    In Elsevier’s defense, reading is hard and they have so much money to count.

  • blackstampede@sh.itjust.works

    I started a business with a friend to automatically identify things like this, fraud like what happened with Alzheimer’s research, and mistakes like missing citations. If anyone is interested, has contacts or expertise in relevant domains or just wants to talk about it, hit me up.
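A first-pass screen for the failure mode in this thread can be as simple as scanning manuscript text for tell-tale chatbot boilerplate. This is a naive sketch: the phrase list and function name are made up for illustration, and real fraud detection is much harder than pattern matching.

```python
import re

# Phrases that commonly leak from chatbot output into pasted manuscript text.
AI_BOILERPLATE = [
    r"as an ai language model",
    r"i don't have access to real-time",
    r"certainly[,!] here is",
    r"regenerate response",
]

def flag_ai_boilerplate(text: str) -> list[str]:
    """Return the boilerplate patterns found in the manuscript text."""
    lowered = text.lower()
    return [p for p in AI_BOILERPLATE if re.search(p, lowered)]

manuscript = (
    "In summary, the management of this injury is challenging... "
    "I'm very sorry, but I don't have access to real-time information."
)
print(flag_ai_boilerplate(manuscript))
```

A screen like this only catches the laziest cases; anyone who proofreads their AI output once will slip past it, which is the commenter’s point above about careful authors.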

  • Diabolo96@lemmy.dbzer0.com

    They mistakenly sent the “final final paper.docx” file instead of the “final final final paper v3.docx”. It could’ve happened to any of us.

  • I Cast Fist@programming.dev

    Raneem Bader, Ashraf Imam, Mohammad Alnees, Neta Adler, Joanthan ilia, Diaa Zugayar, Arbell Dan, Abed Khalaileh: you are all accused of using ChatGPT or whatever else to write your paper. How do you plead?