• FaceDeer@kbin.social · 1 year ago

      There is nothing against copyright law to read data that a person has put online in a public, unrestricted manner for the purpose of having it be read.

      • pup_atlas@pawb.social · 1 year ago

        That’s not what’s happening, though: they are using that data to train their AI models, which pretty irreparably embeds identifiable aspects of it into the model. The only way to remove that data from the model would be an incredibly costly retrain. It’s not literally embedded verbatim anywhere, but it’s almost as if you took a photograph of a book. The data is definitely different, but if you “read” it (i.e. make the right prompts, or enough of them), there’s the potential to get parts of the original data back.

        • FaceDeer@kbin.social · 1 year ago

          > which pretty irreparably embeds identifiable aspects of it into their model.

          No, it doesn’t. The model doesn’t contain any copyright-significant amount of the original training data; it physically can’t, because the model isn’t large enough. The model only contains concepts it learned from the training data: ideas and patterns, not literal snippets of the data.

          The only time you can dredge a significant snippet of training data out of a model is when that particular snippet was present hundreds or thousands of times in the training set, a condition called “overfitting” that is considered a flaw and that AI trainers work hard to prevent by de-duplicating the data before training. Nobody wants overfitting; using generative AI as a hugely inefficient “copy and paste” function defeats its whole point. It’s very hard to find actual examples of overfitting in modern models.
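          The de-duplication step mentioned above can be sketched in a few lines. This is only a toy illustration of the idea (exact hash-based de-duplication; real pipelines also hunt near-duplicates with techniques like MinHash, which this sketch does not attempt):

```python
import hashlib

def deduplicate(documents):
    """Drop exact-duplicate documents before training (toy sketch)."""
    seen = set()
    unique = []
    for doc in documents:
        # Normalize whitespace so trivially reformatted copies collide
        # on the same hash.
        key = hashlib.sha256(" ".join(doc.split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

corpus = ["the same article", "the  same article", "a different article"]
print(deduplicate(corpus))  # the whitespace-variant copy is dropped
```

          The fewer times any given document appears in the corpus, the less chance the model has to memorize it verbatim.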

          > It’s not literally embedded verbatim anywhere

          And that’s all that you need to make this copyright-kosher.

          Think of it this way. Draw a picture of an apple. When you’re done drawing it, think to yourself - which apple did I just draw? You’ve probably seen thousands of apples in your life, but you didn’t draw any specific one, or piece together the picture from various specific bits of apple images you memorized. Instead you learned what the concept of an apple is like from all those examples, and drew a new thing that represents that concept of “appleness.” It’s the same way with these AIs, they don’t have a repository of training data that they copy from whenever they’re generating new text.

          • pup_atlas@pawb.social · 1 year ago

            I’m aware the model doesn’t literally contain the training data, but for many models and applications the training data is by nature small enough, and the application restrictive enough, that it is trivial to get snippets of almost-verbatim training data back out.

            One of the primary models I work on involves code generation, and in that application we’ve actually observed the model outputting verbatim code from the training data, even though it was trained on a fair amount of data. This has spurred concerns about license violations on the open-source code it was trained on.
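            One naive way to audit for that kind of verbatim leakage is an n-gram containment check between generated output and the training corpus. A minimal sketch (a hypothetical helper for illustration, not any product’s actual tooling):

```python
def verbatim_ngram_overlap(generated, training_docs, n=8):
    """Return n-token runs of `generated` that appear verbatim
    in any training document (toy memorization check)."""
    def ngrams(text):
        toks = text.split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    gen = ngrams(generated)
    hits = set()
    for doc in training_docs:
        hits |= gen & ngrams(doc)
    # A non-empty result flags possibly memorized snippets for review.
    return hits
```

            In practice you would tokenize with the model’s own tokenizer and pre-index the corpus (e.g. with a suffix array or Bloom filter) rather than re-scanning every document per query.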

            There’s also the concept of less-verbatim but “copied” style. Sure, making a movie in the style of Wes Anderson is legitimate artistic expression, but what about a graphic designer making a logo in the “style of McDonald’s”? The law is intentionally pretty murky in this department, with even some colors being trademarked for certain categories in the States. There’s no clear line here, and LLMs are well positioned to challenge what we have on the books already. IMO this is not an AI problem; it’s a legal one that AI just happens to exacerbate.

            • FaceDeer@kbin.social · 1 year ago

              You’re conflating a bunch of different areas here. Trademark is an entirely different category of IP. As you say, “style” cannot be copyrighted. And the sorts of models being trained on social-media chatter are quite different from code-generation models.

              Sure, there is going to be a bunch of lawsuits and new legislation coming down the pipe to clarify this stuff. But it’s important to bear in mind that none of that has happened yet. Things are not illegal by default; you need a law or precedent that makes them illegal. There’s none of that now, and no guarantee that things are going to pan out that way in the end.

              People are acting incensed at AI trainers using public data to train AI as if they’re doing something illegal. Maybe they want it to be illegal, but it isn’t yet and may never be. Until that happens people should keep in mind that they have to debate, not dictate.

              • pup_atlas@pawb.social · 1 year ago

                The law is (in an ideal world) the reflection of our collective morality. It is supposed to dictate what is “right” and “wrong”. That said, I see too many folks believing that it works the other way too: that what is illegal must be wrong, and what is legal must be okay. This is decisively not the case.

                In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But further, I think a lot of the things companies are doing to train these models are seen as “immoral” (me included), and that the law should be changed to reflect that.

                Sure that may mean that “stuff these companies are doing now is legal”, but that doesn’t mean we don’t have the right to be upset about it. Tons of stuff large corporations have done was fully legal until public outcry forced the government to legislate against it. The first step in many laws being passed is the public demonstrating a vested interest in it. I believe the same is happening here.

                • FaceDeer@kbin.social · 1 year ago

                  The problem I have with this is that the argument seems to boil down to “I don’t like this, so it should be illegal.” It puts me in mind of the classic objection on the grounds that something is devastating to your case. Laws should have a rationale beyond simply being what “collective morality” decides; otherwise all sorts of religious prohibitions and moral scares end up embedded in the legal system too.

                  Generally speaking, laws are based on the much simpler and more generic foundation of rights. Laws exist to protect rights, and get complicated because those rights can end up conflicting with each other. So what rights do the two “sides” of this conflict bring to the table? On the pro-AI side people are arguing that they have the right to learn concepts and styles from publicly available data, to analyze that data and record that analysis, and to make use of the products of that analysis. It all seems quite reasonable and foundational to me. On the anti-AI side - arguments based on complete misunderstandings of how the technology works aside - I generally see “because it’s devastating to my future career, your honor.”

                  Anti-AI artists are simply being selfish, IMO, demanding that society must continue to provide them with their current niche of employment and “specialness” by restricting other peoples’ rights through new legal restrictions. Sure, if you can convince enough people to go along with that idea those laws will be passed. That doesn’t make them right. There have been many laws over the years that were both popular and wrong on many levels.

                  Fortunately there are many different jurisdictions in the world. There isn’t just one “The Law.” So even if some places do end up banning AI I don’t think that’s going to slow it down much on a global scale, it’ll just help determine which places get a lead and which places fall behind in developing this new technology. There’s too much benefit for everyone to forego it everywhere.

                  • pup_atlas@pawb.social · 1 year ago

                    I’m out and about today, so apologies if my responses don’t contain the level of detail I’d like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy, that is how I believe the current system is designed to work in an ideal sense, even if it works differently in practice; that’s what it is meant to protect, from my perspective.

                    As for anti-AI artists, let me pose a situation to illustrate my perspective. As a prerequisite: a large part of a lawsuit, and of the ability to advocate for a law, rests on standing, the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:

                    I am a furry, and a LARGE part of the fandom is based on art and artists. A core furry experience is commissioning art of your character from other artists. It’s commonplace for these artists to have a very specific, identifiable signature style, so much so that it is trivial for me and other furs to identify an artist by their work alone at just a glance. Many of these artists have shifted to making their living full-time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on one artist’s style, and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard to disprove) impression that they support things they don’t, like art involving political parties or illegal activities, which I have seen happen already. This almost approaches defamation, in my opinion.

                    One argument you could make is that this is similar to the invention of photography, which directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn’t replace painters’ work verbatim; it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it’s not possible, or at least not immediately apparent, which pieces are authentic and which are not. That is a VERY large problem the law needs to solve as soon as possible.

                    Further, I believe the same or similar problems exist in LLMs as in the generative image models above. Sure, with enough training those issues are lessened in impact, but where is the line between what is okay and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data in a way that will always approximate it, depending on the depth of the training data. While overfitting might be considered a negative in the industry, it’s still a possibility, and until there are regulations establishing the fitness of commercially available LLMs, I can envision situations in which management cuts training short once it’s “good enough”, leaving overfitting issues in place.

                    Lastly, with respect, I’d like to push back on both the notion that I’d like to ban AI or LLMs, and the notion that I’m not educated enough on the subject to adequately debate regulations on it. Both are untrue. I’m very much in favor of developing the technology and exploring all its applications. It’s revolutionary, and worthy of the research attention it’s getting. I work on a variety of models across the AI and LLM space professionally, and I’ve seen how versatile it is. That said, I have also seen how over-publicized it is. We’re clearly (from my perspective) in a bubble that will eventually pop. Products across nearly every industry claim to use AI for this and that, and while LLMs in particular are amazing and can be used in a ton of applications, it’s certainly not all of them. I’m particularly cautious of putting new models in charge of dangerous or risky processes where they shouldn’t be, before we develop adequate metrics, regulation, and guardrails. To summarize my position: I’m very excited to work towards developing them further, but I want to publicly express that this is not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

    • anlumo@feddit.de · 1 year ago

      Short messages usually aren’t creative enough to be protected by copyright. Exceptions might be poems and similar texts.