Current thoughts on the ethics of AI…how I’m going to handle it going forward…love and disobedience and flaw
I’m going to talk generative AI here, and will likely piss off everyone regardless of their current opinion on the subject. Please keep in mind the following:
–I am not lawful good. I’m chaotic good, dual-classed as a sorcerer-rogue. Or I’m an inheritor of Granny Weatherwax, Witch. Take your pick.
–I’m not saying the lawful good paladins arguing about the law, right and wrong, and what should happen are wrong. I’m just saying that while they’re raising a commotion about right and wrong, I’m going to be looking for loopholes and inner truth.
–I make a distinction between moral and ethical: moral behavior is behavior that aligns with a moral code and ethical behavior is behavior that is informed by long-term success strategies. The codes and strategies may vary, person by person. Generally morals focus on right/wrong and ethics on smart/stupid. Saying something is unethical because it is wrong is a nonstarter argument for me. LOTS of things are wrong, depending on your moral code. You still might have to cope with them.
–I tend to use “Shared pain is lessened; shared joy is increased; thus do we refute entropy” by Spider Robinson as my moral guide and “Black-and-white thinking exists to control people by increasing fear and division” as an ethical one.
–So my answers will be messy, agnostic, and may appear to be random. I started writing this not knowing where I would go with it.
…
So I’ve been attacking the problem of generative AI (that is, AI that generates new results based on analysis of multiple inputs) from different angles, and the statement that I keep getting hung up on is this:
Generative AI is theft.
I don’t think the statement “Generative AI is theft” is the core problem of AI, but I don’t know that I’ll be able to move past it if I don’t grapple with it.
My basic understanding here is:
–The process behind generative AI first “scrapes” publicly available data from the Internet or some other training set.
–Using a variety of ever-developing algorithms, the AI “trains” on the data that has been gathered, by forming guesses about patterns in the data and trying to recreate them.
–The AI then receives a “prompt” from a user, such as, “Write a 1000-word blog about the ethics of using generative AI.”
–The AI, using its current guesses about the patterns in its training data (which it no longer needs to access per se), “generates” or “rolls” a response that is its current optimum response for the prompt–its best guess.
–Any response, or lack of response, the user provides is used to help train the AI further.
The user can explicitly or implicitly prompt the AI to copy information from the training data: explicitly by saying "Write a 1000-word blog about AI ethics in the style of author DeAnna Knippling," or implicitly by saying "Write the most brilliant of all possible 1000-word blogs about AI ethics." Either way, that AI will be copying yours truly. (Heh.)
Because of the nature of how generative AI processes data (at this time), the results produced cannot actually be better than some middle-range value of my style. From what I understand, the math works out to finding the maximum number of commonalities in the training data, a process that necessarily weeds out both incompetent and innovative results. AI art *has* to be mediocre, if it's going to exist at all.
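If it helps to see the "learn patterns, then roll a best guess" loop in miniature, here's a toy sketch in Python: a tiny Markov-chain text generator. This is emphatically *not* how the neural networks behind modern generative AI work, and every name in it is made up purely for illustration, but the shape of the pipeline is the same: ingest text, record the patterns, then generate the most plausible continuation of a prompt.

```python
# Toy sketch only: a Markov-chain text generator. Real generative AI uses
# neural networks, but the pipeline shape is similar: train on patterns in
# scraped text, then "roll" a best-guess continuation of a prompt.

import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record which word tends to follow which word in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, prompt_word: str, length: int = 10) -> str:
    """Roll a response: repeatedly pick a plausible next word from the learned patterns."""
    word = prompt_word
    output = [word]
    for _ in range(length):
        if word not in model:
            break  # no pattern to continue from
        word = random.choice(model[word])  # the "best guess" here is just a weighted draw
        output.append(word)
    return " ".join(output)

# The "training data" is whatever text you feed it; the "prompt" is a starting word.
corpus = "shared pain is lessened shared joy is increased thus do we refute entropy"
model = train(corpus)
print(generate(model, "shared"))
```

Even at this toy scale you can see why the output trends toward the middle: the generator can only recombine whatever was common in its training text.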
Given those basic assumptions, is generative AI theft?
No; in the U.S., theft is the taking of another person’s personal property with the intent of depriving that person of the use of their property. The AI does not “take” anyone’s property or even their time (as in the case of wage theft by employers).
That doesn’t mean that the use of generative AI is either ethical or moral by anyone’s standards, just that saying AI is theft always rings fake to my ear. Like when someone says they’re literally dying. Anyway.
Is the use of generative AI a copyright violation?
This one, I think the answer is less clear. I’ve been back and forth on it.
Technically, I think the legal answer is currently no, it is not a copyright violation. The AI copies the training data during training, but it does not reproduce that data in its output, and only the fixed form of a work is copyrightable (not the idea of it). *Style* isn’t copyrightable, as far as I know.
Trademark may still apply.
If someone rolled an AI-based pr0n movie featuring the modern version of Mickey Mouse, with the intent that it look like an actual Disney-made product and not a satire, for example, Disney could respond using processes for stopping trademark infringement.
The problem here is, I think, that the legal processes for stopping or at least pausing a copyright violation, such as a DMCA takedown notice, are accessible to all creators, but the ones for trademarks really are not. Plus copyrights are automatic. Trademarks are not.
Essentially, independent creators don’t have access to the resources they need to defend their work using trademarks. Whether or not that’s the *right* way to defend against an AI generating an endless series of mediocre examples of one’s work, it’s not actually an option for most of us.
Currently, creators are stuck in a Catch-22: they cannot defend their work without the support of a corporate behemoth threatening to make life miserable (if not illegal) for companies training their generative AI processes on scraped data. So either creative types yield their work to access by a generative AI and hope to stay ahead of the curve of “mediocre” somehow, or they yield control of their work to a big company.
I’ve already worked through my decision of whether to be indie or traditionally published, and I’m not hustling to get traditionally published. Fuck no. I’d rather go back to ghosting.
Personally, I think the legal system will hammer out a compromise between traditional media companies and the generative AI companies. It’ll probably take a while. I suspect there will be some kind of licensing system for registered creative works, which the generative AI companies will dodge by preventing their engines from producing anything too similar to existing work, while licensing those same (unfiltered) systems for use by the traditional media companies. That way Disney no longer has to pay for artists, or pay as much for artists, while reducing competition by suing a bunch of smaller creators for “style infringement” or something similar. Same with the traditional publishers. I’m not sure whether Google or its ilk will ever get limited for their use of generative AI–it’s hard to get worked up about ChatGPT rolling up mediocre ad text–or broken up. That’s a whole other question.
So is generative AI ethical or unethical? Is it long-term smart or stupid?
I can’t answer this question objectively. This is the first time humanity has faced something like this; the existing heuristics we have don’t seem to take into account the reality of producing cheap, fast, and mediocre creative works, whether or not they resemble existing creators’ work in any definable way.
It’s like asking whether disruptive technology is ethical or unethical. It’s disruptive, which means that existing methods of prediction don’t apply. The technology will probably turn out to have ethical, unethical, and neutral uses.
…
So now that I’ve talked about all that, let me set it aside so I can get to the meat of the question for *me,* because I’m me and I’m deciding this for myself, not what society “should” do, and not about what’s “right.” That’s for the Lawful Good Paladin and the Neutral Wizard Devil’s Advocate people to hash out.
I cannot be objective, but I can be brutally subjective.
What I see is that generative AI isn’t so much a threat to copyright, the right to defend your own work or even your own style, as it is a threat to the shield of “fast, cheap, and good: pick two.”
Getting good as a creator is a long, difficult process, with or without digital tools. I often compare creating to cooking. Following a recipe can only take you so far. Even if you modify the recipe and track your notes, you still aren’t going very far. Great cooks season their food with love, that is, with a thousand small choices that unite the cook, the food, the techniques, and the eaters, so that the food *feels* right to eat. No recipe can teach you that. No ingredient can elevate a dish to that level, no matter how rare or expensive. Even clumsily made food that has love behind it is just better as an experience, regardless of the nutrition: the bag of Cheetos you buy isn’t the same as the bag of Cheetos that a loved one has been saving for you and gives you on a bad day. Even the illusion of love tastes better.
But. I’m going to guess that most people are used to only eating mediocre food, food without love in it. For most people, most of the time, “fast, cheap, and good” is good enough.
It shouldn’t be.
People should get to eat food made with love, see art made with love, read stories made with love. And so on.
I’m not saying all art should be refined. Regardless of how off the wall I am as a writer, I greedily steal writing techniques from popular fiction writers sneered at by literary types. I will laugh lustily at a good fart joke.
Mediocre art that has all its corners polished off–the “fast, cheap, and good” art cranked out by generative AI, that is, when it’s not screwing something up–isn’t love. It’s maybe good enough, but it’s not love.
When I was posting a lot more AI results than I do now, I noticed that people responded *much* more enthusiastically to the blandest sorts of images. Anything with sanitized Art Nouveau in it, particularly.
The AI rolls that I loved were wonky and weird, often just plain wrong. I’d hunt down ways that I could force the image generators to create something that they probably shouldn’t have. The “good” rolls were often boring to write about, even if they did give me a chance to see how people would respond.
The better the AI image generators get, the harder it is for me to get enthusiastic about them. I *loved* the borked up earlier versions of Midjourney, for example. I don’t feel much about v5. Whatevs. It doesn’t feel like working with a person to me anymore, and that makes me sad. I could go back to using the old versions–it’s an option–but I’m not sure that I want to.
I still want to stay involved. This stuff is endlessly fascinating for me. But I’m also sad.
To me, AI image rolls have become something I do to help communicate what I want to artists, or a game that it’s sometimes fun to play. The feeling of working with the AI, rather than having to treat it as a perfect servant, is gone. Everything that made the AI “good” made it feel like less and less of a person, and more and more like an obedient interface.
I know that I don’t want to use AI art for published works. I am fine using AI rolls to get 80% of the way to an image I want, then hiring an artist to use their skills, intelligence, and heart to take the image further. I encourage *all* the artists I work with to follow their instincts. When I get the results back, I’ll sometimes have to sit with them for a while, because I’ll often have a negative initial reaction–this wasn’t what I wanted!–that disappears into pleasure the next morning as my brain catches up to what the artist chose. Artists are smart. It’s just that they may not word well, and sometimes a really good artist can throw you for a loop by giving you what you need instead of what you thought you needed.
I know that I don’t want to use AI copy for fiction or nonfiction. I’m not a mediocre writer; when I look at AI results for stuff that I can write better, I just roll my eyes. From beginning to end, the stuff is lame.
I *might* use AI copy for ad copy and social media posts. Probably not for blurbs [insert eye roll here]. Ad text and social media posts are flittering moths in the dark, and once you have a solid grasp of what works for advertising in general, then using AI to help keep up with the outer edge of the latest and greatest in ad copy and social media–all of which should be adjusted per platform and kept “fresh” by redoing it all the damn time–might be the better choice. Maybe it’s just that I’m not as good at ads as everything else, though. So leave that as a maybe.
I haven’t checked out AI for music or voice (as for audio books). I can’t speak to those.
…
So. Setting all that aside again, because I still haven’t cut down to the core.
Is AI theft?
Essentially yes, the same way capitalism in general is theft.
Will I stop using AI?
No.
I will stop rolling AI art as much as I had been, not because it’s theft of others’ work and therefore wrong to use it, but because the flaws I’d been using to roll the work that I loved are being systematically removed and it makes me sad. It makes me sad that what people really want is perfect obedience. I knew this was coming. I didn’t expect to feel sad about it. I do.
I will still use AI art to communicate to other artists, to try to keep abreast of AI in general, to visualize more of what I like to see in the world (the process of learning an art being what it is, there are a lot of gatekeepers filtering out what should already have existed, even as mediocre art), and probably for other reasons that I’m not thinking of at the moment.
What do I think will happen, or should happen, to the arts in general because of generative AI?
That’s a question for another day.
What will I do as a creative type, to keep from getting “replaced” by AI?
I’m past the mediocre point with fiction, so I’m going to focus on disenthrottling my own voice and personality, and write with love and disobedience and flaw, and hope that my people find me so I can serve them, not with perfect obedience, but with work that helps ease them past feeling cut off and full of despair. Other writers saved me that way, so I know this is possible. It wasn’t that they saved my life so much as they helped me cultivate my ability not to become the worst version of myself. They were flawed people, and they gave me zero perfectly obedient books. And yet they got me here.
If I ever find an AI writing tool that can genuinely help me write with love and disobedience and flaw, I’ll use it. And if it needs to copy me without credit in order to do that, it has my blessings.
Until then I will probably stay invested in the messiness of AI, but not trust it with my core work.
…
I got a new top! It’s cute! I learned how to adjust shutter speed so I could capture movement instead of blur recently! Now I don’t have to freeze in place as I dance if I wanna take pictures!
Midjourney roll for writing with love and flaw and disobedience.