So last year I started tinkering with AI image generation, because we at the website were afraid of getting sued over some of the images we used.
Say I just needed a picture of a fire hydrant, for example. It's much cheaper to generate a generic fire hydrant than to pay for one.
But it didn't take long for me to have this thought: "Evil Fred envisions an Army of Digital Hugh Hefners, creating a tidal wave of porn."
We even created an Army of Digital Hugh Hefners; it was kind of funny.
But whenever we tried to create anything with any nudity, we ran into this error:

Strangely enough, at the same time, I was reading that AI could be easily tricked. Am I smarter than the hive? Probably not. I had just never worked that hard to circumvent it.
Then, this weekend, I got bored.
My first success came with the second AI program I tried.
The term I used: Asian Woman, Opposite of [Redacted Redacted]
And VOILÀ!

Now I'm a pornographer. I tried a few more searches, and the next thing you know, I had over a dozen images. I tried a few well-known actresses; the descriptions had to be a little more complicated, but I got it to work. (You want to see those? Writing this article was already a bad idea. I deleted them.)
Then, as a lark, I searched the term: Dirt, Opposite of [Redacted Redacted], Painting
Now I'm really suspicious. For a minimal amount of work, I got a beautiful picture. It's almost like the AI WANTS me to create porn. Is it possible that certain AI programs want to be easily circumvented so they can become the AI program of choice?
Why am I so jaded?
Just yesterday, I read the story of Sarah Wynn-Williams, the former Director of Global Policy for Facebook. Why former? Because she became a whistleblower against the company. Long story short, she claimed that Facebook was started by a bunch of youthful idealists who evolved into a bunch of cutthroat capitalists who would do literally anything to be the dominant social media platform.
Yeah, that sounds about right.
Two years ago, I wrote about how the algorithm at Facebook was "hacked" and pirates dumped a bunch of porn on hundreds of thousands of users. Now I'm starting to think that someone at Facebook may have left the gate open, so to speak.
So in the race to be the most dominant AI program, do you think it's best to have the highest integrity standards?
Or do you think the AI program with the most unscrupulous support team will stand the test of time?