Follow-Up

Just FYI, I was trying to get Bing Image Creator to show me some magical girls celebrating Thanksgiving, and this popped out.

I like how it seems to have confused her Moon Stick with an orange on a fork, or maybe some unholy fusion of an orange and a stemmed wine glass. I also like how her turkey is nestled in a fruit salad. The generator made several similar images, and in all of them, the turkey is surrounded by fruit for some reason.

A few weeks ago, I tried to get it to show me pictures of Secret Jouju because my daughter is a fan. Unfortunately, it hasn’t been trained on that one.

Happy Thanksgiving

The Arrogance of the Librarian

I am a librarian. I like my job a lot. Unlike my previous career, it doesn’t require sixteen-hour shifts of hard labor plus homework, so I view it as my retirement. But sometimes I have to deal with other librarians, and then my job borders on intolerable.

I have just returned from one of those self-congratulation sessions that librarians euphemistically call “conferences.” Because librarians’ egos are inversely proportional to their relevance, they currently border on a god complex, and this particular conference was perhaps the worst example of librarian narcissism I have yet witnessed. Last year, I was at a conference where the attendees were invited to applaud themselves for the heroic virtue of being librarians, but this conference was even worse.

The keynote speaker was James LaRue, who is both a stuffy elitist and extraordinarily dishonest, and whom I might discuss at length in another post. Most of the sessions were spews of empty jargon, but the worst was the final session I attended, on large language models, which have unfortunately been dubbed “A.I.”

The presenters had little to say, though they showcased some fascinating new software such as Perplexity, Connected Papers, and SciSpace, which can amass, analyze, summarize, and show the interconnectedness of academic papers, as well as determine whether a consensus exists.

What was striking about this presentation was not the tools it showed off but the naïve optimism of its presenters. It is no secret that the use of the library has been declining over the last few decades largely because of the rise of the internet. In response, librarians have invented what they call “information literacy,” which nobody can quite define but which every librarian writes or talks about. Supposedly, perhaps because they are surrounded by books all day, librarians have a unique ability to winnow fact from fiction and identify “fake news.” Everybody would realize how important librarians are if we could just get those poor, benighted, Google-using souls back in through the library doors.

In reality, anyone who’s spoken to a librarian for more than five minutes knows that librarians have no special powers when it comes to discerning facts. The aforementioned James LaRue, a rather prominent librarian, can’t even make an argument without ad hominem, a red herring, or whataboutism, let alone keep his facts straight. Once you cut through the jargon and wade through the self-righteous academic articles on the topic, “information literacy” is little more than a desperate attempt to prove that the library, and the librarians, are still relevant in the information age.

But according to the presenters of this talk on “A.I.,” software like Perplexity will finally–finally–bring our patrons back to us. Now, at last, everyone will need librarians again: After all, how could anyone figure out how to use the new software without Dear Librarian to show him? How could he type a question into a prompt without a librarian looking over his shoulder and suggesting better keywords?

This claim was baffling, and it was made all the more baffling by the impenetrable jargon it was couched in (librarians, we were told, will become “conveners of communities of practice”). But it reached the heights of absurdity when the presenters displayed a new program they had “designed” and “written” without knowing how to compose a single line of code: They simply asked ChatGPT to write the program for them, and–ta da!–it worked.

You read that right: People who cannot write a single line of code believe they will be the new A.I. experts. This prophecy will no doubt prove as prescient as the prediction, back when the internet was new, that everyone would need librarians to catalog it. If these fellows had not held their profession in such inflated esteem, they might have soberly realized the real implications of the A.I. software they were showing us: “Information literacy” can be automated. Librarians went all in on information literacy as proof that people will always need them, but now machines can do that, too.

Tragically, our librarians could have retained their relevance if they had only known their place. Despite all the recent misguided attempts to redefine it, what the library is, at its heart, is something everyone will always want or at least want to know is there: It is a warehouse for all the books that people want to have access to but do not want to buy. By extension, the librarian is a glorified book shelver. That’s it. Everything else is hubris.

But I have a master’s degree! a librarian might object. Yes, I have one too. That degree is worthless. It is nothing but an expensive piece of paper that serves as a magic pass for getting a library job. Librarianship was one of the earliest casualties of American credentialism, the mindset that has wrecked our universities and turned them into overpriced extensions of high school. If we librarians are honest with ourselves for a change, we will admit that any reasonably intelligent person could take over our jobs tomorrow and do them as well as we do within a month. Perhaps he could do our jobs better than we do, since he would do them without writing any faux academic papers about convening communities of practice.

But because librarians have refused to know their place, they have not only made grandiose claims about their importance but have alienated their natural allies–that is, engaged parents who want to impart early literacy to their children. Keeping the parents around should have been simple, but for at least two decades now, or perhaps more, librarians have treated parents as their sworn enemies. Oh, they don’t say that outright, of course; they couch it in diplomatic language. But even a cursory familiarity with the literature on such subjects as children’s libraries or book challenges will leave a reader with the strong impression that librarians view parents as little more than a hostile force to be circumvented. This underlying hostility has finally broken out and become explicit in the absurdity of “drag queen story hours,” which are an overt, no longer subtle, effort to defy parents and corrupt children. And this is not guesswork on my part, either: There are academic papers specifically describing drag queen story hours as having that purpose.

Of course, this open hostility from the librarians has met with an equally open hostility from parents. Still, the librarians will learn nothing from the encounter because their gigantic egos make self-reflection impossible. In fact, James LaRue was there at the conference to complain about (and misrepresent) those angry parents. His conclusion, at the end of his hour-and-a-half speech, was that we need to groom their children even harder.

Those angry parents would be gone tomorrow if the librarians would take just one drop of humility and go back to sleepily cataloging their books and shushing their noisy patrons. But they won’t because, to an egomaniac, that one drop looks like poison.

Bing Image Creator Tries Drawing Magical Girl Pretty Dynamo

I think this will be my last post on this subject, at least for a while, lest this simply turn into a blog of AI-generated art, but I want to report on what happened when I tried to get Bing to produce images of the protagonist from Jake and the Dynamo.

I did this in two different sessions and got very different results each time, though I noticed certain similarities running throughout. This isn’t exactly scientific, so I can’t tell you the exact prompts I used, but each one was a variation or slight modification of this:

Skinny ten-year-old electricity-themed magical girl, powder puff tutu, blue and gold breastplate, blue and gold metal boots, bare midriff, simple gold tiara, long blue pigtails, green eyes, holding shield and spear, motif of heart with lightning bolt through it appearing on shield and armor, riding a flying snowboard over an Art Deco city.

I got hit by the “dangerous content” warning inconsistently and seemingly at random, though it appeared more often if I specified either “photorealistic” or “anime-style.” Undoubtedly, the young age of the character is what was triggering it.

I also found that it absolutely cannot draw a young girl in fantasy armor. Almost every generation came out looking like a teenager or adult, and almost every one of them was busty, with armor designed to accent her cleavage. This is no doubt because of the fantasy art on which the generator has been trained.

Pretty Dynamo as adult
“Photorealistic” Dynamo.

It did, early on, try to make child-like characters, but they inevitably came out as oppai lolis, some of which were rather disturbing. I tried to tweak the prompt to stop that from happening, but it either made no difference or hit me with the “dangerous content” warning again. The generator does not recognize operators like “no” or “not” or the minus sign, so, ironically, my attempts to tone the resulting images down only made the problem worse.


‘Rags and Muffin’ According to Bing Image Creator

I see a lot of people playing around with Microsoft Bing’s new Image Creator, which uses Dall-E 3 as its engine. I’m toying around with AI because, like other people, I find it interesting and fun, but also because I’m obliged to familiarize myself with it for work reasons I won’t discuss on the blog.

In any case, I decided to try to get Bing Image Creator to reproduce some scenes from my novel Rags and Muffin. Here we go.

First, I wanted Rags sipping tea in her iconic wingback chair with Muffin curled up nearby. These images, I note, are quite similar in lighting and composition no matter how I tweak them. Also, Bing cannot understand what I mean by a furry, dog-like dragon no matter how I phrase it. I get either a dog or a dragon, no in-between.

Girl sipping tea with a dragon.
Rags sips tea with … Muffin? Is that you?

Some of the images are more convincing than others, but never once has it given me a picture that didn’t have obvious telltale signs of AI generation.


Children’s Book Update: Back from My Editor

I have the text of my children’s picture book back from my editor. She described it as “really sweet” and “a joy” and made minimal suggestions. I’ve been over the text one last time and think it’s ready to go. Since I’ve never done this kind of book before, I’m flying blind, but I think the next step will be to find an illustrator and, after the illustrations are done, to find someone to handle formatting, which I can’t do myself this time. I have some people in mind, but it would be inappropriate to say who until I get commitments.

I’ve also been toying with Grammarly GO, which is the new “A.I.” plugged into the Grammarly grammar-checking app. Grammarly was already built on the same technology as these new “large language models,” so the introduction of this new A.I. assistant was probably relatively easy. Like the rest of Grammarly, it’s set up more for business emails than for creative writing, though as I’ve played with it, I’ve found it more impressive than I thought at first. It has a few stock questions you can ask it, and one is “Find my main point,” the result of which is the image at the top of the post. But after a little work, I got it to make the following suggestions, which, though brief and vague, indicate that the A.I. has mostly processed the story correctly:

Grammarly suggests more details to flesh out a story.
Grammarly GO’s story suggestions.

It has correctly identified this as a work of fantasy fiction, and it has also correctly identified the protagonist and the villain. Obviously–and unsurprisingly–it can’t distinguish a children’s fairy tale from an adult novel, so its requests for more detail are irrelevant. Its second and third suggestions would only be reasonable if this were a different sort of work. The first suggestion, however, shows the A.I.’s limits: The opening of the story is, in fact, already dedicated to “who Anastasia is and what kind of person she is.” The software apparently couldn’t pick that up.

I wondered if it always gave these same suggestions, so I fed it a longer, more fleshed-out novel, specifically Rags and Muffin.

Grammarly struggles to analyze Rags and Muffin.
Grammarly GO struggles with a novel.

LOL. Ouch. So now it’s asking for less detail. “Clarify the narrative focus” may be legitimate, though not quite in the way Grammarly GO means it, since the presence of some plot lines unconnected to the main story is one criticism I’ve received from real human readers. But the suggestion that the characters “feel somewhat undeveloped” is definitely not a criticism I usually get.

In any case, the impression I get of Grammarly GO is the same one I get of ChatGPT: Its suggestions aren’t entirely bad, but they’re vague and so elementary that I can’t imagine them being useful to any but a beginning writer. For anyone else, they are at best good reminders.

I may be wrong, but I personally suspect the hullabaloo over “A.I.” will prove to be a tempest in a teapot. The company that owns ChatGPT is hemorrhaging money, and these programs apparently degrade over time: A.I.s that were once whip-smart eventually lose their ability to perform even basic math problems, and because they are really nothing more than advanced versions of word-prediction software, they often “hallucinate” information, presenting fiction as fact. Grammarly, too, is beginning to give more and more incorrect suggestions, especially comma splices.

Besides all that, the creators of these programs deliberately lobotomize them to make them politically correct. Many people have demonstrated this with ChatGPT, which will coyly dodge certain topics or even lie outright to avoid stating inconvenient facts. Grammarly appears to be undergoing a similar sort of deliberate retardation: Previously, it had “inclusive language” suggestions, and enough people complained that they added the ability to turn those off. Now the original “inclusive language” suggestions have been expanded into no fewer than sixteen different ungrammatical but politically correct settings that are on by default and have to be manually deactivated. The weirdest and stupidest of them is “Show Ukraine Support Message,” an utterly inappropriate setting for a grammar checker:

Grammarly demands I show support for the Ukraine.
I will not eat the bugs. I will not live in the pod. I will not Show Ukraine Support Message.

Once Current Thing has ended and we’re on to the next Current Thing, this option will presumably change to something else, like maybe “Express Fashionable Disapproval of Republican Presidential Nominee”–which will, of course, also be on by default.

Working on ‘The Cup of Agamemnon’

Having finished the manuscript for a children’s picture book and sent it to my editor, I’m back to work on The Cup of Agamemnon, a sword-and-planet adventure that I suspect will take me some time to complete. This book will be a challenge for me. It involves more large-scale, semi-realistic warfare than I am used to depicting, so I will need some research and assistance to make sure I don’t destroy suspension of disbelief. The world I am describing is supposed to be like a decaying Achaemenid Empire but with Civil War-era weaponry and some science-fiction conceits thrown in. Whether that mishmash is workable remains to be seen.

I confess I’ve been playing around a little with ChatGPT. I fed it some details from this book to see if it could offer any real assistance. So far, it’s sort of like bouncing ideas off a person who is very safe and not particularly creative, though it at least offers a few reminders of things that need to be addressed. I’ll quote some snippets from our “conversation.”

Recently, when I asked ChatGPT about the novel for the first time since I had initially described it, it coyly asked me to “remind” it of what we’d discussed, which indicates that it may not remember past conversations as well as its advertising says it does:

Of course, I’d be happy to help! Please remind me of the key elements of the concept we discussed previously for the novel, and let me know which areas you’d like to refine or develop further.

I then gave a one-sentence reminder to see if it actually remembered the details. It gave a lengthy answer, none of which suggested that it remembered our previous discussion, but which contained some good, basic advice for a writer. In the midst of it, it asked me to discuss characters, so I described the primary characters of the book:

The characters are 1) The Narrator, who, as his name suggests, is narrating the story. He is a troubadour from a race with highly developed language skills; he can learn languages very quickly and memorize lengthy works, even epics, in order to recite them as he travels the galaxy. 2) Angelica, a human from the future who has augmented her body with cyborg enhancements; her bionic computer is powered by a miniaturized fusion reactor at the base of her spine, and her main weapon is a segmented superstring in her hand, which can open a portal to the mantle of a neutron star and release tiny amounts of neutron-degenerate matter, which will instantly vaporize enemies. 3) Sam, a large, muscular, furry creature called a Feralax; he carries a large machine gun and quite a lot of ammunition. These three are shipwrecked because of a miscalculation while flying faster than light in their spacecraft. After their shipwreck, they travel through a mountain pass that appears to be a former trade route, but has since been abandoned. They come upon an unconscious native, a so-called Gernian, who appears to have been attacked. They treat his wounds and keep him alive until they reach a Gernian fort.

ChatGPT gave a surprisingly lengthy reply to this, mostly recommending conflicts that could arise from the details given thus far. I won’t bother quoting in full, but it suggested:

Angelica’s futuristic enhancements provide her with a distinct set of skills that can be both advantageous and complex. Her bionic computer and portal-opening weapon are fascinating elements that can drive both action and intrigue. Consider exploring her relationship with technology and how it affects her humanity, as well as how her advanced technology might interact with the moon’s environment.

Not bad, but not more than what one could get from a not-very-interested editor. At most, it amounts to reminders of a few basics that an author could forget in the midst of composing.

I tried to get more useful details, so I focused in on one scene: Early in the book, the stranded spacefarers come upon a besieged fort in a mountain pass. Realizing that the combatants have technology inferior to their own, they decide to break the siege in order to ingratiate themselves with the people in the fort and acquire the two things they lack–food and water. Making sure that this sequence is reasonably realistic is a point of anxiety for me, so I asked ChatGPT to see if it had any insights:

The characters arrive while the fort is under siege by hill-dwelling tribes who have access to cannons, flintlock rifles, and rudimentary bombs, but no more advanced weapons. The defenders in the fort have slightly more advanced rifles and cannons of their own. Describe the kind of logistics it would require to besiege such a fort. Suppose the fort is similar in structure to an Urartian fortification such as existed in ancient Armenia, with a similar environment.

The responses I got for this were okay but not insightful. It mentioned supply lines, which I hoped it would discuss in more detail or at least with some comments I had not thought of myself, but it did not. Most interesting to me is that it did not mention sappers, who played an important part in roughly analogous sieges. I then asked it about sappers specifically, and it responded with answers that were, again, okay but vague. It did, however, include one detail I consider useful:

  • Focused Blast: Sappers might aim to create a focused blast pattern that directs the explosive force inward and upward, aiming to break apart the stones and create a breach.
  • Overlapping Charges: Multiple charges might be used in close proximity to create a larger breach or to ensure that a single charge doesn’t leave the wall partially intact.

More remarkably, I asked it how the protagonists could deal with the pickets of the besieging army, and it gave specific (though, again, vague) responses for each character, such as:

  1. Narrator (Troubadour):
    • Language Skills: The Narrator’s exceptional ability to learn languages quickly could be used to gather information about the besiegers’ positions, numbers, and patrol schedules. This knowledge would aid in planning an effective evasion strategy.
    • Disguise and Deception: The Narrator could use his talents to mimic the language and mannerisms of the besieging tribes, allowing the group to blend in temporarily and gather information without raising suspicion.
  2. Angelica (Cyborg Augmented Human):
    • Technological Abilities: Angelica’s cyborg enhancements could provide valuable advantages. She might use her bionic computer to remotely access and disable any surveillance equipment the tribes might be using, reducing the risk of detection.
    • Stealth and Reconnaissance: Angelica’s enhanced senses and physical capabilities could help her scout ahead and assess the picket positions with minimal risk of being detected. Her augmented vision might allow her to spot hidden sentries.

The suggestions here demonstrate the large language model’s difficulty in keeping track of or understanding specific details: The suggestion that Angelica could “remotely access and disable any surveillance equipment” is, of course, impossible to follow, since the invaders have no such equipment.

Picture Book Update

I will have to roll up my sleeves and format my new book in Adobe Illustrator or InDesign. This is something I wanted to avoid, but I don’t think I can hire someone for the formatting, mostly because I want to have the book in a finalized draft before I hire the illustrator: I need to know the page count so I know how many illustrations will be necessary.

I think I’ll place the text on the left page, surrounded by an attractive border, with a full-page color illustration on the right page. I’ll print the books in 8 x 8 inches via KDP, with bleed, which will make for a small but adequately sized paperback picture book.

I’m frustrated that Vellum, my usual formatting software, doesn’t offer the full range of trim sizes available on Amazon. It’s set up for novels, of course, but I thought I could make a picture book out of it with a little coaxing. However, none of its available trim sizes are suitable for that, so Adobe is my best bet. I have some experience with Adobe Illustrator but none with InDesign. This could be fun, especially since I will now have to worry about all kinds of typographical things that Vellum handled for me automatically, such as runts and orphans and so forth.

In slightly different news, I noticed that Grammarly has rolled out its own “artificial intelligence” system, GrammarlyGO, which has automatically been integrated into my Word plugin. Although there’s been a lot of buzz about what it might mean for authors to start using AI in their writing, Grammarly, which is hugely popular, already functioned on similar principles, so a lot of us have already been using AI to assist with our writing, at least in a limited fashion, without knowing it. That Grammarly is an “AI” system similar to ChatGPT explains both why it is more dynamic than most grammar checkers and why it sometimes gives screwy, ungrammatical suggestions. It once suggested that I write “more bottomless” instead of “deeper,” apparently because it could work out how to employ a synonym grammatically but couldn’t understand subtle differences in meaning, and it has recently developed the annoying habit of suggesting comma splices. If it degrades over time as other AIs do, it will probably become useless in a few years.

If nothing else, it is good for catching my spelling mistakes and cutting out unnecessary words, but it is designed more for business emails than fiction writing.

Anyway, GrammarlyGO is interesting to play around with. You can see from its suggestions in the image at the top of this post that it can parse a document reasonably well, but its tips are quite basic. The first of its three recommendations is the standard “show don’t tell,” which isn’t bad in itself but is inappropriate for the present work, which is a children’s fairy tale. The second recommendation is vague, possibly a stock suggestion it gives when it doesn’t know what else to say. Probably, it can’t understand the conflict that’s already present, or perhaps it doesn’t know how to handle a story this short. And as for the third suggestion–that’s already in the story, which is written with a moral appropriate for its young target audience: The importance of gratitude. But I’m unsurprised that an AI can’t pick that up since it’s built into the story’s fabric and never explicitly stated.

New Story Finished

I have finally finished one of the projects I’m working on, the text for a children’s picture book. I wrote it specifically for my first daughter, and it’s based on a story I made up for her one night when we didn’t have a storybook handy. It’s a fairy tale with a whimsical premise, some classic fairy-tale motifs, a climax that might surprise slightly by breaking convention, some action, and a moral I think is not too hamfisted.

Before I hire an illustrator, I’m looking into how exactly to go about producing a picture book, which is quite a different format from the novels I’ve produced before. Unfortunately, my formatting software, Vellum, is not set up for picture books, so I might have to pay to have the book formatted professionally. That will make me nervous because it means that, once it’s formatted, I can’t tweak the text. I’m nervous about the options I see on Amazon KDP, too, which also does not seem to be set up for picture books, but I have several articles open on my browser which claim it can be done.

I also still need to send it to an editor, though the funds are rather short. It’s only fifteen manuscript pages, so I’m reasonably confident in it, but it would still be foolish to publish without a second pair of eyes on it.

Instead, Watch ‘Masters of the Universe’ (1987)

If you’re hankering to watch a live-action movie based on a Mattel toy franchise, in which the characters make an ill-advised trip to the “real world” that disappoints long-time fans, let me suggest an alternative to the film that opened this weekend. I recommend the 1987 box-office disaster Masters of the Universe, which is based on the hugely popular sword-and-planet toy line of the same name. It stars an oiled-up Dolph Lundgren, fresh off his performance in Rocky IV, as He-Man. The movie was panned by critics and shunned by audiences when it came out, and it is even credited with dethroning the Masters of the Universe line of action figures from its dominance of the toy aisle and with helping to shut down the film’s studio, Cannon Films. Nonetheless, I believe it’s past time that this movie gets a re-evaluation. It is undeniably flawed, but it was made with real passion and heart, and it contains some genuinely good performances.

You can stream it for free on Amazon. (Please ignore it if that link gets crossed out as if it’s dead. WordPress doesn’t like it when I link to streaming services.)

This movie, sad to say, had a minuscule budget and consequently makes some serious mistakes, but let’s face it: it probably is the best thing ever to come out of the Masters of the Universe franchise, which has never been known for its brilliant writing or high production values. The 1983 cartoon, which is the version of the story that everyone remembers most fondly, is memorable mostly for being surreal, weird, and stiffly animated–and its animation often falls into uncanny-valley territory because much of it is rotoscoped. If you have doubts, I invite you to watch its first broadcast episode, which you can also stream for free, and compare it to the movie linked above. Assuming you can put nostalgia aside and judge both dispassionately, I think you will agree that the film is the superior production.
