“When my daughter started Krav Maga I explained the three rules:
I knocked together a quick prompt based on the Last Rose of Albion version.
Original Amelia wore a darker jacket:
After my earlier text-rendering adventures, it didn’t surprise me that Z-Image Turbo couldn’t render “whoso” (whoro, whojo, whoyo, whobo, …), but I was annoyed that it kept changing “hammer” to “hommer”, and I could see it start off right and then mis-correct it; also, more often than not, the sword was not in the stone.
The new hotness is rendering with ZIT and then doing style edits with Flux.2 Klein; this is easy to set up in SwarmUI thanks to Juan’s Base2Edit extension. Same pic, with style set to “colored-pencil sketch with precise linework”:
Setting aside the sequels to shows I never watched or have abandoned, Spring is not looking great, but summer has some promise:
Spring: Farming Life In Another World 2: still hoping they stop being coy about how thoroughly the divine farming tool plows the fields. The first trailer shows off plenty of pretty gals, at least, so it promises to be as decorative as the first season.
Summer: The World’s Strongest Rearguard: please-don’t-suck-please-don’t-suck. Still no video preview, but the initial casting is up, and Our Upright Ass-Guardian is voiced by The Universal Boy Hero, Mute Lizard Best Girl is Komi, Sexy Former Manager’s biggest roles have all been porn, Cursed Swordsgirl is perhaps best known for voicing 2B, and Jailbait Shrine Maiden is Yor (aka Visha, Yun-hua, Ryu, Angeline, etc). And both the light novels and manga are resuming after a lengthy hiatus.
Summer: Bumpkin & Harem 2: yes. From what I know, the gals never actually become haremettes, despite Red showing signs of jealousy. I would like to see Age-Appropriate Hot Magic Teacher emerge as the leading candidate for wifehood, but White is determined.
Summer: Skeleton Knight 2: yes, please.
Summer: Magilumiere 2: yes-yes-yes!
I’m really looking forward to spending a few weeks in Japan soon. Osaka-style hitokuchi (one-bite) gyoza will be on the menu, but not quite this tiny. It’s going to be a mostly-Kyoto trip, but thanks to my sister having to leave early, I’ll have two-and-a-half days on my own in Tokyo.
Not the usual hot-springs episode. Or the usual first-date episode. As usual for Frieren, it’s the journey that matters.
I would not have guessed that Himmel had the same voice actor as Bakugo and Accelerator. I learned this after viewing a short clip from the Japanese-dubbed Chinese animation series “The Last Summoner” and finding the voices of hero and heroine kinda familiar.
It’s a shouty, tropey Boy-Meets-Goddess show where she spends most of her time as a bratty shoulder chibi, because full-figured-full-power form is too expensive to maintain, and any power use at all requires stuffing her face. Y’know, a Completely Original Story™.
The punchline is that her voice actress is Frieren.
(Anime News Network has no record of this show, so off to MyAnimeList; licensed by Crunchyroll, by the way, who lists it as two seasons, but it’s just Chinese vs Japanese audio)
Chaney’s latest series “Accidental Astronaut” is called that because…
…“Flash Gordon” was taken.
Z Image Base knows ray-guns and retro, but the name “Flash” is rather heavily trained on the wrong character. It does get some things right, though:
The creator of a large collection of excellent 3D models for gaming miniatures just announced on his Patreon that he’s got severe IDS (ICE Derangement Syndrome), and has invited people who disagree with him to stop supporting him financially.
His proposal is acceptable.
Part 2 of the English version of Chiharu’s Christmas story.
The furnace guys came out Monday morning, determined that it was toast, and scheduled the installation of a new one for Tuesday morning.
Which meant I spent Monday afternoon enlarging the path to the street into something wide enough to get my car out (and their truck in). I only went as far as Home Depot to pick up two space heaters, which they still had a few of. I only really needed them for one day, but I can always put one in the garage to keep it above freezing out there, and the other one will be useful occasionally.
I also bought some softener salt, and the cashier was concerned that I might try to use it as ice melt, which everyone is completely out of. I reassured her that it would only go into the water softener, but didn’t bother explaining that it’s going to stay in the trunk of the car for a few days to put some weight over the rear tires for traction.
Tuesday morning, I was spreading some of my remaining salt on the driveway to ensure there weren’t any icy spots for the furnace guys, and they showed up just as I finished. Start to finish, it took them four hours, which left about two hours for the house to warm back up 15 degrees before I picked my sister up at the airport.
I’m not going to try to clear the rest of the driveway today, just widen the lane a bit and scrape up whatever the wind has blown back over the previous work. I’m much too sore to do another whole 75-foot-long lane, and it’s too cold out there.
And add more salt. Which I now have plenty of, after finding one brand in stock at one store (Lowe’s). I’m storing it in the trunk of my car for extra traction.
(the shiny new Z Image base model is primarily designed as a base for training fine-tuned models, so it’s difficult to use directly, and can be disappointing compared to the effortless goodness of the Turbo model)
Mind you, for all the hype, this is predicted to be a 1-day snow storm, and we’ve had enough big wind storms already this year that there won’t be a lot of fallen trees taking down power; it’ll mostly be idiots driving into transformers.
✔️ battery-powered emergency radio with hand crank
✔️ cooked food, canned food, freeze-dried camping food, emergency long-term food
✔️ potable water for several days
✔️ gas range; also camp stove and plenty of fuel (the outdoor gas grill would be challenging to use in the cold, but possible)
✔️ small pellet-burning grill and lots of fuel
✔️ gas water heater
✔️ gas fireplace that can be manually lit
✔️ blankets, hand-warmers, winter clothing
✔️ high-capacity UPS that can be used to power small appliances
✔️ fully-charged phone/tablet/chargers
✔️ fully-charged camera batteries that can be used to recharge phone
✔️ 2 cars with full tanks of gas, one of them garaged
✔️ pre-salted driveway and walk to help reduce accumulation
✔️ computer and Switch games if I have power
✔️ books if I don’t
✔️ pepper-ball guns, with fresh CO2 cartridges
✔️ actual guns, with fresh bullets
❌ seasonal service check for gas furnace
Oops.
Apparently the furnace stopped delivering heat on Friday, but between the thermal mass of the 2,500 square-foot basement and the big gaming PC making genai gals all night long, I didn’t notice until Saturday at 6pm.
At which point it was already 14°F outside and starting to snow.
I’ve got a call in to my HVAC people, but I doubt they’ll be able to get to me until Monday, what with the 17+ inches of snow predicted over the next 24 hours.
So it’s 64°F in most rooms tonight, and 67°F in the kitchen and family room thanks to the gas fireplace. Which, sadly, is at one end of the 75-foot-long house. The furnace is at least able to blow air around, so the heat from the fireplace will keep it tolerable in the rest of the house.
This week, a study in heroism. More shows like this, please.
(sadly, Crunchyroll has decided to add a mandatory post-credits recommendation page to the app (FireTV version, at least), and there’s no button to get out of it; dipshits)
(related, Apple’s Weather app has an annoying behavior on all platforms: the predicted precipitation for a day is from midnight to midnight except for the current day, where it’s “the next 24 hours starting right now”; as a result, the only time it provides useful information is at midnight)
Update: No, the correct answer is that the furnace was dying, and is being replaced completely on Tuesday (he says, writing from the future with the installers on site).
Because the thermostat was balancing the house based on multiple sensors; the east end has all the computers, and one of them was running all night making GenAI-gal wallpaper.
Clearly, the fix is to move that computer! Which is actually tempting, because that room has a gas fireplace that will still work if the power goes out. So I can heat the room one way or another…
(I’ve linked to a lot of Hiyodori’s Chiharu cartoons, but the complete English version of the Christmas story hasn’t been posted yet)
…with AI, that story is usually complete nonsense.
Today is not a good day to be a MS Office 365 email customer. Or one of their partners…
The new Dresden Files novel arrived Monday afternoon. I’m not sure what happened after that; the rest of the day is a blur. Harry spends a lot of time recovering from the aftermath of the Big Event(s), which may be more emotional and introspective than some fans are really interested in. He does get better. Eventually.
Good stuff, recommended for people still keeping up with this series.
(Fern is definitely more photogenic than Harry Dresden…)
The targeted LLM enhancements are doing a good job of improving the variety in outfits and backgrounds, so can I do something about ZIT’s horrible guns?
You are a technical illustrator with in-depth knowledge of how weapons look and function, including historical, modern, fantasy, and futuristic science-fiction styles. Your task is to convert user input into detailed prompts for advanced image-generation models, ensuring that the final result is both plausible and visually appealing. You refuse to use metaphor or emotional language, or to explain the purpose, use, or inspiration of your creations. You refuse to put labels or text on weapons unless they are present in double quotes (“”) in the input. Your final description must be objective, concrete, and no longer than 50 words, listing only the visible elements of the weapon. Output only the final, modified prompt, as a single flowing paragraph; do not output anything else. Answer only in English.
(yes, many models randomly slip into Chinese unless you remind them; I had one sci-fi gun description that included “握把表面具有纳米涂层防滑纹理”, which apparently translates to “the grip surface has a nano-coated anti-slip texture”; perfectly reasonable, but not something you can really expect an image-generator to render)
I may need a separate “expert” for sensible gun-handling poses. Also, some models are waaay too focused on the AR-15 as the universal “gun”, so I’m going to need to add some more focus to the prompt.
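Mechanically, wiring one of these “experts” into the pipeline is just a chat-completions call against the local server. A rough sketch, assuming LM Studio’s OpenAI-compatible endpoint on its default port; the model name and the example input are placeholders, not what I actually run:

import requests

GUN_EXPERT = (
    "You are a technical illustrator with in-depth knowledge of how weapons "
    "look and function... "  # the full system prompt from above
)

def describe_weapon(user_input: str) -> str:
    # LM Studio serves an OpenAI-compatible API; 1234 is its default port.
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # placeholder name
            "messages": [
                {"role": "system", "content": GUN_EXPERT},
                {"role": "user", "content": user_input},
            ],
            "temperature": 0.7,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

print(describe_weapon("a futuristic pistol for a retro sci-fi pilot"))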
Sometimes, the source of extra limbs and odd poses is contradictory descriptions in different parts of the generated prompt. A background might describe a human figure, and some of its characteristics get applied to the main subject, or else the character might be described as praying, but also has to hold a pistol. So I’m trying this:
You are a Prompt Quality Assurance Engineer. Your task is to examine every detail of an image-generation prompt and make as few changes as possible to resolve inconsistencies in style, setting, clothing, posing, facial expression, anatomy, and objects present in the scene. Ensure that each human figure has exactly two arms and two legs; resolve contradictions in the way that best suits the overall image. Output only the final, modified prompt, as a single flowing paragraph; do not output anything else. Answer only in English.
A visual diff of some samples suggests that it does a good job. Some models try to make more changes, but the ones I’ve been using most actually produce something recognizably diffable. I doubt there’s a prompt-based solution to perspective problems, though; ZIT is good at making multiple figures interact, but terrible at ensuring they’re drawn at the same scale.
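The diff itself is nothing fancy; something as simple as Python’s difflib is enough to eyeball what the QA pass changed (the before/after pair here is made up):

import difflib

original = "a woman kneeling in prayer, hands clasped, aiming a pistol at the camera"
cleaned = "a woman kneeling in prayer, hands clasped"

# word-level unified diff: removed words show up as "-", added words as "+"
for token in difflib.unified_diff(
        original.split(), cleaned.split(),
        fromfile="original", tofile="cleaned", lineterm=""):
    print(token)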
The big downside of all this LLM nonsense is that I don’t have a second graphics card to run it on, and even a high-end Mac Mini is slooooooooow at running text models (don’t even bother trying image models). Right now it takes about as long to generate a single prompt as it does to render a 1080p image of it. And every once in a while local LLMs degenerate into infinite loops (the paid ones do it, too, but it usually gets caught by the layers of code they wrap them in to enforce bias and censor naughtiness), which kinda sucks when you kick off a large batch before bedtime.
At least flushing the output of the different scripts after every line minimizes the delays caused by the LLM, so it doesn’t feel slow. I might still set things up to generate big batches on the graphics card and auto-unload the model before kicking off the image generation; both the LM Studio and SwarmUI APIs have calls for that, so I can update the scripts.
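The flushing part is trivial on the script side; a minimal sketch of the enhancer end of the pipe, with the unload step left as a comment since I’m not spelling out the exact API calls here:

#!/usr/bin/env python3
# Read raw prompts on stdin, enhance them one at a time, and flush each
# result immediately so the image-generation script downstream can start
# rendering while the LLM is still chewing on the next one.
import sys

def enhance(raw: str) -> str:
    # stand-in for the real LLM call (see the chat-completions sketch above)
    return raw

for line in sys.stdin:
    print(enhance(line.strip()), flush=True)  # flush=True is the whole trick

# For big batches on the graphics card, this is where an "unload the text
# model" call would go before kicking off image generation; both LM Studio
# and SwarmUI expose API calls for that.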
Sunday’s weather forecasts had 8-10 inches of snow coming on Saturday, and another 6-7 inches on Sunday. Monday, that changed to 1 inch and 3-4 inches, respectively. Today, it’s 1-2 and 4-5. Who knows what tomorrow will bring?
This matters to me only because it affects the amount of work I have to do to clear the driveway and get my sister to the airport on Monday morning. Otherwise I’d be content to make a path just wide enough to take the trash down Sunday night.
I fired up s3cmd to refresh my offline backup of the S3 buckets I store blog pictures in, and it refused to copy them, blowing chunks with an unusual error message. Turns out that the Mac mount of the NAS folder had obscure permissions errors for one sub-directory. On the NAS side, everything is owned by root, but the SMB protocol enforces the share permissions, so everything appears to be owned by me, including the affected sub-dir. Deep down, though, the Mac knew that I shouldn’t be allowed to copy files into that directory as me. Worked fine as root, though.
And, no, I did not give an AI permission to explore my files and run commands to debug the problem. That way madness lies. 😁
One of the most prolific and enthusiastic members of the SwarmUI Discord (who has insanely good hardware for generating images and videos; the spare card he’s using just for text-generation is better than my only one) has done a lot of tinkering with LLM-enhanced prompting, adding features to the popular (with people who aren’t me) MagicPrompt extension.
(why don’t I like it? the UI is clunky as hell, it doesn’t work well with LM Studio, the text-generation app I run on the Mac Mini, and it really, really wants you to run Ollama for local LLMs, which is even clunkier; I’ve tried and deleted both of them multiple times)
Anyway, he’s shared his system prompts and recommended specific LLMs, and one of the things he’s been tinkering with is using different enhancements for each section of his dynamic prompts. So, one might be specifically instructed to create short random portrait instructions, while another generates elaborate cinematic backgrounds, and yet another describes action and movement in a video. Basically, this keeps the LLM output more grounded by not asking it to do everything in one shot.
I felt like tinkering, too, so I updated my prompt-enhancer to support multiple requests in a single prompt, with optional custom system prompts pulled from ~/.pyprompt.
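The guts of the change are just a regex pass: find each @<preset: text>@ block, look up the matching system prompt, and splice the LLM’s answer back in. A stripped-down sketch, with a placeholder dict standing in for whatever ~/.pyprompt actually holds, and the same assumed local endpoint as above:

import re
import requests

# Stand-ins for the presets pulled from ~/.pyprompt (real contents not shown)
PRESETS = {
    "fashion": "You describe outfits in concrete, visual detail...",
    "cinematic": "You describe settings and lighting like a cinematographer...",
}
DEFAULT_SYSTEM = "Expand the text into a short, concrete image-generation prompt."

# matches @<text>@ or @<preset: text>@
BLOCK = re.compile(r"@<(?:(\w+):)?(.*?)>@", re.DOTALL)

def enhance(system_prompt: str, text: str) -> str:
    # one chat-completions call against the local LM Studio server
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={"model": "local-model",
              "messages": [{"role": "system", "content": system_prompt},
                           {"role": "user", "content": text}]},
        timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

def expand(prompt: str) -> str:
    # replace each @<...>@ block with the matching expert's output
    def repl(m):
        preset, text = m.group(1), m.group(2).strip()
        system = PRESETS.get(preset, DEFAULT_SYSTEM)
        return enhance(system, text)
    return BLOCK.sub(repl, prompt)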
Initial results were promising:
I saved the prompt as its own wildcard (note that using “:” to mark the LLM prompt preset in the @<...>@ block was a poor choice for putting into a YAML file, since it can get interpreted as a field name unless you quote everything…) and kicked off a batch before bedtime:
__var/digitalart__ A __var/prettygal__ with __skin/normal__
and __hair/${h:normal}__, and her mood is
{2::__mood/${m:old_happy}__. __pose/${p:sexy}__|__mood/lively__}.
She is wearing @<fashion: sexy retro-futuristic science fiction
pilot uniform for women; must include a futuristic pistol >@
She is located __pos__ of the image.
@<cinematic: __place/${l:future}__. __var/scene__. >@
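To make the quoting gotcha concrete, here’s what PyYAML does with an unquoted versus quoted wildcard entry containing one of those @<…>@ blocks (the entries themselves are made up):

import yaml

# Unquoted: the ": " inside the block is read as a YAML key/value separator,
# so the entry silently turns into a nested mapping instead of a string.
broken = "- She is wearing @<fashion: retro sci-fi pilot uniform>@"
print(yaml.safe_load(broken))
# [{'She is wearing @<fashion': 'retro sci-fi pilot uniform>@'}]

# Quoted: the whole line survives as one string.
fixed = '- "She is wearing @<fashion: retro sci-fi pilot uniform>@"'
print(yaml.safe_load(fixed))
# ['She is wearing @<fashion: retro sci-fi pilot uniform>@']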
(someday I’ll clean up and release the wildcard sets…)
I got a lot of results that easily cleared the bar of “decent wallpaper to look at for 15 seconds”, weeding out some anatomy fails, goofy facial expressions, and Extremely Peculiar ZIT Guns.