“Statistically, one in three houses in Queensland, Australia, has one in their roof.”
— Carpet Pythons: Australian Pest Control

Another functional style LoRA for Z Image Turbo is BOTW Zelda Style. At full strength it applies too many of the game’s various racial characteristics, but at 80% it mostly applies the visual style without the goofy Hyrulean NPC faces or painful Gerudo figures (the combination of impossibly small waists and washboard abs is not sexy).
I’m okay with it randomly applying Zelda’s remarkable ass, though…
Sometime soon I should generate fantasy location and costume prompts, and perhaps some bunnygirl-friendly locations for Easter. Some of the Christmas locations look vaguely Hyrule-ish, but mostly not.
First it rained heavily yesterday, then the temperature dropped 30°F last night, and then I looked out the front door this morning to find that high winds (40+ MPH) had blown my empty trash bins sixty feet down the street, and had also filled my yard with a mix of trash and recyclables from neighboring houses.
As I went out for what I thought would be a quick pickup, snow started blowing around. I got the bins back up to the house, but had to retreat and gear up before finishing the cleanup, because my fingers were starting to hurt from the bitter icy wind.
So, yeah, staying in for the rest of the day.
(style courtesy of one of the few useful ZIT LoRAs, Cute Future, which really shifts the mood of my holiday cheesecake)
Lots of back and forth on the xitter in the never-ending battle between “people who like Japanese pop culture” and “people who insist on ‘fixing’ it in translation”. My take:
Localizers believe that they’re better writers in English than the original authors are in Japanese. If this were true, they wouldn’t be working as localizers.
(I’m really liking this particular cartoon style; it reliably comes out strong and consistent and cute; style section of prompt is “Drawing in the style of Jon Burgerman, chaotic compositions of cartoon characters, free-form doodling with bold, looping linework, flat graphic lighting, a vibrant, candy-colored palette, playful and energetic atmosphere.”)
So, AI is consuming all memory production in the world while simultaneously being unable to remember which commands destroy data. I’m sure it’ll all work out fine.
How much do we want AI everywhere? Instead of the original plan to make it an add-on paid service, Amazon is rolling out the new “AI” “enhanced” Echo “experience” to Prime customers by default. To opt out and restore the old dumb “by the way, did you know that I’ll keep talking until you swear at me to shut up?” behavior, use the above command. Which will probably work as reliably as switching away from the “for you” feed on X…
(took about 20 tries to get the speech bubble to come from the drone, sigh; fortunately it only took 4 seconds per try)
Oh, Amazon, you and your bullshit.
The best part is how Amazon now adds text insisting that the information on the order status page is the same as what’s available to their customer service agents, with the implied “so don’t bother contacting them”.
(fortunately it was just a present for myself…)
No, not announced for January, just that there will be an announcement for the second season in January.
(I’ve already used almost all of the decent non-porn fan-art from the first season…)
Just helping them get their start…
(ZIT’s having some scaling issues today; must have had too much ice cream and pudding for Christmas (classical reference))
Everything must go!
GenAI Gals after the jump
No, not that one. This one. Of the many pop-culture references from the Seventies that have been converted to LoRAs, this is not one I intend to pursue.
My patience is rewarded, as the 15-year-old overpriced Connie Willis novel finally drops below $10 on Kindle.
Tear off the wrapping paper and enjoy!
[Update: just realized I made the "-small" versions full-4K as well, which was silly. Page should load a lot faster now...]
Note that my new workflow is built around my SwarmUI CLI, which now correctly preserves metadata even when generating to JPG, so the large images have the full parameters embedded in the EXIF UserComment field, making it possible to drag them back into SwarmUI or just view the prompt and other settings with exiftool:

exiftool -b -usercomment cheesecake.jpg | jq .
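If you’d rather pull those parameters out in a script instead of piping through jq, here’s a minimal Python sketch of decoding a raw UserComment value; it assumes the standard 8-byte EXIF character-code prefix and a JSON payload (which the jq pipe above implies), and the sample bytes are hypothetical:

```python
import json

def parse_usercomment(raw: bytes) -> dict:
    """Decode a raw EXIF UserComment value into a dict of parameters.

    EXIF stores UserComment as an 8-byte character-code prefix
    (e.g. b"ASCII\\x00\\x00\\x00" or b"UNICODE\\x00") followed by the text.
    """
    prefix, payload = raw[:8], raw[8:]
    if prefix.startswith(b"UNICODE"):
        text = payload.decode("utf-16")
    else:  # ASCII / undefined; treat as UTF-8-compatible
        text = payload.decode("utf-8", errors="replace")
    return json.loads(text.rstrip("\x00"))

# Hypothetical payload, shaped like what the exiftool | jq pipe prints:
params = parse_usercomment(b"ASCII\x00\x00\x00" + b'{"prompt": "cheesecake"}')
print(params["prompt"])  # cheesecake
```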

Season 2 announced. Clicking on the video initially played the English dub, and it’s so bad that I not only couldn’t tell which character it was supposed to be, but couldn’t even be sure it was someone speaking in character.
“conclusion arc” and side story.
(wrong magical girls, but these have a lot more fan-art)
“Let’s set up an entire season of convoluted plotting all at once!”
IMHO, Walton Goggins is the only thing holding this together, and spacing it out so you have to watch one episode per week like our primitive ancestors did does not enhance the experience. I’d prefer to just rip off the bandaid and get it over with, but that won’t be possible until February.
Maybe I’ll watch the rest then.
(I did not use the word “alien” to describe the creature, because that word is strongly associated with a very specific overused image of a pop-culture alien, just like asking for “alien symbols” gets you an Alienware logo; creature, monster, pretty much any other word is a better choice with most models)
(also, this is probably the best-looking pistol I’ve gotten out of ZIT, and it’s even in a decent “holstered” position)
Random image is random; I’m experimenting with adding a LUT post-processing pass to my SwarmUI CLI, to fix Z Image Turbo’s slightly flat colors. The output looks fine on its own, but when I mixed some of its cheesecake into the wallpaper rotation, you could really see the difference.
If anything, the pics from Qwen Image were too saturated, but for some of them that was part of the “vintage airbrush pinup” look I was going for with them. My first pass at cleaning up the ZITs was doing a basic auto-level-ish command with ImageMagick:
magick $file -contrast-stretch 0.15x0.05% new/$file
Worked great, but it would be nice to have it run server-side, before JPG compression, and the recommended method is to apply a LUT. There’s a whole suite of post-processing tools available as a SwarmUI extension, and plenty of free LUTs online (1, 2, 3, etc). You can also copy .cube files from any professional imaging software you happen to have a license for, such as Photoshop, where they’re usually named by the film/camera look they apply.
Raw from ZIT:

I think the Fuji Sensia LUT pops the colors a bit without going overboard:

(best part: applying a LUT adds basically nothing to rendering time, and generating a comparison grid of every installed LUT at different strengths takes seconds, since the server can reuse the rendered image and just apply each transform in turn)
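The “strength” knob is presumably just a linear mix between the original and the fully graded frame, which is trivial to reproduce with Pillow’s Image.blend (filenames hypothetical):

```python
from PIL import Image

def apply_at_strength(original: Image.Image, graded: Image.Image, strength: float) -> Image.Image:
    """Linear mix: 0.0 returns the original, 1.0 the fully graded image."""
    return Image.blend(original.convert("RGB"), graded.convert("RGB"), strength)

# Usage (hypothetical filenames): a 60%-strength LUT pass
#   apply_at_strength(Image.open("raw.jpg"), Image.open("graded.jpg"), 0.6).save("graded_60.jpg")
```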
Okay, they were both pretty cool cats, but not quite what I had in mind.