“You know, they say life is like a box of Men’s Pocky…
…you never know when your robot girl will get kidnapped.”
Qwen seems to default to this style of cartoon art unless you push it hard in another direction; fortunately, I like it for just goofing around. I left the layout open, but specifically requested each gal’s action and the tropical gym setting, as well as different hair styles and body shapes. Sadly, I also had to explicitly request that they all have two ears, but I could not come up with an incantation that put all four in sharp focus; the poor gal on the treadmill was always a bit fuzzy.
For more fun, at one point I asked for them to be super-cute, and Qwen put Superman emblems on their sports bras.
Sam Rockwell is versatile enough to pull this off.
…to re-render an image with refining and upscaling turned on, now in
my catchall Github repo as
refinewall.sh, along with a script to preserve SwarmUI’s custom
metadata when converting from PNG to JPG, with the straightforward
name swarmui-png2jpg.sh. For the refiner script, I made use of
jo to make the JSON-handling
cleaner. I also decided to use exiftool to insert the original
filename as a DocumentName tag to keep track of the source image file.
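For the curious, the glue looks roughly like this (a minimal sketch rather than the actual refinewall.sh; the filenames and the keys passed to jo are placeholders, not SwarmUI’s real parameter names):

SRC="waifu-042.png"            # original render
OUT="waifu-042-refined.png"    # refined/upscaled copy returned by the server

# jo builds the JSON overrides without any hand-quoting; keys are placeholders
OVERRIDES=$(jo refine=true upscale=2)

# ...merge $OVERRIDES into the image's original parameters and re-render...

# stamp the result with the name of the file it came from
exiftool -overwrite_original -DocumentName="$(basename "$SRC")" "$OUT"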
(all three pics use the same basic prompt, requesting sexy pixie gals in a slightly-cartoonish illustration; I wish I could get each specific variation on demand)
We were supposed to get a light dusting of snow on Monday, starting in the late morning. Leaving my parents’ house Sunday night, I walked out into light freezing rain, just enough to make me expect a light freeze.
Earlier in the day, my brother had asked me if I could drive my niece to school in the morning. Not having any meetings until 9:30, I said sure.
I woke up to find over an inch of snow on the ground and more still coming down, but my street was still clear. By the time I was getting ready to leave to pick her up, I was checking for school closures. Many schools were closed or had two-hour delays, but not hers, so off I went, giving myself plenty of time to get there.
No trouble on the side streets, but as we were approaching a major messed-up intersection to make a double left turn, I felt the car start sliding on a sheet of ice. 30 years in California had dulled my winter-driving skills, but I still knew what to do, and guided the car to the curb without hitting anyone or going into a ditch (unlike at least half-a-dozen other people we’d passed since leaving my brother’s house). Then I slowly got the car moving again and got back into the lane. It gave my niece a little story for her friends at school.
The real fun was that it was still snowing when I got home half an hour later (rush-hour traffic was grindingly slow in the half-frozen slush), and I had to try to make it up my steep unshoveled 75-foot-long driveway. Surprisingly, I succeeded.
By late afternoon the sun and wind had completely cleared and dried the streets and driveways, but there’s still about an inch of the white stuff covering the yard. That will all go away as it goes back up to 50°F by Wednesday afternoon.
Keeping with the tiny-gals-in-snow theme, I asked a few elves to make me a scale model of a caffeine molecule. They weren’t quite clear on the concept:
I even went widescreen to give them more room to make a complex molecule. It didn’t help:
Then I made the mistake of asking some dwarves to provide basic chemistry lessons:
(I was tinkering up an entry for the SwarmUI Discord’s daily theme contest; I started with the gals, but decided to go fully SFW and use the dwarves instead)
They’re still pretending that the item that went back to the depot on Friday afternoon and hasn’t moved since will be delivered by tomorrow. So they can hang onto my money for another day before starting to process a refund.
I don’t recognize young actress Maya Imamori, and after she was caught taking a drink four months before turning 20, neither will anyone else. I can’t decide which I’d prefer: Japan cutting back on the way it polices entertainer morality and makes performers grovel, or Hollywood adopting the practice.
More precisely, Hollywood introducing the concept of shame. They already have shunning, just for the wrong reasons.
I never thought it would happen to me, but there it was: I was sick of virgins.
Let me explain. There’s this thing about fantasy fiction, one of those whatayacallit “tropes”, where a kingdom is being ravaged by a dragon (livestock eaten, crops burned, you know the drill) and somebody gets the bright idea that the solution is to find a reasonably-healthy village girl who hasn’t played hide-the-sausage with the butcher’s son, put her in a flowing white dress, and chain her up somewhere she’ll be easy to spot when the dragon makes his next flash-fried-mutton run.
Spoiler alert: it doesn’t work. Why not? Let’s assume for the sake of argument that you’ve got a dragon problem. Every few days it launches itself into the air, flies around your villages looking for tasty livestock, swoops down, grabs a few, and heads home for dinner. Don’t get fussy about the ratio of calories burned to calories earned for the moment. It’s a dragon; there’s magic involved, okay?
Given that a decent-sized dragon can carry off an adult male sheep in each hand, er “taloned forepaw”, that’s what, 500 pounds of meat, fat, and bone, at least twice a week? Average weight of a virgin sacrifice? 80 pounds, soaking wet. This is because unless she’s seriously sickly or ugly, village girls stop being virgins around the age of twelve. Thirteen if the local boys are a little slow on the draw.
Even if you assumed that virgins were the tastiest morsels around, they’d barely be a light snack for a creature that can eat two sheep and be hungry again three days later. And if they were that tasty, he’d be back for another one the next day, and it’s not like they grow on trees.
So what are we, and yes I mean “we”, supposed to do with all those virgins? You can’t just leave them there to die from exposure; those dresses wouldn’t stand up to a stiff breeze, and it gets cold in the mountains at night. And let’s be honest, if somebody wanted them back, they’d be rescued; people don’t sacrifice sweet-tempered girls with prospects, they get rid of the annoying ones who ask too many questions and refuse to fit in. Y’know, the clever girls.
Anyway, that’s how I started a school for witches.
(file under annoying that I had to specify that the dragon had two wings, one on each side; Qwen has a habit of just not drawing limbs on the off side, triggering variation renders when I like a pic but can’t plausibly believe that the second arm/leg/wing is hidden behind the body. Also, there were size issues; it really wants to fit the entire dragon body into the frame while rendering the girls at a decent size, so in about two-thirds of my attempts, he was small enough that a little girl would make a filling meal)
Bit early for it, but it’s supposed to drop well below freezing tonight, and then snow for a while on Monday.
(Update: I thought they'd make a nice Christmas card...)
I guess it’ll be a few more days before I find out how well Amazon packaged the squishably-soft silicone item compared to the robust boxing of the two glass items. Because it went back to the local depot Friday afternoon, hasn’t been updated since, and Amazon added their automatic five-day extension before I can even try to get my money back and order another one.
(if it does show up, there’s no guarantee it’s the same one that went out the first time; they’ve reused tracking numbers before)
I wanted to refine and upscale the 162 retro-sf waifu wallpapers I made a while back, but I definitely did not want to drag each image into the SwarmUI window, click “reuse parameters”, click “direct apply” on the preset containing the new settings, and then click “Generate”. That’s how I’ve been doing it for small sets, but it’s tedious and annoying, so I wanted to script it: extract the original parameters from the PNG as JSON, add a few new fields, then send a REST call to the SwarmUI server and download the results.
I knocked it together in Bash using exiftool, jq, and curl, and
it worked great… unless the image was made with a LoRA. Which almost
all of my wall-waifus are. I banged my head against the keyboard for a
while before giving up and posting a stripped-down repeat-by to the
Discord. Within half an hour, the developer had responded: the JSON
the app stores in an image’s metadata is not the format used by the
REST API; you can’t just round-trip it. (he acknowledges this is a
should-really-fix-sometime issue)
Specifically, fields that are returned as arrays must be sent as comma-separated strings:
# extracted metadata
...
    "loras": [
        "Qwen/Pin-up_Girl_-_CE_-_V01e_-_Qwen",
        "Qwen/Qwen_Sex-_Nudes-_Other_Fun_Stuff_-SNOFS-_-_v1-1"
    ],
    "loraweights": [
        "1",
        "0.5"
    ],

# /API/GenerateText2Image
...
    "loras": "Qwen/Pin-up_Girl_-_CE_-_V01e_-_Qwen,Qwen/Qwen_Sex-_Nudes-_Other_Fun_Stuff_-SNOFS-_-_v1-1",
    "loraweights": "1,0.5",
Fortunately, jq can do this for you as a one-liner:
JSON=$(jq -c '.loras |= join(",")' <<<"$JSON")
JSON=$(jq -c '.loraweights |= join(",")' <<<"$JSON")
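Putting the pieces together, the core of the loop ends up looking something like this (a sketch under some assumptions, not the finished script: the metadata tag name, the port, and the lack of any session handling all reflect guesses about a stock local SwarmUI install rather than anything in the docs):

SRC="waifu-042.png"

# pull the generation parameters SwarmUI embedded in the PNG
# (tag name is an assumption; check `exiftool -a -G1 "$SRC"` to see where
#  your install actually stashes them)
JSON=$(exiftool -b -Parameters "$SRC")

# flatten the array fields the API refuses to accept
JSON=$(jq -c '.loras |= join(",") | .loraweights |= join(",")' <<<"$JSON")

# add or override whatever the refine/upscale pass needs (placeholder value)
JSON=$(jq -c '. + {"steps": 24}' <<<"$JSON")

# hand it to the local server and save the response for the download step;
# host, port, and any required session_id are left to your particular setup
curl -s -H 'Content-Type: application/json' -d "$JSON" \
    http://localhost:7801/API/GenerateText2Image > response.json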
This is not the only underdocumented aspect of the REST API; there are very few examples, none of which give comprehensive lists of valid parameters or complete output. If the anime drought continues, I may pull down a copy of the SwarmUI repo and send patches for the API docs.
I’ll add the script to my Github repo once I finish tinkering with options. I’ve already added an option for the variation-related params, since I’ve had to use that feature a lot when an almost perfect pic is ruined by bizarre anatomical malfunctions. I think I also want to try re-rendering at a larger size instead of scaling as much (1080x1920 render + 2x upscale instead of 576x1024 + 3.75x), in the hopes of reducing finger and toe damage. Newer models can cope with the higher initial resolution. (additional options will wait until I’ve converted it to Python; for a quick hack, Bash is fine, but it’s clunky at handling JSON and REST calls)
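For the size change, the arithmetic works out the same either way (576x1024 at 3.75x and 1080x1920 at 2x both land on 2160x3840); the hope is just that the model does less damage when it has more pixels to work with from the start. Assuming width and height really are the parameter names the API wants (I haven’t confirmed that), the override is a one-line tweak to the same JSON:

# render larger and scale less; the final size is 2160x3840 either way
# (the upscale factor would separately drop from 3.75 to 2)
JSON=$(jq -c '. + {"width": 1080, "height": 1920}' <<<"$JSON")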
It’s been pretty bad in the past, filling my “for you” feed with assorted scams, engagement farmers, and hate-filled Leftist activists, but at the moment, it seems to be pretty well centered around the type of things posted by the 24 accounts I follow:
Basically I get anime pics, RPG pics, cat pics, random snark and memes, and Japanese women in bikinis. This I can scroll for a while.
I was throwing a bunch of leftovers into a big pot of slumgullion, and decided to add a can of chili. The label read Stout Beef Chili, and despite the stout being waaaaay down the list of ingredients, the smell of beer was overwhelming. It cooked out somewhat, but it’s still pretty strong when you open the container. Enough that I can’t imagine using the other can that I bought, and will likely pitch it.
(actual country of origin for “Ackers Science” and “Ackers BORO3.3” products? China, of course)
After months of relentlessly pushing their new GenAI-enhanced version and their cloud subscription service, the current owner of the venerable MasterCook recipe-management software has sold the rights to some entity called Cook’n, which appears to be junking the software and to have bought only the customer list. They’re honoring subscriptions, but charging $10 to migrate you to their cloud.
Is their product comparable? No idea; I’ve seen so many products advertise MasterCook compatibility without actually implementing the full feature set that I gave up years ago.
I ordered three things from Amazon. One is made of squishably-soft silicone; the other two are made from borosilicate glass. Each one has its own tracking number, suggesting they did not combine them into one box, despite me checking the “take your time” button on shipping.
Now, which one will have the sturdiest package when it arrives tonight?
(I didn’t ask for Ricotta to serve dubious concoctions in champagne flutes, but I guess I didn’t not ask for it, either; this is what happens when you fall back to a model with less-capable parsing because it has the anime LoRAs you need for specific characters)

I saw the name of this illegal alien convicted sex offender who was advising Oregon on healthcare, and my first thought was not “yeah, that tracks”, but rather:
I guess being chief metallurgist to King Charles V of Spain doesn’t pay what it used to.
(“there can be only one!”)
😁
The Flux models have plasticky skin, fewer trained art styles, and better-than-SDXL-but-not-by-much prompting. Qwen Image has excellent prompting and posing, but a strong tendency to converge on a handful of styles, locations, and faces. One of the more recent Flux models is Krea, which is supposed to be heavily trained on photographic and art styles. The full version is also 22 GB, so I wasn’t sure how well it would perform on my 24 GB RTX 4090, if at all.
It was surprisingly quick, and it did style the images more than Qwen or standard Flux, but it definitely didn’t have the kind of LLM-based prompting that makes Qwen stand out.
So I crossed my fingers and set things up so that Qwen generated a 36-step 576x1024 image and then handed it off to Krea for 24 steps of refining and upscaling to 4K. Performance was quite good, but the results were… rough. One gal had a second face growing out of her ankle, another had an eye for a nipple, another had blue feet and something hideous growing out of her mutated hand, etc, etc.
TL/DR: I have yet to find a refine-only model that works well with Qwen as its base; the ones that don’t produce awful images produce low-resolution ones. So that idea was a bust, and not even a bouncy one.
(I need models that are… rock solid)
Windows 11 no longer reboots when you tell it to update-and-shutdown. The problem affected Windows 10 as well, so it’s been busted for ten years, neatly highlighting Microsoft’s QA priorities. Which somehow seem to involve cramming more ads, AI, and privacy violations into every product.
My parents have a Windows 10 PC that they were worried about not being able to upgrade as the OS falls out of support. Last week, I got a call asking me to come over and fix its sudden inability to print, which involved deleting and reinstalling the printer driver. Odd, but hey, it’s Windows.
Except that there were more problems, like the fact that left-clicking the Start button did nothing, the taskbar Search field did nothing, and right-clicking the start button to launch Settings popped up an error dialog. I didn’t have a lot of time, so I screenshotted everything and went off to research fixes, which included some partially successful incantations.
Then I discovered what was really going on: it had silently upgraded itself to Windows 11. Mostly. So now I need to copy a recovery image to a USB stick and head back over there soon to repair or re-run the upgrade. Sigh.
(I’ve got half a dozen Windows PCs around the house that I would be happy to upgrade to Win11, but I can’t; several are old enough that the CPUs simply aren’t supported, so even the workarounds won’t work around, and one couldn’t run Linux or BSD for blood or money, due to proprietary drivers required for major components)
Turns out that Automator isn’t entirely scriptable, which seems like an obvious oversight, but Apple probably couldn’t figure out how to monetize those pixels. Instead, I updated the build script for my gallery-wall app (not yet uploaded to Github) to use Platypus, not to be confused with Python Platypus.
# tiny wrapper script: relaunch the real backend app with any dropped
# files passed along as arguments
cat > _rungallery.sh <<'EOF'
#!/bin/zsh
exec open -a "gallerywall_backend" --args "$@"
EOF
chmod +x _rungallery.sh

# wrap it in a droppable .app bundle
platypus -a "Gallery Wall" \
    --interface-type None \
    --droppable \
    --quit-after-execution \
    --bundle-identifier "org.dotclue.gallerywall" \
    --author "J Greely" \
    --app-version "1.0.0" \
    --app-icon images/GalleryWall-wrapper.icns \
    --interpreter "/bin/zsh" \
    _rungallery.sh
(the downside is that it makes apps that won’t run on a nailed-down-by-default Mac, and in fact you can’t even install Platypus on most Macs, due to code-signing errors that are supposed to be fixed in the next release)
The “self-portraits” I’ve posted recently were all done with prompts starting with “slightly-cartoonish illustration” to set the style. I also used this phrasing for the Diablo 4 barbarian illustration below them, which isn’t cartoonish at all.
So why is it that the moment I start to describe them in detail, to add variety, my pinup gals go full-bore big-eye anime style? Either 2D or Frozen-style 3D?
TL/DR: mentioning eyes at all, even just their color, is enough to do it. The expressive LLM-generated mood descriptions I’ve been experimenting with were also contributing (and creating some contradictory pose instructions, which I’d already made a note to fix), but all it takes to turn “slightly-cartoonish illustration of a woman cooking” into pop-eyed anime is adding “with blue eyes”.
The following images were all done with the same settings (Qwen Image, CFG 6.5, 42 steps, seed 1019441477):