I've experimented a few times with generating candy heart messages using various sorts of machine learning algorithms. Originally, short messages were almost all the original text-generating neural networks could handle. Now we've come back around to roughly the same performance, but with orders of magnitude more computational resources consumed. (Although I don't have to photoshop the messages onto candies any more, so that's nice.) Here's DALL-E3 generating candy hearts:
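(For reference, here's a minimal sketch of how one might request images like these programmatically, assuming the OpenAI Python SDK and its Images API; the prompt wording is illustrative, not necessarily what was used here.)

```python
# Minimal sketch: requesting a candy-heart image from DALL-E 3
# via the OpenAI Python SDK (assumes OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",
    prompt="A grid of pastel candy hearts, each stamped with a short message",
    size="1024x1024",
    n=1,  # DALL-E 3 accepts only one image per request
)

print(response.data[0].url)  # URL of the generated image
```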
My impression is that the text here is working not so much on the level of "here are plausible candy heart messages" as "here are some clusters of pixels that are associated with candy hearts". As with most AI-generated imagery, it's most impressive at first glance, and then gets worse the longer you look.
I've noticed that the more text DALL-E3 tries to put in an image, the worse the legibility of the text is. I'm fairly surprised at how legible most of the candy hearts above were. (Maybe it helps set expectations that the real-life candies are often garbled.) When I ask for fewer hearts, they end up crisper. But not necessarily improved in coherence.
Coherent textual content is significantly troublesome for image-generating algorithms, so the sweet hearts may be a mirrored image of that.
But there's another possibility that amuses me. The search "candy hearts with messages" brings up images from previous AI Weirdness candy heart experiments. It's likely that these were part of DALL-E3's training data, and they may have had an effect on the weirdness of the generated hearts I'm getting now.
When I ask for candy hearts with "quirky, AI-style messages", I get candy hearts that are (to me) indistinguishable in quality from the first grid.