• jarfil@beehaw.org · 5 months ago

> breaking things down for an audience understanding neither the technical nor artistic aspect…

That’s not a reason to misrepresent things, though. It reminds me of the animistic fallacy, assuming they even understand what’s really going on themselves.

As for text, I’ve seen the MS generator spit out decent text, at least in titles and logos, and some AI art with fully legible sentences.

> Unless you start off training by feeding the model 3d data (say, voxels) alongside 2d projections

Some time ago already, there was an SD fork with bounding box support, plus a ChatGPT preprocessor prompt template to do the layout. Object permanence in that case is as simple as continuing with the lower layer once the upper one is finished, so the object stays continuous in the lower layer. It’s reasonable to expect this to go from bounding boxes to freehand layers for each object.

Since an LLM has been shown to be a good preprocessor for setting the layout, some more integration between the two, with object feedback from the SD side to shrink each layer’s bounding box, would do wonders. Adding an opacity mask could be a bit harder, but sounds doable. Roughly, the loop looks like the sketch below.
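To make that concrete, here’s a minimal sketch of the layout-then-paint loop using Hugging Face diffusers. It assumes the LLM step has already emitted a list of (object prompt, bounding box) pairs; the model IDs, the compose_layout() helper, and the example layout values are all illustrative, not the actual fork’s API. It paints back-to-front with plain inpainting, so each lower layer exists in full before an upper object covers it, which is what gives you the object permanence:

```python
# Minimal layout-then-inpaint sketch. Assumes an LLM preprocessor has already
# produced the layout; compose_layout() and the model IDs are illustrative.
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline
from PIL import Image, ImageDraw

txt2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting")

def compose_layout(scene_prompt, layout):
    """layout: [(object prompt, (x0, y0, x1, y1))], ordered back-to-front."""
    # Render the lowest layer (the background) in full first.
    canvas = txt2img(scene_prompt).images[0]
    # Paint each object into its bounding box, lower layers before upper ones,
    # so anything an upper object occludes already exists underneath it.
    for obj_prompt, box in layout:
        mask = Image.new("L", canvas.size, 0)          # black = keep pixels
        ImageDraw.Draw(mask).rectangle(box, fill=255)  # white = repaint region
        canvas = inpaint(prompt=obj_prompt, image=canvas,
                         mask_image=mask).images[0]
    return canvas

# A layout as a ChatGPT-style preprocessor might emit it (hypothetical values):
layout = [("a wooden table", (0, 256, 512, 512)),
          ("a glass vase with flowers", (180, 120, 330, 380))]
compose_layout("a sunlit kitchen, photorealistic", layout).save("scene.png")
```

The object-feedback step would presumably swap the hard rectangle mask for one shrunk to the object the SD actually produced, and an opacity mask would just mean compositing layers through an alpha channel instead of a binary inpainting mask.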

I don’t see the need for much higher abstraction to address this issue. Rendering videos of translucent objects might need it, though.