I see there's an organization making a more open version of the diffusion-based image generators.
Maybe a variation of their training process could incorporate the kind of annotations the ImageMonkey database has,
and in turn maybe they'd grant access to their models.
https://stability.ai/
https://twitter.com/emostaque/status/1556684541745676292?s=21&t=OUlPXOiE9pZtYyGdjQM0KQ
What they're doing is exactly what I'd hoped would appear:
imagine if people could barter the labour of doing a bunch of annotations for access to the nets trained on those annotations.
They show examples of adding detail to rough sketch art, and the same text-to-image generation that diffusion is famous for.
This is basically the motivation for things like the body-part annotation on people, urban surface types, etc. that I've been doing.
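A rough sketch of what I mean (purely hypothetical: the record layout, field names, and the `annotation_to_caption` helper are my own illustration, not an actual ImageMonkey export format or Stability API) would be to flatten region annotations into plain-text captions, producing the image/caption pairs that text-to-image training expects:

```python
# Hypothetical sketch: turn ImageMonkey-style region annotations into
# (image, caption) pairs suitable for caption-conditioned fine-tuning.
# The dataclasses and field names below are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Region:
    label: str                           # e.g. "left hand", "asphalt"
    polygon: List[Tuple[float, float]]   # normalized (x, y) vertices


@dataclass
class AnnotatedImage:
    path: str
    regions: List[Region]


def annotation_to_caption(img: AnnotatedImage) -> str:
    """Flatten the region labels into a simple descriptive caption."""
    labels = sorted({r.label for r in img.regions})
    return "a photo containing " + ", ".join(labels)


def build_training_pairs(images: List[AnnotatedImage]) -> List[Tuple[str, str]]:
    """Produce (image_path, caption) pairs for a caption-conditioned trainer."""
    return [(img.path, annotation_to_caption(img)) for img in images]


if __name__ == "__main__":
    example = AnnotatedImage(
        path="imgs/street_001.jpg",
        regions=[
            Region("asphalt", [(0.1, 0.8), (0.9, 0.8), (0.9, 1.0), (0.1, 1.0)]),
            Region("person", [(0.4, 0.2), (0.6, 0.2), (0.6, 0.7), (0.4, 0.7)]),
        ],
    )
    print(build_training_pairs([example]))
```

The polygons themselves could also feed a spatially conditioned variant (segmentation-map conditioning rather than captions), which is where the body-part and surface-type annotations would really pay off.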
I'm not sure what the full picture is with their work. They say anyone will be able to run it (e.g. a ~2 GB model) if you have a suitable GPU, but who is paying for the training? I'm sure this kind of generator takes a huge, expensive cluster to train, but the cost shared across a community of millions of users should be minimal. I think OpenAI recoups their cost by charging for access to a closed model.