Feature Suggestion - include latent_directions #6
Nice work. @johndpope, I guess you have tried this GenEdi - how accurate do you think these feature vectors are?
I think these are just out of the box: https://github.com/anvoynov/GANLatentDiscovery https://github.com/GreenLimeSia/GenEdi/blob/master/latent_directions/eyes_open.npy https://github.com/search?q=latent_directions&type=code @Gvanderl - maybe there's interest in collaborating on this web layout for your facemaker code? UPDATE
Very nice. I'm curious how these latent directions were obtained. StyleGAN is not trained with feature labels, so the latent space is very likely highly entangled. Did these latent directions come from manual labeling or from unsupervised learning?
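For context on the supervised route: a common recipe is to sample latents, label the rendered faces with an attribute classifier, and take a linear direction that separates the two classes. A minimal sketch with synthetic stand-in data (the labels and latents here are random, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1000 latent codes (512-dim) plus binary attribute
# labels (e.g. "smiling" as judged by an off-the-shelf classifier).
latents = rng.normal(size=(1000, 512))
labels = rng.integers(0, 2, size=1000)

# Simplest supervised direction: normalized difference of class means.
direction = latents[labels == 1].mean(axis=0) - latents[labels == 0].mean(axis=0)
direction /= np.linalg.norm(direction)

# Editing then means moving a latent along the direction by a coefficient.
edited = latents[0] + 3.0 * direction
```

Unsupervised methods (like the GANLatentDiscovery repo linked above) instead search for directions that produce disentangled changes without any labels.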
I think this guy, @a312863063, created them: https://github.com/a312863063/generators-with-stylegan2 (English)
Ha, I know a bit about this guy. He has a lot of compute resources, but the research is not first-class yet. Yes, I envy him.
Amazon's take on StyleGAN2: GAN-Control.
Interesting.
I would love to collaborate; I'm just not certain what exactly should be implemented.
Hi @Gvanderl, there seem to be three parts (though maybe more):
```json
{
    "image_shape": [null, 3, 1024, 1024],
    "latent_directions": [
        "latent_directions/emotion_angry.npy",
        "latent_directions/angle_vertical.npy",
        "latent_directions/emotion_fear.npy",
        "latent_directions/lip_ratio.npy",
        "latent_directions/pitch.npy",
        "latent_directions/exposure.npy",
        "latent_directions/roll.npy",
        "latent_directions/eyes_open.npy",
        "latent_directions/beauty.npy",
        "latent_directions/nose_ratio.npy",
        "latent_directions/glasses.npy",
        "latent_directions/eye_eyebrow_distance.npy",
        "latent_directions/face_shape.npy",
        "latent_directions/mouth_open.npy",
        "latent_directions/nose_tip.npy",
        "latent_directions/eye_distance.npy",
        "latent_directions/race_yellow.npy",
        "latent_directions/mouth_ratio.npy",
        "latent_directions/smile.npy",
        "latent_directions/emotion_surprise.npy",
        "latent_directions/race_black.npy",
        "latent_directions/angle_horizontal.npy",
        "latent_directions/gender.npy",
        "latent_directions/emotion_happy.npy",
        "latent_directions/race_white.npy",
        "latent_directions/width.npy",
        "latent_directions/emotion_disgust.npy",
        "latent_directions/camera_rotation.npy",
        "latent_directions/age.npy",
        "latent_directions/height.npy",
        "latent_directions/yaw.npy",
        "latent_directions/nose_mouth_distance.npy",
        "latent_directions/emotion_sad.npy",
        "latent_directions/eye_ratio.npy",
        "latent_directions/emotion_easy.npy"
    ],
    "latents_dimensions": 512,
    "model": "cat",
    "synthesis_input_shape": [null, 18, 512]
}
```
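Note the mismatch between `"latents_dimensions": 512` and `"synthesis_input_shape": [null, 18, 512]`: some direction files are flat 512-vectors, and those can be broadcast across the 18 style layers before use. A sketch with a random stand-in for a loaded `.npy` file:

```python
import numpy as np

# Stand-in for a direction loaded from one of the .npy files listed above.
flat_direction = np.random.default_rng(0).normal(size=512)

# StyleGAN2's synthesis network takes (18, 512): one 512-dim style vector
# per layer. Repeating the same direction on every layer applies it globally.
per_layer = np.tile(flat_direction, (18, 1))
print(per_layer.shape)  # (18, 512)
```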
1st step: import the latent_directions directory - DONE. I pushed a TensorFlow 2 branch (work in progress). It seems each latent direction is a vector of shape (18, 512), which may make things more complicated. This looks promising - need to prototype the code to hot-wire the http_server (without the UI):

```python
def change_face(image="maface_01", direction="gender", coeffs=None):
    if coeffs is None:
        coeffs = [-2, 0, 2]
    directions = {
        "smile": 'ffhq_dataset/latent_directions/smile.npy',
        "gender": 'ffhq_dataset/latent_directions/gender.npy',
        "age": 'ffhq_dataset/latent_directions/age.npy'
    }
    direction = np.load(directions[direction])
    face_latent = np.load(config.latents_dir / (image + ".npy"))
    move_and_show(face_latent, direction, coeffs)
```

If we can hardcode a change of smile, then this would progress things toward extending the UI.

UPDATE: https://hostb.org/NCM - official download here.

UPDATE: Got the drop-down showing the latent directions in the project. It seems the final piece is to wire up the events when the drop-down changes.

UPDATE: https://github.com/jasonlbx13/FaceHack/blob/master/face_gan/flask_app.py

```python
if len(request.form) != 0:
    smile = float(request.form['smile'])
    age = float(request.form['age'])
    gender = float(request.form['gender'])
    beauty = float(request.form['beauty'])
    angleh = float(request.form['angleh'])
    anglep = float(request.form['anglep'])
    raceblack = float(request.form['raceblack'])
    raceyellow = float(request.form['raceyellow'])
    racewhite = float(request.form['racewhite'])
    feature_book = [smile, age, gender, beauty, angleh, anglep, raceblack, raceyellow, racewhite]
else:
    feature_book = [0, 0, 0, 0, 0, 0, 0, 0, 0]
```

```python
def move_latent(self, npy_dir, Gs_network, Gs_syn_kwargs, *args):
    latent_vector = np.load(npy_dir)[np.newaxis, :]
    smile, age, gender, beauty, angleh, anglep, raceblack, raceyellow, racewhite = args
    new_latent_vector = latent_vector.copy()
    new_latent_vector[0][:8] = (latent_vector[0]
                                + smile * self.smile_drt + age * self.age_drt + gender * self.gender_drt
                                + beauty * self.beauty_drt + angleh * self.angleh_drt + anglep * self.anglep_drt
                                + raceblack * self.raceblack_drt + raceyellow * self.raceyellow_drt
                                + racewhite * self.racewhite_drt)[:8]
    with self.graph.as_default():
        with self.session.as_default():
            images = Gs_network.components.synthesis.run(new_latent_vector, **Gs_syn_kwargs)
            PIL.Image.fromarray(images[0], 'RGB').save(
                dnnlib.make_run_dir_path('./static/img/edit_face.jpg'))
```

I guess once the code is dropped in, the coefficients will be important to get this working correctly.
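The `[:8]` slice in `move_latent` is the interesting part: only the first eight style layers (the coarse/middle ones controlling pose and face shape) are edited, leaving the fine-detail layers untouched. A generator-free sketch of just that edit step, using toy arrays in place of real latents and directions:

```python
import numpy as np

def move_latent_layers(latent, direction, coeff, layers=slice(0, 8)):
    """Shift an (18, 512) latent along `direction`, touching only `layers`."""
    edited = latent.copy()
    edited[layers] = latent[layers] + coeff * direction[layers]
    return edited

# Toy data: an all-zero latent and a uniform direction make the effect visible.
latent = np.zeros((18, 512))
direction = np.ones((18, 512))

# Sweep coefficients as in change_face's default [-2, 0, 2].
edits = [move_latent_layers(latent, direction, c) for c in (-2, 0, 2)]
```

The coefficient magnitude is what the thread calls out as needing tuning: too small and the edit is invisible, too large and identity drifts.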
Also a notable mention: @swapp1990 seems to have achieved something similar already.
So I've been trying out different StyleGAN repos, and I'm imagining a fusion of two.
Basically the guts of this repo
GreenLimeSia/GenEdi#3
While it is in PyTorch, I think it would be great to surface these in the sidebar, alongside the 512 vectors currently shown in another tab.
Also check out Artbreeder:
https://www.artbreeder.com/
It has some simple controls to switch up / infuse latent vectors.
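Artbreeder's "infuse" controls come down to interpolating between latents. A minimal sketch of that idea with toy vectors (this is not Artbreeder's actual API, just the underlying math):

```python
import numpy as np

def blend(latent_a, latent_b, alpha):
    """Linear interpolation: alpha=0 returns A, alpha=1 returns B."""
    return (1.0 - alpha) * latent_a + alpha * latent_b

# Toy parents: blending at alpha=0.5 lands halfway between them.
parent_a = np.zeros(512)
parent_b = np.ones(512)
child = blend(parent_a, parent_b, 0.5)
```

The same function works on (18, 512) W+ latents, and blending only a subset of the 18 layers gives the style-mixing effect StyleGAN is known for.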
Side note: NVIDIA labs are switching over to PyTorch:
NVlabs/stylegan2-ada#32 (comment)