That's great! Thanks for pointing it out. The reason for not using it here is quite simple: back when I developed the demos in this repo, that functionality was not available.
Unfortunately, I'm not actively keeping the demos up to date either.
At present, iris.py first uses FaceMesh to find the eyes and then uses another model to find (refine) the irises: mediapipeDemos/custom/iris_lm_depth.py, lines 50 to 72 in 47c6330.

However, per this comment, google-ai-edge/mediapipe#2605 (comment), when FaceMesh is run with refine_landmarks=True, it directly returns the irises. Is there a reason to not just use that directly? The indices for the iris landmarks can then be found using these constants:
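The constants themselves did not survive the page extraction. As a rough sketch of how they are typically used: with refine_landmarks=True, FaceMesh returns 478 landmarks instead of 468, and the extra ten are the irises. The exact index split below (left iris boundary 474–477 with center 473, right iris boundary 469–472 with center 468) is an assumption based on the mediapipe FACEMESH_LEFT_IRIS / FACEMESH_RIGHT_IRIS connection constants, not code from this repo:

```python
# Assumed iris landmark indices for FaceMesh with refine_landmarks=True
# (478 landmarks total). Verify against mediapipe's FACEMESH_*_IRIS
# constants before relying on the exact values.
LEFT_IRIS = [474, 475, 476, 477]   # boundary points of the left iris
RIGHT_IRIS = [469, 470, 471, 472]  # boundary points of the right iris
LEFT_IRIS_CENTER = 473
RIGHT_IRIS_CENTER = 468

def iris_centers(landmarks):
    """Return (left, right) iris centers from a 478-point landmark list.

    `landmarks` is any sequence of (x, y) pairs in normalized image
    coordinates, e.g. [(lm.x, lm.y) for lm in face_landmarks.landmark].
    """
    if len(landmarks) < 478:
        raise ValueError("need refined landmarks (refine_landmarks=True)")
    return landmarks[LEFT_IRIS_CENTER], landmarks[RIGHT_IRIS_CENTER]

# Synthetic demonstration with dummy normalized coordinates:
dummy = [(i / 478, i / 478) for i in range(478)]
left, right = iris_centers(dummy)
```

With real FaceMesh output, `dummy` would be replaced by the landmark list of one detected face.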
For reference, this is how the mediapipe example code then plots these:
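The plotting snippet referenced above was lost in extraction and is not reproduced here. As a stand-in sketch (not the original mediapipe example), the essential step any plot needs is denormalization: FaceMesh coordinates are in [0, 1] and must be scaled by the image size before drawing. The `enclosing_circle` helper is a hypothetical simplification of what `cv2.minEnclosingCircle` would normally do:

```python
# Stand-in sketch for plotting iris landmarks; the original mediapipe
# example code is not reproduced here.
def to_pixels(norm_points, width, height):
    """Scale normalized (x, y) landmarks to integer pixel coordinates."""
    return [(int(x * width), int(y * height)) for x, y in norm_points]

def enclosing_circle(pixel_points):
    """Crude iris circle: centroid plus half the bounding-box extent.
    A real implementation would use cv2.minEnclosingCircle instead."""
    xs = [p[0] for p in pixel_points]
    ys = [p[1] for p in pixel_points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    radius = max(max(xs) - min(xs), max(ys) - min(ys)) / 2
    return (cx, cy), radius

# Four normalized points roughly around an iris, on a 640x480 frame:
iris = [(0.30, 0.40), (0.32, 0.42), (0.30, 0.44), (0.28, 0.42)]
center, r = enclosing_circle(to_pixels(iris, 640, 480))
```

The resulting center and radius can then be drawn onto the frame with any image library (e.g. OpenCV's `cv2.circle`).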