Sign language translator #824

Closed
wants to merge 7 commits
20 changes: 20 additions & 0 deletions Sign-Language-Translator/readme.md
@@ -0,0 +1,20 @@
# **Sign Language Translator**

## Overview

The Sign Language Translator is a real-time video processing application designed to translate sign language gestures into English words. This tool aims to facilitate communication between individuals with hearing impairments and those who do not understand sign language. Additionally, it is useful for interpreting emergency signs and gestures, providing an accessible and efficient way to understand critical information quickly.

## Features

- **Real-Time Translation**: Converts hand gestures into English words using live video feed.
- **User-Friendly Interface**: Easy-to-use interface that displays translated text instantly.
- **Interactive Communication**: Enhances interaction with deaf and hard-of-hearing individuals by translating sign language.
- **Emergency Sign Understanding**: Aids in recognizing and understanding emergency gestures.
- **High Accuracy**: Utilizes advanced machine learning models to ensure accurate translation of hand gestures.

## How It Works

1. **Video Capture**: The application captures live video from a camera.
2. **Gesture Recognition**: Advanced image processing and machine learning algorithms analyze the video feed to detect and recognize hand gestures.
3. **Translation**: Recognized gestures are translated into corresponding English words.
4. **Display**: The translated words are displayed on the screen in real-time, allowing for immediate understanding and interaction.
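The recognition step above reduces to simple landmark geometry: a finger counts as "raised" when its fingertip landmark sits above its PIP joint in image coordinates (smaller y, since image y grows downward), and the thumb is compared along x instead. A minimal sketch of that logic, assuming MediaPipe Hands' 21-point landmark indexing (`count_raised_fingers` is a hypothetical helper name, not part of the committed code):

```python
# (tip, pip) landmark index pairs for index, middle, ring, and pinky
# fingers, following MediaPipe Hands' 21-point hand model.
FINGER_PAIRS = [(8, 6), (12, 10), (16, 14), (20, 18)]
THUMB_PAIR = (4, 2)  # thumb is compared along x, not y


def count_raised_fingers(points):
    """Count raised fingers from 21 (x, y) pixel coordinates.

    A finger is raised when its tip is above its PIP joint
    (image y grows downward, so "above" means a smaller y).
    """
    up = 0
    for tip, pip in FINGER_PAIRS:
        if points[tip][1] < points[pip][1]:
            up += 1
    # Thumb heuristic: tip to the right of landmark 2 (works for a
    # right hand facing the camera; mirrored hands need the opposite test).
    if points[THUMB_PAIR[0]][0] > points[THUMB_PAIR[1]][0]:
        up += 1
    return up
```

Gestures such as "VICTORY" or "OK" are then just combinations of this count with a few extra per-finger comparisons.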
223 changes: 223 additions & 0 deletions Sign-Language-Translator/sign_language.ipynb
@@ -0,0 +1,223 @@
{
"cells": [
{
"cell_type": "markdown",
"source": [
"Make sure to check camera permissions."
],
"metadata": {
"id": "4rEOrRvlEBJl"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "XiurAfp_wyGU",
"outputId": "86a621dc-f8f4-4abf-e28d-04100a1b736f"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting mediapipe\n",
" Downloading mediapipe-0.10.14-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (35.7 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m35.7/35.7 MB\u001b[0m \u001b[31m23.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: absl-py in /usr/local/lib/python3.10/dist-packages (from mediapipe) (1.4.0)\n",
"Requirement already satisfied: attrs>=19.1.0 in /usr/local/lib/python3.10/dist-packages (from mediapipe) (23.2.0)\n",
"Requirement already satisfied: flatbuffers>=2.0 in /usr/local/lib/python3.10/dist-packages (from mediapipe) (24.3.25)\n",
"Requirement already satisfied: jax in /usr/local/lib/python3.10/dist-packages (from mediapipe) (0.4.26)\n",
"Requirement already satisfied: jaxlib in /usr/local/lib/python3.10/dist-packages (from mediapipe) (0.4.26+cuda12.cudnn89)\n",
"Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from mediapipe) (3.7.1)\n",
"Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from mediapipe) (1.25.2)\n",
"Requirement already satisfied: opencv-contrib-python in /usr/local/lib/python3.10/dist-packages (from mediapipe) (4.8.0.76)\n",
"Collecting protobuf<5,>=4.25.3 (from mediapipe)\n",
" Downloading protobuf-4.25.3-cp37-abi3-manylinux2014_x86_64.whl (294 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m294.6/294.6 kB\u001b[0m \u001b[31m26.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting sounddevice>=0.4.4 (from mediapipe)\n",
" Downloading sounddevice-0.4.7-py3-none-any.whl (32 kB)\n",
"Requirement already satisfied: CFFI>=1.0 in /usr/local/lib/python3.10/dist-packages (from sounddevice>=0.4.4->mediapipe) (1.16.0)\n",
"Requirement already satisfied: ml-dtypes>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from jax->mediapipe) (0.2.0)\n",
"Requirement already satisfied: opt-einsum in /usr/local/lib/python3.10/dist-packages (from jax->mediapipe) (3.3.0)\n",
"Requirement already satisfied: scipy>=1.9 in /usr/local/lib/python3.10/dist-packages (from jax->mediapipe) (1.11.4)\n",
"Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (1.2.1)\n",
"Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (0.12.1)\n",
"Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (4.53.0)\n",
"Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (1.4.5)\n",
"Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (24.0)\n",
"Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (9.4.0)\n",
"Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (3.1.2)\n",
"Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->mediapipe) (2.8.2)\n",
"Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from CFFI>=1.0->sounddevice>=0.4.4->mediapipe) (2.22)\n",
"Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib->mediapipe) (1.16.0)\n",
"Installing collected packages: protobuf, sounddevice, mediapipe\n",
" Attempting uninstall: protobuf\n",
" Found existing installation: protobuf 3.20.3\n",
" Uninstalling protobuf-3.20.3:\n",
" Successfully uninstalled protobuf-3.20.3\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorflow-metadata 1.15.0 requires protobuf<4.21,>=3.20.3; python_version < \"3.11\", but you have protobuf 4.25.3 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0mSuccessfully installed mediapipe-0.10.14 protobuf-4.25.3 sounddevice-0.4.7\n"
]
}
],
"source": [
"%pip install mediapipe"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"id": "PNjK91gUvvRG",
"outputId": "873caa23-ddc3-45c0-c620-68e3a5adbc45"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Error: Camera not opened.\n"
]
},
{
"ename": "error",
"evalue": "OpenCV(4.8.0) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'\n",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31merror\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-1-c54d487c0cd0>\u001b[0m in \u001b[0;36m<cell line: 195>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 198\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Error: Camera not opened.\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 199\u001b[0m \u001b[0mexit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 200\u001b[0;31m \u001b[0mimgRGB\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcv2\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcvtColor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mimg\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mcv2\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mCOLOR_BGR2RGB\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 201\u001b[0m \u001b[0mresults\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mhands\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mprocess\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mimgRGB\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 202\u001b[0m \u001b[0mmultiLandMarks\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mresults\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmulti_hand_landmarks\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31merror\u001b[0m: OpenCV(4.8.0) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'\n"
]
}
],
"source": [
"import cv2\n",
"import mediapipe as mp\n",
"import time\n",
"\n",
"cap = cv2.VideoCapture(0)\n",
"cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280) # Set camera width to 1280 pixels\n",
"cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720) # Set camera height to 720 pixels\n",
"\n",
"mpHands = mp.solutions.hands\n",
"hands = mpHands.Hands(static_image_mode=False, max_num_hands=2, min_detection_confidence=0.7)\n",
"mpDraw = mp.solutions.drawing_utils\n",
"fingercordinates=[(8,6),(12,10),(16,14),(20,18)]\n",
"thumbcordinate=(4,2)\n",
"\n",
"while True:\n",
" success, img = cap.read()\n",
" if not success:\n",
" print(\"Error: Failed to read frame from camera.\")\n",
" break\n",
" imgRGB=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)\n",
" results=hands.process(imgRGB)\n",
" multiLandMarks=results.multi_hand_landmarks\n",
" if multiLandMarks:\n",
" handpoints=[]\n",
" for handLms in multiLandMarks:\n",
" mpDraw.draw_landmarks(img,handLms,mpHands.HAND_CONNECTIONS)\n",
" for idx, lm in enumerate(handLms.landmark):\n",
" # print(id,lm)\n",
" h,w,c=img.shape\n",
" cx,cy=int(lm.x*w),int(lm.y*h)\n",
" handpoints.append((cx,cy))\n",
" for point in handpoints:\n",
" cv2.circle(img,point,10,(0,0,255),cv2.FILLED)\n",
"\n",
" upcount=0\n",
" for cordinate in fingercordinates:\n",
" if handpoints[cordinate[0]][1] < handpoints[cordinate[1]][1]:\n",
" upcount+=1\n",
" if handpoints[thumbcordinate[0]][0] >handpoints[thumbcordinate[1]][0]:\n",
" upcount+=1\n",
"\n",
" #condition for victory\n",
" if (handpoints[8][1] < handpoints[6][1]) and (handpoints[12][1] < handpoints[10][1]) :\n",
" if upcount==2:\n",
" cv2.putText(img, \"VICTORY\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
"\n",
" # #condition for yo\n",
" # if (handpoints[8][1] < handpoints[6][1]) and (handpoints[20][1] < handpoints[18][1]):\n",
" # if upcount==2:\n",
" # cv2.putText(img, \"YO\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" #condition for ok\n",
" if handpoints[4][1] < handpoints[2][1] and handpoints[4][1] < handpoints[5][1] and handpoints[8][1] > handpoints[5][1] and upcount == 1:\n",
" cv2.putText(img, \"OK\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # Condition for hi (all five fingers raised)\n",
" if (handpoints[4][1] < handpoints[2][1] and handpoints[8][1] < handpoints[6][1]) and \\\n",
" (handpoints[20][1] < handpoints[18][1] and handpoints[12][1] < handpoints[10][1]) and \\\n",
" (handpoints[16][1] < handpoints[14][1]):\n",
" cv2.putText(img, \"HI\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # condition for smile\n",
" if handpoints[4][1] < handpoints[2][1] and handpoints[8][0] < handpoints[6][0]:\n",
" if upcount == 2:\n",
" cv2.putText(img, \"smile\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # condition for call me\n",
" if (handpoints[4][1] < handpoints[2][1] and handpoints[20][1] < handpoints[18][1]):\n",
" if upcount == 2:\n",
" cv2.putText(img, \"call me\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # condition for hot\n",
" if (handpoints[8][1] < handpoints[6][1]) :\n",
" if upcount == 1:\n",
" cv2.putText(img, \"HOT\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # condition for lose\n",
" if handpoints[thumbcordinate[0]][0] > handpoints[thumbcordinate[1]][0]:\n",
" if upcount == 1:\n",
" cv2.putText(img, \"LOSE\", (15,250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
" # #condition for thumbs up\n",
" # if (handpoints[4][1] < handpoints[2][1] and handpoints[8][1] < handpoints[6][1]) and (handpoints[20][1] > handpoints[18][1] and handpoints[12][1] > handpoints[10][1]) and (handpoints[16][1] > handpoints[14][1]):\n",
" # if upcount == 1:\n",
" # cv2.putText(img, \"THUMBS UP\", (15,250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
" # condition for thief (thumb tip touching index finger tip and rest of fingers closed)\n",
" dist_thumb_index = abs(handpoints[4][0] - handpoints[8][0]) + abs(handpoints[4][1] - handpoints[8][1])\n",
" if dist_thumb_index < 60 and upcount==2:\n",
" cv2.putText(img, \"thief\", (15, 250), cv2.FONT_HERSHEY_PLAIN, 3, (0, 0, 255), 5)\n",
"\n",
"\n",
"\n",
" cv2.putText(img,str(upcount),(100,150),cv2.FONT_HERSHEY_PLAIN,12,(255,0,0),12)\n",
"\n",
"\n",
" cv2.imshow(\"finger counter\",img)\n",
" if cv2.waitKey(1) & 0xFF == 27: # Press 'Esc' key to exit the loop and close the window\n",
" break\n",
"\n",
"cap.release()\n",
"cv2.destroyAllWindows()"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
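The "thief" condition in the notebook detects a pinch by taking the Manhattan distance between the thumb tip (landmark 4) and index fingertip (landmark 8) and comparing it to a fixed 60-pixel threshold. Isolated as a pure function, that check looks roughly like this (`is_pinch` is a hypothetical helper name; the threshold assumes a frame around 1280x720 and would need tuning at other resolutions):

```python
def is_pinch(points, threshold=60):
    """Return True when the thumb tip (landmark 4) touches the index
    fingertip (landmark 8), using Manhattan distance in pixels.

    The 60 px default matches the committed notebook; it is an
    assumption tied to the camera resolution, not a universal constant.
    """
    tx, ty = points[4]
    ix, iy = points[8]
    return abs(tx - ix) + abs(ty - iy) < threshold
```

Manhattan distance is a cheap stand-in for Euclidean distance here; since the comparison is against a hand-tuned threshold anyway, the cheaper metric changes nothing in practice.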