
Online meeting analyser for interviews using image processing and NLP

Focus is an online interview analyser that extracts information about an online interview from its video and audio recording. The system analyses the interviewee's emotions using image processing and deep learning techniques, and analyses the interview audio using NLP and machine learning techniques. It produces a detailed analysis of the interview, which companies can use to evaluate the candidate.

Demo Link

Video.mp4

Dataset Link

https://drive.google.com/file/d/1PH8WC63IqV0deTqNUHjxg2hr--zlkZJJ/view?usp=sharing

Setup

1. Clone this repository

$ git clone https://github.com/VaibhaveS/Focus.git

2. Change directory to that folder

$ cd Focus

3. Run the Jupyter notebooks: open Audio.ipynb and Process.ipynb manually, or execute them from the command line

$ jupyter nbconvert --to notebook --execute Audio.ipynb Process.ipynb

4. Enable the Google Cloud Speech-to-Text API at https://console.cloud.google.com/ (a minimal transcription sketch follows these steps)

5. Open Homepage.html in the browser
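
As a rough illustration of step 4, here is a minimal transcription sketch using the google-cloud-speech Python client; the file name, audio encoding, and language code are assumptions rather than values taken from the notebooks.

from google.cloud import speech

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account
# key for a project with the Speech-to-Text API enabled.
client = speech.SpeechClient()

with open("interview.wav", "rb") as f:  # hypothetical file name
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Synchronous recognition suits short clips; a full interview would need
# long_running_recognize with the audio uploaded to a GCS bucket.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)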

Features

  • Average response time of the interviewee

    The time delta in seconds between each question and its answer is accumulated over all question-answer pairs to compute the interviewee's overall average response time. This lets the interviewer gauge how quickly the interviewee thinks and answers questions, i.e. their decision making (a sketch follows this list).

  • Bar chart signifying the count of each emotion

    A bar chart of how often each emotion was detected gives an overall picture of the emotion displayed in response to a given stimulus, for example a difficult question (a sketch follows this list).

  • Number of active speakers (in a normal interview setting this should be two, but it may vary)

    The number of active speakers is estimated from the audio component of the meeting and displayed (a diarization sketch follows this list).

  • Number of questions asked

    Questions are recognised among the sentences of the audio transcript and their count is returned (a sketch follows this list).

  • Percentage of non-trivial questions

    From the questions asked, the non-interesting questions are identified using TF-IDF together with an existing dataset of both interesting and non-interesting questions, and their percentage is reported.

  • Percentage of interesting questions

    Likewise, the interesting questions are identified with the same TF-IDF approach and dataset, and their percentage is reported (a combined sketch follows this list).
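
A minimal sketch of the average-response-time computation described above, assuming question and answer timestamps (in seconds) have already been extracted from the transcript; the pair list here is hypothetical.

# Hypothetical (question end, answer start) timestamps in seconds.
qa_pairs = [(12.0, 14.5), (60.3, 66.1), (120.8, 122.0)]

# Sum the per-pair deltas, then average over all pairs.
total_delta = sum(answer - question for question, answer in qa_pairs)
print(f"Average response time: {total_delta / len(qa_pairs):.2f} s")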
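
A minimal sketch of the emotion bar chart, assuming per-frame emotion labels have already been produced by the video-analysis pipeline; the labels below are hypothetical.

from collections import Counter
import matplotlib.pyplot as plt

# Hypothetical per-frame emotion labels from the video pipeline.
frame_emotions = ["neutral", "happy", "neutral", "surprised", "neutral"]
counts = Counter(frame_emotions)

plt.bar(list(counts.keys()), list(counts.values()))
plt.xlabel("Emotion")
plt.ylabel("Frames detected")
plt.title("Emotion frequency during the interview")
plt.show()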
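
One way to count active speakers is speaker diarization; below is a minimal sketch using the google-cloud-speech client. The notebooks may derive the count differently, and the file name is again an assumption.

from google.cloud import speech

client = speech.SpeechClient()

# Ask the API to attribute each recognised word to a numbered speaker.
diarization = speech.SpeakerDiarizationConfig(
    enable_speaker_diarization=True,
    min_speaker_count=2,
    max_speaker_count=6,
)
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    diarization_config=diarization,
)

with open("interview.wav", "rb") as f:  # hypothetical file name
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)

# With diarization enabled, the final result carries every word tagged
# with its speaker; count the distinct tags.
words = response.results[-1].alternatives[0].words
print("Active speakers:", len({word.speaker_tag for word in words}))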
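
A minimal sketch of question detection plus TF-IDF-based scoring, assuming scikit-learn; the transcript sentences and the labelled question dataset are hypothetical stand-ins for the real data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

QUESTION_STARTERS = ("what", "why", "how", "when", "where", "who",
                     "which", "can", "could", "would", "do", "does")

def is_question(sentence):
    # Simple heuristic: a trailing "?" or a leading question word.
    s = sentence.strip().lower()
    return s.endswith("?") or s.startswith(QUESTION_STARTERS)

# Hypothetical transcript sentences.
transcript = [
    "Tell me about yourself.",
    "How would you design a URL shortener?",
    "What is your notice period?",
]
questions = [s for s in transcript if is_question(s)]
print("Questions asked:", len(questions))

# Hypothetical labelled dataset: 1 = interesting, 0 = non-interesting.
train_texts = [
    "How would you scale this system to a million users?",
    "Why did you choose that data structure?",
    "What is your notice period?",
    "Where is the office located?",
]
train_labels = [1, 1, 0, 0]

# Vectorise with TF-IDF and fit a simple classifier over the dataset.
vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(
    vectorizer.fit_transform(train_texts), train_labels
)

predictions = classifier.predict(vectorizer.transform(questions))
pct_interesting = 100 * sum(predictions) / len(questions)
print(f"Interesting questions: {pct_interesting:.0f}%")
print(f"Non-interesting questions: {100 - pct_interesting:.0f}%")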
