We used the Neurophotometrics instrument to collect Fiber Photometry (FP) recordings; that system, along with a video webcam, ran through Bonsai, which packaged the data into .csv files. We used this workflow (https://drive.google.com/file/d/1iCVy0kNryJF5e-m_A0iA8AKAJORTw5E4/view?usp=sharing) to run the experiment. Note that we configured the settings so that both the FP signal and the video were recorded at 40 fps, to make synchronizing the data easier. The FP recordings were obtained at 470 nm (signal) and 415 nm (isosbestic baseline). Additionally, we used the KeyDown feature to mark the start and end times of the behavioral test in seconds (see the right-hand side of the FP recording spreadsheet files for this information).
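For reference, this is roughly how the interleaved 470/415 output can be separated downstream. This is a minimal Python sketch, not our confirmed pipeline: the `LedState` and `Region0G` column names are assumptions based on typical Neurophotometrics "All Data" files and should be checked against the actual headers.

```python
# Sketch: split an interleaved Neurophotometrics "All Data" CSV into
# 415 nm (isosbestic) and 470 nm (signal) streams and compute dF/F.
# Column names (LedState, Region0G) are assumptions -- check the file header.
import numpy as np
import pandas as pd

df = pd.read_csv("4441(0G).4442(1G)_All Data_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.csv")

# LedState encodes which LED fired on each frame (commonly 1 = 415 nm, 2 = 470 nm).
iso = df[df["LedState"] == 1].reset_index(drop=True)  # 415 nm baseline frames
sig = df[df["LedState"] == 2].reset_index(drop=True)  # 470 nm signal frames

n = min(len(iso), len(sig))  # interleaving can leave one extra frame on either stream
iso_f = iso["Region0G"].to_numpy()[:n]
sig_f = sig["Region0G"].to_numpy()[:n]

# Fit the isosbestic trace to the signal trace and use the fit as the baseline F0.
slope, intercept = np.polyfit(iso_f, sig_f, 1)
f0 = slope * iso_f + intercept
dff = (sig_f - f0) / f0
```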
After the experiment, Bonsai wrote out the video and FP .csv files (video: https://drive.google.com/file/d/1cgzgWoGjIabd3EOj4APHIqfaQjoa0XPg/view?usp=sharing; the .csv files are attached to this discussion). We then ran the video file through a second Bonsai workflow (https://drive.google.com/file/d/1XWbIZaeNbuccJY8WlGAG3J3vH4AXENXi/view?usp=sharing) to obtain the times at which the mice were in specific regions. I entered the pixel dimensions of our Regions of Interest (a social interaction test has three: two corners and an interaction zone), and Bonsai produced the file titled "4441(R)_4442(L)_pre-SI test pt1_FP G1_06.28.2022_FP Bonsai Data Analysis." In this file, each row corresponds to a video frame: the numeric columns are the x and y positions of the mouse, and the True/False columns indicate whether the mouse was inside each Region of Interest (the attached file corresponds to the left mouse).
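As an illustration of how that file can be summarized, here is a minimal sketch; the zone column names (`Corner1`, `Corner2`, `InteractionZone`) are placeholders for whatever headers the analysis file actually uses. Note that dividing by the nominal 40 fps only gives correct durations if no frames were dropped, which is exactly the problem described next.

```python
# Sketch: total time the mouse spent in each Region of Interest.
# Zone column names are placeholders -- substitute the actual headers.
import pandas as pd

FPS = 40  # nominal video frame rate
roi = pd.read_csv("4441(R)_4442(L)_pre-SI test pt1_FP G1_06.28.2022_FP Bonsai Data Analysis.csv")

for zone in ["Corner1", "Corner2", "InteractionZone"]:
    seconds = roi[zone].astype(bool).sum() / FPS  # True rows = frames inside the zone
    print(f"{zone}: {seconds:.1f} s in zone")
```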
However, as I mentioned previously, the frames in this file do not correspond to the frames in the FP recording files: the saved video runs "sped up" at roughly 4 min 40 s instead of the actual test duration of a little over 6 min, and the number of rows reflects that sped-up version of the video. Do you know how we might go about synchronizing the FP and video data? (A possible timestamp-based alignment is sketched after the attached files below.)

Attached files:

- [4441(0G).4442(1G)_All Data_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.xlsx](https://github.com/bonsai-rx/bonsai/files/9162148/4441.0G.4442.1G._All.Data_pre-SI.test.pt1_FP.G1_06.282022-06-28T14_30_55.xlsx)
- 4441(0G).4442(1G)_470_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.csv
- 4441(0G).4442(1G)_All Data_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.csv
- 4441(0G)_4442(1G)_415_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.csv
- 4441(R)_4442(L)_pre-SI test pt1_FP G1_06.28.2022_FP Bonsai Data Analysis.csv
- 4441(0G)_4442(1G)_415_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.xlsx
- 4441(0G).4442(1G)_470_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.xlsx
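One way the two streams might be aligned, as a minimal sketch rather than a confirmed solution: if both .csv files carry a per-frame timestamp written by Bonsai, each FP frame can be matched to the video frame nearest in time, no matter how many video frames were dropped. The `Timestamp` column name is an assumption (Bonsai's Timestamp operator can add one when re-running the workflow if the tracking file lacks it); the fallback shown assumes frames were dropped roughly uniformly.

```python
# Sketch: align video-tracking rows to FP frames. Assumes both CSVs have a
# per-frame "Timestamp" column written by Bonsai; column names are assumptions.
import numpy as np
import pandas as pd

fp = pd.read_csv("4441(0G).4442(1G)_470_pre-SI test pt1_FP G1_06.282022-06-28T14_30_55.csv")
video = pd.read_csv("4441(R)_4442(L)_pre-SI test pt1_FP G1_06.28.2022_FP Bonsai Data Analysis.csv")

# Preferred: timestamp-based pairing. Each FP frame takes the most recent
# video frame at or before it, so dropped video frames just repeat positions.
fp["Timestamp"] = pd.to_datetime(fp["Timestamp"])
video["Timestamp"] = pd.to_datetime(video["Timestamp"])
aligned = pd.merge_asof(fp.sort_values("Timestamp"), video.sort_values("Timestamp"),
                        on="Timestamp", direction="backward")

# Fallback if no shared timestamps exist: assume frames were dropped uniformly
# and stretch the video frame index onto the FP frame axis.
idx = np.round(np.linspace(0, len(video) - 1, num=len(fp))).astype(int)
aligned_fallback = pd.concat(
    [fp.reset_index(drop=True), video.iloc[idx].reset_index(drop=True)], axis=1)
```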