The classical Novel Object Recognition (NOR) test.
1. Fully charge the phone battery before the experiment
3. After entering the venue, set up the lighting conditions (avoid a direct light source and adopt a softer, indirect light source for the video recording to circumvent interference from light reflection).
3. Separate the mice into individual cages with the corresponding label on the cage
4. Prepare a square box for the Novel Object Recognition (NOR) experiment, with dimensions of 60 × 60 cm in our case
5. Avoid highly light-reflective materials for the NOR box to prevent heterogeneous light intensity
6. Avoid materials that tend to retain odors (e.g. cardboard or wood) to prevent interference between subjects via olfactory cues
1. Turn on the computer and the phone
2. Turn on the ‘do not disturb’ mode in the phone
3. Make sure both the phone and the computer are connected to the same WLAN or WIFI network
4. Download and install the “DroidCam” app from GooglePlay store if it is not yet done
5. Open the DroidCam app on the phone and note the device IP (the ID shown in the app)
6. Download and install the “DroidCam Client” app from “https://www.dev47apps.com/droidcam/windows/” in the computer if it is not yet done
7. Open the DroidCam Client app on the computer and enter the phone’s device IP to connect the computer’s video input to the phone camera
8. Adjust the parameters of the phone camera (e.g. white balance, resolution) if necessary
9. On the computer, if necessary, create a folder for the experiment and create five subfolders within it, namely ‘experiments_files’, ‘report_files’, ‘schedule_files’, ‘video_files’ and ‘zone_files’.
10. Open the SMART v3.0 software on the computer and create two new experiment files with the customized protocol, one for the 10-min training session and one for the 5-min testing session
11. For both experiment files, in the ‘Configuration’ tab, open ‘Path Settings’ and, if necessary, set the path of each file type to the corresponding subfolder created in Step 9.
12. For both experimental files, in the Experiment Assistant tab, open the setting of the image source and choose DroidCam Source 2 or 3
13. Mount the phone on the phone clamp and mount the clamp on the holder scaffold
14. Place two identical objects into the field as the ‘training objects’ used in the training session. The shape, size, and position of the objects are dependent on the experimental design.
15. On the floor of the field, trace along the bottom edge of each object to mark its position for reference.
16. Similarly, decide the shape, size, and position of the ‘novel’ object used in the testing session and mark its position as described above for reference.
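The folder structure described in step 9 of the previous section can also be created programmatically rather than by hand. A minimal Python sketch, assuming a hypothetical experiment root named `NOR_experiment` (adjust to your own layout):

```python
from pathlib import Path

# Hypothetical experiment root folder; rename to match your own setup.
root = Path("NOR_experiment")

# The five subfolders referenced by SMART's 'Path Settings'.
subfolders = [
    "experiments_files",
    "report_files",
    "schedule_files",
    "video_files",
    "zone_files",
]

# Create the root and each subfolder, ignoring any that already exist.
for name in subfolders:
    (root / name).mkdir(parents=True, exist_ok=True)
```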
1. According to the view in the SMART Player window, adjust the position of the phone camera so that the entire field area is just contained within the view
2. Place the two training objects in their respective positions according to their marks
3. Adjust the position and intensity of the light source so that there is no significant light reflection on the walls or the floor, i.e. make sure the brightness inside the box is homogeneous
4. In the calibration setting, set the calibration units to ‘Centimeters’
5. Move the two red lines so that they delimit the horizontal boundary of the field area
6. Set the horizontal calibration value to the width of the field area
7. Move the two green lines so that they delimit the vertical boundary of the field area
8. Set the vertical calibration value to the height of the field area
9. Accept the calibration setting
10. Open the Zones Editor
11. Create three zones for the NOR experiment: one corresponding to the entire field and two for the two identical training objects
12. Adjust the size and position of the border of the outermost zone (corresponding to the entire field) such that it circumscribes all the field area
13. Adjust the size and position of the two inner zones (corresponding to the two training objects respectively) according to the experimental design such that both objects are located right at the center of their respective zones.
14. The zones for the two identical training objects should likewise be identical in size, and slightly larger than the actual objects so that each zone encompasses the entire object with some blank space in between
15. Export the Zone data to the ‘zone_files’ for future reference if necessary
16. Save the zone setting and close the Zones Editor window
17. Open the ‘Time Settings’
18. Choose the pre-set time mode
19. Set a preferred duration for the latency, i.e. the timespan during which the program waits and suspends the initiation of data acquisition after the first object movement is detected in the field. This helps prevent the program from tracking the researcher’s hand while they place the mouse in the field
20. Set a preferred duration for the acquisition time, after which the data acquisition would terminate automatically
21. Open the ‘Detection Settings’
22. Click ‘Start Test’ to gauge the current detection performance
23. Click ‘Snap Shot’ to capture the empty field (without mouse subject) as the background reference for the detection process
24. After that, the image window, still in ‘Start Test’ mode, should show a completely blank area inside the outermost zone
25. Place a trial mouse in the field, or simply wave a hand above the field (as a crude test), to check whether the program can detect and track the moving object in the field
26. If the detection is still not satisfactory, fine-tune the other detection parameters (threshold, erosion, brightness, contrast, etc.) to optimize the detection performance
27. ‘Activity Detection’ can also be included to acquire extra data regarding the activity of the subject
28. In the ‘detection mode’ option menu, choose the ‘Triwise’ detection mode.
29. In the ‘Triwise’ detection configuration window, choose the head as the tracking point and set the criteria for zone-transition detection
30. Save the detection setting
31. Open the ‘Subject’ tab
32. Delete the default subject and create the subject list according to the experimental design
33. Export the subject list to the ‘schedule_files’ for future reference if necessary
34. Open the ‘Schedule’ tab
35. Delete the default trial and create the schedule according to the experimental design
36. Drag and drop the corresponding subjects into the experimental ‘session’ to create ‘trials’
37. Select the first subject of the experiment and click the green tick in the top menu bar to assign it as the designated subject for the ensuing data-acquisition process
38. Export the schedule setting to the ‘schedule_files’ for future reference if necessary
39. Open the ‘Analysis’ tab and then open the ‘report’ configuration window.
40. Add a new customized report format based on the ‘Summary report’ format and include the data of interest in the customized report.
41. Save the experimental file as the ‘training session’ experiment
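To make the purpose of the calibration in steps 4–8 above concrete, the sketch below converts a tracked point from pixel coordinates into field coordinates in centimetres. The pixel values here are placeholders rather than numbers produced by SMART; only the 60 × 60 cm field size is taken from this protocol:

```python
# Placeholder pixel positions of the calibration lines; SMART derives
# these from the red (horizontal) and green (vertical) boundary lines.
left_px, right_px = 40, 600    # red lines delimiting the field width
top_px, bottom_px = 30, 590    # green lines delimiting the field height

# Field dimensions entered as the calibration values (steps 6 and 8).
field_width_cm, field_height_cm = 60.0, 60.0

# Scale factors: centimetres per pixel along each axis.
x_scale = field_width_cm / (right_px - left_px)
y_scale = field_height_cm / (bottom_px - top_px)

def px_to_cm(x_px: float, y_px: float) -> tuple[float, float]:
    """Convert a tracked point from pixel to field coordinates (cm)."""
    return ((x_px - left_px) * x_scale, (y_px - top_px) * y_scale)

# A point at the middle of the image maps to the middle of the field.
center = px_to_cm(320, 310)
```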
1. In the experiment file for the training session, change the data-acquisition time in the time-settings window from 10 minutes to 5 minutes.
2. Replace one of the training objects with the novel object and place it in its pre-defined position according to its marks.
3. Change the zone definition as well if the size or shape of the novel object is highly incompatible with the pre-defined zone (ideally, its size should be kept as similar to the familiar object as possible so that the pre-defined zone definition can still be adopted).
4. Save the experiment as a new file for the ‘testing session’
1. Open the experimental file for the training session
2. Place the two training objects in their respective positions according to their marks
3. Open the ‘Data Acquisition’ tab
4. Click ‘Start’ to initiate the data-acquisition process, which nonetheless will not be activated until object movement is detected inside the field
5. Click the video recording button at the bottom of the image window to start recording video for future reference as well
6. Take the first animal subject (consistent with the schedule) out of its cage and release it right at the middle of the field (be careful not to drop the subject from a height, and avoid touching the mounted phone so the camera does not move)
7. The program will start to track the subject and acquire data after the pre-defined initial latency. Monitor the tracking process on the screen to ensure the tracking and acquisition proceed properly.
8. After the pre-defined acquisition duration, the tracking and data acquisition terminate automatically, but the video recording needs to be stopped manually.
9. Remove the subject from the field (again, be mindful of the mounted phone)
10. Count and record the number of feces defecated by the subject
11. Clean the feces and spray the entire field with ethanol and wait for it to evaporate
12. The testing session of the same subject should be started 1 hour later.
13. After data acquisition terminates, the designated subject for the next acquisition automatically advances to the next subject in the schedule. If necessary, the schedule can be modified and the designated subject reassigned with the green tick described in Step 37 of the software-configuration section
14. Repeat Steps 2–11 for the next subject
1. Open the experimental file for the testing session
2. Replace one of the training objects with the novel object and place it in its pre-defined position according to its marks.
3. Except for the differences in the time settings (and perhaps also in the zone definition), all other procedures are the same as those described for the training session.
1. After completing all the trials, close all the sub-windows in SMART v3.0 and open the ‘Analysis’ tab
2. In the analysis window, select all the trial results from the left sidebar.
3. Drag and drop the results into the first row of the table in the middle, which should then tabulate all the results
4. For each row, choose the customized summary report by clicking the green tick beside the report option bar.
5. Click ‘Analyze’ to start the analysis with the chosen customized summary report
6. Select and open all the available reports of the analysis
7. Export all the reports as Excel files to a folder (e.g. ‘report_files’)
8. Export some representative trajectory images if necessary
9. Further analyze these raw data with the appropriate statistical test
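As one possible instance of step 9 — not the only valid analysis — the sketch below computes a per-subject discrimination index from exploration times in the two object zones and tests it against zero with a one-sample t-test. The exploration times are placeholder values standing in for numbers read from the exported summary reports:

```python
import math
import statistics

# Placeholder per-subject exploration times (seconds) in the familiar-
# and novel-object zones, as would be read from the summary reports.
familiar = [12.3, 10.8, 15.1, 9.7, 13.4]
novel = [18.9, 16.2, 19.5, 14.8, 17.0]

# Discrimination index per subject: (novel - familiar) / (novel + familiar).
# Positive values indicate a preference for the novel object.
di = [(n - f) / (n + f) for n, f in zip(novel, familiar)]

# One-sample t statistic of the discrimination index against zero.
n = len(di)
t = statistics.mean(di) / (statistics.stdev(di) / math.sqrt(n))

print(f"mean DI = {statistics.mean(di):.3f}, t({n - 1}) = {t:.2f}")
```

Compare the resulting t statistic against the critical value for n − 1 degrees of freedom, or use a statistics package (e.g. `scipy.stats.ttest_1samp`) to obtain an exact p-value.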