zoomGroupStats is now available as a package on CRAN.
Title: zoomGroupStats: Analyze Text, Audio, and Video from ‘Zoom’ Meetings
Description: Provides utilities for processing and analyzing the files that are exported from a recorded ‘Zoom’ Meeting. This includes analyzing data captured through video cameras and microphones, the text-based chat, and meta-data. You can analyze aspects of the conversation among meeting participants and their emotional expressions throughout the meeting.
# To use the stable release version of zoomGroupStats, use:
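install.packages("zoomGroupStats")
library(zoomGroupStats)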
# To use the development version, which will be regularly updated with new functionality, use:
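# the GitHub repository path below is assumed; see http://zoomgroupstats.org for the current location
devtools::install_github("andrewpknight/zoomGroupStats")
library(zoomGroupStats)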
You can stay up-to-date on the latest functionality at http://zoomgroupstats.org.
I facilitated a short workshop to give an introduction to using the zoomGroupStats set of functions. The tutorial covered three topics. First, I offered recommendations for how to configure Zoom for conducting research projects. Second, I described how to use the functions in zoomGroupStats to parse the output from Zoom and run rudimentary conversation analysis. Third, I illustrated how to use zoomGroupStats to analyze the video files output from Zoom.
If you weren’t able to make it, here are a few artifacts from the session:
Presentation materials, which provided the structure for the session (but do not include the demonstrations / illustrations)
zoomGroupStats Tutorial Code, which walked through using different functions in zoomGroupStats
Supplementary Tutorial Guide, which accompanied the session to provide additional recommendations
In the zoomGroupStats functions that I described in this post, there is a function for identifying and analyzing faces. The original version that I posted uses ImageMagick to pull image frames out of video files. This is embedded in the videoFaceAnalysis function. In practice, this is a very inefficient method for breaking down a video file before sending it off to AWS Rekognition for the face analysis. I’ve found that ImageMagick takes quite a long time to pull images from a video.
As an alternative, I’ve been using ffmpeg to process the video files before using the zoomGroupStats functions. I love ffmpeg and have used it for years to manipulate and process audio and video files. After you have installed ffmpeg on your machine, you can use system("xxxx") within your R code to execute ffmpeg commands. For example, here’s what I include in a loop that is working through a batch of video files:
ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "%0d.png", sep="")
Then, you can just run system(ffCmd) to execute the command. In this command, inputPath is the path to the video file, sampleWindow is the number of seconds that you would like between each frame grab, and outputPath is the path to the directory, including an image name prefix, where you want the images saved.
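For reference, here is a minimal sketch of what such a batch loop could look like. The directory paths, the .mp4 file pattern, and the 20-second sampleWindow are placeholders rather than values from my project, and the image name prefix is simply derived from each video’s file name:

videoDir = "path/to/zoom_videos"   # placeholder: directory holding the recorded video files
imageDir = "path/to/frame_grabs"   # placeholder: directory where frame grabs will be written
sampleWindow = 20                  # seconds between frame grabs

videoFiles = list.files(videoDir, pattern="\\.mp4$", full.names=TRUE)

for(inputPath in videoFiles) {
  # use each video's file name (without extension) as the image name prefix
  outputPath = file.path(imageDir, tools::file_path_sans_ext(basename(inputPath)))
  ffCmd = paste("ffmpeg -i ", inputPath, " -r 1/", sampleWindow, " -f image2 ", outputPath, "%0d.png", sep="")
  system(ffCmd)
}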
Using a computer that isn’t very powerful (a Mac Mini), I was able to break down 20 ~2 hour videos (about 2400 minutes of video) into frame grabs every 20 seconds (around 7000 images) in less than an hour.
I will end up replacing ImageMagick with ffmpeg in the next iteration of the videoFaceAnalysis function. This will also come with the output of new metrics (i.e., face height/width ratio and size oscillation). Stay tuned!