Tutorials & Workshops
On Monday, 3 June, before the main workshop, we will hold five workshops/tutorials: three full day and two half day. If you
wish to attend a workshop/tutorial, please contact the contact person(s) listed below.
You will be asked to pay an additional fee for the Monday tutorials as part of the registration process.
- Introduction to the Dive Depth Detection (3D) method: leveraging surface reflections to estimate whale depth and improve density estimation, an example using deep diving echolocators (half day)
- Updates for using the Low-Frequency Detection and Classification System (LFDCS) with archival and near real-time acoustic datasets (full day)
- The DE part: marine mammal density estimation from passive acoustic monitoring data (full day; half possible)
- PAMGuard: updates on Tethys database and Machine Learning (full day or two half days)
- Array Beamforming for Passive Acoustic Monitoring (half day)
Introduction to the Dive Depth Detection (3D) method: leveraging surface reflections to estimate whale depth and improve density estimation, an example using deep diving echolocators
- Email: firstname.lastname@example.org
- Workshop Organizer's Name: Annabel Westell, Amanda Holdman
- Organization: Northeast Fisheries Science Center, NOAA/NMFS
- Contact Phone Number: (774) 349 1885
- Duration: Half-day
- Modality: Hybrid (if possible)
- Anticipated Number of Participants: Unknown
Towed linear hydrophone arrays are a standard method for locating and tracking odontocetes during line-transect surveys. Using the time difference of arrival of an animal’s signal on multiple hydrophones over a series of detections, the perpendicular distance between the localized animal and the trackline can be calculated and used for acoustic density estimation. However, deep-diving odontocetes, such as sperm whales and beaked whales, primarily produce calls at depth, and thus the distances reported are actually slant distances (as opposed to true perpendicular distances). By analyzing the time difference in arrival of a direct detection and surface reflected echo, we can estimate animal depth, produce a depth corrected perpendicular distance, and improve acoustic density estimation.
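The geometry behind this correction can be written down compactly. Under an image-source model (flat sea surface, direct path r1, surface-reflected path r2 = r1 + c·τ for a measured echo delay τ), r2² − r1² = 4dh, where d is the animal depth and h the hydrophone depth. A minimal Python sketch of the idea, with the function names, the nominal sound speed, and the example numbers all our own assumptions:

```python
import math

SOUND_SPEED = 1500.0  # nominal sound speed in seawater (m/s); an assumption

def click_depth(slant_range_m, echo_delay_s, hydrophone_depth_m, c=SOUND_SPEED):
    """Animal depth from the direct/surface-echo delay (image-source model).

    With direct path r1 and reflected path r2 = r1 + c*tau,
    r2**2 - r1**2 = 4 * depth * hydrophone_depth.
    """
    r1 = slant_range_m
    r2 = r1 + c * echo_delay_s
    return (r2**2 - r1**2) / (4.0 * hydrophone_depth_m)

def perpendicular_distance(slant_range_m, depth_m, hydrophone_depth_m):
    """Depth-corrected (horizontal) distance from the trackline to the animal."""
    dz = depth_m - hydrophone_depth_m
    return math.sqrt(max(slant_range_m**2 - dz**2, 0.0))

# Example: 2 km slant range, 4 ms echo delay, array towed at 8 m depth.
d = click_depth(2000.0, 0.004, 8.0)          # ~751 m: a deep-diving odontocete
p = perpendicular_distance(2000.0, d, 8.0)   # ~1857 m, versus the 2000 m slant
```

Note how the corrected perpendicular distance is meaningfully shorter than the slant distance, which is exactly the bias that matters for density estimation.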
Over the past five years, the Passive Acoustics Branch at NOAA’s Northeast Fisheries Science Center has been developing the Dive Depth Detection (3D) method, a semi-automated process for the acoustic detection and localization of sperm whale and beaked whale clicks recorded by a linear towed hydrophone array. The purpose of this workshop is to share the theory behind the 3D method and take participants through the step-by-step process from annotating echolocation clicks in PAMGuard (post-click detector) to visualizing a dive track (depth of clicks over time) in R. We will also demonstrate the application of this method for improved acoustic density estimation and the study of dive behaviour.
This workshop will provide an introduction to a variety of useful tools within PAMGuard and the R package PAMpal (developed by Taiki Sakai). Participants will walk through the following steps using two sample datasets: (1) annotate events (grouping click trains to the individual level) in PAMGuard, (2) use PAMGuard’s Target Motion Analysis (TMA) module to generate a 2D localization and estimate a slant distance to a detected animal, (3) use the R package PAMpal to extract wav clips and metadata for each click in an annotated click train, (4) use Matlab to run an autocorrelation to estimate a slant delay time between an annotated click and its surface-reflected echo, (5) manually review and remove false detections, (6) calculate click depth, (7) calculate a depth-corrected perpendicular distance, and (8) plot estimated click depths over time and visualize a dive track.
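The workshop performs the autocorrelation step in Matlab, but the idea itself can be sketched in a few lines of Python on synthetic data (the sample rate, white-noise click model, echo strength, and search window below are all invented for illustration):

```python
import numpy as np

fs = 48_000                           # sample rate (Hz); illustrative value
rng = np.random.default_rng(0)
click = rng.standard_normal(256)      # stand-in for a broadband click clip

true_delay_s = 0.008                  # simulate an 8 ms surface-echo delay
lag = round(true_delay_s * fs)        # 384 samples
sig = np.zeros(4096)
sig[:256] += click
sig[lag:lag + 256] += 0.5 * click     # weaker surface-reflected copy

# One-sided autocorrelation: the echo appears as a secondary peak
# at the direct-to-reflected delay.
ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]

# Search only physically plausible delays (bounded by array depth/geometry).
lo, hi = 100, 1000
est_lag = lo + int(np.argmax(ac[lo:hi]))
est_delay_s = est_lag / fs            # recovers ~0.008 s
```

In practice the delay candidates are reviewed manually (step 5 above) because multipath, overlapping click trains, and noise can produce spurious autocorrelation peaks.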
While this workshop will focus on deep-diving species detected using a linear towed array, the method could be applied to stationary recorders and/or other species.
Interested participants can contact email@example.com with questions or specific topics of interest, which we will try to incorporate into the schedule.
Each participant should bring their PC laptop and charger. They should have a recent version of R installed and, if possible, a recent version/trial license of Matlab with the Signal Processing Toolbox. Participants who do not have access to Matlab can follow along with the presentation and/or with another participant. We are working towards integrating the 3D method into the PAMpal package so that the entire process can be completed in R, which is open source and freely available.
A sample dataset will be shared via Google Drive for participants to download before the workshop. Closer to the workshop, more information and details will be shared with interested participants, for example which R libraries should be installed in advance.
Prior to the workshop, participants may benefit from reviewing the following papers:
Westell, A., Sakai, T., Valtierra, R., Van Parijs, S.M., Cholewiak, D., DeAngelis, A. 2022. Sperm whale acoustic abundance and dive behaviour in the western North Atlantic. Scientific Reports, 12 (16821).
DeAngelis, A., Valtierra, R., Van Parijs, S.M., Cholewiak, D. 2017. Using multipath reflections to obtain dive depths of beaked whales from a towed hydrophone array. The Journal of the Acoustical Society of America, 142.
Valtierra, R. D., Glynn Holt, R., Cholewiak, D., & Van Parijs, S. M. 2013. Calling depths of baleen whales from single sensor data: Development of an autocorrelation method using multipath localization. The Journal of the Acoustical Society of America, 134 (3).
Aubauer, R., Lammers, M. O., and Au, W. W. L. 2000. One hydrophone method of estimating distance and depth of phonating dolphins in shallow water. The Journal of the Acoustical Society of America, 107.
Updates for using the Low-Frequency Detection and Classification System (LFDCS) with archival and near real-time acoustic datasets
- Email: firstname.lastname@example.org
- Workshop Organizer's Name: Julianne Wilder, Genevieve Davis, and Mark Baumgartner
- Organization: Northeast Fisheries Science Center, NOAA/NMFS and Woods Hole Oceanographic Institution
- Contact Phone Number: (508) 469-9332
- Duration: Full day
- Modality: Hybrid (if possible)
- Anticipated Number of Participants: Unknown
The Low-Frequency Detection and Classification System (LFDCS) is a software system built for efficient automated detection and classification of low-frequency tonal sounds produced by baleen whales in archival and real-time acoustic data. In this tutorial, we will cover the fundamentals of the desktop version of the LFDCS, including the generation of pitch tracks (contour lines that trace tonal sounds), discriminant function analysis, creating call libraries, browsing/exporting autodetections and analysis results, and species analysis protocols. An explanation of recent updates to the installation process and accessibility of the LFDCS will be provided. Interested participants can contact Julianne.Wilder@noaa.gov with their intended use of the LFDCS so the tutorial can focus on these parts of the program.
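As a rough, toy illustration of what a pitch track represents (this is not the LFDCS algorithm; the synthetic upsweep, frame length, and simple peak-picking are our own simplifications), the idea is to trace the dominant frequency of a tonal sound through time:

```python
import numpy as np

fs = 2000                                  # sample rate (Hz); low-frequency band
t = np.arange(0, 2.0, 1 / fs)
# Synthetic 50 -> 150 Hz upsweep standing in for a baleen whale tonal call.
x = np.sin(2 * np.pi * (50 * t + 25 * t**2))

# Toy "pitch track": the peak frequency of each short spectrogram frame.
frame = 256
freqs = np.fft.rfftfreq(frame, 1 / fs)
track = [
    freqs[np.argmax(np.abs(np.fft.rfft(x[s:s + frame] * np.hanning(frame))))]
    for s in range(0, len(x) - frame, frame)
]
# track rises from roughly 50 Hz toward roughly 150 Hz across the call
```

The LFDCS builds far more robust pitch tracks than this and then classifies them, using a discriminant function analysis against a library of species-specific call exemplars.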
Additionally, we will cover the use of the LFDCS to detect and classify baleen whale vocalizations transmitted via satellite in near real-time by autonomous buoys and ocean gliders equipped with the programmable digital acoustic monitoring (DMON) instrument.
Participants will have access to the LFDCS Reference Guide and receive interactive training in the real-time analysis protocol used to analyze the data from these deployments that are uploaded to a public website, Robots4Whales (robots4whales.whoi.edu). The uses of the real-time system include monitoring shipping lanes, fishing grounds, migratory hotspots, and aiding visual surveys to mitigate ship strikes, fishing gear entanglements, and exposure to construction noise associated with the rapidly expanding offshore wind industry. Analyst-confirmed detections help improve conservation efforts by providing scientists, industries, and the public with near real-time information on whale presence.
Currently, each participant should bring their own Mac laptop/charger, hard drive with acoustic data, and have IDL (with a full license) and the LFDCS program downloaded onto their laptop.
We are working on an update to the LFDCS that may support other computer platforms (PC or Linux) and require only an IDL runtime license; details on the required computer and software licensing will be sent to interested participants closer to the workshop. Prior to the workshop, participants would benefit from familiarizing themselves with the LFDCS Reference Guide (available at https://repository.library.noaa.gov/view/noaa/48671), which contains installation instructions, and from reading the following paper, which describes the LFDCS: Baumgartner, M.F. and S.E. Mussoline. 2011. A generalized baleen whale call detection and classification system. Journal of the Acoustical Society of America 129:2889-2902 (available at robots4whales.whoi.edu).
The DE part: marine mammal density estimation from passive acoustic monitoring data
- Email: email@example.com
- Workshop Organizer's Name: Tiago Marques, Danielle Harris, Len Thomas
- Organization: Center for Research into Ecological and Environmental Modelling (CREEM), University of St Andrews
- Contact Phone Number: +44 7872 419039 (Len Thomas)
- Duration: Full day; half-day participation is also possible for participants who either (1) just want to learn the basics (morning only) or (2) already know the concepts and wish to participate just in the case study discussions (afternoon only).
- Modality: In-person; online attendance also possible
- Anticipated Number of Participants: Unknown; at previous DCLDE workshops we have had between 20 and 40.
- Participant Pre-Requisites: None
PAM data are increasingly being used to estimate density of marine mammal populations. However, few studies have been built from the ground up with that ultimate goal in mind. In this workshop, we will introduce what is required to obtain reliable density estimates from PAM and to quantify uncertainty in these estimates. We will consider when studies not designed for density estimation can nonetheless be used and how this can be done.
In the morning we will introduce the concepts and analysis methods. We will cover spatial survey design of sensors, what detection unit (“cue”) to target (individual call/click, song unit/click train, animal, group), and the three main components of density estimation: detectability, false positive rate, and cue production rate. We will describe methods for estimating detectability, including distance sampling and spatial capture-recapture.
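For orientation, these components combine in the cue-counting form of the estimator roughly as D = n(1 − c) / (K πw² P T r), with n detected cues, c the false positive proportion, K sensors each monitoring a circle of truncation radius w, P the average detection probability within w, T the monitoring time, and r the cue production rate. A minimal sketch, with all input values invented for illustration:

```python
import math

def cue_density(n_detected, false_pos_prop, n_sensors, trunc_radius_m,
                p_detect, time_s, cue_rate_per_s):
    """Cue-counting density estimate, in animals per square metre.

    D = n * (1 - c) / (K * pi * w**2 * P * T * r)
    """
    monitored_area = n_sensors * math.pi * trunc_radius_m**2
    cues_per_animal = cue_rate_per_s * time_s
    return n_detected * (1 - false_pos_prop) / (monitored_area * p_detect * cues_per_animal)

# Illustrative numbers: 10,000 detected clicks over 24 h on one recorder,
# 20% false positives, detection probability 0.3 within 4 km, 0.5 clicks/s.
d = cue_density(10_000, 0.2, 1, 4_000, 0.3, 86_400, 0.5)
print(d * 1e9)   # animals per 1000 km^2 (1000 km^2 = 1e9 m^2); ~12 here
```

In practice each of c, P, and r must itself be estimated, and the uncertainty in all three has to be propagated into the final density estimate.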
In the afternoon, we will conduct some discussion-based group exercises. Participants will be split into small groups and presented with realistic monitoring scenarios and asked to design an acoustic survey to estimate density, or determine how to estimate density from an existing study. Groups will present a summary of their case study in plenary at the end of this session. We will also have time to demonstrate software that can be used for density analyses (in R).
Participants are encouraged to propose case studies to be used in the workshop. You are welcome to attend for the full or half day, although knowledge of the morning material will be assumed in the afternoon.
PAMGuard: New features and any other questions
- Email: firstname.lastname@example.org
- Workshop Organizer's Name: Douglas Gillespie
- Organization: University of St Andrews, Scotland
- Contact Phone Number:
- Duration: Full day / two half days. Participants may choose to attend only the Tethys or the Deep Learning section; we will fit the Batch processing section into either half depending on demand.
- Modality: In-person
- Anticipated Number of Participants: 15-25
- Participant Pre-Requisites: Familiarity with PAMGuard desirable.
Three new features are under development in PAMGuard.
- Integration with the Tethys database (https://tethys.sdsu.edu/). Tethys is an open-source temporal-spatial database for metadata related to acoustic recordings. The database allows the user to perform meta-analyses or to aggregate data from many experimental efforts based on a common attribute. The resulting database can then be queried based on time, space, or any desired attribute, and the results can be integrated with external datasets such as NASA’s Ocean Color, lunar illumination, etc. in a consistent manner.
- The Deep Learning module in PAMGuard can directly call classification models developed in languages such as Python. This combines the power of modern machine learning algorithms with the real-time operation, user interface, and human data validation offered by PAMGuard. We will demonstrate how to set up the Deep Learning module, how to import models, run them, and view the results.
- Batch processing of multiple datasets. No more frustrations setting up the same tasks on multiple similar data sets. Set up once and use the batch processor to run a dozen jobs over the weekend.
We will describe and demonstrate the use of these new features and provide examples for participants to try out on their own computers. All of these features are still under development, so we will be soliciting feedback from participants to guide ongoing work. The three features are largely independent of one another, so please indicate if you are only interested in one or two, or wish to attend other workshops, and we will try to schedule accordingly.
Array Beamforming for Passive Acoustic Monitoring
- Email: email@example.com, firstname.lastname@example.org
- Workshop Organizer's Name: Dr. Vince Premus, Dr. Peter Beerens
- Organization: OASIS, Inc. A Wholly-Owned Subsidiary of ThayerMahan, Inc. and Dutch Organisation for Applied Scientific Research (TNO)
- Contact Phone Number: +1.978.877.7580
- Duration: Half day
- Modality: In-person
- Anticipated Number of Participants: 15-25
- Participant Pre-Requisites: Experience with FFT-based data analysis and acoustic measurements
Coherently beamformed towed hydrophone arrays exhibit significant potential for advanced real-time passive acoustic monitoring of endangered whale species. The frequency band below 500 Hz, where many whales are acoustically active, is dominated by radiated noise from ships. This results in an ambient noise spectrum that is strongly anisotropic, with levels elevated relative to the natural ambient noise produced by wind-wave interaction and biologics. As long as spatial coherence is supported by the environment, coherently beamformed arrays offer two distinct advantages over single hydrophones: (1) spatial rejection of shipping and wind-driven ambient noise through beamforming, which increases detection range and thus area coverage, and (2) spatial resolution, which delivers the capacity to resolve bearing and thus track and localize acoustic sources.
In this workshop, we will review the physical foundation of array beamforming, the concept of a plane-wave signal model, and the meaning of spatial coherence. The conventional beamformer (CBF) will be developed from first principles, and both time- and frequency-domain implementations will be discussed. The dependence of the beam response on array length and sensor spacing will be characterized in terms of spatial resolution and sidelobe response. Empirical results from array deployments off the U.S. North Atlantic coast and northern Norway will be used to illustrate measured ambient noise distributions. Methods to quantify the performance benefit of arrays through the employment of calibrated source operations and signal processing metrics such as array gain, signal gain degradation, and detection range will also be presented. The workshop will conclude with a discussion of metrics for DCLDE and the factors influencing the operating point (Pd, Pfa) of a receiver for acoustic detection of baleen whales and of higher-frequency vocalizing and echolocating odontocetes, such as killer whales, pilot whales, and beaked whales.
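A minimal, frequency-domain illustration of the conventional beamformer (a narrowband, noise-free sketch with invented parameters, not the presenters' implementation):

```python
import numpy as np

c, f = 1500.0, 200.0                   # sound speed (m/s) and analysis frequency (Hz)
N = 32                                 # number of hydrophones
d = c / f / 2                          # half-wavelength spacing avoids grating lobes
pos = np.arange(N) * d                 # element positions along the line array
theta_true = np.deg2rad(60.0)          # arrival angle measured from the array axis

# Narrowband plane-wave snapshot across the array (unit amplitude, no noise).
k = 2 * np.pi * f / c
x = np.exp(1j * k * pos * np.cos(theta_true))

# Conventional (delay-and-sum) beamformer: phase-align the elements for each
# candidate steering angle and measure the output power; the peak gives bearing.
angles = np.deg2rad(np.arange(0.0, 180.5, 0.5))
steer = np.exp(1j * k * np.outer(np.cos(angles), pos))
power = np.abs(steer.conj() @ x) ** 2 / N**2
bearing = np.rad2deg(angles[np.argmax(power)])   # recovers 60 degrees
```

With noise added, the same scan shows how a long array suppresses anisotropic shipping noise arriving from other bearings; note that for a line array the estimate is a conical angle, with the usual left/right ambiguity.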