Title: Modeling Individual-Specific Fish Length from Capture–Recapture Data Using the von Bertalanffy Growth Curve
Summary

We use Bayesian methods to explore fitting the von Bertalanffy length model to tag-recapture data. We consider two popular parameterizations of the von Bertalanffy model: the first models the data relative to age at first capture; the second models growth in terms of length at first capture. Using data from a study of rainbow trout Oncorhynchus mykiss, we explore the relationship between the assumptions and the resulting inference using posterior predictive checking, cross-validation, and a simulation study. We find that untestable hierarchical assumptions placed on the nuisance parameters of each model can influence the resulting inference about the parameters of interest. Researchers should carefully consider these assumptions when modeling growth from tag-recapture data.
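
The two parameterizations have standard closed forms. As a minimal sketch (illustrative values only, not the rainbow trout analysis), the age-based form gives expected length L(t) = L∞(1 − e^(−k(t − t0))), while the length-at-first-capture (Fabens-type increment) form predicts length at recapture from length at first capture and time at liberty:

```python
import numpy as np

def vb_length_at_age(age, L_inf, k, t0):
    """Age-based von Bertalanffy curve: expected length at a given age."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

def vb_recapture_length(L1, dt, L_inf, k):
    """Fabens-type increment form: expected length at recapture, given
    length L1 at first capture and time at liberty dt."""
    return L1 + (L_inf - L1) * (1.0 - np.exp(-k * dt))

# Hypothetical parameter values, not estimates from the paper
L_inf, k, t0 = 500.0, 0.35, -0.5                  # mm, 1/yr, yr
print(vb_length_at_age(3.0, L_inf, k, t0))        # expected length at age 3
print(vb_recapture_length(300.0, 1.5, L_inf, k))  # recapture length after 1.5 yr
```

In the first form each fish's unknown age at first capture is a nuisance parameter; in the second it is the unknown length at first capture, which is why the hierarchical assumptions placed on those quantities matter.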

 
NSF-PAR ID: 10484470
Author(s) / Creator(s): ; ;
Publisher / Repository: Oxford University Press
Date Published:
Journal Name: Biometrics
Volume: 69
Issue: 4
ISSN: 0006-341X
Format(s): Medium: X
Size(s): p. 1012-1021
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    The estimation of demographic parameters is a key component of evolutionary demography and conservation biology. Capture–mark–recapture methods have served as a fundamental tool for estimating demographic parameters. The accurate estimation of demographic parameters in capture–mark–recapture studies depends on accurate modeling of the observation process. Classic capture–mark–recapture models typically model the observation process as a Bernoulli or categorical trial with detection probability conditional on a marked individual's availability for detection (e.g., alive, or alive and present in a study area). Alternatives to this approach are underused, but may have great utility in capture–recapture studies. In this paper, we explore a simple concept: in the same way that counts contain more information about abundance than simple detection/non-detection data, the number of encounters of individuals during observation occasions contains more information about the observation process than detection/non-detection data for individuals during the same occasion. Rather than using Bernoulli or categorical distributions to estimate detection probability, we demonstrate the application of zero-inflated Poisson and gamma-Poisson distributions. The use of count distributions allows for inference on availability for encounter, as well as a wide variety of parameterizations for heterogeneity in the observation process. We demonstrate that this approach can accurately recover demographic and observation parameters in the presence of individual heterogeneity in detection probability and discuss some potential future extensions of this method.
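
    As a concrete sketch of the count-based idea (my illustration, not the authors' code), the following simulates zero-inflated Poisson encounter counts with availability probability a and encounter rate λ, then recovers both from the zero fraction and the mean, information a detection/non-detection model throws away:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(1)
    n, avail, lam = 1000, 0.8, 1.5   # assumed values: individuals, availability, rate

    # Zero-inflated Poisson counts: unavailable individuals always yield zeros
    available = rng.random(n) < avail
    counts = np.where(available, rng.poisson(lam, n), 0)

    # Bernoulli-style detection data keep only this reduced summary
    detected = counts > 0

    # Method-of-moments recovery from P(0) = (1 - a) + a*exp(-lam), E[count] = a*lam
    p0, m = np.mean(counts == 0), counts.mean()

    def zero_gap(l):
        return (1.0 - m / l) + (m / l) * np.exp(-l) - p0

    lam_hat = brentq(zero_gap, m, 50.0)   # a = m/lam must lie in (0, 1], so lam >= m
    a_hat = m / lam_hat
    print(round(a_hat, 2), round(lam_hat, 2))   # approximately 0.8 and 1.5
    ```

    A Bernoulli model sees only `detected` and so confounds availability with encounter rate; the counts separate the two, which is the point the abstract makes.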

     
  2. Abstract

    Integrated population models (IPMs) have become increasingly popular for the modelling of populations, as investigators seek to combine survey and demographic data to understand the processes governing population dynamics. These models are particularly useful for identifying and exploring knowledge gaps within life histories, because they allow investigators to estimate biologically meaningful parameters, such as immigration or reproduction, that were previously unidentifiable without additional data. As IPMs have been developed relatively recently, there is much to learn about model behaviour. The behaviour of parameters, such as estimates near boundaries, and the consequences of varying degrees of dependency among datasets have been explored. However, the reliability of parameter estimates remains underexamined, particularly when models include parameters that are not identifiable from one data source but are indirectly identifiable from multiple datasets and a presumed model structure, such as the estimation of immigration using capture–recapture, fecundity and count data combined with a life-history model.

    To examine the behaviour of model parameter estimates, we simulated stable populations closed to immigration and emigration. We simulated two scenarios that might induce error into survival estimates: marker-induced bias in the capture–mark–recapture data and heterogeneity in the mortality process. We subsequently fit capture–mark–recapture, state-space and fecundity models, as well as IPMs that estimated additional parameters.

    Simulation results suggested that when model assumptions are violated, estimates of additional, previously unidentifiable parameters obtained from IPMs may be extremely sensitive to those violations. For example, when annual marker loss was simulated, estimates of survival rates were low and estimates of the immigration rate from an IPM were high. When heterogeneity in the mortality process was induced, there were substantial relative differences between the medians of the posterior distributions and truth for juvenile survival and fecundity.
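
    The direction of the marker-loss bias follows from simple bookkeeping, sketched below with assumed values (not the paper's simulation settings): in a stable population the balance 1 = φ_ad + f·φ_juv + ω must hold, so survival estimates deflated by marker loss force the immigration estimate upward.

    ```python
    # Minimal arithmetic sketch with assumed values (not the paper's settings).
    # Stable population balance: 1 = phi_ad + f * phi_juv + omega
    phi_ad, phi_juv, f = 0.6, 0.3, 1.0            # true survival and fecundity
    omega_true = 1.0 - (phi_ad + f * phi_juv)     # true immigration rate = 0.10

    marker_loss = 0.10                            # annual probability of marker loss
    phi_ad_app = phi_ad * (1 - marker_loss)       # CMR treats lost marks as deaths
    phi_juv_app = phi_juv * (1 - marker_loss)

    # An IPM that trusts the deflated survival estimates must close the balance
    # with immigration, so the immigration estimate absorbs the bias:
    omega_hat = 1.0 - (phi_ad_app + f * phi_juv_app)
    print(round(omega_true, 2), round(omega_hat, 2))   # 0.1 versus 0.19
    ```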

    Our results have important implications for biological inference when using IPMs, as well as for future model development and implementation. Specifically, using multiple datasets to identify additional parameters resulted in the posterior distributions of those parameters directly reflecting the effects of the violations of model assumptions in integrated modelling frameworks. We suggest that investigators interpret posterior distributions of these parameters as a combination of biological process and systematic error.

     
  3. Summary

    Human rights data present challenges for capture–recapture methodology. Lists of violent acts provided by many different groups create large, sparse tables of data for which saturated models are difficult to fit and for which simple models may be misspecified. We analyze data on killings and disappearances in Casanare, Colombia during the years 1998 to 2007. Our estimates differ depending on whether we model marginal reporting probabilities and odds ratios or the full reporting pattern in a conditional (log-linear) model. With 2629 observed killings, a marginal model we consider estimates over 9000 killings, while the conditional models we consider estimate 6000–7000 killings. The latter agree with previous estimates, also from a conditional model. We see a twofold difference between the high sample coverage estimate of over 10,000 killings and the low sample coverage lower-bound estimate of 5200 killings. We use a simulation study to compare marginal and conditional models with at most two-way interactions, as well as sample coverage estimators. The simulation results, together with model selection criteria, lead us to believe that the previous estimates of total killings in Casanare may have been biased downward, suggesting that the violence was worse than previously thought. Model specification is an important consideration when interpreting population estimates from capture–recapture analysis, and the Casanare data are a prototypical example of how such differences manifest.
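
    To make the conditional (log-linear) approach concrete, here is a minimal sketch (fabricated counts, not the Casanare data) of a main-effects-only log-linear fit: a Poisson GLM on the observed cells of a three-list inclusion table, projected to the unobservable (0,0,0) cell to estimate the total population.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Fabricated three-list inclusion patterns (not the Casanare counts).
    # Each row: (in list A, in list B, in list C); counts of individuals.
    patterns = np.array([
        [1, 0, 0], [0, 1, 0], [0, 0, 1],
        [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
    ])
    counts = np.array([300, 250, 200, 60, 40, 30, 10])

    # Main-effects log-linear (independence) model on the 7 observed cells
    X = sm.add_constant(patterns.astype(float))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

    # The intercept is log E[count] at (0,0,0): project the unobserved cell
    n_000 = np.exp(fit.params[0])
    N_hat = counts.sum() + n_000
    print(round(N_hat))
    ```

    Richer conditional models add interaction terms for pairs of lists; the abstract's point is that the estimate of the unobserved cell, and hence the total, can change substantially with that specification.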

     
  4. Abstract

    In 1958, Sidney Holt developed a model to determine the optimal mass at which to harvest a cohort of fish having von Bertalanffy growth and experiencing constant natural mortality. Holt and Ray Beverton then gave a life-history interpretation to the analysis, from which Beverton developed a theory of Growth, Maturity, and Longevity (GML) that allows one to predict quantities such as age at maturity or relative size at maturity using life-history parameters. I extend their results in two ways. First, keeping the original formulation, in which the rate of natural mortality is constant, I show how one can invert Beverton's result to determine the rate of natural mortality from life-history data. I illustrate this inverse method with data on three species of tuna and compare the estimates with those based on tagging. Second, I extend Beverton's GML theory to include size-dependent mortality. I explore previously published mortality models and introduce a new mortality function that has size-independent and size-dependent components. I show that the new size-dependent mortality function leads to the prediction that age at maturity depends upon asymptotic size (as well as the other life-history parameters), something that Beverton's original theory lacked. I illustrate this extension with a simple example, discuss directions for future work and conclude that, nearly 60 years on, these contributions of Holt and Beverton continue to lead us in new and exciting directions.
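
    The calculation behind these results can be stated compactly; the following is a standard derivation consistent with the abstract, not an excerpt from the paper. With von Bertalanffy growth, cubic length-weight allometry, and constant natural mortality M, cohort biomass peaks at a relative length of 3/(3 + M/k), and identifying that size with relative size at maturity inverts to an estimator of M:

    ```latex
    % Cohort biomass under von Bertalanffy growth and constant mortality M:
    B(t) = N_0 \, e^{-Mt} \, W_\infty \left(1 - e^{-kt}\right)^{3}
    % Maximizing B(t) gives Holt's optimum:
    \frac{dB}{dt} = 0 \;\Longrightarrow\; e^{-kt^{*}} = \frac{M}{M + 3k},
    \qquad \frac{L^{*}}{L_\infty} = \frac{3}{3 + M/k}
    % Identifying L^{*} with length at maturity L_m and solving for M:
    M = 3k \left( \frac{L_\infty}{L_m} - 1 \right)
    ```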

     
  5. Abstract

    Valid surrogate endpoints S can be used as a substitute for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach to validating a surrogate that incorporates longitudinal measurements of the true outcome via mixed models, and we define models and quantities for validation that may vary across the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment efficacy curve that allows us to validate the surrogate at different time points. We extend these methods to accommodate a delayed-start treatment design in which all patients eventually receive the treatment. Not all parameters are identified in the general setting. We apply a Bayesian approach for estimation and inference, utilizing more informative prior distributions for selected parameters. We consider the sensitivity of results to these prior assumptions, as well as to assumptions of independence among certain counterfactual quantities, conditional on pretreatment covariates, that are used to improve identifiability. We examine the frequentist properties (bias of point and variance estimates, credible interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy in which functional outcomes are measured repeatedly throughout the trial.
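
    As a toy illustration of the surrogate-dependent treatment efficacy curve (fabricated data-generating values; the paper's longitudinal models are much richer), one can simulate counterfactual surrogates and outcomes and examine the expected effect on T as a function of the effect on S:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 5000

    # Counterfactual surrogates under control and treatment (hypothetical values)
    s0 = rng.normal(0.0, 1.0, n)
    s1 = s0 + rng.normal(1.0, 0.5, n)        # treatment shifts the surrogate

    # Counterfactual true outcomes driven by the surrogates plus noise
    t0 = 0.8 * s0 + rng.normal(0.0, 0.5, n)
    t1 = 0.8 * s1 + rng.normal(0.0, 0.5, n)

    # Surrogate-dependent treatment efficacy: E[T(1)-T(0) | S(1)-S(0) = s].
    # For a good principal surrogate the curve passes near the origin: no
    # effect on the surrogate should mean no effect on the true outcome.
    delta_s, delta_t = s1 - s0, t1 - t0
    slope, intercept = np.polyfit(delta_s, delta_t, 1)
    print(f"E[T(1)-T(0) | dS = s] ~ {intercept:.2f} + {slope:.2f} * s")
    ```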

     