Société de Mathématiques Appliquées et Industrielles

12th Math-Indus Meeting - Abstracts


Andy Grieve, Professor, King's College London, and President of the Royal Statistical Society

Can Consultancy Provide the Bridge between Research and the Applied Statistician?

There are a number of roles for consultants within an industrial organisation. They can play the part of “statistical guru" to statisticians involved in projects. They can influence the organisation to consider applications of statistics to new areas, or applying new statistical ideas to existing areas. They may provide the interface between the organisation and outside academic groups. They may have a role in influencing the more general outside environment, including government, as to the appropriate use of statistics. In all of these roles the consultant can provide the bridge between practice and research. In this talk I will illustrate some of these ideas by example.


Marc Buyse, Professor, Universiteit Hasselt, and Chairman, IDDI Consultants

Generalized pairwise comparisons of prioritized outcomes
in the two-sample problem

Marc Buyse, ScD
IDDI, Louvain-la-Neuve and
University of Hasselt, Diepenbeek, Belgium

In this talk, we extend the idea behind the U-statistic of the Wilcoxon-Mann-Whitney test to perform generalized pairwise comparisons between two groups of observations. The observations are the observed outcomes of subjects measured by a single variable, possibly repeatedly measured, or by several variables of any type (discrete, continuous, time to event, etc.). Several outcomes can be considered (or repeated measures of a single outcome), as long as they can be uniquely prioritized. Generalized pairwise comparisons extend well-known non-parametric tests, such as the Wilcoxon-Mann-Whitney test, the Gehan generalized Wilcoxon test, and Fisher's exact test. They naturally lead to a general measure of the difference between the groups, called the 'difference in favorable pairs', which is related to traditional measures of treatment effect such as the absolute risk difference, the effect size or standardized mean difference, the hazard ratio, and the probabilistic index. We will show the wide applicability of generalized pairwise comparisons using data from randomized clinical trials.
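The core of the procedure can be sketched in a few lines: every subject in one group is compared with every subject in the other on the highest-priority outcome first, falling back to the next outcome when a pair is tied, and the 'difference in favorable pairs' is the net proportion of pairs favoring the first group. A minimal illustration (the function name and the fixed-threshold tie rule are ours; the full method also handles censored time-to-event outcomes, which this sketch does not):

```python
from itertools import product

def net_benefit(group_a, group_b, thresholds):
    """Generalized pairwise comparisons on prioritized outcomes.

    group_a, group_b: lists of outcome tuples, one value per prioritized
    outcome, highest priority first.
    thresholds: minimal relevant difference per outcome; a pair is
    'favorable' to A when a - b > threshold, 'unfavorable' when
    b - a > threshold, and otherwise the next outcome breaks the tie.
    Returns the 'difference in favorable pairs' (net benefit).
    """
    favorable = unfavorable = 0
    for a, b in product(group_a, group_b):
        for xa, xb, thr in zip(a, b, thresholds):
            if xa - xb > thr:
                favorable += 1
                break
            if xb - xa > thr:
                unfavorable += 1
                break
    return (favorable - unfavorable) / (len(group_a) * len(group_b))
```

With a single continuous outcome and a threshold of zero, this reduces to the Mann-Whitney statistic rescaled to the interval (-1, 1), ignoring ties.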


François Gavini, Head of Biostatistics at Servier

Drug Safety Assessment and Data Mining

F. Gavini (IRIS, Servier), G. Le Teuff (Keyrus Biopharma)

Since the late 1990s, data mining algorithms have emerged for safety signal detection in large post-marketing surveillance databases (for example, the AERS and WHO pharmacovigilance databases). These techniques are now recognised by regulatory agencies as complementary to case reports and pharmacovigilance expertise for the timely detection of adverse drug reactions. They are being developed and used by health authorities, pharmaceutical companies and academic researchers. Among them, the Proportional Reporting Ratio (PRR), the empirical Bayes Gamma-Poisson Shrinker (GPS), the Bayesian Confidence Propagation Neural Network (BCPNN) and the multi-item GPS are commonly used. They are all based on measures of disproportionate reporting.
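As an illustration, the PRR is the simplest of these disproportionality measures: it compares the proportion of reports mentioning a given event among reports for the drug of interest with the same proportion among all other reports. A minimal sketch (the function name is ours; a, b, c, d are the usual 2x2 contingency-table cells):

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR for a drug-event pair in a spontaneous-reporting database.

    a: reports mentioning both the drug and the event
    b: reports mentioning the drug but not the event
    c: reports mentioning the event but not the drug
    d: reports mentioning neither

    PRR = [a / (a + b)] / [c / (c + d)]; values well above 1 flag the
    event as disproportionately reported for this drug.
    """
    return (a / (a + b)) / (c / (c + d))
```

In practice a signal is usually declared only when the PRR exceeds a threshold together with minimum case counts and a significance criterion, which this sketch omits.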

More recently, health authorities have required sponsor companies to implement a risk management plan, conducted as a continuing process throughout the lifetime of a medicinal product, including the pre-authorisation phase. Although data mining techniques are routinely applied to pharmacovigilance databases, their application to phase I-III clinical databases (which are both smaller and clinically more informative) is only beginning. The increasing efforts of sponsor companies towards early assessment of the potential risks of medicinal products might lead to their regular application to clinical databases and stimulate the development of new techniques. Some tools, such as visual data mining, classification trees, Bayesian networks, neural networks, hierarchical models and random forests, are being proposed as alternatives to the usual descriptive tabulated reports and repeated frequentist inferences.

We will first cover the main data mining techniques used in pharmacovigilance. We will then present the first initiatives applying data mining to safety outcomes in drug development, and detail an application in oncology.

References:

Eudravigilance Expert Working Group (EV-EWG). EMEA/106464/2006 rev. 1. 2008.

Guideline on risk management systems for medicinal products for human use. EMEA/CHMP/96268/2005.

Stephenson WP, Hauben M. Data mining for signals in spontaneous reporting databases: proceed with caution. Pharmacoepidemiology and Drug Safety. 2007; 16: 359-365.

Southworth H, O'Connell M. Data mining and statistically guided clinical review of adverse event data in clinical trials. Journal of Biopharmaceutical Statistics. 2009; 19: 803-817.


France Mentré, Professor, Director of the INSERM Unit U738 on models and methods for treatment evaluation

Nonlinear mixed effects models for the analysis and design of bioequivalence/biosimilarity studies.

Authors: France Mentré*, Anne Dubois, Thu-Thuy Nguyen, Caroline Bazzoli

Institution: UMR738, University Paris Diderot and INSERM, Paris, France

Nonlinear mixed effects models (NLMEM) can be used to analyse crossover pharmacokinetic bioequivalence or biosimilarity studies, as an alternative to standard non-compartmental analysis, especially for trials in patients. Data should be modelled in one step, with both inter- and intra-patient random effects, using an appropriate estimation method such as the SAEM algorithm implemented in the software MONOLIX. We extended the Wald and likelihood ratio tests of treatment effect to test equivalence and studied their properties by simulation. Before the modelling step, it is important to define an appropriate design, which has an impact on the precision of parameter estimates and on the power of the tests. We propose an extension of the evaluation of the Fisher information matrix for NLMEM including within-subject variability in addition to between-subject variability, using a first-order expansion of the model. We also include fixed effects for covariates such as treatment, period and sequence, which are usually tested in these crossover trials. We use the predicted standard errors to predict the power of the Wald test for difference or for bioequivalence and to compute the number of subjects needed. These extensions are implemented in the newly released version 3.2 of PFIM and were evaluated by simulations.
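As a rough illustration of the testing step, a Wald-based equivalence test amounts to two one-sided tests that the estimated treatment effect on the log scale lies within the usual regulatory limits (log 0.8, log 1.25). A sketch under a normal-approximation assumption (the function name and the use of Python's statistics module are ours; in the abstract the effect estimate comes from SAEM and the standard error can be predicted from the Fisher information matrix, as in PFIM):

```python
from math import log
from statistics import NormalDist

def wald_equivalence_test(beta_hat, se, alpha=0.05,
                          lower=log(0.8), upper=log(1.25)):
    """Two one-sided Wald tests (TOST) for bioequivalence.

    beta_hat: estimated treatment effect on the log scale
              (e.g. the treatment coefficient on log AUC)
    se: its standard error
    Equivalence is concluded when both one-sided tests reject,
    i.e. the effect is shown to lie within (lower, upper).
    """
    z = NormalDist().inv_cdf(1 - alpha)
    t_lower = (beta_hat - lower) / se  # H0: beta <= log 0.8
    t_upper = (beta_hat - upper) / se  # H0: beta >= log 1.25
    return t_lower > z and t_upper < -z
```

Note how a larger standard error can make the test fail even with a point estimate of zero, which is why design evaluation and sample-size computation matter before the trial is run.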


Mickael Guedj, PhD, biostatistician at the Ligue contre le cancer

Statistics, Genetics and Medical Applications

The pharmaceutical industry has always aimed to develop innovative approaches leading to new diagnostic, prognostic and therapeutic products.
The growing number of new high-throughput genomics technologies generates a considerable amount of data, offering the opportunity to explore living systems at a previously unattainable scale. In particular, 'expression' and 'genotyping' microarrays are the most widespread and most widely exploited of these technologies for characterizing how cells function. Today, the biological study of a large number of human pathologies (cancers, Alzheimer's disease, ...) therefore relies on this molecular characterization, which considerably advances our understanding of etiological mechanisms and the development of new therapies. The volume and diversity of the data that can now be generated call for the development and application of suitable analysis strategies.
In this talk, we will discuss the main statistical problems raised by the analysis of genomic data, and a few concrete cases where genomics has led to significant advances in health.


Gilbert MacKenzie
Centre of Biostatistics, Department of Mathematics & Statistics, The University of Limerick,
Limerick, Ireland and ENSAI, Bruz, France

A Decade of Joint Mean-Covariance Modelling: What has the Industry Learned?

Abstract

The conventional approach to modelling longitudinal RCT data places considerable emphasis on estimation of the mean structure and much less on the covariance structure between repeated measurements on the same subject. Often, the covariance structure is thought to be a 'nuisance parameter', or at least not to be of primary 'scientific interest', and little effort is expended on elucidating its structure. In particular, the idea that intervention might affect the covariance structure rather than, or as well as, the mean rarely intrudes.

A decade on, we shall argue that these ideas are rather passé and that from an inferential standpoint the problem is symmetrical in both parameters µ and Σ. Throughout, we will distinguish carefully between joint estimation, which is now relatively routine, and joint model selection, which is not.

At first sight the task of estimating the structure of Σ from the data, rather than from a pre-specified menu, may seem daunting, whence the idea of searching the entire covariance model space, C, for Σ may seem prohibitive. Thus, the final demand that we conduct a simultaneous search of the Cartesian product of the mean-covariance model space, M x C, may seem impossible. However, below, we shall accomplish all three tasks elegantly for a particular, but very general, class of covariance structures, C*, defined below.

The technique is based on a modified Cholesky decomposition of the usual marginal covariance matrix Σ(t, θ), where t represents time and θ is a low-dimensional vector of parameters describing dependence on time. The decomposition leads to a reparametrization, Σ(t, ς, φ), in which the new parameters have an obvious statistical interpretation in terms of the natural logarithms of the innovation variances, ς, and the autoregressive coefficients, φ. These unconstrained parameters are modelled, parsimoniously, as different polynomial functions of time.
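The modified Cholesky decomposition is easy to compute from an ordinary Cholesky factorization: if Σ = CCᵀ with C lower triangular, then T = D^(1/2) C⁻¹ is unit lower triangular and TΣTᵀ = D, where the negated sub-diagonal of T holds the autoregressive coefficients φ and D holds the innovation variances. A NumPy sketch (the function name is ours):

```python
import numpy as np

def modified_cholesky(sigma):
    """Pourahmadi's modified Cholesky decomposition: T @ sigma @ T.T = D.

    sigma: covariance matrix of the ordered repeated measures.
    Returns (phi, log_innov): phi holds the autoregressive coefficients
    (the negated sub-diagonal of T) and log_innov the log innovation
    variances -- both unconstrained, so they can be modelled freely,
    e.g. as polynomials in time.
    """
    c = np.linalg.cholesky(sigma)        # sigma = c @ c.T, c lower triangular
    d_sqrt = np.diag(np.diag(c))         # square roots of innovation variances
    t = d_sqrt @ np.linalg.inv(c)        # unit lower triangular
    phi = -np.tril(t, k=-1)              # autoregressive coefficients
    log_innov = 2 * np.log(np.diag(c))   # log innovation variances
    return phi, log_innov
```

For an AR(1) covariance, for example, the recovered phi has the lag-1 correlation on its first sub-diagonal and zeros elsewhere, reflecting the Markov structure of the process.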

In this talk we trace the history of the development of joint mean-covariance modelling over the last decade, from Pourahmadi's seminal paper in 1999 to recent times, discuss current research in this paradigm and remark on the lack of impact on trial design and analysis.

Key References

Pan, J. X. and MacKenzie, G. (2003). On modelling mean-covariance structures in longitudinal studies. Biometrika 90, 239-244.

Pourahmadi, M. (1999). Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. Biometrika 86, 677-90.

