Idéalisation d'assemblages CAO pour l'analyse EF de
structures
Flavien Boussuge
To cite this version:
Flavien Boussuge. Idéalisation d'assemblages CAO pour l'analyse EF de structures. Other.
Université de Grenoble, 2014. French.
HAL Id: tel-01071560
https://tel.archives-ouvertes.fr/tel-01071560
Submitted on 6 Oct 2014
THÈSE
Pour obtenir le grade de
DOCTEUR DE L'UNIVERSITÉ DE GRENOBLE
Spécialité : Mathématiques et Informatique
Arrêté ministériel : 7 août 2006
Présentée par
Flavien Boussuge
Thèse dirigée par Jean-Claude Léon
et codirigée par Stefanie Hahmann
préparée au sein du Laboratoire Jean Kuntzmann - INRIA Grenoble, et
AIRBUS Group Innovations
et de l'École Doctorale Mathématiques, Sciences et Technologies de
l'Information, Informatique
Idealization of CAD assemblies for
FE structural analyses
Idéalisation d'assemblages CAO
pour l'analyse EF de structures
Thèse soutenue publiquement le TBD,
devant le jury composé de :
Prof., Cecil Armstrong
Queen's University Belfast, Rapporteur
Dr., Bruno Lévy
Directeur de recherche, INRIA Nancy, Rapporteur
Dr., Lionel Fine
AIRBUS Group Innovations, Suresnes, Examinateur
Prof., Jean-Philippe Pernot
Arts et Métiers ParisTech, Aix-en-Provence, Examinateur
Prof., Jean-Claude Léon
INP-Grenoble, ENSE3, Directeur de thèse
Prof., Stefanie Hahmann
INP-Grenoble, ENSIMAG, Co-Directeur de thèse
M., Nicolas Chevassus
AIRBUS Group Innovations, Suresnes, Invité
M., François Guillaume
AIRBUS Group Innovations, Suresnes, Invité
The research described in this thesis was carried out at the Laboratoire Jean Kuntzmann
(LJK-INRIA), research team Imagine, and in the SPID team of AIRBUS Group Innovations.
This work was supported by a CIFRE convention of the ANRT and the French
Ministry of Higher Education and Research.
© 2014, F. Boussuge, all rights reserved.
Idealization of CAD assemblies for FE structural analyses
Abstract
Aeronautical companies face a significant increase in the complexity and size of simulation
models, especially at the level of assemblies, i.e., sub-systems of their complex products.
Pre-processing of Computer Aided Design (CAD) models derived from the digital
representation of sub-systems, i.e., Digital Mock-Ups (DMUs), into Finite Element
Analysis (FEA) models usually requires many tedious manual tasks of model preparation
and shape transformation, in particular when idealizations of components or
assemblies have to be produced. Therefore, the purpose of this thesis is to contribute
to the robust automation of the time-consuming sequences of assembly
preparation processes.
Starting from a DMU enriched with geometric interfaces between components
and functional properties, the proposed approach takes DMU enrichment to the next
level by structuring the components' shapes. This approach extracts a construction graph
from B-Rep CAD models so that the corresponding generative processes provide meaningful
volume primitives for idealization applications. These primitives form the basis
of a morphological analysis which identifies, in the components' shapes, the sub-domains
for idealization and their associated geometric interfaces. Subsequently, the models
of components as well as their geometric representations are structured in an enriched
DMU which is contextualized for FEA applications.
Based on this enriched DMU, simulation objectives can be used to specify geometric
operators that can be robustly applied to automate the shape transformations of components
and interfaces during an assembly preparation process. A new idealization process
for standalone components is proposed that benefits from the decomposition into
sub-domains and geometric interfaces provided by the morphological analysis of
the component. Interfaces between sub-domains are evaluated to robustly process the
connections between the idealized sub-domains, leading to the complete idealization of
the component.
Finally, the scope of the idealization process is extended to shape transformations
at the assembly level and evolves toward a methodology of assembly pre-processing.
This methodology aims at exploiting the functional information of the assembly and interfaces
between components to perform transformations of groups of components and
assembly idealizations. In order to prove the applicability of the proposed methodology,
corresponding operators are developed and successfully tested on industrial use-cases.
Keywords : Assembly, DMU, idealization, CAD-CAE integration, B-Rep model,
generative shape process, morphological analysis
Idéalisation d'assemblages CAO pour l'analyse EF de
structures
Résumé
Les entreprises aéronautiques ont un besoin continu de générer de grands et complexes
modèles de simulation, en particulier pour simuler le comportement structurel de sous-systèmes
de leurs produits. Actuellement, le pré-traitement des modèles de Conception
Assistée par Ordinateur (CAO) issus des maquettes numériques de ces sous-systèmes en
Modèles Éléments Finis (MEF) est une tâche qui demande de longues heures de travail
de la part des ingénieurs de simulation, surtout lorsque des idéalisations géométriques
sont nécessaires. L'objectif de ce travail de thèse consiste à définir les principes et
les opérateurs constituant la chaîne numérique qui permettra, à partir de maquettes
numériques complexes, de produire des géométries directement utilisables pour
la génération de maillages éléments finis d'une simulation mécanique.
À partir d'une maquette numérique enrichie d'information sur les interfaces
géométriques entre composants et d'information sur les propriétés fonctionnelles de
l'assemblage, l'approche proposée dans ce manuscrit est d'ajouter un niveau
supplémentaire d'enrichissement en fournissant une représentation structurelle de haut
niveau de la forme des composants CAO. Le principe de cet enrichissement est d'extraire
un graphe de construction de modèles CAO B-Rep de sorte que les processus de
génération de forme correspondants fournissent des primitives volumiques directement
adaptées à un processus d'idéalisation. Ces primitives constituent la base d'une analyse
morphologique qui identifie dans les formes des composants à la fois des sous-domaines
candidats à l'idéalisation mais également les interfaces géométriques qui leur sont associées.
Ainsi, les modèles de composants et leurs représentations géométriques sont
structurés. Ils sont intégrés dans la maquette numérique enrichie qui est ainsi contextualisée
pour la simulation par EF.
De cette maquette numérique enrichie, les objectifs de simulation peuvent être
utilisés pour spécifier les opérateurs géométriques adaptant les composants et leurs
interfaces lors de processus automatiques de préparation d'assemblages. Ainsi, un
nouveau procédé d'idéalisation de composant unitaire est proposé. Il bénéficie de
l'analyse morphologique faite sur le composant lui fournissant une décomposition en
sous-domaines idéalisables et en interfaces. Cette décomposition est utilisée pour
générer les modèles idéalisés de ces sous-domaines et les connecter à partir de l'analyse
de leurs interfaces, ce qui conduit à l'idéalisation complète du composant.
Enfin, le processus d'idéalisation est étendu au niveau de l'assemblage et évolue
vers une méthodologie de pré-traitement automatique de maquettes numériques. Cette
méthodologie vise à exploiter l'information fonctionnelle de l'assemblage et les informations
morphologiques des composants afin de transformer à la fois des groupes de
composants associés à une même fonction ainsi que de traiter les transformations
d'idéalisation de l'assemblage. Pour démontrer la validité de la méthodologie, des
opérateurs géométriques sont développés et testés sur des cas d'application industriels.
Mots-clés : Assemblage, Maquette Numérique, intégration CAO-calcul, modèle
B-Rep, graphe de construction, processus génératif de forme, idéalisation
Table of contents
Abstract v
R´esum´e vii
Acronyms xxiii
Introduction 1
Context of numerical certification of aeronautical structures . . . . . . . . . 1
Some limits faced in structural simulations . . . . . . . . . . . . . . . . . . . 2
Work Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1 From a Digital Mock Up to Finite Element Assembly Models: Current
practices 5
1.1 Introduction and definition of the DMU concept . . . . . . . . . . . . 5
1.2 Geometric representation and modeling of 3D components . . . . . . . 7
1.2.1 Categories of geometric families . . . . . . . . . . . . . . . . . . 7
1.2.2 Digital representation of solids in CAD . . . . . . . . . . . . . . 9
1.2.3 Complementary CAD software capabilities: Feature-based and
parametric modeling . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Representation and modeling of an assembly in a DMU . . . . . . . . . 16
1.3.1 Effective DMU content in aeronautical industry . . . . . . . . . 16
1.3.2 Conventional representation of interfaces in a DMU . . . . . . . 19
1.4 Finite Element Analysis of mechanical structures . . . . . . . . . . . . 22
1.4.1 Formulation of a mechanical analysis . . . . . . . . . . . . . . . 22
1.4.2 The required input data for the FEA of a component . . . . . . 24
1.4.3 FE simulations of assemblies of aeronautical structures . . . . . 27
1.5 Difficulties triggering a time consuming DMU adaptation to generate
FE assembly models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.5.1 DMU adaption for FE analyses . . . . . . . . . . . . . . . . . . 31
1.5.2 Interoperability between CAD and CAE and data consistency . 33
1.5.3 Current operators focus on standalone components . . . . . . . 34
1.5.4 Effects of interactions between components over assembly transformations
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.6 Conclusion and limits of current practices about DMU manual adaption
for FE assembly models generation . . . . . . . . . . . . . . . . . . . . 39
1.7 Research objectives: Speed up the DMU pre-processing to reach the
simulation of large assemblies . . . . . . . . . . . . . . . . . . . . . . . 40
2 Current status of procedural shape transformation methods and tools
for FEA pre-processing 43
2.1 Targeting the data integration level . . . . . . . . . . . . . . . . . . . . 43
2.2 Simplification operators for 3D FEA analysis . . . . . . . . . . . . . . . 45
2.2.1 Classification of details and shape simplification . . . . . . . . . 45
2.2.2 Detail removal and shape simplification based on tessellated models 47
2.2.3 Detail removal and shape simplification on 3D solid models . . . 49
2.2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.3 Dimensional reduction operators applied to standalone components . . 53
2.3.1 Global dimensional reduction using the MAT . . . . . . . . . . 53
2.3.2 Local mid-surface abstraction . . . . . . . . . . . . . . . . . . . 55
2.3.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.4 About the morphological analysis of components . . . . . . . . . . . . 58
2.4.1 Surface segmentation operators . . . . . . . . . . . . . . . . . . 58
2.4.2 Solid segmentation operators for FEA . . . . . . . . . . . . . . . 60
2.4.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.5 Evolution toward assembly pre-processing . . . . . . . . . . . . . . . . 64
2.6 Conclusion and requirements . . . . . . . . . . . . . . . . . . . . . . . . 67
3 Proposed approach to DMU processing for structural assembly simulations 69
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.2 Main objectives to tackle . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.3 Exploiting an enriched DMU . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4 Incorporating a morphological analysis during FEA pre-processing . . . 75
3.4.1 Enriching DMU components with their shape structure as needed
for idealization processes . . . . . . . . . . . . . . . . . . . . . . 77
3.4.2 An automated DMU analysis dedicated to a mechanically consistent
idealization process . . . . . . . . . . . . . . . . . . . . . 78
3.5 Process proposal to automate and robustly generate FEMs from an enriched
DMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.5.1 A new approach to the idealization of a standalone component . 81
3.5.2 Extension to assembly pre-processing using the morphological
analysis and component interfaces . . . . . . . . . . . . . . . . . 81
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4 Extraction of generative processes from B-Rep shapes to structure
components up to assemblies 85
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2 Motivation to seek generative processes . . . . . . . . . . . . . . . . . . 86
4.2.1 Advantages and limits of present CAD construction tree . . . . 87
4.2.2 A new approach to structure a component shape: construction
graph generation . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.3 Shape modeling context and process hypotheses . . . . . . . . . . . . . 91
4.3.1 Shape modeling context . . . . . . . . . . . . . . . . . . . . . . 91
4.3.2 Generative process hypotheses . . . . . . . . . . . . . . . . . . . 93
4.3.3 Intrinsic boundary decomposition using maximal entities . . . . 96
4.4 Generative processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.4.1 Overall principle to obtain generative processes . . . . . . . . . 98
4.4.2 Extrusion primitives, visibility and attachment . . . . . . . . . . 100
4.4.3 Primitive removal operator to go back in time . . . . . . . . . . 103
4.5 Extracting the generative process graph . . . . . . . . . . . . . . . . . . 107
4.5.1 Filtering out the generative processes . . . . . . . . . . . . . . . 107
4.5.2 Generative process graph algorithm . . . . . . . . . . . . . . . . 109
4.6 Results of generative process graph extractions . . . . . . . . . . . . . . 110
4.7 Extension of the component segmentation to assembly structure segmentation
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5 Performing idealizations from construction graphs 119
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5.2 The morphological analysis: a filtering approach to idealization processes 120
5.2.1 Morphological analysis objectives for idealization processes based
on a construction graph . . . . . . . . . . . . . . . . . . . . . . 121
5.2.2 Structure of the idealization process . . . . . . . . . . . . . . . . 124
5.3 Applying idealization hypotheses from a construction graph . . . . . . 125
5.3.1 Evaluation of the morphology of primitives to support idealization 126
5.3.2 Processing connections between ‘idealizable’ sub-domains Dij . . 133
5.3.3 Extending morphological analyses of Pi to the whole object M . 137
5.4 Influence of external boundary conditions and assembly interfaces . . . 141
5.5 Idealization processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.5.1 Linking interfaces to extrusion information . . . . . . . . . . . . 146
5.5.2 Analysis of GS to generate idealized models . . . . . . . . . . . 147
5.5.3 Generation of idealized models . . . . . . . . . . . . . . . . . . . 153
5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6 Toward a methodology to adapt an enriched DMU to FE assembly
models 159
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.2 A general methodology to assembly adaptions for FEA . . . . . . . . . 160
6.2.1 From simulation objectives to shape transformations . . . . . . 161
6.2.2 Structuring dependencies between shape transformations as contribution
to a methodology of assembly preparation . . . . . . . 164
6.2.3 Conclusion and methodology implementation . . . . . . . . . . 167
6.3 Template-based geometric transformations resulting from function identifications
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.3.1 Overview of the template-based process . . . . . . . . . . . . . . 169
6.3.2 From component functional designation of an enriched DMU to
product functions . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.3.3 Exploitation of Template-based approach for FE models transformations
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.3.4 Example of template-based operator of bolted junctions transformation
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.4 Full and robust idealization of an enriched assembly . . . . . . . . . . . 183
6.4.1 Extension of the template approach to idealized fastener generation 185
6.4.2 Presentation of a prototype dedicated to the generation of idealized
assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Conclusion and perspectives 191
Bibliography 211
Appendices I
A Illustration of generation processes of CAD components I
A.1 Construction process of an injected plastic part . . . . . . . . . . . . . I
A.2 Construction process of an aeronautical metallic part . . . . . . . . . . I
B Features equivalence IX
C Taxonomy of a primitive morphology XV
D Export to CAE software XIX
List of Figures
1.1 The Digital Mock-Up as the reference representation of a product, courtesy
of Airbus Group Innovations. . . . . . . . . . . . . . . . . . . . . . 6
1.2 Regularized boolean operations of two solids. . . . . . . . . . . . . . . . 9
1.3 CSG and B-Rep representations of a solid. . . . . . . . . . . . . . . . . 10
1.4 Examples of non-manifold geometric models. . . . . . . . . . . . . . . . 12
1.5 CAD construction process using features. . . . . . . . . . . . . . . . . . 15
1.6 Example of an aeronautical CAD assembly: Root joint model (courtesy
of Airbus Group Innovations). . . . . . . . . . . . . . . . . . . . . . . . 16
1.7 Example of complex DMU assembly from Alcas project [ALC08] and
Locomachs project [LOC16]. . . . . . . . . . . . . . . . . . . . . . . . . 18
1.8 Representation of a bolted junction in a structural DMU of an aircraft. 20
1.9 Classification of Conventional Interfaces (CI) under contact, interference
and clearance categories. . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.10 Process flow of a mechanical analysis. . . . . . . . . . . . . . . . . . . . 23
1.11 Example of FE mesh models . . . . . . . . . . . . . . . . . . . . . . . . 24
1.12 Example of a FE fastener simulating the behavior of a bolted junction
using beam elements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.13 Example of aeronautical FE models. . . . . . . . . . . . . . . . . . . . 31
1.14 Illustration of a shim component which does not appear in the DMU
model. Shim component are directly manufacture when structural components
are assembled. . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.15 Illustration of a manual process to generate an idealized model. . . . . 35
1.16 Example of contact model for a FE simulation. . . . . . . . . . . . . . . 38
2.1 Identification of a skin detail . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2 Illustration of the MAT method . . . . . . . . . . . . . . . . . . . . . . 48
2.3 Details removal using the MAT and detail size criteria [Arm94]. . . . . 49
2.4 Topology adaption of CAD models for meshing [FCF∗08]. . . . . . . . . 50
2.5 Illustration of CAD defeaturing using CATIA. . . . . . . . . . . . . . . 51
2.6 Illustration of the mixed dimensional modeling using a MAT [RAF11]. 54
2.7 Illustration of mid-surface abstraction [Rez96]. . . . . . . . . . . . . . . 56
2.8 An example of particular geometric configuration not addressed by facepairs
methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.9 Illustration of different connection models for idealized components. . . 58
2.10 Mesh Segmentation techniques. . . . . . . . . . . . . . . . . . . . . . . 59
2.11 Automatic decomposition of a solid to identify thick/thin regions and
long and slender ones, from Makem [MAR12]. . . . . . . . . . . . . . . 60
2.12 Idealization using extruded and revolved features in a construction tree,
from [RAM∗06]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.13 Divide-and-conquer approach to idealization processes using a maximal
volume decomposition (by Woo [Woo14]). . . . . . . . . . . . . . . . . 62
2.14 Assembly interface detection of Jourdes et al. [JBH∗14]. . . . . . . . . . 65
2.15 Various configurations of the idealization of a small assembly containing
two components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.1 Current process to prepare assembly structures. Each component of the
assembly is transformed individually. . . . . . . . . . . . . . . . . . . . 70
3.2 Structuring a DMU model with functional properties. . . . . . . . . . . 73
3.3 DMU enrichment process with assembly interfaces and component functional
designations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.4 Interactions between simulation objectives, hypotheses and shape transformations.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5 Proposed approach to generate a FEM of an assembly structure from a
DMU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.1 An example of a shape generation process. . . . . . . . . . . . . . . . . 87
4.2 An example of shape analysis and generative construction graph. . . . . 91
4.3 Modeling hypotheses about primitives to be identified in a B-Rep object. 92
4.4 Entities involved in the definition of an extrusion primitive in a B-Rep
solid. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.5 Illustrations of two additive primitives: (a) an extrusion primitive and
(b) a revolution one. The mid-surfaces of both primitives lie inside their
respective volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.6 Example of two possible decompositions into primitives from a solid. . 96
4.7 Pipeline producing and exploiting generative shape processes. . . . . . 97
4.8 Examples of configurations where faces must be merged to produce a
shape-intrinsic boundary decomposition. . . . . . . . . . . . . . . . . . 98
4.9 Overall scheme to obtain generative processes. . . . . . . . . . . . . . . 99
4.10 An example illustrating the successive identification and removal of
primitives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.11 An example illustrating the major steps to identify a primitive and remove
it from an object . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.12 Example of geometric interface . . . . . . . . . . . . . . . . . . . . . . 103
4.13 Example of a collection of primitives identified from a B-Rep object. . . 103
4.14 Illustration of the removal of Pi. . . . . . . . . . . . . . . . . . . . . . . 105
4.15 Illustration of the removal operator for interface of surface type 1a. . . 106
4.16 Illustration of the removal operator for interface of volume type 3. . . . 107
4.17 Illustration of the simplicity concept to filtering out the generative processes.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.18 Criteria of maximal primitives and independent primitives. . . . . . . . 110
4.19 Extraction of generative processes for four different components. . . . . 111
4.20 Result of generative process graph extractions. . . . . . . . . . . . . . . 113
4.21 Illustration of the continuity constraint. . . . . . . . . . . . . . . . . . . 114
4.22 A set of CAD construction trees forming a graph derived from two consecutive
construction graph nodes. . . . . . . . . . . . . . . . . . . . . . 115
4.23 Illustration of the compatibility between the component segmentation (a)
and assembly structure segmentation (b). . . . . . . . . . . . . . . . . . 116
4.24 Insertion of the interface graphs between primitives obtained from component
segmentations into the graph of assembly interfaces between
components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.1 From a construction graph of a B-Rep shape to a full idealized model
for FEA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.2 Global description of an idealization process. . . . . . . . . . . . . . . . 124
5.3 Determination of the idealization direction of extrusion primitives using
a 2D MAT applied to their contour . . . . . . . . . . . . . . . . . . . . 126
5.4 Example of the morphological analysis of a component. . . . . . . . . . 129
5.5 Idealization analysis of components. . . . . . . . . . . . . . . . . . . . . 130
5.6 Illustration of primitives’ configurations containing embedded sub-domains
Dik which can be idealized as beams or considered as details. . . . . . . 131
5.7 Example of a beam morphology associated with a MAT medial edge of
a primitive Pi. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.8 Synthesis of the process to evaluate the morphology of primitives Pi. . . 134
5.9 Taxonomy of connections between extrusion sub-domains. . . . . . . . 135
5.10 Illustration of propagation of the morphological analysis of two primitives. 139
5.11 Propagation of the morphology analysis on Pi to the whole object M. . 140
5.12 Influence of an assembly interface modeling hypothesis over the transformations
of two components . . . . . . . . . . . . . . . . . . . . . . . 141
5.13 Illustration of the inconsistencies between an assembly interface between
two components and its projection onto their idealized representations. 143
5.14 Two possible schemes to incorporate assembly interfaces during the segmentation
process of components. . . . . . . . . . . . . . . . . . . . . . 144
5.15 Illustration of an interface graph derived from the segmentation process
of a component. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.16 Illustration of an interface cycle between primitives. . . . . . . . . . . . 148
5.17 Examples of medial surface positioning improvement. (a) Offset of parallel
medial surfaces, (b) offset of L-shaped medial surfaces. . . . . . . . 150
5.18 Example of identification of a group of parallel medial surfaces and border
primitives configurations. . . . . . . . . . . . . . . . . . . . . . . . 151
5.19 Example of a volume detail configuration lying on an idealized primitive. 152
5.20 Interfaces connection operator . . . . . . . . . . . . . . . . . . . . . . . 154
5.21 Illustration of the idealization process of a component that takes advantage
of its interface graph structures. . . . . . . . . . . . . . . . . . . . 155
5.22 Illustration of the successive phases of the idealization process. . . . . . 156
6.1 Setting up an observation area consistent with simulation objectives. . 162
6.2 Entire idealization of two components. . . . . . . . . . . . . . . . . . . 162
6.3 Transformation of groups of components as analytical models. . . . . . 163
6.4 Influence of interfaces over shape transformations of components. . . . 163
6.5 Synthesis of the structure of an assembly simulation preparation process. 166
6.6 Use-Case 1: Simplified solid model with sub-domains decomposition
around bolted junctions. . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.7 Overview of the main phases of the template-based process. . . . . . . 170
6.8 Subset of TF N , defining a functional structure of an assembly. . . . . . 171
6.9 Principle of the template-based shape transformations. . . . . . . . . . 174
6.10 Compatibility conditions (CC) of shape transformations ST applied to T. 174
6.11 Checking the compatibility of ST (T) with respect to the surrounding
geometry of T. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.12 Multi-scale simulation with domain decomposition around bolted junctions,
(courtesy of ROMMA project [ROM14]). . . . . . . . . . . . . . 177
6.13 Template based transformation ST (T) of a bolted junction into simple
mesh model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.14 User interface of a template to transform ‘assembly Bolted Junctions’. . 181
6.15 Results of template-based transformations on CAD assembly models. . 182
6.16 Idealized surface model with FE fasteners to represent bolted junctions. 183
6.17 Illustration of Task 2: Transformation of bolted junction interfaces into
mesh nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.18 Results of the template-based transformation of bolted junctions. . . . 184
6.19 User interface of the prototype for assembly idealization. . . . . . . . . 187
6.20 Illustration of a component segmentation which extracts extruded volumes
to be idealized in task 3. . . . . . . . . . . . . . . . . . . . . . . . 187
6.21 Illustration of task 4: Identification and transformation of groups of
idealized surfaces connected to the same assembly interfaces. . . . . . . 188
6.22 Final result of the idealized assembly model ready to be meshed in CAE
software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
A.1 Example of a shape generation process 1/5 . . . . . . . . . . . . . . . . II
A.2 Example of a shape generation process 2/5 . . . . . . . . . . . . . . . . III
A.3 Example of a shape generation process 3/5 . . . . . . . . . . . . . . . . IV
A.4 Example of a shape generation process 4/5 . . . . . . . . . . . . . . . . V
A.5 Example of a shape generation process 5/5 . . . . . . . . . . . . . . . . VI
A.6 Example of a shape generation process of a simple metallic component VII
B.1 Examples of Sketch-Based Features . . . . . . . . . . . . . . . . . . . . X
B.2 Examples of Sketch-Based Features . . . . . . . . . . . . . . . . . . . . XI
B.3 Examples of Dress-Up Features . . . . . . . . . . . . . . . . . . . . . . XII
B.4 Examples of Boolean operations . . . . . . . . . . . . . . . . . . . . . . XIII
D.1 Illustration of the STEP export of a Bolted Junction with sub-domains
around screw. (a) Product structure open in CATIA software, (b) associated
xml file containing the association between components and
interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XX
D.2 Illustration of the STEP export of a Bolted Junction. Each component
containing volume sub-domains is exported as STEP assembly. . . . . . XX
D.3 Illustration of the STEP export of a Bolted Junction. Each inner interface
between sub-domains is part of the component assembly. . . . . . . XXI
D.4 Illustration of the STEP export of a Bolted Junction. Each outer interface
between components is part of the root assembly. . . . . . . . . . . XXII
D.5 Illustration of the STEP export of the full Root Joint assembly. . . . . XXIII
List of Tables
1.1 Categories of Finite Elements for structural analyses. . . . . . . . . . . 25
1.2 Connector entities available in CAE software. . . . . . . . . . . . . . . 28
1.3 Examples of interactions or even dependencies between simulation objectives
and interfaces as well as component shapes. . . . . . . . . . . . 30
5.1 Categorization of the morphology of a primitive using a 2D MAT applied
to its contour. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
C.1 Morphology associated with a MAT medial edge of a primitive Pi. 1/2 XVI
C.2 Morphology associated with a MAT medial edge of a primitive Pi. 2/2 XVII
Acronyms
B-Rep Boundary representation. 10–14, 17, 20, 21, 34, 45, 49, 50, 52, 62, 63, 77, 78,
85, 86, 89, 91–93, 96–98, 101, 102, 114, 116–118, 120, 140, 141, 147, 171, 192–195
CAD Computer Aided Design. IX, 2, 3, 5–7, 9–14, 16–19, 21, 24–26, 29, 30, 33–36,
39–41, 43, 44, 47–53, 55, 59, 60, 63, 65, 67, 70, 73, 77–79, 86–93, 97–99, 114, 116,
118–120, 122, 123, 140, 144, 155, 156, 159, 169, 179, 180, 183, 186, 189, 191, 193,
195
CAD-FEA Computer Aided Design to Finite Element Analysis(es). 12, 43, 191–193
CAE Computer Aided Engineering. 11, 12, 23, 25, 26, 33, 35, 37–41, 43–45, 47, 50–53,
67, 77, 123, 156, 165, 180, 189
CSG Constructive Solid Geometry. 9–11, 13, 62, 63
DMU Digital Mock-Up. XIX, 2, 3, 5–8, 11, 12, 16–21, 23, 30–35, 37–41, 43, 64, 67–75,
77–79, 82, 83, 85, 86, 142, 157, 159, 160, 167, 168, 171, 172, 178–180, 183, 189,
191–193, 196
FE Finite Elements. 5, 7, 22, 25, 27–32, 38–41, 44–47, 49, 52, 54, 64, 70, 72, 77, 79,
82, 85, 121–123, 125, 136, 145, 152, 153, 157, 159, 162, 165, 169, 177, 180, 183,
185, 186, 189, 191, 193
FEA Finite Element Analysis(es). 1–3, 5, 7, 12, 22, 28, 30–32, 40, 43, 45, 46, 49, 57,
58, 60, 64, 67–70, 72, 74–79, 83, 85, 87–89, 96, 117, 119–121, 123, 141, 156, 160,
168, 172, 173, 176, 177, 180, 191
FEM Finite Element Models. 1, 22–24, 26, 27, 29, 44, 46, 69–72, 76, 79, 82, 88, 121,
147, 183
KBE Knowledge Based Engineering. 168
MAT Medial Axis Transform. XV, 44, 47–49, 53–57, 60, 126, 128, 131–133, 137, 145,
166, 195
PDMS Product Data Management System. 72
PDP Product Development Process. 1–3, 5–7, 9, 12, 19, 32, 39, 44, 46, 88, 89, 180
PLM Product Lifecycle Management. 6, 16, 21, 32, 72
Introduction
Context of numerical certification of aeronautical structures
Aeronautical companies face increasing needs in simulating the structural behavior of
product sub-assemblies. Numerical simulation plays an important role in the Product
Development Process (PDP) of a mechanical structure: it allows engineers to
numerically simulate the mechanical behavior of this structure submitted to a set of
physical constraints (forces, pressures, thermal field, . . . ). The local or global, linear
or nonlinear, static or dynamic analyses of structural phenomena using Finite Element
Analysis(es) (FEA) simulations are now widespread in industry. These simulations
play an important role in reducing the cost of physical prototyping and in justifying and
certifying structural design choices. As an example, let us consider the latest Airbus
program, the A350: a major physical test program is still required to support the development
and certification of the aircraft, but it is based on predictions obtained from Finite
Element Models (FEM). Consequently, the test program validates the internal load
distributions which have been computed numerically.
Today, FEAs are no longer restricted to the simulation of components alone;
they can be applied to large assemblies of components. Simulation software capabilities,
associated with optimized mathematical resolution methods, can process the very large
numbers of unknowns derived from the initial mechanical problems. Such simulations
require a few days of computation, which is an acceptable amount of time during a product
development process in aeronautics. However, it is important to underline that these
numerical simulations require setting up a mathematical model of the physical
object or assembly being analyzed. By design, the FEA method incorporates simplification
hypotheses applied to the geometric models of components or assemblies
compared to their real shapes and, finally, produces an approximate solution. To obtain
the most faithful results with a minimum bias within a short amount of time,
engineers are encouraged to spend a fair amount of time on the generation of simulation
models. They have to stay critical with respect to the mathematical method used and
to the consequences of the simplification hypotheses, in order to understand the deviations
of the simulation model compared to real tests and to judge the validity of the simulation
results.
Some limits faced in structural simulations
Numerical simulations of assemblies remain complex and tedious due to the pre-processing
of the assembly 3D models available from Digital Mock-Ups (DMUs), which stand
as the virtual product reference in industry. This phase is highly time consuming compared
to the numerical computation phase. In the past, the use of distinct software
tools between the design and simulation phases required re-generating the
Computer Aided Design (CAD) geometry of a component in the simulation software.
Today, the development and use of DMUs in a PDP, even with rather small assembly
models, bring 3D models at hand for engineers. The concept of DMU was initially developed
for design and manufacture purposes as a digital representation of an assembly
of mechanical components. Consequently, DMUs are good candidates to support digital
analyses of several PDP processes, e.g., part assembly ones. In industry, DMUs are
widely used during a PDP and regarded as the virtual product geometry reference. This
geometric model contains a detailed 3D representation of the whole product structure
that is made available to simulation engineers. To prepare large sub-structure models
for simulation, such as wings or aircraft fuselage sections, the DMU offers a detailed
and precise geometric input model. However, speeding up the simulation model generation
strongly relies on the time required to perform the geometric transformations
needed to adapt a DMU to FEA requirements.
The pre-processing phase implies reworking all DMU 3D data to collect subsets of
components, to remove unnecessary or harmful areas leading to simplified shapes, to
generate adequate FE meshes, to add the boundary conditions and material properties
as needed for a given simulation goal. All these operations, the way they are currently
performed, bring little added value to a PDP. Currently, time and human resources
involved in pre-processing CAD models derived from DMUs into FE models can even
prevent engineers from setting up structural analyses. Very tedious tasks are required
to process the large number of DMU components and the connections between them,
like contact areas.
Commercial software already provides some answers to the interactions between design
and behavioral simulation processes for single components. Unfortunately, the
operators available are restricted either to interactive geometric transformations, leading
to very tedious tasks, or to automated simulation model generation suited to
simple models only. An automated generation of complex assembly simulation
models still raises real difficulties, and it is far too tedious to process groups of components
as well as sub-assemblies. As detailed in Chapter 1, these difficulties arise
because designers and simulation engineers work with different target models,
so shape transformations are needed; as a result, DMUs cannot be easily
used to support the preparation of structural analysis models.
Scientific research work has mainly focused on the use of global geometric transformations
of standalone CAD components. Few contributions have addressed the automation
of assembly pre-processing (see Chapter 2), leaving engineers to interactively
process each assembly component. Aeronautical structures are particularly complex to
transform due to the large number of transformations required on hundreds of thousands
of parts and interface joints. The available operators are still not generic enough to be
adapted to engineers' needs, especially when idealizations of components or assemblies
must be produced. Indeed, it is still common practice for engineers to generate their
models interactively. Consequently, some simulations are not even addressed because
their preparation time cannot fit within the schedule of a PDP, i.e., simulation results
would be available too late.
Work Purposes
To meet the needs of large assembly simulation model preparation, improvements in
processing DMUs are a real challenge for aircraft companies. The contributions of this
thesis are mainly oriented toward the transformation of 3D CAD models extracted from
a DMU, and of their associated semantics, for the Finite Element analysis of large assembly
structures. To handle large models, it is mandatory that the proposed principles and
operators speed up and automate as much as possible the required DMU transformations.
This work is guided by input DMU data defining the exact content of
the simulation models to be built and uses mechanical and geometric criteria to
identify the necessary geometric adaptations.
This research thesis is divided into 6 chapters:
• Chapter 1 describes the current practices in the aeronautical industry regarding the
generation, from DMUs, of geometric models supporting the generation of simulation
models. It will define the different geometric entities used in CAD software
as well as the notion of mechanical analysis using the FE method. It will also
detail the problem of DMU geometry preparation for FE assembly models;
• Chapter 2 reviews the current bibliographical and technological status of the methods
and tools proposed for the preparation and adaptation of geometric models for
FEA. This analysis will cover the review of component pre-processing as well
as its assembly counterpart;
• Chapter 3 presents the proposed contribution to assembly pre-processing based on
the recommendations of Chapter 1 and the analysis of Chapter 2. This approach
uses, as input model, a DMU enriched at the assembly level with geometric
interfaces between its components and functional properties of these components and,
at the component level, with a structured volume segmentation using a graph structure.
From this enriched model, an analysis framework is able to connect the simulation
hypotheses with the shape transformations. This chapter will also identify the
geometric operators segmenting a component to transform it in accordance with
the user’s simulation requirements;
• Chapter 4 exposes the principles of the geometric enrichment of a component
using a construction graph. An algorithm extracting generative processes from
B-Rep shapes will be detailed. It provides a powerful geometric structure containing
simple primitives and geometric interfaces between them. This structure
contributes to an analysis framework and it remains compatible with an assembly
structure containing components and geometric interfaces;
• Chapter 5 details the analysis framework through the exploitation of the construction
graph to analyze the morphology of a component. Then, geometric operators
are specified that can be robustly applied to automate the shape transformations of
components and interfaces during an assembly preparation process;
• Chapter 6 extends this approach toward a methodology that uses the geometric operators
previously described to perform idealizations and template-based transformations
of groups of components. Results of this methodology will also be
presented to illustrate it through aeronautical examples that use the transformation
operators developed.
Chapter 1
From a Digital Mock Up to
Finite Element Assembly
Models: Current practices
This chapter presents the problem of DMU adaptation for the generation of
Finite Elements (FE) assembly models. As a first step, the technical context
is addressed through the description of the DMU data content. This description
deals with the geometrical entities and concepts used to represent 3D CAD
components as well as with the representation of assemblies currently available
before DMU pre-processing. Then, the notion of mechanical analysis using the
FE method is defined and the main categories of geometric models within FEA
are described. The analysis of current industrial processes and practical DMU
data content highlights the issues regarding assembly simulation model preparation
and points out the lack of tools in industrial software to reach the level
of abstraction required by FEA, especially when idealizations are needed. The
main time consuming shape transformations and the missing information about
components’ interfaces in DMUs are identified as a starting point to improve the
robustness of DMU pre-processing.
1.1 Introduction and definition of the DMU concept
To speed up a Product Development Process (PDP), as stated in the introduction,
aeronautical, automotive and other companies face increasing needs in setting up FE
simulations of large sub-structures of their products. Their challenge covers the study
of standalone components but it is now expanding to simulate the structural behavior
of large assembly structures containing up to thousands of components.
Today, aeronautical companies have to manage a range of products during their
entire lifecycle, from their early design phase to their manufacture and even up to
their destruction and recycling. The corresponding digital data management concept
Figure 1.1: The Digital Mock-Up as the reference representation of a product, courtesy of Airbus Group Innovations.
aggregating all the information about each product is called Product Lifecycle Management
(PLM). This concept includes the management of a digital product definition
for all the functions involved in a PDP. To replace a physical mock up by its digital
counterpart, the concept of a virtual representation of a product has been developed,
i.e., the Digital Mock Up (DMU) (see Figure 1.1). As Drieux [Dri06] explained, a
DMU is an extraction from the PLM of a product at a given time. The DMU was
initially created for design and manufacture purposes as a digital representation of an
assembly of mechanical components. Consequently, DMUs are convenient to support a
virtual analysis of several processes, e.g., part assembly ones. For instance, a DMU may
be extracted at the manufacturing level, which lets engineers quickly generate and
simulate trajectories of industrial robots and set and validate assembly tolerances.
During project reviews of complex products such as an aircraft, DMUs contribute to
the technical analysis of a product, as conducted by engineering teams. Connected to
virtual reality technology, a DMU can be at the basis of efficient immersive tools to
analyze interferences among the various subsystems contained in the corresponding
product [Dri06, IML08].
During the design phase, the DMU is considered as the reference geometry of the
product representation. It provides engineers with all the digital information needed during
their PDP. The various CAD models representing different stages of the product during
its development or meta data related to specific applications such as manufacturing are
examples of such information. The development and use of DMUs in a PDP bring 3D
assembly models at hand for engineers. Because this reference model contains detailed
3D geometry, it offers new perspectives for analysts to process more complex shapes
while speeding up their simulation model generation up to the FE mesh. However,
speeding up the simulation model generation strongly relies on the time required to
perform the geometric transformations needed to adapt the DMU to FE requirements.
In order to understand the challenges involved in the preparation phase from DMUs
of large sub-structure models for FE simulations, it seems appropriate to initially
present the various concepts and definitions related to the Finite Element Analysis
(FEA) of mechanical structures as well as those related to the current models and
information available in DMUs. This chapter describes the current practices regarding
the shape transformations needed to generate, from a DMU, the specific geometric models
required for FEA. Going from the theoretical formulation of a mechanical analysis
to the effective shape transformations faced by engineers to generate FE models, this
chapter highlights the time-consuming preparation process of large assembly
structures as a key issue. This is detailed in Section 1.5 and refers to the identification of
key information content during current FE simulation preparation processes from two
perspectives: a component point of view as well as an assembly point of view. In the
last section 1.7, the research objectives are presented to give the reader an overview of
the research topic addressed in this thesis.
1.2 Geometric representation and modeling of 3D
components
A DMU is straightforwardly related to the representation of the 3D components contained
in the product. As a starting point, this section outlines the principles of
mathematical and computer modeling of 3D solids used in CAD software. Also, it
describes the common schemes available for designers to generate components through
a construction process. Because a component is used in a DMU as a volume object,
this section focuses on solids’ representations.
1.2.1 Categories of geometric families
Prior to explanations about the common concepts for representing 3D components, it
is important to recall that the geometric model describing a simulation model contains
different categories of geometric entities used in the CAD and FEA software
environments. These entities can be classified, from a mathematical point of view, in
accordance with their manifold dimension.
0-dimensional manifold: Point
These entities, the simplest geometric representations, are not intended to represent
the detailed geometry of components. However, they are often used in structural analysis
as abstractions of a component, e.g., its center of mass or a key point where
concentrated forces are applied. They are also frequently encountered to
represent interfaces between aeronautical systems and structures in a DMU, and they are
the lowest-level entities in the description of a component's solid model.
1-dimensional manifold: Line, Circle, Curve
These entities, such as lines, circles and, more generally, curves, are mainly involved
in the definition of higher-dimensional models like surfaces. In structural analysis,
they represent long and slender shapes, e.g., components behaving like beams, complemented
with section inertia properties. During a solid modeling process, they take part in the
definition of 2D sketches (see Figure 1.5 for an example of a sketch-based form feature)
or act as profile curves in other shape design processes. They also represent the location
of geometric interfaces between components, e.g., the contact of a cylinder on a plane.
2-dimensional manifold: Surface
Surfaces are used to represent the skin, or boundary, of a 3D object. Initially, they
were introduced to represent the complex shapes of an object, commonly designated as
free-form surfaces. Polynomial surfaces like Bézier, B-Spline, NURBS (Non-Uniform
Rational B-Spline) and Coons surfaces are commonly used for modeling objects with curved
surfaces and for the creation of simulation models, e.g., CFD simulations for aerodynamics or
simulations using the isogeometric paradigm. Here, surface models will be essentially
reduced to canonical surfaces, i.e., plane, sphere, cylinder, cone and torus, which are also
described by classical implicit functions. This restriction is set for simplicity purposes,
though it is not too restrictive for mechanical components because these surfaces are heavily
used there. In structural analysis, using a surface model is a frequent practice to represent
idealized models equivalent to a volume component resembling a sheet. The notion of
idealization will be specified in Subsection 1.3.2. Even if surface models can represent
complex shapes, they are not sufficient to represent a 3D object as a volume, which
requires an explicit representation of the notions of inside and outside.
3-dimensional manifold: Solid
A solid contains all the information needed to comprehensively define the volume of the
3D object it represents. Based on Requicha's mathematical definition [Req77, Req80],
a solid is a subset of the 3D Euclidean space. Its principal properties are:
• A solid has a homogeneous three-dimensionality: its interior is homogeneous and
its boundary cannot have isolated portions;
Figure 1.2: Regularized Boolean operations on two solids: (a) conventional Boolean operators can produce an additional face and an unclosed boundary; (b) CAD systems use regularized Boolean operators, which produce valid solids.
• A solid must be closed under rigid motions (translations, rotations) and under
operations that add or remove material: applied to solids, these must produce other solids;
• The boundary of a solid must unambiguously determine the interior and exterior
of the solid.
To describe a solid, topological properties are mandatory in addition to the geometric
entities defining this object. This requirement is particularly important when
describing complex shapes because they are generated using a process that combines
simple primitives to progressively increase the shape complexity until the desired
solid is reached. Indeed, the principle of this generation process for complex solids is
similar to that of complex free-form surfaces. During the design process, the constructive
processes used to combine elementary primitives are key information to enable the
efficient modification processes that are frequently required during a PDP: it is commonly
admitted that 80% of the design time is spent on modification processes.
1.2.2 Digital representation of solids in CAD
Geometric modeling is the core activity of a CAD system. It contains all the geometric
information describing 3D objects. Although there are various digital representations
of 3D components falling into the category of solids, the two major representations
used in CAD software are detailed in the following paragraphs.
Constructive Solid Geometry (CSG) representation
This representation designates Constructive Solid Geometry approaches devoted to
the design and generation of 3D components. It is important to note that the 'usual'
set of Boolean operations cannot be directly applied to solid models; otherwise, it
would create invalid solids. As illustrated in Figure 1.2a, the conventional intersection
Figure 1.3: (a) Representation of the construction tree of a CSG component. (b) B-Rep solid decomposed into faces, edges and vertices (after Stroud [Str10]).
operator applied to two solids would create a non-regular solid with an isolated face:
in Figure 1.2b, the intersection of two cubes would generate a face and not a solid or
the empty set. Therefore, it is necessary to define a new set of Boolean operations,
the so-called regularized intersection, union and difference (see Figure 1.2b). These
operators are a modified version of the conventional operators and will be used in the
algorithms presented in Chapters 4 and 5.
The CSG approach represents a solid as a sequence of Boolean operations applied to
elementary solids (cylinders, spheres, extrusions, revolutions), i.e., primitives (see Figure
1.3a). The modeler stores the primitives (cylinders, cubes, . . . ) and the operations that
have been applied to them, essentially regularized Boolean operations (union, intersection,
difference). This can be visually represented as a tree structure, although a CSG
representation does not necessarily form a binary tree. The advantage of this model
is that it gives a structure which can be easily modified, as long as the modification is
compatible with the construction tree. The location of a simple primitive, e.g., a hole
created by the subtraction of a cylinder, can easily be changed without modifying the CSG
structure. Its weaknesses are: the difficulty of representing complex geometric shapes with
free-form surfaces, the complete re-evaluation of the tree after modifications, and the
non-uniqueness of this tree with regard to a given shape.
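To make this constructive structure concrete, the following minimal sketch (an illustration only, with hypothetical class names, not taken from any CAD kernel) represents a CSG tree in Python and evaluates it by point-membership classification on implicit primitives: each Boolean node combines the classifications of its children. A genuinely regularized operator would, in addition, discard the lower-dimensional parts of the result, which this sketch does not handle.

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float, float]

@dataclass
class Sphere:
    center: Point
    radius: float
    def contains(self, p: Point) -> bool:
        # Point-membership test against the implicit equation of the sphere.
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

@dataclass
class Box:
    lo: Point
    hi: Point
    def contains(self, p: Point) -> bool:
        # Axis-aligned box given by two opposite corners.
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

@dataclass
class Boolean:
    op: str       # 'union', 'intersection' or 'difference'
    left: object
    right: object
    def contains(self, p: Point) -> bool:
        a, b = self.left.contains(p), self.right.contains(p)
        if self.op == 'union':
            return a or b
        if self.op == 'intersection':
            return a and b
        return a and not b  # difference

# Hypothetical example: a block from which a spherical pocket is subtracted.
solid = Boolean('difference',
                Box((0.0, 0.0, 0.0), (10.0, 10.0, 10.0)),
                Sphere((5.0, 5.0, 10.0), 3.0))
print(solid.contains((5.0, 5.0, 9.0)))  # False: inside the removed material
print(solid.contains((1.0, 1.0, 1.0)))  # True: inside the remaining material
```

Such a tree makes localized modifications cheap, since changing a primitive only requires updating the corresponding leaf before re-evaluating the solid, which reflects both the advantage and the re-evaluation cost mentioned above.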
Boundary representation (B-Rep)
In a B-Rep representation, the CAD kernel handles the skin of the object and
the side on which material lies (inside/outside). The B-Rep model contains the result of
the modeling operations, i.e., the information defining the shape of the solid (see Figure 1.3b).
The volume of a solid is represented by a set of surfaces describing its boundary. Two
categories of information are stored in a B-Rep model, a set of geometric entities and a
topological structure:
• The geometric information: it consists of a set of surfaces defining the boundary of the solid and locating it in 3D space. These surfaces are bounded by trimming curves;
• The topological information: this data structure expresses the mandatory topological properties, i.e., closure and orientation, and leads to the description of shells, faces, wires, edges and vertices together with the adjacency relationships between these topological entities. It can be represented using incidence graphs such as face-edge and edge-vertex graphs.
In a B-Rep model, the set of surfaces is closed and Euler operators express the necessary conditions to preserve the consistency of a solid's topology during modifications. The advantage of this representation lies in its ability to use non-canonical surfaces, i.e., NURBS, allowing a user to represent more complex shapes than a CSG representation. Among the disadvantages of B-Rep models, the representation of the solid's boundary contains information only about its final shape and this boundary is not unique for a given shape. Today, the B-Rep representation is widespread in most CAD geometric modelers and it is associated with a history tree to enable the description and use of parametric models.
Nowadays, CAD modelers incorporate the B-Rep representation as well as Boolean
operators and the B-Rep representation is the main representation used in aeronautical
DMUs. In this thesis, the input CAD model of a 3D component is considered as
extracted from a DMU and defined as a solid via a B-Rep representation.
Representation of manifold and non-manifold geometric models
To understand the different properties used in CAD volume modelers and Computer
Aided Engineering (CAE) modelers, the notions of manifold solid and non-manifold
objects have to be defined. One of the basic properties of a CAD modeler representing
solids is that the geometric models of 3D components have to be two-manifold to define
solids. The condition for an object to be two-manifold is that, 'at every point of its boundary, an arbitrarily small sphere cuts the object's boundary in a figure homeomorphic to a disc'. This condition ensures that a solid encloses a bounded partition of the 3D space and represents a physical object. Practically, in a B-Rep representation, the previous condition partly reduces to the condition that every edge of a manifold solid must be adjacent to exactly two faces. Because a solid is a two-manifold
object, its B-Rep model always satisfies the Euler-Poincaré formula as well as all the
associated operators performing the solid’s boundary transformations required during
a modeling process:
v − e + f − h = 2(s − g) (1.1)
where v, e, f, h, s and g represent the numbers of vertices, edges, faces and hole-loops, the number of connected components (shells) and the genus of the solid, respectively.
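As a simple numerical illustration of formula (1.1), the following Python sketch checks it against hand-counted entity numbers of two elementary manifold solids; it is only an arithmetic check, not a query to an actual B-Rep data structure.

def euler_poincare_holds(v, e, f, h, s, g):
    """Check v - e + f - h == 2*(s - g) for given entity counts of a B-Rep solid."""
    return v - e + f - h == 2 * (s - g)

# A cube: 8 vertices, 12 edges, 6 faces, no hole-loops, 1 shell, genus 0.
print(euler_poincare_holds(v=8, e=12, f=6, h=0, s=1, g=0))    # True

# A block with a square through hole (genus 1): 16 vertices, 24 edges,
# 10 faces, 2 hole-loops, 1 shell, genus 1.
print(euler_poincare_holds(v=16, e=24, f=10, h=2, s=1, g=1))  # True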
Figure 1.4: Examples of non-manifold geometric models.
By contrast, an object is said to be non-manifold when it does not satisfy the conditions to be a manifold. To address the various needs of object representation, the concept of manifold has been extended to represent a wider range of shapes as needed through a PDP. As illustrated in Figure 1.4, a non-manifold geometric modeling kernel incorporates the ability to describe geometric regions of different manifold dimensions, connected or not along other geometric regions of lower manifold dimensions. Consequently, an edge can be adjacent to more than two faces. However, some basic properties of solids are no longer valid, which increases the difficulty in defining the consistency of such models. In the context of Computer Aided Design to Finite Element Analysis (CAD-FEA), this is also referred to as 'cellular modeling' [TNRA14] and few geometric modeling kernels natively incorporate this category of models [CAS14]. These models are commonly used in structural analysis where surfaces
often intersect along more than one edge (see Figure 1.11c). Therefore, CAE
software proposes data structures to generate non-manifold geometry. However, most commercial FEA software contains manifold geometric modelers with extensions to model non-manifold objects, which does not bring the desired end-user performance. Here, the input CAD models are considered as manifold solids and the
generated models for FEA can be non-manifold.
1.2.3 Complementary CAD software capabilities: Feature-based
and parametric modeling
CAD software incorporates a volume geometric modeling kernel to create manifold solids and a surface modeler to enable the generation of two-manifold objects with free-form surfaces, i.e., two-manifold objects with boundary. CAD tools are essential for the generation of DMUs because they are used to design and to virtually represent the components of a product. In addition to the presentation of the geometric models used in CAD software (see Section 1.2.2), it is also crucial to mention some CAD practices contributing to the design of mechanical components. This helps understand the additional information associated with B-Rep models which is also available in a DMU.
Concept of feature:
As stated in Section 1.2.2, B-Rep models only describe the final shape of solids.
To ease the generation process of 3D mechanical models and to generate a modeling
history, CAD software uses pre-defined form features as primary geometric regions of
the object [Sha95]. A feature is a generic concept that contains shape and parametric
information about a geometric region of a solid. The features can represent machining
operations such as holes, pockets, protrusions or more generic areas contributing to the
design process of 3D components like extrusions or revolutions.
Generative processes:
The following chapters of this thesis use the term ”generative processes” to represent
an ordered sequence of processes emphasizing the shape evolution of the B-Rep
representation of a CAD component. Each generative process corresponds to the generation
of a set of volume primitives to be added or to be removed from a 3D solid
representing the object at one step of its construction.
Features Taxonomy: The features of solid modeling processes can be categorized
into two sets [Fou07]:
• Features independent from any application:
– Geometric entities: points, axes, curves, sketches;
– Adding/removing material: extrusion, revolution, sweeping, Boolean operators;
– Surface operations: fillets, chamfers, fillings;
– Repetitions, symmetries.
• Features related to an application (hole drilling, sheet metal forming, welding).
As an example, a material addition extrusion feature is illustrated in Figure 1.5. Its generative process consists in drawing a 2D sketch using lines, circles or planar curves to define a planar face and then submitting this face to a translation along an extrusion vector to generate a volume.
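Purely as an illustration of this generative process, the following Python sketch (hypothetical classes, unrelated to any CAD API) reduces a material-addition extrusion to a closed planar sketch and an extrusion length; the dimensions echo those of Figure 1.5 but are otherwise arbitrary.

from dataclasses import dataclass

@dataclass
class Sketch:
    """Closed 2D contour drawn on a planar face (here a simple polygon)."""
    points: list        # [(x, y), ...] in the sketch plane, closed implicitly

    def area(self):
        # Shoelace formula for the area enclosed by the contour.
        a = 0.0
        for (x1, y1), (x2, y2) in zip(self.points, self.points[1:] + self.points[:1]):
            a += x1 * y2 - x2 * y1
        return abs(a) / 2.0

@dataclass
class ExtrusionFeature:
    """Material-addition feature: translate the sketch along an extrusion vector."""
    sketch: Sketch
    length: float       # extrusion length along the sketch-plane normal

    def volume(self):
        # Volume of the generated prism = sketch area x extrusion length.
        return self.sketch.area() * self.length

base = Sketch(points=[(0, 0), (100, 0), (100, 100), (0, 100)])  # 100 x 100 square
pad = ExtrusionFeature(sketch=base, length=16)                  # 16 mm thick pad
print(pad.volume())   # 160000.0 mm^3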
The principle of feature-based modeling is to construct a part ”from a simple shape
to a complex one” and it is similar to the CSG principle. As illustrated in Figure 1.5
and, more generally, in Appendix A where a complete part design is represented, the
user starts with the design of simple volumes to represent the overall shape of a component and progressively adds shape details such as fillets and holes to reach its
final shape. This qualitative morphological approach to a solid’s construction process
can be partly prescribed by company modeling rules but a user remains largely free
to choose the features and their sequence during a construction process. This is a
consequence of a design process enabling the user to monitor step by step, with simple
features, the construction process of a solid.
In CAD software, the sequence of features to create a component is represented and
stored in a construction tree (see Figure 1.5).
Dependencies between features and construction tree:
The construction tree of a solid is connected to the notion of parametric modeling.
In addition to the feature concept, parametric modeling has been introduced in CAD
software to enable the regeneration of a solid when the user wants to apply a local
shape modification. With parametric modeling, the user input defines the geometric constraints and dimensions of a feature in relation to other features. In most cases, the location of a new feature is not expressed in the global coordinate system of the solid. A feature uses an existing geometric entity of the solid, e.g., a planar face, as a basis for a new sketch (see Figure 1.5). The sketching face creates another dependency between features in addition to the parent/child relationships that are stored in the construction tree.
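The following short Python sketch illustrates, under the assumption of a hypothetical feature list, how such parent/child dependencies constrain the regeneration order of a parametric model; it only mimics the principle and is not the mechanism of any specific CAD software.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each feature lists its parents, e.g. the feature providing its sketching face.
dependencies = {
    "base_extrusion": [],
    "rib_extrusion": ["base_extrusion"],       # sketched on a face of the base
    "hole": ["rib_extrusion"],                  # drilled through the rib
    "fillet": ["base_extrusion", "rib_extrusion"],
}

# Regeneration order after a modification: parents are re-evaluated before children.
order = list(TopologicalSorter(dependencies).static_order())
print(order)   # e.g. ['base_extrusion', 'rib_extrusion', 'hole', 'fillet']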
Conclusion
Solid modeling is a way to represent a digital geometric model of a 3D component.
The B-Rep representation used in commercial CAD software allows a user to design
complex shapes but it provides only a low level volume description because it does not
give a morphological description of a solid. As a complement, feature modeling is easy to learn for a CAD user because it allows him, resp. her, to naturally design mechanical components without an in-depth understanding of the CAD modeling kernel. Construction
trees structure a 3D object using simple feature models, extending the use of B-Rep
models with a history representing their construction processes. Parametric modeling
is also efficient to produce a parameterized representation of a solid to enable easy
modifications of some of its dimensions.
Figure 1.5: CAD construction process using form features. The modeling sequence is stored in a construction tree.
Figure 1.6: Example of an aeronautical CAD assembly: Root joint model (courtesy of Airbus Group Innovations).
1.3 Representation and modeling of an assembly in
a DMU
Any mechanical system is composed of different components assembled together with
mechanical joints. This section aims at presenting how an assembly is represented
and created in a DMU. It underlines how an assembly is processed with a non-formal
conventional representation. In particular, this section deals with the current content
of a DMU extracted from a PLM in an aeronautic industrial context (data that are
actually available for structural simulation).
1.3.1 Effective DMU content in aeronautical industry
Assembly structure in a CAD environment
In CAD software, an assembly is a structure that organizes CAD components into groups. Each component contains the geometric and topological data described in Section 1.2.2. Then, the component is instantiated in the assembly structure as many times as it should appear. Figure 1.6 represents an aeronautical structure
(wing fuselage junction of an aircraft) with a sub-assembly for each composite part and
instantiation of standard components such as screws and nuts.
To create an assembly structure in 3D, the user iteratively positions components in 3D space relative to other components (axis alignment of holes, surface mating, . . . ).
These connections between components, called position constraints, connect degrees of
freedom from each component involved in the corresponding constraints. However,
these constraints may not represent the common geometric areas connecting the corresponding
components. For instance, to connect a screw with the through hole of a
plate, a coaxiality constraint can be applied between the cylindrical surface axis of the
screw and the hole axis, independently from any contact between the surfaces. The
radii of the cylindrical surfaces of these two components are not involved, i.e., any screw can be inserted in the hole. This does not match reality and can lead to assembly inconsistencies.
In a CAD environment, the assembly structure is stored in a product tree which
connects each of its components to others with assembly constraints. On top of the
B-Rep representation, each component can contain complementary information such as
a name or a product reference, a contextual description of the component’s function,
modification monitoring information, color, material designation. During a product
design process, a CAD environment also offers capabilities to share external references
to other components not directly stored in a product structure, to parameterize components’
dimensions with mathematical formulas, . . .
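The organization described above can be caricatured with the following Python sketch (hypothetical classes and part references, chosen only for illustration), where a single component definition is instantiated several times in a product tree made of sub-assemblies.

from dataclasses import dataclass, field

@dataclass
class Component:
    """One CAD definition (B-Rep geometry plus metadata), instantiated many times."""
    reference: str
    material: str = "unspecified"

@dataclass
class Instance:
    """Occurrence of a component in the assembly, placed in the global frame."""
    component: Component
    placement: tuple            # simplified here to a translation only

@dataclass
class Assembly:
    name: str
    instances: list = field(default_factory=list)
    children: list = field(default_factory=list)   # sub-assemblies

    def count_instances(self):
        return len(self.instances) + sum(c.count_instances() for c in self.children)

# A junction sub-assembly where one screw definition is instantiated several times
# (the references below are invented for the example).
screw = Component(reference="SCREW-XYZ", material="titanium")
plate = Component(reference="PLATE-01", material="aluminium")
junction = Assembly("junction", instances=[
    Instance(plate, (0.0, 0.0, 0.0)),
    Instance(screw, (10.0, 0.0, 0.0)),
    Instance(screw, (30.0, 0.0, 0.0)),
])
root = Assembly("root_joint", children=[junction])
print(root.count_instances())   # 3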
DMU evolution during a PDP
All along the product construction process in a PDP, its digital representation
evolves. The information related to the product definition such as its 3D geometric
representation gets modified. In addition, the product development process is shared
among several design departments that address different engineering areas: the product
mechanical structure, electrical systems, . . . All these areas have to share their design information and sub-assembly definitions. Whether it is geometry or parameters,
this information should be integrated in a common product representation. The DMU
concept is a means to support the geometric definition and evolution of components
while maintaining the product assembly updated. As an example, depending on the
maturity of the product, a DMU can be reduced to a simple master geometry at the
early stage of the product design using functional surfaces or, it can contain the full
3D representation of all its components as required for manufacturing.
All these needs require that a Product Data Management System (PDMS) be used
to support the definition and evolution of a DMU. The PDMS structures the successive
versions, adaptations to customer requirements as well as the various technical solutions
that can be collectively designated as variants. A product is represented as a tree referencing
the CAD components and their variants. The various product sub-assemblies
of the design team, initially created in a CAD environment, are transferred and centralized
in the PDMS. It is based on this environment that an engineer can formulate
a request to extract a DMU used as input model for his, resp. her, simulations.
Figure 1.7: Example of complex DMU assembly from the Alcas project [ALC08] and the Locomachs project [LOC16].
The DMU content and management in the aeronautical industry: a pragmatic
point of view.
Today, as described previously, a DMU stands for the reference geometric representation
of a product used by structural and systems engineers. A DMU is the input
model to generate their simulation models. In practice however, the information of a
DMU extracted from the PDMS reduces to a set of CAD components positioned in
3D space with respect to a global reference frame and a tree structure representing a
logical structure of this product [Fou07, FL∗10]. This significant loss of information originates
from:
• The size and the fragmented location of a DMU: it contains a large number of components (see Figure 1.7) created by different design teams during a PDP, e.g.,
in aeronautics, the extraction of a DMU from the PDMS requires one day (no
centralized data);
• The robustness of a DMU: Positioning constraints between components are not
available. Components are standalone objects in a common reference frame. A
DMU is an extraction from the PDMS at a given time. During the evolution of
a PDP, engineers cannot maintain the interfaces between the geometric models
of the components; the corresponding geometric constraints, if they were set,
have to be removed because their management becomes too complex. As an
example, if a component is removed, the removal of its corresponding geometric
constraints could propagate other modifications throughout the whole assembly.
Additionally, the number of geometric constraints gets very large for complex products and their consistency is still an open problem [LSJS13]. It can exceed three hundred constraints for an assembly with fewer than fifty components. This is what motivates the solution to locate all components in a global coordinate system. Consequently, each component is positioned independently of the others, which increases the robustness of the DMU with regard to its modifications even though the consistency of the DMU becomes more difficult to preserve.
As a result, if a product is complex and has a large number of components created by different design teams during a PDP, the PDMS does not contain information specifying
assemblies. Information about proximity between components is not available. The
relational information between parts in an assembly created in a CAD environment,
e.g., the assembly constraints, is lost during the transfer between the CAD environment
and the PDMS. The DMU is restricted to a set of individual CAD components and
a tree decomposition of a product. As Drieux explained in his DMU analysis [Dri06], a DMU is a geometric answer to design specifications; it does not contain multiple representations adapted to the various users' specifications during a PDP, including the structural analysis requirements.
1.3.2 Conventional representation of interfaces in a DMU
In order to carry on with the pragmatic analysis of a configuration where a simulation engineer
receives a DMU as input model, this section defines the notion of assembly
interfaces between components and their conventional representations in industry.
Lack of a conventional representation of components
Shahwan et al. showed [SLF∗13] that the shapes of digital components in a DMU
may differ from the shape of the physical object they represent. These differences originate from a compromise between the real object shape, which can be tedious to model, and the need for shape simplifications to ease the generation of a DMU. This is particularly true for standard parts such as components used in junctions (bolted, riveted). Given their large number, geometric details, e.g., a threaded area, are not represented because they would unnecessarily complicate the DMU without improving its efficiency during a design process. Since there is no standard geometric model of
assembly components used in 3D junctions, each company is likely to set its own
representation.
As illustrated in Figure 1.8, in large aeronautical DMUs, bolted junctions as well as
Figure 1.8: Representation of a bolted junction in a structural DMU of an aircraft.
riveted junctions may be represented with a simplified representation defined with two
perpendicular lines. This representation is sufficient to generate an equivalent volume
model using basic information about bolt type, nominal diameter and length. However, no information exists about the connections between the bolt and the junction's components it is related to. There is neither a logical link between the screw and nut and the plates they connect, nor a geometric model of the interface between the screw and the nut forming the junction. More generally, this lack of geometric interfaces extends to every interface between assembly components. As explained in Section 1.5.3, this
poor representation complicates the generation of equivalent simulation models. This
complexity issue applies also to deformable components1, which are represented under
an operating configuration.
Definition of interfaces between components
Based on the description of the DMU content given at Section 1.3.1, there is no
explicit information about the geometric interaction between components. Even the
content of geometric interfaces between components can differ from one company to
another. In this thesis, the definition of Conventional Interface (CI) of Léon et al.
[FL∗10, LST∗12, SLF∗13] is used. From their DMU analysis, they formalized the
representation of conventional interfaces between components. To cover all the possible
interactions between two B-Rep objects C1 and C2, they classified the CIs into three
categories (see Figure 1.9):
1. Contacts: these are configurations where the boundary surfaces of the components
C1 and C2 and the relative positions of these components are such that: ∂C1 ∩ ∂C2 = S ≠ ∅ and ∂C1 ∩* ∂C2 = ∅, where ∂C1 and ∂C2 represent the
boundary surfaces of C1 and C2, respectively. S refers to one or more geometric
1Here, deformable components refer to a category of components with plastic or rubber parts whose displacements under loading conditions are of a magnitude such that the designer takes them into account when setting up the DMU. This is opposed to metal parts, whose displacements are neglected.
Figure 1.9: Classification of Conventional Interfaces (CI) under contact, interference and clearance categories.
elements that can be surface-type, line-type or point-type. Figure 1.9a illustrates
contacts between CAD components;
2. Interferences: these are configurations where the boundary surfaces of the components
C1 and C2 and the relative positions of these components are such that:
∂C1 ∩* ∂C2 = C12 ≠ ∅, where C12 is the intersection volume. Interferences are
detected and analyzed during a DMU review to ensure that there is no physical
integration problem between components. However, according to Léon et al., interferences, also named clashes, may occur when components' shapes are simplified with respect to their physical models (see Figure 1.9b), or in case of incorrect relative positions of components. Interferences resulting from such configurations make a DMU virtually inconsistent, which requires the user's analysis. Interferences between standard components form specific classes of interferences, which are used to process DMUs in the present work;
3. Clearances: they represent 3D domains without a clear geometric definition, which are difficult to identify and to represent (see Figure 1.9c). In this work, clearances are considered as functional clearances and are identified as design features.
The concept of CI can be used in our assembly context, since it is independent of any modeling context. Section 3.3 explains how CIs can be extracted from a DMU.
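As a toy illustration of this classification, the following Python sketch approximates the two components by axis-aligned boxes; the tests are crude stand-ins for the regularized Boolean operations that would be applied to actual B-Rep solids.

def classify_interface(c1, c2, tol=1e-9):
    """Classify the conventional interface between two axis-aligned boxes.

    c1, c2: ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Returns 'interference', 'contact' or 'clearance'. This is only a toy stand-in
    for the regularized Boolean tests performed on B-Rep solids.
    """
    gaps = []
    for axis in range(3):
        lo = max(c1[0][axis], c2[0][axis])
        hi = min(c1[1][axis], c2[1][axis])
        gaps.append(hi - lo)        # > 0 overlap, == 0 touching, < 0 separated
    if all(g > tol for g in gaps):
        return "interference"       # non-empty intersection volume (clash)
    if all(g >= -tol for g in gaps):
        return "contact"            # boundaries share a face, an edge or a point
    return "clearance"              # disjoint components

plate  = ((0, 0, 0), (100, 100, 10))
screw  = ((40, 40, 0), (60, 60, 30))      # overlaps the plate volume
washer = ((0, 0, 10), (100, 100, 12))     # rests on the plate's upper face
print(classify_interface(plate, screw))    # interference
print(classify_interface(plate, washer))   # contact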
Conclusion
The development and use of DMUs in a PDP make 3D models readily available to engineers. The DMU extracted from a PLM contains the complete geometric representation of the product using a B-Rep representation and CAD construction trees. Complex shapes are directly available without having to be rebuilt in a simulation software environment. However, due to considerations of robustness and size, a DMU is reduced
to a set of isolated components without an explicit geometric representation of the
interfaces between them.
1.4 Finite Element Analysis of mechanical structures
This section aims at introducing some principles of the Finite Element Method (FEM)
for structural analysis. Because the scope of this thesis covers the pre-processing of geometrical
data for FEA, this section does not detail the resolution method but focuses
on the input data required by FEA. First of all, it introduces the concept of mechanical
model for FEA. Then, it enumerates the data needed for FEA, ranging from the geometric model of each component as a FE mesh to the representation of connections between meshes, which stand for the mechanical models of the interfaces between components and are required to propagate displacement and stress fields over the assembly.
Subsequently, it describes the industrial approach to various mechanical analyses of an
aircraft structure at different levels of physical phenomena, from large coarse models
representing global deformations to detailed small assemblies devoted to the analysis
of stress distributions, as examples. Within each category, the geometric models
representing the components and their connections are described.
1.4.1 Formulation of a mechanical analysis
The goal of a numerical simulation of the mechanical behavior of a structure is to anticipate
or even supersede a physical test. It allows engineers to simulate the mechanical behavior of a virtual structure, i.e., without the existence of the real structure.
The mechanical analysis process
Independently of the resolution method, e.g., the finite element method or the finite difference method, as stated in Fine [Fin01], the mechanical analysis process
may be split into three main steps (see Figure 1.10):
1. The formulation of the model behavior:
Just as in a physical test, each virtual simulation has a specific objective: a
simulation objective (type of behavior to be observed such as displacements in a
particular area, maximal loads under a prescribed mechanical behavior, accuracy
of the expected results, . . . ). As Szabo [Sza96] describes, the first formulation
phase consists in building a theoretical model integrating the mechanical behavior
laws representative of the physical system. The analyst specifies and identifies
the key attributes of the physical system and the characteristic values of the
mechanical behavior: the simulation hypotheses. Then, the analyst applies a
Figure 1.10: Process flow of a mechanical analysis (pre-processing, resolution, post-processing).
set of modeling rules related to the simulation hypotheses in order to create a
reduced numerical simulation model ready to be sent to the resolution system.
The choice of the modeling rules implies decisions on the mechanical behavior
of the structure. When defining the shape of the structure derived from its
real shape (see Section 1.4.2) and setting up the constraints and hypotheses
related to analytical resolution methods, the mechanical engineer limits his, resp. her, range
of observations to the simulation objectives. In practice, the formulation of the
model behavior may be viewed as the transformation of the DMU input, which is
regarded as the digital representation of the physical structure, into the numerical
simulation model. Section 1.5 gives details of this crucial integration phase;
2. The resolution of the model behavior:
Once the simulation model is generated, the mechanical engineer launches the
resolution process. This phase is performed automatically by the CAE software.
Currently, the main resolution method used for structural analysis is the FEM,
which sets specific constraints at the level of the mesh generation process;
3. The results analysis:
Once the resolution process has ended, the mechanical engineer has to analyze
the results, i.e., the solutions fields computed and the output parameters that can
be derived from these fields. He, resp. she, determines the solutions' accuracy,
discusses with design teams to decide about shape modifications, validates and
integrates the results in the PDP.
Figure 1.11: Example of FE mesh models: (a) initial CAD model of a structure, (b) meshed model with 3D volume elements, (c) meshed model with idealized 2D shell elements.
1.4.2 The required input data for the FEA of a component
Although other resolution methods exist (analytical or numerical, e.g., the finite difference method), the FEM is the most widespread method in industrial mechanical simulation. The FEM is a general numerical method dedicated to the resolution of partial differential equations and its applicability is not restricted to structural simulation; it covers thermal analysis, electromagnetism, thermodynamics, . . . Many documents describe the principles of this method in detail; the reference books of Zienkiewicz [ZT00] and Bathe [Bat96] are among them. This section concentrates on the data requirements
of the method to formulate a simulation model and addresses pragmatically how the
engineer can collect/generate these data.
Geometry: Finite Element Mesh
To solve partial differential equations applied to a continuum, i.e., a continuous
medium, the FEM defines an equivalent integral formulation on a discretized domain.
This discrete domain is called a Finite Element Mesh and is produced by decomposing the CAD model representing the structure into geometric elements of simple and well-known geometry, e.g., triangles, tetrahedra, . . . , forming the finite elements, whose individual physical behavior reduces to a simple model (see Figure 1.11). When the structure is subjected to a set of physical constraints, the equilibrium equations of the overall structure percolate through all the elements once they have been assembled in matrix form.
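The assembly of element contributions into a global matrix can be illustrated with a deliberately minimal one-dimensional sketch (Python with NumPy, assuming 2-node bar elements); it is in no way representative of the 3D models discussed in this thesis.

import numpy as np

def assemble_bar_stiffness(n_elements, length, E, A):
    """Assemble the global stiffness matrix of a 1D bar split into equal 2-node elements.

    Each element contributes k = (E*A/Le) * [[1, -1], [-1, 1]] to the global matrix,
    which is the 'assembly' step mentioned above (toy example, not a full FE code).
    """
    Le = length / n_elements
    k_e = (E * A / Le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = np.zeros((n_elements + 1, n_elements + 1))
    for e in range(n_elements):
        dofs = [e, e + 1]                       # the two nodes of element e
        K[np.ix_(dofs, dofs)] += k_e
    return K

# Bar clamped at node 0, axial force F at the free end; solve K u = f on free DOFs.
K = assemble_bar_stiffness(n_elements=4, length=100.0, E=70e3, A=50.0)  # MPa, mm
f = np.zeros(5)
f[-1] = 1000.0                                   # 1 kN tip load
u = np.zeros(5)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])        # clamp node 0
print(u[-1], 1000.0 * 100.0 / (70e3 * 50.0))     # FE tip displacement vs. F*L/(E*A)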
The model size, i.e., the number of finite elements, has a direct influence on the
computation time to obtain the solution fields and it may introduce approximation
errors if not set correctly. The engineer must efficiently identify the right level of mesh
refinement related to the mechanical phenomenon he, resp. she, wants to observe. In
practice, to ease the meshing phase, the input CAD geometry is simplified to adapt
its shape to the simulation objectives and the mesh generation requirements. If this
Table 1.1: Categories of Finite Elements for structural analyses:
• 1D-element (beam): long and slender sub-domain having two dimensions (l1, l2) that are small compared to the third one (L), i.e., l1, l2 << L. These two dimensions define the beam section parameters;
• 2D-element (shell, plate, membrane): thin sub-domain having one dimension (the thickness e) which is small compared to the other two (l1, l2), i.e., e << l1, l2. This dimension defines the thickness parameter;
• 3D-element (volume): sub-domain without any specific morphological property that must be processed with a three-dimensional mechanical behavior.
simplification is carried out properly, it not only leads to a good mesh but this mesh is also obtained quickly. This simplification phase incorporates shape transformations
and all their inherent issues are discussed in Section 1.5.
Finite Element Choice and families
When setting up a simulation model, the choice of finite elements is essential. Each
finite element has an approximation function (a polynomial function) which has to approximate the desired solution locally as well as possible. As explained in Section 1.4.1, it is the engineer who chooses the type of finite element in accordance with the prescribed simulation
objectives. There are many types of finite elements to suit various applications and
their selection is conducted during the early phase of the CAD model pre-processing.
It is a matter of compromise between the geometry of the components, the desired
accuracy of the simulation results as well as the computation time required to reach
this accuracy.
Table 1.1 presents the main categories of finite elements classified in accordance
with their manifold properties (see Section 1.2.1).
Idealized elements
Based on the shell theory of Timoshenko [TWKW59], specific finite elements are
available in CAE software to represent a thin volume, e.g., shell elements. These elements
can significantly reduce the number of unknowns in FE models, leading to
a shorter computation time compared to volume models. Also, using shell elements rather than volume ones gives access to different mechanical parameters: section rotation and stress distribution in the thickness are implicitly described in the element.
Rather than discretizing a volume into small volume elements, it is represented by its
medial surface (see Table 1.1 2D-element). The thickness becomes a numerical parameter
associated with the element. Long and slender sub domains can be processed
analogously. A beam element is well suited to represent these volume sub domains
using an equivalent medial line (see Table 1.1 1D-element). From the sections of these
volumes, their inertia parameters are extracted and they become numerical parameters
assigned to the beam elements.
Such elements imply a dimensional reduction of the initial volume, a 1-dimensional
reduction for shells and 2-dimensional reduction for beams. This modeling hypothesis
is called idealization. In a CAD-CAE context, idealization refers to the geometric transformation converting an initial CAD solid into an equivalent medial surface or medial line which handles the mechanical behavior of a plate, a shell or a beam, respectively. This geometrically transformed model is called an idealized model. An example is given in Figure 1.11c. Idealized sub-domains are particularly suited to aeronautical
structures, which contain lots of long and thin components (panels, stringers,
. . . ). Using idealized representations of these components can even become mandatory
to enable large assembly simulations because software license upper bounds (in terms of
number of unknowns) are exceeded when these components are not idealized. However,
Section 1.5.3 illustrates that the practical application of such an idealization process
is not straightforward. The sub-domain candidates for idealization are subjected to physical hypotheses:
• The simulation objectives must be compatible with the observed displacements
or stress field distributions over the entire idealized sub-domains, i.e., there is no
simulation objective related to a local phenomenon taking place in the thickness
or section of an idealized domain;
• The sub-domains satisfy the morphological constraints of the idealization hypotheses, e.g., a component thickness must be at least 10 times smaller than the other two dimensions of its corresponding sub-domain (a toy version of this morphological test is sketched below).
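The following illustrative Python sketch gives such a toy version, with an arbitrary slenderness threshold of 10; real decisions also account for the local shape, the boundary conditions and the simulation objectives.

def idealization_candidate(dims, slenderness=10.0):
    """Classify a sub-domain from its three characteristic dimensions (rule of thumb).

    Returns 'beam' when the two smaller dimensions are both at least `slenderness`
    times smaller than the largest one, 'shell' when only the smallest dimension is
    at least `slenderness` times smaller than the other two, 'volume' otherwise.
    """
    a, b, c = sorted(dims)                     # a <= b <= c
    if b * slenderness <= c:                   # both a and b are small w.r.t. c
        return "beam"                          # -> medial line + section parameters
    if a * slenderness <= b:                   # only a is small w.r.t. b and c
        return "shell"                         # -> medial surface + thickness
    return "volume"                            # no dimensional reduction

print(idealization_candidate((2.0, 150.0, 400.0)))   # shell (e.g. a panel)
print(idealization_candidate((5.0, 6.0, 300.0)))     # beam  (e.g. a stringer)
print(idealization_candidate((40.0, 50.0, 60.0)))    # volume (e.g. a thick fitting)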
Material data, loads and boundary conditions
On top of the definition of a mesh geometry and its associated physical properties,
e.g., sections, thickness, inertias, the FEM requires the definition of material parameters,
loads and boundary conditions.
Material data are associated with each finite element in order to generate the global
stiffness matrix of the equivalent discretized sub domains representing the initial CAD
model. The material properties (homogeneity, isotropy, linearity, . . . ) are themselves
inherent to the model of constitutive law representative of the component’s mechanical
behavior. In case of a component made of composite material, the spatial distribution
of the different layers of fibers should be carefully represented in the meshed model of
this component.
Loads and boundary conditions are essential settings of a mechanical simulation
to describe the mechanical effects of other components on the ones of interest. Consequently,
loads and boundary conditions are also part of the mechanical simulation
pre-processing. A load can be a point force applied at a finite element node, a
pressure distributed over the surface of a set of finite elements or even a force field,
e.g., gravity force. Similarly, boundary conditions have to be attached to a particular
set of nodes. Boundary condition settings interact with idealization processes (see
Section 1.5.3), e.g., a force acting on an elongated side of a long slender volume is
applied to a linear sequence of nodes of the idealized equivalent model defined as a
beam model. Consequently, the boundary condition is also dimensionally reduced. In
practice, an engineer defines the load and boundary condition areas over a component using partitioning operators prior to meshing the component.
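As a minimal sketch of this pre-processing step (illustrative Python; an actual FE package computes consistent nodal loads from the element shape functions), a uniform pressure applied to a set of element faces can be lumped into forces on the corresponding node set:

def nodal_forces_from_pressure(pressure, face_areas, face_nodes):
    """Lump a uniform pressure into nodal forces (simplistic equal split per face).

    pressure:   pressure value applied on a set of element faces.
    face_areas: list of face areas.
    face_nodes: list of node-id tuples, one tuple per face.
    Returns {node_id: force} for the loaded node set.
    """
    forces = {}
    for area, nodes in zip(face_areas, face_nodes):
        share = pressure * area / len(nodes)       # equal split among the face nodes
        for n in nodes:
            forces[n] = forces.get(n, 0.0) + share
    return forces

# Two quadrilateral faces sharing nodes 2 and 3, loaded by a 0.5 MPa pressure.
loads = nodal_forces_from_pressure(0.5, [100.0, 100.0],
                                   [(0, 1, 2, 3), (2, 3, 4, 5)])
print(loads)                                       # shared nodes receive two contributions
boundary_nodes = {0, 1, 4, 5}                      # node set for a clamped boundary condition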
1.4.3 FE simulations of assemblies of aeronautical structures
The FEM issues have been extensively addressed for standalone components and integrated
in a PDP. However, the FE simulation target is now evolving toward assembly
structures, which are under focus in the next section.
An assembly can be regarded as a set of components interacting with each other
through their interfaces. These interfaces contribute to mechanical functions of components
or sub-assemblies [BKR∗12, KWMN04, SLF∗13]. An assembly simulation model
derives from shape transformations interacting with these functions to produce a mechanical
model containing a set of sub domains discretized into FEs and connected
together to form a discretized representation of a continuous medium.
Interactions between sub domains in assembly models and associated hypotheses
An assembly simulation model is not just a set of meshed sub domains positioned
geometrically in a global coordinate system. These sub domains must be connected
to each other to generate global displacement and stress fields over the assembly. To
process every assembly interface (see Section 1.3.2), the user should decide which mechanical
behavior to apply. Connections between components through their interfaces
can be of type kinematic or physical and associated with physical data (stiffness, friction
coefficient, . . . ) and material parameters, as necessary. The selection of connector types is subject to the user's hypotheses regarding the relative behavior of the sub-domains
representing the components, e.g., motion and/or relative interpenetration. Here, a
sub domain designates either an entire component or a subset of it when it has been
Table 1.2: Connector entities available in CAE software, classified by relative motion and interpenetration between sub-domains:
• With relative motion, with interpenetration — deformable junction models: used to model complete mechanical connections with deformable connector elements, i.e., springs, dampers, . . . (e.g., a deformable fastener);
• Without relative motion, with interpenetration — rigid junction models: used to model rigid connections, i.e., ball joints, welds, rivets, bolts, . . . ;
• With relative motion, without interpenetration — normal and tangential contact: used to model the normal and tangential stresses (friction) transmitted between two solids in contact during the simulation;
• Without relative motion, without interpenetration — kinematic constraints: used to model relationships expressed as displacements/velocities between nodes, e.g., tie constraints, rigid body, . . .
idealized. The connection types are synthesized in Table 1.2.
The introduction in a FEA of relative motion between components (contact conditions) considerably increases the complexity of the analysis. Indeed, contact is not a linear phenomenon and requires a specific nonlinear computational model, which increases the simulation time. Setting up a contact is a strong hypothesis,
which leads to the definition of the potential contact areas on both components. The
sets of FEs in contact must be carefully specified. On the one hand, they should contain
sufficient elements, i.e., degrees of freedom, to cover the local phenomenon while
limiting the interpenetration between meshes. On the other hand, they should not
contain too many elements to avoid increasing unnecessarily the computation time.
During the early design phases, in addition to idealized models of components, it
is common to perform simulations using simplified representation of junctions. In this
case, the junction simulation objectives aim at transferring plate loads throughout the
whole assembly and FE beam elements are sufficient to model the bolts’ behavior.
In this configuration, the whole group of components taking part in the junction is replaced by a unique idealized model (see Figure 1.12). When applied to a FEA of
large aeronautical structures, these models are called FE connections with fasteners
and they are widely used to integrate component interactions with or without contact
conditions. A fastener connection may be applied either to mesh nodes or may be
Figure 1.12: Example of a FE fastener simulating the behavior of a bolted junction using beam elements.
mesh-independent, i.e., a point-to-point connection defined between surfaces prior
to the mesh generation process.
Interactions between simulation objectives and the simulation model preparation
process
Simulation objectives drive the shape transformations of CAD solids and interact
with the simulation hypotheses to model connections between components. During a
PDP, simulations may be used at various steps of a design process to provide different information about the mechanical behavior of components and sub-systems. Based on
Troussier’s [Tro99] classification, three simulation objectives are taken as examples in
Table 1.3 to illustrate how simulation objectives influence the idealization process and
the models of interactions between components as part of different simulation models.
As an illustration of the influence of simulation objectives on the generation of
different simulation models, Figure 1.13 presents two FE models derived from the
same assembly structure of Figure 1.6:
• A simplified model used at a design stage of pre-dimensioning and design choices
(see Figure 1.13a). The simulation objective is to estimate globally the load
transfer between plates through the bolted junctions and to identify the critical
junctions. This model contains idealized components with shell FE in order to
reduce the number of degrees of freedom. The junctions are modeled with FE
fasteners containing beam elements and a specific stiffness model, i.e., Huth's law [Hut86]. This model contains 145 000 degrees of freedom and solving it takes 15 minutes, which allows the engineer to test various bolted junction layouts, part thicknesses and material characteristics;
• A full 3D FEM to validate design choices and check conformity with the certification process prior to physical testing (see Figure 1.13b). The simulation objectives include the validation of the load transfer distribution among the bolted junctions and the determination of the admissible extreme loads throughout the structure. To adapt the FE model to these simulation objectives while representing
Table 1.3: Examples of interactions or even dependencies between simulation objectives and interfaces as well as component shapes:
• Pre-dimensioning and design choices — simulation objectives: determine the number of junctions, a component thickness or material, . . . ; internal connections (interfaces): physical junction simplified, no contact (rivet and pin models associated to fasteners); components' shape: large number of components, idealized thin parts represented as shell models; simulation model: linear.
• Validation of mechanical tests — simulation objectives: analyze the distribution of the stress field in a structure and locate possible weaknesses; internal connections (interfaces): physical junction simplified or use of a volume patch model, contact interactions between components; components' shape: simplified (shell models) for large assemblies, volume or mixed dimensional models accepted for a rather small number of components; simulation model: linear or nonlinear.
• Contribution to phenomenon understanding — simulation objectives: understand the behavior of the structure to correlate with results after physical tests; internal connections (interfaces): complete physical junction, use of a volume model with contact interactions; components' shape: small number of components, complete volume model; simulation model: nonlinear.
the physical behavior of the structure, an efficient domain decomposition
approach [CBG08, Cha12] uses a coarse 3D mesh (tetrahedral FE) far enough
from each bolted junction and a specific sub domain around each bolted junction
(structured hexahedral mesh) where friction and pretension phenomena are part
of the simulation model. Here, the objective is not to generate a detailed stress
distribution everywhere in this assembly but to observe the load distribution areas
among bolts using the mechanical models set in the sub-domain, i.e., the patch, around each bolt. This model contains 2.1 million degrees of freedom and is solved in 14 hours. Only one such model, corresponding to the physical test, is generated.
Conclusion
This section described the main categories of geometric models used in the FEA of
structures. The simulation objectives drive the generation of the simulation models,
i.e., FE meshes, boundary conditions, . . . , used as input data for solving the FE models.
In addition to each component definition, a FE assembly must integrate connection
models between meshed components. In Section 1.5, the different modeling hypotheses
are analyzed with regard to the geometric transformations applied on the DMU input in
order to obtain a new adapted CAD model that can be used to support the generation
of a FE mesh model.
Figure 1.13: Example of aeronautical FE models: (a) an idealized model with fasteners, (b) a full 3D model with a decomposition of plates around each bolted junction and a fine mesh in the resulting sub domain around each bolted junction.
1.5 Difficulties triggering a time consuming DMU
adaptation to generate FE assembly models
This section aims at illustrating the complexity of the generation of FE models from
DMUs. It highlights the differences between a component's shape in a DMU and the level of abstraction required for a given FEA, especially when the FEA requires an idealization process. This section characterizes and analyzes some specific issues about assembly simulation model preparation and exposes the lack of tools in industrial software, which leads engineers to process manually all the shape transformations and strongly limits the complexity of assemblies that can be processed in a reasonable amount of time.
1.5.1 DMU adaptation for FE analyses
Today, mechanical structures used in mechanical simulations contain a large number of components, each with a complex shape, bound together with mechanical junctions.
In the aeronautic industry, the dimensioning and validation of such structures leads
engineers to face two digital challenges:
• The formulation of mechanical simulation models, as developed in Section 1.4,
that can simulate the mechanical behavior of a structure, leads to the components' dimensioning as well as the validation of the joint technologies selected (bolting,
welding, riveting). During this phase, the engineers have to determine the most
adapted simulation model regarding the physical phenomena to observe. They
need to set up a FEA and its associated simulation hypotheses that produce the
FE model which best meets the simulation objectives with the simulation software
environment and technologies available. In practice, a simulation engineer
supervises the DMU extraction processes to specify the components to be extracted
and/or those having negligible mechanical influences with respect to the
simulation objectives. Yet, this assessment is qualitative and is strongly dependent
upon the engineer's know-how. Another issue about data extraction lies in the component updates during the PDP. Any geometrical change of a DMU
component has to be analyzed by the simulation engineer. Due to the tedious
interactive transformations required, a trade-off has to be reached between the
time required for the shape update in the FE model and the mechanical influence
of the component with respect to the simulation objectives. Here, we face
a qualitative judgment;
• The generation of appropriate component shapes from a DMU to support the generation of simulation models. As explained in Section 1.1, the DMU stands for
the geometric reference of a product definition. Through the PLM software, engineers typically have access to DMUs containing the geometry of the 3D assembly
defining the product and additional information, essentially about the material
properties of each component. However, the extracted DMU representation is not
directly suited for numerical FE simulations. Shape transformations are mandatory because designers and mechanical engineers work with different component shapes, so that a DMU cannot directly support the mesh generation
of structural analysis models. These models must meet the requirements
of the simulation hypotheses, which have been established when setting up the
simulation objectives and the specifications of the mechanical model as part of
the FEA. The component shapes generated for the FEA have to be adapted to
the level of idealization derived from the specifications of the desired mechanical
model, the shape partitioning required for the application of the boundary conditions
and loads as well as the level of details of the shape with respect to the FE
size required when generating the FE mesh. During the generation of mechanical
assembly models, the engineer must also take into account the total number of
components, the representation of multiple interfaces between components and a
higher level of idealization and larger details than for standalone components, to
produce coarse enough assembly models.
To increase the scope of physical assembly simulations, these two challenges lead engineers to use, from a geometric point of view, models with simplified 3D representations based on idealized shells rather than solids and, from a computational mechanics point of view, models with specific component interface models.
These targets require specific treatments during the preparation process of a simulation.
Now, the purpose is to describe the major difficulties encountered by engineers
during the preparation of assembly simulation models.
1.5.2 Interoperability between CAD and CAE and data consistency
The first difficulty to generate assembly simulation models derives from the interoperability
between the CAD and CAE systems. CAD tools were initially developed in the 1960s to help designers model solids for applications such as machining or free-form surfaces. CAD has evolved along with CAM (Computer Aided Manufacturing),
driving the functionalities of CAD software. However, simulation software has evolved
independently. CAD systems do not support a full native integration of simulation
preparation modules. The current practice is to export a DMU to subcontracting
companies in charge of the simulation pre-processing, which themselves use specialized
CAE software to read and transform the CAD components' geometry. Each of these two software categories (CAD and CAE) efficiently supports its key process. CAD software are efficient to robustly and intuitively modify B-Rep solids and to generate large assembly models, but they contain basic meshing strategies and most of them are not able to model non-manifold objects. CAE software are dedicated to simulation processes; they provide capabilities to describe non-manifold geometry (useful for idealized models) but offer limited solid modeling capabilities. They incorporate robust meshing tools
(with topological adaption capabilities) and extensive capabilities to describe contact
behaviors, material constitutive laws, . . . . However, CAE software relies on a different
geometric kernel than CAD, which breaks the link between them and leaves open the
needs for shape transformation operators.
Also, a transfer of a component from a CAD to a CAE environment has a severe
impact on the transferred information. The geometry has to be translated during its import/export between software packages that use different data structures and operators. This
translation can be achieved through a neutral format like STEP (Standard for The
Exchange of Product model data) [ISO94, ISO03]. However, this translation may lead
to solid model inconsistencies resulting from different tolerance values used in the respective
geometric modeling kernels of CAD and CAE software. These inconsistencies
may prevent the use of some transformation operators, involving manual corrections.
Additionally, the coherence of the input assembly data is crucial. An assembly containing
imprecise spatial locations of components and/or components shapes that do
not produce consistent CIs (see Section 1.3.2) between components or even the non existence
of a geometric model of some components (such as shim components which are
not always designed as illustrated in Figure 1.14) implies their manual repositioning or
even their redesign to meet the requirements of the simulation model. In the proposed
Figure 1.14: Illustration of a shim component which does not appear in the DMU model. Shim components are directly manufactured when structural components are assembled.
approach, the input DMU is assumed to be free of the previous inconsistencies and
therefore, it is considered as coherent.
1.5.3 Current operators focus on standalone components
To transform the shape of an initial B-Rep CAD model of a standalone component
into a new one as required for its simulation model, the mechanical engineer in charge
of the simulation pre-treatment sequentially applies different stages of shape analysis and geometric transformations. His, resp. her, objective is to produce a new CAD
model that can support the mesh generation process. This mesh must be consistent
with respect to the simulation objectives and produced in a reasonable amount of time.
Based on the simulation objectives reduced to this component, the engineer evaluates qualitatively and a priori the interactions between its boundary conditions and its areas of simulation observation, e.g., areas of maximum displacements or maximum stresses, to define whether or not some sub-domains of this component should be suppressed or idealized. Currently, engineering practices iteratively apply interactive shape
transformations:
1. Idealizations, which are the starting transformations, because they are of highest
shape transformation level since they perform manifold dimension reductions;
2. Detail removal comes next, with topological and skin detail categories [Fin01] that can also be grouped together under the common concept of form feature;
3. Mesh generation requirements leading to solid boundary and volume partitioning are the last step of shape transformations; they can be achieved with the so-called 'virtual topology' operators or, more generally, meshing constraints [She01, FCF∗08]. A toy ordering of these three stages is sketched below.
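The ordering of these three stages can be caricatured with the following Python sketch, in which every operator is a placeholder acting on a plain dictionary; none of these functions corresponds to an existing API, they only mirror the sequence idealization, detail removal, partitioning.

def idealize(model):
    # 1. Dimensional reduction: replace a thin solid by a mid-surface representation
    #    (placeholder rule keeping the smallest dimension as the thickness parameter).
    return {**model, "representation": "mid-surface", "thickness": model["dims"][0]}

def remove_details(model, min_size):
    # 2. Defeaturing: drop form features smaller than the target finite element size.
    kept = [f for f in model["features"] if f["size"] >= min_size]
    return {**model, "features": kept}

def partition_for_meshing(model, bc_zones):
    # 3. Virtual topology / partitioning: record the boundary regions needed to apply
    #    loads and boundary conditions before meshing.
    return {**model, "partitions": list(bc_zones)}

panel = {"dims": (2.0, 150.0, 400.0), "representation": "solid",
         "features": [{"name": "fillet", "size": 1.0}, {"name": "hole", "size": 12.0}]}
prepared = partition_for_meshing(remove_details(idealize(panel), min_size=5.0),
                                 bc_zones=["clamped_edge", "loaded_edge"])
print(prepared["representation"], [f["name"] for f in prepared["features"]])
# mid-surface ['hole']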
Figure 1.15: Illustration of a manual process to generate an idealized model: (a) initial solid superimposed with its idealized model, (b) iterative process using face pairing identification and mid-surface extensions to connect mid-surfaces.
Commercial software packages already provide some of these operators to adapt DMUs to CAE processes, but they are restricted to simple configurations of standalone components. Fewer software tools, like Gpure [GPu14], offer capabilities to process specific DMU configurations using large facetted geometric models. Component shape transformations, which are the current target of high-level operators, are reduced to manual interactions to apply defeaturing operations on CAD parts, such as blend removal [ZM02, VSR02] and shape simplifications [LAPL05], or to directly remove features on polyhedral models [MCC98, Tau01, LF05]. In all cases, the flow of interactions is monitored by the engineer. This results in very tedious and time-consuming tasks requiring a fair amount of resources.
When an idealization is required, engineers can create the resulting mid-surface with a manual and qualitative identification of face pairs [Rez96] or using a medial axis surface generation process [ABD∗98, AMP∗02]. However, the information in between idealizable areas is not available and the engineer has to create the connections manually by extending and trimming mid-surfaces, which is highly tedious and relies also on his, resp. her, mechanical interpretation. Figure 1.15 illustrates a manual idealization process where the user identifies face pairs, then generates mid-surfaces and manually creates new faces to connect the medial faces together while locating the idealized object as much as possible inside the initial volume. Applied to complex shapes, e.g., machined parts of aircraft structures, this process flow is highly time-consuming as a consequence of the numerous connection areas required, and it can hardly be automated because slight
shape modifications strongly influence the process flow. Creating idealized domains in areas where face pairing cannot be applied, rather than leaving a volume domain in these areas, is a common industrial practice to reduce the number of degrees of freedom of a simulation model and to reduce the use of mixed-dimensional models, thus avoiding transfers between shell and volume finite elements, which are not recognized as good mechanical models. A volume mesh in connection areas is only beneficial if it brings a gain in accuracy, pre-processing or simulation time. Today, generating volume meshes in connection areas requires many manual interventions because these volume shapes can be quite complex. Often, the main difficulty is to partition the initial object into simple volumes to generate structured meshes.
The lack of robust and generic operators results in a very time consuming CAD
pre-processing task. These geometric operators are analyzed in detail in Chapter 2 to
understand why they are not generic and robust enough.
1.5.4 Effects of interactions between components on assembly transformations
The amount of shape transformations to be performed increases significantly when processing an assembly. The engineer has to reiterate numerous similar interactive operations on series of components, and the number of such components can be large.
Unlike a standalone component having no adjacent component, an assembly model must be able to transmit displacements/stresses from one component to another. Therefore, the preparation of an assembly model, compared to that of a standalone component, also implies the preparation of the geometric interfaces between its components. Consequently, to obtain a continuous medium, the engineer must be able to monitor the stress distribution across components either by adding kinematic constraints between components or by prescribing a non-interpenetration hypothesis between them through physical contact models. Thus, modeling hypotheses must be expressed by the engineer at each component interface of an assembly.
Today, the interactive preparation of the assembly depicted in Figure 1.13 requires 5 days of preparation to produce either an idealized model or a model based on simplified solids. When looking at this model, some repetitive patterns of groups of components can be observed. Indeed, these patterns are 45 bolted junctions that can be further subdivided into 3 groups of identical bolted junctions, i.e., of the same diameter. Each group can be further subdivided in accordance with the number of components tightened. The components forming each of these attachments contribute to the same function: holding the plates belonging to the wing and the fuselage tightly in position and transferring forces between them. While a standalone component contributes to a function, an assembly is a set of components that fulfill several functions between them. During an interactive simulation preparation process, even if the engineer has visually identified repetitive
configurations of bolts, he, resp. she, has to transform each component of each bolt successively. A property by which some components share similar interactions with others and could be grouped together because they contribute to the same function cannot be exploited, because there is no such functional information in the DMU and the geometric models of the components are not structured with the appropriate boundary decomposition to set up the connection with their function, e.g., imprints of contact areas are not generated on each component boundary and connected to a function. Thus, the engineer has to repeat similar shape transformations for each component. However, if the geometric entities contributing to the same function were available, grouped together and connected to their function before applying shape transformations, the preparation process could be improved. For instance, bolted junctions would be located and transformed directly into a fastener model through a single operator. Beyond repetitive configurations, it is the impossibility of identifying and locating the components and geometric entities forming these repetitive patterns that reduces the efficiency of the preparation process.
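Assuming such functional information were available, the grouping idea can be sketched as follows (Python); the component dictionaries and the fastener record are hypothetical illustrations, not the data model used in this thesis.

from collections import defaultdict

def group_by_function(components):
    # Classes of components sharing the same function and nominal diameter,
    # e.g., the groups of identical bolted junctions mentioned above.
    groups = defaultdict(list)
    for c in components:
        groups[(c["function"], c["diameter"])].append(c)
    return groups

def to_fastener(group):
    # A single transformation applied once per class instead of once per component.
    return {"type": "fastener", "diameter": group[0]["diameter"],
            "replaces": [c["id"] for c in group]}

components = [
    {"id": "bolt_1", "function": "bolted_junction", "diameter": 6},
    {"id": "nut_1", "function": "bolted_junction", "diameter": 6},
    {"id": "bolt_2", "function": "bolted_junction", "diameter": 8},
]
print([to_fastener(g) for g in group_by_function(components).values()])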
Processing contacts
Hypothesizing the non-interpenetration of assembly components produces non-linearities and discontinuities in the simulation model. In this case, the engineer must locate the potential areas of interpenetration during the analysis. Due to the lack of explicit interfaces between components in the DMU, all these contact areas must be processed interactively. At each contact interface, the analyst has to manually subdivide the boundary of each component to generate their geometric interface and then assign mechanical parameters, such as a friction coefficient, to this interface. In the use-case represented in Figure 1.6, every bolted junction contains between 5 and 7 geometric interfaces; over the 45 junctions, this amounts to 320 potential contact conditions to define interactively. To avoid these tedious operations, in a context of non-linear computations, there is a real need to automate the generation of contact models in assembly simulations. This automation can be applied to a DMU with the:
• Determination of geometric interface areas between components, i.e.,
– Localize geometric interfaces between components likely to interpenetrate
during the simulation;
– Estimate and generate the extent of contact areas over component boundaries.
Meshed areas of the two components can be compatible or not depending
on the capabilities of CAE software;
• Generation of functional information to set the intrinsic properties of contact models, i.e.,
– Define the friction parameters;
Figure 1.16: Example of contact model for a FE simulation.
– Define the kinematic relations between component meshes in contact areas with respect to the dimensional tolerances between surfaces. Figure 1.16 exemplifies a contact between a shaft and a bearing. Commonly, a DMU exhibits CIs [SLF∗12, SLF∗13] where components' representations can share the same nominal diameter while fulfilling different functions according to their fitting (clearance, loose fit, snug fit), thus requiring different settings in their respective FE contact models.
As a result, DMUs do not contain enough information to automate the generation of contact models. FE models need geometric and functional information about component interfaces to delineate contact areas as well as to assign contact model parameters.
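A first, purely geometric step of such an automation can be sketched as follows (Python with numpy): components whose axis-aligned bounding boxes are closer than a clearance tolerance are reported as candidate contact interfaces, to be refined and parameterized afterwards. The bounding boxes and the tolerance value are illustrative assumptions.

import numpy as np
from itertools import combinations

def aabb_gap(box_a, box_b):
    # Distance between two axis-aligned bounding boxes given as (min, max) corners.
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    gap = np.maximum(np.maximum(min_a - max_b, min_b - max_a), 0.0)
    return np.linalg.norm(gap)

def candidate_interfaces(boxes, tol=0.01):
    return [(i, j) for (i, a), (j, b) in combinations(boxes.items(), 2)
            if aabb_gap(a, b) <= tol]

boxes = {
    "plate":  (np.zeros(3), np.array([100.0, 50.0, 5.0])),
    "bolt_1": (np.array([10.0, 10.0, 5.0]), np.array([16.0, 16.0, 30.0])),
    "rib":    (np.array([200.0, 0.0, 0.0]), np.array([250.0, 50.0, 5.0])),
}
print(candidate_interfaces(boxes))             # [('plate', 'bolt_1')]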
Contribution of component functions to the simulation preparation
To automatically handle these repetitive configurations related to components contributing
to the same function in an assembly, the simulation preparation process must
be able to identify these functions from the input DMU. Currently, the engineer is
unable to automate these repetitive tasks because he, resp. she, has no information
readily identifying connections in the assembly.
The simulation models chosen by the engineer in a CAE library to replace the junctions are geometrically simple, and basic interactive operators are available to achieve the necessary shape transformations. As shown in Figure 1.12, an idealized model of a bolted connection modeled with a fastener consists of a set of points connected by line elements to describe the fastener. Using a mesh-independent fastener, the points representing the centers of the bolt holes in the tightened components do not even need to coincide with a surface mesh node. These idealization transformations are rather simple locally, given the component shapes. Hence, the challenge is neither the geometric complexity nor the mesh generation. Indeed, it lies in the term 'bolted junction': identifying this set of components and generating geometric relationships between areas of their boundaries. The issue consists in determining the function of each component in an assembly in order to group the components in accordance with identical functions and to make decisions about modeling hypotheses (simplification, idealization) on component shapes associated with these identified functions.
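The idealized fastener data evoked above can be sketched as follows (Python); the field names are illustrative and do not correspond to a specific FE solver format.

from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Fastener:
    diameter: float
    hole_centres: List[Point]                  # one point per tightened plate
    connectors: List[Tuple[int, int]] = field(default_factory=list)

    def build_connectors(self):
        # One line element between consecutive hole centres; being mesh independent,
        # these points need not coincide with shell mesh nodes.
        self.connectors = [(i, i + 1) for i in range(len(self.hole_centres) - 1)]
        return self.connectors

f = Fastener(diameter=6.0, hole_centres=[(0, 0, 0), (0, 0, 5), (0, 0, 9)])
print(f.build_connectors())                    # [(0, 1), (1, 2)]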
Conclusion
The shape transformations taking place during an assembly simulation preparation process interact with the simulation objectives, hypotheses and functions attached to components and to their interfaces. Improving the robustness of the geometric operators applied during simulation preparation, and making them applicable not only to components but also to assemblies, is a first objective to reduce the amount of time spent on assembly pre-processing.
1.6 Conclusion and limits of current practices about manual DMU adaption for FE assembly model generation
Currently, configuring rather complex assembly models for simulations is difficult to handle within the time scale prescribed by an industrial PDP. The pre-processing of CAD models derived from DMUs to produce FE models is far too long compared to the simulation time; it may represent 60% of the whole simulation process (see Section 1.4.1). Consequently, some simulations are not even addressed because their preparation time cannot fit within the schedule of a PDP, i.e., the simulation results would be available too late.
Because the shape of CAD models obtained from the engineering design processes is adapted neither to the simulation requirements nor to the simulation solvers, shape transformations are mandatory to generate the simulation models. Consequently, DMUs cannot directly support the preparation process of structural analysis models. Today, the operators available in CAD/CAE software allow an engineer to perform either interactive geometric transformations, leading to very tedious tasks, or automated model generation adapted to simple models only or to models containing only a restricted set of form features [TBG09, RG03, SGZ10, CSKL04, LF05, LAPL05, ABA02, SSM∗10, DAP00]. Unfortunately, these operators are still not generic enough to be adapted to analysts' needs, and a largely automated generation of complex component simulation models still raises numerous difficulties, especially when component idealizations must be performed.
To generate assembly simulation models, in addition to transforming its components, the engineer needs to generate all the connections between them. Simulation models for assemblies strongly need geometric interfaces between components to be able to set up boundary conditions between them and/or meshing constraints, e.g., to satisfy conformal mesh requirements. Studying the content and structure of an assembly model, as available in a PDMS, reveals that product assemblies or DMUs are reduced to a set of components located in 3D space without geometric relationships between them. The information about the interfaces between components is generally very poor or nonexistent, i.e., real contact surfaces are not identified or are not part of each component boundary. As a consequence, it is common practice for engineers to generate the connections between components interactively, which is error prone due to the large number of repetitive configurations such as junction transformations.
Finally, processing complex DMUs for the simulation of large assembly models is a real challenge for aircraft companies. The DMUs used in large industrial groups such as Airbus Group consist of hundreds of thousands of components. Thus, engineers in charge of such simulations can hardly consider applying the usual methods involving the manual processing of all components as well as their interfaces. To meet the needs of large assembly simulation models, improvements in processing DMUs are a real challenge in aircraft companies and it is mandatory to robustly speed up and automate, as much as possible, the required DMU transformations.
1.7 Research objectives: Speed up the DMU pre-processing to reach the simulation of large assemblies
To improve the simulation preparation process of large assembly simulation models,
this thesis aims at defining the principles that can be set up to automate the shape
adaption of CAD models for the simulation of large assembly structures and developing
the associated shape transformation operators. The range of CAD models addressed is
not restricted to standalone components but covers also large assembly structures. The
tasks planned are mainly oriented toward the transformation of 3D geometric models
and the exploitation of their associated semantics for the FEA of structural assemblies
applicable to static and dynamic analyses. The task breakdown is as follows:
• Analyze FE simulation rules to extract and classify modeling criteria related to user-defined simulation objectives;
• Based on CAE discipline rules, specifications and process structure, formalize shape transformation operators to increase the level of automation of component transformations as well as of the transformation of their geometric interfaces;
• Implement and validate idealization operators to transform assembly component shapes and assembly interfaces between components while preserving the semantics of the mechanical behavior intended for this assembly;
• Specify the transformation process monitoring and the methodology contributing to the generation of mechanical (CAE) models exploiting a functionally enriched DMU.
Prior to any automation, a first step outlined in Chapter 2 analyzes in detail the available operators and scientific contributions in the field of data integration and shape transformations for mechanical simulations. The objective is to understand why the current operators and approaches are not robust enough to be applied to aeronautical assemblies. From this analysis, Chapter 3 refines the thesis objectives and exposes a new approach to speed up the shape adaption of CAD assembly models derived from DMUs as needed for FE assembly models. The proposed method is able to adapt a component shape to the simulation objectives and meshing constraints. It incorporates the automation of tedious tasks that are part of the CAD component idealization process, specifically the treatment of connections between idealizable areas. The proposed algorithms, detailed in Chapters 4 and 5, have to be robust, applicable to aeronautical CAD components, and preserve the semantics of the targeted mechanical behaviors. These operators contribute to an assembly analysis methodology, presented in Chapter 6, which generalizes the assembly transformation requirements in order to prove the capacity of the proposed approach to address the generation of large assembly simulation models.
Chapter 2
Current status of procedural shape transformation methods and tools for FEA pre-processing
The transformation of DMUs into structural analysis models requires methods and tools to efficiently adapt the geometric model and its associated information. Therefore, this chapter proposes a review of the current CAD-FEA integration related to data integration and shape transformations. In this review, the procedural transformations of CAD components are analyzed, from the identification of details to the dimensional reduction operations leading to idealized representations. The geometric operators are also analyzed with regard to the problem of assembly simulation preparation. Moreover, this chapter identifies that current geometric operators lack criteria for applying simplification hypotheses.
2.1 Targeting the data integration level
Chapter 1 described the industrial need to reduce the time spent on assembly preparation and pre-processing for FEA; the objective of this chapter is now to understand why the available procedural geometric modeling methods and operators still do not meet the engineers' requirements, leading them to generate their own models interactively.
Different approaches have been proposed for a better interoperability between CAD
and CAE, which can be mainly classified into two categories [DLG∗07, HLG∗08]:
• Integration taking place at a task level: It refers to the integration of activities of
design and structural engineers, hence it relates to design and FEA methodologies
and knowledge capitalization in simulation data management;
• Data integration level: It addresses data structures and algorithms performing shape transformations on 3D models of standalone components. More generally, these data structures and operators help connect CAD and CAE software.
To support the integration of simulation tasks into a PDP, Troussier [Tro99] explains
that the knowledge involved in the generation of geometric models is not explicitly
formalized. The simulation model definition and generation are based on the collective
knowledge of some structure engineers. Therefore, the objective of CAD/CAE
integration is not only to reduce the pre-processing time but also to decrease the level
of expertise needed to choose and apply the correct transformations to CAD models.
Eckard [Eck00] showed that the early integration of structural simulation in a design process could improve a PDP, leading to a shorter time-to-market, which applies to assembly processing as well as to standalone components.
proposed a specific method of knowledge management used in several interacting activities
within a design process. According to them, structure engineers and designers
collaborate and exchange design information. However, the authors assume that
relationships between dimensional parameters of CAD and simulation models of components
are available, which is not necessarily the case. Additionally, they refer to
configurations where the shapes of components are identical in both the design and
simulation contexts, which is not common practice for standalone components and
hardly applicable to assemblies where the reduction of complexity is a strong issue. To
help structure engineers, Bellenger [BBT08], Troussier [Tro99] and Peak [PFNO98] formalized
simulation objectives and hypotheses attached to design models when setting
up simulations. These objectives and hypotheses are subsequently used for capitalization
and reuse in future model preparations. This integration at a task level underlines
the influence of simulation objectives and hypotheses without setting up formal connections
with the shape transformations required.
Since the industrial problems addressed in this thesis focus on the robust automation of shape transformations, it seems appropriate to concentrate the analysis of prior research on the data integration level. These research contributions can be categorized into:
• Detail removals performed either before or after meshing a component [LF05, LAPL05, FMLG09, GZL∗10];
• Shape simplifications applied to facetted models [FRL00, ABA02];
• Idealizations of standalone components [CSKL04, SRX07, SSM∗10, RAF11, Woo14] using surface pairing or Medial Axis Transform (MAT) operators.
Section 2.2 analyzes the first two categories of shape simplifications and Section 2.3 concentrates on the specific transformation of dimensional reduction, which is widely used to generate assembly FEMs, as illustrated in Section 1.4. Section 2.4 explores morphological approaches such as the geometric analysis and volume decomposition of 3D solids to enforce the robustness of FE model generation. Finally, Section 2.5
addresses the evolution of the procedural simplification and idealization during component
pre-processing from standalone components toward an assembly context.
2.2 Simplification operators for 3D FE analysis
In CAE applications, the removal of details to simplify a component before meshing it
has led to local shape transformations based on B-Rep or polyhedral representations.
These transformations create new geometric entities that can incorporate acceptable
deviations of a FEA w.r.t. reference results. This section analyzes the different operators
and methods aiming at identifying and removing the regions considered as details
on 3D solids.
2.2.1 Classification of details and shape simplification
As explained in Section 1.4, the level of detail of a solid shape required for its FE mesh is related to the desired accuracy of its FEA. The removal or simplification of a sub-domain of a solid is valid when the associated FEA results meet the accuracy constraint. Armstrong and Donaghy [DAP00] and Fine [Fin01] define details as geometric features which do not significantly influence the results of an FE simulation. Starting from this definition, Fine [Fin01] classifies details under three categories:
• Skin details: They represent geometric regions which can be removed without changing either the 3-dimensional manifold property of the solid (see Section 1.2.1) or its topology (see Section 1.2.2). This category includes the removal of fillets, chamfers, bosses, etc.;
• Topological details: This category represents geometric regions which can be removed without changing the 3-dimensional manifold property of the solid, but their removal modifies the solid's topology. For example, removing a through hole changes the topology of the solid and the number of hole-loops in the Euler-Poincaré formula (recalled after this list) decreases;
• Dimensional details: This category represents geometric regions whose removal locally reduces the manifold dimension of the solid along with a modification of its topology. This category is related to the idealization process, where entire solid models can be represented either with surfaces (dimensional reduction of 1), lines (dimensional reduction of 2) or may even be replaced by a point (dimensional reduction of 3).
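For reference, a commonly used statement of the Euler-Poincaré formula for manifold B-Rep solids is

V - E + F - L_i = 2 (S - G),

where V, E and F are the numbers of vertices, edges and faces, L_i the number of inner (hole) loops, S the number of shells and G the genus, i.e., the number of through holes. For instance, a block with one square through hole gives 16 - 24 + 10 - 2 = 2 (1 - 1) = 0, whereas the plain block gives 8 - 12 + 6 - 0 = 2 (1 - 0) = 2: removing the through hole decreases G and suppresses the two hole-loops.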
Within this categorization, Léon and Fine [LF05] define the concept of detail from a physical point of view. According to them, the result of a FEA can be evaluated with
Figure 2.1: Identification of a skin detail related to the accuracy of a FE volume model. With
an idealized model, a skin detail cannot be characterized.
‘a posteriori error estimators’ [EF11, BR78, LL83]. These error estimators characterize the influence of a discretization process, i.e., the FE mesh generation, on the solution of the partial differential equations describing a structure's behavior. However, as explained in Section 1.4.3, the behavior simulation of large assemblies relies heavily on idealized models to reduce the size of simulation models as much as possible and to improve their use during a PDP. In this context, skin and topological details cannot be related to the accuracy of the FEA since the error estimators cannot be applied to shape transformations subjected to a dimensional reduction. Indeed, a volume region which does not satisfy the idealization conditions (see Section 1.4.2) is part of an idealized model but is not dimensionally reduced. Therefore, as illustrated in Figure 2.1, evaluating the physical influence of small volume details using an idealized FEM has no meaning because the notion of discretization process is not meaningful over the entire model. When considering idealizations, there are currently no 'error estimators' to evaluate the influence of the dimensional reductions achieved through these transformations. The definition of skin and topological details has to be discussed and extended in the context of dimensionally reduced models.
Even if this classification cannot address idealized models, the simplification operators have to be studied to determine the geometry they are able to process and the information they are able to provide to reduce the complexity of an idealization process. Effectively, it is important to evaluate to what extent skin and topological simplification operators should be applied prior to dimensional reduction, or whether dimensional reduction should take place first with further simplifications operating on the dimensionally reduced model. Therefore, the next sections detail the principles of the geometric operators identifying and removing details and determine their suitability to interact with a dimensional reduction process. As mentioned in [Fou07], these operators aim at identifying the geometric regions of the 3D object considered as details and then removing them from the object in order to generate a simplified model. Approaches to detail removal can be subdivided into two categories depending on the geometric model
describing the component: those which act on tessellated models1 and those which
modify an initial CAD model.
2.2.2 Detail removal and shape simplification based on tessellated
models
Although a tessellated object is a simplified representation of an initial CAD model, its geometric model is a collection of planar facets, which can be processed more generically than CAD models. Therefore, the operators based on tessellated models are generic enough to cover a large range of geometric configurations. In what follows, shape simplification operators applicable to the object skin are analyzed first; then, particular attention is paid to the Medial Axis Transform (MAT) operator, which extracts an object structure.
Shape simplification
Different approaches have been proposed to simplify the shape of a CAD component
using an intermediate faceted model or modifying a FE mesh of this component. These
approaches can be synthesized as follows:
• Dey [DSG97] and Shephard [SBO98] directly improve the FE mesh quality by eliminating small model features, based on distance criteria compared to the targeted level of mesh refinement. The objective is to avoid poorly-shaped elements and over-densified mesh areas, and the treatments proposed are generic;
• Clean-up operators [JB95, BS96, RBO02] repair the degeneracies of CAD models when they have lost information during a transfer between CAD/CAE environments or when they contain incorrect entity connections. Their main issues are the computational cost of recalculating new geometries more suitable for analysis [LPA∗03] and the ability of the algorithms to process a wide range of configurations. Furthermore, the geometric transformations are inherently small compared to the model size, which may not be the case for simulation details;
• Other methods [BWS03, HC03, Fin01, QO12] generate and transform an intermediate tessellated model derived from the CAD component. Fine [Fin01] analyses this tessellated geometry using a 'tolerance envelope' to identify and then remove skin details. Andujar [ABA02] generates new, topologically simplified models by discretizing the input solid object using an octree decomposition. The advantage of these approaches, dedicated to 3D volume FE, lies in their
1 Tessellated models are referred to here, rather than meshes as commonly used in computer graphics, to distinguish faceted models used in computer graphics from FE meshes, which are subjected to specific constraints for FE simulations. Here, the term mesh is devoted to FE meshes.
Figure 2.2: Illustration of the MAT: (a) in 2D, (b) in 3D.
independence with respect to the CAD design model. These approaches can support a wide variety of shapes while avoiding inherent CAD system issues, i.e., surface connections, tolerances, etc. Nevertheless, shape modifications of the CAD model cannot be taken into account easily and they trigger new iterations of the simplification process;
• Hamri and Léon [OH04] propose an intermediate structure, the High Level Topology, in order to preserve a connection between the tessellated model and the CAD model. As a result, bi-directional mappings can be set between these models, e.g., boundary conditions, B-Rep surface types, etc. However, the shape transformations are still performed on the tessellated model.
Detail removal using the MAT
To identify shape details in sketches, Armstrong [Arm94, ARM∗95] uses the MAT. The MAT was introduced by Blum [B∗67] and represents, in 2D, the shape defined by the locus of the centers of the maximal circles inscribed in a contour (see Figure 2.2a) or, in 3D, by the centers of the maximal spheres inscribed in a solid (see Figure 2.2b). The combination of the centerlines and the diameters of the inscribed circles along these centerlines, respectively the center-surfaces in 3D, forms the skeleton-like representation of the contour in 2D, respectively of the solid in 3D, called the MAT.
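As an illustration of the discrete point of view discussed later in Section 2.3.1, the following sketch (Python with numpy, scipy and matplotlib, assumed available) approximates the 2D MAT of a polygon from the Voronoi diagram of sampled boundary points; it is only a discrete approximation, not an exact MAT computation.

import numpy as np
from scipy.spatial import Voronoi, cKDTree
from matplotlib.path import Path

def approximate_mat_2d(polygon, samples_per_edge=40):
    # Sample the boundary, then keep the Voronoi vertices lying inside the contour:
    # they approximate medial-axis points, and their distance to the boundary
    # samples approximates the maximal inscribed radius.
    pts = []
    for a, b in zip(polygon, np.roll(polygon, -1, axis=0)):
        t = np.linspace(0.0, 1.0, samples_per_edge, endpoint=False)[:, None]
        pts.append(a + t * (b - a))
    pts = np.vstack(pts)
    vor = Voronoi(pts)
    inside = Path(polygon).contains_points(vor.vertices)
    centres = vor.vertices[inside]
    radii = cKDTree(pts).query(centres)[0]
    return centres, radii

rectangle = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 4.0], [0.0, 4.0]])
centres, radii = approximate_mat_2d(rectangle)
print(len(centres), round(radii.max(), 2))     # maximal radius close to the half-thickness 2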
As described in [ARM∗95], the MAT operator is particularly suited to provide simplification operators with geometric proximity information in 3D and to identify geometric details on planar domains. The authors use a size criterion to identify:
• Details in 2D sketches, using the ratio between the length of boundary sketch edges and the radius of the adjacent maximal circle;
• Narrow regions, using an aspect ratio between the length of the medial edge and the maximal disk diameter on this local medial edge. A region is regarded as narrow when this ratio is lower than a given threshold. In addition, the authors refer to
Figure 2.3: Details removal using the MAT and detail size criteria [Arm94].
the principle of Saint-Venant, which relates to the location of boundary conditions, to categorize a region as a detail.
This method demonstrates the efficiency of the MAT in 2D to analyze, a priori, the morphology of sketch contours. It can compare and identify local regions smaller than their neighborhood. Figure 2.3 illustrates the analysis of a 2D sketch with the MAT [Arm94] to identify details to be removed or idealized. Here, the MAT is addressed as a detail removal operator because the manifold dimension of the 2D domain is not reduced. Nevertheless, it can also act as a dimensional reduction operator. An analysis of the pros and cons of the MAT as a dimensional reduction operator is performed in Section 2.3.1.
Operators based on tessellated models may be applied to a large range of configurations
because the input model uses a simple polyhedral definition to represent surfaces
in 3D. These operators are efficient to remove skin details before meshing. Yet, large
modifications of CAD models are difficult to take into account.
2.2.3 Detail removal and shape simplification on 3D solid models
As explained in the introduction of this chapter, simplifying CAD solids before meshing is a way to enable a robust mesh generation and to obtain directly the shape required for a FEA without local adjustments of the FE mesh. Transformations can be classified into two complementary categories: transformations modifying the boundary decomposition of a B-Rep model without changing the model's shape, and transformations modifying the shape as well as its boundary decomposition.
Topology adaption
Virtual topology approaches [SBBC00, She01, IIY∗01, LPA∗03, ARM∗95] have been
developed to apply topological transformations to the boundary of an initial B-Rep
Figure 2.4: Topology adaption of CAD models for meshing [FCF∗08]: (a) CAD model, (b)
Meshing Constraint Topology obtained with the adaption process, (c) Mesh model generated
with respect to Meshing Constraint Topology.
model in order to generate a new boundary decomposition that meets the simulation objectives of this B-Rep model and expresses the minimum constraints required for mesh generation. Virtual topology approaches belong to the first category of transformations. To anticipate the poorly-shaped mesh elements resulting from B-Rep surfaces having a small area, the operations include splitting and merging edges and clustering faces. Anyhow, the objective is to contribute to the generation of a boundary decomposition of a B-Rep model that is intrinsic to the simulation objectives rather than being tied to the decomposition constraints of a geometric modeling kernel. Foucault et al. [FCF∗08] propose a complementary topology structure, called 'Meshing Constraint Topology', with automated adaption operators to enable the transformation of the CAD boundary decomposition into mesh-relevant faces, edges and vertices for the mesh generation process (see Figure 2.4). In addition to the topological transformations (edge deletion, vertex deletion, edge collapsing and merging of vertices), the data structure remains intrinsic to the initial object, which makes it independent of any CAD kernel representation. Topology adaption is an efficient operator before mesh generation and it is available in most CAE software. However, virtual topology operators are neither generic across CAE software nor do they form a complete set of transformations.
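A purely illustrative sketch of one such operation, the virtual clustering of a small face with its largest neighbour, is given below (Python). The face areas, the adjacency relation and the area threshold are hypothetical inputs; real virtual topology operators cover many more transformations.

def cluster_small_faces(areas, adjacency, min_area):
    cluster = {f: f for f in areas}              # each face starts as its own cluster
    for f, area in sorted(areas.items(), key=lambda kv: kv[1]):
        if area < min_area and adjacency.get(f):
            host = max(adjacency[f], key=lambda g: areas[g])
            cluster[f] = cluster[host]           # virtually merge f into its largest neighbour
    return cluster

areas = {"F1": 400.0, "F2": 0.8, "F3": 350.0}
adjacency = {"F1": ["F2"], "F2": ["F1", "F3"], "F3": ["F2"]}
print(cluster_small_faces(areas, adjacency, min_area=5.0))
# {'F1': 'F1', 'F2': 'F1', 'F3': 'F3'}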
Form feature extraction
The main category of solid model simplification is the extraction or recognition of features (holes, bosses, ribs, etc.). Different application domains' requirements lead to a wide variety of feature definitions. Here, a feature is defined as in [Sha95] and refers to a primary geometric region to be removed from a B-Rep object, hence simplifying its shape. The corresponding operators belong to the second category of transformations. The simplification techniques initially define a set of explicit geometric areas identified on an object. Then, specific criteria, for example metrics, are applied to evaluate and select the candidate features to remove. The construction tree resulting from a component's design (see Section 1.3) directly provides features that can be evaluated.
Figure 2.5: Illustration of CAD defeaturing using CATIA: (a) initial CAD model, (b) simplified CAD model with holes, fillets and chamfers suppressed.
However, this approach relies on the availability of this tree, which is not always the
case (see Section 1.5.2 on interoperability). Feature recognition approaches are based
on the fact that the construction tree is not transferred from a CAD to a CAE system,
and they process directly the B-rep model to recognize features.
A reference survey of CAD model simplification covering feature recognition techniques
has been performed by Thakur [TBG09]. For specific discussions on geometric
feature recognition, see Shah et al. [AKJ01]. A particular domain, mostly studied in the 80s-90s, is the recognition of machining features. The methods [JC88, VR93, JG00, JD03] in this field are efficient in recognizing, classifying and removing negative features such as holes, slots or pockets. Han et al. [HPR00] give an overview of the state-of-the-art in manufacturing feature recognition. Machining feature recognition has been pioneered by Vandenbrande [VR93]. Woo et al. [WS02, Woo03] contributed with a volume decomposition approach using a concept of maximal volumes and observed that some of these volumes may not be meaningful as machining primitives. In the field of visualization, Lee et al. [LLKK04] address a progressive solid model generation. Seo [SSK∗05] proposes a multi-step operator, called wrap-around, to simplify CAD components. To reduce the complexity of assembly models, Kim [KLH∗05] uses this operator and proposes a multi-resolution decomposition of an initial B-Rep assembly model for visualization purposes. These operators simplify the parts after detecting and removing small or negative features and idealize thin volume regions using face pairing. Simplification is based on local information, i.e., edge convexity/concavity, inner loops, etc. The obtained features are structured in a feature tree depending on the level of simplification. A wide range of shapes is generated with three basic operators. However, the multi-resolution model is subjected to visualization criteria, which may not produce shape transformations reflecting the application of simulation hypotheses, in general. Lockett [LG05] proposes to recognize specific positive injection molding features. Her method relies on an already generated Medial Axis (MA) to find features from idealized models. However, it is difficult to obtain a MA in a wide range of configurations. Tautges [Tau01] uses size measures and virtual topology to robustly identify geometric regions considered as details, but this approach is limited to local surface modifications.
One common obstacle of feature recognition approaches is the difficulty of setting feature definitions that are general enough to process a large range of configurations. This is often mentioned by authors when features interact with each other, because the diversity of interactions can lead to a wide range of configurations that cannot be easily identified and structured. In addition, in most cases, the definition of the geometric regions considered as features is based on a particular set of surfaces, edges and vertices extracted from the boundary of the B-Rep object. The assumption is that the detection operations based on the neighboring entities of the features are sufficient to construct both the volume of the features and the simplified object. However, the validity of this assumption is difficult to determine in a general setting, e.g., additional faces may be required to obtain a valid solid, which fairly reduces the robustness of these approaches.
Blend removal
The removal of blends can be viewed as a particular application of feature recognition. Automatic blend feature removal, and more precisely finding sequences of blend features in an initial shape, is relevant to FE pre-processing and characterizes shape construction steps. Regarding blend removal, Zhu and Menq [ZM02] and Venkataraman [VSR02] detect and classify fillet/round features in order to create a suppression order for removing these features from a CAD model. CAD software already proposes blend removal operators and it is these operators that are considered in this thesis (see Figure 2.5 for an example of a CAD component defeaturing result). In general, blend removal can be viewed as a first phase to prepare the model for further extraction and suppression of features.
2.2.4 Conclusion
This section has shown that detail removal essentially addresses 3D volume simulations of standalone components. The suitability of these simplification operators for assembly structures has not been investigated up to now. Additionally, the approaches to the automation of detail removal have not been developed for idealization. The definition of details essentially addresses volume domains and refers to the concept of discretization error, which can be evaluated with a posteriori error estimators. As a result, the relationship between detail removal and idealization has not been addressed. Approaches based on tessellated models produce a robust volume equivalent model, but incorporating them into idealization processes, which often refer to B-Rep NURBS models, does not seem an easy task. Many feature-based approaches exist but they are not generic enough to process a wide range of shapes.
The operators presented in this section can be employed in a context of CAD to CAE adaption, provided the areas being transformed are clearly delineated. The difficulty is to determine the relevant operator, or sequence of operators, in relation to the user's simulation objective. For now, only operators simplifying surfaces of 3D objects have been presented; in the following section, idealization operators introduce further categories of complexity with the dimensional reduction of standalone components.
2.3 Dimensional reduction operators applied to standalone
components
As explained in Section 1.4, to generate idealized models, operators are required to reduce the manifold dimension of 3D solids to surfaces or lines. Different approaches have been proposed to automatically generate idealized models of components for CAE. These approaches can be divided into two categories:
• Global dimensional reduction. These approaches refer to the application of a geometric operator over the whole 3D object, e.g., using the MAT, which can be globally applied to this object to generate an overall set of medial surfaces;
• Local mid-surface abstraction. Mid-surface abstraction addresses the identification of local configurations characterizing individual medial surfaces (using face pairs, deflation) on CAD models and, subsequently, handles the connection of these medial surfaces to generate an idealized model.
2.3.1 Global dimensional reduction using the MAT
Armstrong et al. [DMB∗96, ABD∗98, RAF11] use the MAT to generate idealized models from 2D sketches and 3D solids. To identify geometric regions in shell models which may be represented in an FE analysis with a 1D beam, Armstrong et al. [ARM∗95, DMB∗96, DAP00] analyze a skeleton-based representation generated with the MAT. Although the MAT produces a dimensionally reduced geometry of an input 2D contour, local perturbations (end regions, connections) need to be identified and transformed to obtain a model suitable for FEA. As for the detail identification of Section 2.2, an aspect ratio (ratio of the minimum length between the medial edge and its boundary edges to the maximum disk inscribed along this medial edge) and a taper criterion (maximum rate of diameter change with respect to medial edge length) are computed to automatically identify entities that must be either reduced or suppressed. Based on user input thresholds for aspect ratio and taper, the corresponding areas of the MAT are categorized into regions idealized as 1D beam elements, regions kept as 2D elements, or regions idealized as 0D elements (concentrated mass). Beam ends, which differ from the resulting MAT, are also identified through the topology of the MAT in order to extend the idealizable regions.
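A simplified reading of these two criteria is sketched below (Python with numpy); the medial edge is reduced to a polyline carrying inscribed-disk diameters, and the threshold values stand for the user inputs mentioned above.

import numpy as np

def classify_medial_edge(points, diameters, aspect_min=10.0, taper_max=0.2):
    points, diameters = np.asarray(points, float), np.asarray(diameters, float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    aspect = seg.sum() / diameters.max()                    # slenderness of the region
    taper = np.abs(np.diff(diameters) / seg).max()          # rate of diameter change
    if aspect >= aspect_min and taper <= taper_max:
        return "idealize as 1D beam"
    return "keep as 2D region"

points = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
diameters = [4.0, 4.5, 5.0]
print(classify_medial_edge(points, diameters))              # slender, low taper: 1D beam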
More recently, Robinson and Armstrong [RAM∗06, RAF11] generalized the approach to 3D solids to identify 3D regions which could potentially be idealized as 2D shell
Figure 2.6: Illustration of the mixed dimensional modeling using a MAT [RAF11]: (a) the MAT representation, (b) the model partitioned into thin and perturbation features, (c) the resulting mixed dimensional model.
elements. In a first step, a 3D MAT is used to identify potential volume regions. Then, the MATs of these regions are analyzed by a second, 2D MAT to determine the inner sub-regions which fully meet an aspect ratio between their local thickness and MAT dimensions. The final candidates for idealization should satisfy the 2D ratio within the face resulting from the 3D MAT as well as the 3D ratio. Similarly to the 2D end regions derived from a 2D MAT, the residual boundary faces from the 3D MAT are extended.
Some limitations of the MAT with respect to idealization processes are:
• The generation of the MAT. Although progress has been made in MAT generation techniques for 3D objects, the computation of an accurate 3D MAT is still a research topic [RAF11]. Even if approaches exist which enable the computation of a MAT as a G0 geometric object [CBL11, RG03], or of a 3D MAT from free-form surfaces [RG10, BCL12], B-spline surfaces [MCD11] and planar polyhedra [SRX07], the most efficient algorithms are still based on discrete representations. The most efficient way to obtain a MAT derives from Voronoi diagrams or from distance fields [FLM03]. An efficient implementation of an algorithm has been proposed by Amenta [ACK01] and, more recently, by Miklos [MGP10]. However, the result is also a discrete representation, which has to be somehow approximated to produce a more global geometric object;
• The need for processing local perturbations (see Figure 2.6b). For mechanical
analysis purposes, the topological features in ending regions have to be modified
to extend the medial surfaces. These undesirable regions complicate and restrain
the analysis domain of the MAT;
• The connection areas. The MAT generates complex configurations in connection areas. Armstrong and Robinson [RAF11] produce mixed-dimensional FE models with idealized surfaces or lines and volume domains in the connections between these regions (see Figure 2.6). These mixed-dimensional models, which involve specific simulation techniques using mixed-dimensional coupling, do not contain idealized models in connection areas. In addition, to ensure an accurate load transfer from one surface to another, they increase the dimensions of volume connections based on Saint-Venant's principle (see Figure 2.6c). As a result, the idealized areas are reduced. However, the current industrial practice, as explained in Section 1.4.3, aims at generating fully idealized models incorporating idealized connections. This practice reduces the computational time, by reducing the number of degrees of freedom, and ensures a minimum model accuracy based on the user's know-how. In this context, the major limit of MAT methods is the processing of these connection areas, which do not contain proper information to link medial surfaces.
2.3.2 Local mid-surface abstraction
To generate fully idealized models, approaches alternative to the MAT identify sets of boundary entities of the CAD models as potential regions to idealize. Then, mid-surfaces are extracted from these areas and connected together.
Face pairing techniques
Rezayat [Rez96] initiated the research in mid-surface abstraction from solid models. His objective was to combine the geometric and topological information of the B-Rep model to robustly produce idealized models while transforming geometric areas. This method starts with the identification of surfaces which can be paired based on a distance criterion between them. During this identification phase, an adjacency graph is generated representing the neighbouring links between face-pairs. This graph uses the topological relationships of the initial B-Rep model. Then, for each face-pair, a mid-surface is generated as the interpolation of this geometric configuration, as illustrated in Figure 2.7a. During the final step, the mid-surfaces are connected together using the adjacency graph of the B-Rep model (see Figure 2.7b). Although this method generates fully idealized models close to manually created ones, the underlying issue is the identification of areas that could potentially be idealized. Indeed, the identification of face-pairs does not ensure that the thickness, i.e., the distance between face-pairs, is at least ten times smaller than the dimensions in the other two directions (see the idealization conditions described in Section 1.4.2). The areas corresponding to these face-pairs are designated here as tagged areas. In addition, the connection between mid-surfaces requires the definition of new geometric entities which result from an intersection operator. This intersection operator solely relies on the boundary of the areas to be connected, i.e., the face-pairs. There is no information about the boundary of the regions to be idealized nor about the interface areas between their mid-surfaces, e.g., the limits of valid connection areas. As illustrated in Figure 2.8d, this information does not appear directly
Figure 2.7: Illustration of mid-surface abstraction [Rez96], (a) creation of mid-surfaces from
face-pairs, (b) connection of mid-surfaces to generate a fully idealized model.
Figure 2.8: An example of a particular geometric configuration not addressed by face-pairs methods: (a) valid configuration without mid-surface connection, (b) and (c) rejection of an invalid face-pair due to the overlapping criterion, (d) information on non-tagged areas and interfaces between idealizable regions that are not evaluated with face-pairs methods.
on the initial model. These areas are the complement of the tagged areas with respect to the boundary of the initial model; they are named non-tagged areas. As a result, mid-surface abstractions are restricted to specific geometric configurations where the face-pairs overlap each other. As depicted in Figure 2.8, the face-pairs F3-F6 and F2-F5 are rejected due to the overlapping criterion. So, the idealized configurations of Figures 2.8b and 2.8c are rejected whereas they could be suitable for FEA.
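The spirit of the face-pairing test can be sketched as follows (Python with numpy). Faces are reduced to planes with an area, which is an illustrative assumption, and the overlap of the projected faces, which the real criterion also evaluates, is only approximated here through the face areas.

import numpy as np
from itertools import combinations

def pair_faces(faces, max_thickness=6.0, slenderness=10.0):
    # Each face is (normal, offset, area) with plane equation n.x = d.
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(faces), 2):
        n_a, d_a, area_a = a
        n_b, d_b, area_b = b
        if np.dot(n_a, n_b) > -0.99:              # normals must be (nearly) opposite
            continue
        thickness = abs(d_a + d_b)                # distance between the two planes
        extent = np.sqrt(min(area_a, area_b))     # crude lateral extent of the pair
        if thickness <= max_thickness and extent >= slenderness * thickness:
            pairs.append((i, j, thickness))
    return pairs

faces = [
    (np.array([0.0, 0.0, 1.0]), 5.0, 10000.0),    # top skin, z = 5
    (np.array([0.0, 0.0, -1.0]), 0.0, 10000.0),   # bottom skin, z = 0
    (np.array([1.0, 0.0, 0.0]), 30.0, 25.0),      # small lateral face
]
print(pair_faces(faces))                          # [(0, 1, 5.0)]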
In order to improve the robustness of idealized area processing, Lee et al. [LNP07a] use a propagation algorithm through the B-Rep model topology to identify face-pairs. However, this approach is limited to configurations where the face-pairs can be connected in accordance with predefined schemes. Ramanathan and Gurumoorthy [RG04] identify face-pairs through the analysis of the mid-curve relationships of all the faces of the solid model. For each face, its mid-curve is generated using a 2D MAT. This generation is followed by the analysis of the mid-curve graph in order to identify face-pairs. The resulting mid-faces, derived from face-pairs, are then connected to each other in accordance with the mid-curve adjacency graph. This method increases the robustness of face-pairing, indirectly using the morphology of the paired faces. Analyzing the mid-curve relationships of adjacent faces enables a morphological comparison of adjacent faces. Since the mid-curves have been obtained through a MAT, the face-pair identification depends on the accuracy of this mid-curve generation. This method comes up with face-pairs close to each other and sufficiently large along the two other directions to meet the idealization hypothesis. However, this approach is limited to planar areas.
Negative offsetting operations
Sheen et al. [SSR∗07, SSM∗10] propose a different approach to generate mid-surfaces:
the solid deflation. The authors assume that a solid model can be seen as the result of
the inflation of a shell model. Their principle is to deflate the solid model, shrinking it
down to a degenerated solid with a minimum distance between faces close to zero. This
generates a very thin solid model looking like an idealized model. In a next step, faces
are extracted and sewed together to create a non-manifold connected surface model.
The issue of this method lies in the generation of the deflated model. Indeed, a facepairs
detection is used to generate the mid-surfaces input to the shrinking operation.
This face-pair detection does not cover configurations with a thickness variation, which
is common for aeronautical parts and other mechanical components. This approach is
similar to a straightforward MAT generation [AA96, AAAG96], which applies a negative
offset to boundary lines in 2D, surfaces in 3D, respectively, in order to obtain a
skeleton representation. Yet, this representation being an approximation of the MAT,
it does not meet everywhere the equal distance property of a mid-surface and does not
provide an answer for all polyhedral solids [BEGV08].
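The negative-offsetting idea can be illustrated in 2D with the following sketch (Python, assuming the shapely library is available). It only shrinks a contour step by step until it degenerates; it is neither the deflation operator of Sheen et al. nor an exact MAT computation.

from shapely.geometry import Polygon

def deflate(contour, step=0.5):
    # Apply successive negative offsets and return the last non-degenerate contour
    # together with the total offset applied before degeneration.
    shrunk, offset = contour, 0.0
    while True:
        candidate = shrunk.buffer(-step)
        if candidate.is_empty or candidate.area < 1e-9:
            return shrunk, offset
        shrunk, offset = candidate, offset + step

rectangle = Polygon([(0, 0), (20, 0), (20, 4), (0, 4)])
core, offset = deflate(rectangle)
print(round(offset, 2), round(core.area, 2))   # the offset approaches the half-thickness 2
                                               # as the step decreases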
2.3.3 Conclusion
As explained in Sections 1.4 and 1.5, the shape of a component submitted to mesh generation depends on the user's simulation objectives. This analysis of dimensional reduction operators has highlighted the lack of idealization-specific information to delimit their conditions of application. Not all geometric regions satisfy the idealization conditions and hence, these idealization operators cannot produce correct results in such areas. A purely geometric approach cannot directly produce a fully idealized model adapted to FEA requirements. An analysis process is necessary to evaluate the validity of the idealization hypotheses and determine the boundaries and interfaces of the regions to be idealized.
The MAT is a good basis to produce a 3D skeleton structure and provides geometric proximity information between non-adjacent faces. However, it is difficult to obtain in 3D and requires post-processing of local perturbations and connection areas. Face-pair techniques are efficient in specific configurations, especially for planar objects. Yet, their main issues remain their validity with respect to the idealization hypotheses and
[Figure content: two idealized components connected through different connection models, i.e., kinematic connection, shortest-distance connection, offset connection and perpendicular connection.]
Figure 2.9: Illustration of different connection models for idealized components.
the difficulties to process the connection between mid-faces. As illustrated in Figure 2.9, the connection areas derive from specific modeling hypotheses. The user may decide on the connection model that is most appropriate for his, resp. her, simulation objectives.
To improve the dimensional reduction of components, the objectives are expressed as:
1. Identify the volume sub-domains that are candidates for idealization, i.e., the regions that meet the idealization hypotheses;
2. Obtain additional information on the interfaces between sub-domains to generate robust connections there.
2.4 About the morphological analysis of components
As concluded in the previous Section 2.3, geometric operators require a pre-analysis of a component shape to determine their validity conditions. Shape decomposition is a frequent approach to analyze and then structure objects. This section aims at studying the operators dedicated to the volume decomposition of 3D objects with an application to FEA.
2.4.1 Surface segmentation operators
There are many methods of 3D mesh2 segmentation developed in the field of computer
graphics. They are mainly dedicated to the extraction of geometric features from these
3D meshes. A comparative study of segmentation approaches of 3D meshes, including
2The word mesh is used here in the computer graphics context; it refers to a faceted model
without constraints similar to those of FE meshes.
Figure 2.10: Mesh Segmentation: (a) face clustering of Attene [AFS06], (b) shape diameter
function of Shapira [SSCO08].
CAD components, is proposed by Attene et al. [AKM∗06]. Reference work by Hilaga
[HSKK01] applies a Reeb-graph approach to find similarities between 3D shapes.
Watershed [KT03, Kos03], spectral analysis [LZ04], face clustering [AFS06], region
growing [ZPK∗02, LDB05], and shape diameter functions [SSCO08] are other techniques
to subdivide a 3D mesh for shape recognition, part instantiation, or data compression.
Figure 2.10 illustrates two mesh segmentation techniques [AFS06, SSCO08] on
a mechanical part. These algorithms are not subject to parameterization issues as
B-Rep CAD models are. They partition a mesh model into surface regions but do
not provide a segmentation into volume sub-domains, and region boundaries are sensitive
to the discretization quality. A post-processing of the surface segmentation has to be
applied to obtain volume partitions.
The main objective of the methods cited above is to divide the object in accordance
with a “minima rule” principle introduced by Hoffman and Richards [HR84]. This rule
consists in dividing the object so as to conform to the human perception of segmentation.
The authors state that human vision defines the edges of an object along areas of high
negative curvature. Hence, the segmentation techniques divide a surface into parts
along contours of negative curvature discontinuity. In these areas, the quality of an
algorithm is based on its ability to meet this “minima rule”. Searching for regions
of high concavity, algorithms are sensitive to local curvature changes. Depending on
the threshold value of extreme curvature, the object may be either over-segmented
or under-segmented. Even if this threshold is easier to monitor for CAD components
because they contain many sharp edges, the curvature criterion is not related to the
definition of idealized areas. Consequently, the methods using this criterion do not
produce a segmentation into regions satisfying the idealization hypotheses and regions
that can be regarded as volumes.
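As a simple illustration of such a curvature-based criterion (a sketch under assumptions, not one of the cited algorithms), crease edges of a triangle mesh can be flagged from the dihedral angle between adjacent facets; the threshold plays the role of the extreme-curvature threshold discussed above, and a complete implementation would also use the fold orientation to separate concave from convex creases.

# Sketch: flagging crease edges of a triangle mesh from the dihedral angle.
import numpy as np

def crease_edges(vertices, faces, angle_threshold_deg=15.0):
    """Return edges whose adjacent facet normals deviate beyond a threshold."""
    vertices = np.asarray(vertices, dtype=float)
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)

    def unit_normal(f):
        p, q, r = (vertices[i] for i in faces[f])
        n = np.cross(q - p, r - p)
        return n / np.linalg.norm(n)

    creases = []
    for edge, adjacent in edge_faces.items():
        if len(adjacent) != 2:
            continue                        # boundary or non-manifold edge
        n0, n1 = unit_normal(adjacent[0]), unit_normal(adjacent[1])
        angle = np.degrees(np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0)))
        if angle > angle_threshold_deg:     # orientation (concavity) test omitted
            creases.append(edge)
    return creases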
This section has covered surface segmentation operators that are not appropriate
in the context of a segmentation for idealization. The next section studies volume
segmentation operators producing directly a decomposition of a solid model into volume
partitions.
Figure 2.11: Automatic decomposition of a solid to identify thick/thin regions and long
and slender ones, from Makem [MAR12]: (a) initial solid model, (b) segmented model with
thin-sheet regions in green (thin-sheet meshable), long/slender regions in blue (swept mesh)
and complex regions in yellow (unstructured mesh), (c) semi-structured hybrid mesh of thick
regions.
2.4.2 Solid segmentation operators for FEA
Recently, research has concentrated on the identification of specific regions to automatically
subdivide a complex shape before meshing. They address shape transformations
of complex parts. The automatic segmentation of a mechanical component into volume
regions creates a positive feature decomposition, i.e., the component shape can be
generated by the successive merge of the volume regions. This principle particularly
applies to dimensional reduction processes, i.e., idealizations.
Volume region identification for meshing
In FEA, solid segmentation methods have been developed to simplify the meshing
process. The methods of Lu et al. [LGT01] and Liu and Gadh [LG97] use edge loops
to find convex and sweepable sub-volumes for hex-meshing. More recently, the method
proposed by Makem [MAR12] automatically identifies long, slender regions (see Figure
2.11). Makem [MAR12] shows that the decomposition criteria have to differ from
the machining ones. Heuristics are set up to define the cutting strategy and to shape
the sub-domains based on loops characterizing the interaction between sub-domains.
Setting up these heuristics is difficult due to the large diversity of interactions between
sub-domains. Criteria for loop generation aim at generating a unique decomposition
and are not able to evaluate alternatives that could improve the meshing scheme.
To reduce the complexity of detecting candidate areas for dimensional reduction,
Robinson et al. [RAM∗06] use preliminary CAD information to identify 2D sketches
employed during the generation of revolving or sweepable volume primitives in construction
trees. Figure 2.12 illustrates this process: the sketches are extracted from the
construction tree, analyzed with a MAT to determine thin and thick areas forming a
feature. Then, this feature is reused as an idealized profile to generate a mixed-dimensional
model. However, in industry, even if the construction tree information exists in a
native CAD model, the creation of features depends on the designer’s modeling choices,
Figure 2.12: Idealization using extruded and revolved features in a construction tree,
from [RAM∗06]: the 2D sketch of a revolution feature of the 3D CAD component (volume) is
analyzed to identify slender regions to revolve as surfaces, producing a mixed-dimensional
model (volumes and surfaces).
which does not guarantee that the sketches required to obtain efficient results are available.
Divide-and-conquer approaches
An alternative to the complexity of the idealization process can be found in divide-and-conquer
approaches. Firstly, the solid is broken down into volume sub-domains,
which are smaller to process. Then, idealizing these sub-components and combining
them together produces the final idealized model.
Chong [CSKL04] proposes operators to decompose solid models based on shape
concavity properties prior to mid-surface extractions that reduce the model’s manifold
dimension. Mid-surfaces are identified from closed loops of split edges and sub-domains
are processed using mid-surfaces. The solid model decomposition algorithm detects
thin configurations only if edge pairs exist in the initial model and match an absolute
thickness tolerance value. Some volume regions remain non-idealized because no
edge pairs exist on the initial object.
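A minimal sketch of the underlying idea (not Chong's algorithm itself): candidate thin configurations can be detected by pairing nearly parallel, opposed planar faces whose separation stays below an absolute thickness tolerance; configurations without such pairs, e.g., thickness variations, are missed, which is precisely the limitation discussed above. Face data, tolerance values and units are illustrative assumptions.

# Sketch: pairing opposed planar faces within an absolute thickness tolerance.
import numpy as np

def thin_face_pairs(faces, thickness_tol=5.0, parallel_tol_deg=2.0):
    """faces: list of (centroid, unit normal) pairs describing planar faces."""
    cos_tol = np.cos(np.radians(parallel_tol_deg))
    pairs = []
    for i in range(len(faces)):
        ci, ni = (np.asarray(v, dtype=float) for v in faces[i])
        for j in range(i + 1, len(faces)):
            cj, nj = (np.asarray(v, dtype=float) for v in faces[j])
            if np.dot(ni, nj) > -cos_tol:
                continue                          # not opposed and parallel
            thickness = abs(np.dot(cj - ci, ni))  # separation along the normal
            if thickness <= thickness_tol:
                pairs.append((i, j, thickness))
    return pairs

# A 3 mm plate: its two large faces form the only thin pair.
plate = [((0, 0, 0), (0, 0, 1)), ((0, 0, 3), (0, 0, -1)), ((5, 0, 1.5), (1, 0, 0))]
print(thin_face_pairs(plate))   # [(0, 1, 3.0)]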
In the feature recognition area, Woo et al. [WS02, Woo03] set up a volume decomposition
approach using a concept of maximal volume. Their decomposition is based
on local criteria, i.e., concave edges, to produce the cell decomposition. Consequently,
Woo et al. observe that some maximal volumes may not be meaningful as machining
primitives and further processing is required in this case to obtain machinable
sub-domains. Recently, Woo [Woo14] describes a divide-and-conquer approach for
mid-surface abstraction (see Figure 2.13). A solid model is initially decomposed into
simple volumes using the method of maximal volume decomposition [WS02, Woo03] as
well as feature recognition of Sakurai [Sak95, SD96]. The mid-surfaces are extracted
from these simple volumes using face-pairing. Then, face-pairs are connected using a
union Boolean operation, thus creating a non-manifold surface model.
Figure 2.13: Divide-and-conquer approach to idealization processes using a maximal volume
decomposition (by Woo [Woo14]).
Finally, a geometric operator identifies and removes local perturbations of mid-surfaces which do
not correspond to the faces of the original model. The major objective of this approach
is the complexity reduction of the initial mid-surface abstraction. It increases the robustness
of the face-pairing algorithm by applying it on simpler volumes. However, the
connections between mid-surfaces are based on the topology of the initial solid without
any analysis of its shape related to the user’s simulation objectives. Some solids
can be topologically identical but differ in their morphology. Consequently, a morphological
analysis of their shape is mandatory to identify the sub-domains subjected to
dimensional reduction and to understand the interactions between these sub-domains
through their interfaces. Here, the idealization processes are still restricted to a purely
geometrical operator that does not integrate user’s simulation objectives. Additionally,
this method faces difficulties to handle general configurations and connections between
idealized sub-domains through mid-surface extension operations.
B-Rep decomposition through feature trees
As observed in Section 2.2, the feature recognition techniques are a way to extract
volume sub-domains from B-Rep solids. They support segmentation processes for detail
removal but do not provide construction process structures of these B-Rep solids. To
this end, different approaches have been proposed to decompose an object shape into
a feature tree.
Shapiro [SV93] and Buchele [BC04] address the B-Rep to CSG conversion as a
means to associate a construction tree with a B-Rep model. Buchele [BC04] applies this
principle to reverse engineering configurations. CSG tree representations can be categorized
into either half-space or bounded solid decompositions. In [SV93, BC04] B-Rep
to half-space CSG representation is studied and it has been demonstrated that half-
spaces solely derived from a volume boundary cannot always be integrated into a CSG
tree forming a valid solid. In Buchele [BC04], Shapiro and Vossler’s approach [SV93]
is complemented to generate a CSG representation from scanned objects and to obtain
both its geometry and topology. There, additional algorithms must be added to
produce complementary half-spaces. Moreover, the meaning of half-space aggregations
is not addressed, i.e., there is no connection between the volume faces and primitives
that can be used to create it.
Li et al. [LLM06, LLM10] introduce a regularity feature tree, which differs from CSG
and construction trees. This tree structure
is used to highlight symmetry properties in the object but it neither provides a CSG
tree nor primitive entities that could serve as a basis for idealization applications. Belaziz
et al. [BBB00] propose a morphological analysis of solid models based on form features
and B-Rep transformations that are able to simplify the shape of an object and enable
simplifications and idealizations. Somehow, this method is close to B-Rep to CSG
conversion where the CSG operators are defined as a set of shape modifiers instead of
Boolean operators. Indeed, the shape modifiers are elementary B-Rep operators that
do not convey peculiar shape information and each stage of the morphological analysis
produces a single tree structure that may not be adequate for all simplifications and
idealizations.
All the approaches generating a CSG-type tree structure from a B-Rep bring a
higher-level shape analysis with connections to a higher-level monitoring of shape transformations,
symmetry properties, etc. However, the corresponding framework of B-Rep
to CSG conversion must be carefully applied to avoid unresolvable boundary configurations.
Furthermore, producing a single tree structure appears too restrictive to cover
a wide range of shape transformation requirements.
2.4.3 Conclusion
Solid segmentation operators directly provide a volume decomposition of the initial
object. A segmentation brings a higher level of geometric information to the initial
B-Rep solid. Previous methods have shown the possibility to generate a segmentation,
or even construction processes, from an initial CAD model. However, the current
operators:
• do not always produce a complete segmentation, e.g., not all features are identified,
and this segmentation is not necessarily suited for idealization due to
algorithms focusing on other application areas;
• may be restricted to simple configurations due to a fairly restrictive definition of the
geometric areas being removed from the solid. Furthermore, if these operators
also generate a construction process, it is restricted to a single process for a
component;
• could produce a complete segmentation, e.g., divide-and-conquer approaches,
but they do not ensure that the volume partitions are also simple to process and
usable for mid-surfacing.
A more general approach to volume decomposition should be considered to depart
from a too restrictive feature definition while producing volume partitions relevant for
idealization purposes. Therefore, the difficulty is to find adequate criteria to enable a
segmentation suited to dimensional reduction and connection operators.
The previous sections have presented the main methods and tools for FEA pre-processing
and, more specifically, for idealization processes in a context of standalone
components. The next section describes the evolution of these operators toward an
assembly context.
2.5 Evolution toward assembly pre-processing
Currently, the industrial need is to address the simulation of assembly structures. However,
few contributions address the automation of assembly pre-processing. Automated
simplifications of assemblies for collaborative environments, like the multi-resolution approach
of Kim [KWMN04] or the surface simplification of Andujar [ABA02], transform assembly
components independently of each other. This is insufficient to pre-process
FE assembly models because mechanical joints and interfaces tightening the different
components must take part in this pre-processing (see Section 1.4).
Transformations of groups of components
In the assembly simplification method of Russ et al. [RDCG12], the authors propose
to set component dependencies to remove groups of components having no influence
on a simulation, or to replace them by defeatured, equivalent ones. However, the
parent/child relationships created from constraint placement of components do not
guarantee obtaining the entire neighborhood of a component because these constraints
are not necessarily related to the components’ geometric interfaces. As explained in
Section 1.3.1, these positioning constraints are not necessarily equivalent to the geometric
areas of connections between components. Additionally, DMUs do not contain
components’ location constraints when assemblies are complex, which is the case in
the automotive and aeronautic industries, to ease design modifications during a PDP.
Moreover, the part naming identification used in this approach is not sufficient because it
only locates individual components contained in the assembly. Relations with their
adjacent components and their associated geometric models are not available, i.e., replacing
a bolted junction with an idealized fastener implies the simplification of its nut
and its screw as well as the hole needed to connect them in the tightened components.
Figure 2.14: Assembly interface detection of Jourdes et al. [JBH∗14]: (a) CAD bolted junction
with three major plates, (b) some interfaces, (c) cut through of a bolt of this junction, (d)
corresponding interfaces, (e) detail of a small geometric area between an interface and the
boundary of the component.
Interface detection
To provide mesh compatibility at connections, i.e., to ensure that an interface between
two components leads to the same mesh distribution in the interface area of each component,
Lou [LPMV10] and Chouadria [CV06] propose to identify and re-mesh contact interfaces.
Quadros [QVB∗10] establishes sizing functions to control assembly meshes. Yet,
these methods are used directly on already meshed models and address specific PDP
configurations where CAD models are not readily available. Clark [CHE08] detects
interfaces in CAD assemblies to create non-manifold models before mesh generation
but he does not consider the interactions between interfaces and component shape
transformation processes. In [BLHF12], it is underlined that if a component is simplified
or idealized, its shape transformation has an influence on the transformation of
its neighbors, e.g., a component idealized as a concentrated mass impacts its interfaces
with its neighboring components.
Assembly operators available in commercial software are reduced, when robust
enough, to the automated detection of geometric interfaces between components. However,
their algorithms use a global proximity tolerance to find face-pairs of components
and they do not explicitly produce the geometric contact areas. From a STEP representation
of an assembly model, Jourdes et al. [JBH∗14] describe a GPU approach
Figure 2.15: Various configurations of the idealization of a small assembly containing two
components: starting from the CAD assembly of two components, its assembly interfaces, the
interfaces of component 1 and the idealizable areas of the components, several FEM models
can be derived, e.g., a fully idealized model, idealized models with contact (variants V1 to V4),
a mixed-dimensional model with contact, a kinematic connection, or a specific connector.
to automatically detect the exact geometric regions of interfaces between components
(see Figure 2.14). The results of this technique are used in this thesis as input assembly
interface data. Yet, obtaining the geometric regions of interfaces is not sufficient; they
have to be analyzed to evaluate their suitability with respect to meshing constraints.
Figure 2.14e shows the creation of small surfaces, which are difficult to mesh, resulting
from the location of an interface close to the boundary of component surfaces.
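The following sketch illustrates the kind of global proximity tolerance mentioned above for pre-selecting candidate interface areas (a rough pre-filter, not the GPU method of [JBH∗14]): axis-aligned bounding boxes of faces from two components are tested for overlap once inflated by the tolerance. The box data, the tolerance value and the helper name are assumptions, and the exact contact regions would still have to be computed on the actual geometry.

# Sketch: pre-selecting candidate face pairs of two components with a global
# proximity tolerance applied to axis-aligned bounding boxes of their faces.
import numpy as np

def candidate_interfaces(boxes_a, boxes_b, tol=0.1):
    """boxes_*: lists of (min_corner, max_corner) for the faces of two components."""
    candidates = []
    for i, (amin, amax) in enumerate(boxes_a):
        amin, amax = np.asarray(amin, float), np.asarray(amax, float)
        for j, (bmin, bmax) in enumerate(boxes_b):
            bmin, bmax = np.asarray(bmin, float), np.asarray(bmax, float)
            # Boxes overlap once each one is inflated by the tolerance.
            if np.all(amin - tol <= bmax) and np.all(bmin - tol <= amax):
                candidates.append((i, j))
    return candidates

# Two plates whose facing faces are 0.05 apart along z.
plate_1 = [((0, 0, 0), (10, 10, 0))]
plate_2 = [((0, 0, 0.05), (10, 10, 0.05))]
print(candidate_interfaces(plate_1, plate_2))   # [(0, 0)]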
To reach the requirements of assembly idealizations, the current geometric operators
have to take into account the role of assembly interfaces between components,
with respect to the shape transformations of groups of components. Figure 2.15 shows
the idealization of an assembly containing two components. The idealization of ‘component
1’ interacts with the idealization of ‘component 2’, and both interact with the
user’s choice regarding the assembly interface transformation. Depending on the simulation
objectives, the user may obtain either an idealized model of the two components
with contact definition, or a globally idealized model from the fusion of the two components.
The user may even apply a specific connector model which does not require
any geometry other than its two extreme points.
Assemblies bring a new complexity level into idealization processes since the shape
of the components and their interactions with their neighbors have to be analyzed
before applying geometric operators.
2.6 Conclusion and requirements
The research review in this chapter, combined with the context described in Chapter
1, shows that CAD/CAE integration is mainly oriented toward the data integration
of standalone components; the preparation of assembly models under global simulation
objectives requires an in-depth analysis and corresponding contributions.
Regarding standalone component processing, although automated operators exist,
they are currently effective on simple configurations only. To process complex models,
the engineer interactively modifies the component using shape transformation operators
according to his/her a priori expertise and evaluation of the simulation model being
created. These specific operators, some of which are already available in
CAE software, have to be selected and monitored by the engineer. Their application
still requires many manual interactions to identify the regions to transform and correct
unintended geometric perturbations. Because of the diversity of simulation contexts,
the preconceived idea of applying a generic geometric operator to every component
to perform any simplification or idealization is not valid and must evolve toward
simulation context-dependent operators.
The selection of mechanical hypotheses, because of their impact on the DMU model,
should also be part of the automation of a mechanical analysis pre-processing. This
issue is particularly crucial when performing dimensional reductions on a component.
Generating an idealized equivalent model cannot be reduced to the simple application of
a dimensional reduction operator. The idealization hypotheses must first establish
the connection between the component shape and its simulation
objectives. This connection can be made through the identification of geometric areas
that are candidates for idealization, together with the connections between idealized sub-domains.
An analysis of the component shape, subdividing it into idealizable areas
and interfaces between them (see Figure 2.15), is a means to enrich the input CAD
solid and prepare the geometry input to dimensional reduction operators. The current
volume segmentation operators are restricted to configurations focusing on particular
application domains. They often produce a single segmentation into sub-volumes and
address rather simple configurations only. Achieving good quality
connections between idealized sub-domains in a robust manner is still a bottleneck of
many approaches processing CAD solids for FEA, which requires new developments.
Regarding assembly model processing, there is currently a real lack of both scientific
research and software capabilities. To reach the objective of large assembly
structural simulation, pre-processing the DMU, which conveys the product definition,
has also to be automated. Assembly simulation models not only require the availability
of the geometric model of each component, but they must also take into account the
kinematics and physics of the entire assembly to reach the simulation objectives. This
suggests that the entire assembly must be considered when specifying shape transformations
rather than reducing the preparation process to a sequence of
individually prepared components that are correctly located in 3D space. As mentioned in
Section 1.5.4, to adapt an assembly to FEA requirements, it is mandatory to derive
geometric transformations of groups of components from simulation objectives and
component functions. As follows from Chapter 1, the knowledge of interface geometries
and additional functional information on components and their interfaces is a
good basis to specify these geometry transformation operators on assemblies. In addition,
to perform assembly idealizations, structuring the geometric models of components into
areas to be idealized and component interfaces is consistent with the assembly structure,
i.e., its components and their interfaces. Such a component geometric structure
helps preserve the DMU consistency.
Chapter 3
Proposed approach to DMU processing for structural assembly simulations
This chapter sets the objectives of the proposed approach to DMU pre-processing
for the simulation of FE structural assembly models. Obtaining an efficient transformation
of a DMU into a FEM requires geometric operators that process input
geometric models which have been structured and enriched with additional functional
information. With respect to the objectives set up, the proposed approach
uses this enriched DMU, both at 3D solid and assembly levels, to analyze its
geometry and connect it to the simulation hypotheses with the required shape
transformations.
3.1 Introduction
Chapter 1 has pointed out the industrial context and identified the general problem
definition addressed in this thesis. The current practices about model generation for
structural mechanical analyses have been described, especially when the resolution is
performed using the FE method. Chapter 2 has analyzed the approaches of academia
that investigate the automation of FE model generation. The need for shape analysis
as a basis of robust geometric transformation operators has been underlined and the
lack of research in assembly pre-processing has been pointed out. Figure 3.1 summarizes
the manual processes of a DMU transformation for the FEA of assembly structures.
The analysis of the ongoing practices has been stated in Section 1.5. A main issue,
observed in the aeronautical industry, is the manual and isolated application of geometric
transformations on each component of the assembly. An assembly component is
considered as a standalone part and the user iterates his, resp. her, global simulation
objective on each component as well as on each assembly interface. As a result, the
Figure 3.1: Current process to prepare assembly structures (DMU extraction, pre-processing,
simulation). The extracted DMU is a purely geometric CAD model with no contacts and with
junctions considered as individual components; pre-processing relies on the manual idealization
of each component and the manual definition of contacts between components, reflecting the
weak CAD/CAE link and the lack of standardized processes and tools for all categories of
assembly components. Each component of the assembly is transformed individually.
use of FEA in the aeronautical industry is bounded by the time required to set up its associated
FEM. Now, the major challenge is the automation of some FEA preparation
tasks so that more simulations can be performed on assembly models.
3.2 Main objectives to tackle
As stated in Chapter 2: ‘generating an idealized equivalent model cannot be reduced
to the simple application of a dimensional reduction operator’. Indeed:
1. Generating simulation models from DMUs requires the selection of the CAD components
having an influence on the mechanical behavior the engineer wants to
analyze. Setting up the simulation requires, as input, not only the 3D geometric
model of component shapes but also their functional information, which helps
select the appropriate components (see Section 1.5.4);
2. A DMU assembly is defined by a set of 3D components and by the interactions
between them. To automate the preparation of FE assembly models, it is mandatory
to take into account the interfaces between components (see Section 1.4.3).
An assembly interface not only contains the geometric information delimiting
the contact/interference areas on each component, but also the ‘functional’
information characterizing the behavior of the interface, e.g., clamping,
friction, . . . ;
3. To generate idealized components, i.e., the dimensional reduction process of 3D
volumes into equivalent medial surfaces/lines, two main aspects have to be considered:
• A 3D shape is generally complex and requires different idealizations over
local areas depending on the morphology of each of these areas (see Section
2.3);
• Interfaces between components have an influence on the choice made for
these local idealizations. Therefore, the idealization operator has to take
into account this information as a constraint, which is not the case of current
idealization operators (see Section 2.5).
To address the problem of the FEM preparation of large assembly structures, this
chapter introduces an analysis-oriented approach to provide enriched DMUs before geometric
transformations. The following sections explain the principles and contributions
of this approach, which are subsequently detailed in the next chapters:
• Section 3.3: This section shows that existing approaches are able to provide a
certain level of functional information. Two main approaches have been exploited.
The method of Jourdes et al. [JBH∗14] generates assembly interfaces from DMU
models and the method of Shahwan et al. [SLF∗12, SLF∗13] provides functional
designation of components. These methods can be used in our current pre-processing
approach to provide enriched DMUs before geometric transformations
take place. Nevertheless, some improvements are proposed to take into account
the geometric structure of components required for an idealization process;
• Section 3.4: idealization operators necessitate an estimation of the impact of
the idealization hypotheses over a component shape, i.e., the identification of
areas candidate to idealization. This section sets our objectives to achieve a
robust assembly idealization process. They consist in structuring a component’s
shape and taking advantage of this structure to perform a morphological analysis
to identify areas conforming to the user’s simulation hypotheses. Subsequently,
these hypotheses are used to trigger the appropriate idealization operator over
each area;
• Section 3.5: this section outlines the proposed processes exploiting an enriched
DMU to robustly automate the major time-consuming tasks of a DMU preparation.
3.3 Exploiting an enriched DMU
Efforts have been made to improve:
• the coordination between engineers in charge of structure behavior simulations
and designers;
• the use of simulation results during a design process.
However, as described in Section 1.3, the DMUs automatically extracted from the
PLM are not suited for FE simulations. Because of the product structure, DMUs do
not contain structural components only. DMU components have to be filtered during
the early phase of FEA pre-processing to avoid unnecessary geometric treatments on
components which are considered as details at the assembly level. As explained in
Section 1.5.1, this process is based on a qualitative judgment exploiting engineers’
know-how. A way to increase the robustness of this extraction process is to have more
useful information available to the engineers. At least, this information must contain the
functional properties of components.
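As a simple illustration (a hypothetical sketch, not the filtering procedure used in practice), once each extracted component carries such functional information, non-structural components can be filtered out before any geometric treatment; the designation names and the set of structural designations below are assumptions.

# Sketch: filtering DMU components by functional designation before pre-processing.
STRUCTURAL_DESIGNATIONS = {"plate", "stiffener", "spar", "rib", "cap screw", "nut"}

def filter_structural(components):
    """components: iterable of (component id, functional designation) pairs."""
    return [cid for cid, designation in components
            if designation in STRUCTURAL_DESIGNATIONS]

dmu_components = [("C1", "plate"), ("C2", "cable clip"), ("C3", "cap screw")]
print(filter_structural(dmu_components))   # ['C1', 'C3']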
In addition, even when the extracted DMU is coherent and contains the exact
set of components subjected to shape transformations, the amount of information
which can be extracted from the PLM system is not sufficient to set up a robust and
automated pre-processing approach to simulations. Even though the extraction of additional
meta-data can be improved (see Section 1.5.4), FEM pre-processing requires
the exact interface areas between components as well as the functional designation of
each component, which are not available in PLM systems, at present.
A main objective of this thesis is to prove that a quantitative reasoning can be made
from an enriched and structured DMU to help engineers determine the mechanical
influence of components under specific simulation objectives.
Benefiting from existing methods that identify functional interfaces and
functional designation of components in DMUs
The method of Jourdes et al. [JBH∗14] presented in Section 2.5, detects geometric
interfaces between components. Starting from an input B-Rep model, i.e.,
a STEP [ISO94, ISO03] representation of an assembly, the algorithm identifies two
categories of interfaces as defined in Section 1.3.2: surface and linear contacts, and
interferences.
The information regarding assembly interfaces is used by Shahwan et al. [SLF∗13]
to provide, through a procedural way, functional information linked to DMUs.
Even if Product Data Management System (PDMS) technology provides components
with names referring to their designation, this information is usually not sufficient
to clearly identify the functions of components in an assembly. For example,
in AIRBUS’s PLM, a component starting with ‘ASNA 2536’ refers to a screw of type
‘Hi-Lite’ with a 16mm diameter. This component designation can be associated under
specific conditions to an ‘elementary function’, e.g., fastening function in the case of
‘ASNA 2536’. However, information about each component designation does not
Figure 3.2: Structuring a DMU model with functional properties after analyzing the assembly
geometric interfaces and assigning a functional designation to each component (from Shahwan
et al. [SLF∗12, SLF∗13]): from the initial geometry of a bolted junction, functional interfaces
(planar supports, threaded link) lead to the functional designations of its components (cap-screw
with head, shaft and thread, nut, counter nut, tightened components).
identify its relation with other components inside the scope of a given function, i.e., the
geometric model of a component is not structured with respect to its function. How
can an algorithm determine which component is attached to another one to form a
junction? Which screw is associated with which nut? Additionally, there is a large
range of screw shapes in CAD component libraries. How can specific areas on
these screws be identified through names only? Also, the word screw is not a functional designation;
it does not uniquely refer to a function because a screw can be a set screw, a
cap screw, . . . Therefore, to determine rigorously the functional designation of components,
Shahwan et al. [SLF∗12, SLF∗13] inserted a qualitative reasoning process that
can relate geometric interfaces up to the functional designation of components, thus
creating a robust and automated connection between 3D geometric entities and functional
designations of components. This is a bottom-up approach that fits with our
current requirements.
The complete description of a DMU functional enrichment can be found in [SLF∗13].
Figure 3.2 shows the result of the functional enrichment of the aeronautical root-joint
use-case presented in Figure 1.6. Initially, the DMU was a purely geometric model.
Now, it is enriched with the functional designation of components. Using the definition
of Shahwan et al. [SLF∗13], a functional designation of a component is an unambiguous
denomination that functionally distinguishes one class of components from another.
It relates the geometric model of a component with its functional interfaces (derived
from the conventional interfaces described in Section 1.3.2) and with the functional
Figure 3.3: DMU enrichment process with assembly interfaces and component functional
designations: starting from the PLM information (CAD models and product structure) of three
components C1, C2, C3, the assembly interfaces (I1/2, I1/3, I2/3) are detected and imprinted on
the component surfaces (S1, S2, S3.1, S3.2), producing a graph of assembly interfaces; the
functional designations (C1: plate, C2: plate, C3: screw) are then assigned and grouped under
the function Fct.1, producing a graph of assembly interfaces and functions.
interfaces of its functionally related components. The functional designation of a component
binds its 3D model and functional interfaces to a symbolic representation of its
functions. Regarding screws, illustrative examples of functional designations are: cap
screw, locked cap screw, set screw, stop screw, . . .
As a result, a component model as well as its geometric model gets structured. As
illustrated in Figure 3.3, its B-Rep model contains imprints of its functional interfaces
and geometric relationships with functional interfaces of functionally related components.
The functional interfaces carry the lowest-level symbolic information describing
the elementary functions of a component, and each functional designation uniquely expresses
the necessary relations between these elementary functions.
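A minimal data-structure sketch of such an enriched DMU, using the three-component example of Figure 3.3 (two plates C1, C2 and a screw C3, interfaces I1/2, I1/3, I2/3 and imprinted faces S1, S2, S3.1, S3.2): components become graph nodes carrying their functional designation and interfaces become edges carrying their imprints. The attribute names and the use of the networkx library are assumptions, not the implementation used in this work.

# Sketch: the enriched DMU of Figure 3.3 represented as an attributed graph.
import networkx as nx

dmu = nx.Graph()
dmu.add_node("C1", designation="plate")
dmu.add_node("C2", designation="plate")
dmu.add_node("C3", designation="screw")
dmu.add_edge("C1", "C2", name="I1/2")
dmu.add_edge("C1", "C3", name="I1/3", imprints=("S1", "S3.1"))
dmu.add_edge("C2", "C3", name="I2/3", imprints=("S2", "S3.2"))

# Example query: components functionally related to the screw C3 and the
# imprinted faces of their common interfaces.
for neighbor in dmu.neighbors("C3"):
    print(neighbor, dmu.edges["C3", neighbor].get("imprints"))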
Functional analysis and quantitative reasoning
This enriched DMU makes available the information required to perform a functional
analysis of the assembly being prepared for FEA. This analysis allows us to implement
a quantitative reasoning which can be used to increase the robustness of the automation
of shape transformations of components and their interfaces during an assembly
preparation process. Geometric entities locating functional interfaces combined with
the functional designation of each component enable the identification and location
of groups of components to meet the requirements specified in Section 1.5.4. In the
research work described in the following chapters, the fully enriched functional DMU
Figure 3.4: Interactions between simulation objectives, hypotheses and shape transformations:
the objectives and hypotheses affect components and components’ interfaces, whose shape
transformations interact with each other.
with functional interfaces stands as input data to geometric transformations.
Need to extend the DMU enrichment to a lower level: the component shape
Thanks to a better extraction and functional enrichment of the geometric models of
DMU components, new operators are able to identify components and their interfaces
that will be subjected to shape transformations. However, this enrichment is not suffi-
cient to determine the geometric areas to be idealized or transformed in the assembly
(see Section 2.3). Prior to any shape transformation, we propose to extend this functional
assembly enrichment up to the component level. This enrichment is driven by
the component’s shape and its interaction with the simulation objectives and related
hypotheses. The next section highlights the requirements of this enrichment approach.
3.4 Incorporating a morphological analysis during
FEA pre-processing
According to Chapter 1, component shapes involved in assembly simulation preparation
processes interact with simulation objectives, hypotheses, and shape transformations
applied to components and their interfaces. Figure 3.4 shows interactions between
shape transformations and FEA modeling hypotheses. To be able to specify the shape
analysis tools required, the interactions between shape transformations acting on components
as well as on assemblies, on the one hand, and FEA hypotheses, on the other
hand, should be formalized. The suggested analysis framework’s objective is the reconciliation
of simulation hypotheses with geometric transformations.
A morphological analysis driven by idealization needs
As stated in Chapter 2, a morphological analysis dedicated to assembly components
can improve the robustness of a geometric idealization process.
The natural way would be to automate the user’s approach. Indeed, during the current
generation of FE meshes, as explained in Section 1.5, this morphological analysis
phase is conducted by the engineer, on each component, individually. Based on his,
resp. her, own experience, the engineer visually analyzes the component shape and selects
the areas to preserve, to suppress, or to modify. Troussier [Tro99] highlighted the
lack of tools helping engineers to build and validate their models. She proposed to refer
to previous case studies and make them available to the engineer when a new model
has to be built. This knowledge capitalization-based method helps engineers analyze
their models through the comparison of the current simulation target with respect to
the previously generated simulation models. Indeed, referring to already pre-processed
FEMs reinforces the capitalization principle set up.
However, even if the engineer is able to formalize the simulation hypotheses, one dif-
ficulty remains regarding the concrete application of the shape transformations derived
from these hypotheses. A visual interpretation of the required geometric transformations,
based on past experiences, is feasible for simple components with more or less
the same morphology as previous models.
A complex geometry contains numerous regions with their own specific simplification
hypotheses. These regions can interact with each other, leading the engineer to
reach compromises about the adequate model to generate. For example, many variants
of mechanical interactions can appear in interface areas between sub-domains generated
for idealized components (see Figure 2.15). It can be difficult for the engineer to get a
global understanding of all the possible connections between medial surfaces. As long
as no precise mechanical rule exists in these connection areas, each person could have
his, resp. her, own interpretation of the hypotheses to apply there. When processing
assembly configurations, as illustrated in Section 2.5, its assembly interfaces influence
the idealization of the components interacting there. In the case of large assembly
structures, on top of the huge amount of time required to analyze all the repetitive
configurations, an engineer can hardly anticipate all the interactions between components.
Such an interactive analysis process, using the engineer’s know-how, does not
seem tractable.
Beyond potential lessons learned from previous FEA cases and because current
automatic analysis tools are not suited to engineers’ needs (see Section 2.4), it is
of great interest to develop new automated shape analyzing tools in order to help
engineers understand the application of their simplification hypotheses on new shapes.
The following objectives derive from this target.
3.4.1 Enriching DMU components with their shape structure
as needed for idealization processes
A shape analysis-based approach derives from the information available upstream, i.e.,
the DMU geometry content before FEA pre-processing that reduces to CAD components.
Due to the interoperability issue between CAD and CAE software (see Section
1.5.2), the prevailing practice extracts the B-Rep representation of each component.
During component design, successive primitive shapes, or form features, are sequentially
added into a construction tree describing a specific modeling process (see
Section 1.2.2). This tree structure is editable and could be analyzed further to identify,
in this construction tree, a shape closer to the FE requirements than the final one in
order to reduce the amount of geometric transformations to be applied. Often, this
modeling process relies on the technology used to manufacture the solid. From this
perspective, a machined metal component design process differs, for example, from a
sheet metal component one. This difference appears in CAD systems with different
workshops, or design modules, targeting each of these processes. As an example, the
solid model of a sheet metal component can be obtained from an initial surface using
an offset operator. This surface is close to the medial surface that can be used as
an idealized representation of this component. However, directly extracting a simpler
shape from a construction tree is not a generic and robust procedure for arbitrary
components. Such a procedure:
• cannot eliminate all the geometric transformations required to generate a FE
model;
• is strongly dependent upon the modeling process of each component;
• and is specific to each CAD software.
Above all and independently of the extracted geometry, it is essential to analyze a
component shape before applying any geometric transformation to it.
To achieve a robust shape processing, the component shape needs to be structured
into regions that can be easily connected to the simulation hypotheses.
Proposal of a volume segmentation of a solid as a component shape structure
Following the recommendations of Section 2.4, we propose to set up a volume
segmentation of a 3D solid to structure it as an enriched input model to generate a
robust morphological analysis dedicated to mechanical simulations.
As stated in the conclusion of Section 2.4, the generic methods for 3D object decomposition
segment mesh1 models only. Volume decompositions of B-Rep models are
1In the computer graphics context.
restricted to specific applications that do not cover the FEA needs.
Here, the objective is set on a proposal of a robust segmentation of B-Rep solids
to enrich them. Because the robust generation of quality connections between idealized
sub-domains is still a bottleneck of many approaches that process CAD solids for FEA,
the proposed segmentation should incorporate the determination of interfaces between
the volumes resulting from the segmentation. The proposed method is based on the
generation of a construction graph from a B-Rep shape. This contribution is detailed
in Chapter 4.
3.4.2 An automated DMU analysis dedicated to a mechanically
consistent idealization process
An analysis can cover multiple purposes: physical prediction, experimental correlation, etc.
The proposed analysis framework is oriented toward the geometric issues about
the idealization of assemblies for FEA. Section 2.3 has revealed that a major difficulty
encountered by automated methods originates from their lack of identification of the
geometric extent where simplification and idealization operators should be applied.
Using the new structure of a component shape, our objective is placed on a morphological
analysis process able to characterize the idealization transformations that can
take place on a component shape. Therefore, this process should incorporate, during
the pre-processing of DMU models, a set of operators that analyze the initial CAD
geometry in order to connect it to simulation hypotheses and determine the geometric
extent of these hypotheses. Chapter 5 is dedicated to this contribution. The objectives
of this morphological analysis are:
• The identification of regions considered as details with respect to the simulation
objectives. The DMU adapted to FEA should contain only the relevant geometric
regions which have an influence on the mechanical behavior of the structure;
• The identification of regions relevant for idealization as opposed to regions regarded
as volumes. The morphology of a component has to be analyzed in order
to determine the thin regions to be transformed into mid-surfaces and the long
and slender regions to be transformed into beams (a minimal classification sketch is
given after this list). Also, this morphological analysis
has to provide the engineer with a segmentation of components into volume
sub-domains which have to be expanded into the whole assembly;
• The characterization of interfaces between idealizable regions. These interfaces
contain significant information regarding the interaction between idealizable regions.
They are used to connect medial surfaces among each other.
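A minimal sketch of such a morphological classification (illustrative aspect-ratio thresholds standing in for the user's idealization hypotheses, not the operator developed in Chapter 5): a volume sub-domain is classified from its three characteristic dimensions as a plate/shell candidate, a beam candidate, or a thick region to keep as a volume.

# Sketch: classifying a volume sub-domain from its characteristic dimensions.
def classify_sub_domain(dims, thin_ratio=0.1, slender_ratio=0.1):
    """dims: three characteristic lengths of the sub-domain (any order)."""
    d1, d2, d3 = sorted(dims)                  # d1 smallest, d3 largest
    if d1 / d3 <= thin_ratio and d1 / d2 <= thin_ratio:
        return "plate/shell candidate (idealize to a mid-surface)"
    if d1 / d3 <= slender_ratio and d2 / d3 <= slender_ratio:
        return "beam candidate (idealize to a medial line)"
    return "thick region (keep as a volume)"

print(classify_sub_domain((2.0, 80.0, 120.0)))   # thin plate
print(classify_sub_domain((8.0, 10.0, 200.0)))   # long and slender region
print(classify_sub_domain((40.0, 50.0, 60.0)))   # volume region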
Finally, the DMU enrichment process is completed with assembly information as
well as information about the shape structure of each component. Consequently, this
enriched DMU is geometrically structured. It is now the purpose of the next section to
carry on setting up the objectives to achieve a robust pre-processing from DMU to FEM.
3.5 Process proposal to automate and robustly generate
FEMs from an enriched DMU
Figure 3.5 summarizes the corresponding bottom-up approach proposed in this thesis.
The first phase uses the methods of Jourdes et al. [JBH∗14] and Shahwan et al. [SLF∗12,
SLF∗13] to enrich the DMU with assembly interfaces and functional designations of
components as recommended in Section 1.5.4. The initial CAD solids representing
components are also enhanced with a volume decomposition as suggested in Section 2.4
to prepare a morphological analysis required to process the idealization hypotheses.
The second phase analyzes this newly enriched DMU to segment it in accordance
with the engineer’s simulation objectives (see Section 3.4), i.e., to identify areas that
can be idealized or removed when they are regarded as details. This results in the
generation of a so-called contextualized DMU.
Providing the engineer with a new contextualized DMU does not completely fulfill
his, resp. her, current needs to create geometric models for structural analysis. Consequently,
the proposed scheme should not only develop and validate methods and
tools to structure and analyze a DMU up to its component level, but also contain processes
to effectively generate FE assembly models. In the third phase, the functional
and morphological analyses lead to the definition of the assembly transformation process
as planned in the second phase, i.e., the transformation of groups of components
including dimensional reduction operations.
Exploiting the contextualized DMU, it is proposed to develop a two-level adaptation
process of a DMU for FEA as follows:
• One process is dedicated to standalone geometric component idealization. The
objective of this new operator is the exploitation of the morphological analysis
and hence, to provide the engineer with a robust and innovative approach to 3D
shape idealization;
• Another process extends the idealization operator to assembly idealization.
This operator is a generalization of the standalone operator adapted to assembly
transformation requirements. To implement this process, we set up a generic
methodology taking into account the simulation requirements, the functional assembly
analysis and assembly interfaces.
Figure 3.5: Proposed approach to generate a FEM of an assembly structure from a DMU.
Phase 1 (DMU enrichment) complements the extracted DMU with (1) PLM information,
(2) assembly interfaces, (3) functional designations and (4) a volume segmentation of each 3D
solid; Phase 2 (analysis-based pre-processing) performs the functional assembly analysis and
the morphological analysis, with respect to the simulation objective, to produce a
contextualized DMU for mechanical simulation; Phase 3 (DMU transformation) defines the
assembly and geometric transformation processes, applied through a library of operators, to
obtain a DMU adapted to FEA and, finally, the FEM.
3.5.1 A new approach to the idealization of a standalone component
When components have to be fully idealized, their pre-processing requires the development
of a robust idealization process containing a dimensional reduction operator
associated with a robust one that connects medial surfaces. As shown in Section 2.3,
existing approaches face two issues:
• The extraction of a mid-surface/medial line from an idealized sub-domain. Current
dimensional reduction operators focus directly on the generation of the mid-surface/medial
line without having completely evaluated the idealization hypotheses
and determined the sub-regions associated with these hypotheses;
• The connection of the set of extracted mid-surfaces/medial lines. Current operators
encounter difficulties to generate consistent idealized models in connection
areas, i.e., regions which usually do not satisfy the idealization conditions.
To address these issues, we propose to analyze the morphology of the shape before
applying a dimensional reduction operator. Therefore, this operator focuses on the
extraction of medial surfaces only in the sub-domains morphologically identified as
plate/shell models and on the extraction of medial lines in the sub-domains morphologically
identified as beam models. Simultaneously, this morphological analysis is
used to provide information on internal interfaces between sub-domains to be idealized.
We propose to exploit this new information within the idealization operator to
produce consistent geometric models, i.e., on-purpose idealization of sub-domains with
on-purpose connections between them. This process is detailed in Section 5.5.
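For the simplest morphological case, a sub-domain bounded by two parallel planar faces, the dimensional reduction amounts to placing the medial surface halfway between the two faces. The following sketch (illustrative inputs, not the operator detailed in Section 5.5) computes that mid-plane from one face and the measured thickness.

# Sketch: mid-plane of a thin sub-domain bounded by two parallel planar faces.
import numpy as np

def mid_plane(point_a, normal_a, point_b):
    """Return (point, normal) of the plane halfway between the two faces."""
    n = np.asarray(normal_a, dtype=float)
    n /= np.linalg.norm(n)
    thickness = np.dot(np.asarray(point_b, float) - np.asarray(point_a, float), n)
    mid_point = np.asarray(point_a, float) + 0.5 * thickness * n
    return mid_point, n

point, normal = mid_plane((0, 0, 0), (0, 0, 1), (0, 0, 3.0))
print(point, normal)   # the mid-surface of a 3 mm plate lies at z = 1.5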
3.5.2 Extension to assembly pre-processing using the morphological
analysis and component interfaces
The second process required addresses the transformation of assembly models. The
proposed operators have to be applicable to volume sub-domains, which can originate
from components or from a group of components.
Evolving the concept of details in the context of assembly structures
Section 2.2 has shown that the relationship between detail removal and idealization
processes has not been investigated. The definition of details stated in [LF05, RAF11]
addresses essentially volume domains and refers to the concept of discretization error
that can be evaluated with a posteriori error estimators.
Assemblies add another complexity to the evaluation of details. It is related to the
existence of interfaces between components. As illustrated in Section 1.5.4, interfaces
are subjected to hypotheses to define their simulation model and Table 1.2 points
out the diversity of mechanical models that can be expressed with simulation entities.
Recently, Bellec [BLNF07] described some aspects of this problem. Yet, comparing the
respective influences of rigid versus contact interface models is similar to the evaluation
of idealization transformations: this is also a complex issue.
The concept of detail, apart from referring to the real physical behavior of a product,
is difficult to characterize for assembly idealization. The structural engineer’s know-how
is crucial to identify and remove details with interactive shape transformations.
Benefiting from the morphological analysis of components’ shapes, another objective
of this thesis is to provide the user with tools that show areas that cannot be regarded
as details (see Sections 5.3.1 and 5.5.2). This way, the concept of details evolves from
standalone component to assembly level pre-processing.
Automated transformations of groups of components
As explained in Section 1.5.4, the transformation of groups of components, e.g.,
junctions, by pre-defined FE simplified geometry, e.g., fasteners, is a top requirement
to reduce the FEM preparation time.
Focusing on these specific configurations, the main issue remains the robust identifi-
cation of the components and assembly interfaces to be transformed. Another objective
of this thesis is also to provide a robust operator to identify and transform configurations
of groups of components involved in the same assembly function, which is detailed
in Chapter 6.
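A hedged sketch of the identification step only (not the template-based operator of Chapter 6): in an enriched DMU graph such as the one sketched in Section 3.3, bolted-junction groups can be located by querying functional designations and interface types. The attribute names ('designation', 'type') and the example data loosely mirroring the junction of Figure 3.2 are assumptions.

# Sketch: locating bolted-junction groups in an enriched DMU graph.
import networkx as nx

def bolted_junctions(dmu):
    """Yield (screw, nut, components bearing on the screw) candidate groups."""
    for node, data in dmu.nodes(data=True):
        if "screw" not in data.get("designation", ""):
            continue
        nuts = [n for n in dmu.neighbors(node)
                if dmu.edges[node, n].get("type") == "threaded link"
                and "nut" in dmu.nodes[n].get("designation", "")]
        bearing = [n for n in dmu.neighbors(node)
                   if dmu.edges[node, n].get("type") == "planar support"]
        for nut in nuts:
            yield node, nut, bearing

# Illustrative data loosely mirroring the bolted junction of Figure 3.2.
g = nx.Graph()
g.add_node("cap-screw", designation="cap screw")
g.add_node("nut", designation="nut")
g.add_node("plate-1", designation="plate")
g.add_node("plate-2", designation="plate")
g.add_edge("cap-screw", "nut", type="threaded link")
g.add_edge("cap-screw", "plate-1", type="planar support")
g.add_edge("nut", "plate-2", type="planar support")
print(list(bolted_junctions(g)))   # [('cap-screw', 'nut', ['plate-1'])]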
From the analysis of DMU transformation requirements for FE assembly model
preparation [BLHF12], the proposed method relies on a qualitative reasoning process
based on the enriched DMU as input. From this enriched model, it is shown that
further enrichment is needed to reach a level of product functions where simulation
objectives can be used to specify new geometric operators that can be robustly applied
to automate component and assembly interface shape transformations. To prove the
validity of this approach, Section 6.3 presents a template-based operator to automate
shape transformations of bolted junctions. The method anticipates the mesh generation
constraints around the bolts, which also minimizes the engineer’s involvement.
3.6 Conclusion
This chapter has introduced the main principles and objectives of the proposed analysis-oriented
approach of DMU pre-processing for the simulation of FE structural assembly
models. This approach covers:
• The enrichment of the input geometry, both at 3D solid and assembly levels.
It is critical to provide a volume decomposition of the geometric model of each
component in order to access and fully exploit their shape. This structure is
a good starting point for the identification of areas of interest for idealization
hypotheses. At the assembly level, the DMU is enriched with geometric interfaces
between its components, i.e., contacts and interferences, and with the functional
designation of components;
• The development of an analysis framework for the simulation of mechanical structures.
From the enriched DMU model, the analysis framework can be used to
specify geometric operators that can be robustly applied to automate component
and interface shape transformations during an assembly preparation process. In
accordance with the context of structural simulations, this framework evaluates
the conditions of application of idealization hypotheses. It provides the engineer
with the operators dedicated to shape adaptation after idealizable volume sub-domains
have been identified. Also, after the areas considered as details have been identified
and information about sub-domain interfaces has been added, the user's modeling rules
can be applied in connection areas;
• The specification of geometric operators for the idealization of B-rep shapes and
operators transforming groups of components, such as bolted junctions, bene-
fiting from the previously structured DMU. Through the development of such
operators, the proposed approach can be sequenced and demonstrated on aeronautical
use-cases.
The next chapters are organized in accordance with the proposed approach, as
described in the present chapter. Chapter 4 details the B-Rep volume decomposition
using the extraction of generative construction processes. Chapter 5 describes the
concepts of the FEA framework using a construction graph to analyze the morphology
of components and derive idealized equivalent models. Chapter 6 extends this approach
to a methodology for assembly idealization and introduces a template-based operator
to transform groups of components.
Chapter 4
Extraction of generative processes from B-Rep shapes to structure components up to assemblies
Following the global description of the proposed approach to robustly process
DMUs for structural assembly simulation, this chapter exposes the principles
of the geometric enrichment of components using a construction graph. This
enrichment method extracts generative processes from a given B-Rep shape as a
high-level shape description and represents it as a graph containing all non-trivial
construction trees. Advantageously, the proposed approach is primitive-based and
provides a powerful geometric structure including simple primitives and geometric
interfaces between them. This high-level object description is suited to idealizations
of primitives and to robust connections between them and remains
compatible with an assembly structure containing components and geometric
interfaces.
4.1 Introduction
Based on the analysis of DMU transformation requirements for FE assembly model
preparation in Chapter 1 as well as the analysis of prior research work in Chapter 2,
two procedures are essential to generate the mechanical model for the FEA of thin
structures:
• The identification of regions supporting geometric transformations such as simplifications
or idealizations. In Section 1.4.2, the analysis of thin mechanical
shell structures introduces a modeling hypothesis stating that there is no normal
stress in the thickness direction. This hypothesis is derived from the shape of
the object where its thin volume is represented by an equivalent medial surface.
The idealization process connects this hypothesis with the object shape. Section
2.3 illustrates that idealization operators require a shape analysis to check
the idealization hypothesis on the shape structure and to delimit the regions to
be idealized;
• The determination of interface areas between regions to be idealized. Section 2.3
showed that the interface areas contain the key information to robustly connect
idealized regions. In addition to idealizable areas, the determination of interfaces
is also essential to produce fully idealized models of components.
The proposed pre-processing approach, described in Chapter 3, is based on the
enrichment of the input DMU data. More precisely, the B-Rep representation of each
CAD component has to be geometrically structured to decompose its complex initial
shape into simpler ones. At the component level, we propose to create
a 3D solid decomposition into elementary volume sub-domains. The objective of this
decomposition is to provide an efficient enrichment of the component shape input to
apply the idealization hypotheses.
This chapter is dedicated to a shape decomposition method using the extraction of
a construction graph from B-Rep models [BLHF14b, BLHF14a]. Section 4.2 justifies
the extraction of generative construction processes suited for idealization processes¹.
Section 4.3 sets the modeling context and the hypotheses of the proposed approach.
Section 4.4 describes how to obtain generative processes of CAD components, starting
from the identification of volume primitives from a B-Rep object to the removal process
of these primitives. Finally, Section 4.5 defines the criteria to select the generative
processes generating a construction graph for idealization purposes. This construction
graph will be used in Chapter 5 to derive idealized models. In a next step, the component
perspective is extended to address large assembly models. Consequently, the
segmentation approach is analyzed with respect to CAD assembly representation in
Section 4.7.
4.2 Motivation to seek generative processes
This section presents the benefits of modeling processes to structure a B-Rep component.
It shows the limits of CAD construction trees in mechanical design and explains
why it is mandatory to set up generative processes adapted to idealization processes.
¹ Generative processes represent ordered sequences of processes emphasizing the shape evolution of
the B-Rep representation of a CAD component.
4.2.1 Advantages and limits of present CAD construction tree
As observed in Section 1.2.3, a mechanical part is progressively designed in a CAD software
using successive form features. This initial generation of the component shape
can be regarded as the task where the component shape structure is generated. Usually,
this object structure is described by a binary construction tree containing the
elementary features, or primitives, generating the object. This construction tree is
very efficient to produce a parameterized model of a CAD object. Indeed, the user
can easily update the shape of the object by modifying parameters defined within
a user-selected feature and then re-processing the subset of the construction tree located
after this primitive. As illustrated in Section 2.4, Robinson et al. [RAM∗06] show
that a construction tree with adapted features for FEA can be used to easily generate
idealized models.
Figure 4.1: An example of a shape generation process: (a) final object obtained after 34
modeling steps and viewed from top (T) and bottom (B), (b) some intermediate shapes
obtained after the i-th modeling step. The letter T or B appearing with the step number
indicates whether the object is viewed from top or bottom.
However, the construction tree produced during the design phase may not be suited
for the shape decomposition taking place at other stages of a PDP, e.g., during process
planning and FEA. Three main issues prevent the use of current CAD construction
trees for FEM pre-processing:
• The complexity of the final object shape and feature dependencies.
The concept of feature-based design eases the generation of an object shape by
adding progressively, and one by one, simple form features. This way, the user
starts from a simple solid, i.e., a primitive, and adds or removes volumes using
pre-defined features (extrusion, revolution, sweeping, . . . ) one after the other until
he, resp. she, has reached the desired shape of the object. As a consequence of
this principle “from simple to complicated”, the resulting construction tree can
be complex and contain numerous features. Because the user inserts one form
feature at a time, the construction tree is necessarily binary, i.e., each tree node
contains one form feature and the object shape obtained after combining this
form feature with the object resulting from the previous tree node. As an example,
Figure 4.1 illustrates this configuration with a rather complex component
where the user’s entire modeling process consists of 37 steps, some of them containing
multiple contours producing simultaneously several features. Two views,
defined as top and bottom in Figure 4.1b, show the major details of the object
shape. Figure 4.1a depicts some of the 34 steps involving either extrusion or
revolution operations and incorporating either material addition or removal as
complementary effects when shaping this object. The parent/child dependencies
between form features further increase the complexity of this construction
process. The suppression or modification of a parent feature is not always possible
due to geometric inconsistencies generated in subsequent tree steps when
parent/child dependencies cannot be maintained or when the object boundary
cannot produce a solid. This is particularly inconvenient in the context of FEM
pre-processing which aims at eliminating detail features to simplify component
shapes;
• The non-uniqueness and user dependence. Construction trees are not
unique, i.e., different users often generate different construction trees for the
same final shape. The choice of the sequence of features is made by the designer
and depends on his, resp. her, own interpretation of the shape structure of the
final object. In current industrial practices, specific modeling rules limit the
differences in construction tree generation but they are not dedicated to FEM
requirements as explained in Section 3.3;
• The construction tree availability. Construction trees contain information
which is very specific to each CAD system and each software has its own data
structures to represent this construction scheme. Most of the time, this information
is lost when transferring objects across CAD systems or even across the
different phases of a PDP. Practically, when using the STEP neutral format [ISO94,
ISO03], the definition of construction tree structures associated with parametric modeling
is not preserved. Indeed, to edit and modify a shape, the parametric relations taking
part in the construction tree would also need to be exported. This
is difficult to obtain, e.g., even during the upgrade of CATIA software [CAT14]
from V4 to V5, the transfer was not fully automatic and some information in
construction trees was lost.
As it appears in Figure 4.1, a shape generative process can be rather complex and,
even if it is available, there is no straightforward use or transformation of this process
to idealize this object (even though its shape contains stiffeners and thin areas that
can be modeled with plate or shell elements rather than volume ones). With respect to
the idealization objectives, it appears mandatory to set up another generative process
incorporating features or primitives whose shapes are close enough to those of stiffeners
and thin-wall areas.
4.2.2 A new approach to structure a component shape: construction
graph generation
Construction trees are important because an object submitted to a FEA preparation
process can be subjected to different simplifications at different levels of its construction
process. One key asset of these trees is their use of primitives that are available
in common industrial CAD software. Another lies in the storage of the shape evolution
from the initial simple primitive shape to the final object. This principle 'from
simple to complicated' matches the objective of using the tree contents for idealization
and simplification purposes. Indeed, obtaining simpler shapes through construction
processes could already reduce the number of geometric transformations. However,
because a CAD model currently comes with a single construction tree, which is not
always available and is complicated to modify, its direct use is difficult. The proposed
approach here, structuring the
shape of a component, consists in producing generative processes of its B-Rep model
that contain sets of volume primitives so that their shapes are convenient for FE simulation.
The benefits of extracting new generative processes, as ordered sequences
of processes emphasizing the shape evolution of the B-Rep representation of a CAD
component, are:
• To propose a compact shape decomposition adapted to the idealization
objectives. Extraction of compact generative processes aims at reducing their
complexity while getting a maximum of information about their intrinsic form
features. The proposed geometric structure decomposes an object into volume
sub-domains which are independent from each other and close enough to regions
that can be idealized. This segmentation method differs from the divide and conquer
approaches of [Woo14] because generative processes contain volume prim-
itives having particular geometric properties, e.g., extruded, revolved or swept
features. The advantage of these primitives is to ease the validation of the idealization
criteria. Indeed, one of their main dimensional characteristics is readily
available. For example, knowing the extrusion distance, revolution angle or sweep curve
reduces the primitive analysis to the 2D sketch of the feature. Moreover, interfaces
between primitives are identified during the extraction of each generative process
to extend the analysis of individual primitives through the whole object and to
enable the use of the engineer’s FE connection requirements between idealizable
regions;
• To offer the user series of construction processes. In a CAD system, a
feature tree is the unique available definition of the component's construction but
is only one among the various possible construction processes. Furthermore, in a simulation
context, it is difficult to get a representation of the simulation model that best matches
the simulation requirements because the engineer's know-how takes part in the simulation
model generation (see Section 3.4). Consequently,
the engineer may need to refer to several construction processes to
meet the simulation objectives as well as his, resp. her, shape transformation
requirements. Providing the engineer with several construction processes helps
him, resp. her, easily generate a simulation model. The engineer will be able
to navigate shape construction processes and obtain a simpler one within a construction
tree that meets the idealization requirements in the best possible way;
• To produce a shape parameterization independent from any construction
tree. The construction tree is a well-known concept for a user. Parent/child
dependencies between features, generated through sketches and their reference planes,
ease the interactive design process but create geometric dependencies that are difficult
for the user to understand. Performing modifications of a complex shape therefore remains
difficult, e.g., the cross influence between features located far away from each other
in the construction tree is not easy to anticipate. Considering the generation of a
construction graph, the proposed approach does not refer to parent/child dependencies
between features, i.e., features are geometrically independent of each other. The
morphological analysis required
(see Section 3.4) does not refer to a need for a parameterized representation of
components, i.e., each primitive does not need to refer to dependencies between
sketch planes. Components can be modified in a CAD software and their new geometry
can be processed again to generate a construction graph without referring
to the aforementioned dependencies.
As a conclusion, one can state that enabling shape navigation using primitive features
similar to those of CAD software is an efficient complement to the algorithmic approaches
reviewed in Chapter 2 and to construction trees. More globally, construction graphs can
support both efficiently.
Figure 4.2: An example of shape analysis and generative construction graph.
To this end, this chapter proposes to extract a construction graph from B-Rep
CAD models so that the corresponding generative processes are useful for mechanical
analysis, particularly when idealization processes are necessary. The graph is extracted
using a primitive removal operator that progressively simplifies the object's shape.
One could say that the principle is to go 'backward over time'. This characteristic
of construction trees is consistent with the objective of simplification and idealization
because the shapes obtained after these operations should get simpler. Figure 4.2
illustrates the extraction of a shape modeling process of a CAD component. Primitives
Pi are extracted from a sequence of B-Rep objects Mi which become simpler over time.
The set of primitives Pi generates a segmented representation of the initial object which
is used to derive idealized FE models.
The following sections detail the whole process of extraction of generative processes
from B-Rep shapes.
4.3 Shape modeling context and process hypotheses
Before describing the principles of the extraction of generative processes from a B-Rep
shape, this section sets the modeling context and the hypotheses of the proposed
approach.
4.3.1 Shape modeling context
As a first step, the focus is placed on B-Rep mechanical components being designed
using solid modelers. Looking at feature-based modeling functions in industrial CAD
systems, they all contain extrusion and revolve operations which are combined with
addition or removal of volume domains (see Figure 4.3a). The most common version of
the extrusion, as available in all CAD software, is defined with an extrusion direction
Figure 4.3: (a) Set of basic volume modeling operators, (b) sketch defining an extrusion
primitive in (a), (c) higher level volume primitive (slanted extrusion), (d) reference primitive
and its first ‘cut’ transformation to generate the object in (c).
orthogonal to a plane containing the primitive contour. Such an extrusion, as well as
the revolution, are defined here as the reference primitives. These feature-based B-Rep
operations can be seen as equivalent to regularized Boolean operations as available also
in common hybrid CAD modelers, i.e., same primitive shapes combined with union or
subtraction operators. Modelers also offer other primitives to model solids, e.g., draft
surfaces, stiffeners, or free-form surfaces from multiple sections. Even though we don’t
address these primitives here, it is not a limitation of our method. Indeed, draft surfaces,
stiffeners, and similar features can be modeled with a set of reference primitives
when extending our method to extrusion operations with material removal and revolutions.
Appendix B illustrates the simple features and Boolean operations available in CAD
software and shows that they can mainly be reduced to additive/subtractive extrusion
and revolution features to cover present software capabilities. Figure 4.3c
illustrates some examples, e.g., an extrusion feature where the extrusion direction is
not orthogonal to the sketching plane used for its definition. However, the resulting
shape can be decomposed into an extrusion orthogonal to a sketching plane and ‘cuts’
(see Figure 4.3d) if the generation of a slanted extrusion is not available or not used
straightforwardly. Indeed, these construction processes are equivalent with respect to
the resulting shape.
Another category of form features available from B-Rep CAD modelers are blending
radii. Generally, they have no simple equivalence with extrusions and revolutions.
Generated from B-Rep edges, they can be classified into two categories:
1- constant radius blends that can produce cylindrical, toroidal or spherical surfaces;
2- constant radius blends attached to curvilinear edges and variable radius blends.
Category 1 blends can be covered by extrusion and revolution primitives and incorporated
in their corresponding sketches (see Figure 4.3a). This family of objects is part of
the current approach. Category 2 blends are not yet addressed and are left for future
work. Prior work in this field [VSR02, ZM02, LMTS∗05] can be used to derive M from
the initial object MI to be analyzed, possibly with user’s interactions.
In summary, all reference primitives considered here are generated from a sketching
step in a plane defining at least one closed contour. The contour is composed of line
segments and arcs of circles (see Figure 4.3b). This is a consequence of the previous
hypothesis reducing the shapes addressed to closed surfaces bounded by planes, cylinders,
cones, spheres, tori, and excluding free-form shapes in the definition of the object
boundary. This is not really restrictive for a wide range of mechanical components
except for blending radii. The object M to be analyzed for shape decomposition is
assumed to be free of blending radii and chamfers that cannot be incorporated into
sketched contours. The generative processes therefore concentrate, as a first step, on
extrusion primitives in order to reduce the complexity of the proposed approach.
Further hypotheses are stated in the following sections.
4.3.2 Generative process hypotheses
Given a target object M to be analyzed, let us first consider the object independently
of the modeling context stated above. M is obtained through a set of primitives
combined together by adding or removing material. Combinations of primitives thus
create interactions between their bounding surfaces, which, in turn, produce intersection
curves that form edges of the B-Rep M. Consequently, edges of M contain
traces of generative processes that produced its primitives. Hence, following Leyton’s
approach [Ley01], these edges can be seen as memory of generation processes where
primitives are sequentially combined.
Current CAD modelers are based on strictly sequential processes because the user
can hardly generate simultaneous primitives without looking at intermediate results
to see how they combine/interact together. Consequently, B-Rep operators in CAD
modelers are only binary operators and, during a design process, the user-selected one
combines the latest generated primitive with the existing shape of M at the stage t of this
generative process. Additionally, CAD modelers providing regularized Boolean operators
reduce them to binary operators, even though they are n-ary ones, as classically
defined in the CSG approaches [Man88]. Here, the proposed approach does not make
any restriction on the number of primitives possibly generated 'in parallel', i.e., the
arity of the combination operators is n ≥ 2. The generative processes benefit from this
hypothesis by compacting the construction tree nodes. This property is illustrated in
the results section of this chapter (Section 4.6, Figure 4.22).
Figure 4.4: (a) Entities involved in an extrusion primitive: visible extrusion feature with its
two identical base faces Fb1 and Fb2. (b) Visible extrusion feature with its two different
base faces Fb1 and Fb2. (c) Visible extrusion feature with a unique base face Fb1 (detail of
Figure 4.1a - 34B).
Hypothesis 1: Maximal primitives
The number of possible generative processes producing M can be arbitrarily large,
e.g., even a cube can be obtained from an arbitrarily large number of extrusions of
arbitrarily small extent combined together with a union operator. Therefore, the concept
of maximal primitives is introduced so that the number of primitives is finite and as
small as possible for generating M.
A valid primitive Pi identified at a stage t using a base face Fb1 is said to be
maximal when no other valid primitive Pj at that stage, having F′b1 as base face,
can be entirely inserted in Pi (see Section 4.4.2 and Figure 4.4a): ∀Pj, Pj ⊄ Pi.
Fb1 is a maximal face as defined in Section 4.3.3.
Maximal primitives imply that the contour of a sketch can be arbitrarily complex,
which is not the case in current engineering practice, where the use of simple primitives
eases the interactive modeling process, the parameterization, and geometric constraint
assignments to contours. The concept of maximal primitive is analogous to the concept of
maximal volume used in [WS02, Woo03, Woo14]. Yet, this concept is not used in feature
recognition techniques [JG00]. Even if making use of maximal primitives considerably
reduces the number of possible generative processes, they are far from being unique for
M.
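To make Hypothesis 1 operational, a candidate primitive that is entirely contained in another candidate can be discarded. The following minimal sketch illustrates this filter; subtract_regularized and is_empty are hypothetical placeholders for regularized Boolean operations of a solid-modelling kernel (not an existing API), and the strategy shown is only one possible reading of the maximality test, assuming identical candidates have already been merged (cf. Hypothesis 3).

from typing import Any, List

Solid = Any  # placeholder for a kernel's volume handle (hypothetical)

def subtract_regularized(a: Solid, b: Solid) -> Solid:
    """Assumed kernel call: regularized Boolean subtraction a -* b (stub)."""
    raise NotImplementedError

def is_empty(s: Solid) -> bool:
    """Assumed kernel call: True when the solid encloses no volume (stub)."""
    raise NotImplementedError

def is_contained(p_j: Solid, p_i: Solid) -> bool:
    """Pj is entirely inserted in Pi iff Pj -* Pi is the empty solid."""
    return is_empty(subtract_regularized(p_j, p_i))

def keep_maximal_primitives(candidates: List[Solid]) -> List[Solid]:
    """Discard every candidate primitive entirely contained in another one,
    so that only maximal primitives remain (Hypothesis 1)."""
    maximal = []
    for j, p_j in enumerate(candidates):
        if not any(i != j and is_contained(p_j, candidates[i])
                   for i in range(len(candidates))):
            maximal.append(p_j)
    return maximal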
Hypothesis 2: Additive processes
Figure 4.5: Illustrations of two additive primitives: (a) an extrusion primitive and (b) a
revolution one. The mid-surfaces of both primitives lie inside their respective volumes.
We therefore make the further hypothesis that the generative processes we are
looking for are principally of type additive, i.e., they are purely based on a regularized
Boolean union operator when combining primitives at each stage t of generative modeling
processes. This hypothesis is particularly advantageous when intending to tailor
a set of generative processes that best fit the needs of idealization processes. Indeed,
idealized structures, such as mid-surfaces, lie inside such primitives, and connections
between primitives also locate the connections between their idealized representatives.
Figure 4.5 illustrates an extrusion and a revolution primitive. For both of them, the
3D solid of the primitive includes its mid-surface. Therefore, the idealized representation
of M can be essentially derived from each Pi and its connections, independently
of the other primitives in case of additive combinations. Figure 4.6 gives an example
where M can be decomposed into two primitives combined with a union (Figure 4.6b).
M can thus be idealized directly from these two primitives and their interface.
On the contrary, when allowing material removal, idealization transformations
are more complex to process, while the resulting volume shapes are identical. Figure
4.6c shows two primitives which, combined by Boolean subtraction, result also in
object (a). However, computing an idealization of (a) by combining idealizations of its
primitives in (c) is not possible.
Performing the idealization of M from its primitives strengthens this process compared
to previous work on idealization [CSKL04, Rez96, SSM∗10] of solids presented
in Section 2.3.1 for two reasons. Firstly, each Pi and its connections bound the 3D
location of and the connections with other idealized primitives. Secondly, different categories
of connections can be defined, which is important because idealization processes
still rely on the user’s know-how to process connections significantly differing from reference
ones. Chapter 5 explains in detail how to connect mid-surfaces using
a taxonomy of connections between extrusion primitives.
Hypothesis 3: Non trivial variants of generative processes
To further reduce the number of possible generative processes, the processes retained
should be non-trivial variants of processes already identified. For example, the
Figure 4.6: (a) Simple shape with idealizable sub-domains, (b) Primitives to obtain (a) with
an additive process, (c) Primitives to obtain (a) with a removal process.
same rectangular block can be extruded with three different face contours and directions
but they create the same volume. Two primitives generating the same shape are
considered as the same non-trivial primitive. If the resulting shape of two processes at
the j-th level of a construction is the same, these two processes are said to be equivalent
and are reduced to a single one; the object shape at the (j−1)-th level is then also unique.
These equivalent processes can be detected when comparing the geometric properties of
the contours generating this same shape. Other similar observations will be addressed
in the following sections when describing the criteria to select meaningful generative
processes.
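As an illustration of how trivial variants can be detected by comparing geometric properties, the sketch below groups candidate extrusions by a simple signature (volume and total boundary area computed from the contour area, perimeter and extrusion length). This is a hypothetical, necessary-but-not-sufficient test, not the thesis' actual comparison procedure; an exact test would compare the volumes with a regularized Boolean difference.

from dataclasses import dataclass
from itertools import groupby

@dataclass(frozen=True)
class ExtrusionCandidate:
    """Illustrative description of an extrusion by its sketch contour
    (area and perimeter) and its extrusion distance."""
    contour_area: float
    contour_perimeter: float
    length: float

    def signature(self, tol: float = 1e-9) -> tuple:
        # Volume and total boundary area are identical for trivial variants
        # of the same block; rounding against a tolerance builds a key.
        volume = self.contour_area * self.length
        boundary_area = 2.0 * self.contour_area + self.contour_perimeter * self.length
        return (round(volume / tol), round(boundary_area / tol))

def merge_trivial_variants(candidates):
    """Keep a single representative per group of candidates sharing the same
    signature (a necessary, not sufficient, equality test)."""
    keyed = sorted(candidates, key=ExtrusionCandidate.signature)
    return [next(group) for _, group in groupby(keyed, key=ExtrusionCandidate.signature)]

# The three extrusions producing the same 2 x 3 x 5 block collapse to one.
block = [ExtrusionCandidate(6.0, 10.0, 5.0),
         ExtrusionCandidate(10.0, 14.0, 3.0),
         ExtrusionCandidate(15.0, 16.0, 2.0)]
assert len(merge_trivial_variants(block)) == 1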
The above hypotheses aim at reducing the number of generative processes producing
the same object M while containing primitives suited to idealization transformations,
independently of the design process initially set up by engineers.
Conclusion
The overall approach can be synthesized through the process flow of Figure 4.7.
The input STEP file contains the B-Rep model M. A set of generative processes is
extracted that forms sets of construction trees, possibly producing a graph. Then,
application-dependent criteria are used to identify one or more construction trees
depending on the application needs. Here, we focus on criteria related to idealization
for FEA.
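A possible driver for this pipeline is sketched below; every function is a hypothetical stub standing for one stage of Figure 4.7 (STEP import, graph extraction of Section 4.4, process selection of Section 4.5 and the idealization of Chapter 5), not an existing API.

def read_step_brep(path: str):
    """Assumed STEP reader returning the B-Rep model M (stub)."""
    raise NotImplementedError

def extract_generative_graph(model):
    """Section 4.4: iterative identification and removal of primitives (stub)."""
    raise NotImplementedError

def select_generative_processes(graph, criteria):
    """Section 4.5: keep the construction trees matching the criteria (stub)."""
    raise NotImplementedError

def idealize(model, processes):
    """Chapter 5: derive the idealized FE model from the selected processes (stub)."""
    raise NotImplementedError

def prepare_for_idealization(step_path: str, criteria):
    """Driver following the process flow of Figure 4.7."""
    model = read_step_brep(step_path)                          # input B-Rep model M
    graph = extract_generative_graph(model)                    # generative process graph
    processes = select_generative_processes(graph, criteria)   # application-dependent selection
    return idealize(model, processes)                          # shape transformations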
4.3.3 Intrinsic boundary decomposition using maximal entities
In order to extract generative processes from the B-Rep decomposition of M, it is
important to have a decomposition of M, i.e., a topology description, that is intrinsic to
Figure 4.7: Pipeline producing and exploiting generative shape processes.
its shape. A B-Rep decomposition of an object is however not unique (and thus not
suitable), because it is subjected to two influences:
• Its modeling process, whether it is addressed forward during a design process
or backward as in the present work. Indeed, each operation involving a primitive
splits/joins boundary faces and edges of the solid. When joining adjacent
faces or edges, their corresponding surfaces or curves can be identical. Their
decomposition is thus not unique. However, CAD modelers may not merge the
corresponding entities, thus producing a boundary decomposition that differs although
the object shape is unchanged (see Figure 4.8a). For the purposes of the proposed approach,
such configurations of faces and edges must lead to a merging process so
that the object boundary decomposition is unique for a given shape, i.e., it is
intrinsic to the object shape;
• The necessary topological properties to set up a consistent paving of an object
boundary, i.e., the boundary decomposition must be a CW-complex. Consequently,
curved surfaces need to be partitioned. As an example, a cylinder is
decomposed into two half cylinders in most CAD modelers or is described with
a self connected patch sewed along a generatrix (see Figure 4.8b). In either case,
the edge(s) connecting the cylindrical patches are adjacent to the same cylindrical
surface and are not meaningful from a shape point of view. Hence, for the
purposes of the proposed approach, they must not participate in the intrinsic boundary
decomposition of the object.
Following these observations, the concepts of maximal faces and edges introduced
by [FCF∗08] are used here as a means to produce an intrinsic and unique boundary
decomposition for a given object M. Maximal faces are identified first. For each face
of M, a maximal face F is obtained by repeatedly merging an adjacent face Fa sharing
a common edge with F when Fa lies on a surface of the same type and with the same
parameters as F, i.e., the same underlying surface. F is maximal when no more faces Fa
can be merged with F. Indeed, maximal faces coincide with the 'c-faces' defined in [Sil81],
which have been proved to uniquely define M. Similarly, for each edge of M, a maximal edge E with
Figure 4.8: Examples of configurations where faces must be merged to produce a shape-intrinsic
boundary decomposition: (a) face decomposition due to the modeling process, (b) face
decomposition due to topological requirements.
adjacent faces F1 and F2 is obtained by repeatedly merging an adjacent edge Ea when
Ea is also adjacent to F1 and F2. Again, E is maximal when no more edge Ea can
be merged with E. As a consequence of these merging processes, it is possible to
end up with closed edges having no vertex or with closed faces having no edge. An
example for the first case is obtained when generating the maximal face of the cylinder
in Figure 4.8b. A sphere described with a single face without any edge and vertex is
an example for the second case.
Because of maximal edges without vertices and faces without edges, merging operations
are performed topologically only, i.e., the object’s B-Rep representation is left
unchanged. Maximal faces and edges are generated not only for the initial model M
but also after the removal of each primitive when identifying the graph of generative
processes. Consequently, maximal primitives (see Hypothesis 1) are based on maximal
faces and edges even if not explicitly mentioned throughout this document. Using the
concept of maximal faces and edges, the final object decomposition is independent of
the sequence of modeling operators.
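A minimal sketch of this merging process is given below, assuming that the face adjacency (pairs of faces sharing a common edge) and a predicate same_underlying_surface (same surface type and parameters) are provided by the B-Rep data structure; both are hypothetical placeholders. Faces are grouped with a small union-find, reflecting that the merge is purely topological.

from typing import Callable, Dict, Iterable, List, Tuple

def maximal_face_groups(faces: List[str],
                        adjacency: Iterable[Tuple[str, str]],
                        same_underlying_surface: Callable[[str, str], bool]
                        ) -> List[List[str]]:
    """Group B-Rep faces into maximal faces: two faces sharing a common edge
    are merged whenever they lie on the same underlying surface (same type,
    same parameters). Geometry is left untouched."""
    parent: Dict[str, str] = {f: f for f in faces}

    def find(f: str) -> str:
        while parent[f] != f:
            parent[f] = parent[parent[f]]   # path compression
            f = parent[f]
        return f

    for fa, fb in adjacency:                # pairs of faces sharing an edge
        if same_underlying_surface(fa, fb):
            parent[find(fa)] = find(fb)

    groups: Dict[str, List[str]] = {}
    for f in faces:
        groups.setdefault(find(f), []).append(f)
    return list(groups.values())

# Cylinder of Figure 4.8b: the two half-cylinder patches merge into one
# maximal face, the planar caps stay separate (the predicate is a stub here).
faces = ["cap_top", "cap_bottom", "half_cyl_1", "half_cyl_2"]
adjacency = [("cap_top", "half_cyl_1"), ("cap_top", "half_cyl_2"),
             ("cap_bottom", "half_cyl_1"), ("cap_bottom", "half_cyl_2"),
             ("half_cyl_1", "half_cyl_2")]
same_cyl = lambda a, b: a.startswith("half_cyl") and b.startswith("half_cyl")
assert len(maximal_face_groups(faces, adjacency, same_cyl)) == 3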
4.4 Generative processes
Having defined the modeling hypotheses and context in Section 4.3, this section
presents the principles of the construction of generative processes from a B-Rep object.
It explains how the primitives are identified and how to remove them from an
object M.
4.4.1 Overall principle to obtain generative processes
Preliminary phase
As stated in Section 4.3.1, a preliminary step of the method is to transform the initial
object MI into an object M free of blending radii. To this end, defeaturing functions
available in most CAD
Figure 4.9: Overall scheme to obtain generative processes.
Figure 4.10: An example illustrating the successive identification and removal of primitives.
systems are applied. This operation is a consequence of the modeling context defined
in Section 4.3.1. Even though these functions may not be sufficient and robust enough,
this is the current working configuration. In contrast to blending radii, most chamfers
are included in the present approach because they can be part of extrusion primitives
and hence, included in the sketched contours used to define extrusion primitives. Even
if CAD software provides specific functions for chamfers, these are devoted to the design
context; in general, basic extrusion operators with material addition or removal could
produce the same result. This analysis regarding chamfers shows the effect of
the concept of maximal primitives (see Hypothesis 1).
Main phase
Starting with the solid M, the generative processes are obtained through two phases:
• M is processed by iterative identification and removal of primitives. The objective
of this phase is to ‘go back in time’ until reaching root primitives for generative
processes. The result of this phase is a set of primitives;
• Based on hypotheses of Section 4.3.2, a set of generative processes is produced using
the primitives obtained at the end of the first phase to meet the requirements
of an application: here idealization (see Chapter 5).
Finally, the decomposition D of M into extrusion primitives is not limited to a single
construction tree but it produces a construction graph GD iteratively generated from
M. GD contains all possible non-trivial construction trees of M (see Hypothesis 3).
The process terminates whenever M is effectively decomposable into a set of extrusion
primitives. Otherwise, D is only partial and its termination produces either one or a
set of volume partitions describing the simplest objects D can reach.
Figure 4.9 summarizes the overall scheme just described. When generating
GD, we refer to M = M0 and evolutions M−j of it backward at the jth step of D.
Figure 4.10 illustrates the major steps of the extraction of a generative process graph,
i.e., from the primitive identification up to its removal from M, and will be further
explained in Sections 4.4.2 and 4.4.3.
4.4.2 Extrusion primitives, visibility and attachment
In order to identify extrusion primitives Pi in M = M0 and in its evolutions M−j,
backward at the j-th step of the generation of the generative process graph, it is mandatory
to define their geometric parameters as well as the hypotheses taken in the present work
(see Figure 4.4).
First of all, let us notice that a 'reference primitive' Pi never appears entirely
in M or M−j unless it is isolated like a root of a construction tree, i.e., Pi = M or
Pi = M−j . Apart from these particular cases, Pi are only partly visible, i.e., not all
faces of Pi are exactly matching faces of M−j . For simplicity, we refer to such Pi as
‘visible primitives’. Pi is the memory of a generative process that took place between
M−j and M−(j+1). Extracting Pi significantly differs compared to feature recognition
approaches [Rez96, LGT01, WS02, SSM∗10, JG00]. In feature recognition approaches,
Pi is identified through validity constraints with its neighboring attachment in M, i.e.,
faces and edges around Pi. These constraints limit the number of possible primitives
by looking for the best interpretation of some visible boundaries of the object M. Here,
identifying visible primitives enables the generation of reference ones having simpler
contours. Only the visible part of the primitive is used to identify the primitive in M,
without restricting the primitive to the visible boundaries of M. The proposed
identification process of Pi is more general: it does not integrate any validity constraint on
the attachment of Pi with M. With this constraint released, the process enables the identi-
fication of a greater number of primitives which can be compared with each other not
only through their attachment to M but also through their intrinsic shape complexity.
Definition of the primitive
The parameters involved in a reference extrusion Pi are the two base faces, F b1 and
F b2, that are planar and contain the same sketched contour where the extrusion takes
place. Considering extrusions that add volume to a pre-existing object, the edges of Fbi
are called contour edges, which are all convex. Indeed, Pi being a standalone primitive,
all its contour edges are convex. A convex edge is such that the outward normals of
its adjacent faces define an angle α where: 0 < α < π. When Pi belongs to M−j ,
the contour edges along which Pi is attached to M−j can be either convex or concave
depending on the neighborhood of Pi in M−j (see Figure 4.4a).
In the direction d of the extrusion, all the edges are straight line segments parallel
to each other and orthogonal to F bi. These edges are named lateral edges. Faces adjacent
to F bi are called lateral faces. They are bounded by four edges, two of them being
lateral edges. Lateral edges can be fictive lateral edges when a lateral face coincides
with a face of M−j adjacent to Pi (see Figure 4.4a). When lateral faces of Pi coincide
with adjacent faces in M−j , there cannot be edges separating Pi from M−(j+1) because
of the definition of maximal faces. Such a configuration refers to fictive base edges (see
Figure 4.11 with the definition of primitive P1).
Principle of primitive identification: Visibility
The visibility of Pi depends on its insertion in M−j and sets the conditions to identify
Pi in ∂M−j ². An extrusion primitive Pi can be visible in different ways depending on
its insertion in a current object M−j . The simplest visibility is obtained when Pi’s base
faces F bi in M−j exist and when at least one lateral edge connects F bi in M−j (see
Figure 4.4a and Figure 4.11, step 1).
More generally, the contour of F b1 and F b2 may differ from each other (see Figure
4.4b) or the primitive may have only one base face F b1 visible in M−j together
with one existing lateral edge that defines the minimal extrusion distance of F b1 (see
Figure 4.4c). Our two hypotheses on extrusion visibility are thus stated as follows:
• First, at least one base face F bi is visible in M−j , i.e., the contour of either F b1
or F b2 coincides with a subset of the attachment contour of Pi in M−j ;
• Second, one lateral edge exists that connects F bi in M−j . This edge is shared by
two lateral faces and one of its extreme vertices is shared by F bi.
² ∂M−j is the boundary of the volume object M−j, i.e., its B-Rep representation.
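To fix ideas, the sketch below records the entities of a reference extrusion and tests the two visibility hypotheses; the data structure and the pre-computed sets of faces and lateral edges of M−j are illustrative assumptions, not the actual implementation, and the second condition is simplified to the existence of any lateral edge of the primitive in M−j.

from dataclasses import dataclass, field
from typing import List, Optional, Set, Tuple

@dataclass
class ExtrusionPrimitive:
    """Illustrative record of the entities defining a reference extrusion
    (names follow Section 4.4.2); not a CAD-kernel data structure."""
    base_face_1: str                       # Fb1, planar face holding the contour
    base_face_2: Optional[str]             # Fb2, possibly not visible in M-j
    direction: Tuple[float, float, float]  # d, orthogonal to Fb1
    length: float                          # max generatrix length on visible lateral faces
    lateral_faces: List[str] = field(default_factory=list)
    lateral_edges: List[str] = field(default_factory=list)  # may include fictive edges

def is_visible_in(p: ExtrusionPrimitive,
                  faces_of_model: Set[str],
                  edges_of_model: Set[str]) -> bool:
    """Visibility hypotheses of Section 4.4.2:
    1) at least one base face Fbi coincides with a face of M-j;
    2) at least one lateral edge of the primitive exists in M-j (simplified
       here: the original condition requires that edge to connect Fbi)."""
    base_visible = (p.base_face_1 in faces_of_model or
                    (p.base_face_2 is not None and p.base_face_2 in faces_of_model))
    lateral_edge_exists = any(e in edges_of_model for e in p.lateral_edges)
    return base_visible and lateral_edge_exists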
[Figure 4.11 shows four steps applied to an example: 1 - find extrusion primitives, 2 - keep included primitives, 3 - find interfaces, 4 - remove primitives from the solid.]
Figure 4.11: An example illustrating the major steps to identify a primitive Pi and remove
it from the current model M−j .
Pi is entirely defined by Fbi and the extrusion length, obtained as the maximum length
of the generatrix of Pi extracted from its lateral faces partly or entirely visible in M−j.
Notice that the lateral edges mentioned may not be maximal edges when lateral faces
are cylindrical because maximal faces may remove all B-Rep edges along a cylindrical
area. These conditions defining the extrusion distance restrict the range of extrusion
primitives addressed compared to the use of the longest lateral segment existing in the
lateral faces attached to Fbi. However, this is a first step enabling the approach to address
a fair set of mechanical components and to validate its major concepts. The generalization
is left for future work. Figure 4.4b, c give examples
involving two or one visible base faces, respectively.
Attachment
An extrusion primitive Pi is attached to M−j in accordance with its visibility in M−j.
The attachment defines a geometric interface, IG, between Pi and M−(j+1), i.e., IG =
Pi ∩ M−(j+1). This interface can be a surface or a volume or both, i.e., a non-manifold
model. One of the simplest attachments occurs when Pi has its base faces
F b1 and F b2 visible. This means that Pi is connected to M−(j+1) through lateral faces
only. Consequently, IG is a surface defined by the set of lateral faces not visible in
Pi. Figure 4.4a illustrates such a type of interface (IG contains two faces depicted in
yellow).
Simple examples of attachment IG between Pi and M−(j+1) are given in Figure 4.4.
Figure 4.12: Example of geometric interface IG between Pi and M−(j+1): (a) surface type,
(b) volume type.
Figure 4.13: Collection of primitives identified from M−j : (a) Valid primitives included in
M−j , (b) invalid primitive because it is not fully included in M−j . Green edges identify the
contour of the base face of the primitive.
Figure 4.4a involves a surface interface and Figure 4.4b illustrates a volume one. Let us
notice that the interface between Pi and M−(j+1) in Figure 4.4b also contains a surface
interface, located at the bottom of the primitive, that is not highlighted. However, as we will see in
Section 4.5, all possible variants of IG must be evaluated to process the acceptable
ones.
In a first step, these visibility conditions can be translated directly into an algorithm
identifying the primitives Pi (procedure find_visible_extrusions of Algorithm 1). The
visibility of Pi does not refer to its neighboring faces in M−j. Next, the identified
primitives are subjected to the validity conditions described in the following section.
4.4.3 Primitive removal operator to go back in time
The purpose is now to describe the removal operator that produces a new model M−(j+1)
anterior to M−j . This removal operator is defined as a binary operator with Pi and
M−j as operands and M−(j+1) as result. In the context of a generative process, M−j
relates to a step j and M−(j+1) to a step (j + 1).
Characterization of interfaces
In order to be able to generate M−(j+1) once Pi is identified, it is necessary to reconstruct
faces adjacent to Pi in M−j so that M−(j+1) defines a volume. To this end, the
faces of M−j adjacent to Pi and IG must be characterized. Here, Pi is considered to be
adjacent to other subsets of primitives through one edge at least. The removal operator
depends on the type of IG. Due to the manifold property of M, two main categories of
interfaces have been identified:
1- IG is of surface type. In this category, the removal operator will have to create
lateral faces and/or the extension of Fb2 so that the extended face coincides with
Fb1. Indeed, this category needs to be subdivided into two sub-categories:
a- IG contains lateral faces of Pi only (see Figure 4.4a), or IG also contains an
extension of Fb2 and the edges of this extension are concave edges in M−(j+1);
b- IG may contain lateral faces of Pi, but it contains an extension of Fb2 and
the edges of this extension are fictive base edges in M−j. These edges would
be convex edges in M−(j+1) (see P1 in Figure 4.11);
2- IG contains at least one volume sub-domain.
In addition, considering that F b1 at least is visible and Pi is also visible (see Section
4.4.2), the attachment contour may not be entirely available to form one or more
edge loops (see Figure 4.4a). Also, IG can contain more than one connected component
when Pi resembles a handle connected to M−(j+1), which produces more than one
edge loop to describe the attachment of Pi to M−(j+1) in IG.
Validity
Whatever the category of interface, once Pi is identified and its parameters are set
(contour and extrusion distance), it is necessary to validate it prior to defining its interface
(step 2 of Figure 4.11). Let Pi designate the volume of the reference primitive, i.e.,
the entire extrusion. To ensure that Pi is indeed a primitive of M−j, the necessary
condition is formally expressed using regularized Boolean operators between these two
volumes:
(M−j ∪* Pi) −* M−j = ∅. (4.1)
This equation states that Pi intersects M−j only along the edge loops forming its attachment
to M−(j+1), i.e., Pi does not cross the boundary of M−j at locations other than
its attachment. The regularized Boolean subtraction states that limit configurations
producing common points, curve segments or surface areas between Pi and M−j at
other locations than the attachment of Pi are acceptable. This condition strongly reduces
the number of primitives over time. Figure 4.13 illustrates the list of 9 primitives
identified from an object M−j . 8 primitives in 4.13a satisfy the validity criterion as
they are included in M−j . The primitive in 4.13b is not fully included in M−j and is
Figure 4.14: Illustration of the removal of Pi with three different interface types: (a) type
1a, (b) type 1b, (c) type 2.
removed from the set. Another example in Figure 4.11 at step 2 shows that primitives
P2 and P3 can be discarded.
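Condition (4.1) translates directly into a test based on regularized Boolean operations; in the sketch below, union_regularized, subtract_regularized and is_empty are hypothetical stubs for kernel calls, not an existing API.

def union_regularized(a, b):
    """Assumed kernel call: regularized Boolean union a U* b (stub)."""
    raise NotImplementedError

def subtract_regularized(a, b):
    """Assumed kernel call: regularized Boolean subtraction a -* b (stub)."""
    raise NotImplementedError

def is_empty(solid) -> bool:
    """Assumed kernel call: True when the solid encloses no volume (stub)."""
    raise NotImplementedError

def is_valid_primitive(m_j, p_i) -> bool:
    """Validity condition (4.1): the full extrusion volume Pi must not add
    material outside M-j, i.e. (M-j U* Pi) -* M-j must be empty."""
    return is_empty(subtract_regularized(union_regularized(m_j, p_i), m_j))

Since (M−j ∪* Pi) −* M−j equals Pi −* M−j, this test amounts to requiring that the whole extrusion volume Pi be included in M−j, which is consistent with the valid and invalid primitives of Figure 4.13.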
Removal of Pi
The next step is the generation of M−(j+1) once Pi has been identified and removed
from M−j. Depending on the type of IG, some faces of Pi may be added to ensure that
M−(j+1) is a volume (see Figure 4.11 steps 3 and 4). For each category of interface
between Pi and M−j, the removal operation is described as follows (a set-level sketch of
Equation (4.2) is given after this list):
• Type 1a: If IG is of type 1a, then the faces adjacent to the contour edges of F b1
are orthogonal to F b1. These faces are either planar or cylindrical. IG contains
the faces extending these faces, Fa1 , to form the lateral faces of Pi that were
‘hidden in M−j ’. Edges of the attachment of Pi belonging to lateral faces of Pi
can be lateral edges (either real or fictive ones) or arbitrary ones. Lateral edges
bound faces in Fa1 , arbitrary edges bound the extension of the partly visible
lateral faces of Pi, they belong to: Fa2 . Then, IG may contain the extension of
F b2 called Fa3 such that: F b2 ∪ Fa3 = F b1. Then:
∂M−(j+1) = (∂M−j − ∂Pi) ∪ (Fa1 ∪ Fa2 ∪ Fa3 ), (4.2)
where ∂M−j is the set of connected faces bounding M−j , ∂Pi is the set of connected
faces bounding the visible part of Pi. ∂M−(j+1) defines a closed, orientable
surface, without self-intersection. M−(j+1) is therefore a volume. Figure 4.14a
and Figure 4.15 illustrate this process for an interface of type 1a;
• Type 1b: If IG is of type 1b, IG contains a set of faces extending lateral faces of
Pi: Fa1 . To reduce the description of the various configurations, let us focus on
the key aspect related to the extension of F b2 contained in IG. If this extension
Figure 4.15: Illustration of the removal of Pi for interface of surface type 1a and generation
of ∂M−(j+1) with the extension of lateral and base faces.
can be defined like Fa3 above, it has to be observed that fictive edges of this
extension in M−j are replaced by convex edges in M−(j+1), i.e., edges of the
same type (convex) as their corresponding edges in F b1 (see Figure 4.11 step 3
left image). Without going into details, these fictive edges can be removed to
simplify the contour of Pi since they bring unnecessary complexity to Pi and
do not affect the complexity of M−(j+1). In addition to progressively simplifying
the object's shape, reducing the complexity of primitives' contours is a way to
obtain primitives having a form as simple as possible. The corresponding effect
is illustrated on Figure 4.11 steps 3 and 4 and on Figure 4.14b. This contour
simplification can influence the contents of the sets Fa1 and Fa3 above but it has
no impact on the integrity of the volume M−(j+1) obtained;
• Type 2: If IG belongs to category 2, it contains at least one volume sub-domain.
Here again the diversity of configurations can be rather large and it is not intended
to give a detailed description of this category. A first condition to generate a
volume interface relates to surfaces adjacent to Pi. If S is the extension of such
a surface and S ∩* Pi ≠ ∅, S may contribute to the generation of a volume
sub-domain. Then, each of these surfaces has to be processed. To this end, all
the edges attaching Pi in M−(j+1) and bounding the same surface in M−(j+1)
are grouped together since they form a subset of the contour of faces possibly
contributing to a volume sub-domain. These groups are named Ea. Such an
example of edge grouping is given in Figure 4.14b where e1 and e2 are grouped
because of their adjacency between Pi and the same cylindrical surface. Ea,
together with other sets of edges are used to identify loops in S that define a
volume sub-domain of IG that must satisfy validity conditions not described here
for the sake of conciseness. Figure 4.16 illustrates the identification of a volume
interface: S divides Pi into two volume sub-domains and generates a volume
interface.
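At the level of face sets, the type 1a removal of Equation (4.2) can be sketched as follows; the face identifiers and the pre-computed sets Fa1, Fa2 and Fa3 are assumed inputs (building their actual geometry is the modeller's job), so this is only a bookkeeping illustration.

from typing import Set

def boundary_after_type_1a_removal(boundary_m_j: Set[str],
                                   visible_faces_p_i: Set[str],
                                   f_a1: Set[str],
                                   f_a2: Set[str],
                                   f_a3: Set[str]) -> Set[str]:
    """Equation (4.2) stated on face sets:
    dM-(j+1) = (dM-j - dPi) U (Fa1 U Fa2 U Fa3), where
      Fa1: lateral faces of Pi hidden in M-j, built by extending adjacent faces,
      Fa2: extensions of the partly visible lateral faces of Pi,
      Fa3: extension of Fb2 so that Fb2 U Fa3 matches Fb1."""
    return (boundary_m_j - visible_faces_p_i) | f_a1 | f_a2 | f_a3

The resulting face set must describe a closed, orientable surface without self-intersection for M−(j+1) to be a volume, as stated above.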
There may be several valid volume sub-domains defining alternative sets of faces
to replace the visible part of Pi, ∂Pi, in ∂M−j by sets of faces that promote either the
Figure 4.16: Illustration of the removal of Pi containing a volume interface of type 2.
extension of surfaces adjacent to Pi or the imprint of Pi in M−(j+1) with the use of faces
belonging to the hidden part of Pi in M−j . All the variants are processed to evaluate
their possible contribution to the generative process graph.
Even if, in a general setting, there may be several variants of IG to define M−(j+1),
these variants always produce a realizable volume. This differs from the half-space
decomposition approaches studied in [SV93, BC04], where complements to the half-spaces
derived from the initial boundary were needed to produce a realizable volume.
4.5 Extracting the generative process graph
Having defined the primitive removal operator, the purpose is now to incorporate constraints
on variants of IG so that a meaningful set of models M−j , j > 0, can be
generated to produce a generative process graph.
4.5.1 Filtering out the generative processes
As mentioned earlier, the principle of the proposed approach is to ‘go back in time’
from model M to single primitives forming the roots of possible construction trees. The
main process to select primitives to be removed from M−j is based on a simplification
criterion.
Primitive selection based on a shape simplicity concept
Figure 4.17: Illustration of the simplicity concept used to filter out generative processes.
The number of maximal faces is reduced much more when removing P1 than P2 or P3.
Any acceptable primitive removal at step j of the graph generation must produce a
transformation of M−j into k objects M−(j+1)k using IGk , one of the variants of IG, such
that M−(j+1)k is simpler than M−j . This simplicity concept is a necessary condition
for the graph generation to converge toward a set of construction trees having a single
primitive as root. Consequently, the simplicity concept applied to the transition between
M−j and M−(j+1)k is sufficient to ensure the convergence of the graph generation
process.
The shape simplification occurring between M−j and M−(j+1)k can be defined as
follows. First of all, it has to be considered that ∂M−j and ∂M−(j+1)k contain maximal
faces and edges. In fact, after Pi is removed and replaced by IGk to produce M−(j+1)k ,
its boundary decomposition is re-evaluated to contain maximal faces and edges only.
Then, let nj be the number of (maximal) faces in M−j and n(j+1)k be the same quantity
for M−(j+1)k , the quantity δjk:
δjk = nj − n(j+1)k (4.3)
characterizes the shape simplification under the variant IGk if:
δjk ≥ 0. (4.4)
This condition is justified because it enforces a ‘diminishing number of maximal faces
over time’, which is an intrinsic quantity to each shape.
Figure 4.17 illustrates the simplicity criterion between three primitives P1, P2, and
P3 to be removed from a solid M−j . M−j , the initial solid, contains nj = 8 (maximal)
faces. When removing P1 from M−j , the resulting solid M−(j+1)1 contains n(j+1)1 = 4
(maximal) faces. Similarly, the solids resulting from the removal of P2 and P3 contain
respectively n(j+1)2 = 7 and n(j+1)3 = 6 (maximal) faces. As a result, the primitive P1
is selected because the quantity δj1 = 4 is greater than δj2 = 1 and δj3 = 2. By removing
P1, the resulting object M−(j+1) is simpler than the ones obtained with P2 or P3.
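The selection rule of Equations (4.3) and (4.4) reduces to a few lines; the sketch below replays the numbers of Figure 4.17 and is only an illustration of the criterion, not of the actual implementation.

def face_reduction(n_j: int, n_after: int) -> int:
    """delta_jk = n_j - n_(j+1)k (Equation 4.3): drop in the number of maximal
    faces produced by removing a primitive with variant IGk."""
    return n_j - n_after

def select_simplest_removal(n_j: int, variants: dict) -> str:
    """Keep the variants satisfying delta_jk >= 0 (Equation 4.4) and pick the
    one simplifying the shape the most."""
    admissible = {name: face_reduction(n_j, n_after)
                  for name, n_after in variants.items()
                  if face_reduction(n_j, n_after) >= 0}
    return max(admissible, key=admissible.get)

# Numbers of Figure 4.17: the solid has 8 maximal faces; removing P1, P2 or P3
# leaves 4, 7 and 6 maximal faces, so P1 gives the largest reduction (4).
assert select_simplest_removal(8, {"P1": 4, "P2": 7, "P3": 6}) == "P1"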
4.5.2 Generative process graph algorithm
Having defined the condition to evolve backward in the generative process graph, the
graph generation is summarized with algorithm 1.
Algorithm 1 Extract generative process graph
1: procedure Extract_graph ▷ Main procedure to extract the generative processes of a solid M
2:   input: M
3:   node_list ← root; current_node ← root;
4:   arc_list ← nil; current_arc ← nil; node_list(0) = M
5:   while size(node_list) > 0 do ▷ Stop when all solids M−j reach a terminal primitive root
6:     current_node = last_element_of_list(node_list)
7:     M−j = get_solid(current_node)
8:     config_list = Process_variant(M−j)
9:     compare_config(get_all_config(graph), config_list) ▷ Compare new variants with the existing graph nodes
10:     for each config in config_list do
11:       M−(j+1) = remove_primitives(M−j, config) ▷ Remove the identified primitives from M−j
12:       node = generate_node(M−(j+1), config)
13:       add_node(graph, node)
14:       arc = generate_arc(node, current_node)
15:       add_arc(graph, arc)
16:       append(node_list, node)
17:     end for
18:     remove_element_from_list(node_list, current_node)
19:   end while
20: end procedure
21: procedure config_list = Process_variant(M−j) ▷ Process each variant M−j to go 'backward in time'
22:   initialize_primitive_list(prim_list)
23:   ext_list = find_extrusion(M−j)
24:   for each Pi in ext_list do
25:     Pi = simplify_prim_contour(Pi, M−j)
26:     interf_list = generate_geom_interfaces(Pi, M−j)
27:     interf_list = discard_complex(interf_list, Pi, M−j)
28:     if size(interf_list) = 0 then
29:       remove_from_list(Pi, ext_list)
30:     end if
31:     append(prim_list, interf_list(i))
32:   end for
33:   sort_primitive(prim_list)
34:   config_list = generate_independent_ext(prim_list, M−j)
35: end procedure
36: procedure ext_list = find_extrusion(M−j) ▷ Find sets of primitives to be removed from M−j
37:   ext_list = find_visible_extrusions(M−j)
38:   ext_list = remove_ext_outside_model(M−j, ext_list) ▷ Reject primitives not totally included in M−j
39:   ext_list = remove_ext_included_ext(ext_list) ▷ Process only maximal primitives
40: end procedure
The main procedure Extract graph of Algorithm 1 processes node list, which contains the current variants of the model at the current step ‘backward in time’, using the procedure Process variant, and compares the new variants to the existing graph nodes using compare config. If variants are identical, graph nodes are merged, which creates cycles. Then, Extract graph attaches to a given variant the tree structure corresponding to the new, simpler variants derived from M−j. The graph is completed when there is no more variant to process, i.e., node list is empty. Here, the purpose is to remove (using remove primitives) the largest possible number of primitives Pi whose interfaces IGk do not overlap each other, i.e., ∀(i, j, k, l), i ≠ j, IGk ∈ Pi, IGl ∈ Pj,
Figure 4.18: Selection of primitives: (a) Maximal primitive criterion not valid for P2 and P3
because they are included in P1, (b) two dependent primitives with common edges in red.
IGl ∩ IGk = ∅, otherwise δjk would not be meaningful. Selecting the largest possible number of Pi and assigning them to a graph node is mandatory to produce a compact graph. Each such node expresses the fact that all its Pi could be removed, one by one, in an arbitrary order, which avoids describing trivial ordering changes. The primitive removal operator, described in Section 4.4.3, not only generates simpler solid shapes but also simplifies the primitives' contours (using simplify prim contour). Both simplification effects considerably reduce the complexity of the extracted generative processes compared to the initial construction tree of the CAD component.
To process each variant M−j of M, Process variant starts with the identification of valid visible extrusion primitives in M−j using find extrusion (see Sections 4.4.2 and 4.4.3, respectively). However, to produce maximal primitives (see Hypothesis 1), all valid primitives which can be included into others (because their contour or their extrusion distance is smaller) are removed (remove ext included ext). Figure 4.18a shows two primitives P2 and P3 included in a maximal primitive (see Hypothesis 1) P1.
Once valid maximal primitives (see Hypothesis 1) have been identified, processing the current variant M−j carries on with contour simplification (simplify prim contour), provided it does not increase the shape complexity of M−(j+1) (see Section 4.4.3). Then, all the valid geometric interfaces IGk of each primitive are generated with generate geom interfaces (see Section 4.4.3), and interfaces IGk increasing the shape complexity are discarded with discard complex to ensure convergence (see Section 4.5.1). Sets of independent primitives are ordered to ease the user's navigation in the graph. As illustrated in Figure 4.18b, two primitives are independent if there is no geometric intersection between them.
4.6 Results of generative process graph extractions
The process described in Section 4.5 has been applied to a set of components whose shapes are compatible with extrusion processes, to stay consistent with
Figure 4.19: Extraction of generative processes for four different components: a, b, c, d. Orange sub-domains highlight the set of visible primitives removed at each step of the graph generation. The construction graph reduces to a tree for each of these components. (a) T and B indicate Top and Bottom views to locate easily the primitives removed; the other components use a single view.
Algorithm 1, though they are industrial components. The results have been obtained automatically using Algorithm 1 implemented in Python with bindings to the Open Cascade (OCC) library [CAS14]. The complexity of Algorithm 1 is O(n²). Regarding computation time, the most time-consuming operations lie in the procedure find extrusion, which uses Boolean operations to verify the validity of each primitive. Therefore, the practical performance of the algorithm depends on the robustness and complexity of the Boolean operators. The statistics given are the number of calls, nB, to a generic Boolean operator available in the OCC library [CAS14], the total number of visible primitives (find visible extrusions), nv, and the final number of Pi in the graph, np.
Results on industrial shapes
Figure 4.19 shows the generative processes extracted from four different and rather
simple components. They are characterized by triples (nB; nv; np), (2183; 220; 8),
(9353; 240; 31), (8246; 225; 15), (1544; 132; 6), for (a), (b), (c) and (d), respectively.
The graph structure reduces to a tree for each of them. It shows that merging all extrusions performed in parallel into a single node can be achieved and results in a compact representation. These results also show the need for a constraint that can be formalized as follows: configurations produced by generate independent ext must be such that each variant M−(j+1)k generated from M−j contains a unique connected component, as M does. However, this has not been implemented yet. This continuity constraint expresses the fact that M is a continuous medium and its design process follows this concept too. Figure 4.21 illustrates this constraint on the construction graph of a simple solid. The object M−11 is composed of 5 solids, which do not represent a continuum domain. Consequently, every transformation stage must preserve a single connected component to ensure that any simplified model, i.e., any graph node, can stand as a basis for an idealization process. Then, it is up to the idealizations and their hypotheses to relax this constraint, e.g., when replacing a primitive by kinematic boundary conditions to express a rigid body behavior attached to one or more primitives in the graph.
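A minimal sketch of how such a continuity check could be automated, assuming the pythonocc-core bindings to Open Cascade used for the prototype (module names may differ between versions); this is an illustration, not the thesis implementation:

from OCC.Core.TopExp import TopExp_Explorer
from OCC.Core.TopAbs import TopAbs_SOLID

def count_solids(shape):
    # A Boolean cut may return a compound made of several solids.
    explorer, n = TopExp_Explorer(shape, TopAbs_SOLID), 0
    while explorer.More():
        n += 1
        explorer.Next()
    return n

def satisfies_continuity_constraint(m_j1):
    # Reject variants such as M-1_1 in Figure 4.21, which contains 5 solids.
    return count_solids(m_j1) == 1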
Figure 4.20 shows the graph extracted from the component analyzed in Figure 4.1.
It is characterized by (111789; 1440; 62). Two variants appear at step 4 and lead to
the same intermediate shape at step 8. It effectively produces a graph structure. It
can be observed that the construction histories are easier to understand for a user than
the one effectively used to model the object (see Figure 4.1). Clearly, the extrusion
primitives better meet the requirements of an idealization process and they are also
better suited to dimension modification processes as mentioned in Section 4.1. The
current implementation of Algorithm 1 uses high-level operators, e.g., boolean operations,
rather than dedicated ones. This implementation limits the time reduction which
could be achieved compared to the interactive transformations. Issues also lie in the robustness of CAD Boolean operators, which rely on quite complex modeling techniques. In future work, specific operators could be developed instead of Boolean ones for a more efficient implementation.
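To make the previous remark concrete, the following hedged sketch (again assuming pythonocc-core, and using B-Rep faces as a rough stand-in for maximal faces) emulates a primitive removal with a Boolean cut and evaluates δjk by counting faces; it is the cost and robustness of such Boolean calls that dominate the running time of Algorithm 1:

from OCC.Core.gp import gp_Pnt
from OCC.Core.BRepPrimAPI import BRepPrimAPI_MakeBox
from OCC.Core.BRepAlgoAPI import BRepAlgoAPI_Fuse, BRepAlgoAPI_Cut
from OCC.Core.TopExp import TopExp_Explorer
from OCC.Core.TopAbs import TopAbs_FACE

def count_faces(shape):
    explorer, n = TopExp_Explorer(shape, TopAbs_FACE), 0
    while explorer.More():
        n += 1
        explorer.Next()
    return n

plate = BRepPrimAPI_MakeBox(100.0, 50.0, 5.0).Shape()
boss = BRepPrimAPI_MakeBox(gp_Pnt(40.0, 20.0, 5.0), 10.0, 10.0, 20.0).Shape()
m_j = BRepAlgoAPI_Fuse(plate, boss).Shape()      # plays the role of M-j
m_j1 = BRepAlgoAPI_Cut(m_j, boss).Shape()        # candidate M-(j+1): boss removed
delta = count_faces(m_j) - count_faces(m_j1)     # expected to be positive here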
Figure 4.20: Construction graph GD of a component. Orange sub-domains indicate the removed primitives Pi at each node of GD. Labels M−jk indicate the step number j when ‘going back over time’ and the existence of variants k, if any. Arrows describe the successive steps of D. Arcs of GD are obtained by reversing these arrows to produce construction trees. Steps M−61 and M−62 differ because of the distinct lengths L1 and L2.
Figure 4.21: Illustration of the continuity constraint with two construction processes of an
object M. The generated object M−11 is composed of 5 independent solids which represent
a non continuum domain. Object M−12 contains one solid which represents a continuum
domain.
Equivalence between a graph representation and a set of construction trees
GD is also a compact representation of a set of construction processes. Figure 4.22 illustrates the equivalence between a set of construction trees and the graph representation GD. There, the set of primitives removed from M−j to obtain M−(j+1) is characterized by the edge α−j,−(j+1) of GD that connects these two models. The cardinality of this set of primitives is nj. If these nj primitives are related to different sketching planes, they must be attached to nj different steps of a construction tree in a CAD software. Without any complementary criterion, the ordering of these nj primitives can be achieved in nj! different ways, each involving nj additional nodes and edges to represent the corresponding construction tree. Here, this set is compacted into a single graph edge α−j,−(j+1) in GD between M−j and M−(j+1). Furthermore, the description of this set of tree structures would require the expansion of α−j,−(j+1) into the nj! construction tree structures ending up at M−j; this modification of GD would generate new cycles in GD between M−j and M−(j+1). Indeed, the graph structure of GD is much more compact than the construction tree of CAD software. All the previous results and observations show that GD is a promising basis for getting a better insight into a shape structure and evaluating its adequacy for idealizations.
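As a small numerical illustration of this compaction (hypothetical primitive labels), the nj primitives carried by one graph edge expand into nj! ordered construction sequences:

from itertools import permutations
from math import factorial

primitives = ["P5", "P6", "P7", "P8"]                 # hypothetical labels, nj = 4
orderings = list(permutations(primitives))            # the nj! construction sequences
assert len(orderings) == factorial(len(primitives))   # 24 trees for a single edge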
4.7 Extension of the component segmentation to
assembly structure segmentation
This section explains how the generative processes used for a single B-Rep component can be adapted to assembly structures made of several B-Rep components.
Figure 4.22: A set of CAD construction trees forming a graph derived from two consecutive
construction graph nodes.
Figure 4.23: Illustration of the compatibility between the component segmentation (a) and
assembly structure segmentation (b).
Equivalence between generative processes of components and assembly structures
As explained in Section 1.3.1, a CAD assembly structure contains a set of volume B-Rep
components located in a global reference frame. The method of Jourdes et al. [JBH∗14]
enriches the assembly with geometric interfaces between components. Shahwan et
al. [SLF∗12, SLF∗13] further enrich the results of Jourdes et al. with functional designation
of components. As a result, the final assembly model is composed of a set of
3D solid components connected to each other through functional interfaces.
An equivalence can be made between this enriched assembly structure (see Figure
4.23b) and generative processes of components. Indeed, a solid decomposition
of each component can be derived from its generative processes expressed with GD.
It provides intrinsic structures of components made of 3D solid primitives linked by
geometric interfaces (see Figure 4.23a). These structures are compatible with the assembly
structure also described using 3D solids and interfaces. The decomposition of
each component, GD, can be integrated into an assembly graph structure. Figure 4.24a
illustrates an assembly of two components C1 and C2 connected by one assembly interface I1,2. In Figure 4.24b, each component is subdivided into two primitives, (P1,1, P1,2) and (P2,1, P2,2) respectively, linked by the geometric interfaces I1.1/1.2 and I2.1/2.2, respectively. Now, the assembly structure can be represented by a graph GA where nodes represent the assembly components and edges contain the functional assembly interfaces. The solid decomposition of each component Ci also constitutes a graph structure GDi which can be nested into the nodes of GA, see Figure 4.24c, as a first step.
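A minimal data-structure sketch (plain Python, illustrative only) of this nesting, following the two-component example of Figure 4.24:

from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)    # node id -> payload
    edges: list = field(default_factory=list)    # (node id, node id, interface)

# G_D1 and G_D2: primitives and geometric interfaces inside each component.
g_d1 = Graph({"P1.1": "extrusion", "P1.2": "extrusion"}, [("P1.1", "P1.2", "I1.1/1.2")])
g_d2 = Graph({"P2.1": "extrusion", "P2.2": "extrusion"}, [("P2.1", "P2.2", "I2.1/2.2")])

# G_A: components as nodes, each carrying its nested G_Di; assembly interfaces as edges.
g_a = Graph({"C1": g_d1, "C2": g_d2}, [("C1", "C2", "I1/2")])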
Figure 4.24: Insertion of the interface graphs between primitives obtained from component
segmentations into the graph of assembly interfaces between components GA: (a) The assembly
structure with its components and assembly interfaces between these components, (b) the
components segmented into primitives and interfaces between these primitives forming GDi ,
(c) the graph of the final enriched assembly model.
Advantages for the shape analysis of components and assemblies
The compatibility of the component segmentation with the assembly graph structure
is a great benefit for the analysis algorithms dedicated to the determination of
the simulation modeling hypotheses. Considering sub-domains and interfaces as input
data to a general framework enabling the description of standalone components as
well as assemblies, the analysis algorithms can be applied at the level of a standalone
component as well as at the assembly level. This property extends the capabilities
of the proposed FEA pre-processing methods and tools from the level of standalone
components to an assembly level and contributes to the generation of FE analyses of
assembly structures.
However, it has to be pointed out that the nesting mechanism of GA and GDi
has been briefly sketched and a detailed study is required to process configurations
in which component interfaces are not exactly nested into component primitive faces,
e.g., interface I1,2 that covers faces of P2,1 and P2,2. Additionally, interfaces used for
illustration are all of type surface, whereas interfaces between primitives can be of
types surface or volume, and geometric interfaces between components can be of type contact or interference. Although, in both cases, this leads to common surfaces or volumes, the detailed study of these configurations is left to future work to obtain a thorough validation.
4.8 Conclusion
This chapter has described a new approach to decompose a B-Rep shape into volume
sub-domains corresponding to primitive shapes, in order to obtain a description that
is intrinsic to this B-Rep shape while standing for a set of modeling actions that will
be used to identify idealizable sub-domains.
Construction trees and shape generation processes are common approaches to model
mechanical components. Here, it has been shown that construction trees can be extracted
from the B-Rep model of a component. Starting with a B-Rep object free of
blends, the proposed approach processes it by iteratively identifying and removing a
set of volume extrusion primitives from the current shape. The objective of this phase
is to ‘go back in time’ until reaching root primitives of generative processes. As a
result, a set of non-trivial generative processes (construction trees) is produced using
the primitives obtained at the end of this first phase.
It has been shown that construction trees are structured through a graph to represent the non-trivial collection of generative processes that produce the input B-Rep model. This graph contains non-trivial construction trees in the sense that neither the variants of extrusion directions producing the same primitive nor the combinatorial variants describing the binary construction trees of CAD software are encoded, i.e., material addition operations that can be conducted in parallel are grouped into a single graph node to avoid describing their combinatorial combinations when primitives are added sequentially, as in CAD software. Thus, each node in the construction graph can be associated with simple algorithms to generate the trivial construction variants of the input object.
The proposed method includes criteria which generate primitives with simple shapes and which ensure that the shape of intermediate objects is simplified after each primitive suppression. These properties guarantee that the algorithm converges.
Finally, a graph of generative processes of a B-Rep component is a promising basis to gain a better insight into a shape structure and to evaluate its adequacy for idealizations. It has been illustrated that this process can also be extended to the assembly context by nesting the primitive-interface structure with respect to the component-interface one. The next Chapter 5 describes how a construction graph can be efficiently used in an idealization process.
Chapter 5
Performing idealizations from
construction graphs
Benefiting from the enrichment of a component shape with its construction graph,
this chapter details the proposed morphological approach and the idealization
process to generate idealized representations of the primitives of a component shape.
Based on this automated decomposition, each primitive is analyzed in order to
define whether it can be idealized or not. Subsequently, geometric interfaces
between primitives are taken into account to determine more precisely the idealizable
sub-domains. These interfaces are further used to process the connections
between the idealized sub-domains generated from these primitives to finally
produce the idealized model of the initial object. It is also described how the idealization process can be extended to assembly models.
5.1 Introduction
According to Section 1.5.4, shape transformations taking place during an assembly simulation
preparation process interact with simulation objectives, hypotheses, and shape
transformations applied to standalone components and to their assembly interfaces.
Section 2.3 underlines the value of a shape analysis prior to the application of these
transformations to characterize their relationship with geometry adaptation and FEA
modeling hypotheses, especially in the scope of idealization processes which need to
identify the candidate regions for idealization.
As explained in Chapter 2, prior research work has concentrated on identifying idealizable areas rather than producing simple connections between sub-domains [LNP07b, SSM∗10, MAR12, RAF11]. Recent approaches [CSKL04, Woo14] subdivide the input CAD model into simpler sub-domains. However, these segmentation algorithms do not aim at verifying the mechanical hypotheses of idealization processes, and the features identified do not necessarily produce solids appropriate for dimensional reduction operators. The idealization process proposed in this chapter benefits from the shape
Figure 5.1: From a construction graph of a B-Rep shape to a fully idealized model for FEA.
structure produced by construction graphs generated from a component as a result of the process described in Chapter 4 and containing volume extrusion primitives which may already be suited to idealization processes. This construction graph directly offers
the engineer different segmentation alternatives to test various idealization configurations.
Starting from a construction graph of a B-Rep model, this chapter describes a
morphological analysis approach to formalize the modeling hypotheses for the idealization
of a CAD component. This formalization leads to the generation of a new
geometric structure of a component that is now dedicated to the idealization process.
Figure 5.1 illustrates the overall process which is then described in the following sections.
Section 5.2 explains the advantages of applying a morphological analysis on a component shape and states the main categories of morphology that need to be located. Section 5.3 describes the general algorithm proposed to analyze the morphology of a B-Rep shape from its generative processes containing extrusion primitives. The algorithm evaluates the morphology of each primitive and processes the interfaces between these primitives to extend the morphological analysis to the whole object. Section 5.4 studies
the influence of external boundary conditions and of assembly interfaces on the
new component structure. There, the objective is to determine the adequacy of the
proposed approach with an assembly structure. Section 5.5 illustrates the process to
derive idealized models from a set of extrusion primitives and geometric interfaces.
5.2 The morphological analysis: a filtering approach
to idealization processes
A common industrial principle of the FEA of mechanical structures is to process the
analysis step-by-step using a top-down approach. To simulate large assembly
structures, i.e., aeronautical assemblies, a first complete idealized mesh model is generated
to evaluate the global behavior of the structure. Then, new local models are set up
to refine the global analysis in critical areas. At each step of this methodology, the
main purpose of the pre-processing task is to generate, as quickly as possible, a well
suited FE model. In the industry, guidelines have been formalized to help engineers define these FE models and correctly apply the required shape transformations.
Although these guidelines are available for various simulation objectives, there are still
difficulties about:
• The model accuracy needed to capture the mechanical phenomenon to be evaluated.
An over-defined model would require too many resources and thus, would
delay the results. In case of large assemblies, the rules set in the guidelines are
very generic to make sure a global model can be simulated in practice. This way,
the FEM is not really optimized. As an example, the mesh size is set to a constant
value across all geometric regions of the components and is not refined in
strategic areas. Section 1.4 points out current engineering practices to generate
large FE assembly models;
• The application of modeling rules. The engineer in charge of the FEM creation has difficulties identifying, prior to the FEA computation results, the regions potentially influencing, or not, the mechanical behavior of the structure. In
addition, evaluating the geometric extent of the regions to be idealized as well
as determining the cross influences of geometric areas on each other are difficult
tasks for the engineer because they can be performed only mentally, which is even
harder for 3D objects. Section 2.3 highlights the lack of idealization operators to
delimit their conditions of application.
In case of large assembly data preparation, automated tools are required (due to
model size and time consuming tasks). These tools have to produce simulation models
in line with the two previous challenges (model accuracy and rule-based modeling).
In the following sections, we introduce approaches to deal with such challenges using
morphological analysis tools. These tools support the engineers when comparing the
manufactured shape of a component with the simplification and idealization hypotheses
needed to meet some simulation hypotheses.
5.2.1 Morphological analysis objectives for idealization processes
based on a construction graph
As introduced in Chapter 3, a major objective of this thesis aims at improving the
robustness of FE pre-processing using a shape analysis of each component before the
application of shape transformation operators. For a simulation engineer, the purpose
is to understand the shape, support the localization of transformations, and to build
the different areas to transform in the initial CAD models. This scheme contributes to
an a priori approach that illustrates the application of modeling hypotheses, especially
the idealization hypotheses. In addition, this approach is purely morphological, i.e., it
does not depend on discretization parameters like FE sizes.
Morphological categories identified in solid models
The first requirement of the idealization process is to identify which geometric regions
of the object contain proportions representing a predefined mechanical behavior.
In the scope of structural simulations, the predefined categories representing a specific
mechanical behavior correspond to the major finite element families listed in Table 1.1.
These categories can be listed as follows:
• Beam: a geometric sub-domain of an object with two dimensions being signifi-
cantly smaller than the third one. These two dimensions define the beam cross
section;
• Plate and shell: a geometric sub-domain of an object with one dimension being
significantly smaller than the two others. This dimension defines the thickness
of the plate or the shell, respectively;
• 3D thick domain: a geometric sub-domain that does not benefit from any of the previous morphological properties and that should be modeled with a general 3D continuum medium behavior.
From a geometric perspective, the principle of the idealization process corresponds
to a dimensional reduction of the manifold representing a sub-domain of the solid object,
as defined in Section 1.2.1. A detailed description of the idealization hypotheses is available
in Section 1.4.2. To construct fully idealized models derived from an object M,
geometric connection operations between idealized sub-domains must be performed.
As stated in Chapter 2, automated geometric transformations do not produce accurate
results in connection areas. The main issue lies in the determination of the geometric
boundaries where the connection operators must process the idealized sub-domains
(see Figure 2.8). Currently, the engineer applies these operators manually to define the
direction and the extent of the connection areas (see Figure 1.15 illustrating a manual
process to connect medial surfaces). The proposed idealization process benefits from
an enriched initial component model with a volume segmentation into sub-domains and into interfaces between them that contain new geometric information to make the connection operators robust. This segmentation gives the engineer a visual understanding of the impact of simulation hypotheses on the component geometry. Additionally, the proposed analysis framework identifies the regions that can be regarded as details, independently of the resolution method. These regions represent areas having no significant mechanical influence with respect to the morphological category of their neighboring regions.
In case of assembly structures, the categories presented above are still valid, with the difference that the identified geometric domains are no longer restricted to a sub-domain of a single solid. Indeed, a group of components can be considered as a
unique beam, plate, or shell element. In this case, if the interfaces between a group of
connected sub-domains from different components have been defined as mechanically
rigid connections by the engineer, this group of sub-domains can also be identified as
a unique continuum, hence distinguishing components is not necessary anymore. The
effects of assembly interfaces are detailed in Section 5.4.
Adequacy of generative construction graphs with respect to the morphological
analysis
A shape decomposition is a frequent approach to analyze and structure objects
for FEA requirements. Section 2.4 has highlighted the limits of current morphological
analysis methods and underlined the need of more robust automatic techniques adapted
to FE model generation. The proposed approach uses the generative construction
graph GD to perform a morphological analysis of CAD components adapted to FEA
requirements. Section 4.2 has addressed the advantages of construction graphs to
structure the shape of a component. It proposes a compact shape decomposition of
primitives containing a fair amount of information that is intrinsic to this shape. The
geometric structure of GD with sub-domains and associated interfaces is close to the structure described in Section 5.2.1, with regions that are candidates for idealization. GD also
offers various construction processes which enable the engineer to construct and study
various simulation models that can be derived from the same component using different
construction processes.
Generating the idealization of an object M from a set of primitives, obtained from
its construction graph GD, is more robust compared to the prior idealization methods
for three reasons:
• The information contained in GD is intrinsic to the definition of primitives. Each
maximal primitive Pi ∈ GD and its associated interfaces determine both the
3D location of the idealized representation of Pi and its connections with its
neighboring idealized primitives;
• The effective use of connections between sub-domains to be idealized. A taxonomy
of categories of connections can be defined. This classification determines
the most suitable geometric operator to process each connection. Currently, the
idealization process still relies on the engineer’s expertise to manage complex
connections whereas a CAD or a CAE software is bound to much simpler connections;
• Shape modification processes of components. When a component shape modification
is performed, only the impacted primitives have to be revised in its
Step 01: Morphological analysis of the primitives Pi
· Global morphological analysis of each primitive Pi using the extrusion distance;
· Morphological analysis of the extrusion contour of each primitive Pi and determination of the geometric sub-domains Dj(Pi) to be idealized in Pi.
Step 02: Extension of the morphological analysis of the primitives Pi to the whole object
· Categorization of the interfaces IG between primitives Pi;
· Segmentation of the primitives Pi into new partitions P' based on the interfaces IG typology;
· Merging, or keeping independent, the new partitions P' with the primitives Pi to obtain a segmentation best suited for idealization.
Step 03: Idealization of the primitives Pi and processing of connections
· Generation of the mid-surfaces and mid-lines of the primitives Pi to be idealized;
· Connection of the idealized models of the primitives Pi.
Figure 5.2: Global description of an idealization process.
construction graph. Therefore, the idealization process can be locally updated and does not have to be restarted from its shape decomposition.
The next section details the structure of the proposed idealization process from the
exploitation of the geometric information provided with the construction graph GD of
a component to the generation of its idealized models.
5.2.2 Structure of the idealization process
The idealization process of an object M is based on morphological analysis operations,
idealization transformations and connections between idealized sub-domains. Its
objective is to produce the idealized model, denoted by MI , of the initial object M.
Figure 5.2 contains a comprehensive chart illustrating the various steps of the proposed
idealization process. In a first step, the decomposition of M into primitives Pi,
described in the graph GD, leads to a first morphological analysis of each primitive Pi.
For each extrusion primitive, this morphological analysis determines whether Pi has a
morphology of type plate, shell, or a morphology of type thick 3D solid (see Section 5.2
describing the morphology categories). This first step is described in Section 5.3.1.
Then, a second morphological analysis is applied to each primitive Pi to determine
whether or not it can be subdivided into sub-domains Dij , morphologically different
from the one assigned to Pi. This analysis is performed using the 2D extrusion contour
of the primitive Pi. The resulting decomposition of Pi generates new interfaces IG,
integrated in GD.
In a second phase, using a typology of connections between the different categories
of idealizations, the interfaces IG between Dij are used to propagate and/or update
the boundary of the primitives Pi. This step, described in Section 5.3.3, results in
the decomposition of the object M into sub-domains with a morphology of type beam,
plate/shell, or 3D thick solid. The third phase consists in generating the idealization of
each primitive Pi and then, it connects the idealized domains of Pi using the typology
and the location of the interfaces IG. During this operation, the morphology of each
Pi, combined with the typology of each of its interfaces IG, is used to identify regions
to be considered as details, independently from any FE size. The generation of an
idealized model is described in Section 5.5.
Overall, the different phases illustrated in Figure 5.2 fit into an automated process.
The engineer can be involved in it to select some connection model between idealized sub-domains or some boundary adjustment category of an idealized sub-domain, depending on its types of connections. The next sections detail each step of the proposed
idealization process.
5.3 Applying idealization hypotheses from a construction
graph
The purpose of this section is to illustrate how the construction graph GD of an object
M, obtained with the algorithm described in Section 4.5.2, can be used in shape idealization
processes. In fact, idealization processes are high level operations that interact
with the concept of detail because the idealization of sub-domains, i.e., Pi obtained
from GD, triggers their dimensional reduction, which, in turn, influences the shape
of areas around IGs, the geometric interfaces between these sub-domains. Here, the
proposed approach is purely morphological, i.e., it does not depend on discretization
parameters like FE sizes. It is divided into two steps. Firstly, each Pi of GD is evaluated
with respect to an idealization criterion. Secondly, according to IGs between Pis,
the ‘idealizability’ of each Pi is propagated in GD through the construction graph up
to the shape of M. As a result, an engineer can evaluate effective idealizable areas.
Also, it will be shown how variants of construction trees in GD can influence an idealization
process. Because the idealization process of an object strongly depends on the engineer's know-how, it is the principle of the proposed approach to give the engineer access to the whole range of idealization variants. Finally, some shape details will appear after the idealization process, when the engineer defines FE sizes to mesh the idealized representation of M.
Figure 5.3: Determination of the idealization direction of extrusion primitives using a 2D MAT applied to their contour. (a) Configuration with an extrusion distance (i.e., thickness d = e) much smaller than the maximal diameter obtained with the 2D MAT on the extrusion contour; the idealization direction corresponds to the extrusion direction. (b) Configuration with an extrusion distance much larger than the maximal diameter obtained with the 2D MAT on the extrusion contour; the idealization direction is included in the extrusion contour.
5.3.1 Evaluation of the morphology of primitives to support
idealization
Global morphological analysis of each primitive Pi
In a first step, each primitive Pi extracted from GD is subjected to a morphological
analysis to evaluate its adequacy for idealization transformation into a plate or a shell.
Because the primitives are all extrusions and add material, analyzing their morphology
can be performed with a MAT [MAR12, RAM∗06, SSM∗10].
A MAT is particularly suited to extrusion primitives having constant thickness since
it can be applied in 2D. Furthermore, it can be used to decide whether or not subdomains
of Pi can be assigned a plate or shell mechanical behavior. In the present case,
the extrusion primitives obtained lead to two distinct configurations (see Figure 5.3).
Figure 5.3a shows a configuration with a thin extrusion, i.e., the maximal diameter Φ
obtained with the MAT from Pi’s contour is much larger than Pi’s thickness defined by
the extrusion distance d. Then, the idealized representation of Pi would be a surface
parallel to the base face having Pi’s contour. Figure 5.3b shows a configuration where
the morphology of Pi leads to an idealization that would be based on the content of
the MAT because d is much larger than Φ.
To idealize a sub-domain in mechanics [TWKW59], a commonly accepted reference
proportion used to decide whether a sub-domain is idealizable or not is a ratio of ten
between the in-plane dimensions of the sub-domain and its thickness, i.e., xr = 10.
Here, this can be formalized with the morphological analysis of Pi obtained from the
MAT using: x = max((max Φ/d),(d/max Φ)). Consequently, the ratio x is applicable
for all morphologies of extrusion primitives.
Primitive (a): max diameter Φ = 25 (2D MAT on the extrusion contour), extrusion distance d = 1.5, x = 25/1.5 = 16.6.
Primitives (b) and (c): max diameter Φ = 10, with extrusion distances d = 2 and d = 35, giving x = 10/2 = 5 and x = 35/10 = 3.5.
Primitive (d): max diameter Φ = 25, extrusion distance d = 25, x = 25/25 = 1.
Ratio: x = max Φ / d when d < max Φ, and x = d / max Φ when d > max Φ; thresholds xu = 3 and xr = 10.
Table 5.1: Categorization of the morphology of a primitive using a 2D MAT applied to the contour of extrusion primitives. Violet indicates sub-domains that cannot be idealized as plates or shells (see component d), green ones can be idealized (see component a) and yellow ones are subjected to a user decision (see components b and c).
Because idealization processes are heavily know-how dependent, using this reference ratio as unique threshold does not seem sufficient to help an engineer analyze sub-domains, at least because xr does not take precisely into account the morphology of Pi's contour. To let the engineer tune the morphological analysis and decide when Pi can/cannot be idealized, a second, user-defined threshold, xu < xr, is introduced that lies in the interval ]0, xr[. Figure 5.3b illustrates a configuration where the morphological analysis does not produce a ratio x > xr though a user might idealize Pi as a plate.
Let xu = 3 be this user-defined value; Table 5.1 shows the application of the 2D MAT to the contours of four extrusion primitives. This table indicates the three categories made available to the engineer to visualize the morphology of Pi. Primitives with a ratio x > xr, e.g., primitive (a), are considered to be idealizable and are colored in green. Primitives with a ratio xu < x < xr, e.g., primitives (b) and (c), are subjected to a user's decision for idealization and are colored in yellow. Finally, primitives with a ratio x < xu, e.g., primitive (d), indicate sub-domains that cannot be idealized and are colored in violet.
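A minimal sketch of this first test (the ratio and thresholds follow the text; the maximal MAT diameter is taken as an input since the MAT computation itself is not shown here):

def morphology_category(max_phi, d, xu=3.0, xr=10.0):
    # x = max((max_phi / d), (d / max_phi)), Section 5.3.1
    x = max(max_phi / d, d / max_phi)
    if x > xr:
        return "idealizable"        # green in Table 5.1, e.g., primitive (a)
    if x > xu:
        return "user decision"      # yellow, e.g., primitives (b) and (c)
    return "not idealizable"        # violet, e.g., primitive (d)

print(morphology_category(25.0, 1.5))   # idealizable (x = 16.6)
print(morphology_category(25.0, 25.0))  # not idealizable (x = 1)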
Figure 5.4 illustrates the evaluation of the morphology of the primitives of a component
prior to its idealization. The component has been initially segmented into
15 extrusion primitives using the algorithm presented in Section 4.5.2. Then, the 2D
MAT has been applied to the extrusion contour of each primitive to determine the
maximal diameter Φ. Finally, this diameter is compared to the extrusion distance of
the corresponding primitive to determine the ratio x. Three morphological evaluations
are presented in Figure 5.4 that correspond to different values of the thresholds xu and
xr, which are set to (a) (3, 10), (b) (6, 10), and (c) (2, 10), respectively.
Figure 5.5 shows the result of the interactive analysis the user can perform from the graphs GD obtained with the components shown in Figures 5.5a, b, c, and d. It has to be mentioned that the morphological analysis is applied to GD rather than to a single construction tree structure so that the engineer can evaluate the influence of D with respect to the idealization processes. However, the result obtained on the component in Figure 4.20 shows that, in the present case, the variants in GD have no influence with respect to the morphological analysis criterion. Consequently, Figure 5.5 displays the morphological analysis obtained from the variant M−j2 in Figure 4.20. Results on the components in Figures 5.5a and c also show the usefulness of this criterion because some non-idealizable sub-domains (see indications in Figure 5.5 regarding violet sub-domains) are indeed well proportioned to be idealized with beams.
Now, considering the morphological classification of sub-domains stated in Section
5.2.1, this first morphological analysis of Pi acts as a necessary condition for Pi to
fall into a category of:
• plate/shell but Pi can contain sub-domains Dij where Φ can get small enough
to produce a beam-like shape embedded in Pi, see Figure 5.6a. In any case, Pi
Figure 5.4: Example of the morphological evaluation of the extrusion primitives extracted
during the construction process of a component. Violet indicates sub-domains that cannot
be idealized as plates or shells, green ones can be idealized and yellow ones are up to the
user’s decision.
Figure 5.5: Idealization analysis of components. T and B indicate Top and Bottom views of
the component, respectively. The decomposition of a is shown in Figure 4.20 and decompositions
of b, c and d are shown in Figure 4.19. Violet indicates sub-domains that cannot be
idealized as plates or shells, green ones can be idealized and yellow ones can be subjected to
user’s decisions.
Figure 5.6: Illustration of primitives’ configurations containing embedded sub-domains Dik
which can be idealized as beams or considered as details. (a) Primitive morphologically identified
as a plate which contains a beam sub-domain, (b) primitive morphologically identified
as a plate which contains a volume sub-domain to be considered as a detail, (c) primitive
morphologically identified as a thick domain which contains a beam sub-domain
cannot contain a sub-domain of type 3D thick domain because the dominant
sub-domain Dij is morphologically a plate/shell. If there exists a sub-domain
Dik, adjacent to Dij , such that x < xu, i.e., it is morphologically thick, Dik is
not mechanically of type 3D thick domain because it is adjacent to Dij . Indeed,
Dik can be regarded as a detail compared to Dij since the thickness of Dij will
be part of the dimensional reduction process. Figure 5.6b shows an example of
such a configuration;
• 3D thick domain because Pi contains at least one dominant sub-domain Dij of this category. However, it does not mean that Pi does not contain other sub-domains Dik that can be morphologically of type plate/shell or beam. Figure 5.6c illustrates a beam embedded in a 3D thick domain.
Indeed, all green sub-domains and yellow ones validated by the engineer can proceed
with the next step of the morphological analysis. Similarly, violet sub-domains cannot
be readily classified as non-idealizable. Such configurations show that the classification described in Section 5.2.1 has to take into account the relative position of the sub-domains Dij of Pi; they clearly call for complementary criteria that are part of the next morphological analysis, where Pi needs to be decomposed into sub-domains Dij to refine its morphology using information from its MAT.
Determination of geometric sub-domains Dij to be idealized in Pi
Then, in a second step, another morphological analysis determines in each primitive
Pi if some of its areas, i.e., sub-domains Dij , can be associated with beams and, therefore,
admit further dimensional reduction. Indeed, the previous ratio x determines
only one morphological characteristic of a sub-domain Dij , i.e., the dominant one, of
Pi because the location of the MAT, where x is defined, is not necessarily reduced to a
point. For example, Figure 5.7 illustrates a configuration where x holds along a medial
edge of the MAT of the extrusion contour. Similarly to the detail removal using MAT
conducted by Armstrong et al. [Arm94], a new ratio y is introduced to compare the
length of the medial edge to the maximal disk diameter along this local medial edge.
The parameter y is representative of a local elongation of Pi in its contour plane and
distinguishes the morphology of type beam located inside a morphology of type plate
or shell when the starting configuration is of type similar to Figure 5.3a. If Pi is similar
to Figure 5.3b, then the dominant Dij is of type beam if x appears punctually or of
type plate/shell if x appears along a medial edge of the MAT of the extrusion contour
of Pi.
Appendix C provides two Tables C.1 and C.2 with 18 morphological configurations
associated with a MAT medial edge of a primitive Pi. The two tables differ according
to whether the idealization direction of Pi corresponds to the extrusion direction, see
Table C.1 (type similar to Figure 5.3a), or whether the idealization direction of Pi is
included in the extrusion contour, see Table C.2 (type similar to Figure 5.3b). The
reference ratio xr and user ratio xu are used to specify, in each table, the intervals of
morphology differentiating beams, plates or shells and 3D thick domains. Therefore,
nine configurations are presented in Table C.1 illustrating the elongation of the extrusion
contour of Pi. Table C.1 allows both the elongation of the extrusion distance
and the elongation of the extrusion contour, this produces also nine configurations.
These tables illustrates 18 morphological possible configurations when the medial edge
represents a straight line with a constant radius for the inscribed circles of the MAT.
Other configurations can be found when the medial edge is a circle, or more generally,
a curve or when the radius is changing along the medial edge. These configurations
have not been studied in detail and are left for future work.
Figure 5.7: Example of a beam morphology associated with a MAT medial edge of a primitive
Pi.
Tables C.1 and C.2 of Appendix C represent a morphological taxonomy associated with one segment of the MAT of Pi. Because the extrusion contour of Pi consists of line segments and arcs of circles, the associated MAT has straight and curvilinear
medial edges which can be categorized as follows:
1. Medial edges with one of their end points located on the extrusion contour of Pi, the other one being connected to another medial edge;
2. Medial edges with their two end points connected to other medial edges. In the
special case of a segment having no end point, e.g., when the extrusion contour
is a circular ring, its MAT reduces to a closed circle and falls into this category.
Segments of category 1 are deleted and the morphological analysis focuses on the
segments of category 2 which are noted Sij . On each of these edges, the ratio y includes
a maximum located at an isolated point or it is constant along the entire edge. ymax
represents this maximum and is assigned to the corresponding medial edge, Sij . The
set of edges Sij is automatically classified using the taxonomy of Tables C.1, C.2 or
some of them can be specified by the engineer wherever yu < y < yr. This is the
interactive part left to the engineer to take into account his, resp. her, know-how.
Pi is segmented based on the changes in the morphological classification of the
edges Sij . This decomposition generates a set of sub-domains Dij of each primitive Pi.
These sub-domains Dij are inserted with their respective morphological status and their
geometric interfaces in the graph GD. Figure 5.8 summarizes the different phases of the
morphological analysis of each extrusion primitive Pi extracted from the construction
graph GD of an object M. Because each sub-domain Dij is part of only one primitive
Pi, it can also be considered as a new primitive Pk. To reduce the complexity in the
following process, the sub-domains Dij are regarded as independent primitives Pk.
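The per-primitive analysis can be sketched as follows (illustrative only; medial_edges_of_contour is a hypothetical helper returning, for the 2D MAT of the extrusion contour, the category-2 medial edges as (length, maximal inscribed diameter) pairs, and the thresholds applied to y are assumed to mirror xu and xr):

def analyze_primitive(contour, d, medial_edges_of_contour, xu=3.0, xr=10.0):
    edges = medial_edges_of_contour(contour)           # category-2 edges only
    max_phi = max(phi for (_length, phi) in edges)     # maximal MAT diameter
    x = max(max_phi / d, d / max_phi)                  # global morphology of Pi
    local_morphologies = []
    for length, phi in edges:
        y = length / phi                               # local elongation ratio
        if y > xr:
            local_morphologies.append(("beam-like", length, phi))
        elif y > xu:
            local_morphologies.append(("user decision", length, phi))
        else:
            local_morphologies.append(("same as Pi", length, phi))
    # Pi is segmented into sub-domains Dij wherever this classification changes.
    return x, local_morphologies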
These results are already helpful for an engineer but it is up to him, or her, to
evaluate the mechanical effect of IGs between primitives Pi. To support the engineer
in processing the stiffening effects of IGs, the morphological analysis is extended by a
second step described as follows.
5.3.2 Processing connections between ‘idealizable’ sub-domains
Dij
The morphological analysis of standalone primitives Pi is the first application of GD.
Also, the decomposition obtained can be used to take into account the stiffening effect
of interfaces IG between Pi or, more generally, between Dij, when the Pi¹ are iteratively
¹From now on, Pi designates sub-domains that correspond either to primitives Pi obtained from the segmentation of M or to a subdivision domain Dkj of a primitive Pk decomposed after the morphological analysis described in Section 5.3.1.
Input: set of primitives Pi from the construction graph GD of an object M.
Output: set of sub-domains Dij of morphology type beam, plate or shell, or 3D thick domain.
For each extrusion primitive Pi: apply the 2D MAT to the extrusion contour of Pi and determine the global morphology of Pi (x = max((max Φ / d), (d / max Φ))); for each medial edge Sij of category 2 of the 2D MAT, determine the morphology associated with Sij (y = LSij / max Φ); if the morphology of Sij differs from the morphology of Pi and Sij is not a detail, segment Pi into sub-domains Dij and insert them in GD.
Figure 5.8: Synthesis of the process to evaluate the morphology of primitives Pi.
merged together along their IG until the whole object M is obtained. As a result, new
sub-domains will be derived from the primitives Pi and the morphological analysis will
be available on M as a whole, which will be easier to understand for the engineer. To
this end, a taxonomy of connections between extrusion sub-domains is mandatory.
Taxonomy of connections between extrusion sub-domains to be idealized
This taxonomy, in case of ”plate sub-domain connections”, is summarized in Figure
5.9a. It refers to parallel and orthogonal configurations for simplicity but these
configurations can be extended to process a larger range of angles, i.e., if Figure 5.9
refers to interfaces IG of surface type, these configurations can be extended to interfaces
IG of volume type when the sub-domains S1 and S2 are rotated w.r.t. each other.
More specifically, it can be noticed that the configuration where IG is orthogonal to the medial surfaces of both S1 and S2 lacks robust solutions [Rez96, SSM∗10], and other connections can require a deviation from the medial surface location to improve the mesh quality. Figure 5.18c illustrates such configurations and further details are
given in Section 5.5.2.
Figure 5.9 describes all the valid configurations of IG between two sub-domains S1
and S2 when a thickness parameter can be attached to each Pi, which is presently the
case with extrusion primitives.
Figure 5.9a depicts the four valid configurations, named type (1), (2), (3), and (4). These configurations can be structured into two groups: type (1) and type (4) form the
Figure 5.9: (a) Taxonomy of connections between extrusion sub-domains Pi. (b) Decomposition of configurations of type (1) and type (4) into sub-domains Pi, showing that the decomposition produced reduces to configurations of type (2) only. (c) Example configurations of types (1) and (4) where S1 and S2 have arbitrary angular positions that generate volume interfaces IG, where the base faces Fb1S1 and Fb1S2 are intersection free in configuration type (1) and Fb1S2 only is intersection free in configuration type (4).
group C1, and type (2) and type (3) form the group C2. Figure 5.9b illustrates the effect of the decomposition of configurations of type (1) and type (4), which produces configurations of type (2) only.
Reduced set of configurations using the taxonomy of connections
Configuration type (1) of C1 is such that the thicknesses e1 and e2 of S1 and S2, respectively, are influenced by IG, i.e., their overlapping area acts as a thickness increase that stiffens each of them. This stiffening effect can be important to incorporate into a FE model as a thickness variation to better fit the real behavior of the corresponding structure. Their overlapping area can be assigned to either S1 or S2, or form an independent sub-domain with a thickness (e1 + e2). If S1 and S2 are rotated w.r.t. each other and generate a volume IG, the overlapping area still exists but behaves with a varying thickness. Whatever the solution chosen to represent this area mechanically, the sub-domains S1 and S2 get modified and need to be decomposed. The extent of S2 is reduced to produce S′2, now bounded by I′G. Similarly, the extent of S1 is reduced to S′1, now bounded by another interface I′G. A new sub-domain S′3 is created that contains IG and relates to the thickness (e1 + e2) (see Figure 5.9b). Indeed, with this new decomposition, IG is no longer of interest and the new interfaces I′G between the sub-domains S′i produce configurations of type (2) only.
Similarly, configuration (4) is such that S2 can be stiffened by S1 depending on the thickness of S1 and/or the 2D shape of IG (see examples in Figure 5.11). In this case, the stiffening effect on S2 can partition S2 into smaller sub-domains and its IG produces a configuration of type (2) with interfaces I′G when S2 is cut by S1. The corresponding decomposition is illustrated in Figure 5.9b and Figure 5.10. This time, IG still contributes to the decomposition of S1 and S2, but S2 can be decomposed in several ways, (S′21, S′22) or (S″21, S″22), producing interfaces I′G. Whatever the decomposition selected to represent this area mechanically, the key point is that the I′G located on the resulting decomposition are all of the same type, which corresponds to configuration (2).
Configuration (1) reduces the areas of S1 and S2 of constant thicknesses e1 and e2, which can influence their ‘idealizability’. Configuration (4) reduces the area of S2 of thickness e2 but does not reduce that of S1, which influences the ‘idealizability’ of S2 only. As a result, it can be observed that processing configurations in C1 produces new configurations that always belong to C2. Now, considering configurations in C2, none of them produces stiffening effects similar to C1. Consequently, the set of configurations in Figure 5.9a is a closed set under the decomposition process producing the interfaces I′G. More precisely, no additional processing is needed for C2, and processing all configurations in C1 produces configurations in C2, which outlines the algorithm for iteratively processing interfaces between Pi and shows that this algorithm always terminates.
Figures 5.9a and b refer to interfaces IG of surface type. Indeed, GD can produce interfaces of volume type between Pi. This is equivalent to configurations where S1 and S2 depart from parallel or orthogonal settings as depicted in Figure 5.9. Such general configurations can fit into either set C1 or C2 as follows. In the 2D representations of Figures 5.9a, b, the outlines of S1 and S2 define the base faces Fb1 and Fb2 of each Pi. What distinguishes C1 from C2 is the fact that configurations (1) and (4) each contain at least a sub-domain S2 such that one of its base faces (Fb1S2 in Figure 5.9c) does not intersect S1, and this observation also applies to S1 in configuration (1) (Fb1S1 in Figure 5.9c). When configurations differ from orthogonal and parallel ones, a first subset of configurations can be classified into one of the four configurations using the distinction observed, i.e., if a base face of either S1 or S2 does not intersect a base face of its connected sub-domain, the configuration belongs to C1, and if this property holds for both sub-domains S1 and S2, the corresponding configuration is of type (1). Some other configurations of type (4) exist but are not detailed here since the purpose of the above analysis is to show how the reference configurations of Figure 5.9a can be extended. The completeness of configurations has not been entirely investigated yet.
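To make the distinction above concrete, the following is a minimal Python sketch, under stated assumptions, of how a connection between two extrusion sub-domains could be classified into C1 and C2 from the base-face intersection test; the callables base_faces() and intersects() are hypothetical stand-ins for the corresponding B-Rep queries of a geometric kernel, not functions of the thesis prototype.

```python
def classify_connection(S1, S2, base_faces, intersects):
    """Classify a connection between two extrusion sub-domains S1 and S2.

    base_faces(P)       -> the two base faces (Fb1, Fb2) of primitive P
    intersects(face, S) -> True if 'face' intersects sub-domain S
    Both callables stand in for the corresponding B-Rep queries of a kernel.

    Returns ("C1", 1) when each sub-domain has a base face free of intersection
    with the other one (type (1)), ("C1", 4) when only one of them has such a
    face (type (4)), and ("C2", None) otherwise; separating types (2) and (3)
    would additionally need the parallel/orthogonal test on medial surfaces."""
    s1_free = any(not intersects(f, S2) for f in base_faces(S1))
    s2_free = any(not intersects(f, S1) for f in base_faces(S2))
    if s1_free and s2_free:
        return ("C1", 1)     # overlapping area stiffens both sub-domains
    if s1_free or s2_free:
        return ("C1", 4)     # only one sub-domain is stiffened
    return ("C2", None)      # no stiffening effect, no decomposition required
```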
5.3.3 Extending morphological analyses of Pi to the whole object M
Now, the purpose is to use the stiffening influence of some connections, as analyzed in Section 5.3.2, to process all the IG between Pi, in order to propagate and update the ‘idealizability’ of each Pi when merging the Pi. This process ends up with a new subdivision of some Pi, as described in the previous section, and a decomposition of M into a new set of sub-domains Pi (here again, as in Section 5.3.1, Pi also designates the set of sub-domains Dkj that can result from the decomposition of a primitive Pk when merging it with some other Pl sharing an interface with Pk), each of them having an evaluation of its ‘idealizability’ so that the engineer can more easily evaluate the sub-domains he, or she, wants to effectively idealize.
The corresponding algorithm can be synthesized as follows (see Algorithm 2). The principle of this algorithm is to classify each IG between two Pi such that, if IG belongs to C1 (configurations 1 and 4 in Algorithm 2), it must be processed to produce new interface(s) I′G and new sub-domains that must be evaluated for idealization (procedure Propagate_morphology_analysis). Depending on the connection configuration between the two primitives Pi, one of them or both are cut along the contour of IG to produce the new sub-domains. Then, the MAT is applied to these new sub-domains to update their morphology parameter (procedure MA_morphology_analysis), which reflects the effect of the corresponding merging operation taking place between the two Pi along IG and stiffening some areas of the two primitives involved. The algorithm terminates when all configurations of C1 have been processed.
Algorithm 2 Global morphological analysis

procedure Propagate_morphology_analysis(GD, xu)    ▷ Main procedure extending the morphological analysis of sub-domains to the whole object
  for each P in list_prims(GD) do
    if P.x > xu then    ▷ If the primitive has to be idealized
      for each IG in list_interfaces_prim(P) do
        Pngh = Get_connected_primitive(P, IG)
        if IG.config = 1 or IG.config = 4 then
          interVol = Get_interfaceVol(P, Pngh, IG)
          Pr = Remove_interfaceVol(P, interVol)    ▷ Update the primitive by removing the volume resulting from interfaces with neighbors
          for i = 1 to Card(Pr) do    ▷ New morphological analysis of the partitions Pr
            P′ = Extract_partition(i, Pr)
            P′.x = MA_morphology_analysis(P′)
            Pngh.x = MA_morphology_analysis(Pngh)
            if IG.config = 1 then
              if Pngh.x > xu then
                Prngh = Remove_interVol(Pngh, interVol)
                interVol.x = MA_morphology_analysis(interVol)
                for j = 1 to Card(Prngh) do
                  P′ngh = Extract_partition(j, Prngh)
                  P′ngh.x = MA_morphology_analysis(P′ngh)
                  if interVol.x < xu then    ▷ If the interVolume is ‘idealizable’
                    Merge(P, Pngh, interVol)    ▷ Merge the interVolume with either P or Pngh
                  end if
                end for
              else    ▷ If the interVolume is not ‘idealizable’
                P = P′
                Merge(Pngh, interVol)    ▷ Merge the interVolume with the neighboring primitive which is not ‘idealizable’
              end if
              Remove_prim(Pngh, list_prims(GD))
            end if
            if P′.x < xu then    ▷ If a partition is not ‘idealizable’
              Merge(Pngh, P′)    ▷ Merge the partition with the non-‘idealizable’ neighboring primitive
            end if
          end for
        end if
      end for
    end if
  end for
end procedure

procedure MA_morphology_analysis(Pi)    ▷ Procedure using the 2D MAT on the extrusion contour of a primitive
  Cont = Get_Contour(Pi)
  list_of_pts = Discretize_contour(Cont)
  vor = Voronoi(list_of_pts)    ▷ MAT approximated by the Voronoi diagram of a set of points
  maxR = Get_max_radius_of_inscribed_Circles(vor)
  x = Set_primitive_idealizableType(maxR, Pi)
  return x
end procedure
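As an illustration of the procedure MA_morphology_analysis, the sketch below approximates the 2D MAT of a discretized extrusion contour with scipy's Voronoi diagram and shapely distance queries. The definition of the morphology parameter as the ratio of the maximum inscribed diameter to the extrusion length is an assumption made here for illustration, not the exact criterion of the thesis prototype.

```python
import numpy as np
from scipy.spatial import Voronoi
from shapely.geometry import Point, Polygon

def max_inscribed_radius(contour_pts):
    """Approximate the maximum radius of a circle inscribed in a closed 2D
    contour: the Voronoi vertices of the discretized contour approximate the
    medial axis, and the largest distance from an interior Voronoi vertex to
    the contour is the sought radius."""
    poly = Polygon(contour_pts)
    vor = Voronoi(np.asarray(contour_pts, dtype=float))
    radii = [poly.exterior.distance(Point(v)) for v in vor.vertices
             if poly.contains(Point(v))]          # keep interior MAT vertices only
    return max(radii) if radii else 0.0

def morphology_parameter(contour_pts, extrusion_length):
    """One plausible morphology parameter x: ratio between the maximum inscribed
    diameter of the contour and the extrusion length (assumed definition)."""
    return 2.0 * max_inscribed_radius(contour_pts) / extrusion_length

# Usage sketch: a 100 x 10 rectangular contour discretized along its boundary,
# extruded over a length of 2 (a thin, plate-like primitive).
xs, ys = np.linspace(0.0, 100.0, 41), np.linspace(0.0, 10.0, 5)
contour = ([(x, 0.0) for x in xs] + [(100.0, y) for y in ys[1:]]
           + [(x, 10.0) for x in xs[::-1][1:]] + [(0.0, y) for y in ys[::-1][1:-1]])
print(morphology_parameter(contour, extrusion_length=2.0))   # about 5.0, above xu = 2
```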
Figure 5.10: Propagation of the morphology analysis of each Pi to the whole object M. A and B illustrate two different sets of primitives decomposing M; numbers in brackets refer to the configuration category of interfaces (see Section 5.3.2).
Among the key features of this algorithm, it has to be observed that the influence of the primitive neighbor Pngh of Pi is taken into account with the update of Pi, which becomes Pr. Indeed, Pr can contain several volume partitions, when Card(Pr) > 1, depending on the shapes of Pi and Pngh. Each partition P′ of Pr may exhibit a different morphology than that of Pi, which gives a more precise idealization indication to the engineer. In case of configuration (1), the overlapping area between Pngh and Pi must be analyzed too, as well as its influence over Pngh, which becomes Prngh. Here again, Prngh may exhibit several partitions, i.e., Card(Prngh) ≥ 1, and the morphology of each partition P′ngh must be analyzed. If the common volume of P′ngh and P′ is not idealizable, it is merged with the stiffest of the sub-domains Pngh or Pi to preserve the sub-domain most suited for idealization. In case a partition P′ of Pr is not idealizable in configuration (4), this partition can be merged with Pngh if it has a similar morphological status.
Figure 5.10 illustrates this approach with two modeling processes of a simple component. Both processes contain two primitives to be idealized by plate elements and interacting through a surface interface of type (4). The stiffening effect of one primitive on the other creates three sub-domains with interfaces I′G of type (2). The sub-domain in violet, which interacts with both sub-domains to be idealized, can be merged with either of the other sub-domains to create a fully idealized geometry, or it can be modeled with a specific connection defined by the user.
Full examples of the extension of the morphological analysis to the whole object M, using the interfaces IG between the primitives of GD, are given in Figure 5.11. Figures 5.11a, b and c show the sub-domain decomposition obtained after processing the interfaces IG between primitives Pi of each object M. The same figures also illustrate the update of the morphology criterion on each of these sub-domains when they are iteratively merged through Algorithm 2 to form their initial object M. Areas A and B show the stiffening effect of configurations of category (1) on the morphology
of sub-domains of M. Areas C and D are examples of the subdivision produced with configurations of type (4) and of the stiffening effects obtained, which are characterized by changes in the morphology criterion values.

Figure 5.11: Propagation of the morphology analysis on Pi to the whole object M. (a), (b) and (c) illustrate the influence of the morphology analysis propagation. The analyzed sub-domains are iteratively connected together to form the initial object. T and B indicate the top and bottom views of the same object, respectively.
After applying Algorithm 2, one can notice that every sub-domain strictly bounded by one interface IG of C2, or by one interface I′G produced by this algorithm, gives precise idealization information about an area of M. Areas exhibiting connections of type (1) on one or two opposite faces of a sub-domain also give precise information, which is the case for the examples of Figure 5.11. However, if more configurations of type (1) are piled up, further analysis is required and will be addressed in future work.
Conclusion
This section has shown how a CAD component can be prepared for idealization. The initial B-Rep geometry has been segmented into a set of extrusion primitives Pi using its construction graph GD. Using a taxonomy of geometric interfaces, a morphological analysis has been applied to these primitives to identify the ‘idealizable’ areas over the whole object. As a result, the geometric model is partitioned into volume sub-domains which can be idealized either by shells or beams, or not idealized at all. At this stage, only the segmentation of a standalone component has been analyzed. Neither the compatibility with an assembly structure nor the influence of external boundary conditions has been addressed yet; this is the purpose of the next section.
Figure 5.12: Influence of an assembly interface modeling hypothesis over the transformations of two components.
5.4 Influence of external boundary conditions and assembly interfaces
As explained in Section 4.7, an assembly model is composed of a set of 3D solid components
linked to each other through functional interfaces. A boundary condition that is
external to the assembly, as defined in Section 1.4.2, also acts as an interface between
the object and its external environment. Figure 5.12 illustrates two types of boundary
conditions, a load acting on component C1 and a rigid connection with the environment
on C2. These areas are defined by the user and are represented as a geometric region on the component's B-Rep surface, which is equivalent to an assembly interface except that the interface is represented on one component only.
Each component of the assembly can be segmented using its respective construction graph. However, the segmentation of components generates new geometric interfaces
between primitives which can be influenced by the assembly interfaces. Therefore, this
section aims at studying the role of assembly interfaces and boundary conditions in the
idealization process. They can be analyzed either before the morphological analysis as
input data or after the segmentation of components.
Impact of the interface modeling hypotheses
Depending on the simulation objectives, the engineer decides whether he or she wants to apply some mechanical behavior over some assembly interfaces (see Table 1.2).
This first choice highly influences the components’ idealization. As it is highlighted
in Section 6.2.1, the engineer may decide not to apply any mechanical behavior at a
common interface between two components, e.g., with the definition of rigid connections
between their two mesh areas to simulate a continuous medium between components
at this interface. This modeling hypothesis at assembly interfaces influences directly
the geometric transformations of components. As illustrated in Figure 5.12, a set of
components connected by rigid connections can be seen as a unique component after
merging them. Therefore, to reduce the complexity of the FEA pre-processing, the
morphological analysis can be applied to this unique component instead of applying it
to each component individually.
In case the engineer wants to assign a mechanical behavior to interfaces between
components, these interfaces ought to appear in the final idealized model. Two options can be enumerated for when this assignment can take place during the pre-processing:
a) at the end of the idealization process, i.e., once the components have been idealized;
b) during the segmentation process, i.e., during the construction graph generation
of each component or during their morphological analysis.
These two options are addressed in the following parts of this section. Here, only the geometric aspect of assembly interfaces is addressed; the transfer of meta-information, e.g., friction coefficients or contact pressures, is not discussed.
Applying assembly interface information after components’ idealization
In a first step, this part of the section studies the consequences of integrating assembly
interfaces at the end of the idealization process, i.e., option (a) mentioned above.
These assembly interfaces represent information that is geometrically defined by the
interactions between components. These interactions have a physical meaning because
the contacts between components exist physically in the final product though a physical
contact may not always be represented in a DMU as a common area between the boundaries of two components [SLF∗13] (see Section 1.3.2). For the sake of simplicity, let us consider that physical contacts are simply mapped to common areas between components. Assembly interfaces are thus prescribed initially, in contrast with the geometric interfaces IG between primitives of a component, which are created during its segmentation process and aim at facilitating the geometric transformations performed during the idealization process of a component.
One can observe that these assembly interfaces have to be maintained during the
dimensional reduction operations of each component. However, these interfaces can
hardly be transferred using only the information provided by the idealized models. For
example, Figure 5.13b shows that a projection of the assembly interface on the idealized
model of C1 could generate narrow areas which would be difficult to mesh and, on that
of C2, could produce areas that fall outside the idealized model of C2. This assembly
interface has been defined on the initial solid models of C1 and C2. If this link is lost
during the idealization process, a geometric operator, like the projection operator
just discussed, has to recover this information to adapt this interface on the idealized
assembly model. Therefore, to obtain robust transformations of assembly interfaces,
these interfaces have to be preserved during the dimensional reduction processes of each
component. Each of these interfaces, as a portion of the initial solid boundary of a component, has a corresponding representation in its idealized model. This equivalent image would have to be obtained through the transformations applied to the initial solid model of this component to obtain its idealized representation.

Figure 5.13: Illustration of the inconsistencies between an assembly interface defined between the initial CAD components C1 and C2 (a) and its projection onto their idealized representations (b).
Integration of assembly interfaces during the idealization process
In a second step, this part of the section addresses option (b) mentioned above.
As stated in Section 5.4, assembly interfaces have to be generated before the dimensional
reduction of assembly components. Now, the purpose is to determine at which
stage of the proposed morphological analysis process the interfaces should be integrated.
This analysis incorporates the segmentation process, which prepares a component
shape for idealization and is dedicated to a standalone 3D solid. This approach
can be extended to an assembly model from two perspectives described as follows:
b1) The assembly interfaces and boundary conditions can be used to monitor the
definition of specific primitives, e.g., primitives containing the whole assembly
interface. Figure 5.14a illustrates such a decomposition with two components C1
and C2 fitting the assembly interface with only one primitive in both segmentations.
The benefit of this approach lies in avoiding splitting assembly interfaces
across various component primitives, which would divide this assembly interface representation across all the primitives’ boundaries;
b2) The segmentation process is performed independently of the assembly interfaces.
Then, they are introduced as additional information when transforming the set
of primitives derived from each component and these interfaces are incorporated
into the idealized representation of each component. In this case, the intrinsic
property of the proposed segmentation approach is preserved and the assembly interfaces are propagated as external parameters on every primitive they are respectively related to. Figure 5.14b shows the imprints of the assembly interface on the primitives derived from the segmentation of components C1 and C2.

Figure 5.14: Two possible schemes to incorporate assembly interfaces during the segmentation process of components C1 and C2: (a) the assembly interface is used to identify extrusion primitives of C1 and C2 containing the assembly interface entirely within one extrusion contour, (b) the assembly interface is integrated after the segmentation of components C1 and C2 and propagated on each of their primitives.
As a conclusion of the previous analyses, the choice of the idealization process made
in this thesis falls into category (b2). Though a fully detailed analysis would bring complementary
arguments to each of the previous categories, it appears that category (a)
is less robust than category (b), which is an important point when looking for an automation
of assembly pre-processing. Within category (b), (b1) leads to a solution that is not intrinsic to the shape of a component, whereas (b2) is. With the current level of analysis, there is no strong argument favoring (b1) over (b2), and (b2) is chosen to keep standalone component pre-processing decoupled from the assembly level.
Therefore, assembly interfaces do not constrain the extraction of the construction graph GD of each component. During the segmentation process, assembly interfaces are only propagated. Rigid connections are the only assembly interfaces that can be processed prior to the component segmentation process without interfering with it. Indeed, the first part of this section has shown that these interfaces lead to merging the components they connect. Consequently, the rigid interfaces alone can be removed from the initial CAD components after the corresponding components have been merged, which simplifies the geometry to be analyzed. From a mechanical point of view, this operation is equivalent to extending the continuous medium describing each component because their material parameters and other mechanical parameters are strictly identical.
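A minimal sketch of this pre-merging step is given below: components that are transitively connected by rigid interfaces are grouped with a union-find structure. The (component, component, kind) interface tuples and the merge_solids() Boolean union mentioned in the final comment are assumptions standing in for the enriched DMU data and the CAD kernel operation.

```python
def merge_rigid_groups(components, interfaces):
    """Group components transitively connected by rigid interfaces.

    components : list of component identifiers
    interfaces : list of (comp_a, comp_b, kind) tuples; kind == "rigid" means
                 the interface is modeled as a continuous medium
    Returns a list of groups (lists of component ids); each group can then be
    merged into a single solid before segmentation and morphological analysis."""
    parent = {c: c for c in components}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path compression
            c = parent[c]
        return c

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for a, b, kind in interfaces:
        if kind == "rigid":
            union(a, b)

    groups = {}
    for c in components:
        groups.setdefault(find(c), []).append(c)
    return list(groups.values())

# Usage sketch: each group would then be merged with a Boolean union
# (hypothetical merge_solids) before running the construction-graph segmentation.
# merged = [merge_solids(group) for group in merge_rigid_groups(comps, itfs)]
```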
Then, the other assembly interfaces and external boundary conditions, where a
mechanical behavior has to be represented, are propagated through the segmentation
process and taken into account during the dimensional reduction process. Chapter 6
carries on with the analysis of interactions between simulation objectives, hypotheses
and shape transformations for assembly pre-processing. This helps structure the preparation process of an assembly in terms of methodology and scope of shape transformation operators. Section 6.4 shows an example of automated idealization of an aeronautical assembly using the idealization process presented in this chapter, which is also taken into account in the methodology set out in Chapter 6.
Now that the roles of assembly interfaces and external boundary conditions have
been clarified, the next section focuses on the dimensional reduction of a set of extrusion
primitives connected through geometric interfaces, IG. The objective is to set up a
robust idealization operator enabling the dimensional reduction of extrusion primitives
and performing idealized connections between medial surfaces through the analysis of
the interface graph GI of an object M.
5.5 Idealization processes
Having decomposed an assembly component M into extrusion primitives Pi, the last phase of the idealization process consists in the generation and connection of the idealized models of each primitive Pi. Now, the interfaces IG between the Pi are precisely identified
and can be used to monitor the required deviations regarding medial surfaces. These
deviations are needed to improve the idealization process and to take into account the
engineer’s know-how when preparing a FE model (see discussions of Chapter 2).
Based on the morphological analysis described in Sections 5.2 and 5.3, each Pi has
a shape which can be classified in idealization categories of type plate, shell, beam or
3D thick solid. Therefore, depending on Pi’s morphological category, a dimensional
reduction operator can be used to generate its idealized representation. The geometric
model of the idealized Pi is:
• A planar medial surface when Pi has been identified as a plate. This surface
corresponds to the extrusion contour offset by half the thickness of this plate
along the extrusion direction;
• A medial surface when the primitive has been identified as a shell (see the detailed
taxonomy in Tables C.1, C.2). This medial surface is generated as the extrusion
of the medial line extracted from the extrusion contour of Pi. This medial line
can be generated by applying the 2D MAT to the extrusion contour, as proposed
by Robinson et al. [RAM∗06]. The shell thickness varies in accordance with the diameter of the circles inscribed in the extrusion contour;
• A medial line when the primitive has been identified as a beam. This line is
generated through the extrusion of the point representing the barycenter of the
extrusion contour if the beam direction is aligned with the extrusion direction.
If the beam direction is orthogonal to the extrusion direction, the medial line
corresponds to the medial line of the extrusion contour, offset by half of the
extrusion distance;
• The volume domain of Pi when Pi is identified as a 3D thick solid;
since every Pi reduces to an extrusion primitive. A minimal sketch of this dimensional-reduction dispatch is given below.
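The sketch assumes a primitive is described by its extrusion contour, direction and length; the data layout and the dictionary outputs are illustrative assumptions, and only the case of a beam axis aligned with the extrusion direction is covered.

```python
import numpy as np

def idealize_primitive(category, contour_pts, direction, length, medial_line=None):
    """Dimensional reduction of an extrusion primitive following the rules above.

    category    : "plate", "shell", "beam" or "solid" (morphological class of Pi)
    contour_pts : (n, 3) points of the extrusion contour
    direction   : unit extrusion direction; length: extrusion length
    medial_line : (m, 3) polyline, the 2D medial line of the contour (e.g. from
                  the MAT sketch of Section 5.3), required for the "shell" case."""
    contour = np.asarray(contour_pts, dtype=float)
    d = np.asarray(direction, dtype=float)

    if category == "plate":
        # Planar medial surface: contour offset by half the extrusion length,
        # i.e. half the plate thickness, along the extrusion direction.
        return {"type": "mid_surface", "contour": contour + 0.5 * length * d}
    if category == "shell":
        # Medial surface swept by extruding the 2D medial line of the contour.
        return {"type": "mid_surface", "medial_line": np.asarray(medial_line, dtype=float),
                "direction": d, "length": length}
    if category == "beam":
        # Medial line: barycenter of the contour (approximated here by the mean
        # of its points) extruded along the direction; the orthogonal-axis case
        # described above is not covered by this sketch.
        g = contour.mean(axis=0)
        return {"type": "mid_line", "start": g, "end": g + length * d}
    return {"type": "solid"}   # 3D thick solid: the volume domain of Pi is kept
```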
Now that each primitive Pi is idealized individually, the purpose of the following
section is to show how the medial surfaces can be robustly connected based on the
taxonomy of interfaces IG illustrated in Figure 5.9.
5.5.1 Linking interfaces to extrusion information
From the construction graph GD and the geometric interfaces IG between its primitives
Pi, the interface graph GI can be derived. Figure 5.15 illustrates GI for a set of extrusion
primitives extracted from one component of the aeronautical use-case presented
in Figure 1.6.

Figure 5.15: Illustration of an interface graph containing IGs derived from the segmentation process of M producing GD.
In GI , each node, named Ni, is a primitive Pi ∈ GD and each arc is a geometric
interface IG between any two primitives Pi and Pj , as IG has appeared during the
segmentation process of M. In a first step, GI is enriched with the imprints of the
boundary of each IG on each primitive Pi and Pj that defines this IG. The jth boundary
of an IG w.r.t. a primitive Pi is noted Cj(Pi).
A direct relationship can be established between Cj(Pi) and the information related
to the extrusion property of Pi. The interface boundary Cj(Pi) is classified in accordance
with its location over ∂Pi. To this end, each node Ni of GI representing Pi is subdivided
into the subsets Ni(Fb1), Ni(Fb2), Ni(Fl), which designate its base face Fb1, its base face Fb2 and its lateral faces Fl, respectively. Then, Cj(Pi) is assigned to the appropriate subset of Ni. As an example, if Cj(Pi) has its contours located solely on the base face Fb1 of Pi, it is assigned to Ni(Fb1), and if Cj(Pi) belongs to at least one of the lateral faces Fl, it is assigned to Ni(Fl). Figure 5.16 illustrates the enrichment of the interface graph GI of a simple model containing three primitives P1, P2 and P3. For example, the boundary C1(P1), resulting from the interaction between P1 and P3, is assigned to Fb1 of P1. Reciprocally, the equivalent C1(P3) refers to a lateral face of P3.
Figure 5.16: Enrichment of the graph GI with the decomposition of each node into the subsets Ni(Fb1), Ni(Fb2), Ni(Fl), and illustration of an interface cycle between primitives P1, P2 and P3 built from GI and GID. (a) Initial primitive segmentation, (b) graph GI, (c) GID for P1, P2 and P3, (d) graph GS.

The following step determines the potential interactions of the Cj(Pi) over Pi. When a pair of Cj(Pi) shares a common geometric area, i.e., their Boolean intersection is not empty:

Cj(Pi) ∩ Ck(Pi) ≠ ∅,    (5.1)

the resulting intersection produces common points or curve segments that are defined as an interface between the pair of interface boundaries (Cj(Pi), Ck(Pi)); the nth such interface is noted IDnCj/Ck. Three interfaces between the Cj(Pi) have been identified on the example of Figure 5.16, e.g., ID1C1/C2 represents the common edge interaction between C1(P1) and C2(P1). These new relations between the Cj(Pi) form a graph structure GID where the nodes represent the boundaries Cj(Pi) and the arcs define the interfaces IDnCj/Ck. The graph structure GID related to a primitive Pi is strictly nested into the ith node of GI. More globally, the graph structure GID is nested into GI.

The graph structures GID derived from the relations between the boundaries of the interfaces IG of each Pi can be ‘merged’ with the interface graph GI. Let us call GS this graph (see Figure 5.16d).
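A minimal sketch of these nested graph structures using networkx is given below. Interfaces are assumed to be available as (Pi, Pj, IG_id) tuples, the classification of imprints and the intersection test of Equation (5.1) are abstracted by the callables boundary_face() and boundaries_intersect(), and the nesting of GID into a node of GI is represented simply by linking each imprint node to its primitive node.

```python
import networkx as nx

def build_GS(primitives, interfaces, boundary_face, boundaries_intersect):
    """Build the interface graph GI and the merged graph GS = GI ∪ GID.

    primitives           : list of primitive ids Pi
    interfaces           : list of (Pi, Pj, IG_id) geometric interfaces
    boundary_face(IG, P) -> "Fb1", "Fb2" or "Fl", the subset of node Ni
                            receiving the imprint Cj(P)
    boundaries_intersect(c1, c2) -> True if two imprints of the same primitive
                            share points or curve segments (Eq. 5.1)"""
    GI = nx.Graph()
    GI.add_nodes_from(primitives)
    imprints = {p: [] for p in primitives}           # imprints Cj(Pi) per primitive

    for pi, pj, ig in interfaces:
        GI.add_edge(pi, pj, interface=ig)
        for p in (pi, pj):
            imprints[p].append((ig, p))              # imprint of IG on P

    GS = nx.Graph(GI)                                # start from a copy of GI
    for p, cs in imprints.items():
        for c in cs:                                 # classify each imprint
            GS.add_node(c, subset=boundary_face(c[0], p), primitive=p)
            GS.add_edge(p, c)                        # nest GID(p) under node p
        for a in range(len(cs)):                     # arcs ID_n(Cj/Ck) of GID
            for b in range(a + 1, len(cs)):
                if boundaries_intersect(cs[a], cs[b]):
                    GS.add_edge(cs[a], cs[b], kind="ID")
    return GI, GS
```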
5.5.2 Analysis of GS to generate idealized models
Using GS, algorithms may be applied to identify specific configurations of connections
between idealized primitives. These algorithms are derived from the current industrial
practices of idealized FEM generation from B-Rep models. Specific configurations
of interface connections can be identified automatically from GS while allowing the engineer to locally modify the proposed results based on his or her simulation hypotheses. So, nodes in GS are either of type Cj(Pi), if there exists a path between the Cj(Pi) in Pi, or of type Pi if there is no such path. Arcs are built either on the IG or on the IDnCj/Ck, depending on the type of node derived from Cj(Pi) and Pi.
To generate a fully idealized model, i.e., a model where the medial surfaces are
connected, three algorithms have been developed to identify respectively:
• interface cycles;
• groups of parallel medial surfaces;
• and L-shaped primitives configurations.
The locations of medial surfaces are described here with orthogonal or parallel properties for the sake of simplicity; each of them can be generalized to arbitrary angular positions as described in Section 5.3.2. Each algorithm is now briefly described.
Interface cycles
Cycles of interfaces are of particular interest to robustly generate connections among
idealized sub-domains. To shorten their description, the focus is placed on a common
configuration where all the interfaces between primitives are of type (4). To define a
cycle of interfaces of type (4), it is mandatory, in a first step, to identify a cycle in
GI from connections between the Pi. In a second step, the structure of connections inside each Pi, as defined in GID, must itself contain a path between the interface boundaries Cj(Pi) that extends the cycle in GI to a cycle in GS = GI ∪ GID. An example of such a cycle is illustrated in Figure 5.16. This level of description of interfaces among sub-domains indicates dependencies between boundaries of medial surfaces. Indeed, such a cycle is key information for the surface extension operator to connect the set of
medial surfaces simultaneously. The medial surfaces perpendicular to their interfaces
IG (of P3 in Figure 5.16) have to be extended not only to the medial surfaces parallel
to their interfaces (of P1 and P2 in Figure 5.16), but they have also to be extended in
accordance with the extrusion directions of their adjacent primitives. For example, to
generate a fully idealized model of the three primitives of Figure 5.16, the corner point
of the medial surface of P3, corresponding to the ID3C1/C2 edge, has to be extended to
intersect the medial surface of P1 as well as to intersect the medial surface of P2. As
described, the information available in an interface cycle enables a precise and robust
generation of connections among idealized sub-domains.
Interface cycles appear as one category of specific idealization processes because
they appear frequently in mechanical products and they fall into one category of connection
types in the taxonomy of Figure 5.9.
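One plausible way to detect such cycles on the graphs built in the previous sketch is shown below: candidate cycles of GI are taken from a cycle basis and kept only if, inside every primitive they traverse, the two imprints involved are connected in GID.

```python
import networkx as nx

def interface_cycles(GI, GS):
    """Return the cycles of GI that extend to cycles in GS = GI ∪ GID.

    GI and GS are the graphs built in the previous sketch; imprint nodes of GS
    are tuples (IG_id, primitive_id) and GI edges carry an 'interface' id."""
    valid_cycles = []
    for cycle in nx.cycle_basis(GI):                        # candidate cycles of GI
        n, ok = len(cycle), True
        for i, p in enumerate(cycle):
            ig_in = GI[cycle[i - 1]][p]["interface"]        # interface entering p
            ig_out = GI[p][cycle[(i + 1) % n]]["interface"]  # interface leaving p
            c_in, c_out = (ig_in, p), (ig_out, p)
            # Restrict GS to the imprints of p (the nested graph GID of p) and
            # require a path between the two imprints inside the primitive.
            gid_p = GS.subgraph([c for c in GS.nodes
                                 if isinstance(c, tuple) and c[1] == p])
            if not (c_in in gid_p and c_out in gid_p
                    and nx.has_path(gid_p, c_in, c_out)):
                ok = False
                break
        if ok:
            valid_cycles.append(cycle)
    return valid_cycles
```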
Groups of parallel medial surfaces
Connections of parallel medial surfaces can be handled with medial surface repositioning
(see P1 and P2 on Figure 5.17a) corresponding to the adjustment of the material
thickness on both sides of the idealized surface to generate a mechanical model consistent
with the shape of M. This is a current practice in linear analysis that has
been advantageously implemented using the relative position of extrusion primitives.
Figure 5.17: Examples of medial surface positioning improvement. (a) Offset of parallel medial surfaces, (b) offset of L-shaped medial surfaces.
These groups of parallel medial surfaces can be identified in the graph GI as the set
of connected paths containing edges of type (2) only. Figure 5.18a shows two groups
of parallel medial surfaces extracted from GI presented in Figure 5.16. As a default
processing of these paths, the corresponding parallel medial surfaces are offset to a common average position weighted by their respective areas.
However, the user can also snap a medial surface to the outer or inner skins of
an extrusion primitive whenever this prescription is compatible with all the primitives
involved in the path. Alternatively, he, or she, may even specify a particular offset position.
Surfaces are offset to the reference plane as long as the surface remains within the
limits of the original volume of the component M. This restriction avoids generating
interferences between the set of parallel primitives and the other neighboring primitives.
For example, in Figure 5.18, the resulting medial surface of the group of parallel
primitives, containing P1 and P2, cannot intersect the volumes of its perpendicular
primitives such as P3. This simple process points out the importance of categorizing the interfaces between primitives.
Like interface cycles, groups of parallel medial surfaces refer to the taxonomy of
Figure 5.9 where they fall into the type (2) category.
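The default repositioning can be sketched as an area-weighted average of the individual medial-surface positions along the common normal, optionally clamped to stay within the original volume of M. Representing each surface by a signed offset and an area, and the clamping bounds, are simplifying assumptions.

```python
def common_offset(surfaces, lower=None, upper=None):
    """Area-weighted average position for a group of parallel medial surfaces.

    surfaces     : list of (offset, area) pairs, 'offset' being the signed
                   position of each medial surface along the common normal
    lower, upper : optional bounds keeping the result inside the original
                   volume of the component (interference avoidance)"""
    total_area = sum(area for _, area in surfaces)
    target = sum(offset * area for offset, area in surfaces) / total_area
    if lower is not None:
        target = max(target, lower)
    if upper is not None:
        target = min(target, upper)
    return target

# Usage sketch: two parallel plates of areas 400 and 100 with medial surfaces
# at offsets 0.0 and 3.0 are repositioned to a single surface at offset 0.6.
print(common_offset([(0.0, 400.0), (3.0, 100.0)]))   # 0.6
```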
L-shaped primitives configurations
Figure 5.18: Example of the identification of a group of parallel medial surfaces and of border primitive configurations from the interface graph GI: (a) extraction of type (2) subgraphs from GI, (b) set of L-shaped primitives extracted from GI, (c) initial and idealized configurations of M once groups of parallel primitives and L-shaped configurations have been processed.

Figure 5.19: Example of a volume detail configuration lying on an idealized primitive.

When processing an interface of type (4) in GI, if an interface boundary Cj(Pi) is located either on or close to the boundary of the primitive Pj which is parallel to the
interface (see P1 and P2 on Figure 5.17b or P2 and P3 in Figure 5.18c), the medial
surfaces need to be relocated to avoid meshing narrow areas along one of the sub-domain
boundaries (here P3 is moved according to d3). This relocation is mandatory
because Cj(Pi) being on or close to the boundary of Pj , the distance between the
idealized representation of Pi and the boundary of Pj is of the order of magnitude
of the thickness of Pi. Because Pi is idealized, the dimension of the FEs is much larger than the thickness of Pi; hence, meshing the areas between Cj(Pi) and
the boundary of Pj would necessarily result in badly shaped FEs. The corresponding
configurations are designated as L-shaped because Pi and Pj are locally orthogonal or
close to orthogonal.
Though this configuration refers to mesh generation issues, which have not been addressed yet, L-shaped configurations where a subset of Cj(Pi) coincides with the boundary of a connected primitive (see P2 in Figure 5.18c) can be processed unambiguously without mesh generation parameters, as justified above. Processing configurations where Cj(Pi) is only close to a primitive contour requires the handling of mesh parameters and is left for future work. Primitives connected through interfaces of type (4) only, as illustrated in Figure 5.18b, are part of L-shaped configurations if they have at least one primitive contour Cj(Pi) close to the boundary of Pj. In Figure 5.18b, only P11, which is located in the middle of P9 and P14, is not considered as an L-shaped primitive. L-shaped configurations can be processed using the precise location of IG so that the repositioning applied stays within IG, ensuring the consistency of the idealized model.
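A simple test for flagging an L-shaped configuration is sketched below, assuming the imprint Cj(Pi) and the relevant boundary of Pj are available as 2D polylines in the plane of Pj's medial surface, and taking the thickness of Pi as the tolerance, consistent with the order-of-magnitude argument above.

```python
from shapely.geometry import LineString

def is_L_shaped(imprint_pts, boundary_pts, thickness_i):
    """Flag an L-shaped configuration for a type (4) interface.

    imprint_pts  : points of the imprint Cj(Pi) on Pj (2D, in Pj's plane)
    boundary_pts : points of the relevant boundary of Pj (2D polyline)
    thickness_i  : thickness of the idealized primitive Pi, used as tolerance
    Returns True when the imprint lies on, or closer to, Pj's boundary than the
    thickness of Pi, i.e. meshing the strip in between would produce badly
    shaped finite elements."""
    imprint = LineString(imprint_pts)
    boundary = LineString(boundary_pts)
    return imprint.distance(boundary) <= thickness_i

# Usage sketch: an imprint running 0.5 units away from the boundary of Pj,
# for a primitive Pi of thickness 2, is flagged for repositioning.
print(is_L_shaped([(0, 0.5), (10, 0.5)], [(0, 0), (10, 0)], thickness_i=2.0))  # True
```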
Identification criteria of Pi details
The relationships between extrusion information and primitive interfaces may also
be combined to analyze more precisely the morphology of standalone primitives, such as
small protrusions that can be considered as details. As an example, Figure 5.19 shows the interaction between a primitive P1, which can be idealized as a plate, and a primitive P2, which is morphologically not idealizable. The interface graph enriched with GID indicates that the boundary C1(P1) lies on a base face Fb1 of P1 whose boundary is used to idealize P1. Then, if the morphological analysis of F = (Fb1 −* FC1(P1)), i.e., Fb1 with the face bounded by C1(P1) subtracted, shows that F is still idealizable, this means that P2 has no morphological influence relative to P1, even though P2 is not idealizable.
As a result, P2 may be considered as a detail of P1 and processed accordingly
when generating the mesh of P1, P2. This simple example illustrates how further
analyses can be derived from the graph structures GID and GI . Identifying details
using the morphological properties of primitives is a way to be independent from the
FE mesh size. With the proposed idealization process, a volume can be considered as
a detail with respect to the surrounding geometry before the operation of dimensional
reduction. This is an a priori approach satisfying the requirement of Section 2.2.1
which stated that a skin detail cannot be directly identified in an idealized model.
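The detail criterion can be sketched as follows, assuming Fb1 and the imprint FC1(P1) are available as 2D polygons in the plane of the base face; the 'idealizability' re-evaluation is passed in as a callable (for instance built on the inscribed-radius helper of the earlier MAT sketch).

```python
from shapely.geometry import Polygon

def protrusion_is_detail(base_face_pts, imprint_pts, is_idealizable):
    """Decide whether a non-idealizable protrusion P2 is a detail of P1.

    base_face_pts : contour of the base face Fb1 of P1 (2D polygon)
    imprint_pts   : contour of the imprint FC1(P1) of P2 on Fb1 (2D polygon)
    is_idealizable: callable taking a shapely geometry and returning True when
                    its morphology still qualifies for idealization
                    (e.g. based on the max inscribed radius of the MAT sketch)
    Implements F = Fb1 -* FC1(P1): if the remaining face F is still
    idealizable, P2 has no morphological influence on P1 and may be treated
    as a detail when meshing."""
    Fb1 = Polygon(base_face_pts)
    imprint = Polygon(imprint_pts)
    F = Fb1.difference(imprint)          # regularized Boolean subtraction
    return is_idealizable(F)
```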
Though the criterion described above is not generic, it illustrates the ability of the graph structures GID and GI to serve as a basis for other criteria covering a much wider range of configurations where skin and topological details could be identified with
respect to the idealization processes. A completely structured approach regarding these
categories of details is part of future work.
5.5.3 Generation of idealized models
To illustrate the benefits of the interface graph analyses of GID and GI , which have been
used to identify specific configurations with the algorithms described in Section 5.5.2,
an operator has been developed to connect the medial surfaces. Once the groups of
parallel medial surfaces have been correctly aligned, the medial surfaces involved in
interfaces of type (4) are connected using an extension operator. Because the precise
locations of the interfaces between primitives Pi and Pj are known through their geometric
imprint Cj(Pi) on these primitives, the surface requiring extension is bounded
by the imprint of Cj(Pi) on the adjacent medial surface. The availability of detailed
interface information in GI and GID increases the robustness of the connection operator
and prevents the generation of inconsistent surfaces located outside interface areas.
Connection operator
Firstly, the connection operator determines the imprints Cj(Pi) on the corresponding
medial surface of the primitive Pj . This image of Cj(Pi) on the medial surface of the
neighbor primitive Pi is noted Img(Cj(Pi)). Figure 5.20a shows, in red, three interface
boundaries on the medial surfaces Img(Cj(Pi)) of the three primitives P1, P2, P3. When
adjusting Cj(Pi), the medial surface boundary of Pi is also transferred on Img(Cj(Pi)).
Such regions, in green in Figure 5.20a, are noted ImgMS(Cj(Pi)). The next step extends
the medial surfaces involved in interfaces of type (4). The medial surfaces are extended
from Img(Cj(Pi)) to ImgMS(Cj(Pi)) (the red lines to the green lines in Figure 5.20a).
The extensions of the medial surfaces split Img(Cj(Pi)) into two or more sub-regions.
The sub-regions that contain edges coincident with edges of the primitive medial surface are removed to avoid small mesh areas.
Figure 5.20: (a) Representation of the interface imprints on primitives and on medial surfaces. (b) Connection process of two primitives P1 and P2 with and without offsetting their medial surfaces.
It must be noticed that the regions ImgMS(Cj(Pi)) lie in Pj and can be obtained easily as a translation of Cj(Pi). Therefore, this operation is not comparable to a general-purpose projection operator, for which the existence and uniqueness of a solution is a weakness.
Here, ImgMS(Cj(Pi)) always exists and is uniquely defined from Cj(Pi) based on the
properties of extrusion primitives.
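This remark can be made concrete with a minimal sketch: under the assumption that the imprint lies on a base face of a plate-like primitive, its image on the medial surface is the imprint translated by half the extrusion length along the extrusion direction, so the operation always succeeds and is unique.

```python
import numpy as np

def image_on_medial_surface(imprint_pts, extrusion_dir, extrusion_length):
    """Translate an interface imprint Cj(Pi) onto the medial surface.

    Because the primitive is an extrusion, the image always exists and is
    unique: it is the imprint translated by half the extrusion length along
    the extrusion direction (assuming the imprint lies on a base face of a
    plate-like primitive). No projection, hence no existence/uniqueness issue."""
    pts = np.asarray(imprint_pts, dtype=float)
    d = np.asarray(extrusion_dir, dtype=float)
    d = d / np.linalg.norm(d)
    return pts + 0.5 * extrusion_length * d

# Usage sketch: an imprint on the base face z = 0 of a plate of thickness 4
# extruded along +z maps to the mid-plane z = 2.
print(image_on_medial_surface([[0, 0, 0], [10, 0, 0]], [0, 0, 1], 4.0))
```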
The interface cycles previously detected are used to identify intersection points
within ImgMS(Cj(Pi)). Figure 5.20a illustrates the interaction between the three
ImgMS(Cj(Pi)) corresponding to the intersection between the three green lines. When
processing L-shaped primitives, in case the medial surface of Pi is shifted to the boundary
of Pj , the corresponding images Img(Cj(Pi)) and ImgMS(Cj(Pi)) are also shifted
with the same direction and amplitude. This update of these images preserves the
connectivity of the idealized model when extending medial surfaces. Figure 5.20b illustrates
the connection process of two primitives P1 and P2 with and without moving
the medial surface of the primitive P2. This figure shows how the connection between
the idealized representations of P1 and P2 can preserve the connectivity of the idealized
model.
Results of idealized models
As shown in Figure 5.18b and c, the repositioning of medial surfaces inside P1, P2 and P3 improves their connections and the overall idealized model. Figure 5.21 illustrates the idealization of this component. Firstly, the medial surface of each primitive is generated. Then, the groups of parallel medial surfaces are aligned before the generation of a fully connected idealized model.

Figure 5.21: Idealization process of a component that takes advantage of its interface graph structures GID and GI.
Finally, the complete idealization process is illustrated in Figure 5.22. The initial
CAD model is segmented using the construction graph generation of Chapter 4 to
produce GD. It produces a set of volume primitives Pi with interfaces between them
resulting in the graph structures GI and GID. A morphological analysis is applied
on each Pi as described in Section 5.3.1. Here, the user has applied a threshold ratio
xu = 2 and an idealization ratio xr = 10. Using these values, all the primitives are considered to be idealized as surfaces and lines. The final idealized CAD model is generated with the algorithms proposed in Section 5.5.3 and exported to a CAE mesh environment (see Chapter 6).

Figure 5.22: Illustration of the successive phases of the idealization process (read from left to right on each of the two rows forming the entire sequence).
5.6 Conclusion
In this chapter, an analysis framework dedicated to assembly idealization has been
presented. This process exploits construction graphs of components that produce their
segmentation into primitives. Morphological criteria have been proposed to evaluate
each primitive with respect to their idealization process. The benefits of generative
process graphs have been evaluated in the context of idealization processes as needed
for FEA.
A morphological analysis forms the basis of an analysis of ’idealizability’ of primitives.
This analysis takes advantage of geometric interfaces between primitives to
assess stiffening effects that potentially propagate across the primitives when they are
iteratively merged to regenerate the initial component and to locate idealizable subdomains
over this component. Although the idealization concentrates on shells and plates, it has been observed that the morphological analysis can be extended to derive
beam idealizations from primitives.
This morphological analysis also supports the characterization of geometric details in relation to local and idealizable regions of a component, independently of any numerical method used to compute solution fields. Overall, the construction graph allows an engineer to access non-trivial variants of the shape decomposition into primitives, which can be useful to evaluate different idealizations of a component.
Finally, this decomposition produces an accurate description into sub-domains and
into geometric interfaces which can be used to apply dimensional reduction operators.
These operators are effectively robust because interfaces between primitives are
precisely defined and they combine with the primitives to bound their idealized representations
and monitor the connections of the idealized model.
The principle of component segmentation appears also to be compatible with the
more general needs to process assembly models. Indeed, components are sub-domains
of assemblies, and interfaces are also required explicitly so that the engineer can assign them a specific mechanical behavior as needed to meet the simulation objectives. The proposed idealization process can now take part in the methodology dedicated to the adaptation of a DMU to FE assembly models, as described in the next chapter.
Chapter 6

Toward a methodology to adapt an enriched DMU to FE assembly models
Having detailed the idealization process as a high-level operator that benefits from a robust shape enrichment, this chapter extends the approach toward a methodology to adapt an enriched DMU to FE assembly models. Shape transformations resulting from user-specified hypotheses are analyzed to extract pre-processing task dependencies. These dependencies lead to the specification of a model preparation methodology that addresses the shape transformation categories specific to assemblies. To prove the efficiency of the proposed methodology, corresponding operators have been developed and applied to an industrial DMU. The obtained results point out a reduction in preparation time compared to purely interactive processes. The time saved enables the automation of simulation processes of large assemblies.
6.1 Introduction
Chapter 3 set the objectives of a new approach to efficiently adapt CAD assembly
models derived from DMUs as required for FE assembly models. Chapters 4 and 5
significantly contributed to solving two issues regarding the proposed approach. The first challenge addresses the internal structure of CAD components, which has to be improved to provide the engineer with a robust segmentation that can be used as a basis
for a morphological analysis. The second challenge deals with the implementation
of a robust idealization process automating the tedious tasks of dimensional reduction
operations and particularly the treatment of connections between idealized areas.
Then, the proposed algorithms have been specified to enable the transformations of
solid primitives as well as their associated interfaces. The set of solid primitives can
result either from a component segmentation or an assembly structure decomposed
into components in turn decomposed into solid primitives. Thus, the method allows an
engineer to transform components’ shapes while integrating the semantics of assembly
interfaces.
This chapter goes even further to widen the scope of shape transformations at the
assembly level and evolve toward a methodology of assembly pre-processing. The aim
is to reinforce the ability of the proposed approach to challenge the current practices used to generate large assembly simulation models. The analysis of dependencies among
component shape transformations applied to assemblies will help us to formalize this
methodology. Thanks to the geometric interfaces of components and functional information
expressed as functional designations of components obtained with the method
of Shahwan et al. [SLF∗13] summarized in Section 3.3, new enriched DMUs are now available to engineers. Thanks also to the component segmentation into solid primitives and their interfaces, which can be used to idealize sub-domains as described in Chapter 5, the models input to FEA pre-processing contain much more information available to automate the geometric transformations required to meet the simulation objectives. The method described in this chapter uses this enriched DMU as input to structure the interactions between shape transformations, leading
validity of the proposed methodology, Sections 6.3 and 6.4 illustrate it with two test
cases of an industrial assembly structure (see Figure 1.6) to create a simplified volume
model and an idealized surface model. To this end, new developments are presented
that are based on operators that perform shape transformations using functional information
to efficiently automate the pre-processing. Section 6.3 develops the concept
of template-based transformation operators to efficiently transform groups of components.
This operator is illustratively applied to an industrial aeronautical use-case with
transformations of bolted junctions. Section 6.4 deploys the methodology using the idealization
algorithms of Chapter 5 to generate a fully idealized assembly model. Finally,
the software platform developed in this thesis is presented at the end of Section 6.4.
6.2 A general methodology for assembly adaptations for FEA
Chapter 1 pointed out the impact of interactions between components on assembly
transformation. The idealization of components is not the only time-consuming task
during assembly preparation. When setting up large structural assembly simulations,
processing contacts between components as well as transforming entire groups of components
are also tedious tasks for the engineer. The conclusion of Section 1.5.4 showed
that the shape transformations taking place during an assembly simulation preparation
process interact with simulation objectives, hypotheses and functions attached to
components and to their interfaces. Therefore, to reduce the amount of time spent
on assembly pre-processing, the purpose is now to analyze and structure the interactions
between shape transformations. This leads to a methodology that structures the
assembly preparation process.
6.2.1 From simulation objectives to shape transformations
How do shape transformations emerge from simulation objectives and how do they interact with each other? This is analyzed in the following section. However,
the intention is not to detail interactions but to focus on issues that help to structure
the shape transformations. Transformation criteria related to time that may influence
simulation objectives are not relevant, i.e., manual operations that have been performed
to save time are irrelevant. Indeed, the purpose is to structure shape transformations
to save time and improve the efficiency of preparation processes.
6.2.1.1 Observation areas
From the simulation objectives, the structural engineer derives hypotheses that address
components and/or interfaces among them, hence the concept of observation area.
Even if this engineer has to produce an efficient simplified model of the assembly
to meet performance requirements, he/she must nevertheless be able to claim that his/her result is correct and accurate enough in critical observation areas that are consistent
with the simulation objectives. Therefore, the mechanical model set up in these areas
must remain as close as possible to the real behavior of the assembly. Thus, the
geometric transformations performed in these areas must be addressed in the first place.
As an example, in Figure 6.1, the simulation objective is to observe displacements in
the identified region (circled area) due to the effects of local loading configurations,
the section of the domain being complex. A possible engineer's hypothesis can be to model precisely the 3D deformation in the observation area with a volume model and a fine mesh, and to set up a coarse mesh or even idealized sub-domains outside the area of interest. To make this hypothesis explicit over the domain, the circled area should be
delimited before meshing the whole object. During a preparation process, setting up
observation areas, and thus subdividing an assembly into sub-domains independently of the component boundaries and their interfaces, acts as a prominent task.

Figure 6.1: Setting up an observation area consistent with simulation objectives.
6.2.1.2 Entire idealization of components
Idealizations have inherently a strong impact on shape transformations because of their
dimensional reduction. Applied to a standalone component, idealization is meaningful
to transform 3D domains down to 1D ones. In the context of assemblies, to meet simulation objectives and performance requirements and to reduce the number of unknowns, the engineer
can idealize a component up to a point (0D), e.g., a concentrated mass, or even replace
it by a pre-defined solution field, e.g., a rigid body behavior or a spring-damper
field. When analytical models are available, some groups of components, like the bolts
in Figure 6.3a, do not appear geometrically in the FE assembly. The planar flange
connected by the bolts forming the major interface is used as the location of a section in
the FE assembly model to determine resulting internal forces and moments in that
section. Then, the analytical model is independent of the FE one and it is fed with
these parameters to determine the pre-stress parameters of the bolts. Figure 6.3b illustrates
the complete idealization of pulleys as boundary conditions. This time, an
analytical model has been used prior to the FE assembly model. Such categories of
idealizations can be also applied to a set of connected components (see Figure 6.2). In
either case, such transformations have a strong impact on the interfaces between the
idealized components and their neighboring ones.
Consequently, interfaces between idealized components can no longer be subjected
to other hypotheses, e.g., contact and/or friction. Again, this observation highlights
the prominence of idealization transformations over interfaces ones.
6.2.1.3 Processing Interfaces
Interfaces between components are the location of specific hypotheses (see Table 1.2)
since they characterize junctions between components. Naturally, they interact with
hypotheses and shape transformations applied to the components they connect. Let
Annotations: (a) idealization of groups of components; (b) idealization of a component as BCs.
Figure 6.3: (a) Transformation of groups of components as analytical models and, (b) idealization
of components as BCs (courtesy of ANTECIM).
Annotations: plates A, B, C; interface idealization; panels (a), (b), (c).
Figure 6.4: Influence of interfaces over shape transformations of components.
us consider the example of Figure 6.4. First, a simulation objective can
be stated as: modeling the deformation of the assembly with relative movements of
plates A, B, C under friction. Under this objective, hypotheses are derived that require
modeling interfaces (A, C) and (B, C) with contact and friction. Then, even if A, B
and C, as standalone components, are candidates for idealization transformations,
these idealizations cannot be carried further because the interfaces would need to
be removed, which is incompatible with the hypotheses. Second, another
simulation objective can be stated as: modeling the deformation of the assembly where
the junctions between plates A, B, C are perfect, i.e., they behave like a continuous
medium. There, plates A, B, C can still be idealized as standalone components, but the
hypothesis on interfaces enables merging the three domains (Figure 6.4b) and idealizing
the components further to obtain an even simpler model with variable thickness (see
Figure 6.4c).
Thus, there are priorities between shape transformations deriving from the hypotheses
applied to interfaces. Indeed, this indicates that hypotheses and shape transformations
addressing the interfaces should take place before those addressing components
as standalone objects. Effectively, interfaces are part of component boundaries; hence
their transformations modify these boundaries. It is more efficient to evolve the shape
of interfaces alone first and to process component shapes, as isolated domains, afterwards.
As explained in Section 5.4, once the role of interfaces has been defined in the
assembly according to the user’s simulation objectives and the corresponding transformations
have been performed, each individual component can be transformed on its
own to take into account these interfaces as external boundary conditions during its
idealization/simplification process.
6.2.2 Structuring dependencies between shape transformations
as a contribution to a methodology of assembly preparation
Section 6.2.1 has analyzed the relationships between simulation objectives, hypotheses,
and shape transformations of assemblies. One outcome of this analysis is a structure of the
dependencies between hypotheses and shape transformations that address an assembly
at different levels. The purpose is now to exploit these dependencies to organize the
various steps of an assembly simulation preparation process so that the process appears as linear
as possible and can be efficiently automated.
Dependencies of geometric transformations of components and interfaces
upon simulation hypotheses
Section 6.2.1.1 has shown the dependency of observation areas upon the simulation
objectives. Defining observation areas acts as a partitioning operation of an assembly,
independently of its component boundaries. Section 6.2.1.2 introduced the concept of
entire idealization of components and pre-defined solution fields. Indeed, the shape
transformations derived from Section 6.2.1.2 also cover sub-domains of the assembly
that can be designated as 'areas of weak interest'. There, the assembly interfaces contained
in these areas are superseded by the transformations of Section 6.2.1.2. From a
complementary point of view, areas of interest, once defined, contain sub-domains, i.e.,
components or parts of components, that can still be subjected to idealizations, especially
transformations of volume sub-domains into shells/membranes and/or plates.
Consequently, areas of weak interest are regarded as the primary sub-domains to be
defined. Then, entire idealization of components and pre-defined solution fields take
place inside these areas first (identified as task 1 in Figure 6.5). These areas
are necessarily disjoint from the areas of interest, therefore their processing cannot
interfere with that of the areas of interest.
Sections 1.5.4 and 6.2.1.3 have shown that hypotheses about assembly interfaces
influence the transformations of component boundaries. Hence, these hypotheses must
be located outside the areas of weak interest to preserve the consistency of the overall
simulation model. Subsequently, these hypotheses about interfaces are known once the
areas of weak interest have been specified. Consequently, they form task 2: after
the definition of the areas of weak interest, the corresponding shape transformations
of assembly interfaces are applied at that stage.
As highlighted in Sections 1.5.3, 1.5.4, and 6.2.1.3, idealizations are shape transformations
having an important impact on component shapes. As mentioned in Section
2.2, the order of detail removal operations and idealizations has not been studied
precisely yet. However, once idealizations have been assigned to sub-domains corresponding
to primitives Pi of components, these transformations produce new interfaces
between these sub-domains (see Figure 5.1) in addition to the assembly interfaces originating
from the interactions between components. Independently of skin and topological
details, idealizations can be regarded as task 3 in the preparation process flow.
Effectively, these new interfaces are the consequences of the idealization of sub-domains,
i.e., they result from idealization processes. Therefore, these new interfaces cannot be processed
during the second task; they should be processed right after the idealizations
performed during the third task. The corresponding shape
transformations attached to these new interfaces form task 4.
Now, as pointed out in Section 6.2.1.3, idealizations can interact with each other
because the idealized sub-domains can be extended/merged, in accordance with
their geometric configurations, to produce a connected idealized model wherever it is
required by the simulation objectives. This new set of shape transformations can be
regarded as task 5, which could indeed appear as part of an iterative process spanning
tasks three and four. These stages have not yet been characterized in enough depth
to conclude whether the process is truly iterative or not. Even though
task two addresses hypotheses attached to assembly interfaces and their corresponding
shape transformations, it cannot be swapped with task three to contribute to the iterative
process discussed before. Indeed, task 2 is connected to assembly interfaces between
components and their processing could be influenced by component idealizations, e.g.,
in a shaft/bearing junction, idealizing the shaft influences its contact area with the
bearings that guide its rotational movement.
The hypotheses and shape transformations previously mentioned enable the definition of
a mechanical model over each sub-domain resulting from the tasks described above, but
this model must be available among the entities of the CAE software. This is mandatory
to take advantage of this software where the FE mesh will be generated. Consequently,
even if an engineer defines interface transformations consistent with the simulation hypotheses,
there may be further restrictions to ensure that the shapes and mechanical models
produced are effectively compatible with the targeted CAE software capabilities. For
the sake of conciseness, this aspect is not addressed here.
Toward a methodology of assembly model preparation
This section has identified dependencies among shape transformations connected to
Task 1 – Definition of areas of weak interest: entire idealization of components, pre-defined solution fields;
Task 2 – Specification and transformations of component interfaces;
Task 3 – Idealization of sub-domains outside the areas of weak interest;
Task 4 – Specification and transformations of interfaces resulting from idealization;
Task 5 – Interaction and transformations of idealized sub-domains;
Task 6 – Skin or topological transformations of sub-domains.
Figure 6.5: Synthesis of the structure of an assembly simulation preparation process.
simulation objectives and hypotheses. Shape details on components can be identified
using the morphological analysis, as illustrated in Section 5.5.2. This analysis has
shown that the primitives Pi obtained from the construction graph GD could be further
decomposed into sub-domains after analyzing the result of a first MAT. This analysis
has also shown its ability to categorize sub-domains relatively to each other. However,
details, which may originate from different components or even represent an entire
component, need to be identified through a full morphological analysis of the assembly.
This has not been investigated further and is part of future research. Currently, detail
removals can take place after task two, but they can be either prior or posterior to idealizations.
The definition of areas of interest has connections with the mesh generation process to
monitor the level of discretization of sub-domains. This definition acts as a partitioning
process that can take place at any time during the process flow of Figure 6.5.
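To make the resulting ordering concrete, the process flow of Figure 6.5 can be read as a simple sequential pipeline. The following Python sketch only records the ordering of the six tasks; the AssemblyModel container and the task bodies are hypothetical placeholders and do not correspond to the prototypes described later in this chapter.

from dataclasses import dataclass, field

@dataclass
class AssemblyModel:
    # Hypothetical container for the enriched assembly being prepared.
    components: list
    interfaces: list
    log: list = field(default_factory=list)

def run_task(assembly, name):
    # Placeholder body: a real implementation would apply the corresponding
    # shape transformations; here only the ordering of Figure 6.5 is recorded.
    assembly.log.append(name)
    return assembly

def prepare_assembly(assembly):
    run_task(assembly, "task 1: areas of weak interest, entire idealizations, pre-defined solution fields")
    run_task(assembly, "task 2: specification and transformation of assembly interfaces")
    run_task(assembly, "task 3: idealization of sub-domains outside the areas of weak interest")
    run_task(assembly, "task 4: transformation of interfaces resulting from idealization")
    run_task(assembly, "task 5: interaction and transformation of idealized sub-domains")
    run_task(assembly, "task 6: skin or topological transformations of sub-domains")
    return assembly

model = prepare_assembly(AssemblyModel(components=[], interfaces=[]))
print("\n".join(model.log))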
6.2.3 Conclusion and methodology implementation
As a conclusion, the difference between a simulation model of a standalone component
and that of an assembly relates to:
• The interactions between components. The engineer formulates a hypothesis
for each interface between components. These hypotheses derive from assembly
simulation objectives;
• The ordering of shape transformations. The entire idealization of components and
the specification of pre-defined solution fields followed by shape transformations
of component interfaces are prioritized;
• The interactions between idealizations and assembly interface transformations.
To be able to model large assemblies, not only components but groups of components
have to be idealized, which can significantly increase the number of
interactions between idealizations and transformations of assembly interfaces.
The simulation objectives are expressed through hypotheses that trigger shape
transformations. Studying the interactions between simulation objectives, hypotheses,
and shape transformations has revealed dependencies between categories of shape
transformations. These dependencies have been organized to structure the assembly
simulation model preparation process in terms of methodology and scope of shape
transformation operators. The proposed methodology aims at successfully selecting
and applying the geometric transformation operators corresponding to the simulation
objectives of the engineer.
Starting from an enriched structure of DMU as proposed in Section 3.3, the purpose
of the next sections is to illustrate how this methodology can be applied to industrial
use-cases. Two implementations are proposed and both are based on the exploitation
of functional features of the assembly using the interfaces between components (see
Figures 2.14 and 3.2).
As a first methodology implementation, Section 6.3 develops the concept of template-based
operators. This concept uses functional information and the geometry of assembly
interfaces to identify configurations such as bolted junctions and to apply specific
simulation hypotheses to transform their assembly interfaces. This transformation
creates a simplified volume model with a sub-domain decomposition around bolted
junctions, as required by the simulation objectives (see Figure 6.6).
The second methodology implementation, presented in Section 6.4, leads to a full
idealization of an assembly use-case. This implementation confirms that the idealization
process of Chapter 5 can be generalized to assembly structures.
Annotations: clearance vs. fitted contact. Hypothesis: bolted junctions represented by a simplified bolt model. Geometric transformation: the cylindrical volume interface representing a screw/hole clearance is transformed into a fitted contact.
Figure 6.6: Use-Case 1: simplified solid model with sub-domains decomposition around bolted
junctions. Area enlarged (a): Illustration of task 2 that transforms bolted junction interfaces
into fitted contacts.
6.3 Template-based geometric transformations resulting
from function identifications
As illustrated in Section 1.5.4, repetitive configurations, e.g., junctions, and their processing
are critical when preparing assembly structures, justifying the need to automate
the preparation of large assembly models. To improve the efficiency of DMU
transformations for FEA, Section 3.5.2 has proposed to set up relationships between
simulation objectives and geometric transformations through the symbolic representation
of component functions and component interfaces. The method is based on an
enriched DMU as input (see Section 3.3) which contains explicit geometric interfaces
between components (contacts and interferences) as well as their functional designations.
This enriched DMU has been generated based on the research work of Shahwan
et al. [SLF∗12, SLF∗13]. The geometric interfaces feed instances of conventional interface
(CI) classes (see Section 1.3.2) structured into a taxonomy, TCI, that binds
geometric and symbolic data, e.g., planar contact, spherical partial contact, cylindrical
interference, . . . Simultaneously, CIs and assembly components are organized into a CI
graph, CIG(C, I), where the components C are nodes and the CIs are arcs.
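As an illustration of this structure, the CI graph CIG(C, I) can be held in a very small data model: components as nodes and conventional interfaces, typed according to TCI, as arcs. The Python sketch below uses plain containers; the class names and the example interface types are illustrative assumptions and not the actual data model of the enriched DMU.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConventionalInterface:
    # Arc of CIG: a geometric interface between two components,
    # typed according to the taxonomy TCI.
    comp_a: str
    comp_b: str
    ci_type: str    # e.g. "planar contact", "cylindrical interference"

@dataclass
class CIGraph:
    components: set = field(default_factory=set)    # nodes
    interfaces: list = field(default_factory=list)  # arcs

    def add_interface(self, ci):
        self.components.update({ci.comp_a, ci.comp_b})
        self.interfaces.append(ci)

    def neighbors(self, comp):
        # All conventional interfaces attached to one component.
        return [ci for ci in self.interfaces if comp in (ci.comp_a, ci.comp_b)]

# Example: a screw in contact with a plate and threaded into a nut.
cig = CIGraph()
cig.add_interface(ConventionalInterface("screw_1", "plate_A", "cylindrical loose fit"))
cig.add_interface(ConventionalInterface("screw_1", "nut_1", "threaded link"))
print([ci.ci_type for ci in cig.neighbors("screw_1")])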
Starting from this enriched model, Section 6.3.2 extends the functional structure
to reach a level of product functions. Therefore, simulation objectives can be used to
specify new geometric operators using these functions to robustly identify the components
and assembly interfaces to transform [BSL∗14]. Although close to Knowledge Based
Engineering (KBE), this scheme is nonetheless more generic and more robust than
KBE approaches because functional designations and functions are generic
concepts. KBE aims at structuring engineering knowledge and at processing it with
symbolic representations [CP99, Roc12] using language-based approaches. Here, the
focus is on a robust connection between geometric models and symbolic representations
featuring functions.
To prove the validity of this approach and of the methodology proposed in Section
6.2.2, this section presents a template-based operator dedicated to the automation
of shape transformations of bolted junctions (see Figure 6.6). The template operator is
described in Section 6.3.3. Using functional information and geometric interfaces, this
operator applies a user-defined template to simplify bolts and sets control sub-domains
around them in their associated tightened components to enable the description of the
friction phenomenon between these components. This template is used to precisely
monitor the mesh generation process while preserving the consistency of contacts and
adapting the assembly model to simulation objectives. Finally, Section 6.3.4 illustrates
the result of the different tasks of the proposed methodology applied to the transformation
of the bolted junctions of the root joint model presented in Figure 1.6.
6.3.1 Overview of the template-based process
The overall principle of the template-based approach is introduced in Figure 6.7. It
uses the available functional information and geometric interfaces, see (1) in Figure 6.7,
as well as a library of pre-defined parametric templates, see (2). From this library, the
templates are selected by the user according to his, resp. her, simulation objectives.
Once the templates have been selected, the operator automatically identifies the functions
in the assembly the templates are related to, see (3). Then, as explained in
Section 6.3.2, the operator identifies in the CAD assembly the components and interfaces
to be transformed, see (4). In (5), the template definitions are fitted to the real
geometry, i.e., the component and interface dimensions involved in the geometric
transformations are updated in the pre-definition of the templates. Section 6.3.3.1
details the compatibility conditions required to insert the templates into the real
geometry. Finally, the real geometry is transformed according to the compatibility
conditions and the templates are inserted in the assembly model, see task (6). Section
6.3.4 describes the application of the template operator to two aeronautical use-cases.
The result is a new CAD assembly model adapted to the simulation objectives.
6.3.2 From component functional designation of an enriched
DMU to product functions
Though the bottom-up approach of Shahwan et al. [SLF∗12, SLF∗13] summarized in
Section 3.3 provides assembly components with a structured model incorporating functional
information that is independent of their dimensions, their functional designation
does not appear as an appropriate entry point to derive shape transformation operators
as required for FE analyses. Indeed, to set up FE assembly models, an engineer looks
for bolted junctions that he, resp. she, wants to transform to express friction phenomena,
pre-stressed state in the screw, . . . Consequently, the functional level needed is not
(1) Definition of assembly interfaces and functional designations; (2) template library: pre-defined parametric templates; (3) functions available for transformation; (4) identification of components and interfaces involved in selected functions; (5) geometrical adjustment of the template to real shapes; (6) adaptation of real shapes (real shape + template) with template insertion, leading to the adapted assembly.
Figure 6.7: Overview of the main phases of the template-based process.
Taxonomy nodes shown: Assembly; Disassemblable; Obstacle; Adherence; Bolted; Bolted adjusted; Screw + nut; Blocked nut and Counter nut (dependent functions); Nut; Screw; Plates.
Figure 6.8: Subset of TF N , defining a functional structure of an assembly.
the functional designation, which is bound to a single component; it is the product
function itself that is needed to address the corresponding set of components and their
functional interfaces.
To this end, it is mandatory to refer to product functions. This is achieved with a
taxonomy of functions, TF N , that can produce a functional structure of an assembly
(see Figure 6.8). Blue items define the sub-path in the TF N hierarchy that characterizes
bolted junctions.
Each instance of a class in TF N contains a set of components identified by their functional
designation, i.e., it contains their structured geometric models and functional
interfaces. As a result of the use of TF N , a component of a DMU can be automatically
identified when it falls into one of the categories required to define bolted junctions,
e.g., cap screws, nuts, locking nuts. This means that its B-Rep model incorporates
its geometric interfaces with neighboring components. The graph of assembly interfaces,
set up as input to the process of functional designation assignment, identifies the
components contained in a bolted junction. Each component is assigned a functional
designation that intrinsically identifies cap screws, nuts, locking nuts, . . . , and connects
it with an assembly instance in TF N .
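As a minimal illustration of this identification step, a 'screw + nut' instance can be assembled by starting from each component whose functional designation is a cap screw and walking the graph of assembly interfaces to collect the nut and the tightened plates. The component names, designations and interface pairs below are purely hypothetical and the sketch is not the enrichment code of [SLF∗13].

# Hypothetical functional designations per component (as produced by the
# functional enrichment step); the names are illustrative only.
functional_designation = {
    "screw_1": "cap screw", "nut_1": "nut", "locknut_1": "locking nut",
    "plate_A": "plate", "plate_B": "plate",
}
# Assembly interfaces as (component, component) pairs.
assembly_interfaces = [
    ("screw_1", "nut_1"), ("nut_1", "locknut_1"),
    ("screw_1", "plate_A"), ("screw_1", "plate_B"),
    ("nut_1", "plate_B"), ("plate_A", "plate_B"),
]

def bolted_junction_instances():
    """Group components into 'screw + nut' function instances of TFN."""
    instances = []
    for comp, fd in functional_designation.items():
        if fd != "cap screw":
            continue
        linked = {b if a == comp else a
                  for a, b in assembly_interfaces if comp in (a, b)}
        instances.append({
            "screw": comp,
            "nuts": [c for c in linked if functional_designation[c] == "nut"],
            "plates": [c for c in linked if functional_designation[c] == "plate"],
        })
    return instances

print(bolted_junction_instances())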
It is now the purpose of Section 6.3 to take advantage of this information to set up
the template-based transformations.
6.3.3 Exploitation of the template-based approach for FE model transformations
As a result of the functional enrichment process, the DMU is now geometrically structured,
components are linked by their geometric interfaces, and groups of components
can be accurately identified and located in the DMU using their function and geometric
structure, e.g., adjusted bolted junctions with (screw+nut) (see Figure 6.8).
Now, the geometric transformations needed to adapt the DMU to FEA objectives are
strengthened because screws, nuts, and locking nuts can be robustly identified, and groups of
tightened components are also available through the load cycles attached to cap screws
(see Section 3.3).
Two possible accesses are proposed to define a function-based template T related
to an assembly function:
• A component C through a user-defined selection: from it and its functional designation,
a data structure gives access to the functions it contributes to. After
selecting C, the user selects the function of interest among the available functions
attached to C in TF N and compatible with T. Other components are recovered
through the selected function this component participates in;
• The function itself in TF N that can lead to the set of components needed to
define this function and all the instances of this function existing in the targeted
assembly.
These accesses can be subjected to constraints that help identify the proper
set of instances. Constraints aim at filtering out instances when a template T is defined
from a function, to reduce a set of instances down to the user's needs, e.g., assembly
function with bolts 'constrained with' two tightened plates, component i and component j.
Constraints aim at extending a set of instances when a template is defined from a
component, i.e., a single function instance is recovered and needs to be extended, e.g.,
assembly function with bolts 'constrained with' the same tightened components and a screw
head functional interface of type 'planar support' or 'conical fit'.
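A possible way to express such constraints is sketched below: function instances, recovered through TFN, are filtered according to the number of tightened plates, the screw head interface type, or an explicit set of tightened components. The instance records and constraint names are hypothetical and only mirror the two examples given above.

# Hypothetical bolted-junction instances recovered through TFN.
instances = [
    {"id": "BJ1", "plates": ["plate_A", "plate_B"], "head_interface": "planar support"},
    {"id": "BJ2", "plates": ["plate_A", "plate_B", "plate_C"], "head_interface": "conical fit"},
    {"id": "BJ3", "plates": ["plate_D", "plate_E"], "head_interface": "conical fit"},
]

def constrain(instances, n_plates=None, head_types=None, tightened=None):
    """Filter a set of function instances according to user constraints."""
    selected = []
    for inst in instances:
        if n_plates is not None and len(inst["plates"]) != n_plates:
            continue
        if head_types is not None and inst["head_interface"] not in head_types:
            continue
        if tightened is not None and set(inst["plates"]) != set(tightened):
            continue
        selected.append(inst)
    return selected

# e.g. bolts tightening exactly two plates with a planar or conical head support:
print(constrain(instances, n_plates=2, head_types={"planar support", "conical fit"}))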
6.3.3.1 Function-based template and compatibility conditions of transformations
The previous section has sketched how component functions can be used to identify
sets of components in an assembly. Indeed, this identification is based on classes
appearing in TF N . Here, the purpose is to define more precisely how the template can
be related to TF N and what constraints are set on shape transformations to preserve the
geometric consistency of the components and their assembly. Shape transformations
are application-dependent; here, the context of structural mechanics and FEA defines
the range of possible transformations.
The simplest relationship between a template T and TF N is to relate T to a leaf of
TF N . In this case, T covers instances defining sets of components that contain a variable
number of components. T is also dimension independent since it covers any size of
component, i.e., it is a parameterized entity. Shape transformations on T are designated
as ST and the template devoted to an application becomes ST (T). Now, reducing the
scope to disassemblable assembly functions and more specifically bolted junctions, one
leaf of TF N can be used to define more precisely T and ST (T). Conforming to Figure 6.8,
let us restrict first to the leaf ‘screw+nut’ of TF N . Then, T contains the following
functional interfaces: one threaded link, two or more planar supports (one between nut
and plate and at least one between two plates), either one planar support or one conical
fit between the screw head and a plate, and as many cylindrical loose fits between
the screw and the plates as there are plates, because the class of junctions is of type adjusted. The shape
transformations ST (T) of T set up to process bolted junctions can be summarized as
(see Figure 6.9):
• ST 1: merging screw and nut (see Section 6.3.3.1);
• ST 2: localization of friction effects with a sub-domain around a screw (see Section
6.3.3.1);
• ST 3: removal of the locking nut if it exists (see Section 6.3.3.2);
• ST 4: screw head transformation for mesh generation purposes (see Section 6.3.3.3);
• ST 5: cylindrical loose fit around the screw shaft to support the contact condition
with tightened plates (see Section 6.3.4).
Each of these transformations is detailed in the following sections.
Now, the purpose is to define ST so that ST (T) exists and preserves the consistency
of the components and the assembly. This defines compatibility conditions, CC, between
T and ST that are conceptually close to attachment constraints of form features
on an object [vdBvdMB03] (see Figure 6.9). CC applied to ST are introduced briefly
here. Given the set of components contained in T, this set can be subdivided into two
disjoint subsets as follows:
• IC is the set of components such that each of its components has all its functional
interfaces in T, e.g., the screw belongs to IC. Consequently, components
belonging to IC are entirely in T (see the green rectangle in Figure 6.9);
• PC is the set of components such that each of its components has some of its
functional interfaces in T, e.g., a plate belongs to PC. Components belonging to
PC are partially in T (see the red rectangle in Figure 6.9).
IC can be used to define a 3D sub-domain of T, TI defined as the union of all
components belonging to IC. Now, if a transformation ST takes place in IC and geometrically
lies inside TI , ST (T) is valid because it cannot create interferences with
other components of the assembly, i.e., CC are satisfied.
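A minimal sketch of this IC/PC partition is given below: a component belongs to IC when all the components it shares functional interfaces with are themselves in T, and to PC otherwise; any ST confined to IC then satisfies the CC by construction, whereas a transformation touching PC calls for an explicit interference check. The data layout and names are assumptions made for the example only.

def split_ic_pc(template_components, functional_interfaces):
    """Partition the components of a template T into IC and PC.

    functional_interfaces maps each component to the set of components it
    shares a functional interface with. An interface is 'in T' when both of
    its components belong to T.
    """
    in_t = set(template_components)
    ic, pc = set(), set()
    for comp in in_t:
        if functional_interfaces[comp] <= in_t:
            ic.add(comp)   # all its functional interfaces are inside T
        else:
            pc.add(comp)   # some interfaces reach components outside T
    return ic, pc

# Example (hypothetical): the screw and nut only interface with members of T,
# while the plates also interface with a component outside the bolted junction.
fi = {"screw": {"nut", "plate_A", "plate_B"},
      "nut": {"screw", "plate_B"},
      "plate_A": {"screw", "plate_B", "rib_7"},
      "plate_B": {"screw", "nut", "plate_A", "rib_7"}}
ic, pc = split_ic_pc({"screw", "nut", "plate_A", "plate_B"}, fi)
print(ic, pc)   # IC: screw, nut; PC: plate_A, plate_B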
Let us consider some of these transformations to illustrate some CC. As an objective
of FEA, the purpose of the assembly model is to analyze the stress distribution between
Functionally enriched DMU (CIG, functions, functional designations FD of screw, nut, locking nut, functional interfaces FI, conventional interfaces CI, components C_i . . . C_n) feeds a template T whose components are split into IC and PC. The automated geometric transformations are:
ST1: C' <- merge(C_i^screw, C_j^nut);
ST2: (C'_i, C'_j, . . .) <- subdivide(C_i, C_j, . . .);
ST3: remove(C_i^locking nut);
ST4: (C'_i, C'_j) <- transform(C_i^screw, C_j^screw);
ST5: (C'_i^screw, C'_j) <- preserve FI(C_i^screw, C_j);
producing the updated components (C'_i, C'_j, . . . , C'_m).
Legend: C_i: component i; CI: conventional interface; CIG: conventional interface graph; CC: compatibility conditions between T and ST; FD: functional designation of a component; FI: functional interface; IC: set of components such that each of its components has all its FI in T; PC: set of components such that each of its components has some of its FI in T; ST: shape transformation incorporated in a template, hence ST(T); T: function-based template performing shape transformations.
Figure 6.9: Principle of the template-based shape transformations. The superscript of a
component C, if it exists, identifies its functional designation.
Annotations: threaded link; pre-load pressure; friction; IC: counter nut, nut, screw; PC: plates; ST1 and ST3 (CC satisfied); ST2: domain decomposition (CC to verify); ST4 (CC to verify).
Figure 6.10: Compatibility conditions (CC) of shape transformations ST applied to T. (a)
through (e) are different configurations of CC addressed in Section 6.3.3.
plates and their interactions with bolts. To this end, the stress distribution around
the threaded link between a screw and a nut is not relevant. Therefore, one shape
transformation, ST 1, is the removal of the threaded link to merge the screw and the nut
(see Figure 6.10c). ST 1 is always compatible since the screw and the nut belong to IC,
hence the CC are always valid.
Now, let us consider another transformation, ST 2, that specifies the localization
of friction effects between plates around the screw shaft and the representation of
the stress distribution nearby the screw shaft. This is modeled with a circular area
centered on the screw axis and a cylindrical sub-domain around the screw shaft (see
Figure 6.10d). Indeed, ST 2 is a domain decomposition [CBG08, Cha12] taking place in
the plates belonging to T. Because the plates belong to PC, the CC are not trivial. However,
ST 2 takes place inside the plates, so the resulting sub-domains cannot interfere with other components;
rather, they can interfere with the boundary of the plates or with each other
when several screws are close to each other on the same plate (see Figure 6.11). In this
case, the CC can be simply expressed, in a first place, as a non-interference constraint.
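A sketch of this non-interference check is given below for the circular sub-domains of ST 2, assuming each sub-domain is a cylinder centered on its screw axis whose diameter is the user ratio times the screw diameter (see Figure 6.11), compared in the plane of the plate with a user meshing tolerance. The function and parameter names are illustrative assumptions.

import math

def sub_domains_interfere(center_1, center_2, d_screw_1, d_screw_2,
                          user_ratio, meshing_tolerance):
    """Check whether two ST2 sub-domains interfere in the plane of a plate.

    center_i: (x, y) position of screw axis i in the plate plane;
    sub-domain diameter = user_ratio * screw diameter (Figure 6.11).
    """
    r1 = 0.5 * user_ratio * d_screw_1
    r2 = 0.5 * user_ratio * d_screw_2
    dist = math.hypot(center_2[0] - center_1[0], center_2[1] - center_1[1])
    return dist < r1 + r2 + meshing_tolerance

# Two 12 mm screws 20 mm apart, sub-domain ratio 2.0, 1 mm meshing tolerance:
print(sub_domains_interfere((0.0, 0.0), (20.0, 0.0), 12.0, 12.0, 2.0, 1.0))  # True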
Other shape transformations are listed when describing one example template in
Section 6.3.4.
6.3.3.2 Shape transformations and function dependency
The previous section has connected T to TF N in the simplest way possible, i.e., using
a leaf that characterizes a single function. The purpose of this section is to analyze
into which extent T can connect to classes of TF N that perform several functions in
the assembly.
First, let us briefly review some concepts of functional analysis [Ull09].
Several categories of functions related to a design process are often referred to,
i.e., external, internal, auxiliary, . . . However, this classification does not convey consistency
conditions among these functions, especially from a geometric point of view. Here, the
current content of TF N refers to internal functions, i.e., functions strictly performed by
components of the assembly. The ‘screw+nut’ function, as part of bolted junctions, is
one of them. Bolted junctions can contain other functions. Let us consider the locking
function, i.e., the bolt is locked to avoid any loss of tension in the screw when the
components are subjected to vibrations. The locking process can take place either on
the screw or on the nut. For the purpose of the analysis, we consider here a locking
process on the nut, using a locking nut (see Figure 6.12a). In functional analysis,
this function is designated as an auxiliary function, but this concept does not characterize
the geometric properties of such functions.
From a geometric point of view, it can be observed that functional interfaces of
the screw, nut and locking nut are located in 3D such that the functional interfaces
(planar support) between the nut and locking nut cannot exist if the nut does not
tighten the plates. Consequently, the locking function cannot exist if the tightening
Annotations: interference vs. no interference; user meshing tolerance; Ø sub-domain = user ratio × Ø screw.
Figure 6.11: Checking the compatibility of ST (T) with respect to the surrounding geometry
of T.
function does not exist. Rather than using the designation of auxiliary function, which
is geometrically imprecise, the designation of dependent function is used here.
The concept of dependent functions is inserted at different levels of TF N to
attach the corresponding functions when they exist (see Figure 6.8). Based on the concept
of dependent function, it is possible to extend the connection rule between T and
TF N . Rather than connections at the leaf level, higher level classes can be connected to
T if the dependent functions are taken into account in the CC of shape transformations
ST so that ST (T) exists and preserves the consistency of the assembly. As an illustration,
let us consider T connected to ‘Bolted adjusted’ (see Figure 6.8). Now, ST can
cover the class of bolted junctions with a locking nut. Let ST 3 be the transformation
that removes the locking nut of a bolted junction, which also meets the FEA objectives
mentioned earlier. Because ST 3 applies to a dependent function of 'screw+nut', the
CC are always satisfied and the resulting model has a consistent layout of functional
interfaces, i.e., the removal of the locking nut cannot create new interfaces in the assembly
(see Figure 6.9 and 6.10b). Consequently, T can be effectively connected to
‘Bolted adjusted’, which is a generalization of T.
6.3.3.3 Template generation
T is generated on the basis of the components involved in its associated function in
TF N . T incorporates the objectives of the FEA to specify ST . Here, ST covers all
the transformations described previously, i.e., ST 1, ST 2, ST 3. Figures 6.10 and 6.11
illustrate the key elements of these shape transformations.
Other shape transformations, ST 4, can be defined to cover screw head transformations
and extend the range of screws to flat head ones. However, this may involve
Figure 6.12: (a) Multi-scale simulation with domain decomposition around bolted junctions,
(b) Load transfers at critical holes (courtesy of ROMMA project [ROM14]).
geometric transformations where the volume of a screw head gets larger. In this case,
ST 4 takes place in PC and the compatibility conditions are not intrinsic to T (see Figure
6.9). Consequently, it is mandatory to perform an interface/interference checking
with the other components of the assembly to make sure that the transformation is
valid (see Figure 6.10).
Then, the set of shape transformations structures the dialog with the user to allow
him, resp. her, to select some of these transformations. However, the user settings are
applied to instances whenever possible, i.e., when the instance belongs to a class where
the shape transformations are applicable.
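The dialog mentioned above can be summarized as a small template configuration in which each ST is flagged as pre-set or user-accessible and carries the TFN classes it applies to. The sketch below anticipates the settings used for the bolted-junction example of Section 6.3.4; the data structure itself is an illustrative assumption, not the implementation of the prototype.

from dataclasses import dataclass

@dataclass
class ShapeTransformation:
    name: str
    description: str
    preset: bool            # applied automatically when compatible
    user_selectable: bool   # exposed in the template dialog
    applies_to: set         # TFN classes the transformation is valid for

BOLTED_JUNCTION_TEMPLATE = [
    ShapeTransformation("ST1", "merge screw and nut", True, False, {"screw+nut", "bolted adjusted"}),
    ShapeTransformation("ST2", "friction sub-domain around screw", False, True, {"screw+nut", "bolted adjusted"}),
    ShapeTransformation("ST3", "remove locking nut", False, True, {"bolted adjusted"}),
    ShapeTransformation("ST4", "screw head transformation", True, False, {"screw+nut", "bolted adjusted"}),
    ShapeTransformation("ST5", "preserve cylindrical loose fit", True, False, {"screw+nut", "bolted adjusted"}),
]

def applicable(template, instance_class, user_choices):
    """Return the transformations to apply to one function instance."""
    return [st for st in template
            if instance_class in st.applies_to
            and (st.preset or st.name in user_choices)]

print([st.name for st in applicable(BOLTED_JUNCTION_TEMPLATE, "bolted adjusted", {"ST2", "ST3"})])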
6.3.4 Example of a template-based operator for bolted junction transformation
In an aeronautical company, simulation engineers perform specific FEAs on assembly
sub-structures such as the aircraft junction between wings and fuselage. Based on
pre-existing physical testing performed by ROMMA project [ROM14] partners, this
structure can be subjected to tensile and compressive forces to analyze:
• The distribution of the load transfer among the bolted junctions;
• The admissible extreme loads throughout this structure.
From the physical testing and preliminary numerical models, the following simulation
objectives have been set up that initiate the requirements for the proposed
template-based transformations. To adapt the FE model to these simulation objectives
while representing the physical behavior of the structure, an efficient domain decomposition
approach [CBG08, Cha12] uses a coarse mesh far enough from the bolted
junctions and a specific sub-domain around each bolted junction with friction and preload
phenomena (see Figure 6.12a, b). The objective is not to generate a detailed
stress distribution everywhere in this assembly but to observe the load distribution
areas among bolts using the mechanical models set in the sub-domains.
The objective of this section is to validate the methodology of Section 6.2 through
the template-based approach. The proposed demonstrator automatically transforms
the bolts into simplified sub-domains ready for meshing, with friction area definitions,
while preserving the consistency of the assembly. Consequently, there is no area of
weak interest. All the above transformations are aggregated into a parameterized template
whose input is the functional designation of components to locate the cap screws
in the assembly. Then, the template adapts to the screw dimensions, the number of
plates tightened, . . . , to apply the operators covering tasks 2 through 6. The template
features are aligned with the needs for setting up a simulation model able to exhibit
some of the physical phenomena observed during testing and expressed in the above
simulation results.
Operator description
Having enriched the assembly with functional information, the template interface
lets the engineer select a node of TF N that is compatible with T. In this example, the
function to select is: ‘assembly with Bolted junction’ (see Figure 6.14). Now, several
ST are either pre-set or user-accessible.
Figures 6.6a and 6.13 illustrate task 2 of the methodology. A hypothesis
focuses on the interfaces between screw shafts and plate holes: the clearances there are
regarded as small enough in the DMU to be reduced to a fitted configuration where
shafts and holes are set to the same nominal diameter to produce a conform mesh
with contact condition at these interfaces. To precisely monitor the stress distribution
around bolts and friction between plates, ST 2 is user-selected. It is a simplified model of
Rotscher's cone [Bic95] that enables generating a simple mesh pattern around bolts.
In this task also, hypothesizing that locking nuts, nuts, and screws can be reduced to
a single medium leads to the removal of assembly interfaces between them. ST 3 is
user-accessible and set here to remove the dependent function ‘locking with locking
nut’.
Then, there is no idealization taking place, hence no action in tasks 3 and 5. Task 4
connects to a hypothesis addressing the interfaces resulting from task 2, i.e., the interfaces
between plates need not model contact and friction over the whole interface.
Friction effects can be reduced to a circular area around each screw, which produces
a subdivision of these interfaces. ST 5 is pre-set in T to preserve the cylindrical loose
fit between screw and plates to set up contact friction BCs without inter-penetration
over these functional interfaces. ST 1 and ST 4 are also pre-set.
Finally, task 6 concentrates on skin and topological transformations. These are
achieved with locking nut, nut, and screw shape transformations. The latter is performed
on the functional interface (planar support) between the screw head/nut and
the plates to obtain a meshing process independent of nut and screw head shapes. Now,
T can cover any bolted junction to merge screw, nut, and locking nut into a single domain,
Annotations: Rotscher's cone; sub-domains; structured hex mesh; panels (a), (b), (c).
Figure 6.13: Template-based transformation ST (T) of a bolted junction into a simple mesh
model with friction and contact areas definition around screw and nut. (a) the bolted junction
in the DMU, (b) the bolted junction after simplification to define the simulation model, (c)
the desired FE mesh of the bolted junction.
reduce the screw and nut shapes to a simple shape of revolution while preserving
the consistency of its interfaces.
Based on T, ST (T) is fairly generic and parameterized to intelligently select and
transform bolts, i.e., it is independent of the number and thicknesses of plates, of the
screw diameter, the length and head type (cylindrical (see Figure 6.13a) versus flat
ones) in addition to the location of each bolt.
Here, ST (T) contains ST 2, a generation of sub-domains taking into account the
physical effects of Rotscher's cone. This geometric transformation could interact
with plate boundaries, change the shape of these sub-domains, and influence the mesh
generation process. Presently, templates are standalone entities and do not take these
effects into account; this is left for future developments. At present, the engineer can adjust
the sub-domain to avoid these interactions (see Figure 6.14a).
Implementation and results
The developed prototype is based on OpenCascade [CAS14] and the Python scripting
language. The DMU is imported as a STEP assembly model; the geometric interfaces
between components are represented as independent trimmed CAD faces with
identifiers of the initial face pairs of the functional interfaces. The assembly functional
description is imported as a text file from the specific application performing
the functional enrichment described in [SLF∗13] and linked to the assembly model by
component identifiers.
Figure 6.14a shows the user interface of the prototype. When selecting the ‘assembly
with Bolted Junction’ function, the user has a direct access to the list of bolted
junctions in the assembly. To allow the user to filter his/her selection, DMU parameters
are extracted from the functional designation of components, e.g., the screw and nut
types, the number of tightened components, or from geometry processing based on
functional interfaces, e.g., screw diameter. Using these parameters, the user is able
to select bolted junctions within a diameter range, e.g., between 10 and 16 mm (see
Figure 6.14b) or bolted junctions with screw and locking nut (see Figure 6.14c), etc.
The user can monitor the Rotscher's cone dimension with an FEA parameter called 'sub-domain
ratio' that represents the ratio between the screw nominal diameter and the
sub-domain diameter (see Figures 6.14d and e). Then, the user-defined 'meshing tolerance'
is used during the verification phase to check the compatibility conditions, CC,
between instances and their surrounding geometry (see Figures 6.9 and 6.11).
Figure 6.15 shows two results of the template-based transformations on aircraft
structures:
Aircraft structure 1: A junction between the wing and the fuselage. The assembly
contains 45 bolted junctions with 3 different diameters and 2
different screw heads;
Aircraft structure 2: An engine pylon. The assembly contains over 250 bolted junctions
with identical screws and nuts.
The final CAD assembly (see Figure 6.14b) with simplified bolted junctions has
been exported to CAE software, i.e., Abaqus [FEA14]. STEP files [ISO94, ISO03]
transfer the geometric model and associated XML files describe the interfaces between
components to trigger meshing strategies with friction area definitions. Appendix D
illustrates the STEP data structure used to transfer Aircraft structure 1.
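The companion file can be as small as an XML list of interfaces, each carrying the two component identifiers, the interface type, and whether a friction area must be generated at meshing time. The sketch below uses Python's standard xml.etree module; the tag and attribute names are illustrative assumptions and do not reproduce the actual schema shown in Appendix D.

import xml.etree.ElementTree as ET

def write_interface_file(interfaces, path):
    """Write a minimal XML description of assembly interfaces.

    interfaces: iterable of dicts with component identifiers, interface type,
    and whether a friction area has to be defined at meshing time.
    """
    root = ET.Element("assembly_interfaces")
    for itf in interfaces:
        ET.SubElement(root, "interface",
                      comp_a=itf["comp_a"], comp_b=itf["comp_b"],
                      type=itf["type"],
                      friction_area=str(itf["friction_area"]).lower())
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_interface_file(
    [{"comp_a": "plate_A", "comp_b": "plate_B", "type": "planar contact", "friction_area": True},
     {"comp_a": "screw_1", "comp_b": "plate_A", "type": "cylindrical fit", "friction_area": False}],
    "aircraft_structure_1_interfaces.xml")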
Compared with the process pipeline used with existing industrial software (see
Section 1.4.3), the improvements are as follows. The model preparation from CAD
software to an Abaqus simulation model takes 5 days of interactive work for ‘Aircraft
structure 1’ mentioned above (see Figure 1.13b). Using the pipeline performing the
functional enrichment of the DMU and the proposed template-based shape transformations
to directly produce the meshable model in Abaqus and perform the mesh in
Abaqus, the overall time is reduced to one hour. This model conforms
to the preliminary numerical models set up in the ROMMA project [ROM14], and
extending this conformity to testing results is ongoing since the template enables easy
adjustment of the mesh model.
Regarding the ‘Aircraft structure 2’, there is no reference evaluation of its model
preparation time from CAD to mesh generation because it is considered too complex
to fit into the current industrial PDP. However, it is possible to estimate the time
reduction since the interactive time can be linearly scaled according to the number
of bolted junctions. This amounts to an estimated 25 days of interactive work compared to 1.5 hours
with the proposed approach where the time is mostly devoted to the mesh generation
phase rather than the template-based transformations.
Though the automation is now very high, the template-based approach still leaves
the engineer with meaningful parameters enabling him/her to adapt the shape transformations
to subsets of bolted junctions when this is part of the FE requirements.
Figure 6.14: (a) User interface of a template to transform ‘assembly Bolted Junctions’. Results
obtained when filtering bolts based on diameters (b) or screw type (c). Results of the
template-based transformations with (d) or without (e) sub-domains around bolts.
Annotations: Aircraft structure 1; Aircraft structure 2.
Figure 6.15: Results of template-based transformations on CAD assembly models: (a) CAD
models with functional designations and geometric interfaces, (b) models (a) after applying
ST (T) on bolts, (c) mesh assembly models obtained from (a) with friction area definition.
Process flow: DMU (geometric interfaces, functional information, user simulation objectives) → assembly analysis for idealization (CAD interfaces, interfaces graph) → geometric operations (fastener simplification, volume segmentation, component idealization, assembly idealization) → idealized assembly (idealized CAD assembly ready for meshing, boundary conditions (contact), physical information (thickness, offset)) → simulation.
Figure 6.16: Illustration of the idealization process of a CAD assembly model. All components
are fully idealized and bolted junctions are represented with FE fasteners. Solid plates and
stiffeners are idealized as surfaces.
6.4 Full and robust idealization of an enriched assembly
The methodology of Section 6.2.2 has also been applied to create an idealized plate
model of the ‘Root joint’ use-case presented in Figure 1.6. The simulation objectives
are set on the global analysis of the stress field in the structure and the analysis
of the maximal loads transferred through the bolted junctions. Consequently (see
Section 1.4.3), the generated FEM contains idealized components with shell FE and
each junction can be simplified with a fastener model (see Figure 6.17). Figure 6.16
illustrates the different data and processes used to transform the initial DMU model into
the final FE mesh model. Once the CAD data associated with functional interfaces
have been imported, all bolted connections are transformed into simplified fastener
models (see Section 6.4.1) in a first step. Then, a second step segments and idealizes all
components in accordance with the method described in Chapter 5 (see Section 6.4.2).
Annotations: plates A, B, C. Hypothesis: bolted junctions represented by fastener models. Geometric transformation: each cylindrical interface is transformed into a single mesh node.
Figure 6.17: Illustration of Task 2: Transformation of bolted junction interfaces into mesh
nodes.
Annotation: fastener connection points.
Figure 6.18: Results of the template-based transformation of bolted junctions. The blue
segment defines the screw axis. Red segments are projections of interfaces between the plates
and the screw. Yellow points are the idealizations of these interfaces to connect the screw to
the plates.
6.4.1 Extension of the template approach to idealized fastener
generation
Given the above simulation objectives, the first step of transformations is related to the
transfer of plate loads throughout the assembly, and FE beam elements are sufficient
to model the bolts' behavior. This hypothesis implies, in task 2 of the methodology, a
transformation of cylindrical interfaces between bolts and plate holes into single mesh
nodes (see Figure 6.17a) linked to beam elements that represent the idealized fasteners.
A specific template-based operator has been developed to automate the transformation
of bolted junctions into idealized fasteners. As in Section 6.3, the bolted junctions
are identified among all 3D components through the function they are involved in:
‘adjusted bolted junctions with screw+nut’. Then, the template applies a set of shape
transformations ST to generate the beam elements with their associated connection
points. These shape transformations are described as follows:
• ST 1: merging screw and nut (identical to Section 6.3.3.1);
• ST 2: removal of the locking nut if it exists (identical to Section 6.3.3.2);
• ST 3: screw transformation into beam elements (see Figure 6.18). FE beam elements
are represented by line segments. The axis of the screw is used as location
for the line segments;
• ST 4: transfer of the interfaces between plates and screw as points on the line segments
(see the sketch after this list). The blue part in Figure 6.18 represents the whole screw,
the red parts are the projections of the interfaces between the plates and the screw,
while the yellow points represent the idealization of these interfaces and define the
connections between each plate and the screw;
• ST 5: reduce junction holes in plates to single points. The fastener model used to
represent bolted junctions does not represent holes in the final idealized model
(see Figure 6.19).
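As announced in ST 4 above, the connection points can be obtained by projecting the center of each plate/screw cylindrical interface orthogonally onto the screw axis, which itself becomes the FE beam line segment (ST 3). The sketch below uses plain Python vectors; all names and coordinates are illustrative assumptions.

def project_on_axis(axis_origin, axis_direction, point):
    """Orthogonal projection of a 3D point onto the screw axis (ST 4)."""
    ox, oy, oz = axis_origin
    dx, dy, dz = axis_direction            # assumed to be a unit vector
    px, py, pz = point
    t = (px - ox) * dx + (py - oy) * dy + (pz - oz) * dz
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Screw axis along z (ST 3: the axis becomes the FE beam line segment);
# each plate/screw interface is summarized by the center of its cylindrical face.
axis_origin, axis_dir = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
interface_centers = {"plate_A": (0.1, 0.0, 4.0), "plate_B": (0.0, 0.1, 10.0)}

connection_points = {plate: project_on_axis(axis_origin, axis_dir, c)
                     for plate, c in interface_centers.items()}
print(connection_points)   # one connection point per tightened plate (yellow points)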
Different idealizations, i.e., the set of ST , can be generated to match different
simulation objectives. For instance, one ST can focus on the screw stiffness in addition
to the current simulation objectives. To this end, a new objective can be set up
to compare the mechanical behavior when screws are modeled as beams and when
they are perfectly rigid. Consequently, the screws must now be represented as rigid
bodies. This means that the blue part (see Figure 6.18) representing the FE beam
element is no longer needed. Then, the yellow points (see Figure 6.18) can be used
directly to generate the mesh connection points in the final medial surfaces of the plate
components. These points can be used to set kinematic constraints.
Indeed, the list of previous geometric operations describes a new category of shape
transformation, ST i, that would be needed to meet this new simulation objective. Because
these transformations are close to the templates described above, this shows how the
principle of the template-based transformations can be extended to be adapted to new
simulation objectives using additional elementary transformations. Here, ST 3 would
be replaced by ST i that would produce the key points needed to express the kinematic
constraints.
Now that the bolted junctions have been simplified into FE fasteners, the next
section illustrates the idealization of the whole assembly.
6.4.2 Presentation of a prototype dedicated to the generation
of idealized assemblies
In order to generalize the idealization approach presented in Chapter 5, a prototype has
been developed to process not only components but also whole assemblies. Like the
template demonstrator, the prototype is based on OpenCascade [CAS14] and the Python
scripting language. The CAD assembly as well as the geometric interfaces are imported
as STEP models.
Figure 6.19 illustrates the user interface of the prototype. Here, the 3D viewer shows
the result of task 2 where the bolted junctions of the CAD assembly have been transformed
into simple fasteners using the template-based approach. The interface graph
in the graph tab shows all the neighboring relationships between assembly components
(including the fasteners).
Figure 6.20 illustrates task 3 where the assembly components are segmented into
sub-domains according to shape primitives organized into a construction graph. The
set of solid primitives in red is extracted using the algorithm described in Section 4.5.2.
The primitives are then removed from the initial solid to obtain a simpler component
shape. Once the construction graph is generated, the user selects a construction process
which creates a component segmentation into volume sub-domains. Then, each
sub-domain is idealized wherever the primitive extent versus its thickness satisfies the
idealization criterion. The interfaces resulting from this idealization can be associated
with new transformations of assembly interfaces, e.g., a group of parallel idealized
surfaces linked to the same assembly interface can be aligned and connected. The analysis
of interactions between independently idealized sub-domains can guide geometric
transformations such as sub-domain offsets and connections. These transformations
are part of task 4. Figure 6.21 illustrates an intermediate result of the idealization
process of a component (task 5). The graph of primitives’ interfaces has been analyzed
in task 4 to identify and align groups of parallel medial surfaces. For example, the
medial surface highlighted in brown is offset by 2.9 mm from its original position. The
medial surfaces are then connected with the operator described in Section 5.5.3. The
result is a fully idealized representation of the component.
Finally, other idealized components are incorporated in the idealized assembly
Annotations (UI areas): list of interfaces; list of components; 3D viewer; output/input console; graph of interfaces.
Figure 6.19: User interface of the prototype for assembly idealization. The 3D viewer shows
the assembly after the transformation of bolted junctions into FE fasteners.
Annotations (UI areas): configurations of the component's segmentation; 3D visualization of a set of primitives to be removed from the solid; configuration of the primitive extraction algorithm.
Figure 6.20: Illustration of a component segmentation which extracts extruded volumes to be
idealized in task 3. The primitives to be removed from the initial solid are highlighted in red.
Annotations (UI areas): list of interfaces between primitives; list of the component's primitives; 3D visualization of the component idealization; primitive attributes; graph of interfaces between primitives.
Figure 6.21: Illustration of task 4: Identification and transformation of groups of idealized
surfaces connected to the same assembly interfaces.
model using complementary sub-domain transformations applied to each of them as
illustrated in Figure 6.22.
Annotations (UI areas): list of idealized components and fasteners; 3D visualization of the assembly idealization.
Figure 6.22: Final result of the idealized assembly model ready to be meshed in CAE software.
Again, functional information about components and successive decomposition into
sub-domains, as well as the idealization processes, reduce the preparation process to minutes:
approximately ten minutes to process all the components, including all the
user interactions required to load each component, select the appropriate template, or
process the components subjected to the segmentation process and the morphological
analysis. This is a significant time reduction compared to the days required when
performing the same transformations interactively with the tedious low-level operators
existing in current CAE software.
Yet, the constraints related to mesh generation, such as mesh size constraints, have
not been taken into account in the current analysis of preparation processes. These
constraints have to be addressed in future research work. It should also be noted
that the process flow of Figure 6.5 turns into a sequential flow in all the simulation
frameworks illustrated.
6.5 Conclusion
In this chapter, dependencies between categories of shape transformations have been
organized to structure the assembly simulation model preparation process in terms
of methodology and scope of shape transformation operators. The proposed methodology
empowers the use of DMUs enriched with geometric interfaces and functional
information to automate CAD assembly pre-processing and to generate an 'FE-friendly'
equivalent of this assembly.
The template-based method has shown that shape transformations highly benefit
from functional information. Using functional information strengthens the transformation
of complex models like assemblies where many components interact with each
other. The template can be instantiated over the whole assembly to quickly transform
repetitive configurations such as bolted junctions that are highly time-consuming when
processed purely interactively.
The idealization method introduces a robust geometric operator of assembly idealization.
This operator takes advantage of the assembly decomposition into sub-domains
and their associated geometric interfaces as produced in Chapter 5. This structure has
been used successfully to idealize sub-domains and address some general mesh generation
constraints to ensure that high-quality meshes are obtained.
Finally, a demonstrator has been implemented to prove the validity of the proposed methodology. This prototype has been applied to an industrial use-case proposed in the ROMMA project [ROM14] to create a simplified solid model and an idealized, surface-based model, using the operators currently developed. This use-case has demonstrated the benefits of the proposed methodology to:
1. Efficiently process real 3D assemblies extracted from a large DMU;
2. Enable the implementation of a robust approach to monitor automated shape
transformations.
Thanks to this methodology, the preparation time can be drastically shortened
compared to purely interactive processes as commonly practiced by today’s engineers.
Conclusion and perspectives
Assemblies, as sets of components, bring a new complexity level for CAD-FEA data
processing of mechanical structures. Here, new principles and associated operators have
been developed to automate the adaptation of CAD assembly models. The objective targeted robust transformations to process the large number of repetitive geometric configurations of complex assemblies and to reduce the time spent by engineers to prepare these assemblies.
Summary of conclusions
Now, each of the contributions stated in the previous chapters can be synthesized and
summarized.
In-depth analysis of FE pre-processing rules
The first contribution lies in the analysis of the current pre-processing of CAD models derived from DMUs to produce FE models. Due to the lack of assembly-related information in a DMU, very tedious tasks are required to process the large number of components as well as their connections. Preparing each component is already a tedious task, especially when idealizations are necessary, and this burden increases significantly with the number of components and their interfaces. Additionally, these interfaces form new entities to be processed. It has been observed that repetitive configurations and their processing are also an issue of assembly preparation, justifying the need to automate the preparation of large assembly models. This first analysis has concluded that the adaptation of an assembly to FEA requirements and its geometric transformations derive from simulation objectives, and that component functions are also needed to geometrically transform groups of components. Also, it has been shown that functional information can be an efficient enrichment of a DMU to identify and process repetitive configurations.
Challenging assembly model preparation
Studying the current CAD-FEA methods and tools related to data integration reveals that currently available operators focus on the transformation of standalone components. One main contribution of this thesis is the proposal of an approach
to assembly model preparation. Rather than reducing the preparation process to a
sequence of separately prepared parts, the entire assembly has been considered when
specifying shape transformations to reach simulation objectives, taking into account the
kinematics and physics associated with assembly interfaces. The proposed approach
to assembly pre-processing uses, as input model, a DMU enriched at an assembly level
with interface geometry between components, additional functional properties of these
components, and, at the component level, a structured volume segmentation using a
graph structure.
Geometrically enriched components
A new approach has been proposed to decompose a B-Rep solid into volume subdomains.
This approach robustly enriches a CAD component using a construction
graph and provides a volume decomposition of each component shape. This construction
graph is generated by iteratively identifying and removing a set of extrusion primitives
from the current B-Rep shape. It has been shown that, compared to any initial
construction process performed by a designer, the extracted graph is unique for a given
object and is intrinsic to its shape because it overcomes modeling, surface decomposition,
and topological constraints. In addition, it provides non-trivial construction trees, i.e., variants of extrusion directions producing the same primitive are not represented and variants in primitive ordering are grouped into a single node. This generates a
compact representation of a large set of shape construction processes. Moreover, the
proposed approach, while enriching a standalone component shape, can be extended
to assembly structures after they have been enriched with component interfaces. Each
component construction graph can be nested into the component-interface assembly
structure, thus forming a robust data structure for CAD-FEA transformation processes.
Finally, a graph of generative processes of a B-Rep component is a promising
basis to gain a better insight about a shape structure. The criteria used to generate
this graph bring meaningful and simple primitives which can be subsequently used to
support the idealization process of component shapes.
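To make the iterative scheme behind this construction graph more concrete, the following minimal sketch illustrates its control flow only. It is not the operator implemented in this thesis: the callbacks find_extrusion_primitives and remove_primitive are hypothetical placeholders standing for the B-Rep reasoning described earlier, and only the overall extraction loop is shown.

def extract_construction_graph(solid, find_extrusion_primitives, remove_primitive):
    """Iteratively peel extrusion primitives off a B-Rep solid (sketch only).

    The two callbacks are hypothetical stand-ins for the geometric reasoning
    of the actual operator; this function only records which primitives are
    found at each extraction step.
    """
    graph = {"nodes": [], "interfaces": []}   # primitives and, in the full approach, their interfaces
    step = 0
    while True:
        primitives = find_extrusion_primitives(solid)
        if not primitives:                    # no extrusion primitive left: the loop terminates
            break
        for p in primitives:
            graph["nodes"].append({"primitive": p, "step": step})
            solid = remove_primitive(solid, p)  # simpler intermediate solid for the next pass
        step += 1
    return graph

In the actual approach, the interfaces between the extracted primitives are also recorded, which is what the component-interface assembly structure mentioned above builds on.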
Formalizing a shape idealization process through a morphological analysis
It has been shown that generating a fully idealized model that is a mechanical equivalent of the initial component cannot be reduced to the pure application of a dimensional reduction operator. The incorporation of idealization hypotheses
requires the identification of candidate geometric areas associated with the connections
between idealized sub-domains. The proposed idealization process benefits from the
new enrichment of components with their shape structure. The segmentation of the
component into meaningful primitives and interfaces between them has been used as a
first step of a morphological analysis. This analysis evaluates each primitive with respect to its dominant idealized shape. Then, using a taxonomy of geometric interfaces
between idealized sub-domains, this analysis is propagated over the whole component
and results in the decomposition of its shape into 'idealizable' areas of type 'plate/shell' or 'beam' and 'non-idealizable' areas. Overall, the morphological analysis is independent of any resolution method and is able to characterize geometric details in relation to local and 'idealizable' regions of a component. Finally, an idealization operator
has been developed which transforms the sub-domains into medial surfaces/lines and
robustly connects them using the precise geometric definition of interfaces between
primitives.
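As a toy illustration of the dimensional reduction step for a plate-like extrusion primitive, the mid-surface can be obtained by translating the base contour by half the extrusion distance along the extrusion direction. This is a minimal sketch, not the idealization operator of this thesis; the ExtrusionPrimitive type is introduced here only for the example.

from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ExtrusionPrimitive:
    base_contour: List[Vec3]  # ordered points of the base face contour
    direction: Vec3           # unit extrusion direction
    distance: float           # extrusion distance, i.e., the plate thickness

def mid_surface_contour(p: ExtrusionPrimitive) -> List[Vec3]:
    """Contour of the idealized mid-surface of a plate-like extrusion primitive."""
    dx, dy, dz = (c * 0.5 * p.distance for c in p.direction)
    return [(x + dx, y + dy, z + dz) for (x, y, z) in p.base_contour]

# Example: a 100 x 40 x 2 plate extruded along z is idealized at z = 1.
plate = ExtrusionPrimitive([(0, 0, 0), (100, 0, 0), (100, 40, 0), (0, 40, 0)], (0, 0, 1), 2.0)
print(mid_surface_contour(plate))

Connecting the mid-surfaces of adjacent primitives then relies on the geometric definition of their interfaces, as stated above.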
Laying down the basis of a methodology for assembly preparation
To address the current industrial needs about assembly pre-processing for structural
simulation, the analysis of dependencies between geometric transformations, simulation
objectives, and simplification hypotheses led to a first methodology increasing the
level of automation of FE assembly model pre-processing. Using an enriched DMU containing geometric interfaces between components and their primitives as well as functional information, and ending up with the generation of a 'FE-friendly' equivalence of the assembly, the methodology is in line with the industrial need to develop a new generation of DMU: the Functional DMU. Finally, the development of a prototype plat-
has illustrated that the methodology fits well with the methods and tools proposed
in this thesis. The template-based transformation, empowering the use of functional
information, has illustrated how repetitive configurations, such as assembly junctions,
can be automatically transformed. Then, the generation of the complete idealization
of an aeronautical structure has demonstrated the ability of the proposed idealization
approach to efficiently process CAD assemblies extracted from a large DMU.
As a final conclusion, compared to the purely geometric operators currently available in CAD-FEA integration, this thesis has proposed an approach based on a shape analysis of an enriched DMU model that significantly shortens the time commonly spent by today's engineers and robustly performs repetitive idealization transformations of components as well as assemblies.
Research perspectives
From the proposed approach of DMU pre-processing for structural assembly simulation,
future work can extend and build further on the methods and tools described
in this thesis. The perspectives presented in this section refer to the generation of construction graphs of B-Rep shapes and to the morphological analysis of DMU models.
Construction graph
Regarding the generation of construction graphs from B-Rep shapes, perspectives
are listed as follows:
• Extend the definition of primitive to include material removal as well
as additional operations (revolution, sweep,. . . ). In a first step, to reduce
the complexity of this research work, the choice has been made to concentrate
the extraction of generative processes on extrusion primitives. Primitives are
combined solely using a material addition operator. Clearly, future work will focus
on incorporating material removal operations and revolutions to extend the
range of objects that can be processed. Allowing new definitions of primitives may increase the number of primitives. However, the construction graph can become even more compact. Indeed, groups of extrusion primitives can be replaced
by a unique revolution, or a sweeping primitive in the construction graph. To
reduce the complexity of the dimensional reduction of primitives, the presented
idealization process favored primitives adding material over primitives removing material. Including primitives which remove material can be convenient
for other applications, e.g., to simplify components’ shapes for 3D simulation or
to identify cavities in components for computational fluid simulations.
• Extend the attachment condition of primitives. Regarding the attachment
of a primitive to an object, it has been shown that there are clear benefits in not constraining the primitive identification process with attachment conditions and in not prioritizing primitives with geometric criteria such as the largest visible boundaries within the object. Identifying primitives without restriction on their 'visible' boundaries is a way to release this constraint. However, to validate the major concepts of the proposed approach, two restrictions have been set on the primitive definition: the extrusion distance had to be represented by a lateral edge, and one of the primitive's base faces had to be totally 'visible'. A future target is the generalization of the primitive definition to enlarge the number of valid primitives and hence produce a more generic algorithm;
• Reduce the interaction between primitives. Currently, the computation
time is highly dependent on the number of extracted primitives, which are compared with each other. To reduce the complexity of the algorithm, future work
may integrate the identification of repetitions and symmetries [LFLT14]. Global symmetries or repetitions, e.g., reflective symmetries valid at each point of an object, may directly reduce the extent of a shape being analyzed, but partial symmetries and repetitions are more frequently encountered and are more efficient to identify specific relationships between primitives. Partial symmetries and repetitions initiated by the location of identical primitives convey a strong meaning
from a shape point of view. They can be used after the extraction of primitives
to generate groups of symmetrical/repetitive primitives, or even before that stage to help identify primitives, e.g., by selecting a set of base faces sharing the same plane and the same orientation. Finally, symmetries and repetitions are very relevant to structure an idealized model and to propagate this shape structure information across the mesh generation phase.
• Further applications of construction graphs. A construction graph structures
the shape of a B-Rep object independently of any CAD modeler. Applied to hex-meshing, an intrinsic segmentation of a shape into extrusion primitives, extracted from its construction graph, can be highly beneficial. Indeed, it directly provides
simple meshable volumes. Moreover, the complementary information about the
connections between primitive interfaces can help to generate a complete component
3D mesh. Applied to 3D direct modeling CAD software, this intrinsic shape structure can be used to significantly extend this approach with larger shape modifications as well as parametrization capabilities. Because primitives are geometrically
independent of each other, the parametrization of a primitive can be
directly related to the object shape, i.e., the influence of the shape modification
of a primitive can be identified through the interface of this primitive with the
object.
Morphological analysis of assemblies
The morphological analysis method of Chapter 5 has been presented as a preliminary
step for dimensional reduction operations. The perspectives related to this
morphological analysis are as follows:
• Extend the taxonomy of reference morphologies. The determination of
idealizable volume sub-domains in a component is based on a taxonomy of morphologies.
Each morphology is associated with one medial edge of the MAT
applied to the extrusion contour of a primitive. Clearly, this taxonomy is not
complete. Only morphologies associated with straight medial edges with constant radius have been studied. To enable the processing of a larger range of
component shapes, this taxonomy can be extended, in a first step, to curved
edges, with or without radius variation, and, in a second step, to other types of
primitives (revolution, sweeping, . . . );
• Extend the taxonomy of connections. Regarding the propagation of the
morphological analysis of primitives to the whole object, the current taxonomy
of connections covers extrusion sub-domains to be idealized with planar medial
surfaces only. The detailed study of these configurations has demonstrated the robustness of the proposed approach. Here too, this taxonomy can be enlarged
to process beams, shells, or thick domains. In addition, the taxonomy of
connections is currently restricted to couples of sub-domains. In case of groups
of connected sub-domains, a new level of morphology may emerge, e.g., a set of
piled up thin extrusion primitives forming a beam. Analyzing and formalizing
this new taxonomy of connections between sub-domains will enlarge the shape
configurations which can be processed;
• Extend the approach to the morphological analysis of assemblies. Although the construction graph structure (primitives/interfaces) is compatible with assembly structures (components/interfaces), the morphological analysis has been applied to standalone components. When the input model is an assembly structure, the assembly interfaces bring a new level of information. The influence of assembly interfaces on the established taxonomies has to be studied to extend the morphological analysis to assemblies. For example, on large assembly models, a group of components can be viewed as a unique morphology. Propagating the morphological analysis of components to the whole assembly will give the user complete access to multi-resolution morphological levels of a DMU.
Finally, we can mention the report from a 2013 ASME Panel on Geometric Interoperability for Advanced Manufacturing [SS14]. The panelists involved had considerable experience in the use of component geometry throughout various design and manufacturing software. They stated that current CAD systems have hit a hard limit with the representation of 3D products. They came to the same conclusions that we have highlighted about the need for better interoperability of geometric models and design systems with current DMUs. The approaches proposed in this thesis, based on the construction graph and the morphological analysis of assemblies, offer new opportunities to adapt a model to the needs of the different applications involved in a product development process.
Bibliography
[AA96] Aichholzer O., Aurenhammer F.: Straight skeletons for general
polygonal figures in the plane. In Computing and Combinatorics, Cai
J.-Y., Wong C., (Eds.), vol. 1090. Springer Berlin Heidelberg, 1996,
pp. 117–126. 57
[AAAG96] Aichholzer O., Aurenhammer F., Alberts D., Gärtner B.:
A novel type of skeleton for polygons. In J.UCS The Journal of Universal
Computer Science, Maurer H., Calude C., Salomaa A., (Eds.).
Springer Berlin Heidelberg, 1996, pp. 752–761. 57
[ABA02] Andujar C., Brunet P., Ayala D.: Topology-reducing surface
simplification using a discrete solid representation. ACM Trans.
Graph.. Vol. 21, Num. 2 (April 2002), 88–105. 39, 44, 47, 64
[ABD∗98] Armstrong C. G., Bridgett S. J., Donaghy R. J., Mccune
R. W., Mckeag R. M., Robinson D. J.: Techniques for interactive
and automatic idealisation of CAD models. In Proceedings
of the Sixth International Conference on Numerical Grid Generation
in Computational Field Simulations (Mississippi State, Mississippi,
39762 USA, 1998), NSF Engineering Research Center for Computational
Field Simulation, ISGG, pp. 643–662. 35, 53
[ACK01] Amenta N., Choi S., Kolluri R. K.: The power crust, unions of
balls, and the medial axis transform. Computational Geometry. Vol.
19, Num. 2 (2001), 127–153. 54
[AFS06] Attene M., Falcidieno B., Spagnuolo M.: Hierarchical mesh
segmentation based on fitting primitives. The Visual Computer. Vol.
22, Num. 3 (2006), 181–193. 59
[AKJ01] Anderson D., Kim Y. S., Joshi S.: A discourse on geometric
feature recognition from cad models. J Comput Inform Sci Eng. Vol.
1, Num. 1 (2001), 440–746. 51
[AKM∗06] Attene M., Katz S., Mortara M., Patane G., Spagnuolo ´
M., Tal A.: Mesh segmentation-a comparative study. In Shape
Modeling and Applications, 2006. SMI 2006. IEEE International Conference
on (2006), IEEE, pp. 7–7. 59
[ALC08] ALCAS: Advanced low-cost aircraft structures project. http://
alcas.twisoftware.com/, 2005 – 2008. xv, 18
[AMP∗02] Armstrong C. G., Monaghan D. J., Price M. A., Ou H.,
Lamont J.: Engineering computational technology. Civil-Comp
press, Edinburgh, UK, UK, 2002, ch. Integrating CAE Concepts with
CAD Geometry, pp. 75–104. 35
[Arm94] Armstrong C. G.: Modelling requirements for finite-element analysis.
Computer-aided design. Vol. 26, Num. 7 (1994), 573–578. xvi,
48, 49, 132
[ARM∗95] Armstrong C., Robinson D., McKeag R., Li T., Bridgett
S., Donaghy R., McGleenan C.: Medials for meshing and more.
In Proceedings of the 4th International Meshing Roundtable (1995).
48, 49, 53
[B∗67] Blum H., et al.: A transformation for extracting new descriptors
of shape. Models for the perception of speech and visual form. Vol. 19,
Num. 5 (1967), 362–380. 48
[Bad11] Badin J.: Ingénierie hautement productive et collaborative à base de connaissances métier: vers une méthodologie et un méta-modèle de gestion des connaissances en configurations. PhD thesis, Belfort-Montbéliard,
2011. 44
[Bat96] Bathe K.-J.: Finite element procedures, vol. 2. Englewood Cliffs,
NJ: Prentice-Hall, 1996. 24
[BBB00] Belaziz M., Bouras A., Brun J.-M.: Morphological analysis
for product design. Computer-Aided Design. Vol. 32, Num. 5 (2000),
377–388. 63
[BBT08] Bellenger E., Benhafid Y., Troussier N.: Framework for
controlled cost and quality of assumptions in finite element analysis.
Finite Elements in Analysis and Design. Vol. 45, Num. 1 (2008), 25–
36. 44
[BC04] Buchele S. F., Crawford R. H.: Three-dimensional halfspace
constructive solid geometry tree construction from implicit boundary
representations. CAD. Vol. 36 (2004), 1063–1073. 62, 63, 107
[BCGM11] Badin J., Chamoret D., Gomes S., Monticolo D.: Knowledge
configuration management for product design and numerical simulation.
In Proceedings of the 18th International Conference on Engineering
Design (ICED11), Vol. 6 (2011), pp. 161–172. 44
[BCL12] Ba W., Cao L., Liu J.: Research on 3d medial axis transform via
the saddle point programming method. Computer-Aided Design. Vol.
44, Num. 12 (2012), 1161 – 1172. 54
[BEGV08] Barequet G., Eppstein D., Goodrich M. T., Vaxman A.:
Straight skeletons of three-dimensional polyhedra. In Algorithms-ESA
2008. Springer, 2008, pp. 148–160. 57
[Bic95] Bickford J.: An Introduction to the Design and Behavior of Bolted
Joints, Revised and Expanded, vol. 97. CRC press, 1995. 178
[BKR∗12] Barbau R., Krima S., Rachuri S., Narayanan A., Fiorentini
X., Foufou S., Sriram R. D.: Ontostep: Enriching product
model data using ontologies. Computer-Aided Design. Vol. 44, Num.
6 (2012), 575–590. 27
[BLHF12] Boussuge F., Leon J.-C., Hahmann S., Fine L. ´ : An analysis
of DMU transformation requirements for structural assembly simulations.
In The Eighth International Conference on Engineering Computational
Technology (Dubronik, Croatie, 2012), B.H.V. Topping, Civil
Comp Press. 65, 82
[BLHF14a] Boussuge F., Leon J.-C., Hahmann S., Fine L.: Extraction of
generative processes from b-rep shapes and application to idealization
transformations. Computer-Aided Design. Vol. 46, Num. 0 (2014), 79
– 89. 2013 {SIAM} Conference on Geometric and Physical Modeling.
86
[BLHF14b] Boussuge F., Leon J.-C., Hahmann S., Fine L.: Idealized models
for fea derived from generative modeling processes based on extrusion
primitives. In Proceedings of the 22nd International Meshing
Roundtable (2014), Sarrate J., Staten M., (Eds.), Springer International
Publishing, pp. 127–145. 86
[BLNF07] Bellec J., Ladeveze P., N ` eron D., Florentin E. ´ : Robust
computation for stochastic problems with contacts. In International
Conference on Adaptive Modeling and Simulation (2007). 82
[BR78] Babuška I., Rheinboldt W. C.: Error estimates for adaptive
finite element computations. SIAM Journal on Numerical Analysis.
Vol. 15, Num. 4 (1978), 736–754. 46
[BS96] Butlin G., Stops C.: Cad data repair. In Proceedings of the 5th
International Meshing Roundtable (1996), pp. 7–12. 47
[BSL∗14] Boussuge F., Shahwan A., Leon J.-C., Hahmann S., Foucault
G., Fine L.: Template-based geometric transformations of a functionally
enriched dmu into fe assembly models. Computer-Aided Design
and Applications. Vol. 11, Num. 4 (2014), 436–449. 168
[BWS03] Beall M. W., Walsh J., Shephard M. S.: Accessing cad geometry
for mesh generation. In IMR (2003), pp. 33–42. 47
[CAS14] CASCADE O.: Open cascade technology, version 6.7.0 [computer
software]. http://www.opencascade.org/, 1990 – 2014. 12, 112, 179,
186
[CAT14] CATIA D.: Dassault syst`emes catia, version v6r2013x [computer
software]. http://www.3ds.com/products-services/catia/, 2008
– 2014. 89, I, IX
[CBG08] Champaney L., Boucard P.-A., Guinard S.: Adaptive multianalysis
strategy for contact problems with friction. Computational
Mechanics. Vol. 42, Num. 2 (2008), 305–315. 30, 175, 177
[CBL11] Cao L., Ba W., Liu J.: Computation of the medial axis of planar
domains based on saddle point programming. Computer-Aided
Design. Vol. 43, Num. 8 (2011), 979 – 988. 54
[Cha12] Champaney L.: A domain decomposition method for studying the
effects of missing fasteners on the behavior of structural assemblies
with contact and friction. Computer Methods in Applied Mechanics
and Engineering. Vol. 205 (2012), 121–129. 30, 175, 177
[CHE08] Clark B. W., Hanks B. W., Ernst C. D.: Conformal assembly
meshing with tolerant imprinting. In Proceedings of the 17th International
Meshing Roundtable. Springer, 2008, pp. 267–280. 65
[CP99] Chapman C., Pinfold M.: Design engineering: a need to rethink the solution using knowledge based engineering. Knowledge-Based
Systems. Vol. 12, Num. 56 (1999), 257 – 267. 168
[CSKL04] Chong C., Senthil Kumar A., Lee K.: Automatic solid decomposition
and reduction for non-manifold geometric model generation.
Computer-Aided Design. Vol. 36, Num. 13 (2004), 1357–1369. 39, 44,
61, 95, 119
[CV06] Chouadria R., Veron P.: Identifying and re-meshing contact
interfaces in a polyhedral assembly for digital mock-up. Engineering
with Computers. Vol. 22, Num. 1 (2006), 47–58. 65
[DAP00] Donaghy R. J., Armstrong C. G., Price M. A.: Dimensional
reduction of surface models for analysis. Engineering with Computers.
Vol. 16, Num. 1 (2000), 24–35. 39, 45, 53
[DLG∗07] Drieux G., Leon J.-C., Guillaume F., Chevassus N., Fine ´
L., Poulat A.: Interfacing product views through a mixed shape
representation. part 2: Model processing description. International
Journal on Interactive Design and Manufacturing (IJIDeM). Vol. 1,
Num. 2 (2007), 67–83. 43
[DMB∗96] Donaghy R., McCune W., Bridgett S., Armstrong D.,
Robinson D., McKeag R.: Dimensional reduction of analysis
models. In Proceedings of the 5th International Meshing Roundtable
(1996). 53
[Dri06] Drieux G.: De la maquette numérique produit vers ses applications aval : propositions de modèles et procédés associés. PhD thesis, Institut National Polytechnique, Grenoble, FRANCE, 2006. 6, 19
[DSG97] Dey S., Shephard M. S., Georges M. K.: Elimination of the
adverse effects of small model features by the local modification of
automatically generated meshes. Engineering with Computers. Vol.
13, Num. 3 (1997), 134–152. 47
[Eck00] Eckard C.: Advantages and disavantadges of fem analysis in an
early state of the design process. In Proc. of the 2nd Worldwide
Automotive Conference, MSC Software Corp, Dearborn, Michigan,
USA (2000). 44
[EF11] E. Florentin S.Guinard P. P.: A simple estimator for stress
errors dedicated to large elastic finite element simulations: Locally
reinforced stress construction. Engineering Computations: Int J for
Computer-Aided Engineering (2011). 46
[FCF∗08] Foucault G., Cuillière J.-C., François V., Léon J.-C.,
Maranzana R.: Adaptation of cad model topology for finite element
analysis. Computer-Aided Design. Vol. 40, Num. 2 (2008),
176–196. xvi, 34, 50, 97
[FEA14] FEA A. U.: Dassault syst`emes abaqus unified fea, version 6.13
[computer software]. http://www.3ds.com/products-services/
simulia/portfolio/abaqus/overview/, 2005 – 2014. 180, XIX
[Fin01] Fine L.: Processus et méthodes d'adaptation et d'idéalisation de modèles dédiés à l'analyse de structures mécaniques. PhD thesis, Institut National Polytechnique, Grenoble, FRANCE, 2001. 22, 34, 45, 47
[FL∗10] Foucault G., Leon J.-C., et al. ´ : Enriching assembly cad models
with functional and mechanical informations to ease cae. In Proceedings
of the ASME 2010 International Design Engineering Technical
Conferences & Computers and Information in Engineering Conference
IDETC/CIE 2010 August 15-18, 2010, Montr´eal, Canada
(2010). 18, 20
[FLM03] Foskey M., Lin M. C., Manocha D.: Efficient computation of a
simplified medial axis. In Proceedings of the eighth ACM symposium
on Solid modeling and applications (2003), ACM, pp. 96–107. 54
[FMLG09] Ferrandes R., Marin P. M., Leon J. C., Giannini F. ´ : A
posteriori evaluation of simplification details for finite element model
preparation. Comput. Struct.. Vol. 87, Num. 1-2 (January 2009), 73–
80. 44
[Fou07] Foucault G.: Adaptation de modèles CAO paramétrés en vue d'une analyse de comportement mécanique par éléments finis. PhD thesis, École de technologie supérieure, Montréal, CANADA, 2007. 13, 18, 46
[FRL00] Fine L., Remondini L., Leon J.-C.: Automated generation of
fea models through idealization operators. International Journal for
Numerical Methods in Engineering. Vol. 49, Num. 1-2 (2000), 83–108.
44
[GPu14] GPure: Gpure, version 3.4 [computer software]. http://www.
gpure.net, 2011 – 2014. 35
[GZL∗10] Gao S., Zhao W., Lin H., Yang F., Chen X.: Feature suppression
based cad mesh model simplification. Computer-Aided Design.
Vol. 42, Num. 12 (2010), 1178–1188. 44
[HC03] Haimes R., Crawford C.: Unified geometry access for analysis
and design. In IMR (2003), pp. 21–31. 47
[HLG∗08] Hamri O., Leon J.-C., Giannini F., Falcidieno B., Poulat ´
A., Fine L.: Interfacing product views through a mixed shape representation.
part 1: Data structures and operators. International Journal
on Interactive Design and Manufacturing (IJIDeM). Vol. 2, Num.
2 (2008), 69–85. 43
[HPR00] Han J., Pratt M., Regli W. C.: Manufacturing feature recognition
from solid models: a status report. Robotics and Automation,
IEEE Transactions on. Vol. 16, Num. 6 (2000), 782–796. 51
[HR84] Hoffman D. D., Richards W. A.: Parts of recognition. Cognition.
Vol. 18, Num. 1 (1984), 65–96. 59
[HSKK01] Hilaga M., Shinagawa Y., Kohmura T., Kunii T. L.: Topology
matching for fully automatic similarity estimation of 3d shapes. In
Proceedings of the 28th annual conference on Computer graphics and
interactive techniques (2001), ACM, pp. 203–212. 59
[Hut86] Huth H.: Influence of fastener flexibility on the prediction of load
transfer and fatigue life for multiple-row joints. Fatigue in mechanically
fastened composite and metallic joints, ASTM STP. Vol. 927
(1986), 221–250. 29
[IIY∗01] Inoue K., Itoh T., Yamada A., Furuhata T., Shimada K.:
Face clustering of a large-scale cad model for surface mesh generation.
Computer-Aided Design. Vol. 33, Num. 3 (2001), 251–261. 49
[IML08] Iacob R., Mitrouchev P., Leon J.-C. ´ : Contact identification
for assembly–disassembly simulation with a haptic device. The Visual
Computer. Vol. 24, Num. 11 (2008), 973–979. 6
[ISO94] ISO:. ISO TC184-SC4: ISO-10303 Part 203 - Application Protocol:
Configuration controlled 3D design of mechanical parts and assemblies,
1994. 33, 72, 89, 180, XIX
[ISO03] ISO:. ISO TC184-SC4: ISO-10303 Part 214 - Application Protocol:
Core data for automotive mechanical design processes, 2003. 33, 72,
89, 180, XIX
[JB95] Jones M.R. M. P., Butlin G.: Geometry management support
for auto-meshing. In Proceedings of the 4th International Meshing
Roundtable (1995), pp. 153–164. 47
[JBH∗14] Jourdes F., Bonneau G.-P., Hahmann S., Leon J.-C., Faure ´
F.: Computation of components interfaces in highly complex assemblies.
Computer-Aided Design. Vol. 46 (2014), 170–178. xvi, 65, 71,
72, 79, 116
[JC88] Joshi S., Chang T.-C.: Graph-based heuristics for recognition of
machined features from a 3d solid model. Computer-Aided Design.
Vol. 20, Num. 2 (1988), 58–66. 51
[JD03] Joshi N., Dutta D.: Feature simplification techniques for freeform
surface models. Journal of Computing and Information Science in
Engineering. Vol. 3 (2003), 177. 51
[JG00] Jha K., Gurumoorthy B.: Multiple feature interpretation across
domains. Computers in industry. Vol. 42, Num. 1 (2000), 13–32. 51,
94, 100
[KLH∗05] Kim S., Lee K., Hong T., Kim M., Jung M., Song Y.: An
integrated approach to realize multi-resolution of b-rep model. In
Proceedings of the 2005 ACM symposium on Solid and physical modeling
(2005), ACM, pp. 153–162. 51
[Kos03] Koschan A.: Perception-based 3d triangle mesh segmentation using
fast marching watersheds. In Computer Vision and Pattern Recognition,
2003. Proceedings. 2003 IEEE Computer Society Conference on
(2003), vol. 2, IEEE, pp. II–27. 59
[KT03] Katz S., Tal A.: Hierarchical mesh decomposition using fuzzy
clustering and cuts. ACM Trans. Graph.. Vol. 22, Num. 3 (July
2003), 954–961. 59
[KWMN04] Kim K.-Y., Wang Y., Muogboh O. S., Nnaji B. O.: Design
formalism for collaborative assembly design. Computer-Aided Design.
Vol. 36, Num. 9 (2004), 849–871. 27, 64
[LAPL05] Lee K. Y., Armstrong C. G., Price M. A., Lamont J. H.: A
small feature suppression/unsuppression system for preparing b-rep
models for analysis. In Proceedings of the 2005 ACM symposium on
Solid and physical modeling (New York, NY, USA, 2005), SPM ’05,
ACM, pp. 113–124. 35, 39, 44
[LDB05] Lavoue G., Dupont F., Baskurt A. ´ : A new cad mesh segmentation
method, based on curvature tensor analysis. Computer-Aided
Design. Vol. 37, Num. 10 (2005), 975–987. 59
[Ley01] Leyton M.: A Generative Theory of Shape (Lecture Notes in Computer
Science, LNCS 2145). Springer-Verlag, 2001. 93
[LF05] Leon J.-C., Fine L. ´ : A new approach to the preparation of models
for fe analyses. International journal of computer applications in
technology. Vol. 23, Num. 2 (2005), 166–184. 35, 39, 44, 45, 81
[LFLT14] Li K., Foucault G., Leon J.-C., Trlin M.: Fast global and
partial reflective symmetry analyses using boundary surfaces of mechanical
components. Computer-Aided Design, Num. 0 (2014), –. 194
[LG97] Liu S.-S., Gadh R.: Automatic hexahedral mesh generation by
recursive convex and swept volume decomposition. In Proceedings
6th International Meshing Roundtable, Sandia National Laboratories
(1997), Citeseer, pp. 217–231. 60
[LG05] Lockett H. L., Guenov M. D.: Graph-based feature recognition
for injection moulding based on a mid-surface approach. ComputerAided
Design. Vol. 37, Num. 2 (2005), 251–262. 51
[LGT01] Lu Y., Gadh R., Tautges T. J.: Feature based hex meshing
methodology: feature recognition and volume decomposition.
Computer-Aided Design. Vol. 33, Num. 3 (2001), 221–232. 60, 100
[LL83] Ladeveze P., Leguillon D.: Error estimate procedure in the
finite element method and applications. SIAM Journal on Numerical
Analysis. Vol. 20, Num. 3 (1983), 485–509. 46
[LLKK04] Lee J. Y., Lee J.-H., Kim H., Kim H.-S.: A cellular topologybased
approach to generating progressive solid models from featurecentric
models. Computer-Aided Design. Vol. 36, Num. 3 (2004),
217–229. 51
[LLM06] Li M., Langbein F. C., Martin R. R.: Constructing regularity
feature trees for solid models. In Geometric Modeling and ProcessingGMP
2006. Springer, 2006, pp. 267–286. 63
[LLM10] Li M., Langbein F. C., Martin R. R.: Detecting design intent
in approximate cad models using symmetry. Computer-Aided Design.
Vol. 42, Num. 3 (2010), 183–201. 63
[LMTS∗05] Lim T., Medellin H., Torres-Sanchez C., Corney J. R.,
Ritchie J. M., Davies J. B. C.: Edge-based identification of
dp-features on free-form solids. Pattern Analysis and Machine Intelligence,
IEEE Transactions on. Vol. 27, Num. 6 (2005), 851–860.
93
[LNP07a] Lee H., Nam Y.-Y., Park S.-W.: Graph-based midsurface extraction
for finite element analysis. In Computer Supported Cooperative
Work in Design, 2007. CSCWD 2007. 11th International Conference
on (2007), pp. 1055–1058. 56
[LNP07b] Lee H., Nam Y.-Y., Park S.-W.: Graph-based midsurface extraction
for finite element analysis. In Computer Supported Cooperative
Work in Design, 2007. CSCWD 2007. 11th International Conference
on (2007), pp. 1055–1058. 119
[LOC16] LOCOMACHS: Low cost manufacturing and assembly of composite
and hybrid structures project. http://www.locomachs.eu/, 2012 –
2016. xv, 18
[LPA∗03] Lee K., Price M., Armstrong C., Larson M., Samuelsson
K.: Cad-to-cae integration through automated model simplification
and adaptive modelling. In International Conference on Adaptive
Modeling and Simulation (2003). 47, 49
[LPMV10] Lou R., Pernot J.-P., Mikchevitch A., Veron P. ´ : Merging enriched
finite element triangle meshes for fast prototyping of alternate
solutions in the context of industrial maintenance. Computer-Aided
Design. Vol. 42, Num. 8 (2010), 670–681. 65
[LSJS13] Lee-St John A., Sidman J.: Combinatorics and the rigidity of cad
systems. Computer-Aided Design. Vol. 45, Num. 2 (2013), 473–482.
19
[LST∗12] Li K., Shahwan A., Trlin M., Foucault G., Leon J.-C. ´ : Automated
contextual annotation of b-rep cad mechanical components
deriving technology and symmetry information to support partial retrieval.
In Proceedings of the 5th Eurographics Conference on 3D
Object Retrieval (Aire-la-Ville, Switzerland, Switzerland, 2012), EG
3DOR’12, Eurographics Association, pp. 67–70. 20
[LZ04] Liu R., Zhang H.: Segmentation of 3d meshes through spectral
clustering. In Computer Graphics and Applications, 2004. PG 2004.
Proceedings. 12th Pacific Conference on (2004), IEEE, pp. 298–305.
59
[Man88] Mantyla M.: An Introduction to Solid Modeling. W. H. Freeman
& Co., New York, NY, USA, 1988. 93
[MAR12] Makem J., Armstrong C., Robinson T.: Automatic decomposition
and efficient semi-structured meshing of complex solids. In
Proceedings of the 20th International Meshing Roundtable, Quadros
W., (Ed.). Springer Berlin Heidelberg, 2012, pp. 199–215. xvi, 60,
119, 126
[MCC98] Mobley A. V., Carroll M. P., Canann S. A.: An object
oriented approach to geometry defeaturing for finite element meshing.
In Proceedings of the 7th International Meshing Roundtable, Sandia
National Labs (1998), pp. 547–563. 35
[MCD11] Musuvathy S., Cohen E., Damon J.: Computing medial axes
of generic 3d regions bounded by b-spline surfaces. Computer-Aided
Design. Vol. 43, Num. 11 (2011), 1485 – 1495. Solid and Physical Modeling 2011. 54
[MGP10] Miklos B., Giesen J., Pauly M.: Discrete scale axis representations
for 3d geometry. ACM Transactions on Graphics (TOG). Vol.
29, Num. 4 (2010), 101. 54
[OH04] O. Hamri J-C. Leon F. G.: A new approach of interoperability
between cad and simulation models. In TMCE (2004). 48
[PFNO98] Peak R. S., Fulton R. E., Nishigaki I., Okamoto N.: Integrating
engineering design and analysis using a multi-representation
approach. Engineering with Computers. Vol. 14, Num. 2 (1998), 93–
114. 44
[QO12] Quadros W. R., Owen S. J.: Defeaturing cad models using a
geometry-based size field and facet-based reduction operators. Engineering
with Computers. Vol. 28, Num. 3 (2012), 211–224. 47
[QVB∗10] Quadros W., Vyas V., Brewer M., Owen S., Shimada K.: A
computational framework for automating generation of sizing function
in assembly meshing via disconnected skeletons. Engineering with
Computers. Vol. 26, Num. 3 (2010), 231–247. 65
[RAF11] Robinson T., Armstrong C., Fairey R.: Automated mixed
dimensional modelling from 2d and 3d cad models. Finite Elements
in Analysis and Design. Vol. 47, Num. 2 (2011), 151 – 165. xvi, 44,
53, 54, 81, 119
[RAM∗06] Robinson T. T., Armstrong C. G., McSparron G., Quenardel
A., Ou H., McKeag R. M.: Automated mixed dimensional
modelling for the finite element analysis of swept and revolved
cad features. In Proceedings of the 2006 ACM symposium on Solid
and physical modeling (2006), SPM ’06, pp. 117–128. xvi, 53, 60, 61,
87, 126, 146
[RBO02] Ribó R., Bugeda G., Oñate E.: Some algorithms to correct a geometry
in order to create a finite element mesh. Computers and Structures.
Vol. 80, Num. 1617 (2002), 1399 – 1408. 47
[RDCG12] Russ B., Dabbeeru M. M., Chorney A. S., Gupta S. K.: Automated
assembly model simplification for finite element analysis. In
ASME 2012 International Design Engineering Technical Conferences
and Computers and Information in Engineering Conference (2012),
American Society of Mechanical Engineers, pp. 197–206. 64
[Req77] Requicha A.: Mathematical models of rigid solid objects. Tech.
rep., Technical Memorandum - 28, Rochester Univ., N.Y. Production
Automation Project., 1977. 8
[Req80] Requicha A. G.: Representations for rigid solids: Theory, methods,
and systems. ACM Computing Surveys (CSUR). Vol. 12, Num. 4
(1980), 437–464. 8
[Rez96] Rezayat M.: Midsurface abstraction from 3d solid models: general
theory and applications. Computer-Aided Design. Vol. 28, Num. 11
(1996), 905 – 915. xvi, 35, 55, 56, 95, 100, 134
[RG03] Ramanathan M., Gurumoorthy B.: Constructing medial axis
transform of planar domains with curved boundaries. Computer-Aided
Design. Vol. 35, Num. 7 (2003), 619 – 632. 39, 54
[RG04] Ramanathan M., Gurumoorthy B.: Generating the mid-surface
of a solid using 2d mat of its faces. Computer-Aided Design and
Applications. Vol. 1, Num. 1-4 (2004), 665–674. 56
[RG10] Ramanathan M., Gurumoorthy B.: Interior medial axis
transform computation of 3d objects bound by free-form surfaces.
Computer-Aided Design. Vol. 42, Num. 12 (2010), 1217 – 1231. 54
[Roc12] Rocca G. L.: Knowledge based engineering: Between AI and CAD. Review of a language based technology to support engineering design.
Advanced Engineering Informatics. Vol. 26, Num. 2 (2012), 159 – 179.
Knowledge based engineering to support complex product design. 168
[ROM14] ROMMA: Robust mechanical models for assemblies. http://romma.
lmt.ens-cachan.fr/, 2010 – 2014. xix, 177, 180, 189
[Sak95] Sakurai H.: Volume decomposition and feature recognition: Part
1: polyhedral objects. Computer-Aided Design. Vol. 27, Num. 11
(1995), 833–843. 61
[SBBC00] Sheffer A., Bercovier M., Blacker T., Clements J.: Virtual
topology operators for meshing. International Journal of Computational
Geometry & Applications. Vol. 10, Num. 03 (2000), 309–331.
49
[SBO98] Shephard M. S., Beall M. W., O’Bara R. M.: Revisiting the
elimination of the adverse effects of small model features in automatically
generated meshes. In IMR (1998), pp. 119–131. 47
[SD96] Sakurai H., Dave P.: Volume decomposition and feature recognition,
part ii: curved objects. Computer-Aided Design. Vol. 28, Num.
6 (1996), 519–537. 61
[SGZ10] Sun R., Gao S., Zhao W.: An approach to b-rep model simplifi-
cation based on region suppression. Computers & Graphics. Vol. 34,
Num. 5 (2010), 556–564. 39
[Sha95] Shah J. J.: Parametric and feature-based CAD/CAM: concepts,
techniques, and applications. John Wiley & Sons, 1995. 13, 50
[She01] Sheffer A.: Model simplification for meshing using face clustering.
Computer-Aided Design. Vol. 33, Num. 13 (2001), 925–934. 34, 49
[Sil81] Silva C. E.: Alternative definitions of faces in boundary representatives
of solid objects. Tech. rep., Tech. Memo. 36, Production Automation
Project, Univ. of Rochester, Rochester, N.Y., 1981, 1981.
97
[SLF∗12] Shahwan A., Leon J.-C., Fine L., Foucault G., et al. ´ : Reasoning
about functional properties of components based on geometrical
descriptions. In Proceedings of the ninth International Symposium
on Tools and Methods of Competitive Engineering, Karlsruhe, Germany)
(2012), TMCE12. 38, 71, 73, 79, 116, 168, 169
[SLF∗13] Shahwan A., Leon J.-C., Foucault G., Trlin M., Palombi ´
O.: Qualitative behavioral reasoning from components interfaces to
components functions for dmu adaption to fe analyses. ComputerAided
Design. Vol. 45, Num. 2 (2013), 383–394. 19, 20, 27, 38, 71, 72,
73, 79, 116, 142, 160, 168, 169, 179
[SRX07] Stroud I., Renner G., Xirouchakis P.: A divide and conquer
algorithm for medial surface calculation of planar polyhedra.
Computer-Aided Design. Vol. 39, Num. 9 (2007), 794–817. 44, 54
[SS14] Shapiro V., Srinivasan V.: Opinion: Report from a 2013
asme panel on geometric interoperability for advanced manufacturing.
Comput. Aided Des.. Vol. 47 (February 2014), A1–A2. 196
[SSCO08] Shapira L., Shamir A., Cohen-Or D.: Consistent mesh partitioning
and skeletonisation using the shape diameter function. The
Visual Computer. Vol. 24, Num. 4 (2008), 249–259. 59
[SSK∗05] Seo J., Song Y., Kim S., Lee K., Choi Y., Chae S.: Wraparound
operation for multi-resolution cad model. Computer-Aided
Design & Applications. Vol. 2, Num. 1-4 (2005), 67–76. 51
[SSM∗10] Sheen D.-P., Son T.-g., Myung D.-K., Ryu C., Lee S. H.,
Lee K., Yeo T. J.: Transformation of a thin-walled solid model
into a surface model via solid deflation. Comput. Aided Des.. Vol. 42,
Num. 8 (August 2010), 720–730. 39, 44, 57, 95, 100, 119, 126, 134
[SSR∗07] Sheen D.-P., Son T.-g., Ryu C., Lee S. H., Lee K.: Dimension
reduction of solid models by mid-surface generation. International
Journal of CAD/CAM. Vol. 7, Num. 1 (2007). 57
[Str10] Stroud I.: Boundary Representation Modelling Techniques, 1st ed.
Springer Publishing Company, Incorporated, 2010. 10
[SV93] Shapiro V., Vossler D. L.: Separation for boundary to csg conversion.
ACM Trans. Graph.. Vol. 12, Num. 1 (1993), 35–55. 62, 63,
107
[Sza96] Szabo B.: The problem of model selection in numerical simulation.
Advances in computational methods for simulation (1996), 9–16. 22
[Tau01] Tautges T. J.: Automatic detail reduction for mesh generation
applications. In Proceedings 10th International Meshing Roundtable
(2001), pp. 407–418. 35, 51
[TBG09] Thakur A., Banerjee A. G., Gupta S. K.: A survey of CAD
model simplification techniques for physics-based simulation applications.
Computer-Aided Design. Vol. 41, Num. 2 (2009), 65 – 80. 39,
51
[TNRA14] Tierney C. M., Nolan D. C., Robinson T. T., Armstrong
C. G.: Managing equivalent representations of design and analysis
models. Computer-Aided Design and Applications. Vol. 11, Num. 2
(2014), 193–205. 12
[Tro99] Troussier N.: Contribution to the integration of mechanical calculation
in engineering design : methodological proposition for the
use and the reuse. PhD thesis, University Joseph Fourier, Grenoble,
FRANCE, 1999. 29, 44, 76
[TWKW59] Timoshenko S., Woinowsky-Krieger S., Woinowsky S.: Theory
of plates and shells, vol. 2. McGraw-hill New York, 1959. 25, 126
[Ull09] Ullman D. G.: The mechanical design process, vol. Fourth Edition.
McGraw-Hill Science/Engineering/Math, 2009. 175
[vdBvdMB03] van den Berg E., van der Meiden H. A., Bronsvoort W. F.:
Specification of freeform features. In Proceedings of the Eighth ACM
Symposium on Solid Modeling and Applications (New York, NY, USA,
2003), SM ’03, ACM, pp. 56–64. 173
[VR93] Vandenbrande J. H., Requicha A. A.: Spatial reasoning for the
automatic recognition of machinable features in solid models. Pattern
Analysis and Machine Intelligence, IEEE Transactions on. Vol. 15,
Num. 12 (1993), 1269–1285. 51
[VSR02] Venkataraman S., Sohoni M., Rajadhyaksha R.: Removal of
blends from boundary representation models. In Proceedings of the
seventh ACM symposium on Solid modeling and applications (2002),
ACM, pp. 83–94. 35, 52, 93
[Woo03] Woo Y.: Fast cell-based decomposition and applications to solid
modeling. Computer-Aided Design. Vol. 35, Num. 11 (2003), 969–
977. 51, 61, 94
[Woo14] Woo Y.: Abstraction of mid-surfaces from solid models of thinwalled
parts: A divide-and-conquer approach. Computer-Aided Design.
Vol. 47 (2014), 1–11. xvi, 44, 61, 62, 89, 94, 119
[WS02] Woo Y., Sakurai H.: Recognition of maximal features by volume
decomposition. Computer-Aided Design. Vol. 34, Num. 3 (2002), 195–
207. 51, 61, 94, 100
[ZM02] Zhu H., Menq C.: B-rep model simplification by automatic
fillet/round suppressing for efficient automatic feature recognition.
Computer-Aided Design. Vol. 34, Num. 2 (2002), 109–123. 35, 52,
93
[ZPK∗02] Zhang Y., Paik J., Koschan A., Abidi M. A., Gorsich D.:
Simple and efficient algorithm for part decomposition of 3-d triangulated
models based on curvature analysis. In Image Processing. 2002.
Proceedings. 2002 International Conference on (2002), vol. 3, IEEE,
pp. III–273. 59
[ZT00] Zienkiewicz O. C., Taylor R. L.: The Finite Element Method:
Solid Mechanics, vol. 2. Butterworth-heinemann, 2000. 24
Appendix A
Illustration of generation
processes of CAD components
This appendix shows the construction process of two industrial use-cases. The
components have been designed in CATIA [CAT14] CAD software.
A.1 Construction process of an injected plastic part
The following figures present the complete shape generation process of the use-case as
shown in Figure 4.1. The component has been designed with the successive application
of 37 modeling features. The used features are:
1. Material addition or removal;
2. Surface operations: fillets, chamfers.
Two views, defined as top and bottom views, are associated with the construction tree to present all modeling steps of the object shape.
A.2 Construction process of an aeronautical metallic
part
The following figures present a complete shape generation process of a simple metallic component which is commonly found in aeronautical structures. The component has been designed using the successive application of boolean operations of type addition and removal. This is a common practice for aeronautical metallic design. This technique directly reflects the machining steps in the component design but makes the shape generation process quite complex. The simple example presented in Figure A.6 contains nine main operations which could be reduced to three major operations (one extrusion and two hole drillings).
[Figure layout: construction tree with top and bottom views of the shape for modeling steps 1–8.]
Figure A.1: An example of a shape generation process of an injected plastic part 1/5
[Figure layout: construction tree with top and bottom views of the shape for modeling steps 9–16.]
Figure A.2: An example of a shape generation process of an injected plastic part 2/5
[Figure layout: construction tree with top and bottom views of the shape for modeling steps 17–24.]
Figure A.3: An example of a shape generation process of an injected plastic part 3/5
[Figure layout: construction tree with top and bottom views of the shape for modeling steps 25–32.]
Figure A.4: An example of a shape generation process of an injected plastic part 4/5
[Figure layout: construction tree with top and bottom views of the shape for modeling steps 33–37.]
Figure A.5: An example of a shape generation process of an injected plastic part 5/5
[Figure layout: main solid, solids removed or added with boolean operations, and construction tree for operations 1–9.]
Figure A.6: An example of a shape generation process of a simple metallic component. The component has been mainly designed using boolean operations.
Appendix B
Features equivalence
This appendix illustrates the main modeling features used in CAD software to design components (illustrations courtesy of Dassault Systèmes [CAT14]).
[Figure panels (sketch-based features): additive extrusion; non-perpendicular additive extrusion; removal extrusion; additive revolution; removal revolution.]
Figure B.1: Examples of Sketch-Based Features
[Figure panels (sketch-based features and their extrusion/revolution equivalents): hole drilling, equivalent to one removal extrusion or one removal revolution; hole types, equivalent to two removal extrusions or one removal revolution; additive sweep; removal sweep; stiffener, equivalent to an additive extrusion.]
Figure B.2: Examples of Sketch-Based Features
[Figure panels (dress-up features and their extrusion equivalents): fillet, either a usual fillet or one which can be included in the extrusion contour; chamfer, equivalent to a removal extrusion or included in an additive extrusion contour; draft angle, equivalent to various additive and removal extrusions, a simple draft being included in the additive extrusion contour; shell, equivalent to a removal extrusion; thickness, equivalent to an additive extrusion.]
Figure B.3: Examples of Dress-Up Features
[Figure panels: boolean operations of difference, union, and intersection.]
Figure B.4: Examples of Boolean operations
Appendix C
Taxonomy of a primitive
morphology
This appendix illustrates 18 morphological configurations associated with a MAT
medial edge of a volume primitive Pi of type extrusion.
The two tables differ according to whether the idealization direction of Pi corresponds
to the extrusion direction, see Table C.1, or whether the idealization direction
of Pi is included in the extrusion contour, see Table C.2. The reference ratio xr and user
ratio xu are used to specify, in each table, the intervals of morphology differentiating
beams, plates or shells and 3D thick domains.
[Table layout: each cell compares the two ratios x and y to the user ratio xu and the reference ratio xr, with x = maxØ/L1 if maxØ > L1, x = L1/maxØ otherwise, and y = L2/maxØ, where maxØ is the maximum MAT diameter and L1, L2 the other dimensions of the primitive. The resulting morphologies are: PLATE; THICK PLATE under user hypothesis (in the plane of Ø); BEAM under user hypothesis (direction L2); BEAM to PLATE under user hypothesis; BEAM to BAND under user hypothesis; PLATE (plane orthogonal to Ø) to BEAM (direction L2); BAND under user hypothesis; BAND.]
Table C.1: Morphology associated with a MAT medial edge of a primitive Pi. 1/2
[Table layout: same ratios x and y as in Table C.1, for the case where the idealization direction of Pi lies in the extrusion contour; the special cases L1 > 10·L2 and L2 > 10·L1 are treated separately. The resulting morphologies are: MASSIF; BEAM under user hypothesis (direction L1); BEAM (direction L1); BEAM (direction L2); BEAM; SHELL under user hypothesis (plane L1/L2); SHELL (plane L1/L2); BEAM (direction L1) to SHELL under user hypothesis (plane L1/L2); BAND.]
Table C.2: Morphology associated with a MAT medial edge of a primitive Pi. 2/2
Appendix D
Export to CAE software
This appendix illustrates the data structure used to transfer the adapted DMU to a CAE software, i.e., Abaqus [FEA14]. STEP files [ISO94, ISO03] are used to transfer the geometric model together with associated xml files.
Figure D.1: Illustration of the STEP export of a Bolted Junction with sub-domains around the screw. (a) Product structure opened in the CATIA software, (b) associated xml file containing the association between components and interfaces.
Figure D.2: Illustration of the STEP export of a Bolted Junction. Each component containing volume sub-domains is exported as a STEP assembly.
Figure D.3: Illustration of the STEP export of a Bolted Junction. Each inner interface
between sub-domains is part of the component assembly.
Figure D.4: Illustration of the STEP export of a Bolted Junction. Each outer interface
between components is part of the root assembly.
[Figure panels: the Root Joint patches CAD assembly and its interfaces.]
Figure D.5: Illustration of the STEP export of the full Root Joint assembly.
Modélisation et d´etection statistiques pour la
criminalistique num´erique des images
Thanh Hai Thai
To cite this version:
Thanh Hai Thai. Mod´elisation et d´etection statistiques pour la criminalistique num´erique des
images. Statistics. Universit´e de Technologie de Troyes, 2014. French.
HAL Id: tel-01072541
https://tel.archives-ouvertes.fr/tel-01072541
Submitted on 8 Oct 2014
HAL is a multi-disciplinary open access
archive for the deposit and dissemination of scientific
research documents, whether they are published
or not. The documents may come from
teaching and research institutions in France or
abroad, or from public or private research centers.
L’archive ouverte pluridisciplinaire HAL, est
destin´ee au d´epˆot et `a la diffusion de documents
scientifiques de niveau recherche, publi´es ou non,
´emanant des ´etablissements d’enseignement et de
recherche fran¸cais ou ´etrangers, des laboratoires
publics ou priv´es.PHD THESIS
to obtain the degree of
DOCTOR of UNIVERSITY
of TECHNOLOGY of TROYES
Speciality: Systems Optimization and Dependability
presented and defended by
Thanh Hai THAI
August 28th 2014
Statistical Modeling and Detection for Digital
Image Forensics
COMMITTEE:
Mrs. Jessica FRIDRICH Professor Reviewer
M. Patrick BAS Chargé de recherche CNRS Reviewer
M. William PUECH Professeur des universités Examinator
M. Igor NIKIFOROV Professeur des universités Examinator
M. Rémi COGRANNE Maître de conférences Examinator
M. Florent RETRAINT Enseignant-Chercheur SupervisorTo my Mom, my Dad, and my little brother,
To Thu Thao, my fiancée,
for their unlimited support, encouragement, and love.iii
Acknowledgments
This work has been carried out within the Laboratory of Systems Modeling and
Dependability (LM2S) at the University of Technology of Troyes (UTT). It is funded
in part by the strategic program COLUMBO.
This work has been accomplished under the supervision of M. Florent RETRAINT.
I would like to express my deepest gratitude to him for his highly professional
guidance and constant support. He has accompanied me since my master's internship and encouraged me to discover this field. I highly value the friendly yet professional environment he created during my three-year doctorate. The confidence he has placed in me is possibly the greatest reward of my endeavors.
I am greatly indebted to my PhD co-advisor, M. Rémi COGRANNE, who has assisted me with great efficiency and availability. It was my honor and pleasure to work with him. His personal and professional help during my doctorate is invaluable.
I would like to express my special thanks to Mrs. Jessica FRIDRICH and M.
Patrick BAS for agreeing to review my PhD thesis. I would also like to thank M. Igor NIKIFOROV and M. William PUECH for agreeing to examine this thesis. Valuable remarks from such respected experts in the field can only improve the quality of this thesis.
I would like to thank the members of LM2S for the friendly and welcoming environment they have offered me since I joined the team.
Résumé
Le XXIème siècle étant le siècle du passage au tout numérique, les médias digitaux
jouent maintenant un rôle de plus en plus important dans la vie de tous les jours. De
la même manière, les logiciels sophistiqués de retouche d’images se sont démocratisés
et permettent aujourd’hui de diffuser facilement des images falsifiées. Ceci pose un
problème sociétal puisqu’il s’agit de savoir si ce que l’on voit a été manipulé. Cette
thèse s’inscrit dans le cadre de la criminalistique des images numériques. Deux
problèmes importants sont abordés : l’identification de l’origine d’une image et la
détection d’informations cachées dans une image. Ces travaux s’inscrivent dans le
cadre de la théorie de la décision statistique et proposent la construction de détecteurs permettant de respecter une contrainte sur la probabilité de fausse alarme.
Afin d’atteindre une performance de détection élevée, il est proposé d’exploiter les
propriétés des images naturelles en modélisant les principales étapes de la chaîne
d’acquisition d’un appareil photographique. La méthodologie, tout au long de ce
manuscrit, consiste à étudier le détecteur optimal donné par le test du rapport de
vraisemblance dans le contexte idéal où tous les paramètres du modèle sont connus.
Lorsque des paramètres du modèle sont inconnus, ces derniers sont estimés
afin de construire le test du rapport de vraisemblance généralisé dont les performances
statistiques sont analytiquement établies. De nombreuses expérimentations
sur des images simulées et réelles permettent de souligner la pertinence de l’approche
proposée.
Abstract
The twenty-first century is witnessing a digital revolution that has made digital media ubiquitous; they play an increasingly important role in our everyday life. At the same time, sophisticated image-editing software has become more accessible, so that falsified images appear with growing frequency and sophistication. The credibility and trustworthiness of digital images have been
eroded. To restore trust in digital images, the field of digital image forensics was born. This thesis contributes to that field. Two important problems
are addressed: image origin identification and hidden data detection. These
problems are cast into the framework of hypothesis testing theory. The approach
proposes to design a statistical test that allows us to guarantee a prescribed false
alarm probability. In order to achieve a high detection performance, it is proposed
to exploit statistical properties of natural images by modeling the main steps of
the image processing pipeline of a digital camera. The methodology throughout this
manuscript consists of studying an optimal test given by the Likelihood Ratio Test
in the ideal context where all model parameters are known in advance. When the
model parameters are unknown, a method is proposed for parameter estimation in
order to design a Generalized Likelihood Ratio Test whose statistical performances
are analytically established. Numerical experiments on simulated and real images
highlight the relevance of the proposed approach.

Table of Contents
1 General Introduction 1
1.1 General Context and Problem Description . . . . . . . . . . . . . . . 1
1.2 Outline of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Publications and Authors’ Contribution . . . . . . . . . . . . . . . . 6
I Overview on Digital Image Forensics and Statistical Image
Modeling 9
2 Overview on Digital Image Forensics 11
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Image Processing Pipeline of a Digital Camera . . . . . . . . . . . . 12
2.2.1 RAW Image Formation . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Post-Acquisition Processing . . . . . . . . . . . . . . . . . . . 15
2.2.3 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Passive Image Origin Identification . . . . . . . . . . . . . . . . . . . 19
2.3.1 Lens Aberration . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.2 Sensor Imperfections . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.3 CFA Pattern and Interpolation . . . . . . . . . . . . . . . . . 25
2.3.4 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4 Passive Image Forgery Detection . . . . . . . . . . . . . . . . . . . . 27
2.5 Steganography and Steganalysis in Digital Images . . . . . . . . . . . 29
2.5.1 LSB Replacement Paradigm and Jsteg Algorithm . . . . . . . 32
2.5.2 Steganalysis of LSB Replacement in Spatial Domain . . . . . 33
2.5.3 Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . . . . . 38
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3 Overview on Statistical Modeling of Natural Images 41
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Spatial-Domain Image Model . . . . . . . . . . . . . . . . . . . . . . 41
3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model . . . . . . 42
3.2.2 Non-Linear Signal-Dependent Noise Model . . . . . . . . . . . 44
3.3 DCT Coefficient Model . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3.1 First-Order Statistics of DCT Coefficients . . . . . . . . . . . 45
3.3.2 Higher-Order Statistics of DCT Coefficients . . . . . . . . . . 46
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
II Statistical Modeling and Estimation for Natural Images from
RAW Format to JPEG Format 47
4 Statistical Image Modeling and Estimation of Model Parameters 49
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2 Statistical Modeling of RAW Images . . . . . . . . . . . . . . . . . . 50
4.2.1 Heteroscedastic Noise Model . . . . . . . . . . . . . . . . . . . 50
4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 Statistical Modeling of TIFF Images . . . . . . . . . . . . . . . . . . 57
4.3.1 Generalized Noise Model . . . . . . . . . . . . . . . . . . . . . 57
4.3.2 Estimation of Parameters (ã, b̃) in the Generalized Noise Model 59
4.3.3 Application to Image Denoising . . . . . . . . . . . . . . . . . 61
4.3.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 62
4.4 Statistical Modeling in DCT Domain . . . . . . . . . . . . . . . . . . 65
4.4.1 Statistical Model of Quantized DCT Coefficients . . . . . . . 65
4.4.2 Estimation of Parameters (α, β) from Unquantized DCT Coefficients
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.3 Estimation of Parameters (α, β) from Quantized DCT Coeffi-
cients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 71
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
III Camera Model Identification in Hypothesis Testing Framework
75
5 Camera Model Identification Based on the Heteroscedastic Noise
Model 77
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3 Optimal Detector for Camera Model Identification Problem . . . . . 80
5.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 80
5.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 80
5.4 GLRT with Unknown Image Parameters . . . . . . . . . . . . . . . . 83
5.5 GLRT with Unknown Image and Camera Parameters . . . . . . . . . 86
5.6 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.6.1 Detection Performance on Simulated Database . . . . . . . . 89
5.6.2 Detection Performance on Two Nikon D70 and Nikon D200
Camera Models . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.6.3 Detection Performance on a Large Image Database . . . . . . 91
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.8 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8.1 Expectation and Variance of the GLR Λhet(z^wapp_k,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8.2 Expectation and Variance of the GLR Λ̃het(z^wapp_k,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6 Camera Model Identification Based on the Generalized Noise
Model 99
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.3 Optimal Detector for Camera Model Identification Problem . . . . . 102
6.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 102
6.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 103
6.4 Practical Context: GLRT . . . . . . . . . . . . . . . . . . . . . . . . 105
6.4.1 GLRT with Unknown Image Parameters . . . . . . . . . . . . 105
6.4.2 GLRT with Unknown Image and Camera Parameters . . . . . 106
6.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.5.1 Detection Performance on Simulated Database . . . . . . . . 108
6.5.2 Detection Performance on Two Nikon D70 and Nikon D200
Camera Models . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.5.3 Detection Performance on a Large Image Database . . . . . . 110
6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7 Camera Model Identification Based on DCT Coefficient Statistics115
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.2 Camera Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.1 Design of Camera Fingerprint . . . . . . . . . . . . . . . . . . 116
7.2.2 Extraction of Camera Fingerprint . . . . . . . . . . . . . . . . 117
7.2.3 Property of Camera Fingerprint . . . . . . . . . . . . . . . . . 119
7.3 Optimal Detector for Camera Model Identification Problem . . . . . 120
7.3.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 120
7.3.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 121
7.4 Practical Context: GLRT . . . . . . . . . . . . . . . . . . . . . . . . 123
7.4.1 GLRT with Unknown Parameters αk . . . . . . . . . . . . . . 123
7.4.2 GLRT with Unknown Parameters (αk, c̃k,1, d̃k,1) . . . . . . . . 124
7.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.5.1 Detection Performance on Simulated Database . . . . . . . . 127
7.5.2 Detection Performance on Two Canon Ixus 70 and Nikon
D200 Camera Models . . . . . . . . . . . . . . . . . . . . . . 128
7.5.3 Detection Performance on a Large Image Database . . . . . . 129
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.7.1 Relation between the Parameters (ã, b̃, γ) and (αu,v, βu,v) . . 132
7.7.2 Laplace's Approximation of DCT Coefficient Model . . . . . . 133
7.7.3 Expectation and Variance of the LR Λdct(Ik,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.7.4 Asymptotic Expectation and Variance of the GLR Λ̃dct(Ik,i) under Hypothesis Hj . . . . . . . . . . . . . . . . . . . . . . . 135
IV Statistical Detection of Hidden Data in Natural Images 137
8 Statistical Detection of Data Embedded in Least Significant Bits
of Clipped Images 139
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.2 Cover-Image Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.2.1 Non-Clipped Image Model . . . . . . . . . . . . . . . . . . . . 141
8.2.2 Clipped Image Model . . . . . . . . . . . . . . . . . . . . . . 142
8.3 GLRT for Non-Clipped Images . . . . . . . . . . . . . . . . . . . . . 142
8.3.1 Impact of LSB Replacement: Stego-Image Model . . . . . . . 142
8.3.2 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 143
8.3.3 ML Estimation of Image Parameters . . . . . . . . . . . . . . 144
8.3.4 Design of GLRT . . . . . . . . . . . . . . . . . . . . . . . . . 145
8.4 GLRT for Clipped Images . . . . . . . . . . . . . . . . . . . . . . . . 148
8.4.1 ML Estimation of Image Parameters . . . . . . . . . . . . . . 148
8.4.2 Design of GLRT . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.5 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.5.1 Detection Performance on Simulated Database . . . . . . . . 151
8.5.2 Detection Performance on Real Image Database . . . . . . . . 153
8.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
8.7.1 Denoising Filter for Non-Clipped RAW Images Corrupted by
Signal-Dependent Noise . . . . . . . . . . . . . . . . . . . . . 155
8.7.2 Statistical distribution of the GLR Λ̂ncl(Z) under hypothesis H0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
8.7.3 Statistical distribution of the GLR Λ̂ncl(Z) under hypothesis H1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.7.4 ML Estimation of Parameters in Truncated Gaussian Data . 159
8.7.5 Statistical distribution of the GLR Λ̂cl(Z) . . . . . . . . . . . 160
9 Steganalysis of Jsteg Algorithm Based on a Novel Statistical Model
of Quantized DCT Coefficients 163
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Optimal Detector for Steganalysis of Jsteg Algorithm . . . . . . . . . 164
9.2.1 Hypothesis Testing Formulation . . . . . . . . . . . . . . . . . 164
9.2.2 LRT for Two Simple Hypotheses . . . . . . . . . . . . . . . . 165
9.3 Quantitative Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . 168
9.3.1 ML Estimation of Embedding Rate . . . . . . . . . . . . . . . 168
9.3.2 Revisiting WS estimator . . . . . . . . . . . . . . . . . . . . . 169
9.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.4.1 Detection Performance of the proposed LRT . . . . . . . . . . 169
9.4.2 Accuracy of the Proposed Estimator . . . . . . . . . . . . . . 172
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
10 Conclusions and Perspectives 175
10.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
10.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.2.1 Perspectives to Digital Forensics . . . . . . . . . . . . . . . . 178
10.2.2 Perspectives to Statistical Image Modeling . . . . . . . . . . . 180
10.2.3 Perspectives to Statistical Hypothesis Testing Theory Applied
for Digital Forensics . . . . . . . . . . . . . . . . . . . . . . . 180
A Statistical Hypothesis Testing Theory 181
A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
A.2 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
A.3 Test between Two Simple Hypotheses . . . . . . . . . . . . . . . . . . 183
A.4 Test between Two Composite Hypotheses . . . . . . . . . . . . . . . 187
A.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Bibliography 193

List of Figures
1.1 Example of falsification. . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Structure of the work presented in this thesis. . . . . . . . . . . . . . 4
2.1 Image processing pipeline of a digital camera. . . . . . . . . . . . . . 13
2.2 Sample color filter arrays. . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 JPEG compression chain. . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Typical steganographic system. . . . . . . . . . . . . . . . . . . . . . 29
2.5 Operations of LSB replacement (top) and Jsteg (bottom). . . . . . . 31
2.6 Diagram of transition probabilities between trace subsets under LSB
replacement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1 Scatter-plot of pixels’ expectation and variance from a natural RAW
image with ISO 200 captured by Nikon D70 and Nikon D200 cameras.
The image is segmented into homogeneous segments. In each
segment, the expectation and variance are calculated and the parameters
(a, b) are estimated as proposed in Section 4.2.2. The dash line
is drawn using the estimated parameters (a, b). Only the red channel
is used in this experiment. . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Scatter-plot of pixels’ mean and variance from JPEG images with
ISO 200 issued from Nikon D70 and Nikon D200 cameras. The red
channel is used in this experiment. The image is segmented into
homogeneous segments to estimate local means and variances. The
generalized noise model is used to fit to the data. . . . . . . . . . . . 58
4.3 Estimated parameters (ã, b̃) on JPEG images issued from different
camera models in Dresden image database. . . . . . . . . . . . . . . 63
4.4 Comparison between the proposed method and Farid’s for estimation
of gamma factor on JPEG images issued from Nikon D200 camera
model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.5 Comparison between the Laplacian, GΓ and proposed model of DCT
coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.6 Comparison between the quantized Laplacian, quantized GΓ and proposed
model for quantized AC coefficient. . . . . . . . . . . . . . . . 68
4.7 Averaged χ² test statistics of GG, GΓ and proposed model for 63 quantized AC coefficients . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1 Estimated camera parameters (a, b) on 20 RAW images of different
camera model with ISO 200 and different camera settings. . . . . . . 79
5.2 Estimated camera parameters (a, b) of different devices per camera
model with ISO 200 and different camera settings. . . . . . . . . . . . 79
5.3 The detection performance of the GLRT δ*het with 50 pixels selected randomly from simulated images. . . . . . . . . . . . . . . . . . . . . 85
5.4 The detection performance of the GLRT δ̃*het with 50 pixels selected randomly on simulated images. . . . . . . . . . . . . . . . . . . . . . 88
5.5 The detection performance of the test δ*het with 200 pixels selected randomly on simulated images for a0 = 0.0115 and different parameters a1. . . . . . 89
5.6 The detection performance of the test δ*het and δ̃*het on simulated images for different numbers of pixels. . . . . . . . . . . . . . . . . . 89
5.7 The detection performance of the GLRTs δ*het and δ̃*het on the Dresden database for different numbers of pixels. . . . . . . . . . . . . 91
5.8 Comparison between the theoretical false alarm probability (FAP)
and the empirical FAP, from real images of Dresden database, plotted
as a function of decision threshold τ . . . . . . . . . . . . . . . . . . . 92
6.1 Empirical distribution of noise residuals z̃^res_k,i in a segment compared with theoretical Gaussian distribution. . . . . . . . . . . . . . . 101
6.2 Estimated parameters (ã, b̃) on JPEG images issued from Canon Ixus
70 camera with different camera settings. . . . . . . . . . . . . . . . 102
6.3 Estimated parameters (ã, b̃) on JPEG images issued from different
devices of Canon Ixus 70 model. . . . . . . . . . . . . . . . . . . . . 103
6.4 Detection performance of the proposed tests for 50 and 100 pixels
extracted randomly from simulated JPEG images with quality factor
100. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.5 Detection performance of the GLRT δ̃*gen for 100 pixels extracted randomly from simulated JPEG images with different quality factors. 109
6.6 Detection performance of the GLRT δ*gen and δ̃*gen for 50 and 100 pixels extracted randomly from JPEG images of Nikon D70 and Nikon D200 cameras. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.7 Comparison between the theoretical false alarm probability (FAP)
and the empirical FAP, plotted as a function of decision threshold τ . 111
7.1 Estimated parameters (α, β) at frequencies (0, 1) and (8, 8) of uniform images generated using ã = 0.1, b̃ = 2, γ = 2.2. . . . . . . . . . . . . 117
7.2 Estimated parameters (α, β) at frequency (8, 8) of natural JPEG images
issued from Canon Ixus 70 and Nikon D200 camera models. . . 119
7.3 Estimated parameters (c̃, d̃) at frequency (8, 8) of natural JPEG images
issued from different camera models in Dresden database. . . . . 120
7.4 Detection performance of proposed tests on simulated vectors with
1024 coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
7.5 Detection performance of proposed tests on simulated vectors with
4096 coefficients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.6 Detection performance of proposed GLRTs for 1024 coefficients at
frequency (8, 8) extracted randomly from simulated images with different
quality factors. . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.7 Detection performance of proposed tests for different number of coefficients
at frequency (8, 8) of natural JPEG images taken by Canon
Ixus 70 and Nikon D200 camera models. . . . . . . . . . . . . . . . 127
7.8 Detection performance of the GLRT δ̃*dct for 4096 coefficients at different
frequencies of natural JPEG images taken by Canon Ixus 70
and Nikon D200 camera models. . . . . . . . . . . . . . . . . . . . . 128
7.9 Comparison between the theoretical false alarm probability (FAP)
and the empirical FAP, plotted as a function of decision threshold τ ,
of the proposed tests at the frequency (8,8) of natural images. . . . . 129
8.1 Detection performance on non-clipped simulated images for embedding
rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.2 Detection performance on clipped simulated images for embedding
rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.3 Detection performance on real clipped images for embedding rate
R = 0.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.4 Detection performance on real clipped images for embedding rate
R = 0.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.5 Detection performance on real clipped images for embedding rate
R = 0.6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.6 Detection performance on 12-bit images taken by Canon 400D with
ISO 100 from BOSS database for embedding rate R = 0.05. . . . . . 153
8.7 Detection performance on 5000 images from BOSS database for embedding
rate R = 0.05. . . . . . . . . . . . . . . . . . . . . . . . . . . 153
8.8 Empirical false-alarm probability from real images of BOSS database
plotted as a function of decision threshold, compared with theoretical
FAP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
9.1 Detection performance of the test δ*jst based on the proposed model with embedding rate R = 0.05 on the simulated images and real images. 167
9.2 Detection performance of the test δ*jst based on the quantized Laplacian, quantized GG, quantized GΓ, and proposed model on the BOSSBase with embedding rate R = 0.05. . . . . . . . . . . . . . . . . . . . 170
9.3 Detection performance of the test δ*jst based on the quantized Laplacian, quantized GG, quantized GΓ, and proposed model on the subset of 1000 images from the BOSSBase with embedding rate R = 0.05. . 171
9.4 Comparison between the proposed test δ*jst, ZMH-Sym detector, ZP detector, WS detector and quantized Laplacian-based test. . . . . . . 172
9.5 Mean absolute error for all estimators. . . . . . . . . . . . . . . . . . 172
9.6 Mean absolute error for proposed ML estimator, standard WS estimator
and improved WS estimator. . . . . . . . . . . . . . . . . . . . . . . . 173
9.7 Comparison between the proposed test δ*jst, standard WS detector and improved WS detector. . . . . . . . . . . . . . . . . . . . . . . . 174

List of Tables
4.1 Parameter estimation on synthetic images . . . . . . . . . . . . . . . 63
4.2 PSNR of the extended LLMMSE filter . . . . . . . . . . . . . . . . . 64
4.3 χ² test statistics of Laplacian, GG, GΓ, and proposed model for the first 9 quantized coefficients of 3 testing standard images. . . . . . . . 71
5.1 Camera Model Used in Experiments (the symbol * indicates our own
camera) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.2 Detection performance of the proposed detector. . . . . . . . . . . . 93
5.3 Detection performance of PRNU-based detector for ISO 200. . . . . 93
6.1 Camera Model Used in Experiments . . . . . . . . . . . . . . . . . . 112
6.2 Performance of proposed detector . . . . . . . . . . . . . . . . . . . . 112
6.3 Performance of SVM-based detector (the symbol * represents values
smaller than 2%) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
6.4 Performance of PRNU-based detector . . . . . . . . . . . . . . . . . 113
7.1 Camera Model Used in Experiments . . . . . . . . . . . . . . . . . . 129
7.2 Detection performance of proposed detector δ̃*dct (the symbol * represents
values smaller than 2%) . . . . . . . . . . . . . . . . . . . . . 130
7.3 Detection performance of SVM-based detector . . . . . . . . . . . . 130
7.4 Detection performance of PRNU-based detector . . . . . . . . . . . 131
7.5 Detection performance of proposed detector δ*dct . . . . . . . . . . . 131
7.6 Detection performance of proposed detector δ̃*dct on 4 camera models
of BOSS database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
10.1 List of proposed statistical tests. . . . . . . . . . . . . . . . . . . . . 177

List of Abbreviations
Acronym What (it) Stands For
AC Alternating Current.
AUMP Asymptotically Uniformly Most Powerful.
AWGN Additive White Gaussian Noise.
cdf cumulative distribution function.
CCD Charge-Coupled Device.
CFA Color Filter Array.
CLT Central Limit Theorem.
CMOS Complementary Metal-Oxide Semiconductor.
CRF Camera Response Function.
DC Direct Current.
DCT Discrete Cosine Transform.
DSC Digital Still Camera.
DSLR Digital Single Lens Reflex.
EXIF Exchangeable Image File.
GG Generalized Gaussian.
GLR Generalized Likelihood Ratio.
GLRT Generalized Likelihood Ratio Test.
GOF goodness-of-fit.
GΓ Generalized Gamma.
IDCT Inverse Discrete Cosine Transform.
i.i.d independent and identically distributed.
JPEG Joint Photographic Experts Group.
KS Kolmogorov-Smirnov.
LSB Least Significant Bit.
LR Likelihood Ratio.
LRT Likelihood Ratio Test.
LS Least Squares.
mAE Median Absolute Error.
MAE Mean Absolute Error.
MGF Moment-Generating Function
ML Maximum Likelihood.
MM Method of Moments.
MP Most Powerful.
MSE Mean Squared Error.
NP Neyman-Pearson.
PCE Peak to Correlation Energy.
pdf probability density function.
PRNU Photo-Response Non-Uniformity.
RLE Run-Length Encoding.
ROC Receiver Operating Characteristic.
R/T Rounding and Truncation.
SPA Sample Pair Analysis.
SPN Sensor Pattern Noise.
SVM Support Vector Machine.
TIFF Tagged Image File Format.
UMP Uniformly Most Powerful.
WLS Weighted Least Squares.
WS Weighted Stego-image.
ZMH Zero Message Hypothesis.
ZP Zhang and Ping.

Glossary of Notations
Notation Definition
α0 False alarm probability.
β(δ) Power of the test δ.
χ²_K Chi-square distribution with K degrees of freedom.
γ Gamma factor.
δ Statistical test.
η Noise.
κ Quantization step in the spatial domain.
µ Expectation.
ν Bit-depth.
σ Standard deviation.
τ Decision threshold.
ξ Number of collected electrons, which is modeled by Poisson distribution.
ϕ 2-D normalized wavelet scaling function.
φ Probability density function of a standard Gaussian random variable.
∆ Quantization step in the DCT domain.
Λ Likelihood Ratio.
Φ Cumulative distribution function of a standard Gaussian random variable.
Θ Parameter space.
Cov Covariance.
E Mathematical expectation.
P[E] Probability that an event E occurs.
R Set of real numbers.
Var Variance.
Z Set of integer numbers.
B(n, p) Binomial distribution where n is the number of experiments and p is
the success probability of each experiment.
D Denoising filter.
G(α, β) Gamma distribution with shape parameter α and scale parameter β.
H0, H1 Null hypothesis and alternative hypothesis.
I Set of pixel indices.
Kα0 Class of tests whose false alarm probability is upper-bounded by α0.
L Log-likelihood function.
N(µ, σ²) Gaussian distribution with mean µ and variance σ².
P(λ) Poisson distribution with mean λ and variance λ.
Q∆ Quantization with step ∆.
S Source of digital images.
U[a, b] Uniform distribution on the interval [a, b].
Z Set of possible pixel values.
Z^N Image space.
C Cover-image that is used for data hiding.
D Quantized DCT coefficients.
F Fisher information.
HDM Linear filter for demosaicing.
I Unquantized DCT coefficients.
Idn Identity matrix of size n × n.
K PRNU.
M Secret message to be embedded.
PCFA CFA pattern.
S Stego-image that contains hidden data.
Z Natural image.
(a, b) Parameters of heteroscedastic noise model.
(˜a, ˜b) Parameters of generalized noise model.
c Color channel.
(˜c, ˜d) Parameters characterizing the relation between α and β, which are
the parameters of DCT coefficient model.
fCRF Camera response function.
fX(x) Probability density function of a random variable X.
gWB Gain for white-balancing.
pX(x) Probability mass function of a random variable X.
r Change rate, r = R/2.
B Boundary of the dynamic range, B = 2^ν − 1.
L Secret message length.
N Number of pixels in the image Z, N = Nr × Nc if Z is a grayscale
image and N = Nr × Nc × 3 if Z is a full-color image.
Nblk Number of blocks.
Nc Number of columns.
Nr Number of rows.
Ns Number of sources.
Pθ Probability distribution characterized by a parameter vector θ.
QR,θ Probability distribution after embedding at rate R.
R Embedding rate.

Chapter 1
General Introduction
1.1 General Context and Problem Description
Traditionally, images have been considered trustworthy since they were captured by analog acquisition devices to depict real-world events. This traditional trustworthiness is built on the considerable difficulty of modifying image content. Indeed, modifying the content of a film-based photograph requires special skills and dark-room tricks, and is time-consuming and costly. Therefore, such modifications are of limited extent.
In the past decades, we have witnessed the evolution of digital imaging technology
with a dramatic improvement of digital images’ quality. This improvement is not
only due to advances in semiconductor fabrication technology, which make it possible to reduce the pixel size in an image sensor and thus raise the total number of pixels, but also to advances in image processing technology, which reduce the noise introduced in the camera and enhance details of the physical scene. Digital devices have largely replaced their analog counterparts, enabling digital content creation and processing at affordable cost and on a mass scale. Nowadays, digital still cameras
(DSCs) are taking over a major segment of the consumer photography marketplace.
Only at the very high end (large format, professional cameras with interchangeable
and highly adjustable lenses) and very low end (inexpensive automated snapshot
cameras) are traditional film cameras holding their own. Besides, the development
of communication and networking infrastructure allows digital content to be more
accessible. One of the greatest advantage of digital images acquired by DSCs is the
ease of transmission over communication networks, which film cameras are difficult
to enable.
Unfortunately, this path of technological evolution may provide means for malicious
purposes. Digital images can be easily edited, altered or falsified because of the wide availability of low-cost image-editing tools. Consequently, falsified photographs
are appearing with a growing frequency and sophistication. The credibility
and trustworthiness of digital images have been eroded. This is all the more critical when falsified images used as evidence in a courtroom could mislead the judgement and lead either to imprisonment for the innocent or to freedom for the guilty. In general, the falsification of digital images may have serious political, economic, and social consequences.
One example of falsification that causes political issues is given in Figure 1.1.
In the left-hand image, President G.W. Bush and a young child are both reading from America: A Patriotic Primer by Lynne Cheney.

Figure 1.1: Example of falsification. (a) Forged image. (b) Original image.

But if we look closely, it
appears that President Bush is holding his book upside down. An unknown hoaxer
has horizontally and vertically flipped the image on the back of the book in Bush’s
hands. This photo of George Bush holding a picture book the wrong way up during
a visit to a school delighted some opponents of the Republican president, and helped
foster his buffoonish image. But press photos from the event in 2002 revealed that
Mr Bush had been holding the book correctly, i.e. hoaxers had simply used photo
editing software to rotate the cover. The original version of the photo (right-hand image)
was taken in the Summer of 2002 while Bush was visiting George Sanchez Charter
School in Houston. It was distributed by the Associated Press. By comparing the
forged photo and original photo, it can be noted that a dark blue spot is close to the
spine of Bush’s book, but this same spot in the girl’s copy is near the left-hand edge
of the book. This forensic clue can be considered as evidence of forgery. However in
most of the cases, the forgery is not as easy to detect. The human eyes can hardly
differentiate a genuine scene from a deliberately forged scene. Overall, the digital
revolution has raised a number of information security challenges.
To restore the trust to digital images, the field of digital image forensics was
born. Because of importance of information security in many domains, digital image
forensics has attracted a great attention from academic researchers, law enforcement,
security, and intelligence agencies. Conducting forensic analysis is a difficult mission
since forensic analysts need to answer several questions before stating that digital
content is authentic:
1. What is the true origin of this content? How was it generated? By whom was
it taken?
2. Is the image still depicting the captured original scene? Has its content been
altered in some way? How has it been processed?
The first question involves the problem of image origin identification. Source
information of digital images represents useful forensic clues because knowing the
source device that captured the inspected image can facilitate verification or tracing of the device owner as well as of the person who took the picture. This situation is analogous to bullet scratches, which allow forensic analysts to match a bullet to a particular barrel or gun and trace the gun owner.1 Besides, knowing device model or brand information can
help forensic analysts know more about characteristics of acquisition devices, which
leads to a potential improvement of detecting the underlying forgeries that could
be performed in the inspected image. Another issue is to determine what imaging
mechanism has been used to generate the inspected image (e.g. scanners, cell-phone
cameras, or computer graphics) before assuming that the inspected image is taken
by a digital camera, which can significantly narrow down the search range for the
next step of the investigation.
The second problem is image content integrity. An image has to be proven
authentic, and its content shown not to have been forged, before it can be used as forensic clues
or as evidence in a legal context. Determining whether an image is forged, which
manipulation has been performed on the image, or which region of the image has
been altered are fundamental tasks.
Besides some basic manipulations such as adding, splicing, and removal, an image can also be manipulated by embedding a message directly into its content. The message remains secret: it is known only by the sender and the receiver, and an adversary cannot recognize its existence visually. This concept is called
steganography, which is a discipline of the field of information hiding. However, the
concept of steganography has been misused for illegal activities. Detecting existence
of secret messages and revealing their content are also the tasks of forensic analysts.
This task is called steganalysis.
The field of digital image forensics, including steganalysis, is part of an effort to
counter cyber-attacks, which is nowadays one of the strategic priorities for defence and
national security in most countries.
1.2 Outline of the Thesis
The main goal of this thesis is to address information security challenges in the field
of digital image forensics. In particular, the problems of image origin identification
and hidden data detection are studied. The thesis is structured in four main parts.
Apart from the first part providing an overview on the field of digital image forensics
and statistical image modeling, the rest of the thesis involves many contributions.
All the work presented in this thesis is illustrated in Figure 1.2.
Part II establishes a profound statistical modeling of natural images by analyzing
the image processing pipeline of a digital camera, as well as proposes efficient
algorithms for estimation of model parameters from a single image. Typically, the
image processing pipeline is composed of three main stages: RAW image acquisition,
1Evidently, tracing an imaging device owner is more difficult as average users have rights to buy
a camera easily in a market with millions of cameras while the use of guns is banned or controlled
in many countries and a gun user has to register his identity.
Figure 1.2: Structure of the work presented in this thesis.
post-acquisition enhancement, and JPEG compression that employs Discrete Cosine
Transform (DCT). Therefore, the statistical image modeling in Part II is performed
both in the spatial domain and the DCT domain. By modeling the photo-counting
and read-out processes, a RAW image can be accurately characterized by the heteroscedastic
noise model in which the RAW pixel is normally distributed and its
variance is linearly dependent on its expectation. This model is more relevant than
the Additive White Gaussian Noise (AWGN) model widely used in image processing
since the latter ignores the contribution of Poisson noise in the RAW image
acquisition stage. The RAW image then undergoes post-acquisition processes in
order to provide a high-quality full-color image, referred to as TIFF image. Therefore,
to study image statistics in a TIFF image, it is proposed to start from the
heteroscedastic noise model and take into account the non-linear effect of gamma correction, resulting in a generalized noise model. The latter involves a non-linear relation between a pixel's expectation and its variance. Such a generalized noise model had not previously been proposed in the literature. Overall, the study of noise statistics in the
spatial domain indicates the non-stationarity of noise in a natural image, i.e. pixel’s
variance is dependent on the expectation rather than being constant in the whole
image. Besides, pixels’ expectations, namely the image content, are also heterogeneous.
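For reference, a compact sketch of the spatial-domain model just described (using the parameters (a, b) of the heteroscedastic model listed in the glossary; the precise formulations are developed in Chapter 4):

\[
  z(m,n) \sim \mathcal{N}\bigl(\mu(m,n),\; a\,\mu(m,n) + b\bigr)
  \qquad \text{(heteroscedastic model for a RAW pixel)},
\]

whereas the AWGN model assumes a constant variance independent of the expectation µ(m,n), and the generalized noise model for TIFF/JPEG images replaces the linear relation a µ + b by a non-linear function of the expectation induced by gamma correction.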
Apart from studying image statistics in the spatial domain, it is proposed to
study DCT coefficient statistics. Modeling the distribution of DCT coefficients is
not a trivial task due to heterogeneity in the natural image and complexity of image
statistics. It is worth noting that most of existing models of DCT coefficients, which
are only verified by conducting the goodness-of-fit test with empirical data, are given
without a mathematical justification. Instead, this thesis provides a mathematical
framework of modeling the statistical distribution of DCT coefficients by relying on
the double stochastic model that combines the statistics of DCT coefficients in a
block whose variance is constant with the variability of block variance in a natural
image. The proposed model of DCT coefficients outperforms the others including
the Laplacian, Generalized Gaussian, and Generalized Gamma models. Numerical
results on simulated database and real image database highlight the relevance of the
proposed models and the accuracy of the proposed estimation algorithms.
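A purely illustrative simulation of this doubly stochastic construction is sketched below; the Gamma law used here for the block variances is an arbitrary assumption made for the illustration, not the distribution derived in Chapter 4, but it suffices to show that mixing Gaussian blocks of varying variance yields the heavy-tailed, non-Gaussian marginal that DCT coefficients exhibit.

import numpy as np

rng = np.random.default_rng(0)
n_blocks, coeffs_per_block = 5000, 63   # 63 AC coefficients per 8x8 block

# Step 1: the variance is constant inside a block but varies from block to block
# (a Gamma law is assumed here purely for illustration).
block_var = rng.gamma(shape=1.5, scale=4.0, size=n_blocks)

# Step 2: inside each block the coefficients are zero-mean Gaussian.
coeffs = rng.normal(0.0, np.sqrt(block_var)[:, None],
                    size=(n_blocks, coeffs_per_block)).ravel()

# The resulting marginal is a Gaussian scale mixture: positive excess kurtosis,
# i.e. heavier tails than a single Gaussian of matched variance.
excess_kurtosis = ((coeffs - coeffs.mean()) ** 4).mean() / coeffs.var() ** 2 - 3
print("variance:", round(float(coeffs.var()), 2),
      "excess kurtosis:", round(float(excess_kurtosis), 2))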
The solid foundation established in Part II emphasizes several aspects of interest
for application in digital image forensics. Relying on a more relevant image
model and an accurate estimation of model parameters, the detector is expected
to achieve better detection performance. Part III addresses the problem of image
origin identification within the framework of hypothesis testing theory. More particularly,
it involves designing a statistical test for camera model identification, based on a parametric image model, that meets two optimality criteria: guaranteeing a prescribed false alarm probability and maximizing the correct detection
probability. Camera model identification based on the heteroscedastic noise model,
generalized noise model, and DCT coefficients is respectively presented in Chapter
5, Chapter 6, and Chapter 7. The model parameters are exploited as a unique fingerprint for camera model identification. In general, the procedure in
those chapters is similar. It starts by formally casting the problem of camera model identification into the hypothesis testing framework. According to the
Neyman-Pearson lemma, the most powerful test for the decision problem is given
by the Likelihood Ratio Test (LRT). The statistical performance of the LRT can be
analytically established. Moreover, the LRT can meet the two required criteria of
optimality. However, this test is only of theoretical interest because it is based on
an assumption that all model parameters are known in advance. This assumption
is hardly met in practice. To deal with the difficulty of unknown parameters, a
Generalized Likelihood Ratio Test (GLRT) is proposed. The GLRT is designed by
replacing unknown parameters by their Maximum Likelihood (ML) estimates in the
Likelihood Ratio. Consequently, the detection performance of the GLRT strongly
depends on the accuracy of employed image model and parameter estimation. It is
shown in Chapter 5, 6, and 7 that the proposed GLRTs can warrant a prescribed
false alarm probability while ensuring a high detection performance. Moreover, the
efficiency of the proposed GLRTs is highlighted when they are applied to a large image
database.
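Written generically (a sketch of the standard definitions only, with θ0 and θ1 the parameter vectors under the two hypotheses and τ the threshold fixed by the prescribed false alarm probability; the exact statistics used in Chapters 5–7 are derived from the image models of Part II):

\[
  \Lambda(z) = \frac{p_{\theta_1}(z)}{p_{\theta_0}(z)}
  \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \tau
  \qquad \text{(LRT, all parameters known)},
\]
\[
  \widehat{\Lambda}(z) = \frac{\sup_{\theta_1} p_{\theta_1}(z)}{\sup_{\theta_0} p_{\theta_0}(z)}
  \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \tau
  \qquad \text{(GLRT, unknown parameters replaced by their ML estimates)}.
\]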
The problem of hidden data detection is addressed in Part IV. This problem
is also formulated into hypothesis testing framework. The main idea is to rely
on an accurate image model to detect small changes in statistical properties of a
cover image due to message embedding. The formulation in the hypothesis testing
framework allows us to design a test that can meet two above criteria of optimality.
Chapter 8 addresses the steganalysis of Least Significant Bit (LSB) replacement
technique in RAW images. More specifically, the phenomenon of clipping is studied and taken into account in the design of the statistical test. This phenomenon is due to the limited dynamic range of the imaging system. The impact of the clipping
phenomenon on the detection performance of steganalysis methods has not been
studied yet in the literature. The approach proposed in Chapter 8 is based on the
heteroscedastic noise model instead of the AWGN model. Besides, the approach
proposes to exploit the state-of-the-art denoising method to improve the estimation
of pixels' expectation and variance. The detection performance of the proposed
GLRTs on non-clipped images and clipped images is studied. It is shown that
the proposed GLRTs can warrant a prescribed false alarm probability and achieve
a high detection performance while other detectors fail in practice, especially the
Asymptotically Uniformly Most Powerful (AUMP) test. Next, Chapter 9 addresses
the steganalysis of Jsteg algorithm. It should be noted that Jsteg algorithm is a
variant of LSB replacement technique. Instead of embedding message bits in the
spatial domain, Jsteg algorithm utilizes the LSB of quantized DCT coefficients and
embeds message bits in the DCT domain. The goal of Chapter 9 is to exploit the
state-of-the-art model of quantized DCT coefficients in Chapter 4 to design a LRT
for the steganalysis of Jsteg algorithm. For the practical use, unknown parameters
of the DCT coefficient model are replaced by their ML estimates in the Likelihood
Ratio. Experiments on simulated database and real image database show a very
small loss of power of the proposed test. Furthermore, the proposed test outperforms
other existing detectors. Further contributions of Chapter 9 are a Maximum Likelihood estimator of the embedding rate, built on the proposed model of DCT coefficients, and an improvement of the existing Weighted Stego-image estimator obtained by modifying the way the weights are calculated.
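As a minimal sketch of the generic LSB replacement operation at an embedding rate R (the same mechanism that Jsteg applies to quantized DCT coefficients rather than to pixels; this toy function is not the embedding simulator used in the experiments):

import numpy as np

def lsb_replace(cover, rate, rng=None):
    """Overwrite the LSB of a random fraction `rate` of the samples with message bits."""
    rng = rng if rng is not None else np.random.default_rng()
    stego = cover.ravel().copy()
    idx = rng.choice(stego.size, size=int(rate * stego.size), replace=False)
    bits = rng.integers(0, 2, size=idx.size)      # pseudo-random message bits
    stego[idx] = (stego[idx] // 2) * 2 + bits     # replace the least significant bit
    return stego.reshape(cover.shape)

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64))
stego = lsb_replace(cover, rate=0.05)
print("samples actually modified:", int(np.count_nonzero(cover != stego)))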
1.3 Publications and Authors’ Contribution
Most of the material presented in this thesis appears in the following publications
that represent original work, of which the author has been the main contributor.
Patents
1. T. H. Thai, R. Cogranne, and F. Retraint, "Système d’identification d’un
modèle d’appareil photographique associé à une image compressée au format
JPEG, procédé, utilisations et applications associés", PS/B52545/FR, 2014.
2. T. H. Thai, R. Cogranne, and F. Retraint, "Système d’identification d’un
modèle d’appareil photographique associé à une image compressée au format
JPEG, procédé, utilisations et applications associés", PS/B52546/FR, 2014.
Journal articles
1. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification
based on the heteroscedastic noise model", IEEE Transactions on Image Processing,
vol. 23, no. 1, pp. 250-263, Jan. 2014.
2. T. H. Thai, F. Retraint, and R. Cogranne, "Statistical detection of data
hidden in least significant bits of clipped images", Elsevier Signal Processing,
vol. 98, pp. 263-274, May 2014.
3. T. H. Thai, R. Cogranne, and F. Retraint, "Statistical model of quantized
DCT coefficients: application in the steganalysis of Jsteg algorithm", IEEE
Transactions on Image Processing, vol. 23, no. 5, pp. 1980-1993, May 2014.
Journal articles under review
1. T. H. Thai, F. Retraint, and R. Cogranne, "Generalized signal-dependent
noise model and parameter estimation for natural images", 2014.
2. T. H. Thai, F. Retraint, and R. Cogranne, "Camera model identification
based on the generalized noise model in natural images", 2014.
3. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification
based on DCT coefficient statistics", 2014.
Conference papers
1. T. H. Thai, F. Retraint, and R. Cogranne, "Statistical model of natural
images", in IEEE International Conference on Image Processing, pp. 2525-
2528, Sep. 2012.
2. T. H. Thai, R. Cogranne, and F. Retraint, "Camera model identification
based on hypothesis testing theory", in European Signal Processing Conference,
pp. 1747-1751, Aug. 2012.
3. T. H. Thai, R. Cogranne, and F. Retraint, "Steganalysis of Jsteg algorithm
based on a novel statistical model of quantized DCT coefficients", in IEEE
International Conference on Image Processing, pp. 4427-4431, Sep. 2013.
4. R. Cogranne, T. H. Thai, and F. Retraint, "Asymptotically optimal detection
of LSB matching data hiding", in IEEE International Conference on Image
Processing, pp. 4437-4441, Sep. 2013.
5. T. H. Thai, R. Cogranne, and F. Retraint, "Optimal detector for camera
model identification based on an accurate model of DCT coefficient", in IEEE
International Workshop on Multimedia Signal Processing (in press), Sep. 2014.
6. T. H. Thai, R. Cogranne, and F. Retraint, "Optimal detection of OutGuess
using an accurate model of DCT coefficients", in IEEE International Workshop
on Information Forensics and Security (in press), Dec. 2014.

Part I
Overview on Digital Image
Forensics and Statistical Image
Modeling

Chapter 2
Overview on Digital Image
Forensics
Contents
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Image Processing Pipeline of a Digital Camera . . . . . . . 12
2.2.1 RAW Image Formation . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Post-Acquisition Processing . . . . . . . . . . . . . . . . . . . 15
2.2.3 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Passive Image Origin Identification . . . . . . . . . . . . . . . 19
2.3.1 Lens Aberration . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.2 Sensor Imperfections . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.3 CFA Pattern and Interpolation . . . . . . . . . . . . . . . . . 25
2.3.4 Image Compression . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4 Passive Image Forgery Detection . . . . . . . . . . . . . . . . 27
2.5 Steganography and Steganalysis in Digital Images . . . . . 29
2.5.1 LSB Replacement Paradigm and Jsteg Algorithm . . . . . . . 32
2.5.2 Steganalysis of LSB Replacement in Spatial Domain . . . . . 33
2.5.2.1 Structural Detectors . . . . . . . . . . . . . . . . . . 33
2.5.2.2 WS Detectors . . . . . . . . . . . . . . . . . . . . . 35
2.5.2.3 Statistical Detectors . . . . . . . . . . . . . . . . . . 36
2.5.2.4 Universal Classifiers . . . . . . . . . . . . . . . . . . 37
2.5.3 Steganalysis of Jsteg Algorithm . . . . . . . . . . . . . . . . . 38
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.1 Introduction
The goal of this chapter is to provide an overview on the field of digital image
forensics. As described in Section 1.1, digital image forensics involves two key problems:
image origin identification and image forgery detection. In general, there are
two approaches to address these problems. Active forensics aims to authenticate
image content by generating extrinsic security measures, such as digital watermarks [1–6] and digital signatures [7–10], and adding them to the image file. These
security measures are referred to as extrinsic fingerprints. Although active forensics
can provide powerful tools to secure a digital camera and restore the credibility of
digital images, it is of limited extent due to many strict constraints in its protocols.
In order to solve these problems in their entirety, passive forensics has evolved quickly. In contrast to active forensics, passive forensics does not impose any constraint and does not require any prior information, including the original reference image. Forensic analysts have only the suspect image at their disposal and
must explore useful information from the image to gather forensic evidence, trace
the acquisition device and detect any act of manipulation. Passive forensics works
on the assumption that the image contains internal traces left by the camera.
Every stage from real-world scene acquisition to image storage can provide
clues for forensic analysis. These internal traces are called intrinsic fingerprints.
Extrinsic and intrinsic fingerprints are two forms of digital fingerprints in digital
forensics, which are analogous to human fingerprints in the criminal domain. Since passive forensics requires neither external security measures generated in the digital camera nor any prior information, it can authenticate an image in a
blind manner and can widely be applied to millions of images that circulate daily
on communication networks.
This thesis mainly addresses the problem of origin identification and integrity
based on the passive approach. The chapter is organized as follows. Before discussing
active and passive forensics, it is vital to understand deeply the creation and characteristics
of digital images. Section 2.2 briefly introduces the typical image processing
pipeline of a digital camera, highlighting several aspects of potential interest for applications
in digital image forensics. Section 2.3 analyzes passive methods proposed
for image origin identification. Section 2.4 briefly discusses passive methods for image
forgery detection. Next, Section 2.5 introduces the concept of steganography,
which is a type of image content manipulation, and presents prior-art methods for
detecting secret data embedded in digital images. Finally, Section 2.6 concludes the
chapter.
2.2 Image Processing Pipeline of a Digital Camera
This thesis only deals with DSCs and the digital images they acquire. Throughout, a natural image means a digital image acquired by a DSC. Other sources
of digitized images such as scanners are not addressed in this thesis but a similar
methodology can be easily derived.
The image processing pipeline involves several steps, from light capture to image storage, performed inside a digital camera [11]. After the light intensity has been measured at each pixel, the RAW image, which contains exactly the information recorded by the image sensor, goes through some typical post-acquisition processes, e.g. demosaicing, white balancing and gamma correction, to render a full-color, high-quality image, referred to as the TIFF image. Image compression can also be performed for ease of storage and transmission. The image processing pipeline of a digital camera is shown in Figure 2.1.

Figure 2.1: Image processing pipeline of a digital camera.

It should be noted that the sequence of operations differs from manufacturer
to manufacturer but basic operations remain similar. In general, the image
processing pipeline designed in a digital camera is complex, with trade-offs in the
use of buffer memory, computing operations, image quality and flexibility [12]. This
section only discusses some common image processing operations such as demosaicing,
white balancing, gamma correction and image compression. Other processing
operations, e.g. camera noise reduction and edge enhancement, are not included in
this discussion.
A full-color digital image consists of three primary color components: red, green,
and blue. These three color components are sufficient to represent millions of colors.
Formally, the full-color image of a DSC can be represented as a three-dimensional
matrix of size Nr × Nc × 3 where Nr and Nc are respectively the number of rows
and columns. Let c ∈ {R, G, B} denote a color channel where R, G and B stand
for respectively the red, green and blue color. Typically, the output image is coded
with ν bits and each pixel value is a natural integer. The set of possible pixel values
is denoted by Z = {0, 1, . . . , B} with B = 2^ν − 1. Therefore, an arbitrary image belongs to the finite image space Z^N with N = Nr × Nc × 3. In general, the image space Z^N is high dimensional because of the large number of pixels. To facilitate discussions, let Z denote an image in RAW format and Z̃ denote an image in TIFF or JPEG format. Each color component of the image Z is denoted by Z^c and a pixel of the color channel c at the location (m, n) is denoted by z^c(m, n), 1 ≤ m ≤ Nr, 1 ≤ n ≤ Nc.
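As a small illustration of this representation (an 8-bit example, so ν = 8 and pixel values lie in {0, . . . , 255}; the variable names below are arbitrary):

import numpy as np

nu, Nr, Nc = 8, 4, 6                      # bit-depth and image size (toy values)
B = 2 ** nu - 1                           # largest admissible pixel value
Z_full = np.random.default_rng(0).integers(0, B + 1, size=(Nr, Nc, 3))  # full-color image
z_red_11 = Z_full[0, 0, 0]                # pixel z^R(1, 1) of the red channel
print(Z_full.shape, B, int(z_red_11))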
2.2.1 RAW Image Formation
Typically, a digital camera includes an optical sub-system (e.g. lenses), an image
sensor and an electronic sub-system, which can be regarded as the eye, retina, and
brain in the human visual system. The optical sub-system allows to attenuate
effects of infrared rays and to provide an initial optic image. The image sensor
consists of a two-dimensional arrays of photodiodes (or pixels) fabricated on a silicon14 Chapter 2. Overview on Digital Image Forensics
Figure 2.2: Sample color filter arrays.
wafer. The two common types of image sensor are the Charge-Coupled Device (CCD)
and the Complementary Metal-Oxide Semiconductor (CMOS). Each pixel converts
light energy into electrical energy. The output signals of the image sensor are
analog. These signals are then converted to digital signals by an analog-to-digital
(A/D) converter inside the camera. The RAW image is obtained at this stage.
Depending on the analog-to-digital circuit of the camera, the RAW image is recorded
with 12, 14 or even 16 bits. One key advantage is that the RAW image contains
exactly the information recorded by the image sensor and has not yet undergone
any post-acquisition operation. This offers more flexibility for further adjustments.
Although the image sensor is sensitive to light intensity, it does not differentiate
light wavelengths. Therefore, to record a color image, a Color Filter Array (CFA)
is overlaid on the image sensor. Each pixel records a limited range of wavelengths,
corresponding to either red, green or blue. Some examples of CFA patterns are
shown in Figure 2.2. Among available CFA patterns, the Bayer pattern is the most
popular. It contains twice as many green as red or blue samples because the human
eye is more sensitive to green light than to red or blue light. The higher sampling
rate of the green component captures the luminance component of light better and
thus provides better image quality. Only a few digital cameras acquire full-resolution
information for all three color components (e.g. Sigma SD9 or Polaroid x530). This
is not only due to the high production cost but also due to the requirement of
perfectly aligning the three color planes.
Let $Z$ represent the RAW image recorded by the image sensor. Because of
the CFA sampling, the RAW image $Z$ is a single-channel image, i.e. it is
represented as a two-dimensional matrix of size $N_r \times N_c$. Each pixel value of the
RAW image $Z$ corresponds to only one color channel. For subsequent processing
operations, each color component is extracted from the RAW image $Z$. A pixel of
each color component is given by
$$z^c(m, n) = \begin{cases} z(m, n) & \text{if } P_{\mathrm{CFA}}(m, n) = c \\ 0 & \text{otherwise,} \end{cases} \quad (2.1)$$
where $P_{\mathrm{CFA}}$ is the CFA pattern.
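As an illustration of Equation (2.1), the following Python sketch splits a RAW frame into three sparse color planes under an assumed 2 × 2 Bayer layout; the pattern, array shapes and function name are illustrative assumptions, not the layout of any particular camera.

    import numpy as np

    # Illustrative sketch of equation (2.1): keep the sensor value only where
    # the assumed CFA pattern matches the requested color, zero elsewhere.
    BAYER = np.array([['G', 'R'],
                      ['B', 'G']])      # hypothetical 2x2 CFA layout

    def split_cfa(raw):
        rows, cols = np.indices(raw.shape)
        cfa = BAYER[rows % 2, cols % 2]          # CFA color at every pixel location
        planes = {}
        for c in ('R', 'G', 'B'):
            plane = np.zeros_like(raw, dtype=float)
            mask = (cfa == c)
            plane[mask] = raw[mask]              # z^c(m, n) = z(m, n) if P_CFA(m, n) = c
            planes[c] = plane
        return planes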
The RAW image acquisition stage is not ideal due to degradation introduced
by several noise sources. This stage involves two predominant random noise sources.
The first comprises the Poisson-distributed noise associated with the stochastic nature of
the photon-counting process (namely shot noise) and the dark current generated by
thermal energy in the absence of light. Dark current is also referred to as Fixed
Pattern Noise (FPN). While shot noise results from the quantum nature of light
and cannot be eliminated, dark current can be subtracted from the image [13].
The second random noise source accounts for all remaining electronic noises involved
in the acquisition chain, e.g. read-out noise, which can be modeled by a zero-mean
Gaussian distribution. Apart from random noises, there is also a multiplicative
noise associated with the sensor pattern. This noise accounts for differences in the response
of the pixels to the incident light due to imperfections during the manufacturing process
and inhomogeneity of silicon wafers. Therefore, this noise is referred to as Photo-Response
Non-Uniformity (PRNU). The PRNU, which is typically small compared
with the signal, is a deterministic component present in every image. FPN
and PRNU are the two main components of the Sensor Pattern Noise (SPN). The PRNU
is unique to each sensor, thus it can be further used for forensic analysis.
2.2.2 Post-Acquisition Processing
Although the use of a CFA reduces the cost of the camera, it requires estimating
the missing color values at each pixel location in order to render a full-color
image. In other words, all the zero values in the sub-images need to be interpolated.
This estimation process is commonly referred to as CFA demosaicing or CFA interpolation [14].
Technically, demosaicing algorithms estimate a missing pixel value by
using its neighborhood information. The performance of CFA demosaicing greatly affects
the image quality. Demosaicing algorithms can generally be classified into
two categories: non-adaptive and adaptive algorithms. Non-adaptive algorithms
apply the same interpolation technique to all pixels. The nearest neighbor, bilinear,
bicubic, and smooth hue interpolations are typical examples in this category.
For example, the bilinear interpolation can be written as a linear filtering
$$Z^c_{\mathrm{DM}} = H^c_{\mathrm{DM}} \circledast Z^c, \quad (2.2)$$
where $\circledast$ denotes the two-dimensional convolution, $Z^c_{\mathrm{DM}}$ stands for the demosaiced
image of the color channel $c$, and $H^c_{\mathrm{DM}}$ is the linear filter for the color channel $c$
$$H^G_{\mathrm{DM}} = \frac{1}{4}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 4 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad H^R_{\mathrm{DM}} = H^B_{\mathrm{DM}} = \frac{1}{4}\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix}. \quad (2.3)$$
Although non-adaptive algorithms can provide satisfactory results in smooth regions
of an image, they usually fail in textured regions and around edges. Therefore, adaptive
algorithms, which are more computationally intensive, employ edge information or
inter-channel correlation to find an appropriate set of coefficients that minimizes the
overall interpolation error. Because CFA interpolation estimates a
missing pixel value from its neighbors, it creates a correlation between adjacent
pixels. This spatial correlation may be amplified during subsequent processing
stages.
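The bilinear interpolation of Equations (2.2)–(2.3) amounts to convolving each sparse color plane with a small kernel. The sketch below, which assumes the sparse planes produced above and uses scipy.ndimage.convolve as the convolution operator, is a minimal illustration rather than a camera's actual demosaicing routine.

    import numpy as np
    from scipy.ndimage import convolve

    # Linear filters of equation (2.3); the green kernel averages the 4-neighbourhood,
    # the red/blue kernel the 8-neighbourhood, with the centre weight preserving
    # the values that were sampled directly.
    H_G  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 4.0
    H_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 4.0

    def demosaic_bilinear(planes):
        return {'R': convolve(planes['R'], H_RB, mode='mirror'),
                'G': convolve(planes['G'], H_G,  mode='mirror'),
                'B': convolve(planes['B'], H_RB, mode='mirror')}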
Furthermore, to improve the visual quality, the RAW image needs to go through
further processing steps, e.g. white balancing [11]. In fact, an object may appear
different in color when it is illuminated under different light sources. This is due to
the difference in color temperature of the light sources, which shifts the reflection
spectrum of the object away from the true color. In other words, when a white
object is illuminated under a light source with a low color temperature, the reflection
becomes reddish. On the other hand, a light source with a high color temperature can
cause the white object to appear bluish. The human visual system can recognize
the white color of the white object under different light sources. This phenomenon
is called color constancy. A digital camera, however, does not have the benefit of
millions of years of evolution enjoyed by the human visual system. Therefore, a white balance
adjustment is implemented in the digital camera to compensate for this illumination
imbalance so that a captured white object is rendered white in the image. Basically,
white balance adjustment is performed by multiplying the pixels of each color channel
by a different gain factor. For instance, one classical white balancing algorithm is
the Gray World algorithm, which assumes that the three color channels average
to a common gray value
$$\bar{z}^R_{\mathrm{DM}} = \bar{z}^G_{\mathrm{DM}} = \bar{z}^B_{\mathrm{DM}}, \quad (2.4)$$
where $\bar{z}^c_{\mathrm{DM}}$ denotes the average intensity of the demosaiced image $Z^c_{\mathrm{DM}}$
$$\bar{z}^c_{\mathrm{DM}} = \frac{1}{N_r \cdot N_c} \sum_{m=1}^{N_r} \sum_{n=1}^{N_c} z^c_{\mathrm{DM}}(m, n). \quad (2.5)$$
In this algorithm, the green channel is kept fixed because the human eye is more sensitive to
this channel (i.e. $g^G_{\mathrm{WB}} = 1$). The gain factors for the other color channels are given by
$$g^R_{\mathrm{WB}} = \frac{\bar{z}^G_{\mathrm{DM}}}{\bar{z}^R_{\mathrm{DM}}}, \quad \text{and} \quad g^B_{\mathrm{WB}} = \frac{\bar{z}^G_{\mathrm{DM}}}{\bar{z}^B_{\mathrm{DM}}}, \quad (2.6)$$
where $g^c_{\mathrm{WB}}$ denotes the gain factor of the color channel $c$ for white balance adjustment.
Therefore, the white-balanced image $Z^c_{\mathrm{WB}}$ is simply given by
$$Z^c_{\mathrm{WB}} = g^c_{\mathrm{WB}} \cdot Z^c_{\mathrm{DM}}. \quad (2.7)$$
Other white-balancing algorithms may also be designed using different gain factors.
In practice, white balance adjustment is a difficult task because appropriate gain
factors must be estimated or selected to correct for the illumination imbalance. Prior
knowledge of the light source is critical for the camera to select appropriate gain
factors. Therefore, settings for some typical light sources such as daylight, incandescent
or fluorescent are stored in the camera. White balancing can be performed
automatically in the camera, and some high-end cameras offer preprogrammed or
manual white balance to adapt correctly to the illumination conditions.
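A minimal sketch of the Gray World adjustment of Equations (2.4)–(2.7) is given below; it assumes a demosaiced image stored as an Nr × Nc × 3 array in R, G, B order, and the function name is illustrative.

    import numpy as np

    # Gray World white balancing: the green channel is left unchanged and the
    # red/blue gains are ratios of channel means, equations (2.5)-(2.6).
    def gray_world(rgb):
        means = rgb.reshape(-1, 3).mean(axis=0)          # average intensity per channel
        gains = np.array([means[1] / means[0],           # g_WB^R
                          1.0,                           # g_WB^G
                          means[1] / means[2]])          # g_WB^B
        return rgb * gains                               # Z_WB^c = g_WB^c * Z_DM^c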
Generally, the pixel intensity is still linear with respect to the scene intensity before
gamma correction [11, 12]. However, most displays have non-linear characteristics.
The transfer function of these devices can be fairly well approximated by a simple power
function that relates the luminance $L$ to the voltage $V$
$$L = V^{\gamma}. \quad (2.8)$$
Figure 2.3: JPEG compression chain.
Typically, $\gamma = 2.2$. To compensate for this effect and render the luminance in a
perceptually uniform domain, gamma correction is performed in the image processing
pipeline. Gamma correction is roughly the inverse of Equation (2.8), applied to
each input pixel value
$$z^c_{\mathrm{GM}}(m, n) = \left(z^c_{\mathrm{WB}}(m, n)\right)^{\frac{1}{\gamma}}. \quad (2.9)$$
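Equation (2.9) can be applied directly to a white-balanced image; the sketch below assumes pixel values normalized to [0, 1] and the usual display value gamma = 2.2.

    import numpy as np

    # Gamma correction, equation (2.9): raise each pixel value to the power 1/gamma.
    def gamma_correct(img, gamma=2.2):
        return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)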
After going through all post-acquisition processes, a full-color, high-quality image,
referred to as the TIFF image, is rendered. For the sake of simplicity, let $\tilde{Z}_{\mathrm{TIFF}}$ denote
the final full-color TIFF image.
2.2.3 Image Compression
The TIFF format is not convenient for storage or transmission. Therefore, most
digital cameras apply lossy compression algorithms to reduce the size of the image
data. Lossy compression algorithms discard information that is not
visually significant. They are irreversible: the image reconstructed from the
compressed data is not identical to the original image. Moreover, the use of a lossy
compression algorithm is a balancing act between storage size and image quality.
An image compressed with a high compression factor requires little storage space,
but it will probably be reconstructed with poor quality.
Although many lossy compression algorithms have been proposed, most manufacturers
predominantly use JPEG compression. The JPEG compression scheme
consists of three fundamental settings: color space, subsampling technique, and
quantization table. Even though a standard implementation was proposed by the Independent
JPEG Group [15], manufacturers typically design their own compression
scheme to obtain an optimal trade-off between image quality and file size. The fundamental steps of
a typical JPEG compression chain are shown in Figure 2.3.
The JPEG compression scheme works in a different color space, typically the
YCbCr color space, rather than the RGB color space. The transformation to the
YCbCr color space reduces correlations among the red, green and blue components
and allows for more efficient compression. The channel Y represents the luminance of
a pixel, and the channels Cb and Cr represent the chrominance. Each channel Y, Cb
and Cr is processed separately. In addition, the channels Cb and Cr are commonly
subsampled by a factor of 2 horizontally and vertically. The transformation from
the RGB color space to the YCbCr color space is linear
$$\begin{pmatrix} Y \\ C_b \\ C_r \end{pmatrix} = \begin{pmatrix} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.5 \\ 0.5 & -0.419 & -0.081 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} + \begin{pmatrix} 0 \\ 128 \\ 128 \end{pmatrix}. \quad (2.10)$$
To avoid introducing too many symbols, let $\tilde{Z}_{\mathrm{TIFF}}$ also denote the image obtained
after this transformation.
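The color space conversion of Equation (2.10) is a pixel-wise affine map; a minimal sketch, assuming an Nr × Nc × 3 array in R, G, B order, is given below.

    import numpy as np

    # RGB -> YCbCr conversion of equation (2.10).
    M_YCBCR = np.array([[ 0.299,  0.587,  0.114],
                        [-0.169, -0.331,  0.5  ],
                        [ 0.5,   -0.419, -0.081]])

    def rgb_to_ycbcr(rgb):
        return rgb @ M_YCBCR.T + np.array([0.0, 128.0, 128.0])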
The JPEG compression algorithm consists of two key steps: the Discrete Cosine
Transform (DCT) and quantization. It works separately on each $8 \times 8$ block of a
color component. The DCT operation converts pixel values from the spatial domain
into transform coefficients
$$I(u, v) = \frac{1}{4} T_u T_v \sum_{m=0}^{7} \sum_{n=0}^{7} \tilde{z}_{\mathrm{TIFF}}(m, n) \cos\frac{(2m + 1)u\pi}{16} \cos\frac{(2n + 1)v\pi}{16}, \quad (2.11)$$
where $\tilde{z}_{\mathrm{TIFF}}(m, n)$ is a pixel in an $8 \times 8$ block, $0 \le m, n \le 7$, $I(u, v)$ denotes the
two-dimensional DCT coefficient, $0 \le u, v \le 7$, and $T_u$ is the normalized weight
$$T_u = \begin{cases} \frac{1}{\sqrt{2}} & \text{for } u = 0 \\ 1 & \text{for } u > 0. \end{cases} \quad (2.12)$$
The index of the color channel Y, Cb, or Cr is omitted for simplicity as each color
channel is processed separately. The coefficient at location $(0, 0)$, called the Direct
Current (DC) coefficient, represents the mean value of the pixels in the $8 \times 8$ block.
The remaining 63 coefficients are called the Alternating Current (AC) coefficients.
The DCT is known as a sub-optimal transform with two important properties: energy
compaction and decorrelation. In a natural image, the majority of the energy tends
to be concentrated in the low frequencies (i.e. the upper left corner of the $8 \times 8$ grid)
while the high frequencies contain information that is not visually significant.
Then, the DCT coefficients go through the quantization process. The quantization
is carried out by simply dividing each coefficient by the corresponding quantization
step and rounding to the nearest integer
$$D(u, v) = \mathrm{round}\left(\frac{I(u, v)}{\Delta(u, v)}\right), \quad (2.13)$$
where $D(u, v)$ is the quantized DCT coefficient and $\Delta(u, v)$ is the corresponding
element in the $8 \times 8$ quantization table $\Delta$. The quantization table is designed
differently for each color channel. The quantization is irreversible, which makes it
impossible to recover the original image exactly. The final processing
step is entropy coding, which is a lossless process. It arranges the quantized DCT
coefficients in the zig-zag order and then employs the Run-Length Encoding
(RLE) algorithm and Huffman coding. This step is perfectly reversible.
The JPEG decompression works in the reverse order: entropy decoding, dequantization,
and Inverse DCT (IDCT). When the image is decompressed, the entropy code is
first decoded, yielding the two-dimensional quantized DCT coefficients. The dequantization
is performed by multiplying the quantized DCT coefficient $D(u, v)$ by the
corresponding quantization step $\Delta(u, v)$
$$I(u, v) = \Delta(u, v) \cdot D(u, v), \quad (2.14)$$
where $I(u, v)$ stands for the dequantized DCT coefficient. The IDCT operation is
applied to the dequantized DCT coefficients to return to the spatial domain
$$\tilde{z}_{\mathrm{IDCT}}(m, n) = \sum_{u=0}^{7} \sum_{v=0}^{7} \frac{1}{4} T_u T_v I(u, v) \cos\frac{(2m + 1)u\pi}{16} \cos\frac{(2n + 1)v\pi}{16}. \quad (2.15)$$
After upsampling the color components and transforming back into the RGB color space, the
values are rounded to the nearest integers and truncated to a finite dynamic range
(typically $[0, 255]$)
$$\tilde{z}_{\mathrm{JPEG}}(m, n) = \mathrm{trunc}\left(\mathrm{round}\left(\tilde{z}_{\mathrm{IDCT}}(m, n)\right)\right), \quad (2.16)$$
where $\tilde{z}_{\mathrm{JPEG}}(m, n)$ is the final decompressed JPEG pixel. In general, the JPEG pixel
$\tilde{z}_{\mathrm{JPEG}}(m, n)$ differs from the original TIFF pixel $\tilde{z}_{\mathrm{TIFF}}(m, n)$ because of the quantization,
rounding and truncation (R/T) errors in the process. Note that in this image
processing pipeline, R/T errors are taken into account only once for the sake of
simplification.
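Equations (2.11)–(2.16) can be condensed into a short sketch operating on a single 8 × 8 block. The orthonormal DCT of scipy.fft reproduces the 1/4 TuTv normalization, and the flat quantization table below is a placeholder: real cameras ship their own tables, and the level shift and entropy coding of the full JPEG chain are omitted here.

    import numpy as np
    from scipy.fft import dctn, idctn

    Q = np.full((8, 8), 16.0)     # hypothetical uniform quantization table Delta(u, v)

    def jpeg_block_roundtrip(block, q=Q):
        I = dctn(block.astype(float), norm='ortho')   # DCT coefficients I(u, v), eq. (2.11)
        D = np.round(I / q)                           # quantized coefficients D(u, v), eq. (2.13)
        I_hat = q * D                                 # dequantization, eq. (2.14)
        rec = idctn(I_hat, norm='ortho')              # inverse DCT, eq. (2.15)
        rec = np.clip(np.round(rec), 0, 255)          # rounding and truncation, eq. (2.16)
        return D, rec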
Because JPEG compression works separately on each $8 \times 8$ block, it generates
discontinuities across the block boundaries, which are known as blocking
artifacts [16]. The blocking artifacts are more severe when the quantization steps
are coarser. Moreover, because of the quantization in the DCT domain, the DCT
coefficients obtained by applying the DCT operation on the decompressed JPEG
image will cluster around integer multiples of ∆(u, v), even though those DCT
coefficients are perturbed by R/T errors. These two artifacts provide a rich source
of information for forensic analysis of digital images.
2.3 Passive Image Origin Identification
Basically, when an image is captured by a camera, it is stored together with metadata
headers on a memory storage device. The metadata, e.g. the Exchangeable Image File
(EXIF) and JPEG headers, contain the whole recording and compression history. Therefore,
the simplest way to determine an image's source is to read it directly from the
metadata. However, such metadata headers are not always available in practice, for instance
when the image is resaved in a different format or recompressed. Another problem is that
the metadata headers are not reliable as they can be easily removed or modified
using low-cost editing tools. Therefore, the metadata should not be relied upon in
forensic analysis.
The common philosophy of the passive approach to image origin identification is
to rely on intrinsic fingerprints that the digital camera leaves in a given
image. The fingerprint can discriminate different camera brands, camera models,
and even individual camera units. Any method proposed for image origin identification must
address the following questions:
1. Which fingerprints are utilized for origin identification?
2. How can these fingerprints be extracted accurately from a given image?
3. Under which framework is the method designed to exploit the discriminability
of fingerprints extracted from images captured by different sources¹ and to
measure the similarity of fingerprints extracted from images captured by the
same source?
¹The term source means an individual camera instance, a camera model, or a camera brand. Other sources such as cell-phone cameras, scanners, and computer graphics are not addressed in this thesis.
Every stage from real-world scene acquisition to image storage can provide intrinsic
fingerprints for forensic analysis (see Figure 2.1). Although the image processing
pipeline is common for most cameras, each processing step is performed according
to manufacturers’ own design. Thus the information left by each processing step is
useful for tracing back to the source device. A fingerprint must satisfy the following three
important requirements:
• Generality: the fingerprint should be present in every image.
• Invariance: the fingerprint does not vary with the image content.
• Robustness: the fingerprint survives non-linear operations such as lossy compression
or gamma correction.
The second question poses a challenge for any forensic method since the fingerprint
extraction may be severely affected by non-linear operations (e.g.
gamma correction and lossy compression).
Generally, the image origin identification problem can be formulated within two
frameworks: supervised classification [17–19] and hypothesis testing [20]. Compared
with the hypothesis testing framework, the supervised classification framework is used by
most existing methods in the literature. The construction of a classifier typically
consists of two stages: a training stage and a testing stage. It is assumed that the entire
image space $\mathcal{Z}^N$, which includes all images from all sources in the real world, can be
divided into disjoint subsets in which images with the same characteristics from the same
source are grouped together. Let $\{S_1, S_2, \ldots, S_{N_s}\}$ be $N_s$ different sources that are
required to be classified. Typically, each source $S_n$, $1 \le n \le N_s$, is a subset of $\mathcal{Z}^N$.
In the training stage, suppose that $N_{\mathrm{im}}$ images are collected to be representative of
each source. Each image of the source $S_n$ is denoted by $Z_{n,i}$. Then a feature vector
is extracted from each image. Formally, a feature vector is a mapping $f : \mathcal{Z}^N \rightarrow \mathcal{F}$
where each image $Z$ is mapped to an $N_f$-dimensional vector $v = f(Z)$. Here, $\mathcal{F}$ is called
the feature space and $N_f$ is the number of selected features, which is also the dimension
of the feature space $\mathcal{F}$. The number of features $N_f$ is very small compared with the
number of pixels $N$. Working in a low-dimensional feature space $\mathcal{F}$ that represents
the input images is much simpler than working in the high-dimensional, noisy image
space $\mathcal{Z}^N$. The choice of an appropriate feature vector is crucial in the supervised
classification framework since the accuracy of a classifier highly depends on it. We thus
obtain a set of feature vectors $\{v_{n,i}\}$ that is representative of each source. In
this training stage, feature refinement such as dimensionality reduction or feature selection
can also be performed to avoid overtraining and redundant features. The
knowledge learnt from the set of refined feature vectors is used to build a classifier using
supervised machine learning algorithms. A classifier is typically a learned function
that maps an input feature vector to a corresponding source. In the
testing stage, the same steps of feature extraction and feature refinement are
performed on the testing images. The output of the trained classifier is the prediction
for the input testing images. Among the many existing powerful machine learning
algorithms, Support Vector Machines (SVM) [18, 19] seem to be the most popular
choice in passive forensics. SVMs rest on a solid mathematical foundation,
namely statistical learning theory [18]. Moreover, implementations are freely available
and easy to use [21].
The supervised classification framework has two main drawbacks. To achieve
high accuracy, it requires an expensive training
stage involving many images with different characteristics (e.g. image content
or camera settings) from various sources in order to represent the real-world situation,
which might be unrealistic in practice. Another drawback is that
the statistical performance of the trained classifier cannot be established analytically since
the classifier does not rely on knowledge of the a priori statistical distribution of images. In an
operational context, such as for law enforcement and intelligence agencies, the design
of an efficient method might not be sufficient. Forensic analysts also require that the
false alarm probability be guaranteed and kept below a prescribed rate. The
analytic establishment of statistical performance remains an open problem in
the machine learning framework [22].
On the other hand, the image origin identification problem lends
itself to a binary hypothesis testing formulation.
Definition 2.1. (Origin identification problem). Given an arbitrary image $Z$ under
investigation, to identify the source of the image $Z$, forensic analysts decide between
the two following hypotheses
$$\begin{cases} \mathcal{H}_0 : Z \text{ is acquired by the source of interest } S_0 \\ \mathcal{H}_1 : Z \text{ is acquired by a certain source } S_1 \text{ that differs from the source } S_0. \end{cases} \quad (2.17)$$
Suppose that the source S0 is available, so forensic analysts can have access to its
characteristics, or its fingerprints. Therefore, they can make a decision by checking
whether the image in question $Z$ contains the fingerprints of that source. Relying on
the a priori statistical distribution of the image $Z$ under each source, forensic analysts
can establish a test statistic that gives a decision rule according to some criterion
of optimality.
Statistical hypothesis testing theory has been studied extensively and applied
in many fields. Several statistical tests as well as criteria of optimality have been proposed.
While the supervised learning framework only requires finding an appropriate set
of forensic features, the most challenging part of the hypothesis testing framework is to
establish a statistical distribution that accurately characterizes a high-dimensional real
image. In return, the hypothesis testing framework allows us to establish the performance
of the detector analytically and to warrant a prescribed false alarm probability,
which are two crucial criteria in the operational context that the supervised classification
framework cannot offer. However, the hypothesis testing framework has been exploited
only to a limited extent in forensic analysis. For the sake of clarity, hypothesis testing theory
will be detailed further in Chapter A.
Many passive forensic methods have been proposed in the literature for image
origin identification. In this thesis, we limit the scope of our review to methods
that identify a digital camera as the source (e.g. camera brand, camera model,
or individual camera instance). Methods that identify other imaging mechanisms
such as cell-phone cameras, scanners, and computer graphics will not be addressed.
It is important to distinguish the problem of camera instance identification from the
problem of camera model/brand identification. More specifically, fingerprints used
for camera instance identification should capture individuality, especially for cameras
of the same brand and model. For camera model/brand identification,
it is necessary to exploit fingerprints that are shared between cameras of the same
model/brand but discriminative across different camera models/brands.
Existing methods in the literature can be broadly divided into two categories.
Methods in the first category exploit differences in image processing techniques
and component technologies among camera models and manufacturers, such as lens
aberration [23], CFA patterns and interpolation [24–26], and JPEG compression
[27, 28]. The main challenge in this category is that the image processing techniques
may remain identical or similar across models, and components produced by a few manufacturers
are shared among camera models. Methods in the second category aim to identify
unique characteristics or fingerprints of the acquisition device, such as the PRNU [29–36].
The ability to reliably extract this fingerprint from an image is the main challenge
in the second category since different image contents and non-linear operations may
severely affect the extraction. Below we present the methods according to
the position of the exploited fingerprints in the image acquisition pipeline of a digital
camera.
2.3.1 Lens Aberration
Digital cameras use lenses to capture the incident light. Due to imperfections in the
design and manufacturing process, lenses cause undesired effects in the output images
such as spherical aberration, chromatic aberration, or radial distortion. Spherical
aberration occurs when incident light rays end up focusing at different points
after passing through a spherical surface, especially rays passing through the
periphery of the spherical lens. Chromatic aberration is the failure of the lens to converge
different wavelengths at the same position on the image sensor. Radial distortion
causes straight lines to be rendered as curved lines on the image sensor; it occurs when
the transverse magnification (the ratio of the image distance to the object distance) is
not a constant but a function of the off-axis image distance. Among these effects,
radial distortion may be the most severe one that the lens produces in the output images.
Different manufacturers design different lens systems to compensate for radial distortion.
Moreover, the focal length also affects the degree of radial distortion.
As a result, each camera brand or model may leave a unique degree of radial
distortion in the output images. Therefore, the radial distortion of the lens is exploited
in [23] to identify the source of an image.
The authors in [23] take the center of the image as the center of distortion and
model the undistorted radius $r_u$ as a non-linear function of the distorted radius $r_d$
$$r_u = r_d + k_1 r_d^3 + k_2 r_d^5. \quad (2.18)$$
The distortion parameters $(k_1, k_2)$ are estimated using the straight line method [37, 38]
and then exploited as forensic features to train an SVM classifier. Although the experiments
provided promising results, they were only conducted on three different camera brands.
Experiments on a large database including several devices per camera model and several
camera models would be more conclusive. Moreover, this lens aberration-based classifier
would fail for cameras with interchangeable lenses, e.g. Digital Single Lens Reflex (DSLR) cameras.
2.3.2 Sensor Imperfections
As discussed in Section 2.2, imperfections during the manufacturing process and
the inhomogeneity of silicon wafers lead to slight variations in the response of each
pixel to the incident light. These slight variations are referred to as the PRNU, which is
unique to each sensor. Thus the PRNU can be exploited to trace an image back to an
individual camera instance. The FPN was also used in [39] for camera instance
identification. However, the FPN can easily be compensated, thus it is not a robust
fingerprint and is no longer used in later works.
Generally, the PRNU is modeled as a multiplicative noise-like signal [30, 32]
$$Z = \mu + \mu K + \Xi, \quad (2.19)$$
where $Z$ is the output noisy image, $\mu$ is the ideal image in the absence of noise, $K$
represents the PRNU, and $\Xi$ accounts for the combination of the other noise sources.
All operations in (2.19) are pixel-wise.
Like supervised classification, the PRNU-based method also consists of two stages.
The training stage involves collecting $N_{\mathrm{im}}$ images acquired by the camera
of interest $S_0$ and extracting the PRNU $K_0$ characterizing this camera. This is
accomplished by applying a denoising filter to each image to suppress the image
content and then performing a Maximum Likelihood (ML) estimation of $K_0$
$$K_0 = \frac{\sum_{i=1}^{N_{\mathrm{im}}} Z_i^{\mathrm{res}} Z_i}{\sum_{i=1}^{N_{\mathrm{im}}} Z_i^2}, \quad (2.20)$$
where $Z_i^{\mathrm{res}} = Z_i - D(Z_i)$ is the noise residual corresponding to the image $Z_i$, $1 \le i \le N_{\mathrm{im}}$,
and $D$ stands for the denoising filter. Note that when the PRNU is extracted
from JPEG images, it may contain artifacts introduced by the CFA interpolation and
the JPEG compression. These artifacts are not unique to each camera instance and
are shared among different camera units of the same model. To render the PRNU
unique to the camera and improve the accuracy of the method, a preprocessing
step is performed to suppress these artifacts by subtracting the averages from each
row and column, and applying the Wiener filter in the Fourier domain [30]. In the
testing stage, given an image under investigation $Z$, the problem of camera source
identification (2.17) is rewritten as follows
$$\begin{cases} \mathcal{H}_0 : Z^{\mathrm{res}} = \mu K_0 + \tilde{\Xi} \\ \mathcal{H}_1 : Z^{\mathrm{res}} = \mu K_1 + \tilde{\Xi}, \end{cases} \quad (2.21)$$
where the noise term $\tilde{\Xi}$ includes the noise $\Xi$ and additional terms introduced by
the denoising filter. This formulation must be understood as follows: hypothesis
$\mathcal{H}_0$ means that the noise residual $Z^{\mathrm{res}}$ contains the PRNU $K_0$ characterizing the
camera of interest $S_0$, while hypothesis $\mathcal{H}_1$ means the opposite. It should be noted
that the PRNU detection problem in [30, 32] is formulated in the reverse direction.
The sub-optimal detector for the problem (2.21) is the normalized cross-correlation
between the PRNU term $\mu K$ and the noise residual $Z^{\mathrm{res}}$ [30]. In fact, the normalized
cross-correlation is derived from the Generalized Likelihood Ratio Test (GLRT)
by modeling the noise term $\tilde{\Xi}$ as white noise with known variance [40]. A more
stable statistic derived in [32] is the Peak to Correlation Energy (PCE), as it is
independent of the image size and has other advantages such as its response to
the presence of weak periodic signals. Theoretically, the decision threshold for the
problem (2.21) is given by $\tau = \left(\Phi^{-1}(1 - \alpha_0)\right)^2$, where $\alpha_0$ is the prescribed false
alarm probability, and $\Phi(\cdot)$ and $\Phi^{-1}(\cdot)$ denote respectively the cumulative distribution
function (cdf) of the standard Gaussian random variable and its inverse. If the
PCE exceeds the threshold $\tau$, the image $Z$ is claimed to have been taken by the camera in
question. The detection performance can be improved by selecting an appropriate
denoising filter [33], attenuating scene details in the test image [34, 35], or extracting
the PRNU term with respect to each sub-sample of the CFA pattern [36].
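The PRNU pipeline sketched below follows Equation (2.20) and a plain normalized cross-correlation detector; a Gaussian filter stands in for the wavelet denoiser of [30], so this is a simplified illustration rather than the reference implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(Z, sigma=1.0):
        Z = Z.astype(float)
        return Z - gaussian_filter(Z, sigma)          # Z^res = Z - D(Z)

    def estimate_prnu(images):
        # Maximum Likelihood estimate of K_0, equation (2.20)
        num = np.zeros_like(images[0], dtype=float)
        den = np.zeros_like(images[0], dtype=float)
        for Z in images:
            Zf = Z.astype(float)
            num += noise_residual(Zf) * Zf
            den += Zf ** 2
        return num / den

    def correlation_score(Z, K):
        # normalized cross-correlation between the PRNU term (approximated by Z*K)
        # and the noise residual of the image under investigation
        res = noise_residual(Z)
        a = res - res.mean()
        b = Z.astype(float) * K
        b = b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))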
Besides individual camera instance identification, the PRNU fingerprint can also
be used for camera model identification [31]. This is based on the assumption
that the fingerprint obtained from TIFF or JPEG images contains traces of post-acquisition
processes (e.g. CFA interpolation) that carry information about the
camera model. In this case, the above preprocessing step that removes the linear
pattern from the PRNU is not performed. Features extracted from the
PRNU term, including statistical moments, cross-correlation, block covariance, and
the linear pattern, are used to train an SVM classifier.
2.3.3 CFA Pattern and Interpolation
Different CFA patterns and CFA interpolation algorithms are assumed to be employed
by different manufacturers, and even in different camera models; thus they can be used to
discriminate camera brands and camera models. Typically, both the CFA pattern and the
interpolation coefficients are unknown in advance. They must
be estimated together from a single image. An algorithm has been developed in [24]
to jointly estimate the CFA pattern and the interpolation coefficients, and it has shown
robustness to JPEG compression with low quality factors. Firstly, a search space
including 36 possible CFA patterns is established based on the observation that
most cameras use an RGB type of CFA with a fixed periodicity of $2 \times 2$. Since a
camera may employ different interpolation algorithms for different types of regions,
it is desirable to classify the given image into three types of regions based on the gradient
information in a local neighborhood of each pixel: a region containing parts of the
image with a significant horizontal gradient, a region containing parts of the image with
a significant vertical gradient, and a region including the remaining smooth parts of
the image.
For every CFA pattern $P_{\mathrm{CFA}}$ in the search space, the interpolation coefficients
are computed separately in each region by fitting linear models. Using the final
output image $Z$ and the assumed CFA pattern $P_{\mathrm{CFA}}$, we can identify the set of
pixels that are acquired directly from the image sensor and those obtained by interpolation.
The interpolated pixels are assumed to be a weighted average of the
directly acquired pixels. The interpolation coefficients are then obtained by solving
the resulting equations. To overcome the difficulty of noisy pixel values and the interference of
non-linear post-acquisition processes, a singular value decomposition is employed to
estimate the interpolation coefficients. These coefficients are then used to re-estimate
the output image $\hat{Z}$ and to compute the interpolation error $\hat{Z} - Z$. The CFA pattern that
gives the lowest interpolation error and its corresponding coefficients are chosen as
the final result [24].
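A minimal sketch of the coefficient-estimation idea of [24] is given below for one color channel and one candidate CFA pattern: interpolated pixels are modeled as weighted averages of their directly acquired 4-neighbours and the weights are obtained by least squares (np.linalg.lstsq internally relies on an SVD). The region classification and the full 36-pattern search are omitted, and the function name is illustrative.

    import numpy as np

    def estimate_interp_coeffs(channel, acquired_mask):
        # channel: one color plane of the output image; acquired_mask: True where the
        # assumed CFA pattern says the pixel was measured, False where interpolated.
        A, b = [], []
        Nr, Nc = channel.shape
        for m in range(1, Nr - 1):
            for n in range(1, Nc - 1):
                if acquired_mask[m, n]:
                    continue                           # measured pixel, not interpolated
                neigh = [(m - 1, n), (m + 1, n), (m, n - 1), (m, n + 1)]
                if all(acquired_mask[i, j] for i, j in neigh):
                    A.append([channel[i, j] for i, j in neigh])
                    b.append(channel[m, n])
        coeffs, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
        return coeffs          # estimated interpolation weights for this region/pattern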
Once the interpolation coefficients are estimated from the given image,
they are used as forensic features to train an SVM classifier for the classification of camera
brands and models [24]. The detection performance can be further enhanced by
taking into account intra-channel and inter-channel correlations and more sophisticated
interpolation algorithms in the estimation methodology [26]. Other features
can be used together with the interpolation coefficients, such as the peak location and
magnitudes of the frequency spectrum of the probability map [41].
2.3.4 Image Compression
Image compression is the final step of the image processing pipeline. As discussed
in Section 2.2, manufacturers have their own compression scheme for an optimal trade-off
between image quality and file size. Different component technologies (e.g. lenses,
sensors) and different in-camera processing operations (e.g. CFA interpolation, white
balancing), together with different quantization matrices, jointly result in statistical
differences in the quantized DCT coefficients. Capturing these statistical differences
and extracting useful features from them may make it possible to discriminate different camera
brands or camera models.
To this end, instead of extracting statistical features directly from the quantized
DCT coefficients, features are extracted from the difference JPEG 2-D array [28].
The JPEG 2-D array consists of the magnitudes (i.e. absolute values) of the quantized
DCT coefficients. The three reasons for taking absolute values are the following:
1. The magnitudes of the DCT coefficients decrease along the zig-zag order.
2. Taking absolute values reduces the dynamic range of the resulting array.
3. The signs of the DCT coefficients mainly carry information about the outlines and
edges of the original spatial-domain image, which does not involve information
about camera models. Thus, by taking absolute values, all the information
regarding camera models is retained.
Then, to reduce the influence of the image content and enhance the statistical differences
introduced in the image processing pipeline, the difference JPEG 2-D array, which is
defined by taking the difference between an element and one of its neighbors in
the JPEG 2-D array, is introduced. The difference can be calculated along four
directions: horizontal, vertical, main diagonal, and minor diagonal. To model the
statistical differences of the quantized DCT coefficients and take into account the correlation
between coefficients, the Markovian transition probability matrix is exploited.
Each difference JPEG 2-D array along a direction generates its own transition probability
matrix. Each probability value in the transition matrix is given by
$$P\left[X(u_h + 1, u_v) = k \mid X(u_h, u_v) = l\right] = \frac{\sum_{u_h=1}^{N_h} \sum_{u_v=1}^{N_v} \mathbf{1}_{X(u_h, u_v) = l,\, X(u_h + 1, u_v) = k}}{\sum_{u_h=1}^{N_h} \sum_{u_v=1}^{N_v} \mathbf{1}_{X(u_h, u_v) = l}}, \quad (2.22)$$
where $X(u_h, u_v)$ denotes an element in the difference JPEG 2-D array and $\mathbf{1}_E$ is the
indicator function
$$\mathbf{1}_E = \begin{cases} 1 & \text{if } E \text{ is true} \\ 0 & \text{otherwise.} \end{cases} \quad (2.23)$$
These steps are performed for the Y and Cb components of the compressed JPEG
image. In total, 324 transition probabilities are collected for the Y component and 162
for the Cb component. The transition probabilities are then used
as forensic features for SVM classification. Experiments conducted on a
large database including 40000 images from 8 different camera models show good
classification performance [28]. In this method, it would be desirable to perform feature
refinement to reduce the number of features and the complexity of the algorithm.
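The construction of the transition-probability features can be sketched as follows for the horizontal direction, under the usual simplification of clipping the difference array to a small range so that the matrix of Equation (2.22) stays of manageable size; the threshold value and function name are illustrative assumptions.

    import numpy as np

    def horizontal_transition_matrix(quantized_dct, T=4):
        # JPEG 2-D array: magnitudes of the quantized DCT coefficients
        X = np.abs(quantized_dct)
        # difference JPEG 2-D array along the horizontal direction, clipped to [-T, T]
        D = np.clip(X[:, :-1] - X[:, 1:], -T, T)
        P = np.zeros((2 * T + 1, 2 * T + 1))
        prev, nxt = D[:, :-1], D[:, 1:]          # pairs of horizontally adjacent elements
        for l in range(-T, T + 1):
            mask = (prev == l)
            total = mask.sum()
            if total == 0:
                continue
            for k in range(-T, T + 1):
                P[l + T, k + T] = np.logical_and(mask, nxt == k).sum() / total
        return P      # a 9 x 9 matrix for T = 4; four directions give 324 features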
2.4 Passive Image Forgery Detection
Image forgery detection is another fundamental task of forensic analysts, which aims
to detect any act of manipulation of the image content. The main assumption is that
even though a skilled forger with powerful tools does not leave any perceptible
trace of manipulation, the manipulation itself creates inconsistencies in the image content.
Depending on which type of inconsistency is investigated and how passive
forensic methods operate, they can broadly be divided into five categories. A single
method can hardly detect all types of forgery, so forensic analysts should use these
methods together to reliably detect a wide variety of tampering.
1. Universal Classifiers: Any act of manipulation may lead to statistical changes
in the underlying image. Instead of capturing these changes directly in a
high-dimensional and non-stationary image, which is extremely difficult, one
approach is to detect changes in a set of features that represent an image.
Based on these features, supervised classification is employed to provide universal
classifiers to discriminate between unaltered images and manipulated
images. Some typical forensic features are higher-order wavelet statistics [42],
image quality and binary similarity measures [43, 44]. These universal classifiers
are not only able to detect some basic manipulations such as resizing,
splicing, and contrast enhancement, but can also reveal the existence of hidden messages
[45].
2. Camera Fingerprint-Based: A typical forgery scenario is to cut a portion
of an image and paste it into a different image, thereby creating a so-called
forged image. The forged region may not have been taken by the same camera as
the remaining regions of the image, which results in inconsistencies in the camera
fingerprints between those regions. Therefore, if such inconsistencies exist in
an image, we can assume that the image is not authentic. For authentication,
existing methods have exploited many camera fingerprints such as chromatic
aberration [46], the PRNU [30, 32], CFA interpolation and correlation [25, 47–49],
gamma correction [50, 51], and the Camera Response Function (CRF) [52–54].
3. Compression and Coding Fingerprints-Based: Nowadays, most commercial
cameras export images in JPEG format for ease of storage and transmission.
As discussed in Section 2.2, JPEG compression introduces two important artifacts:
the clustering of DCT coefficients around integer multiples of the quantization step,
and blocking artifacts. Checking inconsistencies in these two
artifacts can trace the processing history of an image and determine its origin
and authenticity. A possible scenario is that, while the original image is saved
in JPEG format, a forger saves it in a lossless format after manipulation.
The existence of these artifacts in an image stored in a lossless format shows that it has
been previously compressed [55–57]. Another scenario is that the forger saves
the manipulated image in JPEG format, which means that the image
has undergone JPEG compression twice. Double JPEG compression can be detected
by checking periodic patterns (e.g. double peaks and
missing centroids) in the histogram of DCT coefficients due to the different quantization
steps [51, 58, 59], which are not present in singly compressed images,
or by using the distribution of the first digits of the DCT coefficients [60, 61]. The
detection of double JPEG compression is of particular interest since it can reveal
splicing or cut-and-paste forgeries, because the forged region
and the remaining regions of the image may not have the same processing history.
Inconsistencies can be identified either in the DCT domain [62–65] or in the spatial
domain via blocking artifacts [66, 67]. Furthermore, the detection of double
JPEG compression can be applied to the detection of hidden messages [58, 59].
4. Manipulation-Specific Fingerprints-Based: Each manipulation may leave its own specific
fingerprints within an image, which can be used as evidence of
tampering. For example, resampling causes specific periodic correlations between
neighboring pixels. These correlations can be estimated with the
Expectation Maximization (EM) algorithm [68] and then used to detect the
resampling [68, 69]. Furthermore, resampling can also be detected by identifying
periodicities in the average of an image's second derivative along its rows
and columns [70], or periodicities in the variance of an image's derivative [71].
Contrast enhancement creates impulsive peaks and gaps in the histogram of
the image's pixel values. These fingerprints can be detected by measuring the
amount of high-frequency energy introduced into the Fourier transform of an
image's pixel value histogram [72]. Median filtering introduces streaking into
the signals [73]; streaks correspond to sequences of adjacent signal observations
all taking the same value. Therefore, median filtering can be detected
by analyzing statistical properties of the first difference of an image's pixel
values [74–77]. Splicing disrupts higher-order Fourier statistics, which leaves
traces that can be used to detect it [78].
5. Physical Inconsistencies-Based: Methods in this category do not make use
of any form of fingerprint but exploit properties of the lighting environment for
forgery detection. The main assumption is that all the objects within an image
are typically illuminated under the same light sources and thus share the same
lighting environment. Therefore, differences in lighting across an image can
be used as evidence of tampering, e.g. splicing. To this end, it is necessary
to estimate the direction of the light source illuminating an object. This can
be accomplished by considering two-dimensional [79] or three-dimensional [80]
surface normals, and illumination under a single light source [79] or even under
multiple light sources [81]. The lighting environment coefficients of all the objects
in an image are then used for checking inconsistencies.
Figure 2.4: Typical steganographic system.
2.5 Steganography and Steganalysis in Digital Images
Steganography is the art and science of hiding communication. The concept of
steganography is used for invisible communication between only two parties, the
sender and the receiver, such that the message exchanged between them cannot
be detected by an adversary. This communication can be illustrated by the prisoners'
problem [82]. Two prisoners, Alice and Bob, want to develop an escape plan, but
all communications between them are unfortunately monitored by a warden named
Wendy. The escape plan must be kept secret and exchanged without raising Wendy's
suspicion. This means that the communication involves not only the confidentiality
of the escape plan but also its undetectability. For this purpose, a practical way is
to hide the escape plan, or the secret message, in an ordinary object and
send it to the intended receiver. By terminology, the original object used
for message hiding is called the cover-object and the object that contains the hidden
message is called the stego-object. The hiding technique does not perceptibly alter the object
content, so as not to raise Wendy's suspicion, nor does it modify the message content,
so that the receiver can fully understand the message.
Advances in information technologies have made digital media (e.g. audio, images,
or video) ubiquitous. This ubiquity facilitates the choice of a harmless object in
which the sender can hide a secret message, so sending such media is inconspicuous.
Furthermore, the size of digital media is typically large compared with the size of the
secret message. Thus the secret message can be easily hidden in digital media
without visually degrading the digital content. Most research focuses on digital
images, which are also the type of media addressed in this thesis.
A typical steganographic system is shown in Figure 2.4. It consists of two stages:
an embedding stage and an extraction stage. When Alice wants to send a secret message
$M$, she hides it in a cover-image $C$ using a key and an embedding algorithm.
The secret message $M$ is a binary sequence of $L$ bits, $M = (m_1, m_2, \ldots, m_L)^T$ with
$m_i \in \{0, 1\}$, $1 \le i \le L$. The resulting stego-image $S$ is then transmitted to Bob via
an insecure channel. Bob can retrieve the message $M$ since he knows the embedding
algorithm used by Alice and has access to the key used in the embedding process.
Bob does not necessarily require the original cover-image $C$ for message extraction.
Following Kerckhoffs' principle [83], it is assumed in digital steganography that
steganographic algorithms are public, so that all parties including the warden Wendy
have access to them. The security of the steganographic system relies solely on the
key. The key can be a secret key exchanged between Alice and Bob through a secure
channel, or a public key.
In general, steganographic systems can be evaluated by three basic criteria: capacity,
security, and robustness. Capacity is defined as the maximum length of a
secret message; it depends on the embedding algorithm and on the properties
of the cover-images. The security of a steganographic system is evaluated by undetectability,
rather than by the difficulty of reading the message content as in the case of a
cryptographic system. However, steganographic systems also borrow the idea of key
exchange (secret and public) from cryptographic systems to
reinforce their security. Robustness refers to the difficulty of removing a hidden message
from a stego-image, so that the secret message survives accidental channel
distortions or systematic interference by a warden who aims to prevent the use
of steganography. It can be noted that longer messages lead to more changes
in the cover-image, thus less security. In brief, these three criteria are mutually
dependent and must be balanced when designing a steganographic system.
The purpose of steganography is to communicate secretly through a public channel.
However, this concept has been misused by anti-social elements, criminals, or
terrorists. It could have important consequences for homeland security or national
defence when, for example, two terrorists exchange a terrorist plan. Therefore, it
is urgent for law enforcement and intelligence agencies to build up a methodology
to detect the mere existence of a secret message and break the security of
steganographic systems. Embedding a secret message into a cover-image is also an
act of manipulating image content, so steganalysis is one of the important tasks of forensic
analysts, or steganalysts in this case. Unlike in cryptanalysis, the steganalyst
Wendy does not need to retrieve the actual message content. As soon as she has
detected its existence in an image, she can cut off the communication channel by
putting the two prisoners in separate cells. This is the failure of steganography. Besides,
the task of steganalysis must be accomplished blindly, without knowledge of the original
cover-image.
Generally, the steganalyst Wendy can play either an active or a passive role. While
an active steganalyst is allowed to modify the objects exchanged through the public
channel in order to prevent the use of steganography, a passive steganalyst is not.
The only goal of the passive steganalyst is to detect the presence of a hidden message in
a given image, which is also the typical scenario on which most research focuses. It can
be noted that steganalysis resembles a coin-tossing game since the steganalyst's decision
is made by declaring the given image to be either a cover-image or a stego-image. Hence,
in any case, the steganalyst can reach a correct detection probability of 50%. However,
steganalysts should formulate the hidden message detection problem in a more formal
manner and design a powerful steganalysis tool with a correct detection probability
higher than a random guess. Apart from detecting the presence of a hidden message,
it may be desirable for steganalysts to estimate the message length, or to brute-force
the secret key and retrieve the message content. The estimation of the message length
is called quantitative steganalysis. Brute-forcing the secret key and extracting the
message content are referred to as forensic steganalysis.
Figure 2.5: Operations of LSB replacement (top) and Jsteg (bottom).
As stated above, designing a steganographic system is a trade-off between three
basic criteria. Many steganographic algorithms have thus been proposed for different
purposes, such as mimicking natural processing [84–86], preserving a model of cover-images
[87, 88], or minimizing a distortion function [89, 90]. Among the available algorithms,
Least Significant Bit (LSB) replacement might be the oldest embedding
technique in digital steganography. This algorithm is simple and easy to implement,
thus it is available in numerous low-cost steganographic software tools on the Internet
despite its relative insecurity. In addition, LSB replacement has inspired a majority
of other steganographic algorithms (e.g. LSB matching [91], Jsteg [92]). The Jsteg
algorithm is simply the implementation of LSB replacement in the DCT domain.
Therefore, understanding the LSB replacement paradigm is a good starting point before
addressing more complex embedding paradigms. In this thesis, we only review the LSB
replacement and Jsteg algorithms, and the powerful steganalysis detectors proposed
for them in the literature. The reader is referred to [93–95] for other steganographic
and steganalysis methods.
2.5.1 LSB Replacement Paradigm and Jsteg Algorithm
Considering the cover-image $C$ as a column vector, the LSB replacement technique
involves choosing a subset of $L$ cover-pixels $\{c_1, c_2, \ldots, c_L\}$ and replacing the LSB
of each cover-pixel by a message bit. The LSB of a cover-pixel $c_i$ is defined as
$$\mathrm{LSB}(c_i) = c_i - 2\left\lfloor \frac{c_i}{2} \right\rfloor, \quad (2.24)$$
where $\lfloor \cdot \rfloor$ is the floor function. The LSB of the cover-pixel $c_i$ takes values in $\{0, 1\}$.
Therefore, by embedding a message bit $m_i$ into the cover-pixel $c_i$, the stego-pixel $s_i$
is given by
$$s_i = 2\left\lfloor \frac{c_i}{2} \right\rfloor + m_i. \quad (2.25)$$
When $\mathrm{LSB}(c_i) = m_i$, the pixel value does not change after embedding,
$s_i = c_i$. By contrast, when $\mathrm{LSB}(c_i) \neq m_i$, the stego-pixel $s_i$ can be written as a
function of the cover-pixel $c_i$ in the following manner
$$s_i = c_i + 1 - 2 \cdot \mathrm{LSB}(c_i) = c_i + (-1)^{c_i} \triangleq \bar{c}_i, \quad (2.26)$$
where $\bar{c}_i$ is the pixel with flipped LSB. In other words, even values are never decremented
whereas odd values are never incremented. The absolute difference between
a cover-pixel $c_i$ and a stego-pixel $s_i$ is at most 1, $|c_i - s_i| \le 1$, thus the artifact
caused by the insertion of the secret message $M$ is imperceptible to human
vision. The operation of the LSB replacement technique is illustrated in Figure 2.5.
One problem that remains to be solved is the choice of the subset of cover-pixels, or
the sequence of pixel indices, used in the embedding process. To increase the complexity
of the algorithm, the sender can follow a pseudorandom path generated from the
secret key shared between the sender and the receiver, so that the secret message
bits are spread randomly over the cover-image. The distance between
two embedded bits is then also determined pseudorandomly, which does not raise the
suspicion of the warden. The number of message bits that can be
embedded cannot exceed the number of pixels of the image $Z$, $L \le N$,
which leads us to define an embedding rate $R$
$$R = \frac{L}{N}. \quad (2.27)$$
This embedding rate $R$ is a measure of the capacity of a steganographic system
based on the LSB replacement technique.
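A minimal sketch of LSB replacement along a key-driven pseudorandom path is given below; the use of numpy's default_rng seeded by an integer shared key is an illustrative choice, not a prescribed construction.

    import numpy as np

    def lsb_embed(cover, message_bits, key):
        # cover: 1-D array of pixel values; message_bits: list of 0/1 bits; key: integer
        stego = cover.copy()
        path = np.random.default_rng(key).permutation(cover.size)[:len(message_bits)]
        for idx, bit in zip(path, message_bits):
            stego[idx] = 2 * (stego[idx] // 2) + bit   # s_i = 2*floor(c_i/2) + m_i, eq. (2.25)
        return stego

    def lsb_extract(stego, length, key):
        path = np.random.default_rng(key).permutation(stego.size)[:length]
        return [int(stego[idx] % 2) for idx in path]   # LSB(s_i) = m_i, eq. (2.24)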
The Jsteg algorithm is the counterpart of the LSB replacement technique in the DCT domain:
it embeds the secret message by replacing the LSBs of quantized DCT coefficients with
message bits. The difference from the LSB replacement technique in the spatial domain
is that the Jsteg algorithm does not embed message bits in the coefficients equal to 0 or 1,
since artifacts caused by such embedding would be perceptible and easily detected. The
DC coefficient is not used either, for the same reason. The AC coefficients that differ from
0 and 1 are the usable coefficients. Consequently, the embedding rate $R$ of the Jsteg
algorithm is defined as the ratio of the length $L$ to the number of usable coefficients in the
cover-image $C$
$$R = \frac{L}{\sum_{k=2}^{64} n_k}, \quad (2.28)$$
where $n_k$ is the number of usable coefficients at the frequency $k$, $2 \le k \le 64$.
2.5.2 Steganalysis of LSB Replacement in Spatial Domain
Like the origin identification problem (2.17), the steganalysis problem can also be
formulated as a binary hypothesis test.
Definition 2.2. (Steganalysis problem). Given a suspect image $Z$, to verify whether
the image $Z$ contains a secret message or not, the steganalyst decides between the two
following hypotheses
$$\begin{cases} \mathcal{H}_0 : Z = C, \text{ no hidden message} \\ \mathcal{H}_1 : Z = S, \text{ with hidden message.} \end{cases} \quad (2.29)$$
To solve the steganalysis problem (2.29), several methods have been proposed
in the literature. Even though the secret message is imperceptible to the human eye,
the act of embedding a secret message modifies the cover content and leaves
artifacts that can be detected. Steganalysis methods for LSB replacement can be
roughly divided into four categories: structural detectors, Weighted Stego-image
(WS) detectors, statistical detectors, and universal classifiers. Typically, structural
detectors and WS detectors are quantitative detectors that provide an estimate of the
secret message length, while statistical detectors and universal classifiers attempt to
separate stego-images from cover-images based on the changes in statistical properties
of cover-images caused by message embedding. Below we briefly discuss each category
of detectors.
2.5.2.1 Structural Detectors
Structural detectors exploit combinatorial measures of the artificial dependence
between sample differences and the parity structure of LSB replacement in
order to estimate the secret message length. Some representatives of this category
are the Regular-Singular (RS) analysis [96], the Sample Pair Analysis (SPA) [97–99],
and the Triple/Quadruple analysis [100, 101]. The common framework is to
model the effects of LSB replacement as a function of the embedding rate $R$, invert these
effects to approximate the cover-image properties from the stego-image, and find the
best candidate $\hat{R}$ that matches the cover assumptions.
Both the RS and SPA methods rely on evaluating groups of spatially adjacent pixels.
The observations made in the RS analysis were formally justified in the SPA. For pedagogical
reasons, we discuss the SPA method.
Figure 2.6: Diagram of transition probabilities between trace subsets under LSB replacement.
For the presentation of the SPA method, we use the extensible alternative notation
of [95, 101]. Given an image $Z$, we define a trace set $\mathcal{C}_k$ that collects all pairs of
adjacent pixels $(z_{2i}, z_{2i+1})$ as follows
$$\mathcal{C}_k = \left\{ (2i, 2i + 1) \in \mathcal{I}^2 \;\middle|\; \left\lfloor \frac{z_{2i}}{2} \right\rfloor = \left\lfloor \frac{z_{2i+1}}{2} \right\rfloor + k \right\}, \quad (2.30)$$
where $\mathcal{I}$ is the set of pixel indices. Each trace set $\mathcal{C}_k$ is then partitioned into four
trace subsets, $\mathcal{C}_k = \mathcal{E}_{2k} \cup \mathcal{E}_{2k+1} \cup \mathcal{O}_{2k} \cup \mathcal{O}_{2k-1}$, where $\mathcal{E}_k$ and $\mathcal{O}_k$ are defined by
$$\mathcal{E}_k = \left\{ (2i, 2i + 1) \in \mathcal{I}^2 \;\middle|\; z_{2i} = z_{2i+1} + k,\; z_{2i+1} \text{ is even} \right\}$$
$$\mathcal{O}_k = \left\{ (2i, 2i + 1) \in \mathcal{I}^2 \;\middle|\; z_{2i} = z_{2i+1} + k,\; z_{2i+1} \text{ is odd} \right\}. \quad (2.31)$$
We can observe that the LSB replacement technique never changes the trace set
Ck of a sample pair but can move sample pairs between trace subsets. Therefore,
we can establish the transition probabilities as functions of the embedding rate R, as
shown in Figure 2.6. Thus we can derive the relation between the trace subsets of a
stego-image and those of a cover-image
\begin{pmatrix} |E^s_{2k+1}| \\ |E^s_{2k}| \\ |O^s_{2k}| \\ |O^s_{2k-1}| \end{pmatrix}
=
\begin{pmatrix}
(1-\tfrac{R}{2})^2 & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R^2}{4} \\
\tfrac{R}{2}(1-\tfrac{R}{2}) & (1-\tfrac{R}{2})^2 & \tfrac{R^2}{4} & \tfrac{R}{2}(1-\tfrac{R}{2}) \\
\tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R^2}{4} & (1-\tfrac{R}{2})^2 & \tfrac{R}{2}(1-\tfrac{R}{2}) \\
\tfrac{R^2}{4} & \tfrac{R}{2}(1-\tfrac{R}{2}) & \tfrac{R}{2}(1-\tfrac{R}{2}) & (1-\tfrac{R}{2})^2
\end{pmatrix}
\begin{pmatrix} |E^c_{2k+1}| \\ |E^c_{2k}| \\ |O^c_{2k}| \\ |O^c_{2k-1}| \end{pmatrix},
   (2.32)
where E^c_k and O^c_k are the trace subsets of the cover-image, and E^s_k and O^s_k are the trace
subsets of the stego-image. Here |S| denotes the cardinality of the set S. After
inverting the transition matrix and assuming that |E^c_{2k+1}| = |O^c_{2k+1}|, we obtain the
quadratic equation
0 = R^2 \big( |C_k| - |C_{k+1}| \big) + 4 \big( |E^s_{2k+1}| - |O^s_{2k+1}| \big)
  + 2R \big( |E^s_{2k+2}| + |O^s_{2k+2}| - 2|E^s_{2k+1}| + 2|O^s_{2k+1}| - |E^s_{2k}| - |O^s_{2k}| \big).   (2.33)
The solution of Equation (2.33) is an estimator of the embedding rate R. The SPA
method was further improved by combining it with the Least Squares (LS) method [98]
and Maximum Likelihood estimation [99], or by generalizing from the analysis of pairs to the analysis of
k-tuples [100, 101].
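To illustrate the mechanics of the SPA estimator, the sketch below computes the trace-subset cardinalities of a grayscale image for a single trace set C_k and solves the quadratic (2.33); it is a simplified single-k illustration, not the full SPA detector, and the function names are ours.

```python
import numpy as np

def spa_estimate(z, k=0):
    """Minimal SPA-style estimate of the embedding rate R from one trace set C_k.

    z : 2-D array of integer pixel values; sample pairs are taken horizontally.
    """
    a = z[:, 0::2].ravel().astype(np.int64)   # z_{2i}
    b = z[:, 1::2].ravel().astype(np.int64)   # z_{2i+1}
    n = min(a.size, b.size)
    a, b = a[:n], b[:n]

    def card_E(m):   # |E_m|: z_{2i} = z_{2i+1} + m, z_{2i+1} even
        return int(np.sum((a == b + m) & (b % 2 == 0)))

    def card_O(m):   # |O_m|: z_{2i} = z_{2i+1} + m, z_{2i+1} odd
        return int(np.sum((a == b + m) & (b % 2 == 1)))

    def card_C(m):   # |C_m|: floor(z_{2i}/2) = floor(z_{2i+1}/2) + m
        return int(np.sum(a // 2 == b // 2 + m))

    # Coefficients of the quadratic (2.33): c2*R^2 + c1*R + c0 = 0
    c2 = card_C(k) - card_C(k + 1)
    c0 = 4 * (card_E(2 * k + 1) - card_O(2 * k + 1))
    c1 = 2 * (card_E(2 * k + 2) + card_O(2 * k + 2)
              - 2 * card_E(2 * k + 1) + 2 * card_O(2 * k + 1)
              - card_E(2 * k) - card_O(2 * k))
    roots = np.roots([c2, c1, c0])
    real = roots[np.isreal(roots)].real
    return float(min(real[real >= 0], default=0.0))  # smallest admissible root
```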
2.5.2.2 WS Detectors
WS detectors were originally proposed by J. Fridrich in [102] and then improved
in [103, 104]. The key idea of WS is that the embedding rate can be estimated via
the weight that minimizes the distance between the weighted stego-image and the
cover-image [95]. The weighted stego-image of the image Z, with scalar parameter λ,
is defined by

\forall i \in \mathcal{I}, \quad z_i^{(\lambda)} = (1-\lambda)\, z_i + \lambda\, \bar{z}_i, \quad \text{with } \bar{z}_i = z_i + (-1)^{z_i},   (2.34)

where \bar{z}_i denotes the pixel z_i with its LSB flipped.
The estimator R̂ is obtained by minimizing the Euclidean distance between the
weighted stego-image and the cover-image

\hat{R} = 2 \arg\min_{\lambda} \sum_{i=1}^{N} w_i \big( z_i^{(\lambda)} - c_i \big)^2,   (2.35)

where the normalized weight vector w, with \sum_{i=1}^{N} w_i = 1, is taken into account in
the minimization problem (2.35) to reflect the heterogeneity of a natural image. Setting
the first derivative in (2.35) to zero yields the simplified estimator

\hat{R} = 2 \sum_{i=1}^{N} w_i (z_i - \bar{z}_i)(z_i - c_i).   (2.36)
Since the cover-pixels c_i are unknown in advance, a local estimator of each pixel
from its spatial neighborhood, or more generally a linear filter D, can be employed
to provide an estimate of the cover-image: \hat{C} = D(Z). The estimator \hat{R} in (2.36) then follows
immediately. From these observations, the choice of an appropriate linear filter
D and weight vector w is crucial for improving the performance of WS detectors
[95, 103, 104].
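The following sketch illustrates a WS-style estimator built from (2.36); the 3×3 mean filter used as the cover predictor D and the inverse-local-variance weights are illustrative assumptions, not the tuned filters and weights of [103, 104].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ws_estimate(z):
    """WS-style estimate of the embedding rate R (Eq. 2.36), rough sketch.

    z : 2-D array of integer pixel values (grayscale image).
    """
    z = z.astype(np.float64)
    z_bar = z + 1.0 - 2.0 * (z % 2)                   # LSB-flipped image (z + (-1)^z)
    # Cover predictor D: plain 3x3 local mean.
    c_hat = uniform_filter(z, size=3, mode='reflect')
    # Heuristic weights: flat areas (low local variance) are more reliable.
    local_var = uniform_filter(z**2, size=3, mode='reflect') - c_hat**2
    w = 1.0 / (1.0 + np.maximum(local_var, 0.0))
    w /= w.sum()                                      # normalize so the weights sum to 1
    return float(2.0 * np.sum(w * (z - z_bar) * (z - c_hat)))
```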
Both structural detectors and WS detectors belong to the quantitative
steganalysis framework, which means that, instead of indicating whether a suspect image Z
is a cover-image or a stego-image, their output is a real-valued
estimate of the secret message length. In other words, even if no secret message is
embedded in the image, i.e. R = 0, we may still obtain a (positive or negative) non-zero value.
Nevertheless, quantitative detectors offer an additional advantage over statistical
detectors, namely that the detection performance can be measured by evaluating
the deviation of the estimator R̂ from the true embedding rate R. Criteria such as the
following can be used as measures of performance:
• Mean Absolute Error (MAE):

\mathrm{MAE} = \frac{1}{N_{im}} \sum_{n=1}^{N_{im}} |\hat{R}_n - R|,   (2.37)

where N_{im} is the number of images.
• Median Absolute Error (mAE):

\mathrm{mAE} = \mathrm{median}_n\, |\hat{R}_n - R|.   (2.38)
2.5.2.3 Statistical Detectors
In contrast to structural detectors and WS detectors, statistical detectors rely on
changes in statistical properties due to message embedding to detect the presence
of the secret message. The output of statistical detectors is a binary decision. Some
representatives are the χ² detector [105] and the Bayesian approach-based detector [106].
Another interesting approach is the one proposed in [107], which is based on
statistical hypothesis testing theory. To this end, two preliminary assumptions are
given in the following proposition:
Proposition 2.1. In the LSB replacement embedding technique, we assume that
1. The secret message bits are uniformly distributed over the cover-image, namely
that the probability of embedding a message bit into every cover-pixel is identical.
Moreover, message bits and cover-pixels are statistically uncorrelated
[107].
2. Secret message bits are independent and identically distributed (i.i.d.), and each
message bit m_i is drawn from the Binomial distribution B(1, 1/2)

P[m_i = 0] = P[m_i = 1] = \frac{1}{2},   (2.39)

where P[E] denotes the probability that an event E occurs.
Therefore, from the mechanism of LSB replacement, we can see that the probability
that a pixel does not change after embedding is 1 − R/2, while the probability that its
LSB is flipped is R/2:

P[s_i = c_i] = 1 - \frac{R}{2} \quad \text{and} \quad P[s_i = \bar{c}_i] = \frac{R}{2},   (2.40)

where \bar{c}_i denotes the cover-pixel c_i with its LSB flipped.
Let P0 be the probability distribution of cover-images. Due to message embedding
at rate R, whose properties are given in Proposition 2.1, the cover-image moves
from the probability distribution P0 to a different probability distribution, denoted
PR. Thus the steganalysis problem (2.29) can be rewritten as follows

H0 : Z ∼ P0,
H1 : Z ∼ PR.   (2.41)
Based on the assumption that all pixels are independent and identically distributed,
the authors of [107] developed two schemes depending on the knowledge of
the probability distribution P0. When the probability distribution P0 is not known,
they study the asymptotically optimal detector (as the number of pixels
N → ∞) according to Hoeffding’s test [108]. When the probability distribution
P0 is known in advance, an optimal detector is given in the sense of Neyman-Pearson
[20, Theorem 3.2.1]. Although the statistical detector proposed in [107]
is interesting from a theoretical point of view, its performance in practice is quite
moderate, due to the fact that the cover model used in [107] is not sufficiently
accurate to describe a natural image. The assumption of independence between
pixels does not hold since the image structure and the non-stationarity of the noise
during the image acquisition process are not taken into account.
Some later works [109–114] rely on a simple local polynomial model, in which
the pixels’ expectations differ from one another, in order to design a statistical detector that provides
high detection performance compared with structural and WS detectors. Rather than assuming
that all the pixels are i.i.d. as in [107], those works propose to model each
cover-pixel by a Gaussian distribution, c_i ∼ N(µ_i, σ²_i), in order to design a
Likelihood Ratio Test (LRT) in which the Likelihood Ratio (LR) Λ is given by

\Lambda(Z) \propto \sum_i \frac{1}{\sigma_i^2} (z_i - \bar{z}_i)(z_i - \mu_i).   (2.42)
The LRT is the most powerful test in the Neyman-Pearson sense [20,
Theorem 3.2.1]; it simultaneously meets two criteria of optimality: warranting a
prescribed false alarm probability and maximizing the correct detection probability.
Moreover, a specific contribution of this approach is to show that the WS detector [102–104]
is in fact a variant of the LRT, which explains the good detection performance of
such an ad hoc detector. Besides, hypothesis testing theory has also been extended to
other, more complex embedding algorithms, e.g. LSB matching [115, 116].
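As an illustration, the sketch below evaluates the LR statistic (2.42) with crude local estimates of µ_i and σ²_i obtained from a 3×3 neighborhood; these estimators are assumptions for the sake of the example, not those of [109–114]. The decision is then taken by comparing the statistic with a threshold chosen to warrant the prescribed false alarm probability.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lr_statistic(z):
    """Likelihood Ratio statistic of Eq. (2.42), sketched with crude local estimates.

    z : 2-D array of integer pixel values.
    """
    z = z.astype(np.float64)
    z_bar = z + 1.0 - 2.0 * (z % 2)                           # LSB-flipped pixels
    mu_hat = uniform_filter(z, size=3, mode='reflect')        # local expectation
    var_hat = uniform_filter(z**2, size=3, mode='reflect') - mu_hat**2
    var_hat = np.maximum(var_hat, 1e-3)                       # avoid division by zero
    return float(np.sum((z - z_bar) * (z - mu_hat) / var_hat))
```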
2.5.2.4 Universal Classifiers
The three previous families of detectors target a specific steganographic algorithm,
namely LSB replacement. In other words, these three families rely on the
assumption that the steganalyst knows in advance the embedding algorithm used by the
steganographer. Such a scenario may not be realistic in practice. Universal
classifiers are employed by steganalysts to work in a blind manner in order to
discriminate stego-images from cover-images. Even though universal classifiers have
lower performance than detectors targeted at a specific embedding algorithm, they remain important
because of their flexibility and their ability to be adjusted to completely unknown
steganographic methods.
Typically, universal classifiers can be divided into two types: supervised and
unsupervised. Supervised classification [45, 76, 117–120] has already been discussed
in Section 2.3. While supervised classification requires knowing in advance the
label of each image (i.e. cover-image or stego-image) and then builds a classifier
based on the labeled images, unsupervised classification works in a scenario of unlabeled
images and classifies them automatically without user intervention. The accuracy of
supervised classifiers is limited if the training data is not perfectly representative of
the cover source, which may result in the cover-source mismatch problem [121]. Unsupervised classifiers
try to overcome this problem of model mismatch by postponing the building of a cover
model until the classification stage. However, to the best of our knowledge, there
is not yet a reliable method dealing with this scenario in steganalysis.
In universal steganalysis, the design of features is of crucial importance. Features
used for classification should be sensitive to changes caused by embedding, yet
insensitive to variations between covers, including some non-steganographic processing
techniques. In general, the choice of suitable features and machine learning
tools remain open problems [121].
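As a schematic illustration of the supervised setting, the sketch below trains a Fisher Linear Discriminant on hand-crafted feature vectors; the toy feature extractor (a histogram of clipped pixel differences) merely stands in for the rich feature sets of [45, 76, 117–120].

```python
import numpy as np

def features(z, T=3):
    """Toy feature vector: normalized histogram of clipped horizontal pixel differences."""
    d = np.clip(np.diff(z.astype(np.int64), axis=1), -T, T).ravel()
    hist = np.bincount(d + T, minlength=2 * T + 1).astype(np.float64)
    return hist / hist.sum()

def train_fld(X_cover, X_stego):
    """Fisher Linear Discriminant: returns (projection vector, decision threshold)."""
    m0, m1 = X_cover.mean(axis=0), X_stego.mean(axis=0)
    Sw = np.cov(X_cover, rowvar=False) + np.cov(X_stego, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = 0.5 * (X_cover @ w).mean() + 0.5 * (X_stego @ w).mean()
    return w, threshold

def predict(X, w, threshold):
    """Return 1 for stego, 0 for cover."""
    return (X @ w > threshold).astype(int)
```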
2.5.3 Steganalysis of Jsteg Algorithm
Like the steganalysis of LSB replacement in the spatial domain, existing methods for the steganalysis
of the Jsteg algorithm can also be divided into four categories. Structural
detectors detect the presence of a secret message by exploiting the symmetry of the
histogram of DCT coefficients in natural images, which is disturbed by the operation
of Jsteg embedding. Some representative structural detectors are the Zhang
and Ping (ZP) detector [122], the DCT coefficient-based detector [123], and the category
attack [124, 125]. Furthermore, the power of structural detectors can be combined
with the theoretically well-founded ML principle [99] or the concept of Zero Message Hypothesis
(ZMH) [96]. These two approaches have been formally analyzed in [126].
Similar to structural detectors for the steganalysis of the LSB replacement technique, the
ZMH framework starts by choosing a feature vector x of the cover-image (e.g. the trace
subsets in the case of the SPA method), establishes the change in the feature vector x due to
the embedding algorithm Emb, then inverts the embedding effects to provide a hypothetical
feature vector x̂

\hat{x} = \mathrm{Emb}^{-1}(x^r, r),   (2.43)

where x^r is the stego feature vector and r is the change rate, defined as the ratio between
the number of modified DCT coefficients and the maximum number of usable coefficients,
thus r = R/2. Using cover assumptions and zero-message properties (e.g.
the natural symmetry of the histogram of DCT coefficients), an appropriate penalty
function zmh(x) ≥ 0 is defined so that it returns zero on cover features and a nonzero
value otherwise. Therefore, the change-rate estimator r̂ is defined as the solution of
a minimization problem

\hat{r} = \arg\min_{r \ge 0} \mathrm{zmh}(\hat{x}) = \arg\min_{r \ge 0} \mathrm{zmh}\big(\mathrm{Emb}^{-1}(x^r, r)\big).   (2.44)
The minimization in (2.44) can be performed either analytically or numerically, by
implementing a one-dimensional gradient-descent search over r. The main interest
of [126] is that all the features proposed in [104, 122, 124, 125] have been revisited within
the ZMH framework. The detector proposed in [123] has also been improved in [126]
within the ML framework, using a more accurate model of DCT coefficients, namely
the Generalized Cauchy distribution. It can be noted that although ZMH is only a
heuristic framework and is less statistically rigorous than the ML framework, it has some
important advantages in terms of low computational complexity and flexibility.
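A minimal numerical sketch of the ZMH recipe (2.44) is given below, using a one-dimensional grid search instead of gradient descent; the callables invert_embedding and zmh_penalty are placeholders for whichever feature model and cover assumption (e.g. histogram symmetry) one chooses.

```python
import numpy as np

def zmh_change_rate(x_stego, invert_embedding, zmh_penalty, r_grid=None):
    """Estimate the change rate r by the ZMH recipe of Eq. (2.44).

    x_stego          : feature vector extracted from the suspect image.
    invert_embedding : callable (x_stego, r) -> hypothetical cover feature vector,
                       i.e. the inverse Emb^{-1} of the chosen feature model.
    zmh_penalty      : callable (x) -> non-negative penalty, zero on cover features.
    """
    if r_grid is None:
        r_grid = np.linspace(0.0, 0.5, 501)   # r = R/2 lies in [0, 1/2]
    penalties = [zmh_penalty(invert_embedding(x_stego, r)) for r in r_grid]
    return float(r_grid[int(np.argmin(penalties))])
```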
Although the Jsteg algorithm replaces LSBs by secret message bits in the DCT domain,
the mathematical foundation of WS detectors can also be applied to the steganalysis
of Jsteg [127, 128]. Given a vector of AC coefficients D = {D_1, D_2, . . . , D_N}, the
WS-like detector is given by

\hat{R} \propto \sum_i w_i (D_i - \bar{D}_i)\, D_i.   (2.45)

The difference between the WS detector in (2.36) and the one in (2.45) is that
the local predictor for the cover AC coefficients is omitted, since the expected value of
AC coefficients is zero in natural images. The weight w_i of each coefficient D_i is
estimated from the coefficients at the same location as D_i in the four adjacent
blocks. More details are provided in [128].
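A rough sketch of (2.45) is shown below for quantized DCT coefficients arranged as (block row, block column, frequency); the inverse-variance weights computed from the four neighboring blocks follow the spirit of [128], but the exact weighting and normalization used there may differ.

```python
import numpy as np

def ws_jsteg_estimate(D):
    """WS-like Jsteg estimator of Eq. (2.45), sketched.

    D : array of shape (Br, Bc, 64) of quantized DCT coefficients per 8x8 block.
    """
    D = D.astype(np.float64)
    ac = D[:, :, 1:]                                   # drop the DC coefficient
    D_bar = ac + 1.0 - 2.0 * (ac % 2)                  # LSB-flipped coefficients
    # Variance of each coefficient estimated from the 4 adjacent blocks.
    padded = np.pad(ac, ((1, 1), (1, 1), (0, 0)), mode='edge')
    neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
    w = 1.0 / (1.0 + neighbors.var(axis=0))
    usable = (ac != 0) & (ac != 1)                     # Jsteg skips values 0 and 1
    # (2.45) is defined up to a proportionality constant; this normalization is arbitrary.
    stat = np.sum(w * (ac - D_bar) * ac * usable)
    return float(2.0 * stat / max(usable.sum(), 1))
```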
Hypothesis testing theory was also applied to the steganalysis of the Jsteg algorithm.
Relying on the Laplacian model of DCT coefficients, a statistical test was
designed in [129]. However, it revealed a considerable loss of power, due to the fact
that the Laplacian model is not accurate enough to characterize DCT coefficients.
2.6 Conclusion
This chapter discusses the emerging field of digital image forensics, which consists of two
main problems: image origin identification and image forgery detection. To address
these problems, the active forensic approach generates extrinsic
fingerprints and adds them to the digital image during the image formation
process, thus creating a trustworthy digital camera. However, the active approach is of
limited applicability due to the strict constraints of its protocols. Therefore, the passive
forensic approach has evolved considerably to help solve these problems in their
entirety. This approach relies on intrinsic traces left by digital cameras in the
image processing pipeline, and by the manipulations themselves, to gather forensic
evidence of image origin or forgery. Some intrinsic fingerprints for the identification of the
image source, such as lens aberration, PRNU, CFA pattern and interpolation, and
JPEG compression, are reviewed. The task of steganalysis, which aims to detect the
mere presence of a secret message in a digital image, is also discussed in this chapter.
The state of the art shows that most existing methods have been designed
within a classification framework. The hypothesis testing framework has been exploited
only to a limited extent, although it offers many advantages, namely that the statistical
performance of detectors can be analytically established and a prescribed
false alarm probability can be guaranteed. Besides, existing methods are designed
using simplistic image models, which results in overall poor detection performance.
This thesis focuses on applying hypothesis testing theory to digital image forensics
based on an accurate image model, which is established by modeling the main
steps of the image processing pipeline. These aspects will be discussed in the rest
of the thesis.

Chapter 3
Overview on Statistical Modeling
of Natural Images
Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Spatial-Domain Image Model . . . . . . . . . . . . . . . . . . 41
3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model . . . . . . 42
3.2.2 Non-Linear Signal-Dependent Noise Model . . . . . . . . . . 44
3.3 DCT Coefficient Model . . . . . . . . . . . . . . . . . . . . . . 45
3.3.1 First-Order Statistics of DCT Coefficients . . . . . . . . . . . 45
3.3.2 Higher-Order Statistics of DCT Coefficients . . . . . . . . . . 46
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1 Introduction
The application of hypothesis testing theory to digital image forensics requires an accurate
statistical image model in order to achieve high detection performance. For instance,
the PRNU-based image origin identification [30, 32] takes into account the various noise
sources present during image acquisition inside a digital camera, which provides an image
model allowing accurate extraction of the fingerprint for source identification. An
inaccurate image model results in poor detection performance, as in the case
of the statistical detectors [107, 129]. Therefore, in this chapter, the state of the art on
the statistical modeling of natural images is reviewed. Statistical image modeling
can be performed either in the spatial domain or in the DCT domain.
The chapter is organized as follows. Section 3.2 analyzes noise statistics in the
spatial domain and presents some dominant image models widely used in image
processing. Section 3.3 discusses empirical statistical models of DCT coefficients.
Finally, Section 3.4 concludes the chapter.
3.2 Spatial-Domain Image Model
In this section, we adopt the representation of an arbitrary image Z as a column
vector of length N = Nr × Nc. The representation as a two-dimensional matrix is of
no interest in the study of statistical noise properties. The index of the color channel is
omitted for simplicity. Due to the stochastic nature of noise, a pixel is regarded as
a random variable. Generally, the random variable z_i, i ∈ I, can be decomposed as

z_i = \mu_{z_i} + \eta_{z_i},   (3.1)

where I = {1, . . . , N} denotes the set of pixel indices, µ_{z_i} denotes the expectation
of the pixel z_i in the absence of noise, and η_{z_i} accounts for all noise sources that
interfere with the original signal. By convention, µ_X and σ²_X denote respectively
the expectation and the variance of a random variable X. Here, the expectation µ_{z_i} is
considered deterministic and will not be modeled. However, the expectations differ
from one pixel to another due to the heterogeneity of a natural image. From (3.1), it is easily
seen that the variance of the noise η_{z_i} is equal to the variance of the pixel z_i, i.e. σ²_{z_i} = σ²_{η_{z_i}}.
Several models have been proposed in the literature for the noise η_{z_i} in an uncompressed
image. They can be classified into two groups: signal-independent
and signal-dependent noise models. While signal-independent noise models assume
the stationarity of the noise over the whole image, regardless of the original pixel intensity,
signal-dependent noise models take into account the dependence of the
noise variance on the original pixel intensity. A typical example of
signal-independent noise is Additive White Gaussian Noise (AWGN). Signal-dependent
noise models include Poisson noise or film-grain noise [130], Poisson-Gaussian
noise [131, 132], the heteroscedastic noise model [133, 134], and the non-linear
noise model [135]. Although the AWGN model is widely adopted in image processing
because of its simplicity, it ignores the contribution of Poisson noise to the
image acquisition chain, which matters for an image acquired by a digital camera.
Noise sources in a natural image are inherently signal-dependent. Therefore,
a signal-dependent noise model is preferred in further applications.
Since our work mainly focuses on signal-dependent noise, only the group of
signal-dependent noise models is discussed in this section.
3.2.1 Poisson-Gaussian and Heteroscedastic Noise Model
The study of noise statistics requires taking into account the impact of Poisson
noise, related to the stochastic nature of the photon-counting process and of the dark current
[131–134, 136]. Let ξ_i denote the number of collected electrons with respect to the
pixel z_i. The number of collected electrons ξ_i follows the Poisson distribution with
mean λ_i and variance λ_i

\xi_i \sim \mathcal{P}(\lambda_i).   (3.2)

This Poisson noise results in the dependence of the noise variance on the original pixel
intensity. The number of collected electrons is further degraded by the AWGN
read-out noise η_r with variance ω². Therefore, the RAW image pixel recorded by
the image sensor can be defined as [136]

z_i = a \cdot \big( \xi_i + \eta_r \big),   (3.3)

where a is the analog gain controlled by the ISO sensitivity. This leads to the
statistical distribution of the RAW pixel z_i

z_i \sim a \cdot \big[ \mathcal{P}(\lambda_i) + \mathcal{N}(0, \omega^2) \big].   (3.4)
This model is referred to as the Poisson-Gaussian noise model [131, 132]. One interesting
property of this model is the linear relation between a pixel's expectation and its variance.
Taking the mathematical expectation and variance of (3.4), we obtain

\mu_{z_i} = \mathbb{E}[z_i] = a \cdot \lambda_i,   (3.5)
\sigma^2_{z_i} = \mathrm{Var}[z_i] = a^2 \cdot (\lambda_i + \omega^2),   (3.6)

where E[X] and Var[X] denote respectively the mathematical expectation and the variance
of a random variable X. Consequently, the heteroscedastic relation
is derived as

\sigma^2_{z_i} = a \cdot \mu_{z_i} + b,   (3.7)

where b = a²ω².
In some image sensors, a base pedestal parameter p_0 may be added to the collected
electrons ξ_i to constitute an offset-from-zero of the output pixel [133]

z_i \sim a \cdot \big[ p_0 + \mathcal{P}(\lambda_i - p_0) + \mathcal{N}(0, \omega^2) \big].   (3.8)

Hence, the parameter b is now given by b = a²ω² − a²p_0. Therefore, the parameter
b can be negative when p_0 > ω².
To facilitate the application of this signal-dependent noise model, some works
[133, 134] have approximated the Poisson distribution by the Gaussian
distribution, by virtue of the large number of collected electrons

\mathcal{P}(\lambda_i) \approx \mathcal{N}(\lambda_i, \lambda_i).   (3.9)

In fact, for λ_i ≥ 50 the Gaussian approximation is already very accurate [133],
while the full-well capacity is typically well above 100,000 electrons. Finally, the statistical
distribution of the RAW pixel z_i can be approximated as

z_i \sim \mathcal{N}\big( \mu_{z_i},\ a \cdot \mu_{z_i} + b \big).   (3.10)

This model is referred to as the heteroscedastic noise model in [134]. The term "heteroscedasticity"
means that each pixel exhibits a different variability from the others.
Both the Poisson-Gaussian and the heteroscedastic noise model characterize a RAW image
more accurately than the conventional AWGN model, but they do not yet take
into account non-linear post-acquisition operations. Therefore, they are not
appropriate for modeling a TIFF or JPEG image. Besides, it should be noted
that the Poisson-Gaussian and heteroscedastic noise models assume that the effect
of the PRNU is negligible, namely that all the pixels respond to the incident light
uniformly. The very small variation in pixel response does not strongly affect the
statistical distribution [136].
3.2.2 Non-Linear Signal-Dependent Noise Model
To establish a statistical model of a natural image in TIFF or JPEG format, it is
necessary to take into account the effects of the post-acquisition operations in the image
processing pipeline. However, as discussed in Section 2.2, the whole image processing
pipeline is far from simple, and some processing steps implemented in
a digital camera are difficult to model parametrically. One approach is to consider
the digital camera as a black box for which we attempt to establish a relation
between input irradiance and output intensity. This relation is called the Camera Response
Function (CRF), which is described by a sophisticated non-linear function
f_{CRF}(·) [137]. Gamma correction, with only one parameter, might be the simplest model for the CRF.
Other parametric models have been proposed for the CRF, such as the
polynomial model [137] or the generalized gamma curve model [138].
Therefore, the pixel z_i can be formally defined as [135, 139]

z_i = f_{\mathrm{CRF}}\big( E_i + \eta_{E_i} \big),   (3.11)

where E_i denotes the image irradiance and η_{E_i} accounts for all signal-independent
and signal-dependent noise sources. We note that although some methodologies
have been proposed for the estimation of the CRF [50, 137, 140], it remains difficult to study
noise statistics with such sophisticated models.
To facilitate the study of noise statistics, the authors of [135] exploit the first-order
Taylor series expansion

z_i = f_{\mathrm{CRF}}\big( E_i + \eta_{E_i} \big) \approx f_{\mathrm{CRF}}(E_i) + f'_{\mathrm{CRF}}(E_i)\, \eta_{E_i},   (3.12)

where f'_{CRF} denotes the first derivative of the CRF f_{CRF}. Therefore, a relation
between the noise before and after transformation by the CRF is obtained

\eta_{z_i} = f'_{\mathrm{CRF}}(E_i)\, \eta_{E_i}.   (3.13)

It can be noted that, even when the noise before transformation is independent of the
signal, the non-linear transformation f_{CRF} generates a dependence between a pixel's
expectation and its variance.
Based on experimental observations, the authors of [135] obtain a non-linear
parametric model

z_i = \mu_{z_i} + \mu_{z_i}^{\tilde{\gamma}} \cdot \eta_u,   (3.14)

where η_u is zero-mean stationary Gaussian noise, η_u ∼ N(0, σ²_{η_u}), and γ̃ is an
exponent accounting for the non-linearity of the camera response. Taking the
variance of both sides of (3.14), we obtain

\sigma^2_{z_i} = \mu_{z_i}^{2\tilde{\gamma}} \cdot \mathrm{Var}[\eta_u] = \mu_{z_i}^{2\tilde{\gamma}} \cdot \sigma^2_{\eta_u}.   (3.15)

In this model, the pixel z_i still follows the Gaussian distribution, and the noise
variance σ²_{η_{z_i}} depends non-linearly on the original pixel intensity µ_{z_i}

z_i \sim \mathcal{N}\big( \mu_{z_i},\ \mu_{z_i}^{2\tilde{\gamma}} \cdot \sigma^2_{\eta_u} \big).   (3.16)

This model can represent several kinds of noise, such as film-grain or Poisson
noise, by changing the parameters γ̃ and σ²_{η_u} (e.g. γ̃ = 0.5 for Poisson-like noise).
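The following short sketch, with arbitrary parameter values, draws samples from the non-linear model (3.14) and checks the power-law relation (3.15) between expectation and variance.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_t, sigma_u = 0.5, 0.8               # arbitrary model parameters (gamma~, sigma_eta_u)

for mu in (10.0, 50.0, 200.0, 1000.0):    # a few pixel expectations
    z = mu + mu**gamma_t * rng.normal(0.0, sigma_u, size=200_000)   # Eq. (3.14)
    predicted_var = mu**(2 * gamma_t) * sigma_u**2                  # Eq. (3.15)
    print(f"mu={mu:7.1f}  empirical var={z.var():10.3f}  predicted={predicted_var:10.3f}")
```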
3.3 DCT Coefficient Model
3.3.1 First-Order Statistics of DCT Coefficients
Apart from modeling an image in the spatial domain, many studies attempt to
model it in the DCT domain, since the DCT is a fundamental operation in JPEG
compression. The modeling of DCT coefficients has been studied extensively in the
literature. However, a majority of DCT coefficient models have been proposed
without any mathematical foundation or analysis. Many studies focus
on comparing the empirical data with a variety of popular statistical models by
conducting goodness-of-fit (GOF) tests, e.g. the Kolmogorov-Smirnov (KS) or χ²
test. Firstly, the Gaussian model for the DCT coefficients was conjectured in [141].
The Laplacian model was verified in [142] by performing the KS test. This Laplacian
model remains a dominant choice in image processing because of its simplicity and
relative accuracy. Other possible models, such as the Gaussian mixture [143] and the Cauchy
model [144], were also proposed. In order to model the DCT coefficients more accurately, the
previous models were extended to generalized versions, including the Generalized
Gaussian (GG) [145] and the Generalized Gamma (GΓ) [146] models. It has been
recently reported in [146] that the GΓ model outperforms the Laplacian and GG
models. Rather than providing a mathematical foundation for a DCT coefficient model, these
empirical models were only verified using GOF tests on a few standard images. Thus,
they cannot guarantee the accuracy of the chosen model over a wide range of images,
which leads to a lack of robustness.
The first mathematical analysis of DCT coefficients is given in [147]. It relies
on a doubly stochastic model that combines the statistics of DCT coefficients within a block
of constant variance with the variability of the block variance across a natural image.
However, this analysis is incomplete due to the lack of mathematical justification
for the block variance model. Nevertheless, it remains of interest for further
improvements. Therefore, we discuss this mathematical
foundation here.
Let I denote an AC coefficient and σ²_{blk} denote the block variance. The DC coefficient
is not addressed in this work [147]. The frequency index is omitted for the sake
of clarity. Using conditional probability, the doubly stochastic model is given
by

f_I(x) = \int_0^{\infty} f_{I \mid \sigma^2_{\mathrm{blk}}}(x \mid t)\, f_{\sigma^2_{\mathrm{blk}}}(t)\, \mathrm{d}t, \quad x \in \mathbb{R},   (3.17)

where f_X(x) denotes the probability density function (pdf) of a random variable
X. This doubly stochastic model can be considered as an infinite mixture of Gaussian
distributions [148, 149]. From the definition of the DCT coefficients in (2.11), it can be
noted that each DCT coefficient is a weighted sum of random variables. If the block
variance σ²_{blk} is constant, the AC coefficient I can be approximated as a zero-mean
Gaussian random variable, based on the Central Limit Theorem (CLT)

f_{I \mid \sigma^2_{\mathrm{blk}}}(x \mid t) = \frac{1}{\sqrt{2\pi t}} \exp\left( -\frac{x^2}{2t} \right).   (3.18)
Even though the pixels are spatially correlated within an 8 × 8 block, due to the demosaicing
algorithms implemented in a digital camera, the CLT can still be used for the Gaussian
approximation of a sum of correlated random variables [150]. It remains to find the
pdf of σ²_{blk} in order to derive the final pdf of the AC coefficient I. To this end, it was reported
in [147], from experimental observations, that the block variance σ²_{blk} can be modeled
by an exponential or a half-Gaussian distribution. These two distributions lead to
a Laplacian distribution for the DCT coefficient I [147]. However, as stated above,
because the pdf of the block variance σ²_{blk} is not mathematically justified,
this mathematical framework is incomplete.
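A quick Monte Carlo sketch of the doubly stochastic model (3.17) is given below: drawing the block variance from an exponential distribution and the coefficient from (3.18) yields a heavy-tailed marginal whose excess kurtosis is close to 3, as expected for a Laplacian distribution; the exponential scale used here is arbitrary.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
n = 1_000_000
block_var = rng.exponential(scale=25.0, size=n)      # sigma^2_blk ~ exponential (assumed scale)
coeff = rng.normal(0.0, np.sqrt(block_var))          # I | sigma^2_blk ~ N(0, sigma^2_blk), Eq. (3.18)

# A Laplacian random variable has excess kurtosis 3; the Gaussian scale
# mixture above should be close to that value.
print("excess kurtosis:", kurtosis(coeff))
```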
3.3.2 Higher-Order Statistics of DCT Coefficients
The above discussion only considers the first-order statistics (i.e. the histogram) of
DCT coefficients. The DCT coefficients at the same frequency are collected and
treated separately. An implicit assumption adopted in this procedure is that the
DCT coefficients at the same frequency are i.i.d. realizations of a random variable.
However, this is not always true in a natural image, because DCT coefficients exhibit
dependencies (or correlations) between them. There are two fundamental kinds of
correlation between DCT coefficients [151], which have been successfully exploited
in some applications [126, 151, 152]:
1. intra-block correlation: A well-known feature of DCT coefficients in a natural
image is that the magnitudes of the AC coefficients decrease as the frequency
increases along the zig-zag order. This correlation reflects the dependence
between DCT coefficients within the same 8×8 block. Typically, this correlation
is weak since coefficients at different frequencies correspond to different basis
functions.
2. inter-block correlation: Although the DCT basis provides good decorrelation,
the resulting coefficients are still slightly correlated with their neighbors at
the same frequency. We refer to this kind of correlation as inter-block correlation.
In general, the correlation between DCT coefficients can be captured by an adjacency
matrix [126].
3.4 Conclusion
This chapter reviews some statistical image models in the spatial domain and in the DCT domain.
In the spatial domain, the two groups of signal-independent and signal-dependent
noise models are discussed. From the above statistical analysis, we can draw an important
insight: noise in natural images is inherently signal-dependent. In the DCT
domain, some empirical models of DCT coefficients are presented. However, most
DCT coefficient models are given without mathematical justification. It is therefore still
necessary to establish an accurate image model that can be exploited in further
applications.

Part II
Statistical Modeling and
Estimation for Natural Images
from RAW Format to JPEG
Format

Chapter 4
Statistical Image Modeling and
Estimation of Model Parameters
Contents
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.2 Statistical Modeling of RAW Images . . . . . . . . . . . . . . 50
4.2.1 Heteroscedastic Noise Model . . . . . . . . . . . . . . . . . . 50
4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2.2.1 WLS Estimation . . . . . . . . . . . . . . . . . . . . 53
4.2.2.2 Statistical Properties of WLS Estimates . . . . . . . 54
4.3 Statistical Modeling of TIFF Images . . . . . . . . . . . . . . 57
4.3.1 Generalized Noise Model . . . . . . . . . . . . . . . . . . . . . 57
4.3.2 Estimation of Parameters (˜a, ˜b) in the Generalized Noise Model 59
4.3.2.1 Edge Detection and Image Segmentation . . . . . . 59
4.3.2.2 Maximum Likelihood Estimation . . . . . . . . . . . 60
4.3.3 Application to Image Denoising . . . . . . . . . . . . . . . . . 61
4.3.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 62
4.4 Statistical Modeling in DCT Domain . . . . . . . . . . . . . 65
4.4.1 Statistical Model of Quantized DCT Coefficients . . . . . . . 65
4.4.1.1 Statistical Model of Block Variance and Unquantized
DCT Coefficients . . . . . . . . . . . . . . . . . . . . 65
4.4.1.2 Impact of Quantization . . . . . . . . . . . . . . . . 68
4.4.2 Estimation of Parameters (α, β) from Unquantized DCT Coefficients
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.3 Estimation of Parameters (α, β) from Quantized DCT Coeffi-
cients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.4 Numerical Experiments . . . . . . . . . . . . . . . . . . . . . 71
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1 Introduction
Chapter 3 presented an overview of the statistical modeling of natural images in
the spatial domain and in the DCT domain. Most existing models in the literature are
empirical. The goal of this chapter is to establish a mathematical framework
for studying the statistical properties of natural images along the image processing
pipeline of a digital camera. The study is performed in the spatial domain and in the DCT
domain. In the spatial domain, the heteroscedastic noise model is first recalled,
and a method for estimating the parameters of the heteroscedastic noise model following
the Weighted Least Squares (WLS) approach is proposed in Section 4.2. The
analytic expression of the WLS estimates allows us to study their statistical properties,
which is important for designing statistical tests. The WLS estimation of the
parameters (a, b) has been presented in [134].
Next, Section 4.3 studies noise statistics in a TIFF image by
starting from the heteroscedastic noise model and taking into account the effect of
gamma correction, resulting in the generalized signal-dependent noise model. It is
shown that the generalized noise model is also relevant for characterizing JPEG images
with moderate-to-high quality factors (Q ≥ 70). This section also proposes a method
that accurately estimates the parameters of the generalized noise model from a
single image. Numerical results on a large image database show the relevance of the
proposed method. The generalized noise model could be useful in many applications;
a direct application to image denoising is proposed in this section. The foundation
of the generalized noise model and the estimation of its parameters have been presented
in [153].
Section 4.4 describes the mathematical framework for modeling the statistical
distribution of DCT coefficients. To simplify the study, the approach is based on
the main assumption that the pixels are identically distributed (not necessarily independent)
within an 8 × 8 block. Consequently, the statistical distribution of the block
variance can be approximated, and thus a model of unquantized DCT coefficients is
provided. Moreover, it is proposed to take into account the quantization operation
in order to provide a final model of quantized DCT coefficients. The parameters of the DCT
coefficient model can be estimated following the ML approach. Numerical results
show that the proposed model outperforms other existing models, including the Laplacian,
GG, and GΓ models. Section 4.5 concludes the chapter. The foundation of the
DCT coefficient model has been presented in [154].
4.2 Statistical Modeling of RAW Images
4.2.1 Heteroscedastic Noise Model
The RAW image acquisition has been discussed in Section 2.2.1. Let Z = (z_i)_{i∈I}
denote a RAW image acquired by the image sensor. Typically, the model of a RAW
pixel consists of a Poissonian part, which accounts for the photon shot noise and the dark
current, and a Gaussian part for the remaining stationary disturbances, e.g. read-out noise.

[Figure 4.1: Scatter-plot of pixels' expectation and variance from a natural RAW
image at ISO 200 captured by Nikon D70 and Nikon D200 cameras. The image
is segmented into homogeneous segments; in each segment, the expectation and
variance are calculated, and the parameters (a, b) are estimated as proposed in
Section 4.2.2. The dashed line is drawn using the estimated parameters (a, b). Only the
red channel is used in this experiment.]

For the sake of simplification, the Gaussian approximation of the Poisson
distribution can be exploited because of a large number of collected electrons, which
leads to the heteroscedastic noise model [133, 134]
z_i \sim \mathcal{N}\big( \mu_i,\ a\mu_i + b \big),   (4.1)
where µ_i denotes the expectation of the pixel z_i. The heteroscedastic noise model,
which gives the noise variance as a linear function of the pixel's expectation, characterizes
a RAW image more accurately than the conventional AWGN model. The
heteroscedastic noise model (4.1) is illustrated in Figure 4.1. It is assumed that the
noise corrupting each RAW pixel is statistically independent of that of the neighboring
pixels [133, 136]. In this section it is also assumed, for the sake of simplification, that the
phenomenon of clipping is absent from a natural RAW image, i.e. the probability
that an observation z_i exceeds the boundaries 0 or B = 2^ν − 1 is negligible. More
details about the phenomenon of clipping are given in [133, 155] and in Chapter 8.
In practice, the PRNU weakly affects the parameter a in the heteroscedastic
noise model (4.1). Nevertheless, in the problem of camera model identification, the
PRNU is assumed to be negligible, i.e. the parameter a remains constant for every
pixel.
4.2.2 Estimation of Parameters (a, b) in the Heteroscedastic Noise
Model
Estimation of the noise model parameters can be performed from a single image or
from multiple images. From a practical point of view, we mainly focus on noise model
parameter estimation from a single image. Several methods have been proposed
in the literature for the estimation of signal-dependent noise model parameters, see
[133–135, 139, 156]. They rely on similar basic steps but differ in details. The
common methodology starts by obtaining local estimates of the noise variance and of the
image content, then fits a curve to the resulting scatter-plot based on the
prior knowledge of the noise model. The existing methods face two main difficulties:
the influence of the image content and the spatial correlation of noise in a natural image. In
fact, the homogeneous regions in which local expectations and variances are estimated
are obtained by performing edge detection and image segmentation. However, the
accuracy of those local estimates may be degraded by the presence of outliers
(textures, details and edges) in the homogeneous regions. Moreover, because of
the spatial correlation between pixels, the local estimates of the noise variance can be
overestimated. Overall, these two difficulties may result in an inaccurate estimation of the
noise parameters.
For the design of the subsequent tests, the parameters (a, b) should be estimated
following the ML approach, and the statistical properties of the ML estimates should be
analytically established. One interesting method is proposed in [133] for the ML estimation
of the parameters (a, b). However, that method cannot provide an analytic
expression of the ML estimates, due to the difficulty of solving the complicated system
of partial derivatives. Therefore, the ML estimates are only computed numerically,
using the Nelder-Mead optimization method [157]. Although the ML estimates given
by that method are relatively accurate, the method has three main drawbacks. First,
the convergence of the maximization process and the sensitivity of the solution to
initial conditions have not been analyzed yet. Second, the Bayesian approach used
in [133] with a fixed uniform distribution might be doubtful in practice. Finally, it
seems impossible to establish the statistical properties of the estimates.
This section proposes a method for the estimation of the parameters (a, b) from a single
image. The proposed method relies on the same image segmentation technique
as [133] in order to obtain local estimates in homogeneous regions. Subsequently,
the proposed method is based on the WLS approach, in order to take into account the
heteroscedasticity and the statistical properties of the local estimates. One important advantage
is that the WLS estimates can be provided analytically, which allows us to
study their statistical properties. Moreover, the WLS estimates are
asymptotically equivalent to the ML estimates in large samples when the weights
are consistently estimated, as explained in [158, 159].
4.2.2.1 WLS Estimation
The RAW image Z is first transformed into the wavelet domain and then segmented
into K non-overlapping homogeneous segments, denoted S_k, of size n_k,
k ∈ {1, . . . , K}. The reader is referred to [133] for more details on the segmentation
technique. In each segment S_k, the pixels are assumed to be i.i.d., thus they
have the same expectation and variance. Let z^{wapp}_k = (z^{wapp}_{k,i})_{i∈{1,...,n_k}} and
z^{wdet}_k = (z^{wdet}_{k,i})_{i∈{1,...,n_k}} be respectively the vector of wavelet approximation coefficients
and the vector of wavelet detail coefficients representing the segment S_k. Because the
transformation is linear, the coefficients z^{wapp}_{k,i} and z^{wdet}_{k,i} also follow Gaussian
distributions

z^{wapp}_{k,i} \sim \mathcal{N}\big( \mu_k,\ \|\varphi\|_2^2\, \sigma_k^2 \big),   (4.2)
z^{wdet}_{k,i} \sim \mathcal{N}\big( 0,\ \sigma_k^2 \big),   (4.3)

where µ_k denotes the expectation of all pixels in the segment S_k, σ²_k = aµ_k + b, and φ
is the 2-D normalized wavelet scaling function. Hence, the ML estimates of the local
expectation µ_k and of the local variance σ²_k are given by

\hat{\mu}_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z^{wapp}_{k,i},   (4.4)
\hat{\sigma}_k^2 = \frac{1}{n_k - 1} \sum_{i=1}^{n_k} \Big( z^{wdet}_{k,i} - \bar{z}^{wdet}_k \Big)^2,
\quad \text{with } \bar{z}^{wdet}_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z^{wdet}_{k,i}.   (4.5)
The estimate µ̂_k is unbiased and follows the Gaussian distribution

\hat{\mu}_k \sim \mathcal{N}\Big( \mu_k,\ \frac{\|\varphi\|_2^2}{n_k}\, \sigma_k^2 \Big),   (4.6)

while the estimate σ̂²_k follows a scaled chi-square distribution with n_k − 1 degrees
of freedom. For large n_k, this distribution can also be accurately approximated by a Gaussian
distribution [160]:

\hat{\sigma}_k^2 \sim \mathcal{N}\Big( \sigma_k^2,\ \frac{2}{n_k - 1}\, \sigma_k^4 \Big).   (4.7)

Figure 4.1 illustrates the scatter-plot of all the pairs {(µ̂_k, σ̂²_k)} extracted from real
natural RAW images of Nikon D70 and Nikon D200 cameras.
The parameters (a, b) are estimated by considering all the pairs {(µ̂_k, σ̂²_k)}_{k=1}^{K},
where the local variance σ̂²_k is treated as a heteroscedastic model of the local expectation
µ̂_k. This model is formulated as follows

\hat{\sigma}_k^2 = a \hat{\mu}_k + b + s_k \epsilon_k,   (4.8)

where the ε_k are independent random variables identically distributed as a standard Gaussian
variable and s_k is a function of the local mean µ_k. A direct calculation from (4.8) shows that

s_k^2 = \mathrm{Var}\big[\hat{\sigma}_k^2\big] - \mathrm{Var}\big[a\hat{\mu}_k + b\big]
      = \frac{2}{n_k - 1}\,\sigma_k^4 - a^2 \frac{\|\varphi\|_2^2}{n_k}\,\sigma_k^2
      = \frac{2}{n_k - 1}\,(a\mu_k + b)^2 - a^2 \frac{\|\varphi\|_2^2}{n_k}\,(a\mu_k + b).   (4.9)
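To illustrate the WLS step described above, the sketch below fits (a, b) from the pairs (µ̂_k, σ̂²_k) by iteratively re-weighted least squares, with weights 1/s²_k computed from (4.9); the segmentation and wavelet transform are assumed to have been performed beforehand, and the fixed number of iterations is an arbitrary choice rather than part of the method of [134].

```python
import numpy as np

def wls_fit(mu_hat, var_hat, n_k, phi_norm2=1.0, n_iter=5):
    """Estimate (a, b) of the heteroscedastic model (4.1) by weighted least squares.

    mu_hat, var_hat : local expectations and variances of the K homogeneous segments
    n_k             : number of coefficients in each segment (array of length K)
    phi_norm2       : squared L2 norm of the 2-D wavelet scaling function
    """
    X = np.column_stack([mu_hat, np.ones_like(mu_hat)])
    a, b = np.linalg.lstsq(X, var_hat, rcond=None)[0]        # ordinary LS initialization
    for _ in range(n_iter):
        sigma2 = np.maximum(a * mu_hat + b, 1e-12)
        # Weights from Eq. (4.9): inverse variance of the local variance estimate.
        s2 = 2.0 / (n_k - 1) * sigma2**2 - a**2 * phi_norm2 / n_k * sigma2
        w = 1.0 / np.maximum(s2, 1e-12)
        Xw = X * w[:, None]
        a, b = np.linalg.solve(X.T @ Xw, Xw.T @ var_hat)     # weighted normal equations
    return a, b
```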
Inférence d’invariants pour le model checking de
systèmes paramétrés
Alain Mebsout
To cite this version:
Alain Mebsout. Inférence d'invariants pour le model checking de systèmes paramétrés. Other.
Université Paris Sud - Paris XI, 2014. French.
HAL Id: tel-01073980
https://tel.archives-ouvertes.fr/tel-01073980
Submitted on 11 Oct 2014
Université Paris-Sud
École Doctorale d'Informatique
Laboratoire de Recherche en Informatique
Discipline : Informatique
Thèse de doctorat
soutenue le 29 septembre 2014
par
Alain Mebsout
Inférence d’Invariants pour le Model
Checking de Systèmes Paramétrés
Directeur de thèse : M. Sylvain Conchon Professeur (Université Paris-Sud)
Co-encadrante : Mme Fatiha Zaïdi Maître de conférences (Université Paris-Sud)
Composition du jury :
Président du jury : M. Philippe Dague Professeur (Université Paris-Sud)
Rapporteurs : M. Ahmed Bouajjani Professeur (Université Paris Diderot)
M. Silvio Ranise Chercheur (Fondazione Bruno Kessler)
Examinateurs : M. Rémi Delmas Ingénieur de recherche (ONERA)
M. Alan Schmitt Chargé de recherche (Inria Rennes)

À Magali.

Résumé
Cette thèse aborde le problème de la vérification automatique de systèmes paramétrés
complexes. Cette approche est importante car elle permet de garantir certaines propriétés
sans connaître a priori le nombre de composants du système. On s’intéresse en particulier
à la sûreté de ces systèmes et on traite le côté paramétré du problème avec des méthodes
symboliques. Ces travaux s’inscrivent dans le cadre théorique du model checking modulo
théories et ont donné lieu à un nouveau model checker : Cubicle.
Une des contributions principales de cette thèse est une nouvelle technique pour inférer
des invariants de manière automatique. Le processus de génération d'invariants est intégré
à l'algorithme de model checking et permet de vérifier en pratique des systèmes hors de
portée des approches symboliques traditionnelles. Une des applications principales de cet
algorithme est l’analyse de sûreté paramétrée de protocoles de cohérence de cache de taille
industrielle.
Enfin, pour répondre au problème de la confiance placée dans le model checker, on
présente deux techniques de certification de notre outil Cubicle utilisant la plate-forme
Why3. La première consiste à générer des certificats dont la validité est évaluée de manière
indépendante tandis que la seconde est une approche par vérification déductive du cœur
de Cubicle.
Abstract
This thesis tackles the problem of automatically verifying complex parameterized systems.
This approach is important because it can guarantee that some properties hold
without knowing a priori the number of components in the system. We focus in particular
on the safety of such systems and we handle the parameterized aspect with symbolic
methods. This work is set in the theoretical framework of the model checking modulo
theories and resulted in a new model checker: Cubicle.
One of the main contributions of this thesis is a novel technique for automatically inferring
invariants. The process of invariant generation is integrated with the model checking
algorithm and allows the verification in practice of systems which are out of reach for
traditional symbolic approaches. One successful application of this algorithm is the safety
analysis of industrial size parameterized cache coherence protocols.
Finally, to address the problem of trusting the answer given by the model checker, we
present two techniques for certifying our tool Cubicle based on the framework Why3. The
first consists in producing certificates whose validity can be assessed independently while
the second is an approach by deductive verication of the heart of Cubicle.
Remerciements
Mes remerciements vont en tout premier lieu à mes encadrants de thèse, qui m’ont accompagné
tout au long de ces années. Ils ont su orienter mes recherches tout en m’accordant
une liberté et une autonomie très appréciables. Les discussions aussi bien professionnelles
que personnelles ont été très enrichissantes et leur bonne humeur a contribué à faire de ces
trois (et cinq) années un plaisir. Merci à Fatiha pour sa disponibilité et son soutien. Je tiens
particulièrement à remercier Sylvain d’avoir cru en moi et pour son optimisme constant. Il
m’apparaît aujourd’hui que cette dernière qualité est essentielle à tout bon chercheur et
j’espère en avoir tiré les enseignements.
Merci également à mes rapporteurs, Ahmed Bouajjani et Silvio Ranise, d’avoir accepté
de relire une version préliminaire de ce document et pour leurs corrections et remarques
pertinentes. Je remercie aussi les autres examinateurs, Philippe Dague, Rémi Delmas et
Alan Schmitt, d’avoir accepté de faire partie de mon jury.
Le bon déroulement de cette thèse doit aussi beaucoup aux membres de l’équipe VALS
(anciennement Proval puis Toccata). Ils ont su maintenir une ambiance chaleureuse et
amicale tout au long des divers changements de noms, déménagements et remaniements
politiques. Les galettes des rois, les gâteaux, les paris footballistiques, et les discussions
du coin café resteront sans aucun doute gravés dans ma mémoire. Pour tout cela je les en
remercie.
Je tiens à remercier tout particulièrement Mohamed pour avoir partagé un bureau avec
moi pendant près de cinq années. J’espère avoir été un aussi bon co-bureau qu’il l’a été, en
tout cas les collaborations et les discussions autour d’Alt-Ergo furent très fructueuses. Merci
également à Régine pour son aide dans la vie de doctorant d’un laboratoire de recherche et
pour sa capacité à transformer la plus intimidante des procédures administratives en une
simple tâche.
Merci à Jean-Christophe, Guillaume, François, Andrei, Évelyne et Romain pour leurs
nombreux conseils et aides techniques. Merci aussi à Sava Krstić et Amit Goel d'Intel pour
leur coopération scientifique autour de Cubicle.
Enn un grand merci à toute ma famille pour leur soutien moral tout au long de ma
thèse. Je souhaite remercier en particulier mes parents qui ont cru en moi et m’ont soutenu
dans mes choix. Merci également à mes sœurs pour leurs nombreux encouragements. Pour
finir, je tiens à remercier ma moitié, Magali, pour son soutien sans faille et pour son aide.
Rien de ceci n'aurait été possible sans elle.
Table des matières
Table des matières 1
Table des figures 5
Liste des Algorithmes 7
1 Introduction 9
1.1 Model checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.1 Systèmes finis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.2 Systèmes infinis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Model checking de systèmes paramétrés . . . . . . . . . . . . . . . . . . . 14
1.2.1 Méthodes incomplètes . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Fragments décidables . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Plan de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2 Le model checker Cubicle 19
2.1 Langage d’entrée . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.1 Algorithme d’exclusion mutuelle . . . . . . . . . . . . . . . . . . . 23
2.2.2 Généralisation de l’algorithme de Dekker . . . . . . . . . . . . . . 25
2.2.3 Boulangerie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.4 Cohérence de Cache . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Non-atomicité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 Logique multi-sortée et systèmes à tableaux . . . . . . . . . . . . . . . . . 36
2.4.1 Syntaxe des formules logiques . . . . . . . . . . . . . . . . . . . . . 37
2.4.2 Sémantique de la logique . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 Systèmes de transition à tableaux . . . . . . . . . . . . . . . . . . . 41
2.5 Sémantique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.1 Sémantique opérationnelle . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.2 Atteignabilité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.5.3 Un interpréteur de systèmes à tableaux . . . . . . . . . . . . . . . . 47
3 Cadre théorique : model checking modulo théories 49
3.1 Analyse de sûreté des systèmes à tableaux . . . . . . . . . . . . . . . . . . 50
3.1.1 Atteignabilité par chaînage arrière . . . . . . . . . . . . . . . . . . 50
3.1.2 Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.1.3 Effectivité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2 Terminaison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2.1 Indécidabilité de l’atteignabilité . . . . . . . . . . . . . . . . . . . . 57
3.2.2 Conditions pour la terminaison . . . . . . . . . . . . . . . . . . . . 59
3.2.3 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3 Gardes universelles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3.1 Travaux connexes . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3.2 Calcul de pré-image approximé . . . . . . . . . . . . . . . . . . . . 69
3.3.3 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.3.4 Relation avec le modèle de panne franche et la relativisation des
quantificateurs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.4.1 Exemples sans existence d’un bel ordre . . . . . . . . . . . . . . . . 74
3.4.2 Résumé . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.4.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4 Optimisations et implémentation 79
4.1 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.2 Optimisations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.1 Appels au solveur SMT . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 Tests ensemblistes . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2.3 Instantiation efficace . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.3 Suppressions a posteriori . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.4 Sous-typage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.5 Exploration parallèle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.6 Résultats et conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5 Inférence d’invariants 105
5.1 Atteignabilité approximée avec retour en arrière . . . . . . . . . . . . . . . 107
5.1.1 Illustration sur un exemple . . . . . . . . . . . . . . . . . . . . . . 107
5.1.2 Algorithme abstrait . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
5.1.3 Algorithme complet pour les systèmes à tableaux . . . . . . . . . . 113
5.2 Heuristiques et détails d’implémentation . . . . . . . . . . . . . . . . . . . 117
5.2.1 Oracle : exploration avant bornée . . . . . . . . . . . . . . . . . . . 117
5.2.2 Extraction des candidats invariants . . . . . . . . . . . . . . . . . . 119
5.2.3 Retour en arrière . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.2.4 Invariants numériques . . . . . . . . . . . . . . . . . . . . . . . . . 123
5.2.5 Implémentation dans Cubicle . . . . . . . . . . . . . . . . . . . . . 124
5.3 Évaluation expérimentale . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.4 Étude de cas : Le protocole FLASH . . . . . . . . . . . . . . . . . . . . . . . 130
5.4.1 Description du protocole FLASH . . . . . . . . . . . . . . . . . . . 131
5.4.2 Vérification du FLASH : État de l'art . . . . . . . . . . . . . . . . . 133
5.4.3 Modélisation dans Cubicle . . . . . . . . . . . . . . . . . . . . . . . 135
5.4.4 Résultats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.5 Travaux connexes sur les invariants . . . . . . . . . . . . . . . . . . . . . . 139
5.5.1 Génération d’invariants . . . . . . . . . . . . . . . . . . . . . . . . 139
5.5.2 Cutoffs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.5.3 Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6 Certification 145
6.1 Techniques de certification d'outils de vérification . . . . . . . . . . . . 145
6.2 La plateforme de vérification déductive Why3 . . . . . . . . . . . . . . 147
6.3 Production de certificats . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6.3.1 Invariants inductifs pour l’atteignabilité arrière . . . . . . . . . . . 147
6.3.2 Invariants inductifs et BRAB . . . . . . . . . . . . . . . . . . . . . . 152
6.4 Preuve dans Why3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7 Conclusion et perspectives 161
7.1 Résumé des contributions et conclusion . . . . . . . . . . . . . . . . . . . . 161
7.2 Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
A Syntaxe et typage des programmes Cubicle 165
A.1 Syntaxe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
A.2 Typage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Bibliographie 173
Index 187
Table des figures
2.1 Algorithme d’exclusion mutuelle . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Code Cubicle du mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.3 Graphe de Dekker pour le processus i . . . . . . . . . . . . . . . . . . . . . 26
2.4 Code Cubicle de l’algorithme de Dekker . . . . . . . . . . . . . . . . . . . . 27
2.5 Code Cubicle de l’algorithme de la boulangerie de Lamport . . . . . . . . . 29
2.6 Diagramme d’état du protocole German-esque . . . . . . . . . . . . . . . . 31
2.7 Code Cubicle du protocole de cohérence de cache German-esque . . . . . . 32
2.8 Évaluation non atomique des conditions globales par un processus i . . . . 34
2.9 Encodage de l’évaluation non atomique de la condition globale ∀j ≠ i. c(j) 35
2.10 Grammaire de la logique . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.1 Machine de Minsky à deux compteurs . . . . . . . . . . . . . . . . . . . . 58
3.2 Traduction d’un programme et d’une machine de Minsky dans un système
à tableaux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Définition des configurations M et N . . . . . . . . . . . . . . . . . . . . . 61
3.4 Plongement de M vers N . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.5 Séquence finie d’idéaux inclus calculée par l’algorithme 4 . . . . . . . . . . 64
3.6 Trace fallacieuse pour un système avec gardes universelles . . . . . . . . . 71
3.7 Transformation avec modèle de panne franche . . . . . . . . . . . . . . . . 72
3.8 Trace fallacieuse pour la transformation avec modèle de panne franche . . 73
3.9 Relations entre les différentes approches pour les gardes universelles . . . 74
3.10 Séquence infinie de configurations non comparables pour l’encodage des
machines de Minsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.11 Séquence infinie de configurations non comparables pour une relation
binaire quelconque . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.12 Différences de restrictions entre la théorie du model checking modulo
théories et Cubicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.1 Architecture de Cubicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.2 Benchmarks et statistiques pour une implémentation naïve . . . . . . . . . 83
4.3 Arbre préfixe représentant V . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.4 Benchmarks pour tests ensemblistes . . . . . . . . . . . . . . . . . . . . . . 87
4.5 Benchmarks pour l’instanciation efficace . . . . . . . . . . . . . . . . . . . 91
4.6 Graphes d’atteignabilité arrière pour différentes stratégies d’exploration
sur l’exemple Dijkstra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.7 Suppression a posteriori . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.8 Préservation de l’invariant après suppression d’un nœud . . . . . . . . . . 95
4.9 Benchmarks pour la suppression a posteriori . . . . . . . . . . . . . . . . . 95
4.10 Système Cubicle annoté avec les contraintes de sous-typage . . . . . . . . 97
4.11 Benchmarks pour l’analyse de sous-typage . . . . . . . . . . . . . . . . . . 98
4.12 Une mauvaise synchronisation des tests de subsomption effectués en parallèle 100
4.13 Utilisation CPU pour les versions séquentielle et parallèle de Cubicle . . . 101
4.14 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.1 Système de transition à tableaux du protocole German-esque . . . . . . . . 108
5.2 Exécution partielle de BRAB sur le protocole German-esque . . . . . . . . 110
5.3 Transition sur un état avec variable non-initialisée . . . . . . . . . . . . . . 119
5.4 Apprentissage à partir d’une exploration supplémentaire avant redémarrage 122
5.5 Architecture de Cubicle avec BRAB . . . . . . . . . . . . . . . . . . . . . . 125
5.6 Résultats de BRAB sur un ensemble de benchmarks . . . . . . . . . . . . . 128
5.7 Architecture FLASH d’une machine et d’un nœud . . . . . . . . . . . . . . 130
5.8 Structure d’un message dans le protocole FLASH . . . . . . . . . . . . . . . 132
5.9 Description des transitions du protocole FLASH . . . . . . . . . . . . . . . 134
5.10 Résultats pour la vérification du protocole FLASH avec Cubicle . . . . . . 138
5.11 Protocoles de cohérence de cache hiérarchiques . . . . . . . . . . . . . . . 143
6.1 Invariants inductifs calculés par des analyses d’atteignabilité avant et arrière 148
6.2 Vérification du certificat Why3 de German-esque par différents prouveurs
automatiques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.3 Invariant inductif calculé par BRAB . . . . . . . . . . . . . . . . . . . . . . 154
6.4 Vérification par différents prouveurs automatiques du certificat (Why3) de
German-esque généré par BRAB . . . . . . . . . . . . . . . . . . . . . . . . 154
6.5 Vérification de certificats sur un ensemble de benchmarks . . . . . . . . . 155
6.6 Informations sur le développement Why3 . . . . . . . . . . . . . . . . . . 158
6.7 Aperçu d’une technique d’extraction . . . . . . . . . . . . . . . . . . . . . 159
A.1 Grammaire des fichiers Cubicle . . . . . . . . . . . . . . . . . . . . . . . . 167
A.2 Règles de typage des termes . . . . . . . . . . . . . . . . . . . . . . . . . . 169
A.3 Règles de typage des formules . . . . . . . . . . . . . . . . . . . . . . . . . 170
A.4 Règles de typage des actions des transitions . . . . . . . . . . . . . . . . . 170
A.5 Vérification de la bonne formation des types . . . . . . . . . . . . . . . . . 171
A.6 Règles de typage des déclarations . . . . . . . . . . . . . . . . . . . . . . . 171
Liste des Algorithmes
1 Code de Dekker pour le processus i . . . . . . . . . . . . . . . . . . . . . . . 26
2 Pseudo-code de la boulangerie de Lamport pour le processus i . . . . . . . . 29
3 Interpréteur d’un système à tableaux . . . . . . . . . . . . . . . . . . . . . . 47
4 Analyse d’atteignabilité par chaînage arrière . . . . . . . . . . . . . . . . . . 53
5 Test de satisfiabilité naïf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6 Analyse d’atteignabilité abstraite avec approximations et retour en arrière
(BRAB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7 Analyse d’atteignabilité avec approximations et retour en arrière (BRAB) . . 115
8 Oracle : Exploration avant limitée en profondeur . . . . . . . . . . . . . . . 116
1
Introduction
Sommaire
1.1 Model checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.1.1 Systèmes finis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1.2 Systèmes infinis . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Model checking de systèmes paramétrés . . . . . . . . . . . . . . . 14
1.2.1 Méthodes incomplètes . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Fragments décidables . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Plan de la thèse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Les systèmes informatiques sont aujourd’hui omniprésents, aussi bien dans les objets
anodins de la vie courante que dans les systèmes critiques comme les contrôleurs automatiques
utilisés par l’industrie aéronautique. Tous ces systèmes sont généralement très
complexes, il est donc particulièrement difficile d’en construire qui ne comportent pas
d’erreurs. On peut notamment constater le succès récent des architectures multi-cœurs,
multi-processeurs et distribuées pour les serveurs haut de gamme mais aussi pour les
terminaux mobiles personnels. Un des composants les plus complexes de telles machines
est leur protocole de cohérence de cache. Pour preuve, le célèbre dicton :
« Il y a seulement deux choses compliquées en informatique : l’invalidation des
caches et nommer les choses. »
— Phil Karlton
En effet, pour fonctionner de manière optimale, chaque composant qui partage la mémoire
(processeur, cœur, etc.) possède son propre cache (une zone mémoire temporaire) lui
permettant de conserver les données auxquelles il a récemment accédé. Le protocole en
question assure que tous les caches du système se trouvent dans un état cohérent, ce qui en
fait un élément vital. Les méthodes les plus courantes pour garantir la qualité des systèmes
informatiques sont le test et la simulation. Cependant, ces techniques sont très peu adaptées
aux programmes concurrents comme les protocoles de cohérence de cache.
Pour garantir une efficacité optimale, ces protocoles sont souvent implantés au niveau
matériel et fonctionnent par échanges de messages, de manière entièrement asynchrone.
Le moment auquel un message arrivera ou un processeur accédera à la mémoire centrale
est donc totalement imprévisible [36]. Pour concevoir de tels systèmes on doit alors
considérer de nombreuses « courses critiques » (ou race conditions en anglais), c’est-à-dire
des comportements qui dépendent de l’ordre d’exécution ou d’arrivée des messages. Ces
scénarios sont par ailleurs très compliqués, faisant intervenir plusieurs dizaines d’échanges
de messages. Ainsi, la rareté de leurs apparitions fait qu’il est très difficile de reproduire
ces comportements par des méthodes de test et de simulation.
Une réponse à ce problème est l’utilisation de méthodes formelles pouvant garantir
certaines propriétés d’un système par des arguments mathématiques. De nombreuses
techniques revendiquent leur appartenance à cette catégorie, comme le model checking,
qui s’attache à considérer tous les comportements possibles d’un système afin d’en vérifier
différentes propriétés.
1.1 Model checking
La technique du model checking a été inventée pour résoudre le problème difficile de la
vérification de programmes concurrents. Avant 1982, les recherches sur ce sujet intégraient
systématiquement l’emploi de la preuve manuelle [131]. Pnueli [137], Owicki et Lamport
[132] proposèrent, vers la fin des années 70, l’usage de la logique temporelle pour spécifier
des propriétés parmi lesquelles :
— la sûreté : un mauvais comportement ne se produit jamais, ou
— la vivacité : un comportement attendu finira par arriver.
Le terme model dans l’expression « model checking » peut faire référence à deux choses.
Le premier sens du terme renvoie à l’idée qu’on va représenter un programme par un
modèle abstrait. Appelons ce modèle M. Le deuxième sens du terme fait référence au fait
qu’on va ensuite essayer de vérifier des propriétés de M, autrement dit que M est un modèle
de ces propriétés. Le model checking implique donc de définir « simplement » ce qu’est (1)
un modèle, et (2) une propriété d’un modèle.
1. Un modèle doit répondre aux trois questions suivantes :
a) À tout instant, qu’est-ce qui caractérise l’état d’un programme ?
b) Quel est l’état initial du système ?
c) Comment passe-t-on d’un état à un autre ?
2. Les propriétés qu’on cherche à vérifier sont diverses : Est-ce qu’un mauvais état peut
être atteint (à partir de l’état initial) ? Est-ce qu’un certain état sera atteint quoi qu’il
advienne ? Plus généralement, on souhaite vérifier des propriétés qui dépendent de
la manière dont le programme (ou le modèle) évolue. On exprime alors ces formules
dans des logiques temporelles.
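À titre purement illustratif (les notations S, S₀ et → ne constituent qu’un résumé informel et ne préjugent pas des définitions formelles du chapitre 2), on peut voir un modèle comme un système de transition et la sûreté comme l’inatteignabilité d’états « mauvais » :
\[ M = (S,\; S_0,\; \rightarrow), \qquad S_0 \subseteq S, \qquad \rightarrow\ \subseteq S \times S \]
\[ M \text{ est sûr vis-à-vis de } \mathit{Bad} \subseteq S \iff \nexists\, s_0 \rightarrow s_1 \rightarrow \cdots \rightarrow s_n \text{ avec } s_0 \in S_0 \text{ et } s_n \in \mathit{Bad} \]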
1.1.1 Systèmes finis
Les travaux fondateurs de Clarke et Emerson [38] et de Queille et Sifakis [140] intègrent
cette notion de logique temporelle à l’exploration de l’ensemble des états du système
(espace d’état). Dans leur article de 1981 [38], Clarke et Emerson montrent comment
synthétiser des programmes à partir de spécifications dans une logique temporelle (appelée
CTL, pour Computational Tree Logic). Mais c’est surtout la seconde partie de l’article qui
a retenu l’attention de la communauté. Dans celle-ci, ils construisent une procédure de
décision fondée sur des calculs de points fixes pour vérifier des propriétés temporelles de
programmes finis.
L’avantage principal du model checking par rapport aux autres techniques de preuve
est double. C’est d’abord un processus automatique et rapide, de sorte qu’il est souvent
considéré comme une technique « presse-bouton ». C’est aussi une méthode qui fonctionne
déjà avec des spécifications partielles. De cette façon, l’effort de vérification peut être commencé
tôt dans le processus de conception d’un système complexe. Un de ses désavantages,
qui est aussi inhérent à toutes les autres techniques de vérification, est que la rédaction des
spécifications est une tâche difficile qui requiert de l’expertise. Mais l’inconvénient majeur
du model checking est apparu dans les années 80, et est connu sous le nom du phénomène
d’explosion combinatoire de l’espace d’états. Seuls les systèmes avec un petit nombre d’états
(de l’ordre du million) pouvaient être analysés à cette époque, alors que les systèmes réels
en possèdent beaucoup plus. Par conséquent, un important corpus de recherche sur le
model checking aborde le problème du passage à l’échelle.
Une avancée notoire pour le model checking a été l’introduction de techniques symboliques
pour résoudre ce problème d’explosion. Alors que la plupart des approches avant
1990 utilisaient des représentations explicites, où chaque état individuel est stocké en
mémoire, Burch et al. ont montré qu’il était possible de représenter de manière symbolique
et compacte des ensembles d’états [29]. McMillan décrit dans sa thèse [116] une représentation
de la relation de transition entre états avec des BDD [27] (diagrammes de décision binaire),
puis donne plusieurs algorithmes fondés sur des graphes dans le langage du µ-calcul.
L’utilisation de structures de données compactes pour représenter de larges ensembles
d’états permet de saisir certaines des régularités qui apparaissent naturellement dans les
circuits ou autres systèmes. La complexité en espace du model checking symbolique a
été grandement diminuée et, pour la première fois, des systèmes avec 10²⁰ états ont été
vérifiés. Cette limite a encore été repoussée avec divers raffinements du model checking
symbolique.
Le point faible des techniques symboliques utilisant des BDD est que la quantité d’espace
mémoire requise pour stocker ces structures peut augmenter de façon exponentielle. Biere
et al. décrivent une technique appelée le model checking borné (ou BMC pour Bounded Model
Checking) qui sacrifie la correction au profit d’une recherche d’anomalies efficace [19].
L’idée du BMC est de chercher les contre-exemples parmi les exécutions du programme
dont la taille est limitée à un certain nombre d’étapes k. Ce problème peut être encodé
efficacement en une formule propositionnelle dont la satisfiabilité mène directement à
un contre-exemple. Si aucune erreur n’est trouvée pour des traces de longueur inférieure
à k, alors la valeur de la borne k est augmentée jusqu’à ce qu’une erreur soit trouvée,
ou que le problème devienne trop difficile à résoudre 1. La force de cette technique vient
principalement de la mise à profit des progrès faits par les solveurs SAT (pour la satisfiabilité
booléenne) modernes. Elle a rencontré un succès majeur dans la vérification d’implémentations
de circuits matériels et fait aujourd’hui partie de l’attirail standard des concepteurs
de tels circuits. Comme l’encodage vers des contraintes propositionnelles capture la
sémantique des circuits de manière précise, ces model checkers ont permis de découvrir
des erreurs subtiles d’implémentation, dues par exemple à la présence de débordements
arithmétiques.
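À titre d’illustration (schéma générique rappelé ici, indépendamment de l’encodage précis utilisé dans [19]), en notant I les états initiaux, T la relation de transition et P la propriété de sûreté, le déroulage du BMC à la profondeur k s’écrit :
\[ \mathrm{BMC}_k \;=\; I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{k} \neg P(s_i) \]
Cette formule est satisfiable si et seulement s’il existe une exécution d’au plus k étapes violant P.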
Une extension de cette technique pour la preuve de propriétés, plutôt que la seule
recherche de bogues, consiste à ajouter une étape d’induction. Cette extension de BMC
s’appelle la k-induction et diffère d’un schéma d’induction classique par le point suivant :
lorsqu’on demande à vérifier que la propriété d’induction est préservée, on suppose qu’elle
est vraie dans les k étapes précédentes plutôt que seulement dans l’étape précédente [52,
151].
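Avec les mêmes notations, et toujours de façon schématique, les deux obligations de preuve de la k-induction sont la validité des deux implications suivantes :
\[ I(s_0) \wedge \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\Longrightarrow\; \bigwedge_{i=0}^{k} P(s_i)
\qquad\qquad
\bigwedge_{i=0}^{k} \big( P(s_i) \wedge T(s_i, s_{i+1}) \big) \;\Longrightarrow\; P(s_{k+1}) \]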
Une autre vision du model checking est centrée sur la théorie des automates. Dans
ces approches, spécifications et implémentations sont toutes deux construites avec des
automates. Vardi et Wolper notent une correspondance entre la logique temporelle et les
automates de Büchi [164] : chaque formule de logique temporelle peut être considérée
comme un automate à états finis (sur des mots infinis) qui accepte précisément les séquences
satisfaites par la formule. Grâce à cette connexion, ils réduisent le problème du model
checking à un test de vacuité de l’automate A_M ∩ A_¬φ (où A_M est l’automate du programme
M et A_¬φ est l’automate acceptant les séquences qui violent la propriété φ) [161].
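Autrement dit, la réduction se résume à l’équivalence classique :
\[ M \models \varphi \;\Longleftrightarrow\; \mathcal{L}(\mathcal{A}_M \cap \mathcal{A}_{\neg\varphi}) = \emptyset \]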
1.1.2 Systèmes infinis
Le problème d’explosion combinatoire est encore plus frappant pour des systèmes
avec un nombre infini d’états. C’est le cas par exemple lorsque les types des variables du
1. Dans certains cas un majorant sur la borne k est connu (le seuil de complétion) qui permet d’affirmer
que le système vérifie la propriété donnée.
programme sont infinis (e.g. entiers mathématiques) ou les structures de données sont
infinies (e.g. buffers, files d’attente, mémoires non bornées). Pour traiter de tels systèmes,
deux possibilités s’offrent alors : manipuler des représentations d’ensembles d’états infinis
directement, ou construire une abstraction finie du système.
Dans la première approche, des représentations symboliques adaptées reposent par
exemple sur l’utilisation des formules logiques du premier ordre. Pour cela, les techniques
utilisant précédemment les solveurs SAT (traitant exclusivement de domaines finis encodés
par de la logique propositionnelle) ont été peu à peu migrées vers les solveurs SMT
(Satisfiabilité Modulo Théories). Ces derniers possèdent un moteur propositionnel (un
solveur SAT) couplé à des méthodes de combinaison de théories. La puissance de ces
solveurs vient du grand nombre de théories qu’ils supportent en interne, comme la théorie
de l’arithmétique linéaire (sur entiers mathématiques), la théorie de l’égalité et des fonctions
non interprétées, la théorie des tableaux, la théorie des vecteurs de bits, la théorie des types
énumérés, etc. Certains solveurs SMT supportent même les quantificateurs. Par exemple,
une application récente de la technique de k-induction aux programmes Lustre (un langage
synchrone utilisé notamment dans l’aéronautique) utilisant des solveurs SMT est disponible
dans le model checker Kind [85, 86].
La seconde approche pour les systèmes infinis – qui peut parfois être utilisée en complément
de la technique précédente – consiste à sacrifier la précision de l’analyse en
simplifiant le problème afin de se ramener à l’analyse d’un système fini. La plupart de ces
idées émergent d’une certaine manière du cadre de l’interprétation abstraite dans lequel un
programme est interprété sur un domaine abstrait [46,47]. Une forme d’abstraction particulière
est celle de l’abstraction par prédicats. Dans cette approche, la fonction d’abstraction
associe chaque état du système à un ensemble prédéfini de prédicats. L’utilisateur fournit
des prédicats booléens en nombre fini pour décrire les propriétés possibles d’un système
d’états infinis. Ensuite une analyse d’accessibilité est effectuée sur le modèle d’états finis
pour fournir l’invariant le plus fort possible exprimable à l’aide de ces prédicats [81].
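De façon schématique (présentation usuelle, rappelée ici pour fixer les idées), étant donnés des prédicats p₁, …, pₙ, la fonction d’abstraction envoie chaque état concret sur le vecteur de leurs valeurs de vérité, et la relation de transition abstraite sur-approxime la relation concrète :
\[ \alpha(s) = \big( p_1(s), \ldots, p_n(s) \big) \in \mathbb{B}^n,
\qquad
\hat{s} \rightarrow_{\alpha} \hat{s}' \;\iff\; \exists\, s, s'.\; \alpha(s) = \hat{s} \,\wedge\, \alpha(s') = \hat{s}' \,\wedge\, s \rightarrow s' \]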
Généralement, les abstractions accélèrent la procédure de model checking lorsqu’elles
sont les plus générales possibles. Parfois trop grossières, ces abstractions peuvent empêcher
l’analyse d’un système pourtant sûr en exposant des contre-exemples qui n’en sont pas dans
le système réel. Il devient alors intéressant de raffiner le domaine abstrait utilisé précédemment
de manière automatique afin d’éliminer ces mauvais contre-exemples. Cette idée a
donné naissance à la stratégie itérative appelée CEGAR (pour Counter-Example Guided Abstraction
Refinement, i.e. raffinement d’abstraction guidé par contre-exemples) [13, 35, 147].
Elle est utilisée par de nombreux outils et plusieurs améliorations ont été proposées au
fil des années. On peut mentionner par exemple les travaux de Jhala et al. [89, 94] sur
l’abstraction paresseuse implémentés dans le model checker Blast [18] ainsi que ceux de
McMillan [119] qui combine cette technique avec un raffinement par interpolation de
Craig [48] permettant ainsi de capturer les relations entre variables utiles à la preuve de la
propriété souhaitée.
Un système peut aussi être infini, non pas parce que ses variables sont dans des domaines
infinis, mais parce qu’il est formé d’un nombre non borné de composants. Par exemple, un
protocole de communication peut être conçu pour fonctionner quel que soit le nombre de
machines qui y participent. Ces systèmes sont dits paramétrés, et c’est le problème de leur
vérification qui nous intéresse plus particulièrement dans cette thèse.
1.2 Model checking de systèmes paramétrés
Un grand nombre de systèmes réels concurrents comme le matériel informatique ou
les circuits électroniques ont en fait un nombre fini d’états possibles (déterminés par la
taille des registres, le nombre de bascules, etc.). Toutefois, ces circuits et protocoles (e.g.
les protocoles de bus ou les protocoles de cohérence de cache) sont souvent conçus de
façon paramétrée, définissant ainsi une infinité de systèmes, un pour chaque nombre de
composants.
Souvent le nombre de composants de tels systèmes n’est pas connu à l’avance car ils
sont conçus pour fonctionner quel que soit le nombre de machines du réseau, quel que soit
le nombre de processus ou encore quelle que soit la taille des buffers (mémoires tampons).
En vérifier les propriétés d’une manière paramétrée est donc indispensable pour s’assurer
de leur qualité. Dans d’autres cas ce nombre de composants est connu mais il est tellement
grand (plusieurs milliers) que les techniques traditionnelles sont tout bonnement incapables
de raisonner avec de telles quantités. Il est alors préférable dans ces circonstances de vérifier
une version paramétrée du problème.
Apt et Kozen montrent en 1986 que, de manière générale, savoir si un programme P
paramétré par n, P (n) satisfait une propriété φ(n), est un problème indécidable [10]. Pour
mettre en lumière ce fait, ils ont simplement créé un programme qui simule n étapes d’une
machine de Turing et change la valeur d’une variable booléenne à la fin de l’exécution si
la machine simulée ne s’est pas encore arrêtée 2. Ce travail expose clairement les limites
intrinsèques des systèmes paramétrés. Même si le résultat est négatif, la vérification automatique
reste possible dans certains cas. Face à un problème indécidable, il est coutume
de restreindre son champ d’application en imposant certaines conditions jusqu’à tomber
dans un fragment décidable. Une alternative consiste à traiter le problème dans sa globalité,
mais avec des méthodes non complètes.
1.2.1 Méthodes incomplètes
Le premier groupe à aborder le problème de la vérification paramétrée fut Clarke et
Grumberg [39] avec une méthode fondée sur un résultat de correspondance entre des
2. Ce programme est bien paramétré par n. Bien qu’il ne fasse pas intervenir de concurrence, le résultat
qui suit peut être aussi bien obtenu en mettant n processus identiques P (n) en parallèle.
systèmes de différentes tailles. Notamment, ils ont pu vérifier avec cette technique un
algorithme d’exclusion mutuelle en mettant en évidence une bisimulation entre ce système
de taille n et un système de taille 2.
Toujours dans l’esprit de se ramener à une abstraction finie, de nombreuses techniques
suivant ce modèle ont été développées pour traiter les systèmes paramétrés. Par exemple
Kurshan et McMillan utilisent un unique processus Q qui agrège les comportements de n
processus P concurrents [105]. En montrant que Q est invariant par composition parallèle
asynchrone avec P, on déduit que Q représente bien une abstraction du système paramétré.
Bien souvent l’enjeu des techniques utilisées pour vérier des systèmes paramétrés est
de trouver une représentation à même de caractériser des familles innies d’états. Par
exemple si la topologie du système (i.e. l’organisation des processus) n’a pas d’importance,
il est parfois suffisant de « compter » le nombre de processus se trouvant dans un état
particulier. C’est la méthode employée par Emerson et al. dans l’approche dite d’abstraction
par compteurs [64]. Si l’ordre des processus importe (e.g. un processus est situé « à gauche »
d’un autre) une représentation adaptée consiste à associer à chaque état un mot d’un
langage régulier. Les ensembles d’états sont alors représentés par les expressions régulières
de ce langage et certaines relations de transition peuvent être exprimées par des transducteurs
finis [1, 99]. Une généralisation de cette idée a donné naissance au cadre du model
checking régulier [24] qui propose des techniques d’accélération pour calculer les clôtures
transitives [96, 128]. Des extensions de ces approches ont aussi été adaptées à l’analyse de
systèmes de topologies plus complexes [22].
Plutôt que de chercher à construire une abstraction finie du système paramétré dans son
intégralité, l’approche dite de cutoff (coupure) cherche à découvrir une borne supérieure
dépendant à la fois du système et de la propriété à vérifier. Cette borne k, appelée la valeur
de cutoff, est telle que si la propriété est vraie pour les systèmes plus petits que k alors
elle est aussi vraie pour les systèmes de taille supérieure à k [62]. Parfois cette valeur
peut être calculée statiquement à partir des caractéristiques du système. C’est l’approche
employée dans la technique des invariants invisibles de Pnueli et al. alliée à une génération
d’invariants inductifs [12, 138]. L’avantage de la technique de cutoff est qu’elle s’applique
à plusieurs formalismes et qu’elle permet simplement d’évacuer le côté paramétré d’un
problème. En revanche le calcul de cette borne k donne souvent des valeurs rédhibitoires
pour les systèmes de taille réelle. Pour compenser ce désavantage, certaines techniques
ont été mises au point pour découvrir les bornes de cutoff de manière dynamique [4, 98].
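Schématiquement, une valeur de cutoff k pour une famille de systèmes {P(n)} et une propriété φ garantit :
\[ \big( \forall n \le k.\;\; P(n) \models \varphi \big) \;\Longrightarrow\; \big( \forall n.\;\; P(n) \models \varphi \big) \]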
1.2.2 Fragments décidables
Il est généralement difficile d’identifier un fragment qui soit à la fois décidable et utile en
pratique. Certaines familles de problèmes admettent cependant des propriétés qui en font
des cadres théoriques intéressants. Partant du constat que les systèmes synchrones sont
souvent plus simples à analyser, Emerson et Namjoshi montrent que certaines propriétés
temporelles sont en fait décidables pour ces systèmes [63]. Leur modèle fonctionne pour
les systèmes composés d’un processus de contrôle et d’un nombre arbitraire de processus
homogènes. Il ne s’applique donc pas, par exemple, aux algorithmes d’exclusion mutuelle.
L’écart de méthodologie qui existe entre les méthodes non complètes et celles qui identifient
un fragment décidable du model checking paramétré n’est en réalité pas si grand.
Dans bien des cas, les chercheurs qui mettent au point ces premières mettent aussi en
évidence des restrictions suffisantes sur les systèmes considérés pour rendre l’approche
complète et terminante. C’est par exemple le cas des techniques d’accélération du model
checking régulier ou encore celui du model checking modulo théories proposé par Ghilardi
et Ranise [77, 79].
À la différence des techniques pour systèmes paramétrés mentionnées précédemment,
cette dernière approche ne construit pas d’abstraction finie mais repose sur l’utilisation
de quantificateurs pour représenter les ensembles infinis d’états de manière symbolique.
En effet, lorsqu’on parle de systèmes paramétrés, on exprime naturellement les propriétés
en quantifiant sur l’ensemble des éléments du domaine paramétré. Par exemple, dans un
système où le nombre de processus n’est pas connu à l’avance, on peut avoir envie de
garantir que quel que soit le processus, sa variable locale x n’est jamais nulle. Le cadre
du model checking modulo théories définit une classe de systèmes paramétrés appelée
systèmes à tableaux permettant de maîtriser l’introduction des quantificateurs. La sûreté
de tels systèmes n’est pas décidable en général mais il existe des restrictions sur le problème
d’entrée qui permettent de construire un algorithme complet et terminant. Un autre
avantage de cette approche est de tirer parti de la puissance des solveurs SMT et de leur
grande versatilité.
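Par exemple, la propriété évoquée ci-dessus (« quel que soit le processus, sa variable locale x n’est jamais nulle ») s’exprime naturellement de manière universellement quantifiée, et sa négation, existentielle, caractérise les états dangereux :
\[ \forall i.\; x(i) \neq 0 \qquad\qquad \neg \big( \forall i.\; x(i) \neq 0 \big) \;\equiv\; \exists i.\; x(i) = 0 \]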
Toutes ces techniques attaquent un problème intéressant et important. Celles qui sont
automatiques (model checking) ne passent pourtant pas à l’échelle sur des problèmes
réalistes. Et celles qui sont applicables en pratique demandent quant à elles une expertise
humaine considérable, ce qui rend le processus de vérification très long [33,49,118,134,135].
Le problème principal auquel répond cette thèse est le suivant.
Comment vérifier automatiquement des propriétés de sûreté de systèmes
paramétrés complexes ?
Pour répondre à cette question, on utilise le cadre théorique du model checking modulo
théories pour concevoir de nouveaux algorithmes qui infèrent des invariants de manière
automatique. Nos techniques s’appliquent avec succès à l’analyse de sûreté paramétrée de
protocoles de cohérence de cache conséquents, entre autres.
1.3 Contributions
Nos contributions sont les suivantes :
— Un model checker open source pour systèmes paramétrés : Cubicle. Cubicle implémente
les techniques présentées dans cette thèse et est librement disponible à
l’adresse suivante : http://cubicle.lri.fr.
— Un ensemble de techniques pour l’implémentation d’un model checker reposant sur
un solveur SMT.
— Un nouvel algorithme pour inférer des invariants de qualité de manière automatique :
BRAB. L’idée de cet algorithme est d’utiliser les informations extraites d’un modèle
fini du système afin d’inférer des invariants pour le cas paramétré. Le modèle fini
est en fait mis à contribution en tant qu’oracle dont le rôle se limite à émettre un
jugement de valeur sur les candidats invariants qui lui sont présentés. Cet algorithme
fonctionne bien en pratique car dans beaucoup de cas les instances finies, même
petites, exhibent déjà la plupart des comportements intéressants du système.
— Une implémentation de BRAB dans le model checker Cubicle.
— L’application de ces techniques à la vérification du protocole de cohérence de cache
de l’architecture multi-processeurs FLASH. À l’aide des invariants découverts par
BRAB, la sûreté de ce protocole a pu être vérifiée entièrement automatiquement pour
la première fois.
— Deux approches pour la certification de Cubicle à l’aide de la plate-forme Why3.
La première est une approche qui fonctionne par certificats (ou traces) et vérifie le
résultat produit par le model checker. Ce certificat prend la forme d’un invariant
inductif. La seconde approche est par vérification déductive du cœur de Cubicle. Ces
deux approches mettent en avant une autre qualité de BRAB : faciliter ce processus
de certification, par réduction de la taille des certificats pour l’une, et grâce à son
efficacité pour l’autre.
1.4 Plan de la thèse
Ce document de thèse est organisé de la façon suivante : Le chapitre 2 introduit le
langage d’entrée du model checker Cubicle à travers différents exemples d’algorithmes et
de protocoles concurrents. Une seconde partie de ce chapitre présente la représentation
formelle des systèmes de transitions utilisés par Cubicle ainsi que leur sémantique. Le
chapitre 3 présente le cadre théorique du model checking modulo théories conçu par
Ghilardi et Ranise. Les résultats et théorèmes associés sont également donnés et illustrés
dans ce chapitre, qui constitue le contexte dans lequel s’inscrivent nos travaux. Le chapitre 4
donne un ensemble d’optimisations nécessaires à l’implémentation d’un model checker
reposant sur un solveur SMT comme Cubicle. Nos travaux autour de l’inférence d’invariants
sont exposés dans le chapitre 5. On y présente et illustre l’algorithme BRAB. Les détails
pratiques de son fonctionnement sont également expliqués et son intérêt est appuyé par une
évaluation expérimentale sur des problèmes difficiles pour la vérification paramétrée. En
particulier on détaille le résultat de la preuve du protocole de cohérence de cache FLASH.
Le chapitre 6 présente deux techniques de certification que nous avons mises en œuvre
pour certifier le model checker Cubicle. Ces deux techniques reposent sur la plate-forme
de vérification déductive Why3. Enfin le chapitre 7 donne des pistes d’amélioration et
d’extension de nos travaux et conclut ce document.
2
Le model checker Cubicle
Sommaire
2.1 Langage d’entrée . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Exemples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2.1 Algorithme d’exclusion mutuelle . . . . . . . . . . . . . . . . . 23
2.2.2 Généralisation de l’algorithme de Dekker . . . . . . . . . . . . 25
2.2.3 Boulangerie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.4 Cohérence de Cache . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Non-atomicité . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.4 Logique multi-sortée et systèmes à tableaux . . . . . . . . . . . . . 36
2.4.1 Syntaxe des formules logiques . . . . . . . . . . . . . . . . . . . 37
2.4.2 Sémantique de la logique . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 Systèmes de transition à tableaux . . . . . . . . . . . . . . . . . 41
2.5 Sémantique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5.1 Sémantique opérationnelle . . . . . . . . . . . . . . . . . . . . . 44
2.5.2 Atteignabilité . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.5.3 Un interpréteur de systèmes à tableaux . . . . . . . . . . . . . . 47
L’outil issu des travaux présentés dans ce document est un model checker pour systèmes
paramétrés, dénommé Cubicle. Le but de ce chapitre est de familiariser le lecteur avec l’outil,
tout particulièrement son langage d’entrée et sa syntaxe concrète. Quelques exemples
variés de tels systèmes sont formulés dans ce langage afin d’offrir un premier aperçu des
possibilités du model checker. On donne également une description plus formelle de la
sémantique des systèmes décrits dans le langage de Cubicle.
2.1 Langage d’entrée
Cubicle est un model checker pour systèmes paramétrés. C’est-à-dire qu’il permet de
vérifier statiquement des propriétés de sûreté d’un programme concurrent pour un nombre
quelconque de processus 1. Pour représenter ces programmes, on utilise des systèmes de
transition car ils permettent facilement de modéliser des comportements asynchrones ou
non déterministes. Ces systèmes décrivent les transitions possibles d’un état du programme
à un autre. Ils peuvent être considérés comme un langage bas niveau pour la vérification.
Étant donné que les systèmes manipulés par Cubicle sont paramétrés, les transitions qui
le composent le sont également. Dans cette section, on présente de manière très informelle
le langage d’entrée de Cubicle 2.
La description d’un système commence par des déclarations de types, de variables
globales et de tableaux. Cubicle connaît quatre types en interne : le type des entiers (int),
le type des réels (real), le type des booléens (bool) et le type des identificateurs de
processus (proc). Ce dernier est particulièrement important car c’est par les éléments de
ce type que le système est paramétré. Le paramètre du système est la cardinalité du type
proc. L’utilisateur a aussi la liberté de déclarer ses propres types abstraits ou ses propres
types énumérés. L’exemple suivant définit un type énuméré state à trois constructeurs
Idle, Want et Crit ainsi qu’un type abstrait data.
type state = Idle | Want | Crit
type data
Les tableaux et variables représentent l’état du système ou du programme. Tous les
tableaux ont la particularité d’être uniquement indexés par des éléments du type proc, leur
taille est par conséquent inconnue. C’est une limitation de l’implémentation actuelle du
langage de Cubicle qui existe pour des raisons pratiques et rien n’empêcherait d’ajouter la
possibilité de déclarer des tableaux indexés par des entiers. Dans ce qui suit on déclare une
variable globale Timer de type real et trois tableaux indexés par le type proc.
var Timer : real
array State[proc] : state
array Chan[proc] : data
array Flag[proc] : bool
Les tableaux peuvent par exemple être utilisés pour représenter des variables locales ou
des canaux de communication.
1. Un programme concurrent est souvent paramétré par son nombre de processus ou threads mais ce
paramètre peut aussi être la taille de certains buffers ou le nombre de canaux de communication par exemple.
2. Une description plus précise de la syntaxe et de ce langage est faite en annexe A de ce document.
Les états initiaux du système sont définis à l’aide du mot clef init. Cette déclaration qui
vient en début de fichier précise quelles sont les valeurs initiales des variables et tableaux
pour tous leurs indices. Notons que certaines variables peuvent ne pas être initialisées et on
a le droit de mentionner seulement les relations entre elles. Dans ce cas tout état dont les
valeurs des variables et tableaux respectent les contraintes fixées dans la ligne init sera
un état initial correct du système. Par exemple, la ligne suivante définit les états initiaux
comme ceux ayant leurs tableaux Flag à false et State à Idle pour tout processus z, ainsi
que leur variable globale Timer valant 0.0. Le paramètre z de init représente tous les
processus du système.
init (z) { Flag[z] = False && State[z] = Idle && Timer = 0.0 }
Remarque. On ne précise pas le type des paramètres car seuls ceux du type proc sont
autorisés.
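À titre d’illustration (esquisse non tirée du texte : les variables Lo et Hi sont hypothétiques), on peut laisser des variables globales sans valeur initiale précise et ne contraindre que la relation qui les lie :
(* Esquisse hypothétique : Lo et Hi ne reçoivent aucune valeur précise,
   seule la contrainte Lo <= Hi restreint les états initiaux. *)
type state = Idle | Want | Crit
array State[proc] : state
var Lo : int
var Hi : int
init (z) { State[z] = Idle && Lo <= Hi }
Tout état où tous les processus sont à Idle et où les valeurs de Lo et Hi satisfont Lo <= Hi est alors un état initial admissible.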
Le reste du système est donné comme un ensemble de transitions de la forme garde/action.
Chaque transition peut être paramétrée (ou non) par un ou plusieurs identificateurs de
processus comme dans l’exemple suivant.
transition t (i j)
requires { i < j && State[i] = Idle && Flag[i] = False &&
forall_other k. (Flag[k] = Flag[j] || State[k] <> Want) }
{
Timer := Timer + 1.0;
Flag[i] := True;
State[k] := case
| k = i : Want
| State[k] = Crit && k < i : Idle
| _ : State[k];
}
Dans cet exemple, la transition t est déclenchable s’il existe deux processus distincts
d’identificateurs i et j tels que i est inférieur à j. Les tableaux State et Flag doivent
contenir respectivement la valeur Idle et la valeur false à l’indice i. En plus de cette garde
locale (aux processus i et j), on peut mentionner l’état des autres processus du système.
La partie globale de la garde est précédée du mot clef forall_other. Ici, on dit que tous
les autres processus (sous-entendu différents de i et j) doivent soit avoir la même valeur
de Flag que j, soit contenir une valeur diérente de Want dans le tableau State.
Remarque. Dans Cubicle, les processus sont différenciés par leurs identificateurs. Ici, les
paramètres i et j de la transition sont implicitement quantifiés existentiellement et doivent
être des identificateurs de processus deux à deux distincts. L’ensemble des identificateurs
de processus est seulement muni d’un ordre total. La présence de cet ordre induit une
topologie linéaire sur les processus. La comparaison entre identificateurs est donc autorisée
et on peut par exemple écrire i < j dans une garde.
La garde de la transition est donnée par le mot clef requires. Si elle est satisfaite, les
actions de la transition sont exécutées. Chaque action est une mise à jour d’une variable
globale ou d’un tableau. La sémantique des transitions veut que l’ensemble des mises
à jour soit réalisé de manière atomique et chaque variable qui apparaît à droite d’un
signe := dénote la valeur de cette variable avant la transition. L’ordre des affectations
n’a donc pas d’importance. La première action Timer := Timer + 1.0 incrémente la
variable globale de 1 lorsque la transition est prise. Les mises à jour de tableaux peuvent être
codées comme de simples affectations si une seule case est modifiée. L’action Flag[i] :=
True modifie le tableau Flag à l’indice i en y mettant la valeur true. Le reste du tableau
n’est pas modifié. Si la transition modifie plusieurs cases d’un même tableau, on utilise une
construction case qui précise les nouvelles valeurs contenues dans le tableau à l’aide d’un
filtrage. State[k] := case ... mentionne ici les valeurs du tableau State pour tous
les indices k (k doit être une variable fraîche) selon les conditions suivantes. Pour chaque
indice k, le premier cas possible du filtrage est exécuté. Le cas
| k = i : Want
nous demande de vérifier tout d’abord si k vaut i, c’est-à-dire de mettre la valeur Want
à l’indice i du tableau State. Le deuxième cas
| State[k] = Crit && k < i : Idle
est plus compliqué : si le premier cas n’est pas vérifié (i.e. k est différent de i), que State
contenait Crit à l’indice k, et que k est inférieur à i alors la nouvelle valeur de State[k]
est Idle. Plus simplement, cette ligne change les valeurs Crit du tableau State se trouvant
à gauche de i (k < i) en Idle. Enfin tous les filtrages doivent se terminer par un cas par
défaut noté _. Dans notre exemple, le cas par défaut du filtrage
| _ : State[k]
dit que toutes les autres valeurs du tableau restent inchangées (on remet l’ancienne valeur
de State[k]).
La relation de transition, décrite par l’ensemble des transitions, définit l’exécution du
système comme une boucle infinie qui à chaque itération :
1. choisit de manière non déterministe une instance de transition dont la garde est vraie
dans l’état courant du système ;
2. met à jour les variables et tableaux d’état conformément aux actions de la transition
choisie.
Les propriétés de sûreté à vérifier sont exprimées sous forme négative, c’est-à-dire qu’on
caractérise les états dangereux du système. On les exprime dans Cubicle à l’aide du mot
clef unsafe, éventuellement suivi d’un ensemble de variables de processus distinctes. La
formule dangereuse suivante exprime que les mauvais états du système sont ceux où il
existe au moins deux processus distincts x et y tels que le tableau State contienne la valeur
Crit à ces deux indices.
unsafe (x y) { State[x] = Crit && State[y] = Crit }
On dira qu’un système est sûr si aucun des états dangereux ne peut être atteint à partir
d’un des états initiaux.
Remarque. Bien que Cubicle soit conçu pour vérier des systèmes paramétrés, il est tout
de même possible de l’utiliser pour vérier des systèmes dont le nombre de processus est
xé à l’avance. Pour cela, il sut d’inclure la ligne “number_procs n” dans le chier, où n
est le nombre de processus. Dans ce cas, on pourra mentionner explicitement les processus
1 à n en utilisant les constantes #1 à #n.
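Par exemple, l’esquisse suivante (non testée, donnée uniquement pour illustrer cette remarque ; les transitions de la figure 2.2, inchangées, sont omises) reprend le mutex en fixant le nombre de processus à 3 ; les constantes #1 et #2 désignent alors deux processus précis :
number_procs 3
type state = Idle | Want | Crit
array State[proc] : state
var Turn : proc
init (z) { State[z] = Idle }
(* Avec un nombre de processus fixé, on peut nommer explicitement #1 et #2. *)
unsafe () { State[#1] = Crit && State[#2] = Crit }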
Remarque. Le langage d’entrée de Cubicle est une version moins riche mais paramétrée du
langage de Murφ [57] et similaire à Uclid [28]. Bien que limité pour l’instant, il est assez
expressif pour permettre de décrire aisément des systèmes paramétrés conséquents (75
transitions, 40 variables et tableaux pour le protocole FLASH [106] par exemple).
2.2 Exemples
Dans cette section, on montre comment utiliser Cubicle et son expressivité pour modéliser
différents algorithmes et protocoles de la littérature. Ces exemples sont donnés à titre
didactique et sont choisis de manière à illustrer les caractéristiques fondamentales du
langage. Le but de cette section est de donner au lecteur une bonne intuition des possibilités
offertes par l’outil de manière à pouvoir modéliser et expérimenter le model checker sur
ses propres exemples.
2.2.1 Algorithme d’exclusion mutuelle
L’exclusion mutuelle est un problème récurrent de la programmation concurrente. L’algorithme
décrit ci-dessous résout ce problème, i.e. il permet à plusieurs processus de
partager une ressource commune à accès unique sans conflit, en communiquant seulement
au travers de variables partagées. C’est une version simplifiée de l’algorithme de Dekker
(présenté en 2.2.2) qui fonctionne pour un nombre arbitraire de processus identiques.
Sur la figure 2.1, on matérialise n processus concurrents qui exécutent tous le même
protocole. Chaque processus est représenté par le graphe de transition entre ses états : Idle,
Want et Crit, l’état initial étant Idle.
[Figure : n graphes d’états identiques, un par processus i = 1, …, n, sur les états Idle, Want et Crit ; le passage de Want à Crit est gardé par Turn = i et la sortie de Crit réinitialise Turn de façon non déterministe (Turn := ?).]
Figure 2.1 – Algorithme d’exclusion mutuelle
Un processus peut demander l’accès à la section critique
à tout moment en passant dans l’état Want. La synchronisation est effectuée au travers
de la seule variable partagée Turn. La priorité est donnée au processus dont l’identifiant
i est contenu dans la variable Turn, qui peut dans ce cas passer en section critique. Un
processus en section critique peut en sortir sans contrainte si ce n’est celle de « donner la
main » à un de ses voisins en changeant la valeur de Turn. Cette dernière est modiée de
façon non-déterministe (Turn := ? dans le schéma), donc rien n’interdit à un processus
de se redonner la main lui-même. La propriété qui nous intéresse ici est de vérifier la sûreté
du système, c’est-à-dire qu’à tout moment, au plus un processus est en section critique.
Le code Cubicle correspondant à ce problème est donné ci-dessous en figure 2.2. Pour
modéliser l’algorithme, on a choisi de représenter l’état des processus par un tableau
State contenant les valeurs Idle, Want ou Crit. On peut aussi voir State[i] comme la
valeur d’une variable locale au processus d’identificateur i. La variable partagée Turn est
simplement une variable globale au système. On définit les états initiaux comme ceux où
tous les processus sont dans l’état Idle, quelle que soit la valeur initiale de la variable Turn.
Prenons l’exemple de la transition req. Elle correspond au moment où un processus
demande l’accès à la section critique, passant de l’état Idle à Want. Elle se lit : la transition
req est possible s’il existe un processus d’identifiant i tel que State[i] a pour valeur
Idle. Dans ce cas, la case State[i] prend pour valeur Want.
La formule unsafe décrit les mauvais états du système comme ceux dans lesquels il
existe (au moins) deux processus distincts en section critique (i.e. pour lesquels la valeur de
State est Crit). Autrement dit, on veut s’assurer que la formule suivante est un invariant
du système :
∀i, j. (i ≠ j ∧ State[i] = Crit) =⇒ State[j] ≠ Crit
mutex.cub
type state = Idle | Want | Crit
array State[proc] : state
var Turn : proc
init (z) { State[z] = Idle }
unsafe (z1 z2) { State[z1] = Crit &&
State[z2] = Crit }
transition req (i)
requires { State[i] = Idle }
{ State[i] := Want }
transition enter (i)
requires { State[i] = Want &&
Turn = i }
{ State[i] := Crit; }
transition exit (i)
requires { State[i] = Crit }
{ Turn := ? ;
State[i] := Idle; }
Figure 2.2 – Code Cubicle du mutex
Pour vérifier que le système décrit dans le fichier mutex.cub précédent est sûr, la manière
la plus simple d’invoquer Cubicle est de lui passer seulement le nom du fichier en argument :
cubicle mutex.cub
On peut voir ci-après la trace émise par le model checker sur la sortie standard. Lorsque
ce dernier affiche sur la dernière ligne “The system is SAFE”, cela signifie qu’il a été en
mesure de vérifier que l’état unsafe n’est jamais atteignable dans le système.
node 1: unsafe[1] 5 (2+3) remaining
node 2: enter(#2) -> unsafe[1] 7 (2+5) remaining
node 3: req(#2) -> enter(#2) -> unsafe[1] 8 (1+7) remaining
The system is SAFE
2.2.2 Généralisation de l’algorithme de Dekker
L’algorithme de Dekker est en réalité la première solution au problème d’exclusion mutuelle.
Cette solution est attribuée au mathématicien Th. J. Dekker par Edsger W. Dijkstra
qui en donnera une version fonctionnant pour un nombre arbitraire de processus [54]. En
1985, Alain J. Martin présente une version simple de l’algorithme généralisé à n processus
[115]. C’est cette version qui est donnée ici sous forme d’algorithme (Algorithme 1) et
sous forme de graphe de flot de contrôle (Figure 2.3).
La variable booléenne x (i) est utilisée par un processus pour signaler son intérêt à entrer
en section critique. La principale différence avec l’algorithme original est l’utilisation d’une
valeur spéciale pour réinitialiser t. Dans Cubicle, les constantes de processus ne sont pas
explicites donc on ne peut pas matérialiser l’identifiant spécial 0 dans le type proc. Pour
Algorithme 1 : Code de Dekker pour
le processus i [115]
Variables :
x (i) : variable booléenne,
initialisée à false
t : variable partagée,
initialisée à 0
p(i) :
NCS while true do
x (i) := true;
WANT while ∃j ≠ i. x (j) do
AWAIT x (i) := false;
await [ t = 0 ∨ t = i ];
TURN t := i;
x (i) := true;
CS // Section critique;
x (i) := false;
t := 0;
[Figure : graphe de flot de contrôle du processus i entre les étiquettes NCS, WANT, AWAIT, TURN et CS ; les arcs sont gardés par ∃j ≠ i. x (j), t = 0 ou t = i et portent les affectations de x (i) et t.]
Figure 2.3 – Graphe de Dekker pour le processus i
modéliser t, on a choisi d’utiliser deux variables globales : T représente t lorsque celle-ci
est non nulle et la variable booléenne T_set vaut False lorsque t a été réinitialisée (à 0).
Le tableau P est utilisé pour représenter le compteur de programme et peut prendre les
valeurs des étiquettes de l’algorithme 1. On peut remarquer que dans notre modélisation,
la condition de la boucle while est évaluée de manière atomique. Les transitions wait et
enter testent en une seule étape l’existence (ou non) d’un processus j dont la variable x (j)
est vraie. De la même manière que précédemment, on veut s’assurer qu’il y ait au plus un
processus au point de programme correspondant à l’étiquette CS, i.e. en section critique.
Cubicle est capable de prouver la sûreté de ce système. À première vue, la correction de
l’algorithme n’est pas triviale lorsqu’un nombre arbitraire de processus s’exécutent de
manière concurrente. Pour illustrer ce propos, admettons que le concepteur ait fait une
erreur et qu’un processus i puisse omettre de passer sa variable x (i) à true lorsqu’il arrive à
l’étiquette TURN. Il suffit alors de rajouter la transition suivante au système pour modéliser
les nouveaux comportements introduits :
dekker_n.cub
type location =
NCS | WANT | AWAIT | TURN | CS
array P[proc] : location
array X[proc] : bool
var T : proc
var T_set : bool
init (i) {
T_set = False &&
X[i] = False && P[i] = NCS
}
unsafe (i j) { P[i] = CS && P[j] = CS }
transition start (i)
requires { P[i] = NCS }
{ P[i] := WANT;
X[i] := True; }
transition wait (i j)
requires { P[i] = WANT && X[j] = True }
{ P[i] := AWAIT;
X[i] := False; }
transition enter (i)
requires { P[i] = WANT &&
forall_other j. X[j] = False }
{ P[i] := CS }
transition awaited_1 (i)
requires { P[i] = AWAIT &&
T_set = False }
{ P[i] := TURN }
transition awaited_2 (i)
requires { P[i] = AWAIT &&
T_set = True && T = i }
{ P[i] := TURN }
transition turn (i)
requires { P[i] = TURN }
{ P[i] := WANT;
X[i] := True;
T := i; T_set := True; }
transition loop (i)
requires { P[i] = CS }
{ P[i] := NCS;
X[i] := False;
T_set := False; }
Figure 2.4 – Code Cubicle de l’algorithme de Dekker
transition turn_buggy (i)
requires { P[i] = TURN }
{ P[i] := WANT;
T := i; T_set := True; }
Si on exécute Cubicle sur le fichier maintenant obtenu, il nous fait savoir que la propriété
n’est plus vérifiée en exposant une trace d’erreur. On peut voir sur la trace suivante qu’un
mauvais état est atteignable en dix étapes avec deux processus.
Error trace: start(#2) -> enter(#2) -> start(#1) -> wait(#1, #2) ->
loop(#2) -> awaited_1(#1) -> turn_buggy(#1) -> enter(#1) ->
start(#2) -> enter(#2) -> unsafe[1]
UNSAFE !
Des constantes sont introduites pour chaque processus entrant en jeu dans la trace
et sont notées #1, #2, #3, . . . . La notation “. . . wait(#1, #2) -> . . . ” signifie que pour
reproduire la trace, il faut prendre la transition wait instanciée avec les processus #1 et #2.
Remarque. Un état dangereux de ce système est en réalité atteignable en seulement huit
étapes mais il faut pour cela qu’au moins trois processus soient impliqués. On peut s’en
rendre compte en forçant Cubicle à effectuer une exploration purement en largeur grâce
à l’option -postpone 0. De cette façon, on est assuré d’obtenir la trace d’erreur la plus
courte possible :
Error trace: start(#1) -> start(#3) -> wait(#1, #3) -> awaited_1(#1) ->
turn_buggy(#1) -> enter(#1) -> start(#2) -> enter(#2) ->
unsafe[1]
2.2.3 Boulangerie
L’algorithme de la boulangerie a été proposé par Leslie Lamport en réponse au problème
d’exclusion mutuelle [110]. Contrairement aux approches précédentes, la particularité
de cet algorithme est qu’il est résistant aux pannes et qu’il fonctionne même lorsque les
opérations de lecture et d’écriture ne sont pas atomiques. Son pseudo-code est donné dans
l’algorithme 2 (voir page suivante).
On peut faire le parallèle entre le principe de base de l’algorithme et celui d’une boulangerie
aux heures de pointe, d’où son nom. Dans cette boulangerie, chaque client (matérialisant
un processus) choisit un numéro en entrant dans la boutique. Le client souhaitant acheter
du pain qui a le numéro le plus faible s’avance au comptoir pour être servi. La propriété
d’exclusion mutuelle de cette boulangerie se manifeste par le fait que son fonctionnement
garantit qu’un seul client est servi à la fois. Il est possible que deux clients choisissent le
même numéro, celui dont le nom (unique) est avant l’autre a alors la priorité.
La modélisation faite dans Cubicle est reprise de la version modélisée dans PFS par
Abdulla et al. [3] et est donnée ci-dessous. C’est une version simplifiée de l’algorithme
original de Lamport dans laquelle le calcul du maximum à l’étiquette Choose et l’ensemble
des tests de la boucle for à l’étiquette Wait sont atomiques. Les lectures et écritures sont
aussi considérées comme étant instantanées.
Cet algorithme est tolérant aux pannes, c’est-à-dire qu’il continue de fonctionner même
si un processus s’arrête. Un processus i a le droit de tomber en panne et de redémarrer en
section non critique tout en réinitialisant ses variables locales. Ce comportement peut être
modélisé dans Cubicle en rajoutant un point de programme spécial Crash représentant
le fait qu’un processus est en panne. On ajoute une transition sans garde qui dit qu’un
processus peut tomber en panne à tout moment, ainsi qu’une transition lui permettant de
redémarrer (voir page 30).
Algorithme 2 : Pseudo-code de la boulangerie de Lamport pour le
processus i [110]
Variables :
choosing[i] : variable booléenne, initialisée à false
number[i] : variable entière initialisée à 0
p(i) :
NCS begin
choosing[i] := true;
Choose number[i] := 1 + max(number[1], . . . , number[N]);
choosing[i] := false;
for j = 1 to N do
Wait await [ ¬ choosing[j] ];
await [ number[j] = 0 ∨ (number[i], i) < (number[j], j) ];
CS // Section critique;
number[i] := 0;
goto NCS;
bakery_lamport.cub
type location = NCS | Choose | Wait | CS
array PC[proc] : location
array Ticket[proc] : int
array Num[proc] : int
var Max : int
init (x) { PC[x] = NCS && Num[x] = 0 &&
Max = 1 && Ticket[x] = 0 }
invariant () { Max < 0 }
unsafe (x y) { PC[x] = CS && PC[y] = CS }
transition next_ticket ()
{
Ticket[j] := case | _ : Max;
Max := Max + 1;
}
transition take_ticket (x)
requires { PC[x] = NCS &&
forall_other j. Num[j] < Max }
{
PC[x] := Choose;
Ticket[x] := Max;
}
transition wait (x)
requires { PC[x] = Choose }
{
PC[x] := Wait;
Num[x] := Ticket[x];
}
transition turn (x)
requires { PC[x] = Wait &&
forall_other j.
(PC[j] <> Choose && Num[j] = 0 ||
PC[j] <> Choose && Num[x] < Num[j] ||
PC[j] <> Choose &&
Num[x] = Num[j] && x < j) }
{
PC[x] := CS;
}
transition exit (x)
requires { PC[x] = CS }
{
PC[x] := NCS;
Num[x] := 0;
}
Figure 2.5 – Code Cubicle de l’algorithme de la boulangerie de Lamport
type location =
NCS | Choose | Wait | CS | Crash
...
transition fail (x)
{ PC[x] := Crash }
transition recover (x)
requires { PC[x] = Crash }
{
PC[x] := NCS;
Num[x] := 0;
}
2.2.4 Cohérence de Cache
Dans une architecture multiprocesseurs à mémoire partagée, l’utilisation de caches
est requise pour réduire les effets de la latence des accès mémoire et pour permettre la
coopération. Les architectures modernes possèdent de nombreux caches ayant des fonctions
particulières, mais aussi plusieurs niveaux de cache. Les caches qui sont les plus près des
unités de calcul des processeurs offrent les meilleures performances. Par exemple un accès
en lecture au cache (succès de cache ou cache hit) L1 consomme seulement 4 cycles du
processeur sur une architecture Intel Core i7, alors qu’un accès mémoire (défaut de cache
ou cache miss) consomme en moyenne 120 cycles [113]. Cependant l’utilisation de caches
introduit le problème de cohérence de cache : toutes les copies d’un même emplacement
mémoire doivent être dans des états compatibles. Un protocole de cohérence de cache fait en
sorte, entre autres, que les écritures effectuées à un emplacement mémoire partagé soient
visibles de tous les autres processeurs tout en garantissant une absence de conflits lors des
opérations.
Il existe plusieurs types de protocoles de cohérence de cache :
— cohérence par espionnage 3 : la communication se fait au travers d’un bus central sur lequel les différentes transactions sont visibles par tous les processeurs ;
— cohérence par répertoire 4 : chaque processeur est responsable d’une partie de la mémoire et garde trace des processeurs ayant une copie locale dans leur cache. Ces protocoles fonctionnent par envoi de messages sur un réseau de communication.
Bien que plus difficile à mettre en œuvre, cette dernière catégorie de protocoles est aujourd’hui la plus employée, pour des raisons de performance et de passage à l’échelle. Les architectures réelles sont souvent très complexes, elles implémentent plusieurs protocoles de cohérence de manière hiérarchique, et utilisent plusieurs dizaines voire centaines de variables. Néanmoins leur fonctionnement repose sur le même principe fondamental. Un protocole simple mais représentatif de cette famille a été donné par Steven German à
3. Ces protocoles sont aussi parfois qualifiés de snoopy.
4. On qualifie parfois ces protocoles par le terme anglais directory based.
la communauté académique [138]. L’exemple suivant, dénommé German-esque, est une version simplifiée de ce protocole.
[Figure : diagramme d’état d’un cache, avec les trois états E, S et I ; les arcs sont étiquetés par les mises à jour du répertoire (Shr[i] := true/false, Exg := true/false).]
Figure 2.6 – Diagramme d’état du protocole German-esque
L’état d’un processeur i est donné par la variable Cache[i] qui peut prendre trois valeurs : (E)xclusive (accès en lecture et écriture), (S)hared (accès en lecture seulement) ou (I)nvalid (pas d’accès à la mémoire). Les clients envoient des requêtes au responsable lorsqu’un défaut de cache survient : RS pour un accès partagé (défaut de lecture), RE pour un accès exclusif (défaut d’écriture). Le répertoire contient quatre informations : une variable booléenne Exg signale par la valeur true qu’un des clients possède un accès exclusif à la mémoire principale ; un tableau de booléens Shr, tel que Shr[i] est vrai si le client i possède une copie (avec un accès en lecture ou écriture) de la mémoire dans son cache ; Cmd contient la requête courante (ϵ marque l’absence de requête) dont l’émetteur est enregistré dans Ptr.
Les états initiaux du système sont représentés par la formule logique suivante :
∀i. Cache[i] = I ∧ ¬Shr[i] ∧ ¬Exg ∧ Cmd = ϵ
signifiant que les caches de tous les processeurs sont invalides, aucun accès n’a encore été donné et il n’y a pas de requête à traiter.
La Figure 2.6 donne une vue assez haut niveau de l’évolution d’un seul processeur. Les flèches pleines montrent l’évolution du cache du processeur selon ses propres requêtes, alors que les flèches pointillées représentent les transitions résultant d’une requête d’un autre client. Par exemple, un cache va de l’état I à S lors d’un défaut de lecture : le répertoire lui accorde un accès partagé tout en enregistrant cette action dans le tableau Shr[i] := true. De
germanesque.cub
type msg = Epsilon | RS | RE
type state = I | S | E
(* Client *)
array Cache[proc] : state
(* Directory *)
var Exg : bool
var Cmd : msg
var Ptr : proc
array Shr[proc] : bool
init (z) {
Cache[z] = I && Shr[z] = False &&
Exg = False && Cmd = Epsilon
}
unsafe (z1 z2) {
Cache[z1] = E && Cache[z2] <> I
}
transition request_shared (n)
requires { Cmd = Epsilon &&
Cache[n] = I }
{
Cmd := RS;
Ptr := n ;
}
transition request_exclusive (n)
requires { Cmd = Epsilon &&
Cache[n] <> E }
{
Cmd := RE;
Ptr := n;
}
transition invalidate_1 (n)
requires { Shr[n]=True && Cmd = RE }
{
Exg := False;
Cache[n] := I;
Shr[n] := False;
}
transition invalidate_2 (n)
requires { Shr[n]=True &&
Cmd = RS && Exg=True }
{
Exg := False;
Cache[n] := S;
Shr[n] := True;
}
transition grant_shared (n)
requires { Ptr = n &&
Cmd = RS && Exg = False }
{
Cmd := Epsilon;
Shr[n] := True;
Cache[n] := S;
}
transition grant_exclusive (n)
requires {
Cmd = RE && Exg = False &&
Ptr = n && Shr[n] = False &&
forall_other l. Shr[l] = False }
{
Cmd := Epsilon;
Exg := True;
Shr[n] := True;
Cache[n] := E;
}
Figure 2.7 – Code Cubicle du protocole de cohérence de cache German-esque
façon similaire, si un défaut en écriture survient dans un autre cache, le répertoire invalide tous les clients enregistrés dans Shr avant de donner l’accès exclusif. Cette invalidation générale a pour effet de passer les caches dans les états E et S à l’état I.
La modélisation qui est faite de ce protocole est assez immédiate et est donnée en figure 2.7. On peut toutefois remarquer qu’on s’intéresse ici seulement à la partie contrôle du protocole et on oublie les actions d’écriture et de lecture réelles de la mémoire. La seule propriété qu’on souhaite garantir dans ce cas est que si un processeur a son cache avec accès exclusif alors tous les autres sont invalides :
∀i,j. (i ≠ j ∧ Cache[i] = E) =⇒ Cache[j] = I
Le responsable du répertoire est abstrait et on ne s’intéresse qu’à une seule ligne de mémoire, donc l’état du répertoire est représenté avec des variables globales.
2.3 Non-atomicité
Dans la plupart des travaux existants sur les systèmes paramétrés, on fait la supposition que l’évaluation des conditions globales est atomique. La totalité de la garde est évaluée en une seule étape. Cette hypothèse est raisonnable lorsque les conditions globales masquent des détails d’implémentation qui permettent de faire cette évaluation de manière atomique. En revanche, beaucoup d’implémentations réelles d’algorithmes concurrents et de protocoles n’évaluent pas ces conditions instantanément mais plutôt par une itération sur une structure de données (tableau, liste chaînée, etc.). Par exemple, l’algorithme de Dekker (voir Section 2.2.2) implémente le test ∃j ≠ i. x(j) de l’étiquette WANT par une boucle qui recherche un j tel que x(j) soit vrai. De même, on a modélisé la boucle d’attente de l’algorithme de la boulangerie (Section 2.2.3, algorithme 2, étiquette Wait) par une garde universelle dans la transition turn.
En supposant que ces conditions sont atomiques, on simplifie le problème mais cette approximation n’est pas conservatrice. En effet, l’évaluation d’une condition peut être entrelacée avec les actions des autres processus. Il est possible, par exemple, qu’un processus change la valeur de sa variable locale alors que celle-ci a déjà été comptabilisée pour une condition globale avec son ancienne valeur. Il peut dans ce cas exister dans l’implémentation des configurations qui ne sont représentées par aucun état du modèle atomique. Si on veut prendre en compte ces éventualités dans notre modélisation, on se doit de refléter tous les comportements possibles dans le système de transition.
La vérification de propriétés d’exclusion mutuelle dans des protocoles paramétrés comme l’algorithme de la boulangerie a déjà été traitée par le passé [32, 114]. Ces preuves ne sont souvent que partiellement automatisées et font usage d’abstractions ad hoc. La première approche permettant une vérification automatique de protocoles avec gardes non atomiques a été développée en 2008 [5]. Les auteurs utilisent ici un protocole annexe de raffinement
pour modéliser l’évaluation non atomique des conditions globales. L’approche qu’on présente dans la suite de cette section est identique en substance à celle de [5]. On montre cependant qu’on peut rester dans le cadre défini par Cubicle pour modéliser ces conditions non atomiques.
Comme les tests non atomiques sont implémentés par des boucles, on choisit de modéliser les implémentations de la figure 2.8. La valeur N représente ici le paramètre du système,
Évaluation de ∀j ≠ i. c(j) :
j := 1 ;
while (j ≤ N) do
if j ≠ i ∧ ¬c(j) then return false ;
j := j + 1
end ;
return true

Évaluation de ∃j ≠ i. c(j) :
j := 1 ;
while (j ≤ N) do
if j ≠ i ∧ c(j) then return true ;
j := j + 1
end ;
return false

Figure 2.8 – Évaluation non atomique des conditions globales par un processus i
c’est-à-dire la cardinalité du type proc. Pour une condition universelle, on parcourt tous les éléments (de 1 à N) et on s’assure qu’ils vérifient la condition c. Si on trouve un processus qui ne respecte pas cette condition, on sort de la boucle et on renvoie false.
Malheureusement dans Cubicle, on ne peut pas mentionner ce paramètre N explicitement (car le solveur SMT ne peut pas raisonner sur la valeur de la cardinalité des modèles). On ne peut dès lors pas utiliser de compteur entier. Pour s’en sortir, on construit une boucle avec un compteur abstrait pour lequel on peut seulement tester si sa valeur est 0 ou N 5. On peut également incrémenter, décrémenter, et affecter ce compteur à ces mêmes valeurs.
Dans Cubicle, on modélise ce compteur par un tableau de booléens représentant un encodage unaire de sa valeur entière. Le nombre de cellules contenant la valeur true correspond à la valeur du compteur. Incrémenter ce compteur revient donc à passer une case de la valeur false à la valeur true. Pour modéliser l’évaluation non atomique de la condition ∀j ≠ i. c(j) 6, on utilise un compteur Cpt. Comme on veut qu’il soit local à un processus, on a besoin d’un tableau bi-dimensionnel. Le tableau Cpt[i] matérialise le compteur local au processus i. Le résultat de l’évaluation de cette condition sera stocké dans la variable Res[i] à valeurs dans le type énuméré {E,T,F}. Res[i] contient E initialement, ce qui signifie que l’évaluation de la condition n’est pas terminée. Lorsqu’elle contient T, la condition globale a été évaluée à vrai, et si elle contient F, la condition globale est fausse. Les transitions correspondantes sont données figure 2.9.
Avec cette modélisation, on spécifie juste que la condition locale c(j) doit être vérifiée
5. On peut généraliser à tester si sa valeur est k ou N − k où k est une constante positive entière.
6. L’encodage de l’évaluation de la condition duale ∃j ≠ i. c(j) est symétrique.
Transition Commentaire
type result = E | T | F
array Cpt[proc,proc] : bool
array Res[proc] : result
transition start (i)
requires { ... }
{ Res[i] := E;
Cpt[x,y] := case
| x=i : False
| _ : Cpt[x,y] }
On initialise le compteur à 0 et on signale
que la condition est en cours
d’évaluation avec la variable locale
Res.
transition iter (i j)
requires { Cpt[i,j] = False && c(j) }
{ Cpt[i,j] := True }
La condition c(j) n’a pas encore été
vérifiée. Comme elle est vraie, on incrémente
le compteur.
transition abort (i j)
requires { Cpt[i,j] = False && ¬c(j) }
{ Res[i] := F }
La condition c(j) n’a pas encore été
vérifiée mais elle est fausse. On sort
de la boucle, la condition globale est
fausse.
transition exit (i)
requires { forall_other j.
Cpt[i,j] = True }
{ Res[i] := T }
Toutes les conditions c(j) ont été vérifiées.
On sort de la boucle, la condition
globale est vraie.
Figure 2.9 – Encodage de l’évaluation non atomique de la condition globale ∀j ≠ i. c(j)
pour tous les processus, indépendamment de l’ordre. Cette sous-spécification reste conservatrice. Il est toutefois possible d’ajouter des contraintes d’ordre sur la variable j si c’est important pour la propriété de sûreté.
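À titre d’illustration, voici une esquisse, qui ne figure pas telle quelle dans ce document, de la façon dont la transition turn de la boulangerie (figure 2.5) pourrait être rendue non atomique avec cet encodage. Les noms de transitions, le découpage de la condition locale en trois cas et la remise à zéro du compteur dans la transition wait sont des choix d’illustration ; comme il s’agit d’une attente (await), il n’y a pas de cas d’échec : le processus reste simplement bloqué tant que la condition locale n’est pas vraie pour un j donné.

array Cpt[proc,proc] : bool

(* la transition wait est adaptée pour remettre à zéro le compteur local de x *)
transition wait (x)
requires { PC[x] = Choose }
{
  PC[x] := Wait;
  Num[x] := Ticket[x];
  Cpt[i,j] := case | i = x : False | _ : Cpt[i,j];
}

(* un pas de boucle : la condition locale est vraie pour j (trois cas) *)
transition turn_iter_zero (x j)
requires { PC[x] = Wait && Cpt[x,j] = False &&
           PC[j] <> Choose && Num[j] = 0 }
{ Cpt[x,j] := True }

transition turn_iter_less (x j)
requires { PC[x] = Wait && Cpt[x,j] = False &&
           PC[j] <> Choose && Num[x] < Num[j] }
{ Cpt[x,j] := True }

transition turn_iter_eq (x j)
requires { PC[x] = Wait && Cpt[x,j] = False &&
           PC[j] <> Choose && Num[x] = Num[j] && x < j }
{ Cpt[x,j] := True }

(* toutes les cases de Cpt[x] valent true : entrée en section critique *)
transition turn (x)
requires { PC[x] = Wait && forall_other j. Cpt[x,j] = True }
{ PC[x] := CS }

La garde universelle de turn ne sert alors plus qu’à encoder le test Cpt[x] = N.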
On peut remarquer qu’on se sert d’une garde universelle dans la transition exit. Ceci n’influe en rien sur la « non-atomicité » de l’évaluation de la condition car le quantificateur est seulement présent pour encoder le test Cpt[i] = N.
Remarque. Avec cette garde universelle, on risque de tomber dans le cas d’une fausse alarme à cause du traitement détaillé en section 3.3 du chapitre suivant. L’intuition apportée par le modèle de panne franche nous permet d’identifier ces cas problématiques. Si la sûreté dépend du fait qu’un processus puisse disparaître du système pendant l’évaluation d’une condition globale, alors on risque de tomber dans ce cas défavorable.
Les versions non atomiques des algorithmes sont bien plus compliquées que leurs homologues atomiques. Les raisonnements à mettre en œuvre pour dérouler la relation de transition (aussi bien en avant qu’en arrière) sont plus longs. C’est ce qui en fait des candidats particulièrement coriaces pour les procédures de model checking.
2.4 Logique multi-sortée et systèmes à tableaux
On a montré dans la section précédente que le langage de Cubicle permet de représenter des algorithmes d’exclusion mutuelle ou des protocoles de cohérence de cache à l’aide de systèmes de transitions paramétrés. Le lecteur intéressé par la définition précise de la syntaxe concrète et des règles de typage de ce langage trouvera leur description en annexe A.
Dans cette section, on définit formellement la sémantique des programmes Cubicle. Pour ce faire, on utilise le formalisme des systèmes à tableaux conçu par Ghilardi et Ranise [77]. Ainsi, on montre dans un premier temps que chaque transition d’un système paramétré peut simplement être interprétée comme une formule dans un fragment de la logique du premier ordre multi-sortée. Ce fragment est défini par l’ensemble des types et variables déclarés au début du programme (ainsi que ceux prédéfinis dans Cubicle). Dans un deuxième temps, on donne une sémantique opérationnelle à ces formules à l’aide d’une relation de transition entre les modèles logiques de ces formules.
On rappelle ici brièvement les notions usuelles de la logique du premier ordre multi-sortée et de la théorie des modèles qui sont utilisées pour définir les systèmes à tableaux de Cubicle. Des explications plus amples peuvent être trouvées dans des manuels traitant du sujet [92, 152]. Le lecteur familier avec ces concepts peut ne lire que la section 2.4.3 qui présente plus en détail la construction de ces systèmes.
2.4.1 Syntaxe des formules logiques
La logique du premier ordre multi-sortée est une extension classique qui possède essentiellement les mêmes propriétés que la logique du premier ordre sans sortes [149]. Son intérêt est qu’elle permet de partitionner les éléments qu’on manipule selon leur type.
Définition 2.4.1. Une signature Σ est un tuple (S, F, R) où :
— S est un ensemble non vide de symboles de sorte ;
— F est un ensemble de symboles de fonction, chacun étant associé à un type de la forme
  — s pour les symboles d’arité 0 (zéro), avec s ∈ S. On appelle ces symboles des constantes.
  — s1 × … × sn → s pour les symboles d’arité n, avec s1, …, sn, s ∈ S ;
— R est un ensemble de symboles de relation (aussi appelés prédicats). On associe à chaque prédicat d’arité n un type de la forme s1 × … × sn avec s1, …, sn ∈ S.
Dans la suite, on supposera que le symbole d’égalité = est inclus dans toutes les signatures qui seront considérées et qu’il a le type s × s quelle que soit la sorte s ∈ S 7. On notera par exemple par f : s1 × … × sn → s un symbole de fonction f dans F dont le type est s1 × … × sn → s.
L’en-tête du fichier Cubicle est en fait une façon de donner cette signature. Par exemple, l’en-tête du code de l’exemple du mutex de la section 2.2.1 correspond à la définition de la signature Σ suivante :
en-tête Cubicle :
type state = Idle | Want | Crit
array State[proc] : state
var Turn : proc
signature Σ = (S, F, R) :
S = {state, proc}
F = {Idle : state, Want : state, Crit : state, State : proc → state, Turn : proc}
R = {= : state × state, = : proc × proc}
Remarque. Le mot-clef var ne permet pas d’introduire une variable dans la logique mais permet de déclarer une constante logique avec son type. Le choix des mots-clefs var et array reflète une vision plus intuitive des systèmes Cubicle comme des programmes.
7. On pourrait éviter cette formulation avec un symbole d’égalité polymorphe mais cela demande d’introduire des variables de type.
Remarque. Cubicle connaît en interne les symboles de sorte proc, bool, int et real ainsi que les symboles de fonction +, −, les constantes 0, 1, 2, …, 0.0, 1.0, … et les relations < et ≤. Par convention, ils apparaîtront dans la signature Σ seulement s’ils sont utilisés dans le système.
On distinguera parmi ces signatures celles qui ne permettent pas d’introduire de nouveaux termes, car il est important pour Cubicle de maîtriser la création des processus.
Définition 2.4.2. Une signature est dite relationnelle si elle ne contient pas de symbole de fonction.
Définition 2.4.3. Une signature est dite quasi-relationnelle si les symboles de fonction qu’elle contient sont tous des constantes.
On définit les Σ-termes, Σ-atomes (ou Σ-formules atomiques), Σ-littéraux et Σ-formules comme les expressions du langage défini par la grammaire décrite en figure 2.10. Les types des variables quantifiées ne sont pas indiqués par commodité.
Σ-terme :    ⟨t⟩ ::= c                      où c est une constante de Σ
                   | f(⟨t⟩, …, ⟨t⟩)         où f est un symbole de fonction de Σ d’arité > 0
                   | ite(⟨φ⟩, ⟨t⟩, ⟨t⟩)
Σ-atome :    ⟨a⟩ ::= false | true
                   | p(⟨t⟩, …, ⟨t⟩)         où p est un symbole de relation de Σ
Σ-littéral : ⟨l⟩ ::= ⟨a⟩ | ¬⟨a⟩
Σ-formule :  ⟨φ⟩ ::= ⟨l⟩ | ⟨φ⟩ ∧ ⟨φ⟩ | ⟨φ⟩ ∨ ⟨φ⟩ | ∀i. ⟨φ⟩ | ∃i. ⟨φ⟩
Figure 2.10 – Grammaire de la logique
En plus de cette restriction syntaxique, on impose que les Σ-termes et Σ-atomes soient bien typés en associant des sortes aux termes :
1. Chaque constante c de sorte s ∈ S est un terme de sorte s.
2. Soient f un symbole de fonction de type s1 × … × sn → s, t1 un terme de sorte s1, …, et tn un terme de sorte sn, alors f(t1, …, tn) est un terme de sorte s.
3. Soit p un symbole de relation de type s1 × … × sn. L’atome p(t1, …, tn) est bien typé ssi t1 est un terme de sorte s1, …, et tn est un terme de sorte sn.
On appelle Σ-clause une disjonction de Σ-littéraux. On dénote par Σ-CNF (resp. Σ-DNF) une conjonction de disjonctions de littéraux (resp. une disjonction de conjonctions de littéraux).
Conventions et notations
Pour éviter la lourdeur de la terminologie, on dénotera par termes, atomes (ou formules atomiques), littéraux, formules, clauses, CNF et DNF respectivement les Σ-termes, Σ-atomes (ou Σ-formules atomiques), Σ-littéraux, Σ-formules, Σ-clauses, Σ-CNF et Σ-DNF lorsque le contexte ne permet pas d’ambiguïté.
Les termes de la forme ite(φcond, tthen, telse) correspondent à la construction conditionnelle classique if φcond then tthen else telse. On suppose l’existence d’une fonction elim_ite qui prend en entrée une formule sans quantificateurs dont les termes peuvent contenir le symbole ite et renvoie une formule équivalente sans ite. De plus, on suppose que elim_ite renvoie une formule en forme normale disjonctive (Σ-DNF). Par exemple,
elim_ite(x = ite(φ, t1, t2)) = (φ ∧ x = t1) ∨ (¬φ ∧ x = t2)
On notera par ī une séquence i1, i2, …, in. Pour les formules quantifiées, on écrira ∃x,y. φ (resp. ∀x,y. φ) pour ∃x∃y. φ (resp. ∀x∀y. φ). En particulier, on écrira ∃ī. φ pour ∃i1∃i2 … ∃in. φ.
2.4.2 Sémantique de la logique
Dénition 2.4.4. Une Σ-structure A est une paire (D,I) où D est un ensemble appelé le
domaine de A (ou l’univers de A) et dénoté par dom(A). Les éléments de D sont appelés
les éléments de la structure A. On notera |dom(A)| le cardinal du domaine de A et on dira
qu’une structure A est nie si |dom(A)| est ni. I est l’interprétation qui :
1. associe à chaque sorte s ∈ S de Σ un sous ensemble non vide de D.
2. associe à chaque constante de Σ de type s un élément du sous-domaine I(s).
3. associe à chaque symbole de fonction f ∈ F d’arité n > 0 et de type s1 × . . . × sn → s
une fonction totale I(f ) : I(s1) × . . . × I(sn) → I(s).
4. associe à chaque symbole de relation p ∈ R d’arité n > 0 et de type s1 × . . . × sn une
fonction totale I(p) : I(s1) × . . . × I(sn) → {true,false}.
Cette interprétation peut être étendue de manière homomorphique aux Σ-termes et Σ-
formules – elle associe à chaque terme t de sorte s un élément I(t) ∈ I(s) et à chaque
formule φ une valeur I(φ) ∈ {true,false}.
Dénition 2.4.5. On appelle une Σ-théorie T un ensemble (potentiellement inni) de
Σ-structures. Ces structures sont aussi appelées les modèles de T .
Définition 2.4.6. On dit qu’un Σ-modèle M = (A, I) satisfait une formule φ ssi I(φ) = true, ce qui est dénoté par M |= φ.
Définition 2.4.7. Une formule φ est dite satisfiable dans une théorie T (ou T-satisfiable) ssi il existe un modèle M ∈ T qui satisfait φ.
Une formule φ est dite conséquence logique d’un ensemble Γ de formules dans une théorie T (noté Γ |=T φ) ssi tous les modèles de T qui satisfont Γ satisfont aussi φ.
Définition 2.4.8. Une formule φ est dite valide dans une théorie T (ou T-valide) ssi sa négation est T-insatisfiable, ce qui est dénoté par T |= φ ou ∅ |=T φ.
Définition 2.4.9. Soient A et B deux Σ-structures. A est une sous-structure de B, noté A ⊆ B, si dom(A) ⊆ dom(B).
Définition 2.4.10. Soient A une Σ-structure et X ⊆ dom(A), alors il existe une unique plus petite sous-structure B de A telle que X ⊆ dom(B). On dit que B est la sous-structure de A générée par X et on note B = ⟨X⟩A.
Les deux définitions suivantes seront importantes pour caractériser la théorie des processus supportée par Cubicle.
Définition 2.4.11. Une Σ-théorie T est localement finie ssi Σ est finie et chaque sous-ensemble fini d’un modèle de T génère une sous-structure finie.
Remarque. Si Σ est relationnelle ou quasi-relationnelle alors toute Σ-théorie est localement finie.
Définition 2.4.12. Une Σ-théorie T est close par sous-structure ssi chaque sous-structure d’un modèle de T est aussi un modèle de T.
Exemple. La théorie ayant pour modèle la structure de domaine N et de signature ({int}, {0,1}, {=,≤}), où ces symboles sont interprétés de manière usuelle (comme dans la théorie de l’arithmétique de Presburger), est localement finie et close par sous-structure. Si on étend cette signature avec le symbole de fonction +, alors elle n’est plus localement finie mais reste close par sous-structure. Une théorie ayant pour modèle une structure finie, avec une signature (_, ∅, {=, R}) où R est interprétée comme une relation binaire qui caractériserait un anneau, est localement finie mais n’est pas close par sous-structure.
Le problème de la satisfiabilité modulo une Σ-théorie T (SMT) consiste à établir la satisfiabilité de formules closes sur une extension arbitraire de Σ (avec des constantes). Une extension de ce problème, beaucoup plus utile en pratique, est d’établir la satisfiabilité modulo la combinaison de deux (ou plus) théories.
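Par exemple (illustration absente du texte original), la formule x ≤ y ∧ y ≤ x ∧ f(x) ≠ f(y) est insatisfiable modulo la combinaison de l’arithmétique linéaire et de la théorie de l’égalité : l’arithmétique force x = y, et la théorie de l’égalité en déduit f(x) = f(y), ce qui contredit le dernier littéral.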
Exemples de théories. La théorie de l’égalité (aussi appelée théorie vide, ou EUF) est la théorie qui a comme modèles tous les modèles possibles pour une signature donnée. Elle n’impose aucune restriction sur l’interprétation faite de ses symboles (ses symboles sont dits non interprétés). Les fonctions non interprétées sont souvent utilisées comme technique d’abstraction pour s’affranchir d’une complexité ou de détails inutiles.
La théorie de l’arithmétique est une autre théorie omniprésente en pratique. Elle est utilisée pour modéliser l’arithmétique des programmes, la manipulation de pointeurs et de la mémoire, les contraintes de temps réel, les propriétés physiques de certains systèmes, etc. Sa signature est {0, 1, …, +, −, ∗, /, ≤} étendue à un nombre arbitraire de constantes, et ses symboles sont interprétés de manière usuelle sur les entiers et les réels.
Une théorie de types énumérés est une théorie ayant une signature Σ quasi-relationnelle contenant un nombre fini de constantes (constructeurs). L’ensemble de ses modèles consiste en une unique Σ-structure dont chaque symbole est interprété comme un des constructeurs de Σ. Dans ce qui va suivre, on verra que ces théories sont utiles pour modéliser les points de programme des processus et les messages échangés par ces processus dans les systèmes paramétrés.
2.4.3 Systèmes de transition à tableaux
Cette section introduit le formalisme des systèmes à tableaux conçu par Ghilardi et Ranise [77]. Il permet de représenter une classe de systèmes de transition paramétrés dans un fragment restreint de la logique du premier ordre multi-sortée. Pour ceci, on aura besoin des théories suivantes :
— une théorie des processus TP sur signature ΣP localement finie dont le seul symbole de type est proc, et telle que la TP-satisfiabilité est décidable sur le fragment sans quantificateurs ;
— une théorie d’éléments TE sur signature ΣE localement finie dont le seul symbole de type est elem, et telle que la TE-satisfiabilité est décidable sur le fragment sans quantificateurs. TE peut aussi être l’union de plusieurs théories TE = TE1 ∪ … ∪ TEk ; dans ce cas TE a plusieurs symboles de types elem1, …, elemk ;
— la théorie d’accès TA sur signature ΣA, obtenue en combinant la théorie TP et la théorie TE de la manière suivante : ΣA = ΣP ∪ ΣE ∪ Q où Q est un ensemble de symboles de fonction de type proc × … × proc → elemi (ou de constantes de type elemi). Étant donnée une structure S, on note par S|ty la structure S dont le domaine est restreint aux éléments de type ty. Les modèles de TA sont les structures S où S|proc est un modèle de TP, S|elem est un modèle de TE et S|proc×…×proc→elem est l’ensemble des fonctions totales de proc × … × proc → elem.
On suppose dans la suite que les théories TE et TP ne partagent pas de symboles, i.e. ΣE ∩ ΣP = {=} (seule l’égalité apparaît dans toutes les signatures).
Définition 2.4.13. Un système (de transition) à tableaux est un triplet S = (Q, I, τ) avec Q partitionné en Q0, …, Qm où :
— Qi est un ensemble de symboles de fonction d’arité i. Chaque f ∈ Qi a comme type proc × … × proc → elemf (avec i occurrences de proc). Les fonctions d’arité 0 représentent les variables globales du système. Les fonctions d’arité non nulle représentent quant à elles les tableaux (indicés par des processus) du système.
— I est une formule qui caractérise les états initiaux du système (où les variables de Q peuvent apparaître libres).
— τ est une relation de transition.
La relation τ peut être exprimée sous la forme d’une disjonction de formules quantifiées existentiellement par zéro, une, ou plusieurs variables de type proc. Chaque composante de cette disjonction est appelée une transition et est paramétrée par ses variables existentielles. Elle met en relation les variables globales et tableaux d’états avant et après exécution de la transition. Si x ∈ Q est un tableau (ou une variable globale), on notera par x′ la valeur de x après exécution de la transition et par Q′ l’ensemble des variables et tableaux après exécution de la transition. La forme générale des transitions qu’on considère est la suivante :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x ∈ Q} ∀j̄. x′(j̄) = δx(ī, j̄, Q)

où le conjoint γ(ī, Q) constitue la garde et la conjonction sur x ∈ Q l’action, avec :
— γ une formule sans quantificateurs 8 appelée la garde de t ;
— δx une formule sans quantificateurs appelée la mise à jour de x.
Les variables de Q peuvent apparaître libres dans γ et les δx. Cette formule est équivalente à la variante suivante où les fonctions x′ sont écrites sous forme fonctionnelle avec un lambda-terme :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x ∈ Q} x′ = λj̄. δx(ī, j̄, Q)

8. On autorise une certaine forme de quantification universelle dans γ en section 3.3.
Intuitivement, une transition t met en jeu un ou plusieurs processus (les variables quantifiées existentiellement ī, ses paramètres) qui peuvent modifier l’état du système (cf. Section 2.5). Ici γ représente la garde de la transition et les δx sont les mises à jour des variables et tableaux d’état. Un tel système S = (Q, I, τ) est bien décrit par la syntaxe concrète de Cubicle et, de manière analogue, sa sémantique est une boucle infinie qui, à chaque tour, exécute une transition choisie arbitrairement dont la garde γ est vraie, et met à jour les valeurs des variables de Q en conséquence.
Soient un système S = (Q, I, τ) et une formule Θ (dans laquelle les variables de Q peuvent apparaître libres) ; le problème de la sûreté (ou de l’atteignabilité) est de déterminer s’il existe une séquence de transitions t1, …, tn dans τ telle que

I(Q⁰) ∧ t1(Q⁰, Q¹) ∧ … ∧ tn(Qⁿ⁻¹, Qⁿ) ∧ Θ(Qⁿ)

est satisfiable modulo les théories mises en jeu. S’il n’existe pas de telle séquence, alors S est dit sûr par rapport à Θ. Autrement dit, ¬Θ est une propriété de sûreté ou un invariant du système.
Exemple (Mutex). On prend ici l’exemple d’un mutex simple paramétré par son nombre de processus (de type proc) dont un aperçu plus détaillé a été donné en section 2.2.1. Pour cet exemple, on prendra pour TP la théorie de l’égalité de signature ΣP = ({proc}, ∅, {=}) ayant pour symbole de type proc. On considère comme théorie des éléments l’union d’une théorie des types énumérés TE de signature ΣE = ({state}, {Idle, Want, Crit}, {=}), où Idle, Want et Crit en sont les constructeurs de type state, et de la théorie TP. La théorie d’accès TA est définie comme la combinaison de TP et TE. Elle a pour signature ΣA = ΣP ∪ ΣE ∪ (_, {State, Turn}, ∅) où State est un symbole de fonction de type proc → state, et Turn est une constante de type proc.
Avec le formalisme décrit précédemment, le système S = (Q, I, τ) représentant le problème du mutex s’exprime de la façon suivante. L’ensemble Q contient les symboles State et Turn. Les états initiaux du système sont décrits par la formule

I ≡ ∀i. State(i) = Idle

La relation de transition τ est la disjonction treq ∨ tenter ∨ texit avec :

treq   ≡ ∃i. State(i) = Idle ∧ ∀j. State′(j) = ite(i = j, Want, State(j)) ∧ Turn′ = Turn
tenter ≡ ∃i. State(i) = Want ∧ Turn = i ∧ ∀j. State′(j) = ite(i = j, Crit, State(j)) ∧ Turn′ = Turn
texit  ≡ ∃i. State(i) = Crit ∧ ∀j. State′(j) = ite(i = j, Idle, State(j))
Enfin la formule caractérisant les mauvais états du système est :

Θ ≡ ∃i,j. i ≠ j ∧ State(i) = Crit ∧ State(j) = Crit
La description de ce système faite dans la syntaxe concrète de Cubicle, donnée en section 2.2.1, est l’expression directe de cette représentation. Pour plus de simplicité, on notera les transitions avec le sucre syntaxique (de Cubicle)

transition t (ī) requires { g } { a }

dans la section suivante. Par exemple, la première transition treq sera notée par

transition treq (i)
requires { State[i] = Idle }
{ State[j] := case | i = j : Want | _ : State[j] }
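Pour compléter l’illustration, et bien que ces deux transitions ne soient pas explicitées ici, tenter et texit se transcriraient de façon analogue dans ce sucre syntaxique :

transition tenter (i)
requires { State[i] = Want && Turn = i }
{ State[j] := case | i = j : Crit | _ : State[j] }

transition texit (i)
requires { State[i] = Crit }
{ State[j] := case | i = j : Idle | _ : State[j] }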
2.5 Sémantique
La sémantique d’un programme décrit par un système de transition à tableaux dans Cubicle est définie pour un nombre de processus donné. On fixe n, la cardinalité du domaine proc. C’est-à-dire que tous les modèles qu’on considère dans cette section interprètent le type proc par un ensemble à n éléments.
Remarque. On peut aussi voir un Σ-modèle M comme un dictionnaire qui associe les éléments de Σ aux éléments du domaine de M.
Soient S = (Q, I, τ) un système à tableaux, et TA la combinaison des théories définie en section 2.4.3. Dans la suite de cette section, on verra les modèles de TA comme des dictionnaires associant tous les symboles de ΣA à des éléments de leur domaine respectif. La première partie de ce chapitre donne au lecteur tous les éléments nécessaires pour réaliser un interpréteur du langage d’entrée de Cubicle et s’assurer qu’il est correct.
2.5.1 Sémantique opérationnelle
On définit la sémantique opérationnelle du système à tableaux S pour n processus sous forme d’un triplet Kⁿ = (Sn, In, −→) où :
— Sn est l’ensemble, potentiellement infini, des modèles de TA où la cardinalité de l’ensemble des éléments de proc est n ;
— In = {M ∈ Sn | M |= I} est le sous-ensemble de Sn dont les modèles satisfont la formule initiale I du système à tableaux S ;
— −→ ⊆ Sn × Sn est la relation de transition d’états définie par la règle suivante :

    transition t (ī) requires { g } { a }
    σ : ī ↦ proc        M |= gσ        A = all(a, proc)
    ───────────────────────────────────────────────────
                   M −→ update(Aσ, M)

où A = all(a, proc) est l’ensemble des actions de a instanciées avec tous les éléments de proc et σ est une substitution des paramètres ī de la transition vers les éléments de proc. L’application de la substitution de variables sur les actions s’effectue sur les termes de droite. On note Aσ l’ensemble des actions de A auxquelles on a appliqué la substitution σ :

    x := t ∈ A ⟺ x := tσ ∈ Aσ

    A[ī] := case c1 : t1 | … | cm : tm | _ : tm+1 ∈ A
        ⟺ A[ī] := case c1σ : t1σ | … | cmσ : tmσ | _ : tm+1σ ∈ Aσ

update(Aσ, M) est le modèle obtenu après application des actions de Aσ. On définit une fonction d’évaluation ⊢I telle que :

    M ⊢I e : v   ssi l’interprétation de e dans M est v

L’union des dictionnaires M1 ∪ M2 est définie comme l’union des ensembles des liaisons lorsque ceux-ci sont disjoints. Lorsque certaines liaisons apparaissent à la fois dans M1 et M2, on garde seulement celles de M2. On note la fonction update par le symbole ⊢up dans les règles qui suivent.

    M ⊢up a1 : M1        M ⊢up a2 : M2
    ──────────────────────────────────
         M ⊢up a1; a2 : M1 ∪ M2

              M ⊢I t : v
    ──────────────────────────────
    M ⊢up x := t : M ∪ {x ↦ v}

    M ⊭ c1        M ⊢up A[ī] := case c2:t2 | … | _ : tn+1 : M′
    ────────────────────────────────────────────────────────────
    M ⊢up A[ī] := case c1:t1 | c2:t2 | … | _ : tn+1 : M′

    M |= c1    M ⊢I t1 : v1    M ⊢I A : fA    M ⊢I ī : vī
    ────────────────────────────────────────────────────────────
    M ⊢up A[ī] := case c1:t1 | c2:t2 | … | _ : tn+1 : M ∪ {A ↦ fA ∪ {vī ↦ v1}}

    M ⊢I t : v    M ⊢I A : fA    M ⊢I ī : vī
    ────────────────────────────────────────────────────────────
    M ⊢up A[ī] := case _ : t : M ∪ {A ↦ fA ∪ {vī ↦ v}}
Exécuter l’action x := t revient à changer la liaison de x dans le modèle M par la valeur de t, comme le montre la deuxième règle. L’avant-dernière règle définit quant à elle l’exécution d’une mise à jour du tableau A en position ī. Si la première condition c1 de la construction case est satisfaite dans le modèle courant, l’interprétation de la fonction correspondant à A est changée pour qu’elle associe maintenant la valeur de t1 aux valeurs de ī.
Remarque. La construction case du langage de Cubicle est similaire aux constructions traditionnelles de certains langages de programmation comme le branchement conditionnel switch … case … break de C, ou match de ML. En effet, les instructions d’un cas sont exécutées seulement si toutes les conditions précédentes sont fausses. Le dernier cas _ permet de s’assurer qu’au moins une instruction est exécutée. En utilisant la forme logique de la section 2.4.3, une mise à jour avec case s’exprime sous la forme de termes ite (if-then-else) imbriqués. Une fois les ite éliminés, l’ensemble des conditions obtenues forme une partition (i.e. elles sont mutuellement insatisfiables et leur disjonction est valide).
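Par exemple, en reprenant la transition treq vue en section 2.4.3, la mise à jour State[j] := case | i = j : Want | _ : State[j] correspond au terme ite

∀j. State′(j) = ite(i = j, Want, State(j))

et les deux conditions i = j et i ≠ j forment bien une partition.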
Ces règles montrent notamment qu’on évalue les termes des affectations dans le modèle précédent et non dans celui qu’on est en train de construire. Le caractère ; dans les actions ne représente pas une séquence car les actions d’une transition sont exécutées simultanément et de manière atomique.
Un système paramétré a alors toutes les sémantiques {Kⁿ | n ∈ ℕ}. On peut aussi voir la sémantique d’un système à tableaux paramétré comme une fonction qui à chaque n associe Kⁿ 9.
2.5.2 Atteignabilité
Ici on ne s’intéresse qu’aux propriétés de sûreté des systèmes à tableaux, ce qui revient
au problème de l’atteignabilité d’un ensemble d’états.
Soit Kⁿ = (Sn, In, −→) un triplet exprimant la sémantique d’un système à tableaux S = (Q, I, τ) comme défini précédemment. On note par −→* la clôture transitive de la relation −→ sur Sn. On dit qu’un état s ∈ Sn est atteignable dans Kⁿ, et on note Reachable(s, Kⁿ), ssi il existe un état s0 ∈ In tel que

s0 −→* s

Soit une propriété dangereuse caractérisée par la formule Θ. On dira qu’un système à tableaux S est non sûr par rapport à Θ ssi

∃n ∈ ℕ. ∃s ∈ Sn. s |= Θ ∧ Reachable(s, Kⁿ)
9. On peut remarquer que Kⁿ est en réalité une structure de Kripke infinie pour laquelle on omet la fonction de labellisation L des états de la structure de Kripke car elle correspond exactement aux modèles, i.e. L(M) = M.
Au contraire, on dira que S est sûr par rapport à Θ ssi

∀n ∈ ℕ. ∀s ∈ Sn. s |= Θ =⇒ ¬Reachable(s, Kⁿ)
2.5.3 Un interpréteur de systèmes à tableaux
À l’aide de la sémantique précédente, on peut définir différentes analyses sur un système à tableaux. Par exemple, un interpréteur de systèmes à tableaux est un programme dont les états sont des états de Kⁿ = (Sn, In, −→), qui démarre en un état initial de In et qui « suit » une séquence de transitions de −→.
Comme dans la section précédente, l’interpréteur pour un système à tableaux S = (Q, I, τ) représente l’état du programme par un dictionnaire sur Q (i.e. un modèle de TA). On suppose ici que In est non vide, c’est-à-dire qu’il existe au moins un modèle de I où proc est de cardinalité n. Dans ce cas, on note Init la fonction qui prend n en argument et qui retourne un état de In au hasard.
L’interpréteur consiste en une procédure run (Algorithme 3) prenant en argument le paramètre n et la relation de transition τ. Elle maintient l’état du système dans le dictionnaire M et exécute une boucle infinie qui modifie M en fonction des transitions choisies.
Algorithme 3 : Interpréteur d’un système à tableaux
Variables :
M : l’état courant du programme (un modèle de TA)
procedure run(n,τ ) : begin
M := Init(n);
while true do
let L = enabled(n,τ ,M) in
if L = ∅ then deadlock ();
let a,σ = select(L) in
M := update(all(a,proc)σ,M);
La fonction enabled(n, τ, M) renvoie l’ensemble des transitions de τ dont la garde est vraie pour M. Autrement dit,

(transition t (ī) requires { g } { a }) ∈ enabled(n, τ, M) =⇒ M |= ∃ī. g(ī)

où g est la garde de la transition t et ī en sont les paramètres. Si aucune des transitions n’est possible, alors on sort de la boucle d’exécution avec la fonction deadlock. Si M
correspond à un état final du système, le programme s’arrête. Sinon on est dans une situation d’interblocage (ou deadlock).
La fonction select choisit de manière aléatoire une des transitions de L et retourne les actions a correspondantes ainsi qu’une substitution σ des arguments vers les éléments du domaine de proc telle que M |= g(ī)σ.
Ensuite la fonction update(all(a, proc)σ, M) retourne un nouveau dictionnaire où les valeurs des variables et tableaux sont mises à jour en fonction des actions de l’instance de la transition choisie, comme défini précédemment.
Remarque. On peut aussi ajouter une instruction assert(M ⊭ Θ) à l’algorithme 3 qui lève une erreur dès qu’on passe par un mauvais état, i.e. lorsque M |= Θ.
Exemple (Mutex). Prenons l’exemple du mutex de la section précédente avec deux processus #1 et #2. Une configuration de départ possible, satisfaisant la formule initiale I, est le modèle M ci-dessous. En exécutant la transition treq pour le processus #1, on change la valeur de State(#1) et on obtient le nouveau modèle M′.

M  = { Turn ↦ #2 ; State ↦ { #1 ↦ Idle, #2 ↦ Idle } ; … }

      ── treq [i\#1] ──→

M′ = { Turn ↦ #2 ; State ↦ { #1 ↦ Want, #2 ↦ Idle } ; … }
Maintenant que le lecteur peut se faire une idée du langage de description et de la sémantique qu’on utilise pour les systèmes de transition paramétrés, on montre dans la suite de ce document comment construire un model checker capable de prouver la sûreté des exemples de la section 2.2. Les algorithmes présentés dans ce chapitre sont des systèmes jouets, dans le sens où ils représentent des problèmes de vérification formelle de taille assez réduite. Cependant les preuves manuelles de ces systèmes paramétrés ne sont pas pour autant triviales et les outils automatiques sont souvent mis au point sur ce genre d’exemples. On s’efforcera bien sûr d’expliquer comment développer des algorithmes qui passent à l’échelle sur des problèmes plus conséquents.
3
Cadre théorique : model checking modulo théories
Sommaire
3.1 Analyse de sûreté des systèmes à tableaux  50
3.1.1 Atteignabilité par chaînage arrière  50
3.1.2 Correction  53
3.1.3 Effectivité  54
3.2 Terminaison  57
3.2.1 Indécidabilité de l’atteignabilité  57
3.2.2 Conditions pour la terminaison  59
3.2.3 Exemples  64
3.3 Gardes universelles  68
3.3.1 Travaux connexes  68
3.3.2 Calcul de pré-image approximé  69
3.3.3 Exemples  70
3.3.4 Relation avec le modèle de panne franche et la relativisation des quantificateurs  71
3.4 Conclusion  74
3.4.1 Exemples sans existence d’un bel ordre  74
3.4.2 Résumé  76
3.4.3 Discussion  77
Le cadre théorique sur lequel repose Cubicle est celui du model checking modulo théories proposé par Ghilardi et Ranise [77]. C’est un cadre déclaratif dans lequel les systèmes manipulent un ensemble de tableaux infinis, d’où le nom de systèmes à tableaux. Un système est décrit par des formules logiques du premier ordre et des transitions gardées (voir Section 2.4.3). Une des forces de cette approche réside dans l’utilisation des capacités de raisonnement des solveurs SMT et de leur support interne pour de nombreuses théories. On rappelle dans ce chapitre les fondements de cette théorie et certains résultats et théorèmes associés.
Dans la suite de ce chapitre, on se donne un système à tableaux S = (Q, I, τ) et une formule Θ représentant des états dangereux. Rappelons que la relation τ peut s’exprimer sous la forme d’une disjonction de transitions paramétrées de la forme :

t(Q, Q′) = ∃ī. γ(ī, Q) ∧ ⋀_{x ∈ Q} ∀j̄. x′(j̄) = δx(ī, j̄, Q)

(le premier conjoint étant la garde et le second l’action), avec γ et δx des formules sans quantificateurs. Ces restrictions sont très importantes car elles permettent à la théorie exposée dans ce chapitre de caractériser des conditions suffisantes pour lesquelles l’atteignabilité (ou la sûreté) est décidable.
Cubicle implémente le cadre du model checking modulo théories mais va au-delà en relâchant certaines contraintes, au prix de la complétude et de la terminaison, tout en restant correct.
3.1 Analyse de sûreté des systèmes à tableaux
Cette section présente les conditions sous lesquelles l’analyse de sûreté d’un système à tableaux peut être mise en œuvre de façon effective. En particulier, on caractérise un fragment de la logique qui est clos par les opérations de l’algorithme d’atteignabilité arrière et on montre que cette approche est correcte.
3.1.1 Atteignabilité par chaînage arrière
Plusieurs approches sont possibles pour résoudre les instances du problème de sûreté (ou d’atteignabilité). Une première approche consiste à construire l’ensemble des états atteignables (par chaînage avant) à partir des états initiaux ; une autre approche, celle qui sera adoptée dans la suite, consiste à construire l’ensemble des états qui peuvent atteindre les mauvais états du système (atteignabilité par chaînage arrière).
Définition 3.1.1. La post-image d’une formule φ(X) par la relation de transition τ est définie par

Postτ(φ)(X′) = ∃X. φ(X) ∧ τ(X, X′)

Elle représente les états atteignables à partir de φ en une étape de τ.
Définition 3.1.2. La pré-image d’une formule φ(X′) par la relation de transition τ est définie par

Preτ(φ)(X) = ∃X′. τ(X, X′) ∧ φ(X′)

De manière analogue, la pré-image d’une formule φ(X′) par une transition t est définie par

Pret(φ)(X) = ∃X′. t(X, X′) ∧ φ(X′)

et donc Preτ(φ)(X) = ⋁_{t ∈ τ} Pret(φ)(X).
La pré-image Preτ(φ) est donc une formule qui représente les états qui peuvent atteindre φ en une étape de transition de τ.
La clôture de la pré-image Pre*τ(φ) est définie par :

Pre⁰τ(φ) = φ
Preⁿτ(φ) = Preτ(Preⁿ⁻¹τ(φ))
Pre*τ(φ) = ⋁_{k ∈ ℕ} Preᵏτ(φ)
La clôture de Θ par Preτ caractérise alors l’ensemble des états qui peuvent atteindre Θ. Une approche générale pour résoudre le problème d’atteignabilité consiste alors à calculer cette clôture et à vérifier si elle contient un état de la formule initiale. L’algorithme d’atteignabilité par chaînage arrière présenté ci-après fait exactement ceci de manière incrémentale. Si, pendant sa construction de Pre*τ(Θ), on découvre qu’un état initial (un modèle de I) est aussi un modèle d’un des Preⁿτ(Θ), alors le système n’est pas sûr vis-à-vis de Θ. Le système est sûr si, a contrario, aucune des pré-images ne s’intersecte avec I. L’algorithme ne peut décider une telle propriété que si cette clôture est aussi un point fixe.
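Concrètement, et bien que la formulation précise ne soit donnée qu’avec l’algorithme 4, ce point fixe peut s’énoncer ainsi : si chaque cube de Preτ(⋁_{c ∈ V} c) est déjà impliqué, modulo TA, par ⋁_{c ∈ V} c, alors ⋁_{c ∈ V} c est équivalent à Pre*τ(Θ) ; il suffit dans ce cas de vérifier que I ∧ ⋁_{c ∈ V} c est insatisfiable pour conclure que le système est sûr.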
Plusieurs briques de base sont nécessaires pour construire un tel algorithme. La première est d’être capable de calculer les pré-images successives. Pour une recherche en avant, il faudrait pouvoir calculer Postⁿ(I), ce qui est souvent impossible pour les systèmes paramétrés à cause de la présence de quantificateurs universels dans la formule initiale I. En revanche, le calcul du Pre est effectif dans les systèmes à tableaux si on restreint la forme des formules exprimant les états dangereux.
Définition 3.1.3. On appelle cube une formule de la forme ∃ī.(∆ ∧ F), où les ī sont des variables de la théorie TP, ∆ est une conjonction de diséquations entre les variables de ī, et F est une conjonction de littéraux.
Proposition 3.1.1. Si φ est un cube, la formule Preτ(φ) est équivalente à une disjonction de cubes.
Démonstration. Soit φ = ∃p̄.(∆ ∧ F) un cube. Preτ(φ) est la disjonction des Pret(φ) pour chaque transition t de τ.
Prenons une transition t de τ :

Pret(φ)(Q) = ∃Q′. t(Q, Q′) ∧ ∃p̄.(∆ ∧ F(Q′))

se réécrit en

∃Q′. ∃ī. γ(ī, Q) ∧ ⋀_{x ∈ Q} x′ = λj̄. δx(ī, j̄, Q) ∧ ∃p̄.(∆ ∧ F(Q′))    (3.1)

Par la suite, on utilisera la notation traditionnellement employée en combinatoire, notée ici (A k), pour l’ensemble des combinaisons de A de taille k.
Soit j̄ = j1, …, jk. Étant donnés x ∈ Q et x′ = λj̄. δx(ī, j̄, Q) une des mises à jour de la transition t, on notera σx′ la substitution x′(p̄σ)\δx(ī, p̄σ, Q) pour tout p̄σ ∈ (p̄ k). Maintenant, la formule ∃p̄.(∆ ∧ F(Q′))[σx′] correspond en essence à la réduction de l’application d’une lambda-expression apparaissant dans l’équation (3.1). La réduction de toutes les lambda-expressions conduit à la formule suivante :

∃Q′. ∃ī. γ(ī, Q) ∧ ∃p̄.(∆ ∧ F(Q′)[σx′, x′ ∈ Q′])

F(Q′)[σx, x ∈ Q] n’est pas tout à fait sous forme de cube car les σx peuvent contenir des termes ite. Notons Fd1(ī, Q) ∨ … ∨ Fdh(ī, Q) = elim_ite(F(Q′)[σx, x ∈ Q]) la disjonction résultant de l’élimination des ite des mises à jour. Les Fd(ī, Q) sont des conjonctions de littéraux. En sortant les disjonctions, on obtient :

∃Q′. ⋁_{Fd(ī,Q) ∈ elim_ite(F(Q′)[σx, x ∈ Q])} ∃ī. γ(ī, Q) ∧ ∃p̄.(∆ ∧ Fd(ī, Q))

et donc :

Pret(φ)(Q) = ⋁_{Fd(ī,Q) ∈ elim_ite(F(Q′)[σx, x ∈ Q])} ∃ī. ∃p̄. (∆ ∧ γ(ī, Q) ∧ Fd(ī, Q))
Dans ce qui suit on supposera que la formule caractérisant les états dangereux Θ est un
cube. La clôture construite par l’algorithme sera donc une disjonction de cubes et pourra
être vue comme un ensemble de cubes V.
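À titre d’exemple, en appliquant directement la proposition 3.1.1 au mutex de la section 2.4.3 (calcul qui n’est pas détaillé ici), prenons Θ ≡ ∃i,j. i ≠ j ∧ State(i) = Crit ∧ State(j) = Crit et la transition tenter. Après substitution de la mise à jour de State et élimination des ite, l’un des cubes de la disjonction Pretenter(Θ) est

∃i,j. i ≠ j ∧ State(i) = Want ∧ Turn = i ∧ State(j) = Crit

c’est-à-dire les états où un processus en attente, dont c’est le tour, coexiste avec un processus déjà en section critique.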
On donne une analyse d’atteignabilité par chaînage arrière classique dans l’algorithme 4. Cet algorithme maintient une file à priorité Q de cubes à traiter et la clôture partielle, ou ensemble des cubes visités, V, dont la disjonction des éléments caractérise les états pouvant atteindre Θ. On démarre avec un ensemble V vide, matérialisant la formule false, et une file Q où seule la formule Θ est présente.
La fonction BWD calcule itérativement la clôture Pre*τ(Θ) et, si elle termine, renvoie un des deux résultats possibles :
Segmentation supervisée d’actions à partir de primitives haut niveau dans des flux vidéos
Adrien Chan-Hon-Tong
To cite this version:
Adrien Chan-Hon-Tong. Segmentation supervisée d’actions à partir de primitives haut niveau dans des flux vidéos. Signal and Image Processing. Université Pierre et Marie Curie - Paris VI, 2014. French.
HAL Id: tel-01084604
https://tel.archives-ouvertes.fr/tel-01084604
Submitted on 19 Nov 2014
Université Pierre et Marie Curie
École doctorale des Sciences mécaniques,
acoustique, électronique et robotique de Paris
Laboratoire de Vision et Ingénierie des Contenus (CEA, LIST, DIASI)
Segmentation supervisée d’actions à partir de primitives
haut niveau dans des flux vidéos
Par Adrien CHAN-HON-TONG
Thèse de doctorat de Génie informatique, automatique et traitement du signal
Dirigée par Catherine ACHARD
et encadrée par Laurent LUCAT
Présentée et soutenue publiquement le 29/09/2014
Devant un jury composé de :
CAPLIER Alice, Professeur, Rapporteur
CANU Stéphane, Professeur, Rapporteur
CORD Matthieu, Professeur, Examinateur
EL YACOUBI Mounim A., Maitre de conférences, Examinateur
ACHARD Catherine, Maitre de conférences, Directeur de thèse
LUCAT Laurent, Ingénieur – Chercheur, Encadrant de thèse
À mes parents.
À ma femme et mes enfants.
Factorisation matricielle, application à la recommandation personnalisée de préférences
Julien Delporte
To cite this version:
Julien Delporte. Factorisation matricielle, application à la recommandation personnalisée de préférences. Other. INSA de Rouen, 2014. French.
HAL Id: tel-01005223
https://tel.archives-ouvertes.fr/tel-01005223
Submitted on 12 Jun 2014
THÈSE
Présentée à :
L’Institut National des Sciences Appliquées de Rouen
En vue de l’obtention du titre de :
Docteur en Informatique
Par :
Julien DELPORTE
Intitulée :
Factorisation Matricielle,
Application à la Recommandation
Personnalisée de Préférences
soutenue le 3 février 2013
Devant le jury composé de :
Rapporteurs : Younès BENNANI - Université Paris-Nord
Gérard GOVAERT - Université de Technologie de Compiègne
Examinateurs : Anne BOYER - Université de Lorraine
Michèle SEBAG - Université Paris-Sud
Dominique FOURDRINIER - Université de Rouen
Directeur : Stéphane CANU - INSA de Rouen
Lh*rs p2p : une nouvelle structure de données distribuée et scalable pour les environnements Pair à Pair
Hanafi Yakouben
To cite this version:
Hanafi Yakouben. Lh*rs p2p : une nouvelle structure de données distribuée et scalable pour les environnements Pair à Pair. Other. Université Paris Dauphine - Paris IX, 2013. French.
HAL Id: tel-00872124
https://tel.archives-ouvertes.fr/tel-00872124
Submitted on 11 Oct 2013
Services de répartition de charge pour le Cloud : application au traitement de données multimédia
Sylvain Lefebvre
To cite this version:
Sylvain Lefebvre. Services de répartition de charge pour le Cloud : application au traitement de données multimédia. Computers and Society. Conservatoire national des arts et métiers - CNAM, 2013. French.
HAL Id: tel-01062823
https://tel.archives-ouvertes.fr/tel-01062823
Submitted on 10 Sep 2014
CONSERVATOIRE NATIONAL DES
ARTS ET MÉTIERS
École Doctorale Informatique, Télécommunications et Électronique (Paris)
CEDRIC
THÈSE DE DOCTORAT
présentée par : Sylvain LEFEBVRE
soutenue le : 10 Décembre 2013
pour obtenir le grade de : Docteur du Conservatoire National des Arts et Métiers
Spécialité : INFORMATIQUE
Services de répartition de charge pour le Cloud :
Application au traitement de données multimédia
THÈSE dirigée par
M. GRESSIER-SOUDAN Eric Professeur, CEDRIC-CNAM, Paris
Encadrée par
Mme. CHIKY Raja Enseignant-Chercheur, LISITE-ISEP, Paris
RAPPORTEURS
Mme. MORIN Christine Chercheur Titulaire, INRIA-IRISA, Rennes
M. PIERSON Jean-Marc Professeur, IRIT, Université Paul Sabatier, Toulouse
EXAMINATEURS
M. ROOSE Philippe Directeur de recherche, LIUPPA, Pau
M. SENS Pierre Directeur de recherche, LIP6, UPMC, Paris
M. PAWLAK Renaud Directeur technique, IDCapture, Paris
Mme. SAILHAN Françoise Maître de conférences, CEDRIC-CNAM, Paris
"When they first built the University of California at Irvine they just put the buildings in. They did not put any sidewalks, they just planted grass. The next year, they came back and put the sidewalks where the trails were in the grass."
Larry Wall
Remerciements
Les travaux menés durant ces trois ans ont été possible grâce à la confiance et aux
encouragements que mes encadrants Eric Gressier-Soudan, Renaud Pawlak puis Raja Chiky
m’ont accordé, à la fois lors du recrutement et tout au long de ces 36 derniers mois. Je
tiens à remercier particulièrement Raja Chiky, pour son optimisme, ses conseils et ses
encouragements en toutes circonstances.
Je tiens aussi à remercier tous les membres de l’équipe de Recherche et Développement
Informatique du LISITE : Yousra pour les filtres de Bloom, Zakia, Bernard et Matthieu
pour leurs conseils et commentaires avisés, et Olivier pour les probabilités. En restant à
l’ISEP je remercie aussi mes collègues de bureaux : Giang, Sathya, Adam, Ahmad, Fatima,
Maria et Ujjwal.
Mes sincères remerciements vont aux membres du jury qui ont accepté de juger mon travail
: MM. Pierson et Roose, Mme Morin ainsi que le professeur P. Sens pour ses conseils lors
de ma soutenance de mi-parcours et pour m’avoir donné accès à la plateforme GRID5000.
Je dois aussi adresser des remerciements à ceux qui m’ont encouragé dans ma poursuite
d’études : Professeurs B. Marchal, F. Pommereau, M. Bernichi de l’ESIAG et A. Ung
d’Essilor sans lesquels je ne me serais pas lancé dans cette aventure.
Enfin, ce travail n’aurait pu être mené sans les encouragements de ma famille, de mes
amis, et de Clara.
Résumé
Les progrès conjugués de l’algorithmique et des infrastructures informatiques permettent
aujourd’hui d’extraire de plus en plus d’informations pertinentes et utiles des données issues
de réseaux de capteurs ou de services web. Ces plateformes de traitement de données
"massives" sont confrontées à plusieurs problèmes. Un problème de répartition des données
: le stockage de ces grandes quantités de données impose l’agrégation de la capacité de
stockage de plusieurs machines. De là découle une seconde problématique : les traitements
à effectuer sur ces données doivent être à leur tour répartis efficacement de manière à ne
surcharger ni les réseaux, ni les machines.
Le travail de recherche mené dans cette thèse consiste à développer de nouveaux algorithmes de répartition de charge pour les systèmes logiciels de traitement de données massives. Le premier algorithme mis au point, nommé "WACA" (Workload and Cache Aware Algorithm), améliore le temps d'exécution des traitements en se basant sur des résumés des contenus stockés sur les machines. Le second algorithme, appelé "CAWA" (Cost Aware Algorithm), tire parti de l'information de coût disponible dans les plateformes de type "Cloud Computing" en étudiant l'historique d'exécution des services.
L’évaluation de ces algorithmes a nécessité le développement d’un simulateur d’infrastructures
de "Cloud" nommé Simizer, afin de permettre leur test avant le déploiement en
conditions réelles. Ce déploiement peut se faire de manière transparente grâce au système
de distribution et de surveillance de service web nommé "Cloudizer", développé aussi dans
le cadre de cette thèse, qui a servi à l’évaluation des algorithmes sur l’Elastic Compute
Cloud d’Amazon.
Ces travaux s’inscrivent dans le cadre du projet de plateforme de traitement de données
Multimédia for Machine to Machine (MCube). Ce projet requiert une infrastructure
logicielle adaptée au stockage et au traitement d’une grande quantité de photos et de sons,
issus de divers réseaux de capteurs. La spécificité des traitements utilisés dans ce projet a
nécessité le développement d’un service d’adaptation automatisé des programmes vers leur
environnement d’exécution.
Mots clés : Répartition de charge, Cloud, Données Massives
vAbstract
Nowadays, progresses in algorithmics and computing infrastructures allow to extract
more and more adequate and useful information from sensor networks or web services data.
These big data computing platforms have to face several challenges. A data partitioning
issue ; storage of large volumes imposes aggregating several machines capacity. From this
problem arises a second issue : to compute these data, tasks must be distributed efficiently
in order to avoid overloading networks and machines capacities. The research work carried
in this thesis consists in developping new load balancing algorithms for big data computing
software. The first designed algorithm, named WACA (Workload And Cache Aware algorithm)
enhances computing latency by using summaries for locating data in the system.
The second algorithm, named CAWA for "Cost AWare Algorithm", takes advantage of
cost information available on so-called "Cloud Computing" platforms by studying services
execution history. Performance evaluation of these algorithms required designing a Cloud
Infrastructure simulator named "Simizer", to allow testing prior to real setting deployment.
This deployement can be carried out transparently thanks to a web service monitoring and
distribution framework called "Cloudizer", also developped during this thesis work and was
used to assess these algorithms on the Amazon Elastic Compute Cloud platform.
These works are set in the context of data computing platform project called "Multimedia
for Machine to Machine" (MCube). This project requires a software infrastructure
fit to store and compute a large volume of pictures and sounds, collected from sensors. The
specifics of the data computing programs used in this project required the development of
an automated execution environement adaptation service.
Keywords : Load balancing, Cloud Computing, Big Data
Avant propos
Cette thèse se déroule dans l’équipe Recherche et Développement en Informatique (RDI)
du laboratoire Laboratoire d’Informatique, Signal et Image, Électronique et Télécommunication
(LISITE) de l’Institut Supérieur d’Électronique de Paris. L’équipe RDI se focalise
principalement sur la thématique des systèmes complexes avec deux axes forts. Le premier
axe vise la fouille de données et l’optimisation dans ces systèmes. L’optimisation est en
effet pour la plupart du temps liée à la traçabilité et à l’analyse des données, en particulier
les données liées aux profils d’utilisation. Le deuxième axe concerne l’étude de langages, de
modèles, et de méthodes formelles pour la simulation et la validation des systèmes complexes,
en particulier les systèmes biologiques et les systèmes embarqués autonomes sous
contraintes fortes.
Les travaux de cette thèse s’inscrivent dans le cadre d’un projet européen de mise au
point de plateforme de services Machine à Machine permettant d’acquérir et d’assurer
l’extraction et l’analyse de données multimédia issues de réseaux de capteurs. Ce projet,
baptisé Mcube pour Multimedia for machine to machine (Multimédia pour le machine à
machine) est développé en partenariat avec le Fonds Européen de Développement Régional
et deux entreprises de la région Parisienne : CAP 2020 et Webdyn. Il permet de réunir les
différentes problématiques de recherches de l’équipe RDI au sein d’un projet commun, en
partenariat avec les membres de l’équipe de recherche en traitement du signal du LISITE.
La particularité de ce projet est qu’il se concentre sur la collecte de données multimédias.
Les difficultés liées au traitement de ces données nécessitent de faire appel à des
technologies spécifiques pour leur stockage. Cette plateforme ne se limite pas au stockage
des données, et doit fournir aux utilisateurs la possibilité d’analyser les données récupérées
en ligne. Or, ces traitements peuvent s’exécuter dans un environnement dynamique soumis
à plusieurs contraintes comme le coût financier, ou la consommation énergétique. La répartition
des traitements joue donc un rôle prépondérant dans la garantie de ces contraintes. La
répartition des données sur les machines joue aussi un rôle important dans la performance
des traitements, en permettant de limiter la quantité de données à transférer.
Ces travaux ont été encadrés par Dr. Renaud Pawlak puis suite à son départ de
l’ISEP en septembre 2011, par Dr. Raja Chiky, sous la responsabilité du professeur Eric
Gressier-Soudan de l’équipe Systèmes Embarqués et Mobiles pour l’Intelligence Ambiante
du CEDRIC-CNAM.
Table des matières
1 Introduction
2 MCube : Une plateforme de stockage et de traitement de données multimédia
2.1 Introduction
2.1.1 Architecture du projet MCube
2.1.2 Problématiques
2.2 Composants des plateformes de stockage de données multimédia
2.2.1 Stockage de données multimédia
2.2.2 Distribution des traitements
2.2.3 Plateformes génériques de traitements de données multimédias
2.3 Analyse
2.4 Architecture de la plateforme MCube
2.4.1 Architecture matérielle
2.4.2 Architecture logicielle
2.4.3 Architecture de la plateforme MCube
2.4.4 Description des algorithmes de traitement de données multimédia
2.5 Développement du projet
2.6 Discussion
3 Le framework Cloudizer : Distribution de services REST
3.1 Introduction
3.2 Architecture du canevas Cloudizer
3.2.1 Les nœuds
3.2.2 Le répartiteur
3.2.3 Déploiement de services
3.3 Intégration au projet MCube
3.4 Considérations sur l’implémentation
3.5 Discussion
4 Simizer : un simulateur d’application en environnement cloud
4.1 Introduction
4.1.1 Simulateurs de Cloud existants
4.2 Architecture de Simizer
4.2.1 Couche Événements
4.2.2 Couche architecture
4.2.3 Fonctionnement du processeur
4.2.4 Exécution d’une requête
4.3 Utilisation du simulateur
4.4 Validation
4.4.1 Génération de nombres aléatoires
4.4.2 Fonctionnement des processeurs
4.5 Discussion et travaux futurs
4.6 Conclusion
5 Algorithmes de répartition de charge
5.1 Introduction
5.2 Classification et composition des algorithmes de répartition de charge
5.2.1 Familles d’algorithmes de répartition de charge
5.2.2 Composition d’algorithmes de répartition de charge
5.3 Algorithmes de répartition par évaluation de la charge
5.4 Algorithmes de répartition fondés sur la localité
5.4.1 Techniques exhaustives
5.4.2 Techniques de hachage
5.4.3 Techniques de résumé de contenus
5.5 Stratégies basées sur le coût
5.6 Discussion
6 WACA : Un Algorithme Renseigné sur la Charge et le Cache
6.1 Contexte et Motivation
6.1.1 Localité des données pour les traitements distribués
6.2 Filtres de Bloom
6.2.1 Filtres de Bloom Standards (FBS)
6.2.2 Filtres de Bloom avec compteurs (FBC)
6.2.3 Filtres de Bloom en Blocs (FBB)
6.3 Principe Général de l’algorithme WACA
6.3.1 Dimensionnement des filtres
6.3.2 Choix de la fonction de hachage
6.4 Algorithme sans historique (WACA 1)
6.4.1 Tests de WACA 1
6.5 Algorithme avec historique
6.5.1 Complexité de l’algorithme et gain apporté
6.5.2 Expérimentation de WACA avec historique
6.6 Block Based WACA (BB-WACA)
6.6.1 Description de l’algorithme
6.6.2 Analyse de l’algorithme
6.7 Évaluation de la politique BB-WACA
6.7.1 Évaluation par simulation
6.7.2 Évaluation pratique sur l’Elastic Compute Cloud
6.8 Conclusion
7 CAWA : Un algorithme de répartition basé sur les coûts
7.1 Introduction
7.2 Approche proposée : Cost AWare Algorithm
7.2.1 Modélisation du problème
7.2.2 Avantages de l’approche
7.3 Phase de classification
7.3.1 Représentation et pré-traitement des requêtes
7.3.2 Algorithmes de classification
7.4 Phase d’optimisation
7.4.1 Matrice des coûts
7.4.2 Résolution par la méthode Hongroise
7.4.3 Répartition vers les machines
7.5 Évaluation par simulation
7.5.1 Preuve de concept : exemple avec deux machines
7.5.2 Robustesse de l’algorithme
7.6 Vers une approche hybride : Localisation et optimisation des coûts
7.7 Conclusion et travaux futurs
8 Conclusion
A Cloud Computing : Services et offres
A.1 Définition
A.2 Modèles de services
A.2.1 L’Infrastructure à la Demande (Infrastructure as a Service)
A.2.2 La Plateforme à la demande (Platform as a Service)
A.2.3 Le logiciel à la demande (Software As a Service)
A.3 Amazon Web Services
B Code des politiques de répartition
B.1 Implémentation de WACA
B.2 Implémentation de l’algorithme CAWA
C CV
Bibliographie
Glossaire
Index
Liste des tableaux
2.1 Propriétés des systèmes de traitements de données multimédias
3.1 Protocole HTTP : exécution de services de traitement de données
4.1 Lois de probabilité disponibles dans Simizer
4.2 Résultats des tests du Chi-2
4.3 Comparaison de Simizer et CloudSim, temps d’exécution de tâches
6.1 Configurations des tests de la stratégie WACA
A.1 Caractéristiques des instances EC2
Table des figures
2.1 Architecture du projet MCube
2.2 Architecture pour la fouille de données multimédia
2.3 Architecture de la plateforme MCube
2.4 Modèle de description d’algorithme
3.1 Architecture de la plateforme Cloudizer
3.2 Modèle des stratégies de répartition de charge
4.1 Architecture de Simizer
4.2 Entités de gestion des événements
4.3 Entités de simulation système
4.4 Comparaison des distributions aléatoires de Simizer
5.1 Stratégies de sélection possibles [GPS11, GJTP12]
5.2 Graphique d’ordonnancement, d’après Assunçao et al. [AaC09]
6.1 Relation entre nombre d’entrées/sorties et temps d’exécution
6.2 Exemples de Filtres de Bloom
6.3 Description de l’algorithme
6.4 Comparaison des fonctions MD5, SHA-1 et MurmurHash
6.5 Mesures du temps d’exécution des requêtes, WACA version 1
6.6 Mesures du temps d’exécution des requêtes, WACA version 2
6.7 Comparaisons de WACA 1 et 2
6.8 Flot d’exécution de l’algorithme BB-WACA
6.9 Schéma du tableau d’index
6.10 Complexité
6.11 Distribution des temps de réponses simulés, 1000 requêtes, distribution Uniforme
6.12 Distribution des temps de réponses simulés, 1000 requêtes, distribution Gaussienne
6.13 Distribution des temps de réponses simulés, 1000 requêtes, distribution Zipfienne
6.14 Répartition des requêtes sur 20 machines, (Zipf)
6.15 Résultats de simulation : temps de sélection des machines
6.16 Résultats d’exécutions sur le cloud d’Amazon
7.1 Description de la phase de classification / optimisation
7.2 Distribution cumulative des temps de réponses, 2 machines
7.3 CAWA : Distribution cumulative des temps de réponse, 10 machines
Chapitre 1
Introduction
Ces dernières années, le besoin de traiter des quantités de données de plus en plus grandes
s’est fortement développé. Les applications possibles de ces analyses à grande échelle vont
de l’indexation de pages web à l’analyse de données physiques, biologiques ou environnementales.
Le développement de nouvelles plateformes de services informatiques comme les
réseaux de capteurs et les "nuages", ces plateformes de services de calcul à la demande,
accentuent ce besoin tout en facilitant la mise en œuvre de ces outils. Les réseaux de capteurs
ne sont pas une technologie récente, mais la capacité de traiter des grandes quantités
de données issues de ces réseaux est aujourd’hui facilement accessible à un grand nombre
d’organisations, en raison du développement de l’informatique en "nuage" : le Cloud Computing.
Ce nouveau modèle de service informatique est, d’après la définition du National
Institute of Standards and Technology (NIST), un service mutualisé, accessible par le ré-
seau, qui permet aux organisations de déployer rapidement et à la demande des ressources
informatiques [MG11]. Dans ce contexte, les applications combinant plateformes d’acquisition
de données autonomes et les "nuages" pour traiter les données acquises, se développent
de plus en plus et permettent de fournir de nouveaux services en matière d’analyse de données,
dans le domaine commercial [Xiv13], ou académique [BMHS13, LMHJ10].
Cadre applicatif
Les travaux de cette thèse s’inscrivent dans le cadre d’un projet de recherche financé
par le Fonds Européen de Développement Régional (FEDER) d’Île-de-France 1, nommé
Multimedia pour le Machine à Machine (MCube) 2. Ce projet consiste à mettre au point
un ensemble de technologies permettant de collecter, transférer, stocker et traiter des données
multimédias collectées à partir de passerelles autonomes déployées géographiquement.
L’objectif est d’en extraire des informations utiles pour les utilisateurs du service. Par
exemple, il peut s’agir de détecter la présence d’une certaine espèce d’insecte nuisible dans
un enregistrement sonore, afin de permettre à l’agriculteur d’optimiser le traitement insecticide
de ses cultures, en n’épandant que lorsque cela est nécessaire.
Ce projet pose plusieurs difficultés de réalisation : en premier lieu, l’intégration et le
stockage de volumes de données importants, pouvant venir de différentes sources comme un
réseau de capteurs ou de multiples services WEB, implique de sélectionner les technologies
appropriées pour traiter et stocker ces données[Lan01]. À titre d’exemple, les cas d’études
utilisés pour le projet MCube consistent en un suivi photographique du développement de
plants de tomates et des écoutes sonores pour la détection de nuisibles. Ces deux cas d’études
effectués sur un seul champ durant les trois campagnes d’acquisition du projet (2011,2012
et 2013) ont nécessité la collecte d’un téraoctet et demi de données, soit environ 466,6
gigaoctets par utilisateur et par an. Le service définitif devant s’adresser à un nombre plus
important de clients, l’espace requis pour ces deux applications représentera donc un volume
non négligeable, en utilisation réelle. Le deuxième aspect de ces travaux consiste à mettre
au point un système pouvant adapter ces applications à un environnement dynamique
tel que les plateformes de cloud computing, de manière à pouvoir bénéficier de ressources
importantes à la demande lorsqu’un traitement le requiert.
La troisième difficulté concerne les traitements d’images eux-mêmes, qui sont réalisés
par des scientifiques ayant peu ou pas d’expérience de la programmation, excepté dans
des langages spécifiques comme MATLAB 3, dont l’utilisation sur les plateformes de type
1. http://www.europeidf.fr/index.php, consulté le 5 octobre 2013
2. http://mcube.isep.fr/mcube, consulté le 5 octobre 2013
3. http://www.mathworks.fr/products/matlab/, consulté le 5 octobre 2013
2"nuage" n’est pas facile. En effet, ces programmes ne sont pas écrits par des experts de la
programmation parallèle ou des calculs massifs, mais sont écrits pour traiter directement
un ou deux fichiers à la fois. Or, le contexte du projet MCube impose de traiter de grandes
quantités de fichiers avec ces mêmes programmes, ce qui requiert un système permettant
de les intégrer le plus simplement possible dans un environnement parallèle.
Ces trois verrous s’inscrivent dans des problématiques scientifiques plus larges liées aux
"nuages" et aux services distribués en général.
Problématiques scientifiques
Les problématiques scientifiques abordées dans ce document sont au nombre de trois.
La première d’entre elles consiste à assurer la répartition de requêtes dans un système
distribué déployé sur un "Cloud", en tenant compte des aspects particuliers de ce type
de plateformes. La deuxième problématique, plus spécifique au projet MCube, concerne
l’adaptation automatique de traitements à leur environnement d’exécution. Troisièmement,
le développement de nouvelles stratégies de répartition a nécessité de nombreux tests en
amont du déploiement sur une infrastructure réelle, or il n’existe pas, à notre connaissance,
de simulateur d’applications distribuées sur le "cloud" approprié pour tester ce type de
développements.
Distribution de requêtes dans le "Cloud" Les stratégies de répartition fournies par
les services de "Cloud Computing" sont limitées dans leurs fonctionnalités ou leur capacité.
Par exemple, le répartiteur élastique de la plateforme Amazon Web Services (ELB : Elastic
Load Balancer), distribue les requêtes selon une stratégie en tourniquet (Round Robin)
sur les machines les moins chargées du système 4. Seule cette stratégie de répartition est
disponible pour les utilisateurs. Or, les applications déployées sur les plateformes d’informatique
en nuage sont diverses, par conséquent une stratégie générique n’est pas toujours
adaptée pour obtenir la meilleure performance du système. De plus, Liu et Wee [LW09]
ont démontré que déployer son propre répartiteur d’applications web sur une machine vir-
4. https://forums.aws.amazon.com/message.jspa?messageID=129199#129199, consulté le 5 octobre
2013
tuelle dédiée était plus économique et permettait de répondre à un plus grand nombre de
requêtes concurrentes que le service de répartition de charge élastique fourni par Amazon.
Les fournisseurs de plateformes Cloud imposent souvent à leurs utilisateurs des stratégies
de répartition génériques pour pouvoir répondre à un grand nombre de cas d’utilisation,
mais ces stratégies ne sont pas toujours les plus efficaces : un exemple notable est celui de
la plateforme Heroku 5, qui utilisait une stratégie de répartition aléatoire pour attribuer
les requêtes à différents processeurs. Or, cette répartition aléatoire se faisait sans que les
multiples répartiteurs du système ne communiquent entre eux pour surveiller la charge
des machines. Par conséquent, certains répartiteurs envoyaient des requêtes vers des machines
déjà occupées, ce qui créait des files d’attente inattendues 6. Utiliser des stratégies
de répartition de charge appropriées, tenant compte des spécificités des plateformes de
type "Cloud Computing" est donc indispensable. Les spécificités de ces plateformes sont
notamment :
Des performances difficilement prévisibles : L’infrastructure physique de ces plateformes
est partagée par un grand nombre d’utilisateurs (cf. annexe A). La consé-
quence de ce partage est qu’en fonction de l’utilisation des ressources faite par les multiples
utilisateurs, il peut arriver que deux machines virtuelles identiques sur le papier
ne fournissent pas obligatoirement le même niveau de performance [BS10, IOY+11].
Des latences réseaux plus élevées : La taille des centres de données utilisés dans les
plateformes de Cloud Computing, ainsi que leur répartition géographique a pour
conséquence une latence plus importante dans les communications réseaux, qu’il
convient de masquer le plus possible à l’utilisateur final.
Une plateforme d’informatique à la demande doit donc permettre aux utilisateurs de choisir
la stratégie de distribution la plus adaptée à leur cas d’utilisation. De plus, il est nécessaire
de prendre en compte les spécificités de ces environnements pour développer de nouvelles
stratégies de répartition adaptées.
5. https://www.heroku.com/, consulté le 5 octobre 2013
6. https://blog.heroku.com/archives/2013/2/16/routing_performance_update, consulté le 5 octobre
2013
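Pour illustrer cette idée de stratégie interchangeable, l’esquisse Java ci-dessous repose sur des noms d’interface et de classes purement hypothétiques (elle ne décrit pas l’API réelle du canevas Cloudizer, présentée au chapitre 3) : le répartiteur délègue le choix de la machine cible à une implémentation fournie par l’utilisateur, ici un simple tourniquet (Round Robin).

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Interface hypothétique : une stratégie choisit une machine pour une requête donnée.
interface StrategieRepartition {
    String choisirMachine(String requete, List<String> machines);
}

// Exemple minimal : répartition en tourniquet (Round Robin).
class TourniquetStrategie implements StrategieRepartition {
    private final AtomicInteger compteur = new AtomicInteger(0);

    @Override
    public String choisirMachine(String requete, List<String> machines) {
        int index = Math.floorMod(compteur.getAndIncrement(), machines.size());
        return machines.get(index);
    }
}

// Le répartiteur ne connaît que l'interface : la stratégie peut être remplacée
// (localité, coût, charge...) sans modifier le reste du système.
class Repartiteur {
    private final StrategieRepartition strategie;

    Repartiteur(StrategieRepartition strategie) { this.strategie = strategie; }

    String router(String requete, List<String> machines) {
        return strategie.choisirMachine(requete, machines);
    }
}

Ce découplage est précisément ce qui permet à l’utilisateur de choisir, voire de fournir, la stratégie la plus adaptée à son cas d’utilisation.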
Adaptation d’applications de traitements de données multimédia Ces nouvelles
plateformes sont par définition simples d’utilisation, flexibles et économiques. Cependant,
pour tirer parti des avantages de ces services, il est nécessaire de mettre en place des processus
automatisés de déploiement, de distribution et de configuration des applications
[AFG+09, ZCB10, MG11]. Les plateformes de cloud computing promettent à leurs utilisateurs
l’accès à une infinité de ressources à la demande, mais toutes les applications ne
sont pas nécessairement conçues pour être distribuées dans un environnement dynamique.
Quelles sont donc les caractéristiques des applications pouvant être déployées dans les environnements
de type "Cloud Computing" ? Quels mécanismes peuvent être mis en œuvre
pour assurer le déploiement de ces applications de manière plus ou moins transparente ?
Comment amener ces traitements à s’exécuter de manière distribuée ou parallèle pour
pouvoir tirer parti des avantages fournis par les plateformes de "Cloud Computing" ?
Test et performance À ces questions s’ajoute la problématique du test de l’application.
Il n’existe à ce jour pas de simulateur fiable permettant d’évaluer le comportement
d’applications déployées dans le Cloud, car les efforts actuels, tels que les simulateurs
CloudSim[RNCB11] et SimGrid[CLQ08], se concentrent sur la simulation des infrastructures
physiques supportant ces services plutôt que sur les applications utilisant ces services
[SL13]. Or, le développement de protocoles et d’algorithmes distribués nécessite des tests
sur un nombre important de machines. L’ajustement de certains paramètres peut requé-
rir de multiples tests sur diverses configurations (nombre de machines, puissance, type de
charge de travail), mais il n’est pas toujours aisé de devoir déployer des tests grandeur
nature pour ce type d’ajustements. Il faut donc développer ou adapter les outils existants
pour faciliter ces tests, en fournissant une interface de programmation de haut niveau pour
développer les protocoles à tester, sans que l’utilisateur n’ait à programmer à la fois la
simulation et le protocole qu’il souhaite étudier.
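À titre d’illustration du principe d’un simulateur à événements discrets, l’esquisse Java suivante montre une boucle de simulation minimale ; il s’agit d’un squelette générique et hypothétique, qui ne reflète pas l’architecture réelle de Simizer (décrite au chapitre 4).

import java.util.PriorityQueue;

// Esquisse minimale d'une boucle de simulation à événements discrets.
class Evenement implements Comparable<Evenement> {
    final long date;            // date simulée d'occurrence
    final Runnable action;      // traitement associé à l'événement

    Evenement(long date, Runnable action) {
        this.date = date;
        this.action = action;
    }

    @Override
    public int compareTo(Evenement autre) {
        return Long.compare(this.date, autre.date);
    }
}

class Simulateur {
    private final PriorityQueue<Evenement> file = new PriorityQueue<>();
    private long horloge = 0;   // temps simulé courant

    void planifier(long date, Runnable action) {
        file.add(new Evenement(date, action));
    }

    // Consomme les événements par ordre chronologique jusqu'à épuisement.
    void executer() {
        while (!file.isEmpty()) {
            Evenement e = file.poll();
            horloge = e.date;
            e.action.run();     // l'action peut planifier de nouveaux événements
        }
    }

    long temps() { return horloge; }
}

L’intérêt d’un tel squelette est que l’utilisateur n’a plus qu’à fournir les actions à planifier (arrivée de requêtes, fin de traitement, etc.), sans programmer lui-même la mécanique de simulation.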
Approche développée
L’approche développée pour répondre à ces problématiques a consisté à mettre au
point une plateforme de distribution de services web destinée aux environnements de type
Service d’Infrastructure à la Demande (Infrastructure As A Service, IaaS, cf. annexe A),
permettant de déployer et de distribuer des services web ou des applications en ligne de
commande en les transformant en services Web. Cette application, nommée Cloudizer, a
permis l’étude de différentes stratégies de distribution et de leur impact sur les différentes
applications déployées à travers plusieurs publications [PLCKA11, SL12, LKC13].
Cette plateforme a été utilisée dans le projet MCUBE, pour distribuer les programmes
de traitements des images collectées. Les volumes de données concernés par ce projet sont
tels que l’utilisation d’une stratégie de distribution des requêtes fondée sur la localité
des données [DG04, PLCKA11] permet de réduire efficacement les temps de traitement.
Dans ce domaine, les algorithmes de distribution les plus utilisés sont ceux fondés sur la
technique du hachage cohérent [KLL+97, KR04], qui est mise en œuvre dans plusieurs
systèmes de stockage de données massives. Le but de ces travaux est de montrer à travers
différentes simulations que les filtres de Bloom [Blo70, FCAB98, DSSASLP08] peuvent
être facilement combinés à d’autres algorithmes de répartition de charge, pour fournir une
alternative performante au hachage cohérent, dans certaines conditions. Ce travail a permis
la mise au point d’une stratégie de répartition de charge fondée sur des filtres de Bloom
indexés appelée WACA (Workload And Cache Aware Algorithm, Algorithme Renseigné
sur la Charge et le Cache).
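Pour rappel, un filtre de Bloom standard se réduit à un tableau de m bits et à k fonctions de hachage ; l’esquisse Java ci-dessous en donne une implémentation simplifiée, à but purement illustratif (le double hachage y tient lieu de k fonctions indépendantes), et ne correspond pas au code de WACA présenté au chapitre 6.

import java.util.BitSet;

// Filtre de Bloom standard simplifié : m bits, k fonctions de hachage
// simulées par double hachage (h1 + i * h2).
class FiltreBloom {
    private final BitSet bits;
    private final int m;   // taille du tableau de bits
    private final int k;   // nombre de fonctions de hachage

    FiltreBloom(int m, int k) {
        this.m = m;
        this.k = k;
        this.bits = new BitSet(m);
    }

    private int position(String element, int i) {
        int h1 = element.hashCode();
        int h2 = (element + "#sel").hashCode();  // second hachage obtenu par salage
        return Math.floorMod(h1 + i * h2, m);
    }

    // Ajout d'un élément : positionne k bits à 1.
    void ajouter(String element) {
        for (int i = 0; i < k; i++) bits.set(position(element, i));
    }

    // Test d'appartenance : "peut-être présent" (faux positifs possibles),
    // ou "certainement absent" si l'un des k bits vaut 0.
    boolean contientPeutEtre(String element) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(element, i))) return false;
        }
        return true;
    }
}

C’est ce type de résumé compact, construit sur les contenus stockés par chaque machine, que le répartiteur peut interroger pour estimer où se trouvent les données.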
Les Clouds de type Infrastructure à la Demande constituent l’environnement de déploiement
cible de ces services. La propriété de ces plateformes est de facturer leurs services
à l’utilisation. Afin de prendre ce facteur en compte, nous avons développé une stratégie
fondée sur le coût comme indicateur de performance, et nommée CAWA (Cost-AWare Algorithm,
Algorithme Renseigné sur le Coût) dont le fonctionnement a été testé à travers
des simulations. Tel que montré en section 1 de cette introduction, il n’existe pas encore
de simulateur adéquat pour les utilisateurs de plateformes de Cloud Computing. Le logiciel
Simizer a donc été développé pour permettre la simulation d’applications orientées services
déployées sur des infrastructures Cloud. Ce simulateur a permis de simuler et tester
les stratégies de distribution développées pour la plateforme Cloudizer, et d’étudier leur
comportement à large échelle.
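Concernant CAWA, l’esquisse suivante illustre, de façon volontairement simplifiée, l’exploitation d’une matrice de coûts : chaque classe de requêtes est affectée à la machine de coût estimé minimal. Il s’agit d’une version gloutonne purement illustrative ; la résolution retenue dans la thèse (chapitre 7) s’appuie sur la méthode hongroise.

// Esquisse illustrative : choix de machine à partir d'une matrice de coûts.
// CAWA résout en réalité une affectation optimale (méthode hongroise) ;
// on montre ici une simplification gloutonne pour fixer les idées.
class AffectationParCout {
    // couts[i][j] : coût estimé d'exécution de la classe de requêtes i
    // sur la machine j, déduit de l'historique d'exécution.
    private final double[][] couts;

    AffectationParCout(double[][] couts) { this.couts = couts; }

    // Retourne, pour chaque classe de requêtes, l'indice de la machine
    // de coût minimal.
    int[] affecter() {
        int[] resultat = new int[couts.length];
        for (int i = 0; i < couts.length; i++) {
            int meilleure = 0;
            for (int j = 1; j < couts[i].length; j++) {
                if (couts[i][j] < couts[i][meilleure]) meilleure = j;
            }
            resultat[i] = meilleure;
        }
        return resultat;
    }
}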
Organisation du présent document
Le reste de ce document est organisé en deux parties principales. La première partie,
composée des chapitres 2 à 4, décrit les développements des différentes plateformes logicielles
évoquées dans la section précédente. Le chapitre 2 présente les particularités du
projet MCube et ce qui le différencie des plateformes de traitement de données multimédia
existantes. Le chapitre 3 décrit le fonctionnement de la plateforme Cloudizer et son utilisation
dans le cadre du projet MCUBE, puis le chapitre 4 décrit les particularités et le
fonctionnement du simulateur Simizer.
La seconde partie de ce document se concentre sur les stratégies de distribution de
requêtes mises au point au cours de cette thèse. L’état de l’art en matière de répartition
de charge est présenté dans le chapitre 5. Les chapitres suivants décrivent respectivement
la stratégie WACA (chapitre 6), qui utilise des résumés compacts pour assurer une distribution
des tâches en fonction de la localisation des données dans le système, et la stratégie
CAWA (chapitre 7) qui se fonde sur une estimation du coût d’exécution des tâches pour effectuer
la répartition. Le chapitre 8 présentera les conclusions et perspectives de l’ensemble
de ces travaux.
Chapitre 2
MCube : Une plateforme de stockage
et de traitement de données
multimédia
2.1 Introduction
L’objectif technique du projet MCube (Multimedia 4 Machine 2 Machine) est de développer
une technologie Machine à Machine pour la capture et l’analyse de données multimédia
par des réseaux de capteurs avec des problématiques de faible consommation et de
transmission GPRS/EDGE. Le projet couvre la chaîne complète : l’acquisition des données,
la transmission, l’analyse, le stockage, et le service d’accès par le WEB. Il s’appuie sur les
infrastructures M2M actuellement offertes par les opérateurs du secteur comme les réseaux
3G/GPRS pour la communication des données. Le but de cette plateforme est de fournir
des services d’analyse de données multimédia à faible coût permettant diverses applications
telles que la surveillance de cultures et de sites industriels (détections d’intrusions ou
d’insectes nuisibles).
2.1.1 Architecture du projet MCube
L’architecture globale de la technologie MCube correspond à l’état de l’art en matière
d’architecture M2M et est résumée en figure 2.1. Les "passerelles" sont des systèmes embarqués
permettant de piloter des périphériques d’acquisition de données multimédia comme
des appareils photos ou des microphones USB. Les passerelles assurent la transmission des
Figure 2.1 – Architecture du projet MCube
données collectées vers la plateforme MCube, via une connexion 3G si un accès Ethernet
n’est pas disponible. La communication s’effectue via le protocole HTTP. La plateforme
est un serveur accessible via le web, dont le rôle consiste à :
Stocker les données collectées La plateforme assure le stockage des données multimé-
dia capturées et remontées par les passerelles. Ceci nécessite un large espace de stockage
redondant et distribué afin de limiter les risques de pertes de données, ainsi
que des méthodes efficaces pour retrouver les données stockées.
Configurer les passerelles Le système offre la possibilité de communiquer avec les passerelles
afin de les mettre à jour dynamiquement et de changer différents paramètres
tels que le type de capture à effectuer (sons, images ou vidéos) ou la fréquence de ces
captures. Cette communication se fait via une interface de programmation prenant
la forme d’un service web REST [Fie00], déployée sur les serveurs de la plateforme (une esquisse d’appel est donnée après cette liste).
Traiter et analyser les données multimédia Les utilisateurs du système MCube peuvent
développer leurs propres programmes d’analyse de données, et les stocker sur la plateforme
pour former leur propre bibliothèque de traitements. Ces traitements doivent
pouvoir être exécutés par lots, en parallèle sur un grand nombre de fichiers, mais aussi
en temps réel au fur et à mesure de la collecte des données.
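À titre d’illustration uniquement, l’esquisse Java suivante montre à quoi pourrait ressembler un appel de configuration d’une passerelle via HTTP ; l’URL et les noms de paramètres (type de capture, fréquence) sont hypothétiques et ne décrivent pas l’API réelle de la plateforme MCube.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Esquisse : envoi d'une requête HTTP POST pour reconfigurer une passerelle.
// L'URL et les noms de paramètres sont hypothétiques.
public class ConfigurationPasserelle {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://plateforme.example.org/passerelles/42/configuration");
        HttpURLConnection connexion = (HttpURLConnection) url.openConnection();
        connexion.setRequestMethod("POST");
        connexion.setDoOutput(true);
        connexion.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // Paramètres illustratifs : type de capture et fréquence (en minutes).
        String corps = "capture=image&frequence=30";
        try (OutputStream sortie = connexion.getOutputStream()) {
            sortie.write(corps.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Code de retour : " + connexion.getResponseCode());
    }
}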
Par conséquent, mis à part le rôle de communication avec les passerelles, la plateforme
MCube devrait reposer sur une base de données multimédia. Ces systèmes stockent et
traitent des données en volumes importants (plusieurs giga/téraoctets). Les données concernées
peuvent être de types différents (sons, images, vidéos,...) et sont séparées en un nombre
important de petits fichiers pouvant provenir de différents périphériques d’acquisition. De
plus, ces données sont généralement peu ou pas structurées. Afin d’assurer l’efficacité des
traitements sur les données multimédia, il est nécessaire de les exécuter de façon parallèle.
À nouveau, le résultat de ces traitements doit être stocké dans le système, ce qui permet
la réutilisation des données pour des analyses supplémentaires. La somme de toutes ces
données exige une capacité de stockage importante.
2.1.2 Problématiques
Une contrainte primordiale du projet MCube est d’aboutir à un service suffisamment
flexible pour pouvoir être personnalisé à chaque scénario d’utilisation, dans le but d’optimiser
le modèle économique du client en fonction de nombreux critères. Ainsi, les traitements
des données multimédia pourront être faits aussi bien côté passerelle que côté plateforme
suivant les critères à optimiser. La plateforme fournira des services d’aide à la décision
permettant d’optimiser au mieux la solution. Les choix d’architecture logicielle décrits en
section 2.4 reflètent ces besoins.
La plateforme MCube permet d’exécuter des analyses poussées sur les données reçues
des passerelles. Les algorithmes utilisés pour ces analyses sont souvent soit expérimentaux,
soit développés spécifiquement pour certains clients, et ne sont pas disponibles dans les librairies
de traitements de données multimédias existantes telles que Image Terrier[HSD11]
ou OpenCV[Bra00]. La conséquence est que ces algorithmes ne sont pas toujours directement
parallélisables. Or, la collecte d’un nombre important de fichiers dans les passerelles
MCube impose de pouvoir être en mesure d’exécuter les traitements sur une grande quantité
de données en parallèle.
Cet état de fait a des conséquences sur les choix d’architecture à effectuer car il est
nécessaire de fournir un accès aux données à la fois au niveau fichier pour que les données
soient accessibles aux différents programmes de traitements et un accès automatisé, via une
interface de programmation, pour permettre aux utilisateurs d’interroger leurs données en
temps réel.
Ce chapitre expose donc dans un premier temps les composants des systèmes de stockage
et de traitement de données multimédias en section 2.2, et analyse les différentes
plateformes existantes de ce domaine par rapport aux besoins de la plateforme MCube en
section 2.3. La section 2.4 décrit la solution adoptée pour la mise au point de l’architecture
de la plateforme MCube et les problématiques qui découlent de ces choix sont discutées en
section 2.6.
2.2 Composants des plateformes de stockage de données multimédia
Il est possible de définir une plateforme de traitement de données multimédias comme
un système logiciel distribué permettant de stocker des fichiers contenant des données
photographiques, vidéos ou sonores, et de fournir les services nécessaires à l’extraction et
au stockage d’informations structurées issues de ces fichiers.
Les systèmes d’analyse de données multimédia possèdent un ensemble hiérarchisé de
composants permettant de transformer les données brutes en informations précises [CSC09,
HSD11]. Le schéma 2.2 montre l’articulation de ces différents composants, que l’on retrouvera
dans la plupart des systèmes décrits dans ce chapitre.
[Figure : données multimédias → extraction de descripteurs → stockage des descripteurs → indexation / détection → index d’informations]
Figure 2.2 – Architecture pour la fouille de données multimédia
Le premier composant de ces systèmes correspond à la couche de stockage des données.
Ce composant doit assurer le stockage et la disponibilité des données brutes, c’est à dire
les documents audiovisuels dans leur format d’origine (RAW, MPEG, AVI, WAV), mais
aussi des données intermédiaires nécessaires pour les traitements (les descripteurs) et les
résultats des algorithmes d’indexation et d’analyse.
Les descripteurs permettent de décrire le contenu des documents audiovisuels stockés
dans le système. Il s’agit de représentations intermédiaires des données multimédias, comme
par exemple les valeurs obtenues après l’application d’une transformée de Fourier à un enregistrement
sonore. Il existe un nombre important de représentations différentes, chacune
dépendant du type d’information que l’utilisateur souhaite extraire des données collectées.
Les systèmes de stockage multimédia fournissent une bibliothèque d’algorithmes divers
permettant de procéder à l’extraction de plusieurs types de représentations.
À partir de ces descripteurs, l’information obtenue peut être traitée de plusieurs manières
: elle peut être utilisée pour construire un index des données afin de faciliter la
recherche de contenu dans la banque de données multimédia (indexation), ou bien les données
peuvent être traitées pour en extraire des informations précises, à travers l’utilisation
d’algorithmes de détection d’objets, d’artefacts ou de mouvements (détection).
Les composants génériques décrits dans cette section sont présents dans la plupart des
systèmes évoqués dans la suite de ce chapitre. Chaque système doit cependant relever un
certain nombre de défis pour parvenir à indexer ou extraire des informations des contenus.
2.2.1 Stockage de données multimédia
Les données multimédia sont dans la plupart des cas des données extrêmement riches en
informations et donc très volumineuses. Les données ainsi stockées doivent aussi faire l’objet
de traitements et pré-traitements, dont il faut stocker les résultats pour pouvoir les analyser
à l’aide d’outils appropriés. Les volumes de stockage atteints deviennent très vite handicapants
pour les systèmes de gestion de base de données traditionnels. Différentes stratégies de
stockage distribué peuvent être utilisées pour résoudre ce problème. Les données multimé-
dias recouvrent généralement des espaces disque importants et sont difficilement compressibles
sans perte d’information. Par conséquent, la distribution des données sur plusieurs
machines à la fois apparaît comme la solution simple et économique. Plusieurs stratégies
coexistent dans le domaine pour parvenir à ce but : les données peuvent être stockées dans
un système de fichiers distribué [SKRC10], dans une base de données distribuée [Kos05] ou
encore dans une base de données distribuée non relationnelle [Cor07, CDG+06].
Système de fichiers distribués Un système de fichiers distribué est un système dans
lequel l’espace disque de plusieurs machines est mis en commun et vu par toutes les machines
du système comme une seule entité de manière transparente. Cela permet d’augmenter
l’espace disponible en partageant la capacité des disques de chaque machine du système,
ou d’augmenter la fiabilité du système en autorisant la réplication des données. Des
exemples de systèmes de fichiers distribués sont le Network File System (NFS) [SCR+00],
XTreemFS[HCK+07] ou encore le Hadoop Distributed File System [SKRC10].
Cette approche est utilisée dans de nombreux projets de stockage et de traitement de
données multimédia. Par exemple, les projets RanKloud [Can11] et ImageTerrier [HSD11]
reposent sur le système de fichiers distribué issu du projet Hadoop, le Hadoop Distributed
File System [SKRC10]. Ce système de fichiers a la particularité de ne permettre que l’ajout
et la suppression de fichiers. La modification des données n’est pas possible. Cette limitation
ne pose pas de problème dans les systèmes de stockage et traitement de données multimédia,
car les fichiers bruts ne font pas l’objet de modifications internes, mais sont copiés puis lus
plusieurs fois de suite.
Ce type de système est particulièrement adapté lorsque les données doivent être traitées
en parallèle sur plusieurs machines par divers programmes, comme dans le système de
traitement de données parallèle Hadoop [Whi09].
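À titre indicatif, l’esquisse Java ci-dessous montre un usage élémentaire de l’API FileSystem du projet Hadoop pour copier un fichier multimédia local vers HDFS puis le relire ; les chemins et l’adresse du NameNode sont bien entendu fictifs.

import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Esquisse : dépôt puis lecture d'un fichier sur HDFS via l'API FileSystem.
public class ExempleHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        // Copie d'un fichier local (photo issue d'une passerelle) vers HDFS.
        hdfs.copyFromLocalFile(new Path("/tmp/photo_0001.jpg"),
                               new Path("/mcube/brut/photo_0001.jpg"));

        // Relecture du fichier : HDFS privilégie l'écriture unique et les
        // lectures multiples, ce qui convient aux données multimédias brutes.
        try (InputStream in = hdfs.open(new Path("/mcube/brut/photo_0001.jpg"))) {
            byte[] tampon = new byte[4096];
            int lus = in.read(tampon);
            System.out.println("Octets lus : " + lus);
        }
        hdfs.close();
    }
}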
Stockage en base de données La solution la plus directe pour fournir l’accès à des
données dans une plateforme de services est la mise au point d’une base de données relationnelle.
D’après [Kos05] l’arrivée dans les années 90 des systèmes de bases de données
orientés objets tels qu’Informix 1 ou Oracle 2 a permis l’émergence de systèmes plus efficaces
pour représenter les données multimédia. Les standards MPEG-7 [MPE12b] et MPEG-
21 [Mpe12a], par exemple, ont fourni un cadre de représentation des données multimédia
pouvant servir de référence pour la mise au point de schémas de base de données spéci-
fiques. C’est cette approche qui a été retenue dans le projet de “base de donnée MPEG-7”,
développée par M. Döller et décrite dans [Döl04]. En utilisant un système de gestion de
bases de données objet, M. Döller a créé un nouveau schéma respectant le format de re-
1. http://www.ibm.com/software/data/informix/, consulté le 5 octobre 2013
2. http://www.oracle.com/us/products/database/overview/index.html, consulté le 5 octobre 2013
présentation fourni par le standard MPEG-7. Afin de parvenir à utiliser le langage SQL
pour rechercher les données, les méthodes d’indexation ont aussi été étendues pour pouvoir
indexer des données multi-dimensionnelles. Ceci permet aux utilisateurs de bénéficier de
tous les mécanismes des bases de données traditionnelles et notamment d’utiliser le langage
SQL pour la manipulation et la recherche de données multimédia. Les fichiers sont stockés
soit en tant qu’objets binaires (BLOBs), comme simples champs d’une table, soit dans le
système de fichiers, et seul le chemin vers les données figure dans la base.
La principale limite de cette approche réside dans le fait que le système soit implémenté
comme une extension du système commercial de bases de données Oracle. Il est donc
difficilement envisageable d’ajouter de nouvelles fonctionnalités au système, en raison de
l’interface de programmation de bas niveau nécessaire à ces changements. L’autre limite de
cette approche est que le système n’est pas distribué, or, le volume de données généralement
associé avec les bases de données multimédia peut nécessiter un fonctionnement réparti
afin de multiplier la capacité de stockage et de traitement, ce qui n’est pas le cas dans ces
travaux.
Bases de données distribuées : Les bases de données distribuées permettent d’agréger
la capacité de stockage de plusieurs machines au sein d’un seul système de base
de données. Retenue par K. Chatterjee et al. [CSC09], cette approche consiste à
utiliser les capacités existantes de systèmes de bases de données distribuées, comme
par exemple MySql Cluster 3, et à adapter la base de données en ajoutant un index
réparti permettant de retrouver rapidement les données recherchées sur l’ensemble
des machines.
IrisNet [GBKYKS03] est un réseau de capteurs pair à pair sur lequel les images et données
des différents capteurs sont stockées dans une base de données XML. Les nœuds
sont divisés en deux catégories : les "Sensing Agents" (SA) qui sont des machines
de puissance moyenne reliées aux capteurs (webcam, météo, . . . ) et les "Organizing
Agents" (OA) qui sont des serveurs sur lesquels sont déployés les services du réseau
de capteurs. Les capteurs eux-mêmes sont des machines de puissance moyenne, avec
peu ou pas de contraintes environnementales, et un espace de stockage important.
3. http://www.mysql.com, consulté le 5 octobre 2013
Ces nœuds appelés “Sensing Agents” sont utilisés pour déployer des “Senselets”, qui
filtrent les données issues des capteurs. La base de données est distribuée sur les OA.
Les utilisateurs peuvent requêter la base de données en utilisant le langage XPATH
(langage de requêtage de données XML).
Ces deux approches reposent sur les systèmes de base de données traditionnels fournissant
de fortes garanties de cohérence des données, suivant le principe ACID (Atomicité,
Cohérence, Isolation, Durabilité). Or, il a été démontré que lorsque la taille
d’un système distribué augmente, fournir une garantie sur la cohérence des données
du système ne permet pas d’assurer à la fois la disponibilité et la performance de
celui-ci. C’est le théorème de CAP, Consistency, Availability et Partition Tolerance
(Cohérence, Disponibilité et Tolérance aux partitions) démontré par Gilbert et Lynch
dans [GL02]. Les systèmes multimédias distribués ne nécessitent pas de supporter de
fortes garanties de cohérence sur les données : les fichiers à traiter ne sont écrits
qu’une seule fois puis lus à plusieurs reprises pour mener à bien les analyses voulues.
Par conséquent, certains acteurs ont construit leur propre système de gestion de données
multimédia en s’abstrayant de ces garanties, comme Facebook 4 l’a fait avec le
système Haystack [Vaj09]. Les composants de ce système ne reposent pas sur une base
de données relationnelle mais sur un index distribué permettant de stocker les images
dans des fichiers de grandes tailles (100Go), ce qui permet de grouper les requêtes et
de réduire les accès disques nécessaires à l’affichage de ces images. La particularité
de ce système est que les images y sont peu recherchées, mais sont annotées par les
utilisateurs pour y ajouter des informations sur les personnes présentes dans l’image
et la localisation de la prise de vue.
Un autre exemple de base de données distribuée utilisée pour stocker les données
multimédia est celui de Youtube 5, qui utilise la base de données orientée colonnes
BigTable [CDG+06, Cor07], pour stocker et indexer les aperçus de ses vidéos.
Agrégation de bases de données : Le canevas AIR créé par F. Stiegmaier et al. [SDK+11]
se concentre sur l’extensibilité en utilisant un intergiciel pour connecter plusieurs
bases de données entre elles, et utilise des requêtes au format MPEG-7 [MPE12b].
4. http://facebook.com, consulté le 5 octobre 2013
5. http://youtube.com, consulté le 5 octobre 2013
Cependant ces travaux restent dirigés sur la recherche de contenus plutôt que sur
la détection d’éléments précis dans les images. Les données et leurs métadonnées
(descripteurs) sont réparties sur différents systèmes de gestion de bases de données
(SGBD) pouvant fonctionner sur différentes machines. Lorsque le système reçoit une
requête, deux cas se présentent : soit les nœuds sont homogènes et la requête est envoyée
à chaque machine puis les résultats sont fusionnés et / ou concaténés, soit les
nœuds sont des systèmes différents (bases de données différentes) et seule la partie la
plus adaptée de la requête est transmise au noeud le plus approprié. Les intergiciels
LINDO[BCMS11] et WebLab[GBB+08] fournissent des modèles de représentations
de données communs à plusieurs systèmes et permettent les transferts de données
entre des services de traitements adoptant ce modèle. Ces systèmes ne reposent sur
aucun système de stockage particulier mais permettent d’agréger différentes sources
de données et différents services de traitements hétérogènes. La problématique de
cette approche est la nécessité de devoir adapter les services et les traitements au
modèle exposé par les interfaces d’un tel système.
Une grande variété de solutions existent au problème du stockage des données multimédia.
Un critère de choix possible est le type d’utilisation qui sera fait des données : par
exemple un système d’extraction des connaissances fonctionnera mieux avec un système
de fichiers distribué dans lequel les données sont accessibles de manière transparente, au
contraire d’une base de données SQL qui requiert une interface particulière pour accéder
aux données stockées. Dans le cadre du projet MCube l’accès aux données doit être possible
de plusieurs manières : il faut pouvoir fournir un accès permettant de traiter les données
en parallèle ainsi qu’une interface permettant de requêter les données générées par les ré-
sultats des analyses effectuées. Le système se rapprochant le plus de ce cas d’utilisation est
donc l’utilitaire HADOOP [Whi09], qui fournit à la fois un système de haut niveau pour
requêter les données et un accès de bas niveau via le système de fichiers distribué.
2.2.2 Distribution des traitements
Un volume important de données à analyser se traduit par la nécessité de paralléliser
le traitement des données, de manière à réduire les temps d’exécution. Cependant, tous
les systèmes de stockage de données multimédia ne permettent pas de traitement parallèle
des données. Dans le domaine des traitements de données multimédia il est possible de
distinguer deux manières de traiter les données[Can11] : les traitements à la chaîne (Multiple
Instruction Multiple Data, MIMD) et les traitements parallèles (Single Instruction
Multiple Data, SIMD).
Traitements à la chaîne : Multiple Instruction Multiple Data Le système est
constitué de plusieurs unités, chacune exécutant une étape particulière d’une chaîne de
traitements. Les données passent d’une étape à l’autre en étant transférées d’une entité
à l’autre, de sorte que plusieurs données sont à différents stades de traitement au même
instant.
Le projet WebLab [GBB+08] est une architecture de traitement de données multimé-
dia orientée service qui permet de faciliter le développement d’applications distribuées de
traitement multimédia. Cette plateforme repose sur un modèle de données commun à tous
ses composants représentant une ressource multimédia. Ce modèle ne repose pas sur un
standard mais permet de créer des ressources conformes au standard MPEG-7. La raison
de ce choix est que le standard MPEG-7 a été créé en 2002 et ne permet pas de bénéficier
des dernières avancées en matière de descripteurs de données multimédias. Cette repré-
sentation générique permet d’associer à chaque ressource des “annotations” représentant le
contenu extrait de la ressource (appelés “morceaux de connaissance”). Les services disponibles
se divisent en services d’acquisition, services de traitements, services de diffusion.
Le modèle de données commun permet à ces différents services de pouvoir être composés
pour créer des flux de traitements, de l’acquisition à la diffusion du contenu multimédia.
Cette solution se veut très flexible et générique mais ne propose pas de solution en ce qui
concerne la répartition des traitements.
Le système SAPIR (Search in Audio-visual content using Peer-to-peer Information Retrieval)
[KMGS09] est un moteur de recherche multimédia implémentant la recherche par
l’exemple : il permet de retrouver à partir d’un document multimédia donné les documents
similaires ou correspondant au même objet. SAPIR utilise le Framework apache
UIMA [UIM12] pour extraire les informations nécessaires à l’indexation des données au
format MPEG-7. Les flux ou fichiers reçus sont divisés en plusieurs modalités : par exemple
dans le cas d’une vidéo, un “splitter” va séparer le son des images, puis la suite d’images
va aussi être divisée en plusieurs images fixes, chaque élément ainsi extrait va ensuite être
traité par un composant spécifique pour le son, la vidéo et les images le tout de façon parallèle.
Une fois la phase d’extraction terminée le fichier d’origine est recomposé en intégrant
les métadonnées qui en ont été extraites. UIMA rend la main au système SAPIR, qui va
insérer le fichier dans son index distribué en se basant sur les métadonnées que le fichier
contient désormais. Le système utilise une architecture pair à pair pour stocker et répartir
les données.
Cependant, la dépendance du système SAPIR avec Apache UIMA, destine ce système
uniquement à la recherche de contenus multimédia plutôt qu’à servir de plateforme de
traitement de données multimédia générique.
Traitement en parallèle : Single Instruction Multiple Data (SIMD) Ce type de
système sépare les étapes du traitement dans le temps plutôt que dans l’espace : à chaque
étape du traitement, toutes les machines du système exécutent le même traitement en
parallèle sur l’ensemble des données. C’est le modèle suivi par le framework Hadoop [Whi09]
pour traiter les données. Ce framework générique de traitement de données massives en
parallèle est utilisé dans plusieurs systèmes de traitements de données multimédia. Par
exemple, les travaux menés dans le cadre du projet RanKloud [Can11] utilisent et étendent
le framework HADOOP [Whi09] pour répondre à des requêtes de classement, ainsi que
pour permettre des jointures dans les requêtes portant sur plusieurs ensembles de données.
Pour cela le système échantillonne les données et évalue statistiquement leur répartition
pour optimiser certaines procédures (notamment les opérations de jointures). Un index des
données locales est construit sur chaque machine du système, permettant à chaque nœud
de résoudre les requêtes localement et d’envoyer les résultats à une seconde catégorie de
nœuds qui assureront la finalisation des requêtes. Cependant ce système est conçu pour
analyser une grande quantité de données et non pas pour répondre à des requêtes en temps
réel, et se destine plutôt à des requêtes de recherche de contenus.
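Pour illustrer ce modèle SIMD, l’esquisse suivante reprend le schéma classique MapReduce de Hadoop : chaque machine applique la même fonction map sur sa partie des données (ici, un comptage fictif d’étiquettes de détection), puis les résultats partiels sont agrégés par la fonction reduce. Il s’agit d’un exemple générique, et non du code des systèmes cités.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Esquisse MapReduce : même traitement appliqué en parallèle à toutes les
// données (modèle SIMD), ici un comptage fictif par étiquette de détection.
public class ComptageDetections {

    public static class DetectionMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable UN = new IntWritable(1);

        @Override
        protected void map(LongWritable cle, Text ligne, Context contexte)
                throws IOException, InterruptedException {
            // Chaque ligne d'entrée est supposée contenir une étiquette
            // produite par un algorithme de détection (ex. "insecte").
            contexte.write(new Text(ligne.toString().trim()), UN);
        }
    }

    public static class DetectionReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text etiquette, Iterable<IntWritable> valeurs,
                              Context contexte)
                throws IOException, InterruptedException {
            int somme = 0;
            for (IntWritable v : valeurs) somme += v.get();
            contexte.write(etiquette, new IntWritable(somme));
        }
    }
}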
Le projet Image Terrier [HSD11] est un système d’indexation et de recherche d’images
et de vidéos fondé sur le système de recherche textuelle Terrier 6. Ce système utilise lui
aussi la technique de distribution des traitements fournie par le canevas HADOOP afin de
construire incrémentalement l’index des images. Ce système présente l’avantage d’être très
ouvert, mais il est uniquement destiné à la recherche de contenus.
Limites des outils existants Ces deux types de répartition ne sont pas exclusifs
et les deux approches peuvent être combinées de manière à augmenter l’efficacité d’un
système. Il est ainsi possible de créer des flux de travaux enchaînant différentes étapes de
traitement ou les exécutant en parallèle. La principale limite des outils existants réside
dans le fait que les traitements répartis font partie d’une bibliothèque logicielle utilisant
un canevas donné, comme ImageTerrier [HSD11] et Hadoop. Or, dans le projet MCube,
les traitements à répartir sont codés et compilés sans suivre d’API ou de canevas logiciel
particulier. Il est aussi parfois nécessaire d’exécuter les traitements à la demande au fur
et à mesure de l’arrivée des données. Dans ce contexte, il apparaît donc nécessaire de faire
appel à des plateformes de plus haut niveau permettant d’adapter les traitements à leur
contexte d’exécution.
2.2.3 Plateformes génériques de traitements de données multimédias
La recherche en matière de traitement des données multimédia se concentre aussi pour
une large part sur la conception de nouveaux algorithmes d’extraction d’informations. Le
système doit fournir un catalogue de services permettant de procéder aux extractions de
données et aux analyses voulues par l’utilisateur. Cette librairie doit être extensible afin de
permettre aux utilisateurs d’y implanter leurs propres analyses. L’ouverture d’un système
se mesure à sa disponibilité, et à sa capacité à s’interfacer avec d’autres systèmes de manière
à pouvoir être utilisé dans différents contextes. Plusieurs systèmes de stockage évoqués dans
ce chapitre sont décrits dans la littérature mais ne sont pas disponibles pour évaluation,
du fait du peu de cas d’utilisations qu’ils permettent de résoudre. Par exemple le système
Haystack développé par Facebook ne fait pas l’objet d’un développement à l’extérieur
de l’entreprise. Le projet RanKloud bien que décrit dans diverses publications, n’est pas
6. http://terrier.org/, consulté le 5 octobre 2013
202.2. COMPOSANTS DES PLATEFORMES DE STOCKAGE DE DONNÉES
MULTIMÉDIA
publiquement disponible. De même, peu de projets sortent du simple cadre du prototype
de recherche, comme par exemple le projet LINDO [BCMS11].
The LINDO project [BCMS11] aims at developing a generic multimedia data processing infrastructure that distributes both processing and data, while ensuring interoperability between different multimedia data processing systems. It led to a generic, distributed indexing infrastructure. To this end, a representative data model was designed for the multimedia data and for the algorithms that index them. The data are spread over a distributed file system, and the system selects the most appropriate indexing algorithm according to the queries most frequently submitted by the user. The system was applied to video surveillance, to detect events in video streams. It is a multi-user platform that can be used in several domains at once, and it can represent data in the MPEG-7 or MPEG-21 formats. However, information is extracted only upon an explicit user request (through a query), and all extractions go through a central server that controls the machines indexing the content. This design is therefore likely to limit the elasticity of the solution. The elasticity of a system is its ability to adapt to resources dynamically appearing or disappearing. This notion is particularly important in the context of Cloud Computing platforms, where resources can be allocated and released quickly.
The WebLab project is an open-source project 7 built on the same principle of interfaces and data exchange models as the LINDO project. WebLab does not model the processing tasks themselves, only the format of the data exchanged between heterogeneous processing platforms. The system relies on the Petals service bus 8 for its communications, a component that appears to suffer from performance problems, as shown by the results of several successive benchmarks 9.
These two systems therefore act as middleware for communication between different processing platforms and allow the aggregation of different sites or grids specialized in certain processing steps.
7. http://weblab-project.org/, accessed October 5, 2013
8. http://petals.ow2.org/, accessed October 5, 2013
9. http://esbperformance.org/display/comparison/ESB+Performance+Testing+-+Round+6, accessed October 5, 2013
The aim of the MCube project is, on the contrary, to gather the data of its different users in order to pool the processing platform and achieve economies of scale in storage and computing capacity. The project sits at an intermediate level between low-level data processing systems (ImageTerrier, Hadoop, ...) and federation platforms such as LINDO and WebLab. It is therefore a multi-tenant system, since it serves several clients by pooling its resources.
2.3 Analysis
The multimedia storage and processing systems described here have various architectures. Architectures based on traditional databases are gradually giving way to systems built on technologies coming from large-scale Web platforms, such as the Hadoop framework [Whi09]. Although these solutions are being adopted more and more widely, there is currently no alternative that can exploit the different services of these platforms without requiring additional development to adapt them to the target environment. Moreover, apart from initiatives such as LINDO, WebLab or ImageTerrier, few projects go beyond the research-prototype stage. For the most part, these projects serve as evaluation platforms for knowledge extraction and indexing algorithms, or are oriented towards content retrieval. Table 2.1 summarizes the properties of each of the systems listed in the previous sections, with respect to the challenges described in Section 2.1.2 of this chapter.
This review of the various multimedia data storage and processing platforms shows that no current project fulfils the requirements of the MCube project. There are, however, generic data processing platforms that are not specialized for multimedia data, such as the Hadoop framework, and that can therefore serve as a basis for the MCube platform.
Table 2.1 – Properties of multimedia data processing systems

                       Storage                                Processing
System         Data         Descriptors  Information   Descriptors  Indexing     Information   Pooling   Openness   Elasticity
MPEG-7 DB      yes          yes          no            yes          yes          no            yes       no         no
AIR            distributed  distributed  no            yes          distributed  no            yes       no         limited
LINDO          distributed  distributed  no            distributed  distributed  yes           yes       limited    limited
WebLab         distributed  distributed  no            no           no           no            yes       limited    limited
SAPIR          distributed  distributed  no            yes          yes          no            yes       limited    yes
IrisNet        distributed  distributed  distributed   yes          distributed  yes           yes       yes        yes
[CSC09]        distributed  distributed  no            yes          distributed  no            yes       no         yes
RanKloud       distributed  distributed  no            distributed  distributed  no            yes       no         yes
Haystack       distributed  distributed  distributed   distributed  distributed  yes           no        no         yes
ImageTerrier   distributed  distributed  no            distributed  distributed  no            no        yes        yes
2.4 Architecture of the MCube platform
2.4.1 Hardware architecture
MCube gateways. The MCube gateways are devices developed by the company Webdyn 10. Each gateway is an embedded system based on Linux and an ARM processor. Having a complete operating system makes it possible to connect various multimedia acquisition devices such as cameras or microphones. The gateway runs a task scheduler that can be configured remotely by receiving commands from the MCube servers. The role of a gateway is to execute, at regular intervals configured by the user, a program called a "gateway library" that is currently deployed on the system. The gateways are equipped with an Ethernet port and a 3G modem, allowing them to connect to the Internet to communicate with the MCube server.
MCube servers. The MCube servers are three Dell machines, each equipped with two Intel Xeon processors of 8 cores each. The large storage space on these machines (8 terabytes per server) anticipates the amount of data to be stored within the MCube project, whose data collections can in some cases represent several hundred gigabytes per month.
10. http://www.webdyn.com, accessed October 5, 2013
2.4.2 Software architecture
The software architecture of the MCube project was designed so that users can themselves develop the applications deployed on the gateways and on the platform. To that end, a programming interface was designed by the gateway manufacturer, the company WebDyn 11, which separates the data acquisition tasks from their scheduling. The components developed against this interface are called "gateway libraries". Gateway libraries are dynamic libraries implementing the functions of the MCube programming interface, which makes it possible to deploy them on demand from the project's web platform. These libraries are developed to fulfil two main roles, illustrated by the sketch after this list:
– Data acquisition: this is the essential function of these libraries; it consists in driving the various multimedia capture devices connected to the gateway in order to acquire raw data: photos, sounds, videos.
– Data filtering: in some cases the captured data can be voluminous. For example, one possible use case of the MCube service is the detection of harmful insects from sound recordings. In that particular case, the recorded data are sent to the server only when a suspicious signal is detected in the recording. A more precise processing, executed on demand on the server, then confirms or invalidates the detection.
11. http://www.webdyn.com/, accessed October 5, 2013
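Purely as an illustration of these two roles, a gateway library could expose an interface along the lines of the following Java sketch; the interface name and method signatures are hypothetical and do not reproduce the actual WebDyn/MCube programming interface.

// Hypothetical sketch of a gateway library; only the two roles described
// above are modelled, not the real MCube/WebDyn API.
import java.io.File;
import java.util.List;

public interface GatewayLibrary {

  /** Drives the capture devices attached to the gateway and returns the raw files. */
  List<File> acquire();

  /** Decides whether a captured file is worth uploading (e.g. a suspicious sound). */
  boolean shouldUpload(File capturedFile);
}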
The communication between the gateways and the servers relies on an HTTP/REST protocol that allows the gateways:
– to send collected data to the MCube platform through the web server,
– to receive a new library,
– to receive commands to be forwarded to the library deployed on the gateway.
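A minimal sketch of the first of these interactions is given below; the endpoint path and the absence of authentication are assumptions made for illustration only, not the actual MCube protocol.

// Hypothetical data upload from a gateway over HTTP/REST; the endpoint path
// is an assumption, not the platform's documented API.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class GatewayUploader {
  public static void main(String[] args) throws Exception {
    byte[] payload = Files.readAllBytes(Paths.get(args[0]));          // file captured by the library
    URL url = new URL("https://mcube.isep.fr/api/gateways/" + args[1] + "/data"); // hypothetical path
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/octet-stream");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload);
    }
    System.out.println("Server answered HTTP " + conn.getResponseCode());
  }
}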
2.4.3 Architecture of the MCube platform
The MCube platform manages the equipment and aggregates information. It handles the communication with the deployed gateways and lets users configure them through a web interface. The platform gives users access to the libraries and data analysis programs they have developed. The data transferred by the gateways are stored on the platform servers, and a web interface allows users to launch or schedule the analyses they want to run on the data downloaded from the gateways. The final architecture of the MCube platform relies on three software components:
[Figure: users and gateways communicate with the MCube web service (mcube.isep.fr); gateways send their data and receive their configuration, users consult data and submit processing requests, and Cloudizer forwards the processing requests to the storage/processing nodes running HDFS.]
Figure 2.3 – Architecture of the MCube platform
The web interface This is the user interface of the system. It lets the farmers manage their gateways and access their data, either to retrieve them in the format they wish or to launch and schedule the processing to be performed. The interface also lets users upload new data processing programs.
The Hadoop framework Hadoop stores the data in a distributed way through the HDFS distributed file system [SKRC10]. It provides a redundant file system that can accommodate a large number of machines. Moreover, the data stored this way can be accessed with higher-level languages such as the HQL query language (Hive Query Language) [TSJ+09].
The Cloudizer framework Cloudizer is a software framework developed during this thesis that distributes web-service requests over a set of machines. It is used to process the files sent by the gateways as they arrive in the system; a simplified illustration of this idea is given after this list.
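Cloudizer itself is detailed in the next chapter. Purely to illustrate the idea of distributing service requests over several machines, a naive round-robin dispatcher could look as follows; the class and node URLs are hypothetical and this is not Cloudizer's actual implementation.

// Naive illustration of spreading processing requests over several nodes;
// NOT the Cloudizer implementation, only the general idea.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinDispatcher {
  private final List<String> nodeUrls;           // e.g. "http://node1:8080/process" (hypothetical)
  private final AtomicInteger next = new AtomicInteger(0);

  public RoundRobinDispatcher(List<String> nodeUrls) {
    this.nodeUrls = nodeUrls;
  }

  /** Returns the node that should handle the next incoming processing request. */
  public String pickNode() {
    int i = (next.getAndIncrement() & Integer.MAX_VALUE) % nodeUrls.size();
    return nodeUrls.get(i);
  }
}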
Figure 2.3 shows how the different components of the platform fit together. The configuration data, the user information and the gateways' contact details are stored in a traditional SQL database (MySQL 12). The data uploaded by the gateways, as well as the libraries and analysis programs, are stored in HDFS [SKRC10].
12. www.mysql.com, accessed October 5, 2013
Each user has a dedicated directory on the distributed file system. This directory contains the files received from the gateways, organized by the name of the library that produced them. The results of the processing performed on these data are stored in a dedicated directory, organized by treatment name and execution date. This lets users find their data in an intuitive way, as sketched below.
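The following sketch shows how such paths could be built; the directory names are illustrative assumptions, not the platform's exact naming convention.

// Hypothetical HDFS layout for a user's gateway data and processing results.
import org.apache.hadoop.fs.Path;

public class UserLayout {
  /** Raw files received from a gateway, grouped by the library that produced them. */
  public static Path dataDir(String user, String libraryName) {
    return new Path("/mcube/users/" + user + "/data/" + libraryName);
  }

  /** Results of an analysis, grouped by treatment name and execution date. */
  public static Path resultDir(String user, String treatment, String executionDate) {
    return new Path("/mcube/users/" + user + "/results/" + treatment + "/" + executionDate);
  }
}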
Users can choose between two data processing modes: an event-driven mode and a scheduled mode. When the event-driven mode is selected, the user defines the type of event that triggers the desired processing: it can be, for example, the arrival of a new file or the crossing of a given storage volume threshold. The processing is then executed on the data designated by the user. The scheduled mode corresponds to a regular execution of the processing at a fixed time. The triggering of the processing is managed by the Cloudizer platform described in the next chapter.
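As a rough sketch of these two triggering modes (a plain ScheduledExecutorService, not the scheduler actually used by Cloudizer), the logic amounts to something like this:

// Simplified illustration of the scheduled and event-driven modes;
// not the triggering mechanism implemented by the Cloudizer platform.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TreatmentTrigger {
  private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

  /** Scheduled mode: run the treatment periodically at a fixed rate. */
  public void schedule(Runnable treatment, long periodHours) {
    timer.scheduleAtFixedRate(treatment, 0, periodHours, TimeUnit.HOURS);
  }

  /** Event-driven mode: run the treatment when a new-file notification arrives. */
  public void onNewFile(Runnable treatment, String fileName) {
    // In the real platform the event comes from the upload web service;
    // here the treatment is simply run immediately.
    treatment.run();
  }
}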
2.4.4 Description of the multimedia data processing algorithms
Users of the platform can upload new multimedia data processing algorithms. For the system to be able to use such an algorithm, a description of it must be provided so that it can be stored in the database. The term "algorithm" is used here, by abuse of language, for the concrete implementation of a given algorithm, compiled as an executable program. A model for describing these algorithms was therefore designed; it is shown in Figure 2.4. This schema is inspired by the work on component adaptation of [AGP+08, BCMS11] and [GBB+08].
Figure 2.4 – Algorithm description model
A program and its set of files are described by the "Algorithm" class. This class contains a list specifying the executable and configuration files required to run the program. A program is generally invoked with parameters supplied on its command line; the parameter list (the "Parameter" class) describes the parameters to pass to the executable, together with their respective positions and prefixes when needed. The "OutputFormat" class describes the type of data produced by the execution of the program. For example, selecting the "TextOutput" format means that the program writes tabular text data to its standard output, separated by a specific character. Other formats are available and can be added to the library.
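Translated into code, the model of Figure 2.4 amounts roughly to the following sketch; the field names follow the prose above, accessors are omitted, and the output formats shown are illustrative only.

// Simplified sketch of the algorithm description model of Figure 2.4.
import java.util.List;

class Algorithm {
  String name;                 // e.g. "tomate_measurement"
  String version;
  String author;
  String description;
  List<String> files;          // executable and configuration files (e.g. a .jar)
  List<Parameter> parameters;  // parameters expected by the executable
  OutputFormat outputFormat;   // how the program's output must be interpreted
}

class Parameter {
  String name;
  int position;                // position on the command line
  String prefix;               // optional prefix placed before the value
}

enum OutputFormat {
  TEXT_OUTPUT                  // tabular text on standard output, with a field separator
  // other formats can be added to the library
}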
Example of an algorithm model: computing tomato sizes Listing 2.1 shows an example of an algorithm model for a program developed within the MCube project. In this use case, an experimental setup was built by the members of the ISEP signal processing team to photograph a tomato plant from two different angles. Thanks to this offset and to a precise calibration of the setup, the size of the tomatoes detected in the photos can be estimated, provided their positions are known. The algorithm takes as input two lists of points, corresponding respectively to the right and left contours of a tomato on the photographed plant, and returns an estimate of the size of the fruit.
Listing 2.1 – Example instance of the algorithm model
<algorithm name="tomate_measurement" version="1.0">
  <author>Ujjwal Verma</author>
  <description>Detection de visages, methode de Viola-Jones, librairie OPEN-CV</description>
  <files>
    <file>tomate_measurement.jar</file>
  </files>
  <parameters>
    <parameter>leftImageContours</parameter>
    <parameter>rightImageContours</parameter>
  </parameters>
</algorithm>