| List of Figures | p. xi |
| List of Tables | p. xv |
| Series Foreword | p. xvii |
| Preface | p. xix |
| Contributing Authors | p. xxi |
| Bridging the Semantic Gap in Content Management Systems | p. 1 |
| Computational Media Aesthetics | p. 3 |
| Primitive Feature Extraction | p. 4 |
| Higher Order Semantic Construct Extraction | p. 5 |
| What is This Book About? | p. 5 |
| References | p. 9 |
| Essentials of Applied Media Aesthetics | p. 11 |
| Applied Media Aesthetics: Definition and Method | p. 12 |
| Contextual Fields | p. 13 |
| The First Aesthetic Field: Light | p. 13 |
| Attached and Cast Shadows | p. 14 |
| Above- and Below-Eye-Level Lighting | p. 15 |
| Falloff | p. 17 |
| The Extended First Aesthetic Field: Color | p. 18 |
| Informational Function | p. 18 |
| Screen Balance | p. 18 |
| Expressive Function | p. 19 |
| Desaturation Theory | p. 19 |
| The Two-Dimensional Field: Area | p. 19 |
| Aspect Ratio | p. 20 |
| Screen Size | p. 20 |
| Field of View | p. 21 |
| Asymmetry of the Screen | p. 21 |
| Psychological Closure | p. 23 |
| Vector Fields | p. 24 |
| The Three-Dimensional Field: Depth and Volume | p. 25 |
| Graphic Depth Factors | p. 25 |
| Z-Axis Articulation and Lenses | p. 26 |
| Z-Axis Blocking | p. 27 |
| The Four-Dimensional Field: Time-Motion | p. 28 |
| Ontological Difference | p. 28 |
| Time in Television and Film Presentations | p. 29 |
| Editing and Mental Maps | p. 30 |
| The Five-Dimensional Field: Sound | p. 32 |
| Literal Sounds | p. 32 |
| Nonliteral Sounds | p. 33 |
| Structural Matching | p. 34 |
| Summary and Conclusion | p. 34 |
| References | p. 37 |
| Space-Time Mappings as Database Browsing Tools | p. 39 |
| The Need to Segment and the Narrative Map | p. 40 |
| The Shortcomings of Common Database Search Practices as They Apply to Moving Image Databases | p. 41 |
| The Cartesian Grid as the Spatio-Temporal Mapping for Browsing | p. 42 |
| From the Frame to the Shot | p. 42 |
| Self-Generating Segmentations | p. 43 |
| Beyond Shots | p. 44 |
| Embedded Linkages and Taggability | p. 45 |
| Alternatives to the Shot | p. 45 |
| Conclusion--Generalizing the Notion of Segmentation | p. 47 |
| References | p. 55 |
| Formulating Film Tempo | p. 57 |
| The Need for a Framework: Computational Media Aesthetics | p. 59 |
| A Short History of Automatic Content Management | p. 59 |
| Approaches to Film Content Management | p. 61 |
| The Solution: The Framework of Film Grammar | p. 63 |
| What is Film Grammar? | p. 63 |
| How do We Use Film Grammar? | p. 64 |
| Using the Framework: Extracting and Analyzing Film Tempo | p. 66 |
| What is Tempo? | p. 67 |
| Manipulation of Tempo | p. 68 |
| Computational Aspects of Tempo | p. 69 |
| Extracting the Components of Tempo | p. 69 |
| Formulating Tempo | p. 70 |
| The Tempo Function | p. 72 |
| An Example from the Movie, The Matrix | p. 74 |
| Building on the Tempo Function | p. 75 |
| Conclusion | p. 78 |
| References | p. 81 |
| Modeling Color Dynamics for the Semantics of Commercials | p. 85 |
| Semantics of Color and Motion in Commercials | p. 87 |
| Modeling Arrangements of Entities Extended over Time and Space | p. 89 |
| Absolute Dynamics of a Single Entity | p. 89 |
| Properties and Derivation | p. 91 |
| Reference Points | p. 92 |
| Relative Dynamics of Two Entities | p. 93 |
| Properties and Derivation | p. 94 |
| Distance Based on 3D Weighted Walkthroughs | p. 94 |
| Extraction and Representation of Color Dynamics | p. 95 |
| Color Flow Extraction | p. 95 |
| Color Flow Description | p. 97 |
| Video Retrieval by Color Dynamics | p. 97 |
| Similarity Assessment | p. 98 |
| Evaluating Absolute Dynamics | p. 99 |
| Evaluating Relative Dynamics | p. 101 |
| Conclusion | p. 102 |
| References | p. 103 |
| Scene Determination Using Auditive Segmentation | p. 105 |
| The Meta-model Framework | p. 106 |
| Audio Editing Practices for Scenes | p. 110 |
| Automatic Extraction of Auditive Scenes | p. 114 |
| Scenes Created by Narration | p. 114 |
| Scenes Created by Editing | p. 115 |
| Top-down Approach | p. 115 |
| Bottom-up Approach | p. 118 |
| Implemented Approaches | p. 119 |
| Scenes Determined by Linguistic Analysis | p. 119 |
| Scenes Determined by Sound Classification | p. 121 |
| Scenes Determined by Feature Patterns | p. 122 |
| Conclusion | p. 123 |
| References | p. 125 |
| Determining Affective Events Through Film Audio | p. 131 |
| Sound in Film | p. 133 |
| Sound Energy | p. 134 |
| Matching the Visual Event via Sound Energy | p. 135 |
| Heighten Anticipation | p. 135 |
| Reinforce Dramatic Event | p. 136 |
| Predictive Reinforcing Syncopation | p. 136 |
| Counterpoint via Sound Energy | p. 137 |
| Computing Affective Events in Motion Pictures | p. 137 |
| Sound Energy Events | p. 138 |
| Sound Energy Envelope Characteristics | p. 138 |
| Sound Energy Event Composition and Affect | p. 139 |
| Sound Energy Patterns without Affect | p. 141 |
| Location and Semantics of Sound Energy Events | p. 142 |
| Sound Energy Event Occurrence Classification | p. 142 |
| Intra Sound Energy Pattern and Affect | p. 142 |
| Experimental Data | p. 143 |
| Data Processing | p. 143 |
| Sound Energy Event Detection Algorithm | p. 144 |
| Computing Sound Energy Dynamics | p. 144 |
| Detecting Sound Energy Events | p. 146 |
| Experimental Results | p. 147 |
| Accuracy of Event Detection | p. 147 |
| Accuracy of Affect Detection | p. 148 |
| Data Support for Affect Events | p. 150 |
| Discussion of Errors | p. 152 |
| Conclusion | p. 153 |
| References | p. 157 |
| The Future of Media Computing | p. 159 |
| The Structure of a Semantic and Semiotic Continuum | p. 162 |
| General Concepts | p. 162 |
| Nodes | p. 163 |
| Relations and Anchors | p. 164 |
| Problems | p. 166 |
| Media Production | p. 167 |
| Digital Production--Environment and Tools | p. 168 |
| Preproduction | p. 169 |
| Production | p. 171 |
| Postproduction | p. 175 |
| Encyclopaedic Spaces | p. 180 |
| Information Space Editing Environment (ISEE) | p. 182 |
| Dynamic Presentation Environment (DPE) | p. 184 |
| Conclusion | p. 186 |
| References | p. 189 |
| Index | p. 197 |