| About the Cover | p. xiii |
| Preface | p. xv |
| Why This Book Is Important | p. xvi |
| What This Book Is About | p. xvi |
| What Prior Knowledge You Should Have | p. xviii |
| Reading Paths | p. xviii |
| Why SystemVerilog? | p. xix |
| VHDL and Verilog | p. xix |
| Hardware Verification Languages | p. xx |
| Code Examples | p. xxi |
| For More Information | p. xxii |
| Acknowledgements | p. xxii |
| What is Verification? | p. 1 |
| What is a Testbench? | p. 1 |
| The Importance of Verification | p. 2 |
| Reconvergence Model | p. 4 |
| The Human Factor | p. 5 |
| Automation | p. 6 |
| Poka-Yoke | p. 6 |
| Redundancy | p. 7 |
| What Is Being Verified? | p. 7 |
| Equivalence Checking | p. 8 |
| Property Checking | p. 9 |
| Functional Verification | p. 10 |
| Functional Verification Approaches | p. 11 |
| Black-Box Verification | p. 11 |
| White-Box Verification | p. 13 |
| Grey-Box Verification | p. 14 |
| Testing Versus Verification | p. 15 |
| Scan-Based Testing | p. 16 |
| Design for Verification | p. 17 |
| Design and Verification Reuse | p. 18 |
| Reuse Is About Trust | p. 18 |
| Verification for Reuse | p. 19 |
| Verification Reuse | p. 19 |
| The Cost of Verification | p. 20 |
| Summary | p. 22 |
| Verification Technologies | p. 23 |
| Linting | p. 24 |
| The Limitations of Linting Technology | p. 25 |
| Linting SystemVerilog Source Code | p. 27 |
| Code Reviews | p. 29 |
| Simulation | p. 29 |
| Stimulus and Response | p. 30 |
| Event-Driven Simulation | p. 31 |
| Cycle-Based Simulation | p. 33 |
| Co-Simulators | p. 35 |
| Verification Intellectual Property | p. 38 |
| Waveform Viewers | p. 39 |
| Code Coverage | p. 41 |
| Statement Coverage | p. 43 |
| Path Coverage | p. 44 |
| Expression Coverage | p. 45 |
| FSM Coverage | p. 46 |
| What Does 100 Percent Code Coverage Mean? | p. 48 |
| Functional Coverage | p. 49 |
| Coverage Points | p. 51 |
| Cross Coverage | p. 53 |
| Transition Coverage | p. 53 |
| What Does 100 Percent Functional Coverage Mean? | p. 54 |
| Verification Language Technologies | p. 55 |
| Assertions | p. 57 |
| Simulated Assertions | p. 58 |
| Formal Assertion Proving | p. 59 |
| Revision Control | p. 61 |
| The Software Engineering Experience | p. 62 |
| Configuration Management | p. 63 |
| Working with Releases | p. 65 |
| Issue Tracking | p. 66 |
| What Is an Issue? | p. 67 |
| The Grapevine System | p. 68 |
| The Post-It System | p. 68 |
| The Procedural System | p. 69 |
| The Computerized System | p. 69 |
| Metrics | p. 71 |
| Code-Related Metrics | p. 71 |
| Quality-Related Metrics | p. 73 |
| Interpreting Metrics | p. 74 |
| Summary | p. 76 |
| The Verification Plan | p. 77 |
| The Role of the Verification Plan | p. 78 |
| Specifying the Verification | p. 78 |
| Defining First-Time Success | p. 79 |
| Levels of Verification | p. 80 |
| Unit-Level Verification | p. 81 |
| Block and Core Verification | p. 82 |
| ASIC and FPGA Verification | p. 84 |
| System-Level Verification | p. 84 |
| Board-Level Verification | p. 85 |
| Verification Strategies | p. 86 |
| Verifying the Response | p. 86 |
| From Specification to Features | p. 87 |
| Block-Level Features | p. 90 |
| System-Level Features | p. 91 |
| Error Types to Look For | p. 91 |
| Prioritize | p. 92 |
| Design for Verification | p. 93 |
| Directed Testbenches Approach | p. 96 |
| Group into Testcases | p. 96 |
| From Testcases to Testbenches | p. 98 |
| Verifying Testbenches | p. 99 |
| Measuring Progress | p. 100 |
| Coverage-Driven Random-Based Approach | p. 101 |
| Measuring Progress | p. 101 |
| From Features to Functional Coverage | p. 103 |
| From Features to Testbench | p. 105 |
| From Features to Generators | p. 107 |
| Directed Testcases | p. 109 |
| Summary | p. 111 |
| High-Level Modeling | p. 113 |
| High-Level versus RTL Thinking | p. 113 |
| Contrasting the Approaches | p. 115 |
| You Gotta Have Style! | p. 117 |
| A Question of Discipline | p. 117 |
| Optimize the Right Thing | p. 118 |
| Good Comments Improve Maintainability | p. 121 |
| Structure of High-Level Code | p. 122 |
| Encapsulation Hides Implementation Details | p. 122 |
| Encapsulating Useful Subprograms | p. 125 |
| Encapsulating Bus-Functional Models | p. 127 |
| Data Abstraction | p. 130 |
| 2-state Data Types | p. 131 |
| Struct, Class | p. 131 |
| Union | p. 134 |
| Arrays | p. 139 |
| Queues | p. 141 |
| Associative Arrays | p. 143 |
| Files | p. 145 |
| From High-Level to Physical-Level | p. 146 |
| Object-Oriented Programming | p. 147 |
| Classes | p. 147 |
| Inheritance | p. 153 |
| Polymorphism | p. 156 |
| The Parallel Simulation Engine | p. 159 |
| Connectivity, Time and Concurrency | p. 160 |
| The Problems with Concurrency | p. 160 |
| Emulating Parallelism on a Sequential Processor | p. 162 |
| The Simulation Cycle | p. 163 |
| Parallel vs. Sequential | p. 169 |
| Fork/Join Statement | p. 170 |
| The Difference Between Driving and Assigning | p. 173 |
| Race Conditions | p. 176 |
| Read/Write Race Conditions | p. 177 |
| Write/Write Race Conditions | p. 180 |
| Initialization Races | p. 182 |
| Guidelines for Avoiding Race Conditions | p. 183 |
| Semaphores | p. 184 |
| Portability Issues | p. 186 |
| Events from Overwritten Scheduled Values | p. 186 |
| Disabled Scheduled Values | p. 187 |
| Output Arguments on Disabled Tasks | p. 188 |
| Non-Re-Entrant Tasks | p. 188 |
| Static vs. Automatic Variables | p. 193 |
| Summary | p. 196 |
| Stimulus and Response | p. 197 |
| Reference Signals | p. 198 |
| Time Resolution Issues | p. 199 |
| Aligning Signals in Delta-Time | p. 201 |
| Clock Multipliers | p. 203 |
| Asynchronous Reference Signals | p. 205 |
| Random Generation of Reference Signal Parameters | p. 206 |
| Applying Reset | p. 208 |
| Simple Stimulus | p. 212 |
| Applying Synchronous Data Values | p. 212 |
| Abstracting Waveform Generation | p. 214 |
| Simple Output | p. 216 |
| Visual Inspection of Response | p. 217 |
| Producing Simulation Results | p. 217 |
| Minimizing Sampling | p. 219 |
| Visual Inspection of Waveforms | p. 220 |
| Self-Checking Testbenches | p. 221 |
| Input and Output Vectors | p. 221 |
| Golden Vectors | p. 222 |
| Self-Checking Operations | p. 224 |
| Complex Stimulus | p. 227 |
| Feedback Between Stimulus and Design | p. 228 |
| Recovering from Deadlocks | p. 228 |
| Asynchronous Interfaces | p. 231 |
| Bus-Functional Models | p. 234 |
| CPU Transactions | p. 234 |
| From Bus-Functional Tasks to Bus-Functional Model | p. 236 |
| Physical Interfaces | p. 238 |
| Configurable Bus-Functional Models | p. 243 |
| Response Monitors | p. 246 |
| Autonomous Monitors | p. 249 |
| Slave Generators | p. 253 |
| Multiple Possible Transactions | p. 255 |
| Transaction-Level Interface | p. 258 |
| Procedural Interface vs Dataflow Interface | p. 259 |
| What is a Transaction? | p. 263 |
| Blocking Transactions | p. 265 |
| Nonblocking Transactions | p. 265 |
| Split Transactions | p. 267 |
| Exceptions | p. 270 |
| Summary | p. 278 |
| Architecting Testbenches | p. 279 |
| Verification Harness | p. 280 |
| Design Configuration | p. 284 |
| Abstracting Design Configuration | p. 285 |
| Configuring the Design | p. 288 |
| Random Design Configuration | p. 290 |
| Self-Checking Testbenches | p. 292 |
| Hard Coded Response | p. 294 |
| Data Tagging | p. 295 |
| Reference Models | p. 297 |
| Transfer Function | p. 299 |
| Scoreboarding | p. 300 |
| Integration with the Transaction Layer | p. 302 |
| Directed Stimulus | p. 304 |
| Random Stimulus | p. 307 |
| Atomic Generation | p. 307 |
| Adding Constraints | p. 312 |
| Constraining Sequences | p. 316 |
| Defining Random Scenarios | p. 320 |
| Defining Procedural Scenarios | p. 322 |
| System-Level Verification Harnesses | p. 327 |
| Layered Bus-Functional Models | p. 328 |
| Summary | p. 331 |
| Simulation Management | p. 333 |
| Transaction-Level Models | p. 333 |
| Transaction-Level versus Synthesizable Models | p. 334 |
| Example of Transaction-Level Modeling | p. 335 |
| Characteristics of a Transaction-Level Model | p. 337 |
| Modeling Reset | p. 341 |
| Writing Good Transaction-Level Models | p. 342 |
| Transaction-Level Models Are Faster | p. 347 |
| The Cost of Transaction-Level Models | p. 348 |
| The Benefits of Transaction-Level Models | p. 349 |
| Demonstrating Equivalence | p. 351 |
| Pass or Fail? | p. 352 |
| Managing Simulations | p. 355 |
| Configuration Management | p. 355 |
| Avoiding Recompilation or SDF Re-Annotation | p. 358 |
| Output File Management | p. 361 |
| Seed Management | p. 364 |
| Regression | p. 365 |
| Running Regressions | p. 366 |
| Regression Management | p. 367 |
| Summary | p. 370 |
| Coding Guidelines | p. 371 |
| File Structure | p. 372 |
| Filenames | p. 375 |
| Style Guidelines | p. 376 |
| Comments | p. 376 |
| Layout | p. 378 |
| Structure | p. 380 |
| Debugging | p. 383 |
| Naming Guidelines | p. 384 |
| Capitalization | p. 384 |
| Identifiers | p. 386 |
| Constants | p. 389 |
| Portability Guidelines | p. 391 |
| Glossary | p. 397 |
| Index | p. 401 |