A Practical Introduction to Real-time Systems for Undergraduate Engineering
Douglas Wilhelm Harder
Description

This course begins by introducing the requirements and constraints of real-time and embedded systems, together with examples. Next, we consider various programming paradigms and the characteristics appropriate to embedded and real-time programming languages, and we introduce the C programming language, contrasting it with C++. We continue by describing the organization of a computer, including Turing machines, register machines, main memory, processor architecture and operating systems. From there, we describe static memory allocation and the call stack, and then consider dynamic memory allocation, including numerous variable-sized-block strategies, their suitability for real-time systems and various implementations in FreeRTOS. We also consider automatic memory allocation, including garbage collection. Following this, we discuss threads and tasks running in parallel, looking at examples of sorting in parallel and at the data structures necessary to maintain threads. We then consider scheduling threads: first multiprogramming with non-preemptive scheduling and the concept of context switching, and then multitasking with preemptive scheduling, focusing on real-time schedulers such as earliest-deadline-first and rate-monotonic scheduling. Next, we consider communication between other devices and the processor, the concept of interrupts and interrupt service routines, and the impact of interrupts on real-time systems. We then turn to synchronization between cooperating threads and tasks, including both serialization and mutual exclusion. We describe semaphores, consider a number of classic synchronization problems and their solutions using semaphores, and also consider other means of automatic synchronization.
With this in hand, we consider the application of semaphores, and of synchronization in general, to resource management, looking specifically at priority inversion. The most serious issue, however, is deadlock, where tasks and threads holding resources become mutually blocked on subsequent requests; we examine how to avoid this. Next, inter-process communication is discussed, together with how synchronization can be achieved through messaging. We then consider fault tolerance, specifically error detection and correction, the synchronization of clocks, and fault-tolerant message passing. Having considered all this, we look at how resource management can be centralized in an operating system and protected through kernel modes and kernel space. We then consider software simulation, including client-server models and distributions, followed by software verification, including a look at propositional logic, predicate logic, linear temporal logic, computation tree logic and model checkers. We conclude the course by considering issues of data management and file management, virtual memory and caching, digital signal processing and digital control theory, finishing with an introduction to security and a look at what lies ahead.

Language: English
ISBN: Unknown
Preface
1 Introduction to real-time systems
1.1 What is a real-time system?
1.1.1 What is time?
1.1.2 What are embedded systems?
1.2 Case study: anti-lock braking system
1.3 Components of real-time systems
1.3.1 The environment
1.3.2 Real-time hardware
1.3.3 Real-time software
1.3.3.1 Therac-25
1.3.3.2 The Mars rover “Spirit”
1.3.3.3 Logic expression simplification
1.3.3.4 Summary of real-time software and race conditions
1.3.4 Summary of the components of real-time systems
1.4 The history of real-time programming
1.5 Topic summary
Problem set
2 Real-time, embedded and operating-system programming languages
2.1 Programming languages
2.1.1 Programming paradigms
2.1.1.1 Structured programming
2.1.1.2 Procedural programming
2.1.1.3 Data abstraction
2.1.1.4 Object-oriented programming
2.1.1.5 Design patterns
2.1.1.6 Summary of programming paradigms
2.1.2 Ideal characteristics of programming languages
2.1.2.1 Characteristics of a real-time programming language
2.1.2.2 Characteristics of an embedded systems programming language
2.1.2.3 Characteristics of an operating system programming language
2.1.2.4 Summary of ideal characteristics
2.1.3 Other software programming techniques
2.1.4 Summary of real-time programming languages
2.2 The C programming language
2.2.1 No class, just structure…
2.2.2 More than the sum of its parts
2.2.3 Generics in C
2.2.4 Static and dynamic memory allocation
2.2.5 Pass-by-reference in C
2.2.6 An object-oriented approach in C
2.2.7 Header files
2.2.8 The pre-processor
2.2.9 Bit-wise operations
2.2.10 Bit-fields in C99
2.2.11 µVision4 specifics
2.2.12 Comments
2.2.13 Further help
2.3 Summary of real-time programming
Problem set
3 Computer organization
3.1 The Turing machine
3.2 Register machines
3.2.1 Instructions versus data
3.2.2 Word size
3.2.3 The registers in the LPC1768
3.2.3.1 General-purpose registers
3.2.3.2 Special-purpose registers
3.2.3.3 Summary of the LPC1768
3.2.4 Summary of register machines
3.3 Main memory
3.3.1 Addressing
3.3.2 Byte order
3.3.3 Accessing memory
3.3.4 Buses
3.3.5 Memory allocation
3.3.6 Summary of main memory
3.4 Processor architecture
3.4.1 Microprocessors and microcontrollers
3.4.1.1 Microprocessor
3.4.1.2 Microcontrollers
3.4.1.3 What is firmware?
3.4.2 Harvard architecture
3.4.3 von Neumann architecture
3.4.4 The Cortex-M3 architecture
3.4.5 Architecture summary
3.5 Operating systems
3.5.1 Why do we need an operating system?
3.5.2 Uses of operating systems
3.5.3 Linux, POSIX and the Keil RTX real-time operating system
3.5.4 Real-time operating systems
3.6 Computer organization summary
Problem set
4 Static memory allocation
4.1 The requirements of a function
4.2 The Cortex-M3 design
4.3 Set jump and long jump
4.4 Summary of static memory allocation
Problem set
5 Dynamic memory allocation
5.1 Abstract dynamic memory allocator
5.1.1 Manual allocation management
5.1.1.1 Wild pointers
5.1.1.2 Dangling pointers
5.1.1.3 Freeing the same location more than once
5.1.1.4 Memory leaks
5.1.1.5 Summary of issues with memory deallocation
5.1.2 Automatic allocation management
5.1.2.1 Automatic initialization
5.1.2.2 Garbage collection
5.1.2.2.1 Garbage collection algorithms
5.1.2.2.1.1 Reference counting
5.1.2.2.1.2 Tracing algorithms
5.1.2.2.1.3 Garbage collection in C
5.1.2.2.1.4 Summary of garbage collection algorithms
5.1.2.2.2 Issues with garbage collection
5.1.2.2.3 Summary of garbage collection
5.1.2.3 Summary of automatic allocation
5.1.3 Summary of abstract dynamic memory allocation
5.2 Allocation strategies
5.2.1 Fixed block sizes
5.2.1.1 One size of blocks
5.2.1.2 Fixed size blocks
5.2.1.3 Internal fragmentation
5.2.1.4 Summary for fixed block sizes
5.2.2 Variable-sized-block strategies
5.2.2.1 Bitmaps
5.2.2.2 Linked lists
5.2.2.3 External fragmentation
5.2.2.3.1 Coalescence
5.2.2.3.2 Allowing internal fragmentation
5.2.2.3.3 Summary of external fragmentation
5.2.2.4 Basic variable-sized allocation schemes
5.2.2.4.1 First fit
5.2.2.4.2 Next fit
5.2.2.4.3 Best fit
5.2.2.4.4 Worst fit
5.2.2.4.5 Summary of linked list memory management
5.2.2.5 Summary of variable-sized allocation strategies
5.2.3 Advanced memory allocation algorithms
5.2.3.1 Quick fit
5.2.3.2 Binary buddy
5.2.3.3 Doug Lea’s malloc
5.2.3.4 Half fit
5.2.3.5 Two-level segregated fit
5.2.3.6 Smart fit
5.2.3.7 Summary of advanced memory allocation strategies
5.2.4 Summary of allocation strategies
5.3 Case study: FreeRTOS
5.3.1 Allocation only
5.3.2 Best fit without coalescence
5.3.3 Standard library malloc and free
5.3.4 First fit with coalescence
5.3.5 First fit with coalescence over multiple regions
5.3.6 Summary of the case study
5.4 Other features: clearing and reallocation
5.4.1 Reallocation in C++ and the move constructor and move assignment operator
5.5 Summary of dynamic memory allocation
Problem set
6 Threads and tasks
6.1 Weaknesses in single threads
6.2 Creating threads and tasks
6.2.1 Threads in POSIX
6.2.2 Threads in Java
6.2.3 Tasks in the Keil RTX RTOS
6.2.4 Threads in the CMSIS-RTOS RTX
6.2.5 Summary of thread and task creation
6.3 Applications of threads and tasks
6.3.1 Parallel execution
6.3.2 Divide-and-conquer algorithms
6.3.3 Independent versus dependent tasks
6.3.4 Application of threads and tasks
6.4 Maintaining threads
6.4.1 Memory allocation for threads
6.4.2 The thread identifier
6.4.3 The hierarchy of children threads
6.4.4 The call stack
6.4.5 Return values
6.4.6 Case study: the TCB in the Keil RTX
6.4.7 Summary of maintaining threads
6.5 The volatile keyword in C
6.6 Summary of threads and tasks
Problem set
7 Scheduling
7.1 Background: waiting on tasks and resources
7.2 Introduction to multitasking
7.2.1 What can be scheduled?
7.2.2 Storing an image of the processor—saving the register values
7.2.3 Description of tasks
7.2.3.1 Classification of tasks
7.2.3.1.1 Fixed-instance tasks
7.2.3.1.2 Periodic tasks
7.2.3.1.3 Aperiodic tasks
7.2.3.1.4 Sporadic tasks
7.2.3.1.5 Summary of task classification
7.2.3.2 Estimating worst-case execution times
7.2.3.3 Periodic and sporadic tasks with additional deadlines
7.2.3.4 Example: Periodic garbage collection
7.2.3.5 Summary of the description of tasks
7.2.4 State and state diagrams
7.2.4.1 A TCB macro-based queue data structure
7.2.4.2 Using the state
7.2.4.3 The idle thread: if nothing else is ready
7.2.4.4 Some observations
7.2.4.5 Case study: states in various systems
7.2.4.6 Summary of the state
7.2.5 Timing diagrams
7.2.6 Timing interrupts
7.2.7 Summary of multiprogramming and non-preemptive scheduling
7.3 Non-preemptive scheduling algorithms
7.3.1 Timeline scheduling
7.3.2 First-come—first-served scheduling
7.3.3 Shortest-job next scheduling
7.3.4 Earliest-deadline first scheduling
7.3.5 Least-slack first scheduling
7.3.6 Summary of non-preemptive scheduling algorithms
7.4 Preemptive scheduling algorithms
7.4.1 Motivation
7.4.2 Timeline scheduling
7.4.2.1 Example of timeline scheduling
7.4.2.2 Large super periods
7.4.2.3 Issues with timeline scheduling
7.4.2.4 Summary of timeline scheduling
7.4.3 Round-robin and other fair schedulers
7.4.4 Deadline-based preemptive scheduling
7.4.4.1 Earliest-deadline first
7.4.4.2 Least-slack first algorithm
7.4.4.3 Implementation of deadline scheduling algorithms
7.4.4.4 Optimal schedulers and periodic tasks with overloads
7.4.4.5 Real-time issues with optimal deadline scheduling algorithms
7.4.4.6 Summary of optimal deadline scheduling algorithms
7.4.5 Priority-based preemptive scheduling algorithms
7.4.5.1 Priorities
7.4.5.1.1 Classification of priorities: fixed and dynamic
7.4.5.1.2 Implementation of priority schedulers
7.4.5.1.2.1 Using an array of queues
7.4.5.1.2.2 Using a heap structure
7.4.5.1.2.3 Issues with heap structures
7.4.5.1.3 Summary of priorities
7.4.5.2 Rate-monotonic scheduling
7.4.5.2.1 Description of rate-monotonic scheduling
7.4.5.2.2 Determining schedulability
7.4.5.2.3 An older formula
7.4.5.2.4 When can RM scheduling do better?
7.4.5.2.5 RM scheduling with overloads
7.4.5.2.6 Summary of rate-monotonic scheduling
7.4.5.3 Deadline-monotonic scheduling
7.4.5.4 Implementing earliest-deadline first (EDF) scheduling with priorities
7.4.5.5 Dealing with jitter
7.4.5.6 Restricted priority levels
7.4.5.7 Summary of priority-based preemptive scheduling algorithms
7.4.6 Summary of preemptive scheduling algorithms
7.5 Issues with scheduling
7.5.1 Missing deadlines
7.5.2 Scheduling non-preemptible tasks
7.5.3 Multicore and multiprocessor processing
7.5.3.1 States and TCBs
7.5.3.2 Multiprocessor scheduling
7.5.3.2.1 Least-slack first
7.5.3.2.2 Rate-monotonic—first-fit periodic scheduling
7.5.3.2.3 Earliest-deadline first periodic scheduling
7.5.3.2.4 Summary of multiprocessor scheduling
7.5.3.3 Summary of multiprocessor and multicore processing
7.5.4 Summary of issues with scheduling
7.6 Summary of scheduling
Problem set
8 Hardware interrupts
8.1 Sources of interrupts
8.2 The mechanism of interrupts
8.2.1 Interrupting the processor
8.2.2 Halting execution
8.2.3 Selecting the interrupt service routine (ISR)
8.2.4 Characteristics of interrupt service routines (ISRs)
8.2.5 Returning from an interrupt
8.2.6 A simple application
8.2.7 Summary of the mechanism of interrupts
8.3 Ignoring and nested interrupts
8.4 Waiting for an interrupt
8.5 System design
8.5.1 Time- versus event-triggered systems
8.5.1.1 Time-triggered systems
8.5.1.2 Event-triggered systems
8.5.1.3 Hybrid systems and summary
8.5.2 Pure interrupt-driven systems
8.5.3 Summary of time- versus event-triggered systems
8.6 Watchdog timers
8.7 Implementation of interrupts
8.8 Apollo 11: an interrupt overload
8.9 Summary of hardware interrupts
Problem set
9 Synchronization
9.1 The need for synchronization
9.1.1 Sharing a data structure
9.1.2 Two tasks communicating information
9.1.3 Multiple tasks communicating information
9.1.4 Summary of problems
9.2 Petri nets—describing synchronizations graphically
9.3 Synchronization through token passing
9.4 Test-and-set—a crude signal with polling
9.5 Semaphores—a better signal without polling
9.5.1 Binary semaphores
9.5.1.1 Description
9.5.1.2 Applications of binary semaphores
9.5.1.2.1 Sharing a data structure
9.5.1.2.2 Two tasks communicating information
9.5.1.2.3 Multiple tasks communicating information
9.5.1.2.4 Summary of applications
9.5.1.3 Mutex: a more exclusive binary semaphore
9.5.1.4 Summary of binary semaphores
9.5.2 Implementation of binary semaphores
9.5.2.1 The binary semaphore algorithms
9.5.2.2 Data structures for semaphores
9.5.2.2.1 Using a linked-list queue with a searching pop
9.5.2.2.2 Using a binary min-heap
9.5.2.2.3 A leftist heap
9.5.2.2.4 A skew heap
9.5.2.2.5 Analysis
9.5.2.2.6 Summary of data structures for semaphores
9.5.2.3 Priority inversion
9.5.2.4 Summary of the implementation of binary semaphores
9.5.3 Counting semaphores
9.5.3.1 Counting with binary semaphores
9.5.3.2 The design of counting semaphores
9.5.3.3 An application of counting semaphores
9.5.3.4 A summary of counting semaphores
9.5.4 Implementation of counting semaphores
9.5.4.1 POSIX semaphores
9.5.4.2 RTX RTOS semaphores
9.5.4.3 CMSIS-RTOS semaphores
9.5.4.4 Implementing counting semaphores with binary semaphores
9.5.4.5 Summary of implementations of semaphores
9.5.5 A summary of semaphores
9.6 Problems in synchronization
9.6.1 Basic problems in synchronization
9.6.1.1 Mutual exclusion
9.6.1.2 Signalling
9.6.1.3 Rendezvous
9.6.1.4 Waiting on interrupts: an application of serialization
9.6.1.5 Summary of basic serialization
9.6.2 Intermediate problems in synchronization
9.6.2.1 Multiplexing
9.6.2.2 Group rendezvous and turnstile
9.6.2.2.1 A sub-optimal solution
9.6.2.2.2 A better solution
9.6.2.2.3 A reusable group rendezvous (the current optimal design pattern)
9.6.2.2.4 The turnstile data structure
9.6.2.2.5 The reusable group rendezvous data structure
9.6.2.2.6 Summary of the group rendezvous
9.6.2.3 Light-switches
9.6.2.4 Events
9.6.2.5 Summary of intermediate problems in synchronization
9.6.3 Advanced problems in synchronization
9.6.3.1 Dining philosophers’ problem
9.6.3.1.1 First left, then right—deadlock
9.6.3.1.2 Accessing both chopsticks simultaneously
9.6.3.1.3 Ordering the resources
9.6.3.1.4 Restricting access to the table
9.6.3.1.5 Starvation
9.6.3.1.6 Summary of the dining philosophers’ problem
9.6.3.2 Readers-writers problem
9.6.3.2.1 The default problem
9.6.3.2.2 Readers wait for a writer
9.6.3.2.3 Summary of the reader-writer problem
9.6.3.3 Summary of advanced problems in synchronization
9.6.4 Summary of problems in synchronization
9.7 Automatic synchronization
9.7.1 Weaknesses of semaphores
9.7.2 Automatic alternatives to semaphores for mutual exclusion
9.7.3 Rendezvous in Ada
9.7.4 Mutual exclusion in Ada
9.7.5 The equivalence of synchronization tools
9.7.6 Summary of automatic synchronization
9.8 Summary of synchronization
Problem set
10 Resource management
10.1 Semaphores
10.2 Classification of resources
10.3 Device management
10.4 Resource managers
10.5 Priority and deadline inversion
10.5.1 Mars Pathfinder
10.5.2 Blocking interrupts
10.5.3 Priority inheritance
10.5.4 Priority ceiling
10.5.5 Priority demotion
10.5.6 Priority inheritance in mutual exclusion locks
10.5.7 Summary of priority and deadline inversion
10.6 Summary of resource management
Problem set
11 Deadlock
11.1 Requirements for deadlock
11.2 Deadlock modeling
11.3 Techniques for preventing deadlock during the design
11.3.1 Mutual-exclusion condition
11.3.2 Hold-and-wait condition
11.3.3 No-preemption condition
11.3.4 Circular-wait condition
11.3.5 Other recommendations for deadlock prevention
11.3.6 Summary of techniques for preventing deadlock
11.4 Deadlock detection and recovery
11.4.1 Probabilistic deadlock detection with watchdog timers
11.4.2 A brief introduction to graph theory
11.4.3 Algorithmic deadlock detection
11.4.3.1 Reusable resources with blocking on requests for individual resources
11.4.3.1.1 Example 1
11.4.3.1.2 Example 2
11.4.3.1.3 Example 3
11.4.3.1.4 Example 4
11.4.3.1.5 Example 5
11.4.3.2 Reusable and consumable resources with blocking on requests for individual resources
11.4.3.2.1 Example 1
11.4.3.2.2 Example 2
11.4.3.2.3 Example 3
11.4.3.3 Cycle detection algorithms
11.4.3.3.1 Cycle detection with reusable resources with blocking on individual requests
11.4.3.3.2 Cycle detection with consumable resources with blocking on individual requests
11.4.3.3.3 Cycle detection with blocking on multiple requests
11.4.3.3.4 Summary of cycle detection algorithms
11.4.3.4 The algorithm when tasks may be blocked on requests for multiple resources
11.4.3.5 Summary of deadlock detection
11.4.4 Recovery
11.4.4.1 Preemption
11.4.4.2 Rebooting the system
11.4.4.3 Killing a task
11.4.4.4 Rolling back to a check-point
11.4.4.4.1 Example with memory allocation
11.4.4.4.2 Example with a communication bus
11.4.4.5 Examples 11.4.3.1.2 through 11.4.3.1.5
11.4.4.6 Summary of deadlock recovery
11.4.5 When to perform deadlock detection
11.4.6 Summary of deadlock detection and recovery
11.5 Deadlock avoidance
11.6 Summary
Problem set
12 Inter-process communication
12.1 Classification of communications
12.2 Solutions for communication
12.2.1 Binary and counting semaphores
12.2.2 Shared memory
12.2.3 Pipes
12.2.4 Message queues
12.2.5 Pipes and message queues in secondary memory
12.2.6 Sockets
12.2.7 Mailboxes and ports
12.2.8 Summary of solutions for communication
12.3 Priorities of messages
12.4 Synchronization
12.4.1 Serialization between two tasks or threads
12.4.2 Mutual exclusion
12.4.3 Counting semaphores
12.4.4 Priority inversion
12.4.5 Summary of synchronization
12.5 Network communications
12.5.1 Token-based communications
12.5.2 Collision-based communications
12.5.3 Contention-free communications
12.5.4 Summary of network communications
12.6 Summary of inter-process communication
Problem set
13 Fault tolerance
13.1 Failures in real-time systems
13.1.1 Fault prevention
13.1.1.1 Fault avoidance
13.1.1.2 Fault removal
13.1.1.3 Summary of fault prevention
13.1.2 Fault tolerance
13.1.2.1 Recovery blocks
13.1.2.2 Exceptions and exception handling
13.1.2.3 Summary of fault tolerance
13.1.3 Summary of failures
13.2 Redundancy
13.3 Error detection and correction in signals
13.3.1 Repetition
13.3.2 Check sums
13.3.3 Cyclic redundancy checks (CRCs)
13.3.4 Error correcting codes
13.3.5 Summary of error detection and correction
13.4 Clocks
13.4.1 Example: the Patriot missile system
13.4.2 Requirements
13.4.3 Restoring synchronization
13.4.4 Achieving synchronization
13.4.4.1 Synchronization with a standard clock
13.4.4.2 Distributed synchronization
13.4.4.2.1 Synchronization algorithm
13.4.4.2.2 Duplication over exclusion
13.4.4.2.3 Practical synchronization algorithm
13.4.4.2.4 Summary of distributed synchronization
13.4.4.3 Summary for achieving synchronization
13.4.5 Summary of clocks
13.5 Byzantine generals’ problem
13.5.1 A non-solution
13.5.2 A solution not using signatures
13.5.2.1 No disloyal participants
13.5.2.2 At most one disloyal participant
13.5.2.3 At most two disloyal participants
13.5.2.3.1 Example with seven participants and two disloyal lieutenants
13.5.2.3.2 Example with seven participants with a disloyal general and a collaborating lieutenant
13.5.2.3.3 Example with six participants and two disloyal lieutenants
13.5.2.3.4 Example with six participants with a disloyal general and a collaborating lieutenant
13.5.2.4 At most three disloyal participants
13.5.2.5 Generalization and analysis
13.5.3 A solution using signatures
13.5.4 Summary of Byzantine generals’ problem
13.6 Summary of fault tolerance
Problem set
14 Operating systems
14.1 Operating systems as resource managers
14.2 Processor modes
14.2.1 Multiple processor modes
14.3 Memory management
14.3.1 Memory protection unit
14.4 Microkernels
14.5 Real-time operating systems
14.6 Examples of real-time operating systems
14.6.1 The Keil RTX RTOS
14.6.1.1 Synchronization and message passing
14.6.2 The CMSIS-RTOS RTX
14.6.3 FreeRTOS
14.6.4 Wind River’s VxWorks
14.6.5 BlackBerry’s QNX
14.7 Summary of operating systems
Problem set
15 Software simulation
15.1 Physics engines
15.2 Modelling client-server systems
15.2.1 Deterministic rates
15.2.2 Markov rates
15.2.3 Load factor
15.2.4 Describing client-server setups
15.2.4.1 D/D/n
15.2.4.2 M/D/1
15.2.4.3 M/M/1
15.2.4.4 Summary
15.2.5 Multiple queues or one queue
15.2.6 Simulating Markov arrivals and processing times
15.2.6.1 The distribution of arrivals
15.2.6.2 Simulating such arrivals
15.2.6.3 Example
15.2.6.4 Simulating sporadic arrivals
15.2.6.5 Summary
15.2.7 Summary of modeling client-server systems
15.3 Simulating variation
15.3.1 Simulating uniform distributions
15.3.2 Simulating normal distributions
15.3.2.1 Twelve random samples
15.3.2.2 Marsaglia’s polar method
15.3.2.3 Summary of simulating normal distributions
15.3.3 Summary of simulating variation
15.4 Summary of simulating physical systems
Problem set
16 Software verification
16.1 The scenario and user or client needs
16.2 Propositional logic
16.2.1 Review of propositional logic
16.2.2 The implication operator
16.2.3 Our example
16.2.4 A fun example
16.3 Predicate logic
16.3.1 Introduction to predicate logic
16.3.2 Our driving example
16.4 Linear temporal logic (LTL)
16.4.1 Linear temporal logic operators
16.4.1.1 During the next time interval
16.4.1.2 Now and always in the future
16.4.1.3 Now or at some point in the future
16.4.1.4 Until
16.4.1.5 Equivalencies
16.4.2 Examples
16.4.2.1 Liveness
16.4.2.2 Invariance
16.4.2.3 Safety
16.4.2.4 Fairness
16.4.2.5 Oscillations
16.4.2.6 Mutual exclusion
16.4.2.7 Signaling
16.4.2.8 Rendezvous
16.4.3 Automated aircraft control architecture
16.5 Computation tree logic (CTL)
16.6 Model checkers
16.7 Modelling software
16.8 Summary of software verification
Problem set
17 File management
17.1 Block addressable
17.2 Files
17.3 Organization
17.4 File systems
17.4.1 Sample file structures
17.4.1.1 File allocation tables (FAT)
17.4.1.2 Unix index nodes or inodes
17.4.1.3 Summary of file structures
17.4.2 Sample directory structure (B+-trees)
17.4.2.1 Information in leaves
17.4.2.2 Internal nodes
17.4.2.3 Maintaining balance: B+-trees
17.4.2.4 Summary of a sample directory structure
17.4.3 Fragmentation
17.4.4 Fault tolerance and journaling
17.4.5 Memory and file systems: Spirit Mars rover
17.4.6 Summary of file systems
17.5 Data formats
17.5.1 The extensible mark-up language (XML)
17.5.2 The JavaScript object notation (JSON)
17.5.3 Java serializable
17.5.4 Summary of data formats
17.6 The file abstraction
17.6.1 open
17.6.2 close
17.6.3 read
17.6.4 write
17.6.5 lseek
17.6.6 Buffered I/O
17.6.7 Summary
17.7 Keil RTX RTOS
17.8 Summary
Problem set
18 Data management
18.1 Linear data structures
18.1.1 Array-based stacks, queues and deques
18.1.2 Node-based stacks, queues and deques
18.1.3 Sorted list data structures
18.2 Hash tables
18.2.1 Quadratic probing
18.2.2 Cuckoo hashing
18.3 Graphs
18.4 Non-relational databases
18.5 Relational databases
18.6 Summary of data management
19 Virtual memory and caching
19.1 Caches and virtual memory
19.2 Multiple levels of cache
19.3 Using solid-state drives as caches
19.4 Virtual memory and real-time systems
19.5 Thrashing
19.6 Page replacement algorithms
19.6.1 Two page-replacement algorithms
19.6.1.1 First-in—first-out
19.6.1.2 Least-recently used
19.6.1.3 Summary of two page-replacement algorithms
19.6.2 Bélády’s optimal replacement algorithm
19.6.3 Bélády’s anomaly
19.6.4 Summary of page-replacement algorithms
19.7 Summary
Problem set
20 Digital signal processing
20.1 Signals: definitions and descriptions
20.1.1 Basic definitions
20.1.2 Statistical descriptions of signals
20.1.3 Sampling and aliasing
20.1.4 Time domain versus frequency domain
20.1.5 Summary of definitions and descriptions of signals
20.2 Signal processing: definitions, issues and analysis
20.2.1 Definitions
20.2.2 The delta impulse and unit step signals and their responses
20.2.3 Linear systems
20.2.4 Time-invariant systems
20.2.5 Causal systems
20.2.6 Characteristics of causal linear time-invariant systems
20.2.7 The convolution
20.2.8 Analyzing systems and transfer functions
20.2.9 Summary of definitions, issues and analysis in signal processing
20.3 Classification of causal linear time-invariant digital systems
20.3.1 Explicit versus recursive formulas
20.3.1.1 Explicit filters
20.3.1.2 Recursive filters
20.3.1.3 Summary of explicit digital filters and recursive digital filters
20.3.2 The response to bounded input
20.3.3 Duration of the impulse response
20.3.3.1 Finite impulse response (FIR) filters
20.3.3.1.1 Description of FIR filters
20.3.3.1.2 Some simple FIR filters
20.3.3.1.3 FIR filter design
20.3.3.1.4 Implementation
20.3.3.1.5 Summary of FIR filters
20.3.3.2 Infinite impulse response filters
20.3.3.2.1 Description of IIR filters
20.3.3.2.2 Implementation
20.3.3.2.3 Summary of IIR filters
20.3.3.3 Summary of the duration of the impulse response
20.3.4 Summary of the classification of causal linear time-invariant digital systems
20.4 Digital signal processing and analysis
20.4.1 Time-domain filters
20.4.1.1 FIR filters in the time domain
20.4.1.1.1 Noise reduction: the moving average filter
20.4.1.1.2 Approximating the next point
20.4.1.1.2.1 Polynomial interpolation
20.4.1.1.2.2 Least-squares best-fitting line
20.4.1.1.2.3 Least-squares best-fitting quadratic polynomial
20.4.1.1.3 Estimating the current point with least squares
20.4.1.1.3.1 Least-squares best-fitting line
20.4.1.1.3.2 Least-squares best-fitting quadratic polynomial
20.4.1.1.4 Estimating a previous missing value
20.4.1.1.4.1 Polynomial interpolation
20.4.1.1.4.2 Least-squares best-fitting line
20.4.1.1.4.3 Least-squares best-fitting quadratic function
20.4.1.1.5 First derivative
20.4.1.1.5.1 Polynomial interpolation
20.4.1.1.5.2 Least-squares best-fitting line
20.4.1.1.5.3 Least-squares best-fitting quadratic function
20.4.1.1.6 Second derivatives
20.4.1.1.6.1 Polynomial interpolation
20.4.1.1.6.2 Least-squares best-fitting quadratic polynomial
20.4.1.1.6.3 Least-squares best-fitting cubic polynomial
20.4.1.1.7 Summary of FIR filters in the time domain
20.4.1.2 IIR filters in the time domain
20.4.1.2.1 Low-pass single-pole filter
20.4.1.2.2 Low-pass N-stage single-pole filter
20.4.1.2.3 High-pass single pole filter
20.4.1.2.4 Notch filters
20.4.1.2.5 Integration
20.4.1.2.5.1 Polynomial interpolation
20.4.1.2.5.2 Least-squares best-fitting line
20.4.1.2.5.3 Least-squares best-fitting quadratic polynomial
20.4.1.2.6 Summary of IIR filters in the time domain
20.4.1.3 Summary of time-domain filters
20.4.2 Frequency-domain filters
20.4.2.1 FIR filters in the frequency domain
20.4.2.1.1 The sinc and the Blackman functions
20.4.2.1.2 Parks-McClellan algorithm
20.4.2.1.3 Summary of FIR filters in the frequency domain
20.4.2.2 IIR filters in the frequency domain
20.4.2.2.1 Butterworth filters
20.4.2.2.2 Chebyshev filters
20.4.2.2.3 Summary of IIR filters in the frequency domain
20.4.2.3 Summary of frequency-domain filters
20.4.3 Summary of digital processing and analysis
20.5 Discrete transforms
20.5.1 Fast Fourier transform
20.5.2 Discrete cosine transform
20.5.3 Summary of transforms
20.6 Summary of digital signal processing
21 Digital control theory
22 Security
23 Summary and looking ahead
23.1 What you achieved
23.2 The next courses
23.3 Directly related technical electives
23.4 Other related technical electives
23.5 Conclusions
Appendices, references and index
Appendix A Scheduling examples
A.1 Earliest deadline first scheduling
A.2 Rate monotonic scheduling
Appendix B Representation of numbers
B.1 Representations of integers
B.2 Floating-point representations
B.3 Fixed-point representations
B.4 Summary of the representation of numbers
Appendix C Fixed-point algorithms for RM schedulability tests
Appendix D Synchronization tools
D.1 Counting semaphores
D.2 Turnstile
D.3 Group rendezvous
D.4 Light switch
D.5 Events
Appendix E Implementation of a buffer
Appendix F Efficient mathematics
F.1 Evaluating 
F.2 Approximating trigonometric functions
F.2.1 Approximations with polynomials
F.2.2 Horner’s rule
F.2.3 Approximations with a single cubic function
F.2.4 Interpolating a greater number of points
F.2.5 Summary of approximating trigonometric functions
F.3 Approximating the square root function
Appendix G Trigonometric approximations
Appendix H Complex numbers and linear algebra
H.1 Complex numbers
H.2 Linear algebra and inner-product spaces
H.3 Examples of inner-product spaces
H.3.1 Real and complex finite-dimensional vector spaces
H.3.2 Finite-energy real and complex sequences
H.3.3 Real- and complex-valued piecewise continuous functions on an interval [a, b]
H.3.4 Finite-energy real- and complex-valued piecewise continuous functions on the real line R
H.3.5 Summary of examples of inner-product spaces
H.4 Linear operators (or linear systems)
H.4.1 Matrices
H.4.2 Differentiation and definite integration
H.4.3 Fourier series and the Laplace and Fourier transforms
H.4.4 Image processing as a counter example
H.4.5 Summary of linear operators
H.5 Summary of linear algebra
References
Books
Papers
Index
About the authors
Colophon