A Practical Introduction to Real-time Systems for Undergraduate Engineering
Douglas Wilhelm Harder

This course will begin by introducing the requirements and constraints of real-time and embedded systems, together with examples. Next, we will consider various programming paradigms and the appropriate characteristics of embedded and real-time programming languages, and we will introduce the C programming language and contrast it with C++. We continue by describing the organization of a computer, including descriptions of Turing machines, register machines, main memory, processor architecture and operating systems. From there, we describe static memory allocation and the call stack, and then consider dynamic memory allocation, including numerous variable-sized-block strategies, their appropriateness for real-time systems, and various implementations in FreeRTOS. We also consider automatic memory allocation, including garbage collection. Following this, we discuss the idea of threads and tasks running in parallel, looking at examples of sorting in parallel and at the data structures necessary to maintain threads. We then consider the issue of scheduling threads, first with multiprogramming, non-preemptive scheduling and the concept of context switching, and then with multitasking and preemptive scheduling, focusing on real-time schedulers such as earliest-deadline-first and rate-monotonic scheduling. Next, we consider the communication of other devices with the processor, the concept of interrupts and interrupt service routines, and the impact of interrupts on real-time systems. We then consider synchronization issues between cooperating threads and tasks, including issues of serialization and synchronization. We describe semaphores, examine a number of synchronization problems, and consider solutions using semaphores. Additionally, we consider other means of automatic synchronization. 
With this, we consider the application of semaphores and synchronization in general to resource management, looking specifically at priority inversion. The most serious issue, however, is deadlock, which occurs when tasks and threads holding resources are mutually blocked on subsequent requests; we consider how to avoid this issue. Next, inter-process communication is discussed, together with how synchronization can be achieved through messaging. We then consider fault tolerance, specifically error detection and correction, the synchronization of clocks, and fault-tolerant message passing. Having considered all this, we consider how resource management can be centralized in an operating system, with protection provided through kernel modes and kernel space. Next, we consider the problem of software simulation, including client-server models and distributions, and then software verification, including a look at propositional logic, predicate logic, linear temporal logic, computation tree logic and model checkers. We conclude the course by considering issues of data management and file management, virtual memory and caching, digital signal processing and digital control theory, finishing with an introduction to security and a look at what is ahead.

1 Introduction to real-time systems
1.1 What is a real-time system?
1.1.1 What is time?
1.1.2 What are embedded systems?
1.2 Case study: anti-lock braking system
1.3 Components of real-time systems
1.3.1 The environment
1.3.2 Real-time hardware
1.3.3 Real-time software Therac-25 The Mars rover “Spirit” Logic expression simplification Summary of real-time software and race conditions
1.3.4 Summary of the components of real-time systems
1.4 The history of real-time programming
1.5 Topic summary
Problem set
2 Real-time, embedded and operating-system programming languages
2.1 Programming languages
2.1.1 Programming paradigms Structured programming Procedural programming Data abstraction Object-oriented programming Design patterns Summary of programming paradigms
2.1.2 Ideal characteristics of programming languages Characteristics of a real-time programming language Characteristics of an embedded systems programming language Characteristics of an operating system programming language Summary of ideal characteristics
2.1.3 Other software programming techniques
2.1.4 Summary of real-time programming languages
2.2 The C programming language
2.2.1 No class, just structure…
2.2.2 More than the sum of its parts
2.2.3 Generics in C
2.2.4 Static and dynamic memory allocation
2.2.5 Pass-by-reference in C
2.2.6 An object-oriented approach in C
2.2.7 Header files
2.2.8 The pre-processor
2.2.9 Bit-wise operations
2.2.10 Bit-fields in C99
2.2.11 µVision4 specifics
2.2.13 Further help
2.3 Summary of real-time programming
Problem set
3 Computer organization
3.1 The Turing machine
3.2 Register machines
3.2.1 Instructions versus data
3.2.2 Word size
3.2.3 The registers in the lpc1768 General-purpose registers Special-purpose registers Summary of the lpc1768
3.2.4 Summary of register machines
3.3 Main memory
3.3.1 Addressing
3.3.2 Byte order
3.3.3 Accessing memory
3.3.4 Buses
3.3.5 Memory allocation
3.3.6 Summary of main memory
3.4 Processor architecture
3.4.1 Microprocessors and microcontrollers Microprocessor Microcontrollers What is firmware?
3.4.2 Harvard architecture
3.4.3 von Neumann architecture
3.4.4 The Cortex-M3 architecture
3.4.5 Architecture summary
3.5 Operating systems
3.5.1 Why do we need an operating system?
3.5.2 Uses of operating systems
3.5.3 Linux, POSIX and the Keil RTX real-time operating system
3.5.4 Real-time operating systems
3.6 Computer organization summary
Problem set
4 Static memory allocation
4.1 The requirements of a function
4.2 The Cortex-M3 design
4.3 Set jump and long jump
4.4 Summary of static memory allocation
Problem set
5 Dynamic memory allocation
5.1 Abstract dynamic memory allocator
5.1.1 Manual allocation management Wild pointers Dangling pointers Freeing the same location more than once Memory leaks Summary of issues with memory deallocation
5.1.2 Automatic allocation management Automatic initialization Garbage collection Garbage collection algorithms Reference counting Tracing algorithms Garbage collection in C Summary of garbage collection algorithms Issues with garbage collection Summary of garbage collection Summary of automatic allocation
5.1.3 Summary of abstract dynamic memory allocation
5.2 Allocation strategies
5.2.1 Fixed block sizes One size of blocks Fixed size blocks Internal fragmentation Summary for fixed block sizes
5.2.2 Variable-sized-block strategies Bitmaps Linked lists External fragmentation Coalescence Allowing internal fragmentation Summary of external fragmentation Basic variable-sized allocation schemes First fit Next fit Best fit Worst fit Summary of linked list memory management Summary of variable-sized allocation strategies
5.2.3 Advanced memory allocation algorithms Quick fit Binary buddy Doug Lea’s malloc Half fit Two-level segregated fit Smart fit Summary of advanced memory allocation strategies
5.2.4 Summary of allocation strategies
5.3 Case study: FreeRTOS
5.3.1 Allocation only
5.3.2 Best fit without coalescence
5.3.3 Standard library malloc and free
5.3.4 First fit with coalescence
5.3.5 First fit with coalescence over multiple regions
5.3.6 Summary of the case study
5.4 Other features: clearing and reallocation
5.4.1 Reallocation in C++ and the move constructor and move assignment operator
5.5 Summary of dynamic memory allocation
Problem set
6 Threads and tasks
6.1 Weaknesses in single threads
6.2 Creating threads and tasks
6.2.1 Threads in POSIX
6.2.2 Threads in Java
6.2.3 Tasks in the Keil RTX RTOS
6.2.4 Threads in the CMSIS-RTOS RTX
6.2.5 Summary of thread and task creation
6.3 Applications of threads and tasks
6.3.1 Parallel execution
6.3.2 Divide-and-conquer algorithms
6.3.3 Independent versus dependent tasks
6.3.4 Application of threads and tasks
6.4 Maintaining threads
6.4.1 Memory allocation for threads
6.4.2 The thread identifier
6.4.3 The hierarchy of children threads
6.4.4 The call stack
6.4.5 Return values
6.4.6 Case study: the TCB in the Keil RTX
6.4.7 Summary of maintaining threads
6.5 The volatile keyword in C
6.6 Summary of threads and tasks
Problem set
7 Scheduling
7.1 Background: waiting on tasks and resources
7.2 Introduction to multitasking
7.2.1 What can be scheduled?
7.2.2 Storing an image of the processor—saving the register values
7.2.3 Description of tasks Classification of tasks Fixed-instance tasks Periodic tasks Aperiodic tasks Sporadic tasks Summary of task classification Estimating worst-case execution times Periodic and sporadic tasks with additional deadlines Example: Periodic garbage collection Summary of the description of tasks
7.2.4 State and state diagrams A TCB macro-based queue data structure Using the state The idle thread: if nothing else is ready Some observations Case study: states in various systems Summary of the state
7.2.5 Timing diagrams
7.2.6 Timing interrupts
7.2.7 Summary of multiprogramming and non-preemptive scheduling
7.3 Non-preemptive scheduling algorithms
7.3.1 Timeline scheduling
7.3.2 First-come—first-served scheduling
7.3.3 Shortest-job next scheduling
7.3.4 Earliest-deadline first scheduling
7.3.5 Least-slack first scheduling
7.3.6 Summary of non-preemptive scheduling algorithms
7.4 Preemptive scheduling algorithms
7.4.1 Motivation
7.4.2 Timeline scheduling Example of timeline scheduling Large super periods Issues with timeline scheduling Summary of timeline scheduling
7.4.3 Round-robin and other fair schedulers
7.4.4 Deadline-based preemptive scheduling Earliest-deadline first Least-slack first algorithm Implementation of deadline scheduling algorithms Optimal schedulers and periodic tasks with overloads Real-time issues with optimal deadline scheduling algorithms Summary of optimal deadline scheduling algorithms
7.4.5 Priority-based preemptive scheduling algorithms Priorities Classification of priorities: fixed and dynamic Implementation of priority schedulers Using an array of queues Using a heap structure Issues with heap structures Summary of priorities Rate-monotonic scheduling Description of rate-monotonic scheduling Determining schedulability An older formula When can rm scheduling do better? rm scheduling with overloads Summary of rate-monotonic scheduling Deadline-monotonic scheduling Implementing earliest-deadline first (edf) scheduling with priorities Dealing with jitter Restricted priority levels Summary of priority-based preemptive scheduling algorithms
7.4.6 Summary of preemptive scheduling algorithms
7.5 Issues with scheduling
7.5.1 Missing deadlines
7.5.2 Scheduling non-preemptible tasks
7.5.3 Multicore and multiprocessor processing States and TCBs Multiprocessor scheduling Least-slack first Rate-monotonic—first-fit periodic scheduling Earliest-deadline first periodic scheduling Summary of multiprocessor scheduling Summary of multiprocessor and multicore processing
7.5.4 Summary of issues with scheduling
7.6 Summary of scheduling
Problem set
8 Hardware interrupts
8.1 Sources of interrupts
8.2 The mechanism of interrupts
8.2.1 Interrupting the processor
8.2.2 Halting execution
8.2.3 Selecting the interrupt service routine (isr)
8.2.4 Characteristics of interrupt service routines (isrs)
8.2.5 Returning from an interrupt
8.2.6 A simple application
8.2.7 Summary of the mechanism of interrupts
8.3 Ignoring and nested interrupts
8.4 Waiting for an interrupt
8.5 System design
8.5.1 Time- versus event-triggered systems Time-triggered systems Event-triggered systems Hybrid systems and summary
8.5.2 Pure interrupt-driven systems
8.5.3 Summary of time- versus event-triggered systems
8.6 Watchdog timers
8.7 Implementation of interrupts
8.8 Apollo 11: an interrupt overload
8.9 Summary of hardware interrupts
Problem set
9 Synchronization
9.1 The need for synchronization
9.1.1 Sharing a data structure
9.1.2 Two tasks communicating information
9.1.3 Multiple tasks communicating information
9.1.4 Summary of problems
9.2 Petri nets—describing synchronizations graphically
9.3 Synchronization through token passing
9.4 Test-and-set—a crude signal with polling
9.5 Semaphores—a better signal without polling
9.5.1 Binary semaphores Description Applications of binary semaphores Sharing a data structure Two tasks communicating information Multiple tasks communicating information Summary of applications Mutex: a more exclusive binary semaphore Summary of binary semaphores
9.5.2 Implementation of binary semaphores The binary semaphore algorithms Data structures for semaphores Using a linked-list queue with a searching pop Using a binary min-heap A leftist heap A skew heap Analysis Summary of data structures for semaphores Priority inversion Summary of the implementation of binary semaphores
9.5.3 Counting semaphores Counting with binary semaphores The design of counting semaphores An application of counting semaphores A summary of counting semaphores
9.5.4 Implementation of counting semaphores POSIX semaphores RTX RTOS semaphores CMSIS-RTOS semaphores Implementing counting semaphores with binary semaphores Summary of implementations of semaphores
9.5.5 A summary of semaphores
9.6 Problems in synchronization
9.6.1 Basic problems in synchronization Mutual exclusion Signalling Rendezvous Waiting on interrupts: an application of serialization Summary of basic serialization
9.6.2 Intermediate problems in synchronization Multiplexing Group rendezvous and turnstile A sub-optimal solution A better solution A reusable group rendezvous (the current optimal design pattern) The turnstile data structure The reusable group rendezvous data structure Summary of the group rendezvous Light-switches Events Summary of intermediate problems in synchronization
9.6.3 Advanced problems in synchronization Dining philosophers’ problem First left, then right—deadlock Accessing both chopsticks simultaneously Ordering the resources Restricting access to the table Starvation Summary of the dining philosophers’ problem Readers-writers problem The default problem Readers wait for a writer Summary of the readers-writers problem Summary of advanced problems in synchronization
9.6.4 Summary of problems in synchronization
9.7 Automatic synchronization
9.7.1 Weaknesses of semaphores
9.7.2 Automatic alternatives to semaphores for mutual exclusion
9.7.3 Rendezvous in Ada
9.7.4 Mutual exclusion in Ada
9.7.5 The equivalence of synchronization tools
9.7.6 Summary of automatic synchronization
9.8 Summary of synchronization
Problem set
10 Resource management
10.1 Semaphores
10.2 Classification of resources
10.3 Device management
10.4 Resource managers
10.5 Priority and deadline inversion
10.5.1 Mars Pathfinder
10.5.2 Blocking interrupts
10.5.3 Priority inheritance
10.5.4 Priority ceiling
10.5.5 Priority demotion
10.5.6 Priority inheritance in mutual exclusion locks
10.5.7 Summary of priority and deadline inversion
10.6 Summary of resource management
Problem set
11 Deadlock
11.1 Requirements for deadlock
11.2 Deadlock modeling
11.3 Techniques for preventing deadlock during the design
11.3.1 Mutual-exclusion condition
11.3.2 Hold-and-wait condition
11.3.3 No-preemption condition
11.3.4 Circular-wait condition
11.3.5 Other recommendations for deadlock prevention
11.3.6 Summary of techniques for preventing deadlock
11.4 Deadlock detection and recovery
11.4.1 Probabilistic deadlock detection with watchdog timers
11.4.2 A brief introduction to graph theory
11.4.3 Algorithmic deadlock detection Reusable resources with blocking on requests for individual resources Example 1 Example 2 Example 3 Example 4 Example 5 Reusable and consumable resources with blocking on requests for individual resources Example 1 Example 2 Example 3 Cycle detection algorithms Cycle detection with reusable resources with blocking on individual requests Cycle detection with consumable resources with blocking on individual requests Cycle detection with blocking on multiple requests Summary of cycle detection algorithms The algorithm when tasks may be blocked on requests for multiple resources Summary of deadlock detection
11.4.4 Recovery Preemption Rebooting the system Killing a task Rolling back to a check-point Example with memory allocation Example with a communication bus Examples through 5 Summary of deadlock recovery
11.4.5 When to perform deadlock detection
11.4.6 Summary of deadlock detection and recovery
11.5 Deadlock avoidance
11.6 Summary
Problem set
12 Inter-process communication
12.1 Classification of communications
12.2 Solutions for communication
12.2.1 Binary and counting semaphores
12.2.2 Shared memory
12.2.3 Pipes
12.2.4 Message queues
12.2.5 Pipes and message queues in secondary memory
12.2.6 Sockets
12.2.7 Mailboxes and ports
12.2.8 Summary of solutions for communication
12.3 Priorities of messages
12.4 Synchronization
12.4.1 Serialization between two tasks or threads
12.4.2 Mutual exclusion
12.4.3 Counting semaphores
12.4.4 Priority inversion
12.4.5 Summary of synchronization
12.5 Network communications
12.5.1 Token-based communications
12.5.2 Collision-based communications
12.5.3 Contention-free communications
12.5.4 Summary of network communications
12.6 Summary of inter-process communication
Problem set
13 Fault tolerance
13.1 Failures in real-time systems
13.1.1 Fault prevention Fault avoidance Fault removal Summary of fault prevention
13.1.2 Fault tolerance Recovery blocks Exceptions and exception handling Summary of fault tolerance
13.1.3 Summary of failures
13.2 Redundancy
13.3 Error detection and correction in signals
13.3.1 Repetition
13.3.2 Check sums
13.3.3 Cyclic redundancy checks (crcs)
13.3.4 Error correcting codes
13.3.5 Summary of error detection and correction
13.4 Clocks
13.4.1 Example: the Patriot missile system
13.4.2 Requirements
13.4.3 Restoring synchronization
13.4.4 Achieving synchronization Synchronization with a standard clock Distributed synchronization Synchronization algorithm Duplication over exclusion Practical synchronization algorithm Summary of distributed synchronization Summary for achieving synchronization
13.4.5 Summary of clocks
13.5 Byzantine generals’ problem
13.5.1 A non-solution
13.5.2 A solution not using signatures No disloyal participants At most one disloyal participant At most two disloyal participants Example with seven participants and two disloyal lieutenants Example with seven participants with a disloyal general and a collaborating lieutenant Example with six participants and two disloyal lieutenants Example with six participants with a disloyal general and a collaborating lieutenant At most three disloyal participants Generalization and analysis
13.5.3 A solution using signatures
13.5.4 Summary of Byzantine generals’ problem
13.6 Summary of fault tolerance
Problem set
14 Operating systems
14.1 Operating systems as resource managers
14.2 Processor modes
14.2.1 Multiple processor modes
14.3 Memory management
14.3.1 Memory protection unit
14.4 Microkernels
14.5 Real-time operating systems
14.6 Examples of real-time operating systems
14.6.1 The Keil RTX RTOS Synchronization and message passing
14.6.3 FreeRTOS
14.6.4 Wind River’s VxWorks
14.6.5 Blackberry’s QNX
14.7 Summary of operating systems
Problem set
15 Software simulation
15.1 Physics engines
15.2 Modelling client-server systems
15.2.1 Deterministic rates
15.2.2 Markov rates
15.2.3 Load factor
15.2.4 Describing client-server setups D/D/n M/D/1 M/M/1 Summary
15.2.5 Multiple queues or one queue
15.2.6 Simulating Markov arrivals and processing times The distribution of arrivals Simulating such arrivals Example Simulating sporadic arrivals Summary
15.2.7 Summary of modeling client-server systems
15.3 Simulating variation
15.3.1 Simulating uniform distributions
15.3.2 Simulating normal distributions Twelve random samples Marsaglia’s polar method Summary of simulating normal distributions
15.3.3 Summary of simulating variation
15.4 Summary of simulating physical systems
Problem set
16 Software verification
16.1 The scenario and user or client needs
16.2 Propositional logic
16.2.1 Review of propositional logic
16.2.2 The implication operator
16.2.3 Our example
16.2.4 A fun example
16.3 Predicate logic
16.3.1 Introduction to predicate logic
16.3.2 Our driving example
16.4 Linear temporal logic (LTL)
16.4.1 Linear temporal logic operators During the next time interval Now and always in the future Now or at some point in the future Until Equivalencies
16.4.2 Examples Liveness Invariance Safety Fairness Oscillations Mutual exclusion Signaling Rendezvous
16.4.3 Automated aircraft control architecture
16.5 Computation tree logic (CTL)
16.6 Model checkers
16.7 Modelling software
16.8 Summary of software verification
Problem set
17 File management
17.1 Block addressable
17.2 Files
17.3 Organization
17.4 File systems
17.4.1 Sample file structures File allocation tables (FAT) Unix index nodes or inodes Summary of file structures
17.4.2 Sample directory structure (B+-trees) Information in leaves Internal nodes Maintaining balance: B+-trees Summary of a sample directory structure
17.4.3 Fragmentation
17.4.4 Fault tolerance and journaling
17.4.5 Memory and file systems: Spirit Mars rover
17.4.6 Summary of file systems
17.5 Data formats
17.5.1 The extensible markup language (XML)
17.5.2 The JavaScript object notation (JSON)
17.5.3 Java serializable
17.5.4 Summary of data formats
17.6 The file abstraction
17.6.1 open
17.6.2 close
17.6.3 read
17.6.4 write
17.6.5 lseek
17.6.6 Buffered I/O
17.6.7 Summary
17.7 Keil RTX RTOS
17.8 Summary
Problem set
18 Data management
18.1 Linear data structures
18.1.1 Array-based stacks, queues and deques
18.1.2 Node-based stacks, queues and deques
18.1.3 Sorted list data structures
18.2 Hash tables
18.2.1 Quadratic probing
18.2.2 Cuckoo hashing
18.3 Graphs
18.4 Non-relational databases
18.5 Relational databases
18.6 Summary of data management
19 Virtual memory and caching
19.1 Caches and virtual memory
19.2 Multiple levels of cache
19.3 Using solid-state drives as caches
19.4 Virtual memory and real-time systems
19.5 Thrashing
19.6 Page replacement algorithms
19.6.1 Two page-replacement algorithms First-in—first-out Least-recently used Summary of two page-replacement algorithms
19.6.2 Bélády’s optimal replacement algorithm
19.6.3 Bélády’s anomaly
19.6.4 Summary of page-replacement algorithms
19.7 Summary
Problem set
20 Digital signal processing
20.1 Signals: definitions and descriptions
20.1.1 Basic definitions
20.1.2 Statistical descriptions of signals
20.1.3 Sampling and aliasing
20.1.4 Time domain versus frequency domain
20.1.5 Summary of definitions and descriptions of signals
20.2 Signal processing: definitions, issues and analysis
20.2.1 Definitions
20.2.2 The delta impulse and unit step signals and their responses
20.2.3 Linear systems
20.2.4 Time-invariant systems
20.2.5 Causal systems
20.2.6 Characteristics of causal linear time-invariant systems
20.2.7 The convolution
20.2.8 Analyzing systems and transfer functions
20.2.9 Summary of definitions, issues and analysis in signal processing
20.3 Classification of causal linear time-invariant digital systems
20.3.1 Explicit versus recursive formulas Explicit filters Recursive filters Summary of explicit digital filters and recursive digital filters
20.3.2 The response to bounded input
20.3.3 Duration of the impulse response Finite impulse response (FIR) filters Description of FIR filters Some simple FIR filters FIR filter design Implementation Summary of FIR filters Infinite impulse response filters Description of IIR filters Implementation Summary of IIR filters Summary of the duration of the impulse response
20.3.4 Summary of the classification of causal linear time-invariant digital systems
20.4 Digital signal processing and analysis
20.4.1 Time-domain filters FIR filters in the time domain Noise reduction: the moving average filter Approximating the next point Polynomial interpolation Least-squares best-fitting line Least-squares best-fitting quadratic polynomial Estimating the current point with least squares Least-squares best-fitting line Least-squares best-fitting quadratic polynomial Estimating a previous missing value Polynomial interpolation Least-squares best-fitting line Least-squares best-fitting quadratic function First derivative Polynomial interpolation Least-squares best-fitting line Least-squares best-fitting quadratic function Second derivatives Polynomial interpolation Least-squares best-fitting quadratic polynomial Least-squares best-fitting cubic polynomial Summary of FIR filters in the time domain IIR filters in the time domain Low-pass single-pole filter Low-pass N-stage single-pole filter High-pass single pole filter Notch filters Integration Polynomial interpolation Least-squares best-fitting line Least-squares best-fitting quadratic polynomial Summary of IIR filters in the time domain Summary of time-domain filters
20.4.2 Frequency-domain filters FIR filters in the frequency domain The sinc and the Blackman functions Parks-McClellan algorithm Summary of FIR filters in the frequency domain IIR filters in the frequency domain Butterworth filters Chebyshev filters Summary of frequency-domain filters
20.4.3 Summary of digital processing and analysis
20.5 Discrete transforms
20.5.1 Fast Fourier transform
20.5.2 Discrete cosine transform
20.5.3 Summary of transforms
20.6 Summary of digital signal processing
21 Digital control theory
22 Security
23 Summary and looking ahead
23.1 What you achieved
23.2 The next courses
23.3 Directly related technical electives
23.4 Other related technical electives
23.5 Conclusions
Appendices, references and index
Appendix A Scheduling examples
A.1 Earliest deadline first scheduling
A.2 Rate monotonic scheduling
Appendix B Representation of numbers
B.1 Representations of integers
B.2 Floating-point representations
B.3 Fixed-point representations
B.4 Summary of the representation of numbers
Appendix C Fixed-point algorithms for RM schedulability tests
Appendix D Synchronization tools
D.1 Counting semaphores
D.2 Turnstile
D.3 Group rendezvous
D.4 Light switch
D.5 Events
Appendix E Implementation of a buffer
Appendix F Efficient mathematics
F.1 Evaluating 
F.2 Approximating trigonometric functions
F.2.1 Approximations with polynomials
F.2.2 Horner’s rule
F.2.3 Approximations with a single cubic function
F.2.4 Interpolating a greater number of points
F.2.5 Summary of approximating trigonometric functions
F.3 Approximating the square root function
Appendix G Trigonometric approximations
Appendix H Complex numbers and linear algebra
H.1 Complex numbers
H.2 Linear algebra and inner-product spaces
H.3 Examples of inner-product spaces
H.3.1 Real and complex finite-dimensional vector spaces
H.3.2 Finite-energy real and complex sequences
H.3.3 Real- and complex-valued piecewise continuous functions on an interval [a, b]
H.3.4 Finite-energy real- and complex-valued piecewise continuous functions on the real line R
H.3.5 Summary of examples of inner-product spaces
H.4 Linear operators (or linear systems)
H.4.1 Matrices
H.4.2 Differentiation and definite integration
H.4.3 Fourier series and the Laplace and Fourier transforms
H.4.4 Image processing as a counter example
H.4.5 Summary of linear operators
H.5 Summary of linear algebra
About the authors