Review of Mathematical Principles

2017 
Practical problems require good math. – R. Chellappa

Introduction

This chapter is a review of several topics that are prerequisites for use of this book as a text. The student should have had an undergraduate calculus experience equivalent to about three semesters, along with some exposure to ordinary and partial differential equations. The student should have coursework covering concepts from probability and statistics, including prior probabilities, conditional probability, Bayes' rule, and expectations. Finally, and very important, the student should have strong undergraduate-level training in linear algebra. This chapter reviews and refreshes many of the concepts from those courses, but only as a review, not as a presentation of totally new material.

• (Section 3.2) We briefly review important concepts in linear algebra, including various vector and matrix operations, the derivative operators, eigendecomposition, and its relationship to singular value decomposition.

• (Section 3.3) Since almost all Computer Vision topics can be formulated as minimization problems, in this section we briefly introduce function minimization and discuss gradient descent and simulated annealing, two minimization techniques that can lead to local and global minima, respectively.

• (Section 3.4) In Computer Vision, we are often interested in the probability of a certain measurement occurring. In this section, we briefly review concepts such as probability density functions and probability distribution functions.

A Brief Review of Linear Algebra

In this section, we very briefly review vector and matrix operations. Generally, we denote vectors in boldface lowercase, scalars in lowercase italic Roman, and matrices in uppercase Roman.

Vectors

Vectors are always considered to be column vectors. If we need to write one horizontally for the purpose of saving space in a document, we use transpose notation.
For example, we denote a vector that consists of three scalar elements as

x = [x_1, x_2, x_3]^T.

The Inner Product

The inner product of two vectors is a scalar, c = x^T y. Its value is the sum of products of the corresponding elements of the two vectors:

c = x^T y = \sum_i x_i y_i.

You will also sometimes see the notation \langle x, y \rangle used for the inner product. We do not like this because it looks like an expected value of a random variable. One sometimes also sees the "dot product" notation x \cdot y for the inner product.
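The inner product above can be sketched in a few lines of Python; this is a minimal illustration, and the function name `inner` and the sample vectors are ours, not from the text. Plain lists stand in for column vectors:

```python
def inner(x, y):
    """Inner product c = x^T y: the sum of products of corresponding elements."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    return sum(xi * yi for xi, yi in zip(x, y))

# Two three-element vectors, written horizontally only for convenience
# (conceptually they are column vectors).
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
c = inner(x, y)  # 1*4 + 2*5 + 3*6 = 32.0
```

In practice one would use a numerical library's dot product rather than a hand-rolled loop, but the loop makes the element-by-element definition explicit.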