Math is an important part of computer science. But which math matters most? The sections below survey the topics most useful in CS.
Discrete Math
Discrete mathematics is the branch of mathematics mainly concerned with sets and integers, objects that are ‘discrete’, that is, separate from one another. The phrase was coined in the 1980s as a catch-all for the math topics most useful to computer science students, and it has since evolved into a study of how to solve real-world problems using mathematical (and therefore computational) models.
Discrete math topics are often more concerned with reasoning than with numbers – many classes begin with logical statements, mathematical proofs, and induction, the important technique of proving a statement for infinitely many cases using just two finite steps. When you design programs that will run on potentially millions of machines around the world, this will help you feel confident that your algorithms and logic will work correctly, no matter the environment they run in.
Discrete math certainly has its numerical topics as well. The study of summations and their closed forms will help you optimize algorithms using big-O notation, and working with matrices and vectors will introduce you to the logic needed to handle large sets of ordered data in your programs.
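As a small sketch of why closed forms matter, the Python snippet below computes the summation 1 + 2 + ⋯ + n two ways: with a loop that does O(n) additions, and with the closed form n(n + 1)/2, which takes constant time regardless of n. The function names are illustrative, not from any particular textbook.

```python
def sum_loop(n):
    """Sum 1 + 2 + ... + n by iterating: O(n) additions."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """The same summation's closed form, n(n + 1)/2: O(1) work."""
    return n * (n + 1) // 2

# Both agree, but the closed form's cost does not grow with n.
print(sum_loop(100), sum_closed_form(100))  # 5050 5050
```

Recognizing that a loop computes a summation with a known closed form is exactly the kind of optimization big-O analysis makes visible.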
When companies such as Google and Yahoo were first developing their internet algorithms, they turned to graph theory, another major component of discrete math. Graphs model relationships between objects and sets of objects, and a great many algorithmic problems can be phrased in graph terms; once you have defined a problem as a graph, you can solve it through graph operations such as traversal, or by checking for connectivity and cycles.
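To make the traversal-and-connectivity idea concrete, here is a minimal sketch (the adjacency-list representation and function name are our own choices) that uses breadth-first traversal to test whether an undirected graph is connected:

```python
from collections import deque

def is_connected(graph):
    """Breadth-first traversal over an undirected graph given as an
    adjacency-list dict; the graph is connected if the traversal
    reaches every vertex."""
    if not graph:
        return True
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return len(seen) == len(graph)

# A triangle is connected; adding an isolated vertex breaks connectivity.
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(is_connected(triangle))               # True
print(is_connected({**triangle, "d": []}))  # False
```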
If you have taken a calculus class, or perhaps even an advanced algebra class, you may already have been exposed to some discrete math topics. Discrete math is a broad term, but it was defined as a way to group the topics in math most needed for computer science, and the more exposure a student has to them, the better they will be able to handle the challenges of software engineering.
Probability
Probability is the study of the odds that a particular event will occur. It is often grouped under the discrete math umbrella but has a strong presence outside the computing world as well. Probability plays a large role in predicting how a computer program will behave.
Consider an algorithm that sorts numbers into descending order. How fast it runs depends on multiple factors, and the way it is designed can produce very different running times on a data set’s best-case, worst-case, and average-case orderings. Probability will help you determine the odds that the data coming into the algorithm is in a best-case or worst-case order, and will guide you to the best solution.
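The following sketch illustrates how input ordering changes an algorithm's cost, using insertion sort (chosen here for simplicity; the source doesn't name a specific algorithm) instrumented to count comparisons while sorting into descending order:

```python
def insertion_sort_desc(data):
    """Insertion sort into descending order, counting comparisons
    to show how the input's ordering changes the work done."""
    items = list(data)
    comparisons = 0
    for i in range(1, len(items)):
        j = i
        while j > 0:
            comparisons += 1
            if items[j - 1] < items[j]:
                # Out of order for a descending sort: swap and keep going.
                items[j - 1], items[j] = items[j], items[j - 1]
                j -= 1
            else:
                break
    return items, comparisons

best, c_best = insertion_sort_desc([5, 4, 3, 2, 1])    # already descending
worst, c_worst = insertion_sort_desc([1, 2, 3, 4, 5])  # fully ascending
print(c_best, c_worst)  # 4 10
```

On five elements the best-case input needs only 4 comparisons while the worst case needs 10, and that gap widens quadratically as the data grows; knowing how likely each ordering is tells you which cost to expect in practice.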
A good software engineer questions everything when it comes to their design, and probability theory is an excellent tool in helping ask those questions. Probability is covered in high school statistics classes and there are many books and websites that explore it, often using dice rolls and decks of cards as examples.
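Borrowing the dice example mentioned above, here is a small sketch comparing an exact probability computed by counting outcomes with an estimate from simulation (the event chosen, rolling a sum of 7 with two dice, is our own illustration):

```python
import random
from fractions import Fraction

# Exact probability: count the ordered pairs (d1, d2) that sum to 7
# out of the 36 equally likely outcomes of two six-sided dice.
favorable = sum(1 for d1 in range(1, 7) for d2 in range(1, 7) if d1 + d2 == 7)
exact = Fraction(favorable, 36)  # 6/36 = 1/6

# Estimate the same probability by simulating many rolls.
random.seed(0)  # fixed seed so the run is repeatable
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
print(exact, hits / trials)
```

The simulated estimate lands close to the exact 1/6, which is the same habit of checking your reasoning against evidence that serves engineers well when predicting program behavior.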
Boolean and Binary Algebra
A computer’s state, at its lowest level, is a mass of what we call 1’s and 0’s, and yet computers can do so much for us in our daily lives. How do 1’s and 0’s build up into websites, word processors, and video games? A computer combines and compares these numbers through the use of Boolean algebra, a topic that can also help an engineer build effective logic in their own algorithms.
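One small illustration of how 1’s and 0’s build up into computation is a half adder, which combines two bits using only Boolean operations (the function name here is our own; XOR and AND are the standard gates involved):

```python
def half_adder(a, b):
    """Combine two bits with Boolean algebra: XOR (^) produces the
    sum bit, AND (&) produces the carry bit."""
    return a ^ b, a & b

# Truth table: every combination of two input bits.
for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(a, b, "->", s, carry)
```

Chain enough of these together and you can add numbers of any width, which is exactly how arithmetic emerges from simple Boolean logic.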
There is much value in studying how binary numbers interact inside the computer. What should a computer do if the result of an addition or multiplication cannot fit into a value of limited (often 32-bit) space? Did you know that 0.1 has an infinitely repeating representation in binary, just as 1/3 does in our everyday base-10 system? What does that mean for a programmer? Understanding how the computer’s limited precision models the infinite precision of the real world will give you a major leg up in the professional world.
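Both of those pitfalls can be seen in a few lines of Python (Python’s own integers are unbounded, so the 32-bit wraparound is simulated here with a mask):

```python
# 0.1 has no finite binary representation, so repeated addition drifts.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False
print(abs(total - 1.0) < 1e-9)   # True: compare floats with a tolerance

# Simulate a fixed 32-bit register: adding 1 to the largest signed
# 32-bit value (2**31 - 1) wraps rather than growing.
mask = (1 << 32) - 1
wrapped = ((2**31 - 1) + 1) & mask
print(wrapped)  # 2147483648: outside the signed 32-bit range entirely
```

The practical lessons: never test floating-point values for exact equality, and know the bounds of your integer types before arithmetic can exceed them.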
Recurrences
A computer program often spends most of its running time (roughly 80%) in a small core portion of its code (roughly 20%). This 80-20 rule, as it is often called, means certain functions get called again and again. A software engineer may use a function to get a result and then pass that result directly back into the same function; they may even see a function call itself! This concept is known as recursion, and its mathematical representation is called a recurrence.
The most common example of a recurrence is the Tower of Hanoi problem, in which the player must move a pile of discs from one post to another (try it online!). The number of steps it takes to move a given number of discs can be found directly by applying some simple algebra to the answer for one fewer disc. In a program, you might solve this by having the computer call the same function over and over again until it reaches a base case that returns a simple answer, and then build up the solution from there.
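A minimal sketch of both sides of that idea: the recurrence for the move count (moving n discs takes twice the moves for n − 1 discs, plus one) and the recursive procedure that actually lists the moves. The function names are our own.

```python
def hanoi_moves(n):
    """The Tower of Hanoi recurrence: M(n) = 2 * M(n - 1) + 1,
    with the base case M(0) = 0 (nothing to move)."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

def hanoi(n, source, target, spare, moves=None):
    """Recursive solution: move n - 1 discs out of the way, move the
    largest disc, then move the n - 1 discs on top of it."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)
    return moves

print(hanoi_moves(3), len(hanoi(3, "A", "C", "B")))  # 7 7
```

Unrolling the recurrence gives the closed form 2ⁿ − 1, which matches the number of moves the recursive procedure produces.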
Recursion is one of the first ‘hard’ problems a computer science student will run into, but understanding how mathematical recurrences work will help you comprehend this vital subject, and will ultimately be crucial in understanding a program’s flow. A discrete math book can give you exercises on working with summations and recurrences, though recursion itself is best taught through programming exercises, like in our summer session.
The MPCS teaches a prerequisite course in Math for Computer Science which covers a subset of the math that will be most useful for success in the program and for a career in technology. For a complete list of the topics covered in our Math for Computer Science class, please view the course website.