Space-Time Tradeoffs and Efficiency

By: Eric Suh

A lot of computer science is about efficiency. For instance, one frequently used mechanism for measuring the theoretical speed of algorithms is Big-O notation. What most people don't realize, however, is that there is often a tradeoff between speed and memory, or, as I like to call it, a tradeoff between space and time.

Think of space efficiency and time efficiency as the two opposite ends of a continuum. Every point between those ends represents a particular balance of time and space efficiency. In general, the more time efficiency you have, the less space efficiency you have, and vice versa. The picture below illustrates this in a simple fashion:

[Figure: The Space-Time Continuum]

Algorithms like Mergesort are exceedingly fast, running in O(n log n) time, but a typical implementation requires O(n) auxiliary space for merging. At the other end of the spectrum, Bubble Sort is exceedingly slow, taking O(n^2) time, but it sorts in place with only O(1) extra space.

Sometimes a few tweaks to an algorithm can change the amount of space or time traded off. Heap Sort, for instance, can be implemented in place, making it a very good balance between space and speed, as sketched below. On the other hand, it is also possible (and a bit easier) to implement Heap Sort using extra space for the heap. In that case, the heap itself takes up about the same space as the array, yet the sort still runs in the same order of magnitude as Mergesort (although it is typically slower in practice because of its higher constant factor). Heap Sort has the additional benefit of being quite consistent in its speed, so it is useful in programs where predictable timing is crucial (e.g., networking).
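
To make the in-place variant concrete, here is a minimal sketch of an in-place Heap Sort in C++ (the helper names siftDown and heapSort are my own, for illustration):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Restore the max-heap property for the subtree rooted at `root`,
    // considering only the first `size` elements of `a`.
    void siftDown(std::vector<int>& a, std::size_t root, std::size_t size) {
        while (2 * root + 1 < size) {
            std::size_t child = 2 * root + 1;           // left child
            if (child + 1 < size && a[child + 1] > a[child])
                ++child;                                // right child is larger
            if (a[root] >= a[child])
                return;                                 // heap property holds
            std::swap(a[root], a[child]);
            root = child;
        }
    }

    // Sort in place: build a max-heap, then repeatedly move the maximum
    // to the end of the shrinking heap. O(n log n) time, O(1) extra space.
    void heapSort(std::vector<int>& a) {
        std::size_t n = a.size();
        for (std::size_t i = n / 2; i-- > 0; )          // heapify the array
            siftDown(a, i, n);
        for (std::size_t end = n; end-- > 1; ) {
            std::swap(a[0], a[end]);                    // max goes to its final slot
            siftDown(a, 0, end);
        }
    }

Because the heap lives inside the array being sorted, no extra space is needed beyond a few variables; the easier variant would simply build the heap in a second array of the same size.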

Among search trees, 2-3 trees and 2-3-4 trees are faster and better balanced than an ordinary binary search tree, which can degenerate into a long chain. The price is space: each node reserves room for several keys and children, and much of that room usually sits unused.

The Red-Black tree is a compromise between space and time. It is essentially a binary tree representation of a 2-3-4 tree, so it takes up less space than the 2-3-4 tree (it has none of those mostly-empty fields), yet it retains the 2-3-4 tree's search efficiency!
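
A rough sketch of the two node layouts in C++ makes the space difference visible (these structs are illustrative, not a complete implementation):

    // A 2-3-4 node reserves room for up to 3 keys and 4 children;
    // a node holding only one key leaves most of these fields unused.
    struct Node234 {
        int      numKeys;       // how many of the keys are in use (1-3)
        int      keys[3];
        Node234* children[4];
    };

    // A red-black node stores exactly one key, two child pointers,
    // and a color bit: no reserved-but-empty fields.
    enum Color { RED, BLACK };
    struct RBNode {
        int     key;
        Color   color;
        RBNode* left;
        RBNode* right;
    };

Each group of red-linked nodes in the red-black tree plays the role of one multi-key 2-3-4 node, which is why the search behavior carries over.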

Thus, there has to be a balance between the space and time aspects of computing. Much computer science research these days is devoted to time efficiency, particularly the theoretical time barrier of NP-complete problems (like the Traveling Salesman Problem). Memory, meanwhile, is cheap, and storage space almost seems to be given away.

In networking and robotics, however, the necessity of a balance becomes apparent. Memory on these machines is often scarce, and so is processing power. Memory has to be used conservatively; otherwise a network server could stall until the operation finishes. Robots often have to function with the limited resources installed on board, and so they frequently lack the memory to trade away for extra computing speed. In these situations, a compromise must be made.

In networking, this issue shows up mainly in topology. Network topology describes how the physical connections of a network are laid out. Maybe you know the term "daisy chain": that is one kind of network topology. A daisy chain (in which the computers are connected in one line, each to at most two neighbors) uses the minimum number of cables, but the effective speed of the connections is lower, because a signal sent from the computer at one end to the computer at the other end must first pass through every computer in between. On the other hand, if every computer were connected directly to every other computer (a "fully connected mesh network"), signals would be fast, but you would use many more cables, and the network may become hard to maintain. So in this case, space corresponds to the number of connections in the network, while time refers to the speed at which signals travel across it.
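
To put rough numbers on that tradeoff, here is a small C++ sketch (n = 10 is just an example value) comparing cable counts and worst-case hop counts for the two topologies:

    #include <iostream>

    int main() {
        int n = 10;  // number of computers (example value)

        // Daisy chain: minimal cabling, but a signal may have to
        // cross every machine between sender and receiver.
        int chainCables = n - 1;
        int chainWorstHops = n - 1;

        // Fully connected mesh: every pair of machines is linked directly.
        int meshCables = n * (n - 1) / 2;
        int meshWorstHops = 1;

        std::cout << "Chain: " << chainCables << " cables, "
                  << chainWorstHops << " worst-case hops\n"
                  << "Mesh:  " << meshCables << " cables, "
                  << meshWorstHops << " worst-case hop\n";
    }

For ten machines, the chain needs 9 cables but up to 9 hops, while the mesh needs 45 cables but never more than 1 hop; the mesh's cable count grows quadratically with the number of machines.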

There are also cases in which space is explicitly sacrificed for time. For instance, the sine and cosine functions are often used in game programming, but they are slow to compute. Instead of computing them on the fly during the game, programmers often precompute the values of sine and cosine that they expect to come up and store them in a table. This adds space to the program, but it speeds things up by avoiding otherwise time-consuming calculations.
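
A minimal sketch of such a lookup table in C++ (the one-degree granularity and the names initSineTable and fastSin are example choices):

    #include <cmath>
    #include <cstddef>

    const std::size_t TABLE_SIZE = 360;   // one entry per degree
    double sineTable[TABLE_SIZE];

    // Pay the computation cost once, up front.
    void initSineTable() {
        const double PI = 3.14159265358979323846;
        for (std::size_t deg = 0; deg < TABLE_SIZE; ++deg)
            sineTable[deg] = std::sin(deg * PI / 180.0);
    }

    // Afterwards, each lookup costs one array access instead of a sin() call.
    double fastSin(int degrees) {
        int d = degrees % 360;
        if (d < 0) d += 360;              // handle negative angles
        return sineTable[d];
    }

The table costs 360 doubles of memory (about 3 KB), traded for skipping a relatively expensive std::sin call every time an angle is needed.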

Another example of the time-space tradeoff in real-world programming comes up when optimizing code. In C++, inline functions are substituted directly into the code during compilation (when possible) rather than compiled as function calls. The reason to use inline functions is that every function call carries a certain amount of overhead, and if a function is called often enough, say, inside a tight loop, eliminating that overhead can yield a real speedup. On the other hand, using too many inline functions can significantly bloat the resulting executable. In fact, space and run time can sometimes both grow: if the program becomes too large, it no longer fits well in the processor's instruction cache, and performance suffers along with size.
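
A minimal example of the mechanism (the function names are my own, for illustration):

    // With `inline`, the compiler may paste the body of square() directly
    // into the loop instead of generating a call, removing the per-call
    // overhead at the cost of repeating the code at every call site.
    inline int square(int x) {
        return x * x;
    }

    int sumOfSquares(int n) {
        int total = 0;
        for (int i = 1; i <= n; ++i)
            total += square(i);           // candidate for inline substitution
        return total;
    }

Note that `inline` is only a hint; the compiler ultimately decides whether the substitution actually happens.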

Thus, although it may seem a trivial issue, it is really quite important, even now, to be efficient in both space and time. Of course, the type of compromise made depends on the situation: for most programmers, time is of the essence, while where memory is scarce, space is naturally the issue. Maybe someday we'll find algorithms that are extremely efficient in both speed and memory, bridges across the Space-Time continuum.

Thanks to Hikmat A. El-Jaber for his suggestions.


Related articles

Measuring the efficiency of algorithms with Big-O Notation

Comparison of sorting algorithms usage of time and space

More about binary trees