Monday, March 26, 2007

The Power of Logs

When I was at school (in the 1960s!) we were always using logarithms (or “logs” for short). We used our “log tables” to help us multiply together unpleasant looking numbers that we couldn’t very easily work with using just pen, paper and brain. If we had two nasty looking numbers to multiply together we would look up their logarithms (to the base 10), add together these logarithms and then finally take the anti-logarithm of this sum of logs to give us our answer. Once you know what logarithms are you will quickly see how this method works. You can use a similar method to divide one number by another only in this case you subtract the logs instead of adding them.

Today’s generation of school children have no need of log tables for this kind of calculation. They can use a calculator (perhaps even one built into a mobile phone) or maybe a computer for unpleasant calculations. But students of economics are still well advised to find out something about logarithms, as they might come across them in a variety of situations: when using a logarithmic transformation of a power function equation, for example, or when meeting a logarithmic equation itself. In the first situation you are basically using the properties of logarithms to turn a multiplicative function of the original variables into a linear (additive) function of their logarithms. Back to that later, once we have had a look at the basic concept of a logarithm.

Put bluntly, the logarithm of a number is the power that a base must be raised to in order to get the number. You can see that we first have to think about what we mean by the base. In theory you could use any positive number as the base but in practice we usually work with one of two numbers: either the number 10 or the exponential constant known as e (its value is approximately 2.71828). Let’s come back to e later and stick with the base 10 for now.

From the definition we can see that the logarithm of the number 100, to the base 10, is 2. Why? Because if we raise the base 10 to the power 2, we get the number 100. Think of another simple example. What is the logarithm of the number 0.1 to the base 10? Answer? -1, because 10 raised to the power minus 1 = 1/10 = 0.1.
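These examples are easy to check on a computer. A minimal sketch using Python’s standard `math` module:

```python
import math

# The log of a number, base 10, is the power 10 must be raised to
# in order to give that number.
print(math.log10(100))   # 2.0, because 10**2 = 100
print(math.log10(0.1))   # -1.0, because 10**-1 = 0.1
print(math.log10(1))     # 0.0, because 10**0 = 1
```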

Notice that the log of 1 to the base 10 is 0. Because 10 to the power 0 = 1.
In fact the log of 1 is zero whatever base we work with. Any positive number raised to the power zero is 1. But we are getting ahead of ourselves. Let’s stay with the base 10 for now.

Let’s see how logs to the base 10 were used in log tables. First someone has to produce a set of tables solving the equation 10^(log) = number for lots of different numbers. For example, the logarithm of 2 to the base 10 is (to four significant figures) 0.3010. [I looked up this logarithm so many times in my youth that it is imprinted in my brain – I don’t even need to refer to a set of log tables to get the result!] Similarly the log of 3 to the base 10 is 0.4771. So if I add these two logarithms together I should get the logarithm of the number 6 – because the log of a product is the sum of the logarithms. Now 0.3010 + 0.4771 = 0.7781. Actually when I look up the log of 6 to the base 10 I get 0.7782. (Using only four significant figure approximations has caused a small approximation error.) Rather than searching through the log tables to find the number that has a log of 0.7781, we were able to use the anti-logarithm table, in which the results were set out in the other direction. That is, the tables were constructed to give you the number corresponding to any particular logarithm that you had calculated. Doing it this way round I would find that the anti-log of 0.7781 comes out as 5.999 – another approximation error due to rounding. Before you start smiling at this too much, remember that even using a calculator or a computer there may be some rounding involved – although you will get much more than four significant figures.
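The whole log-table procedure – look up the logs, add them, take the anti-log – can be replayed in a few lines of Python, rounding to four figures just as the old tables did:

```python
import math

# Multiply 2 by 3 the log-table way: add the logs, then take the anti-log.
log2 = round(math.log10(2), 4)   # 0.3010, as in the four-figure tables
log3 = round(math.log10(3), 4)   # 0.4771
total = log2 + log3              # 0.7781
product = 10 ** total            # the "anti-log" of the sum
print(round(product, 3))         # 5.999 - the same rounding error the tables gave
```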

The Scottish mathematician John Napier (1550-1617) is generally credited with the invention or discovery of logarithms – although apparently they were known about in eighth century India (see Smoller (2001) or Alfeld (1997) for more details).

Let’s think for a minute about how this all works. Take any base b (>0) .
If we know that b^u = x and b^v = y then simple algebra tells us that xy = b^(u+v). When we multiply two separate powers of b together we just add these powers. The insight in developing logarithms was to see that we could turn hard calculations (multiplication and division) into easier calculations (addition and subtraction) by providing a set of u and v values to go with the set of x and y values that could then be reused time and time again.
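That index rule is easy to verify numerically – a quick sketch, with u and v chosen to be the four-figure logs of 2 and 3 from earlier:

```python
# For any base b > 0: b**u * b**v == b**(u + v),
# so multiplying numbers corresponds to adding their logs.
b, u, v = 10, 0.3010, 0.4771   # u ~ log10(2), v ~ log10(3)
x = b ** u                     # approximately 2
y = b ** v                     # approximately 3
print(abs(x * y - b ** (u + v)) < 1e-9)   # True
```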

In analytical (as opposed to computational) work there may be advantages in working with logs to the base e. This is because the exponential function y = e^x has the special property that its derivative at any point is equal to the function itself – that is dy/dx = y for all values of x. If you plot the graph of the function the slope of the curve is always the same as the function itself. This means that the derivative of the inverse function or logarithmic function y = ln x will be 1/x [Logs to the base e are written as ln x – the ln is short for “natural” logarithm]. This is very convenient. Logarithmic functions can be useful themselves in economics as they have the property that as x goes up y goes up but at a declining rate – something that we expect to get in a whole range of economic relationships such as production functions and utility functions.
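Both derivative properties can be checked numerically with a small finite-difference sketch (the step size h and the test points are arbitrary choices for illustration):

```python
import math

h = 1e-6  # small step for a forward-difference approximation

# d/dx e**x equals e**x itself (checked numerically at x = 1):
x = 1.0
slope = (math.exp(x + h) - math.exp(x)) / h
print(abs(slope - math.exp(x)) < 1e-4)   # True

# d/dx ln(x) equals 1/x (checked numerically at x = 2):
x = 2.0
slope = (math.log(x + h) - math.log(x)) / h
print(abs(slope - 1 / x) < 1e-4)         # True
```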

But for us today it is the logarithmic transformation that is of most interest.

Suppose that we think that two variables are related by a power function equation – say Q = AP^b (maybe here Q is quantity demanded, P is price, A is just a constant of proportionality whose value will depend on the units of measurement for P and Q, and b is negative so as to ensure an inverse relation between the variables). If this is true then the graph showing the relationship between P and Q is a downward sloping (non-linear) curve. In the special case where b = -1 we have a rectangular hyperbola with the graph totally symmetrical around the 45 degree line, but with other values of b the graph will approach one of the axes more steeply than the other.

But if we plot the logarithm of Q against the logarithm of P we will see a downward sloping straight line with gradient b (remember b is negative). This is because the first rule of logarithms is that log(AB) = logA + logB (sticking to the same base). So log Q = logA + log(P^b). Now from the second rule of logarithms log(P^b) = b logP. Now logA is just another constant if A is a constant, so we have found that logQ = a constant plus b times log P. The power function is linear in the logarithms – or as we sometimes say for short – it is log-linear.
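A short sketch confirms the algebra, using made-up values A = 100 and b = -2 purely for illustration:

```python
import math

# Check the log-linear form of a hypothetical demand curve Q = A * P**b.
A, b = 100.0, -2.0
for P in [1.0, 2.0, 4.0, 8.0]:
    Q = A * P ** b
    lhs = math.log10(Q)                          # log Q
    rhs = math.log10(A) + b * math.log10(P)      # log A + b log P
    assert abs(lhs - rhs) < 1e-9
print("log Q = log A + b log P holds at every point")
```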

In regression analysis of course we don’t expect all our observations to fit exactly on a straight line (or a curve), but if, when we plot the logs of one variable against the logs of another, we get points clustered around a straight line, then it suggests that the underlying variables are linked by a power function equation – and the power in that equation can be estimated from the slope of the line linking the logs.
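A minimal sketch of that estimation idea, using exact (noise-free) data from a hypothetical power law with made-up values A = 50 and b = -1.5, and the ordinary least-squares formula for the slope:

```python
import math

# Generate points from a hypothetical power law Q = A * P**b, then recover b
# as the slope of a least-squares line fitted to (log P, log Q).
A, b = 50.0, -1.5
prices = [1.0, 2.0, 3.0, 5.0, 8.0]
logP = [math.log10(p) for p in prices]
logQ = [math.log10(A * p ** b) for p in prices]

n = len(prices)
mean_x = sum(logP) / n
mean_y = sum(logQ) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(logP, logQ)) \
        / sum((x - mean_x) ** 2 for x in logP)
print(round(slope, 6))   # -1.5, recovering the exponent b exactly
```

With real data the points would only cluster around the line, and the fitted slope would be an estimate of b rather than the exact value.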

References
[1] Smoller, Laura, “John Napier and logarithms”, UALR, March 2001
[2] Alfeld, Peter, “What on earth is a logarithm?”, University of Utah, 1997
