
A Little-Known Mathematical Property of a Straight Line: Strange, but true, there is one!

Euclid's Elements, Book 1: Definitions

Definition 1. A point is that which has no part.
Definition 2. A line is breadth-less length.
Definition 3. The ends of a line are points.
Definition 4. A straight line is a line which lies evenly with the points on itself.

Page 1 of 39

Table of Contents
No.  Topic                                                               Page
1.   Summary                                                             3
2.   Introduction                                                        4
3.   Three types of straight lines                                       5
4.   Three examples of use of y/x ratios and underlying linear law       8
     4.1 Profits-Revenues data for a company                             8
     4.2 US Traffic Fatality data                                        12
     4.3 The Ohio Unemployment Puzzle                                    15
5.   Concluding Remarks: Einstein's work function                        19
6.   Appendix I: The Olympic Long Jump Record, Work Function             23
7.   Appendix II: Derivative for the Generalized Planck law              31
8.   Appendix III: The US teen pregnancy problem                         32
9.   Appendix IV: Bibliography list of related articles                  35

Figure 3 in Section 3 provides a graphical illustration of how the ratio y/x can either increase or decrease as x increases, even on a PERFECT straight line, if the straight line does NOT pass through the origin. Hence, using the ratio y/x to determine a rate can yield misleading conclusions. Everyone should know about this important mathematical property of a straight line, and all of its implications, before graduating from middle school.


1. Summary
Based on discussions commonly encountered in many articles, some written even by those holding advanced degrees, it appears that a very important mathematical property of a straight line, and its implications, have NOT been widely appreciated. Very briefly, the ratio y/x is a constant at all points on a straight line if, and only if, the straight line passes through the origin. More generally, if the straight line y = hx + c does NOT pass through the origin, the nonzero intercept c means that the ratio y/x = m = h + (c/x) will either increase or decrease as x increases, even on a PERFECT straight line. Hence, the common practice of using simple ratios (converted to a percentage), or so-called "rates" (a misleading term for a simple y/x ratio), to make comparisons can create a lot of confusion. Three examples of the use of such ratios, while overlooking the underlying linear law, are discussed here. There are literally hundreds and thousands of such ratios, or rates, used in economics, business, finance, the management sciences, and the social and political sciences to quantify empirical observations: profit margin, earnings per share, labor productivity, the unemployment rate, the traffic fatality rate, the teen pregnancy rate, cancer rates, medical error rates, suicide rates, and the list goes on.

The simple linear law y = hx + c = h(x - x0) can be compared to Einstein's photoelectric law. The nonzero intercept c, emphasized here, is exactly analogous to the work function W in Einstein's law, K = E - W = hf - W = h(f - f0). Here E = hf is the elementary energy quantum introduced by Planck when he laid the foundations of quantum physics, and K is the maximum kinetic energy of the electron produced when a photon (a particle of light with the energy E = hf) strikes the surface of a metal and ejects an electron. In this interaction, some energy W, which Einstein refers to as the work function of the metal, must be given up.

The cut-off frequency f0 = W/h is exactly analogous to the cut-off value, x0 = -c/h, of the independent variable, or the stimulus function, x before a response y is observed. It appears that Einstein's idea of a work function (and Planck's ideas) can thus be generalized and extended well beyond physics to explain many other empirical observations where a simple linear law is often observed.


2. Introduction
As we all know, the straight line is the shortest distance between two points. Any two points on an x-y graph can always be joined by a straight line. Of course, we can join them using a fancy curve, or even a squiggle, but it is the straight line that we prefer over all else. What else is there to know about a straight line? Can anyone actually write an article about some little known mathematical property of a straight line? Well then, let us take a look at the graph in Figure 1.

[Plot of dependent variable y vs. independent variable x, showing Type I (slope h > 0, intercept c < 0), Type II (slope h > 0, intercept c > 0), and Type III (slope h < 0, intercept c > 0) straight lines.]
Figure 1: Three types of straight lines that do NOT pass through the origin.

This graph illustrates three types of straight lines. Actually, there is a fourth type of straight line, which can be treated as a special case of the three lines here. This fourth straight line is the line that we tend to implicitly assume when we make a number of common calculations based on ratios or percentages to determine what we call rates. This is illustrated in Figure 2. What is the difference between these four straight lines?

[Plot of dependent variable y vs. independent variable x, showing the line y = mx = 0.5x, with y/x = m = 0.50 at every point.]
Figure 2: The straight line y = mx passing through the origin. The ratio y/x = m = constant at all points along this line.

3. Three Types of Straight Lines

Let us first consider the straight line in Figure 2. This straight line passes through the origin (0, 0). The three straight lines in Figure 1 do NOT pass through the origin. The mathematical equation of the straight line passing through the origin is y = mx where the constant m is the slope of the line. The ratio y/x = m = constant. As x increases, y increases (assuming m > 0), but the ratio y/x = m = constant at all points along the line. Also, a doubling of x, from 10 to 20, or 20 to 40, leads to a doubling of y. This can be confirmed by reading the values off the graph. Likewise, a tripling of x, from 10 to 30, leads to a tripling of y, and so on.

This is a very important property of a straight line and one that we take for granted. Does it apply to every straight line? NO. Actually, this property is ONLY exhibited by a straight line passing through the origin (0, 0).
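This contrast is easy to check numerically. The minimal sketch below uses the illustrative slope m = 0.5 from Figure 2, and an assumed intercept of 2 for the offset line; doubling x doubles y only for the line through the origin.

```python
# Compare y/x along a line through the origin with a line that is offset from it.
def through_origin(x):
    return 0.5 * x            # y = mx, with m = 0.5

def offset_line(x):
    return 0.5 * x + 2        # y = hx + c, with h = 0.5, c = 2 (assumed)

# Line through the origin: doubling x doubles y, and y/x is constant.
assert through_origin(20) == 2 * through_origin(10)
assert through_origin(10) / 10 == through_origin(40) / 40 == 0.5

# Offset line: doubling x does NOT double y, and y/x varies from point to point.
assert offset_line(20) != 2 * offset_line(10)
assert offset_line(10) / 10 != offset_line(40) / 40
```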
[Plot of the ratio y/x vs. the independent variable x for three lines: y = 0.5x + 2 (y/x = 0.5 + (2/x)), y = 0.5x (y/x = 0.5), and y = 0.5x - 2 (y/x = 0.5 - (2/x)).]
Figure 3a: The ratio y/x is NOT a constant and varies from point to point on a straight line, if the straight line does NOT pass through the origin.

The ratio y/x is a constant at all points on a straight line only if it passes through the origin. If the straight line does not pass through the origin, the general equation of the line is y = hx + c. Hence, the ratio y/x = m = h + (c/x) can either increase or decrease as x increases, depending on the numerical values of the slope h and the intercept c. This gives rise to the three types of straight lines illustrated in Figure 1. The graphs in Figures 3a and 3b illustrate the varying nature of the ratio y/x as x increases, even on a PERFECT straight line. We consider three straight lines: y = 0.5x, with a zero intercept c = 0 (Figure 2); y = 0.5x - 2, the Type I line in Figure 1, with a negative intercept c = -2; and y = 0.5x + 2, the Type II line in Figure 1, with a positive intercept c = +2. With reference to Figure 3a, the upper curve A is the graph of y/x = m = 0.5 + (2/x) for the case c = +2. The ratio y/x keeps decreasing as x increases and becomes equal to the slope h = 0.5 for very large values of x. The graph of y/x is a falling hyperbola. The horizontal line B is for the case of zero intercept; the ratio y/x = m = h = 0.5 = constant. The lower curve C is the graph of y/x = m = 0.5 - (2/x) for the case c = -2. The ratio y/x keeps increasing and becomes equal to the slope h = 0.5 for very large x. The graph of y/x is a rising hyperbola.
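The behavior of curves A, B, and C can be verified numerically. This small sketch uses the same h and c values as Figure 3a and checks that both hyperbolas approach the slope h = 0.5 asymptotically.

```python
# Ratio y/x = h + c/x for the three lines of Figure 3a (h = 0.5; c = +2, 0, -2).
def ratio(x, h=0.5, c=0.0):
    return h + c / x

# Curve A (c = +2): the ratio falls toward h as x grows.
assert ratio(4, c=2) == 1.0
assert ratio(4, c=2) > ratio(40, c=2) > 0.5

# Curve C (c = -2): the ratio rises toward h.
assert ratio(4, c=-2) == 0.0
assert ratio(4, c=-2) < ratio(40, c=-2) < 0.5

# Both curves approach the slope h = 0.5 asymptotically (curve B is constant).
assert abs(ratio(1e6, c=2) - 0.5) < 1e-5
assert abs(ratio(1e6, c=-2) - 0.5) < 1e-5
```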

[Plot of the ratio y/x vs. the independent variable x for the Type III line y = -0.5x + 20, with y/x = -0.5 + (20/x).]
Figure 3b: Variation of the ratio y/x on the Type III straight line, y = -0.5x + 20. As x increases, the ratio y/x = -0.5 + (20/x) decreases continuously and becomes zero when x = 40. For x > 40, the ratio y/x becomes negative and approaches the limiting value of h = -0.5, the slope of the line, at very large values of x. The graph is a falling hyperbola. (Mathematically speaking, y/x → h as x → ∞, for all three cases. This is called the asymptotic value of the ratio y/x for very large x.)
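A quick numerical check of the Type III line of Figure 3b (same h and c as in the figure) confirms the sign change of the ratio at the x-intercept x0 = -c/h = 40.

```python
# Type III line of Figure 3b: y = hx + c with h = -0.5, c = 20.
h, c = -0.5, 20.0

x0 = -c / h                      # x-intercept, where y (and hence y/x) crosses zero
assert x0 == 40.0

ratio = lambda x: h + c / x      # the ratio y/x as a function of x
assert ratio(20) == 0.5          # positive before the crossing
assert ratio(40) == 0.0          # zero exactly at x = x0
assert ratio(100) == -0.3        # negative beyond it, tending toward h = -0.5
```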


Quite surprisingly, this rather important and fundamental mathematical property of a straight line (viz., the variation of y/x with a nonzero intercept, as opposed to the absolute constancy of the ratio y/x for the straight line passing through the origin) does NOT seem to have been widely appreciated, based on: a) conversations I have had with many who hold advanced degrees, b) a review of several websites that discuss the basic mathematical properties of a straight line, and c) several news items that have caught my attention over the years (a couple of examples will be provided shortly). What are the implications of this fundamental property of a straight line?

4. Three examples of use of y/x ratios and the overlooked underlying Linear Law
We often use simple ratios y/x to make sense of many different (and very complex) situations such as financial performance of a company, the unemployment rate, the teenage pregnancy rate, traffic-related fatality rates, and so on.

4.1 Profits-Revenues data for a company

The first example that we will consider is the profits and revenues data for a hypothetical company, for three consecutive years; see Table 1. For this company, we find that as revenues x increase, the profits y increase, and the profit margins y/x also increase. On an x-y graph, these three data points will lie on three different rays passing through the origin (0, 0). The higher the profit margin, y/x, the higher the slope m of the ray joining the individual (x, y) pair back to the origin (0, 0). This is illustrated in Figure 4.


Table 1: Profits-Revenues data for a Hypothetical Company

Year   Revenues, x   Profits, y   Profit margin %, 100(y/x)   Change Δx   Change Δy   Slope h = Δy/Δx
2010        60            6               10%                     -           -            -
2011        80           16               20%                    20          10          0.5
2012       120           36               30%                    40          20          0.5


[Plot of profits y vs. revenues x, showing the Type I line y = hx + c = h(x - x0), i.e., y = 0.5x - 24 = 0.5(x - 48), together with the rays y = mx = 0.1x, y = mx = 0.2x, and y = mx = 0.3x through the origin.]

Figure 4a: Graphical representation of the profits-revenues data for a hypothetical company. As revenues x increase, the profits y increase, and the profit margins, y/x, also increase. The three data points lie on the rays joining each (x, y) pair back to the origin (0, 0). The higher the slope m of the ray, the higher the profit margin, y/x. The opposite behavior of decreasing profit margins with increasing profits and revenues is illustrated in Figure 4b (for a different company).


However, a careful examination of the graph reveals that the three data points lie on a Type I line envisioned in Figure 1, with the equation y = 0.5x - 24. The slope h = 0.50 > 0 and the intercept c = -24 < 0. This means the profit margin, the ratio y/x = m = 0.5 - (24/x), will keep increasing as revenues x increase. The slope h is fixed by considering the changes in revenues and profits (see Table 1) between consecutive years: h = Δy/Δx = 10/20 = 20/40 = 0.50. Once h is fixed, the intercept c = (y - hx) is readily determined, since the Type I straight line passes through all three points and we know the values of h, x, and y. For example, c = 6 - (0.50 × 60) = -24 for the pair (60, 6). The same value is obtained with all three (x, y) pairs. Only two (x, y) pairs are needed to fix the slope h and intercept c of a straight line. In this case, the third pair also lies on the extension of exactly the same straight line.
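The arithmetic above is easy to reproduce in code. This minimal sketch uses the Table 1 pairs: any two points fix the line, and the third confirms it.

```python
# Profits-revenues pairs from Table 1: (revenues x, profits y).
points = [(60, 6), (80, 16), (120, 36)]

(x1, y1), (x2, y2) = points[0], points[1]
h = (y2 - y1) / (x2 - x1)        # slope from the changes between years
c = y1 - h * x1                  # intercept from any single point

assert h == 0.5 and c == -24.0
assert points[2][1] == h * points[2][0] + c   # third point lies on the same line

x0 = -c / h                      # cut-off (breakeven) revenue
assert x0 == 48.0                # no profit until revenues exceed 48
```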

[Plot of profits y vs. revenues x, showing the Type II line y = hx + c = h(x - x0), i.e., y = 0.1x + 2 = 0.1(x + 20), through the points (5, 2.5), (10, 3), and (20, 4), together with the rays y = mx = 0.5x, y = mx = 0.3x, and y = mx = 0.2x.]

Figure 4b: The profits-revenues diagram for another hypothetical company, with profits increasing with increasing revenues but profit margins decreasing as revenues increase. Why this difference? The red squares are the revenues and profits data (5, 2.5), (10, 3), and (20, 4) corresponding to values given in Table 1.

The three rays, shown as dashed lines in Figure 4a, are, of course, imaginary. They serve merely as an aid to understanding. There are no actual data points for profits and revenues that fall along these rays. The company is actually operating along the Type I line. What is the significance of the slope h, as opposed to the profit margin y/x = m? As seen already, the data reveal that when revenues increase by an amount Δx, the profits increase by an amount Δy = hΔx. The strict proportionality between Δx and Δy will be maintained as long as the profits and revenues data fall exactly (or approximately) along this line. Notice also that the Type I line makes a finite positive intercept x0 = -c/h = 48 on the x-axis, or the revenues axis. The profits y go to zero when x = x0. For x < x0, there are no profits. Thus x0 is the minimum, or cut-off, revenue that must be exceeded before the company can report a profit. Once x > x0, the additional revenues are converted into profits, which then increase at the fixed rate h, the slope of the Type I line. Indeed, exactly the scenario described here is observed when we analyze the financial data for several companies, big and small. Apple, Google, and Microsoft are some examples of companies that reveal this Type I behavior. For such companies, both profits and profit margins increase with increasing revenues. All three types of behavior conceived in Figure 1 (Type I, Type II, and Type III) are observed in the real world. In the Type II mode, profits increase as revenues increase but, unlike the Type I mode, profit margins decrease with increasing revenues, as illustrated by the three rays with decreasing slopes in Figure 4b. These points have been discussed in several recent articles posted on this website; the reader is referred to the bibliography list at the end of this article. Specifically, the significance of x0 and its relation to the classical breakeven model for the profitability of a company has been discussed.

Also, in the real world, the transitions from Type I to Type II and Type III behavior are to be viewed as local straight-line segments of a smooth profits-revenues curve with a maximum point. The rising portion of the curve is the Type I segment. The falling portion of the curve is the Type III segment. These are often joined by the transitional Type II segment. Sometimes, as in the case of Microsoft, a company makes a transition from Type I to Type II and then back again to Type I behavior.

The implications of a maximum point on the profits-revenues graph (seen with several companies, Ford Motor Company, General Motors, Yahoo, Verizon Communications, Kroger, Southwest Airlines, Air Tran, RIM) should also be carefully considered. The appearance of the maximum point and the transition to Type III behavior often seems to precede a critical negative event for the company, such as a bankruptcy (General Motors) or a merger (Air Tran with Southwest). Ford Motor Company and Southwest Airlines are examples of companies that are now operating past their maximum points. It remains to be seen what the future holds for these two, thus far, highly successful companies. This brings us to our second example.

4.2 The US Traffic fatality data

This example illustrates the beneficial effect of a sustained Type III behavior. Many years ago, a front-page article in The Detroit News (January 4, 2000) stated, "Motorists speed, but fewer die." This article appeared in the years following the repeal of the 55 mph speed limit by Congress, in 1995. The National Maximum Speed Limit (NMSL) took effect after the energy crisis following the Arab oil embargo in October 1973, when the major oil producers of the Middle East, OPEC, refused to sell oil to the USA because of its support for Israel during the Yom Kippur war. The lower speed limits were set by an Act of Congress, in an effort to conserve energy and improve the fuel economy of cars and trucks (by reducing the aerodynamic drag, which becomes more important at higher speeds). Following the repeal of the NMSL, the interstate highways around Metro Detroit had increased the speed limit to 70 mph. The writer of this column (Tom Greenwood) was citing the traffic fatality rates for 1966, 1996, and 1998 to show that the higher speeds did not seem to have increased fatalities. In 1966 (i.e., before the 55 mph speed limit), there were 5.5 deaths per 100 million Vehicle Miles Traveled (VMT) but in 1998 (after the repeal) there were only 1.6 deaths per 100 million VMT. The


historically established negative trend (from 1976, after the NMSL went into effect) continued; see further discussion in Refs. [1] and [2]. The traffic fatality rate is the ratio y/x, where the numerator y is the number of traffic-related fatalities and the denominator x is the VMT (in units of 100 million miles). The x-y graph can be shown to reveal a Type III straight line. The linear regression equation y = -5.601x + 56,075, where x is in billions of miles, can be deduced from the (x, y) data given for the three years 1966, 1996, and 1998. Because of the negative slope, as the VMT, x, increased between 1966 and 1998, the number of fatalities y decreased.

Table 2: Traffic fatality data quoted in The Detroit News

Year   Vehicle Miles Traveled (VMT), x, in billions   Traffic-related fatalities, y   Fatality rate, y/x (x in 100 millions)
1966                     926                                    50,984                             5.51
1996                   2,486                                    42,065                             1.69
1998                   2,632                                    41,471                             1.58
Source:

A graphical representation of this data, see Figure 5, reveals a Type III behavior. The slope of each ray joining an (x, y) pair back to the origin is, by definition, the fatality rate. The slope y/x = m has been decreasing year after year, since the data follow a Type III line with a negative slope h. The equation of the line joining the 1966 and 1998 data is y = -0.558x + 56,148, where x is in units of 100 million miles. A complete review of the historical data on traffic-related fatalities, going back to the earliest days of the automobile, may be found in Ref. [1]. Consider now the implications of this Type III behavior. If we extrapolate backwards to lower VMT, this leads us to the ridiculous conclusion that when VMT goes to zero the number of traffic-related fatalities will increase to the maximum value given by the intercept c = 56,148 fatalities. This is clearly impossible and means that the Type III behavior is only applicable over a limited range of VMT. Indeed, at lower VMT, it can be shown that there is a transition to Type I (and also Type II) behavior. However, Type I behavior is NOT desirable in the problem studied here, since Type I behavior means increasing fatalities with increasing VMT. Indeed, this was observed historically in the US.
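The two-point construction for the 1966 and 1998 data can be verified with a short script. This is a sketch; x is converted to units of 100 million miles, following Table 2.

```python
# Traffic data from Table 2, with VMT x converted to units of 100 million miles.
x1966, y1966 = 9_260, 50_984     # 926 billion miles; fatalities in 1966
x1998, y1998 = 26_320, 41_471    # 2,632 billion miles; fatalities in 1998

h = (y1998 - y1966) / (x1998 - x1966)   # slope of the Type III line
c = y1966 - h * x1966                   # intercept (extrapolated fatalities at x = 0)

assert round(h, 3) == -0.558
assert round(c) == 56148
```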

[Plot of traffic fatalities y vs. Vehicle Miles Traveled (VMT) x, in 100 millions, showing the Type III line y = -0.558x + 56,148 and the rays with slopes y/x = 5.51 (1966) and y/x = 1.58 (1998).]

Figure 5: Decreasing highway fatalities with increasing Vehicle Miles Traveled (VMT). Only three data points are considered here. The complete historical data has been analyzed in Ref. [1] cited in the text.

Does Speed Kill? The Forgotten US Highway Deaths of the 1950s and 1960s

The increase in US highway deaths, year after year, in the 1950s and the 1960s had reached epidemic proportions. Back then, there was no speed limit in many states, and vehicles did not have safety features such as seatbelts and airbags, which were resisted by US automakers citing higher costs and even questionable effectiveness. There was no Toyota or Honda or Hyundai back then, only Ford,

GM, and Chrysler. And a young Ralph Nader, who would become a leading consumer safety advocate, published his famous book Unsafe at Any Speed in November 1965, condemning the US auto industry for neglecting safety issues. All of this forced the US Congress to hold highly publicized hearings in 1966 to improve the safety of cars. The Highway Safety Act was signed into law by President Lyndon Johnson in September 1966. Nonetheless, highway deaths continued to increase between 1966 and 1973, and the transition from Type I to Type III (with a short intervening Type II) behavior actually occurred only after the NMSL went into effect on January 1, 1974. Suddenly, everyone, all over the US, even in Texas and Montana, had to slow down to 55 mph. Ultimately, those who were lobbying in favor of repealing the NMSL prevailed! The fatality rate y/x has continued to decrease and was down to 1.13 in 2009. Will the carnage begin again with the repeal of the NMSL and the progressive upping of the speed limit by states like Texas? Technology has fundamentally changed and cars are safer. But when accidents do occur, especially at high speed, there is no escaping the laws of physics. Speed kills for a reason. The kinetic energy K of the vehicle moving at high speed must be absorbed in the crash: K = (1/2)mv², where m is the mass of the vehicle and v its speed. Unfortunately, in a high-speed crash (K increases as the square of the speed), the occupant is forced to become the energy absorber. And highway fatality studies indicate that SEAT BELTS are still not being used by many drivers. This brings us to our third example.
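The quadratic dependence on speed is easy to quantify. A small sketch: the 1500 kg mass is an assumed, illustrative value, and it cancels in the ratio anyway.

```python
# Kinetic energy K = (1/2) m v^2 scales with the square of the speed.
def kinetic_energy(mass, speed):
    return 0.5 * mass * speed ** 2

# Ratio of crash energies at 70 mph vs. 55 mph (the mass cancels out).
energy_ratio = kinetic_energy(1500, 70) / kinetic_energy(1500, 55)
assert round(energy_ratio, 2) == 1.62   # roughly 62% more energy to absorb
```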

4.3 The Ohio Unemployment Puzzle

Years ago, a Wall Street Journal article (June 4, 2001) dealing with the grim unemployment situation in several counties around Cleveland, Ohio, posed the puzzling question: "If the unemployment rate is going down, why are so many people still unemployed?" Or something very close to this. (I have not yet been able to find the exact reference or link.)

With a little reflection it becomes obvious that we are dealing with a Type II behavior. Indeed, this is confirmed by the unemployment data for several Ohio counties, especially the largest counties (with the highest numbers of unemployed), taken for the month of January 2000 (before the above WSJ article was written).

Table 3: Unemployment data for several large Ohio counties for January 2000

County        Labor force, x   Unemployed, y   Unemployment rate, y/x (%)
Wayne              58,600           2,300              3.92
Lorain            149,200           9,500              6.37
Lucas             225,300          11,300              5.02
Montgomery        277,000          12,400              4.48
Summit            277,800          14,100              5.08
Hamilton          424,000          17,600              4.15
Franklin          585,400          20,500              3.50
Cuyahoga          681,200          29,700              4.36
Source: The data compiled in Table 3 was obtained from the website of the Bureau of Labor Statistics (see link given above).

If we consider the data for Lorain, Summit, and Cuyahoga counties, we find the pattern mentioned in the 2001 WSJ article. Cuyahoga County had the lowest unemployment rate but the highest number of unemployed! Lorain County, with the lowest number of unemployed, had the highest unemployment rate. Amazing, indeed! In Ohio, the lower the unemployment rate, the higher the number of unemployed! How can this data be rationalized? Indeed, with some reflection, it becomes obvious that this is a classic example of the Type II behavior envisioned in Figure 1 and mentioned also (Figure 4b) while discussing the profits-revenues data for a company. The idea of decreasing profit margins with increasing profits and revenues does NOT seem odd enough to merit serious discussion. However, when the same variables x and y are associated with the unemployment problem, the Type II behavior seems rather odd and worthy of serious discussion. The same pattern (of increasing unemployed with decreasing unemployment rates) is confirmed by the data plotted in Figure 6 for 19 Ohio counties, covering the entire range of x and y values; see also Figure 7.
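The counterintuitive pattern for Lorain, Summit, and Cuyahoga can be confirmed directly from the Table 3 figures, as in this minimal sketch:

```python
# Labor force x and unemployed y (January 2000) for three large Ohio counties.
counties = {
    "Lorain":   (149_200, 9_500),
    "Summit":   (277_800, 14_100),
    "Cuyahoga": (681_200, 29_700),
}

rate = {name: 100 * y / x for name, (x, y) in counties.items()}

# Cuyahoga has the LOWEST unemployment rate yet the HIGHEST number unemployed;
# Lorain has the highest rate yet the fewest unemployed.
assert min(rate, key=rate.get) == "Cuyahoga"
assert max(rate, key=rate.get) == "Lorain"
assert max(counties, key=lambda n: counties[n][1]) == "Cuyahoga"
assert min(counties, key=lambda n: counties[n][1]) == "Lorain"
```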


[Plot of unemployed y (in 000s) vs. labor force x (in 000s) for 19 Ohio counties, showing the Type II line y = 0.038x + 3.835.]

Figure 6: Unemployment data for 19 Ohio counties for the month of January 2000. The straight line joining the counties with the highest levels of unemployment is a Type II line. Hence, although the unemployment rate is decreasing (with increasing labor force), the number of unemployed keeps increasing. The equation y = 0.038x + 3.835 is deduced from the data for Lorain and Cuyahoga counties - the two extremes above. This principle of connecting the extreme points in a data set is often a good way to fix the slope h and intercept c, quickly. (Linear regression can be used but it is not attempted here since the purpose here is to understand the general principles.) The data for Summit County can be seen to fall practically on this Type II line. Franklin and Hamilton counties (relatively more affluent) have a lower level of unemployed. Notice that the Hamilton and Montgomery county data seem to follow a roughly parallel line with a small change in the intercept.
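The "connect the extremes" recipe described in the caption can be checked numerically. A sketch, with x and y in thousands as in Figure 6:

```python
# Extreme points from Table 3, in thousands: Lorain and Cuyahoga counties.
x_lo, y_lo = 149.2, 9.5      # Lorain
x_hi, y_hi = 681.2, 29.7     # Cuyahoga

h = (y_hi - y_lo) / (x_hi - x_lo)    # slope of the line through the extremes
c = y_lo - h * x_lo                  # intercept
assert round(h, 3) == 0.038 and round(c, 3) == 3.835

# Summit County (277.8, 14.1) falls practically on this Type II line.
predicted = h * 277.8 + c
assert abs(predicted - 14.1) < 0.5   # within roughly 500 unemployed
```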





[Plot of unemployed y vs. unemployment rate y/x (percent) for the largest Ohio counties, with an arrow indicating the number of unemployed increasing as the rate decreases.]

Figure 7: Implications of the general Type II behavior are revealed here with the data for the unemployment rate (for the largest Ohio counties of Table 3). As the unemployment rate decreases (see the direction of the arrow), the number of unemployed actually increases. This also has to do with the labor force size effect.

Also, as we saw with the traffic fatality statistics, there is a transition from the Type II behavior observed at higher labor force levels to a Type I behavior at lower values of the labor force. The data for the smaller counties, close to the origin, can be shown to obey the Type I law. The equation of the line joining Wayne and Lorain counties is y = 0.079x - 2.36 = 0.079(x - 29.66), where x and y are in thousands. The smallest counties thus seem to suffer from a high unemployment rate due to this size effect and the transition to Type I behavior. For example, Vinton County, with a labor force of 5,500, had 500 unemployed, for a high unemployment rate of y/x = 0.098 (9.8%). But the same 500 unemployed in Paulding County, with a labor force of 10,500, means an unemployment rate of

only 4.6%. This is a clear size effect. The unemployment law is y = hx + c, where the numerical value of h should be fixed, as we have done here, by considering the highest unemployment levels. This same argument was also used to determine the slope h from the US, Canadian, and Japanese unemployment data. Indeed, it appears that a single universal value of h = 0.0956, deduced by considering the highest and lowest unemployment levels over a few decades, can be used to describe the data for all three countries. In other words, the various transitions from Type I to Type II or Type III observed in all three examples discussed here are a reflection of the size effect associated with larger and larger values of the independent (or stimulus) variable x. This stimulus, or independent variable, is revenues when we consider the finances of a company, the VMT when we consider traffic fatality data, and the labor force when we consider unemployment statistics. This size effect is also illustrated in Figure 7. As the unemployment rate y/x decreases, the number of unemployed y increases because of the Type II behavior revealed in Figure 6.

5. Concluding Remarks
The noted political columnist Jeff Greenfield made an interesting remark recently. He writes, "I got into writing and thinking about politics because I was told there would be no math. Boy, was I misled. It's not just the torrent of polls that we have to deal with, but the numbers that supposedly forecast Presidential elections with uncanny accuracy..."

Add it up: The prediction models look dismal for Obama. Can he still win? Yahoo! News Tue, Jul 31, 2012
Welcome to the real world of data analysis! All of the confusion with numbers, ratios, rates, and percentages is due to the fact that somewhere along the way we forgot the significance of the important mathematical property of a straight line that has been discussed here. There are at least three
types of straight lines that we encounter in the real world when we analyze the large volumes of (x, y) data being compiled almost daily in a variety of economic, social, political, business, and finance related problems. The three types of straight lines, and the maximum point (in the profits-revenues data and also in the traffic fatality data), lead us to the (grand??) conception of a (bold!) generalization of the ideas of Planck and Einstein, well beyond physics. I am referring here to the fundamental idea of an elementary energy quantum E proposed by Planck to derive his blackbody radiation law (E = hf, where f is the frequency of light and h is the Planck constant). This, in turn, led Einstein to the conclusion that light can be viewed as a stream of particles, each having Planck's energy quantum. (Einstein considers a property called the entropy of light to arrive at the particle conception of light. A simplified version of Planck's radiation law is used in these mathematical deliberations.) More importantly, Einstein also introduced the far-reaching idea of a work function W in his photoelectric law, K = E - W = hf - W = h(f - f0). Here K is the maximum kinetic energy of the electron that is produced when photons, with energy E, strike the surface of a metal. Some of the energy must be given up and does not appear as the energy K. The energy that must be given up is called W, an unknown quantity that must be determined experimentally for each new metal. Einstein's law is also a linear law. The K-f relation is linear and is exactly analogous to y = hx + c = h(x - x0). The nonzero intercept c, which has been emphasized here and leads us to the three types of straight lines, is really a generalization of Einstein's work function W well beyond physics, to many problems, three of which have been discussed here. How do we resolve the Ohio unemployment puzzle? Let us consider just the (x, y) data for the three larger Ohio counties: Lorain, Summit, and Cuyahoga.
This is plotted again in Figure 8. The three data points lie on rays joining each individual point back to the origin, with slope y/x = m = unemployment rate. From the changing slopes it is clear that as the unemployment rate y/x decreases, the number of unemployed y increases. What is unique about Ohio that makes the number of unemployed y increase as the unemployment rate y/x goes down? How do we rationalize this empirical finding?
[Plot of unemployed y (in 000s) vs. labor force x (in 000s) for Lorain, Summit, and Cuyahoga counties, annotated with the counterintuitive observations: Lorain has the highest unemployment rate but the lowest number of unemployed, while Cuyahoga has the lowest unemployment rate but the highest number of unemployed.]

Figure 8: The unemployment data for Lorain, Summit, and Cuyahoga counties is plotted here. The three rays with decreasing slopes y/x = m = unemployment rate have also been added.

It seems that the only way to rationalize this data is by invoking the Type II line joining the three data points and introducing the labor force x into the discussion. Then, we can also see the idea of an economic work function emerging. When the labor force is small, the law is a Type I law, with a cut-off labor force x0. As the labor force increases, the majority are employed, but the number of unemployed increases and the unemployment rate increases at first. Then, and thankfully so, there is a transition to the Type II behavior (which reduces the number of unemployed, relative to the Type I line), as is evident with the three large Ohio counties. It is this transition which is unique to Ohio, just as the work function is unique to the metal on which light shines to produce the electron.

Indeed, the labor force is akin to the energy of the photon. It represents the human energy, or the human potential, within the economy. Under the right conditions, as when E < W in Einstein's law, there will be no unemployment (x < x0). Once the labor force exceeds this cut-off, just like the appearance of an electron, there will be some unemployed. The labor force, by definition, is the sum of the employed plus the unemployed, just as E = K + W in Einstein's law. The analogy being drawn here with Einstein's photoelectric law, and the idea of a work function, thus seems very fitting. When we compile empirical observations on profits and revenues, for example, why does the profit margin increase with increasing profits for Company A and decrease with increasing profits for Company B? Or, when we compile empirical observations on the unemployment data, why does the number of unemployed increase with decreasing unemployment rates, or vice versa? Or, imagine making observations on a moving vehicle. We determine its speed v as a function of time t. Vehicle A is seen to be accelerating. Vehicle B is decelerating. Vehicle C is moving at a constant speed. This does not surprise us. We know the reason why: it has to do with the idea of a force acting on the vehicle. Likewise, do we really understand why Company A reports increasing profits and profit margins while Company B reports decreasing profits and profit margins? And so on for the other problems discussed here, and many more that have not been discussed, where we must deal with tables of x and y values. Further discussion of some of these ideas may be found in the articles cited in the bibliography list. The articles on Microsoft, Google, Apple, and Kia provide a good summary. The unemployment problem, the teen pregnancy problem, and the traffic fatality problem provide additional examples.
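The Type II resolution can be sketched numerically. The numbers below are purely illustrative (the slope h, intercept c, and county labor forces are assumed for the sketch, not the actual Ohio figures); the point is only that on a straight line y = hx + c with h > 0 and c > 0, the count of unemployed y rises while the rate y/x falls:

```python
# Hypothetical Type II line y = hx + c (h > 0, c > 0) evaluated for
# three counties of increasing labor force x (in 000s). All numbers
# here are illustrative, not the actual Ohio data.
h, c = 0.03, 10.0

counties = {"Lorain": 140, "Summit": 280, "Cuyahoga": 680}

for name, x in counties.items():
    y = h * x + c                 # unemployed, in 000s
    rate = 100 * y / x            # unemployment rate, percent
    print(f"{name:9s} x={x:4d}  y={y:5.1f}  rate={rate:.1f}%")
# y rises (14.2 -> 18.4 -> 30.4) while the rate falls
# (10.1% -> 6.6% -> 4.5%): the Ohio puzzle in miniature.
```

This is exactly the size effect of the nonzero intercept: the rate y/x = h + (c/x) shrinks toward h as the labor force x grows, even though the line itself never changes.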


6. Appendix I: The Olympic Long Jump Record

The 2012 London Olympics, with its dramatic picture of the full moon providing a sixth Olympic ring, also provided us with memorable history: Michael Phelps' 22 gold medals; Gabby Douglas, the gymnast with the enchanting smile, who became the new American sweetheart; and Usain Bolt. It also provided this sorry tale of Olympic long jumpers who are NOT jumping long anymore. Olympians seem to be running and swimming faster, and throwing farther, than their predecessors, but when it comes to jumping (both the men and the women) they seem to be regressing. "Great Britain's Greg Rutherford won the gold with the shortest jump in 40 years," lamented Daniel Lametti in Slate magazine, citing the stats from 1968, 1988, and 2008. After the Olympic record set by

Bob Beamon in 1968 (at 8.90 meters), only once has this been exceeded (by Mike Powell, at Tokyo in 1991, with 8.95 meters). If we extrapolate forward using the negative trend, the Olympic long jump gold may soon be for the taking at 8 meters or less, by 2028 or 2032; see Figure 9.

This discussion of the Olympic long jump records is included here for two interesting reasons. First, as with the traffic fatality data, we see the emergence of a classic Type III behavior. With a little Internet research, the Olympic long jump records for the intervening years, not included in the Slate article, can be shown to confirm the negative trend. The American athletic hero Carl Lewis, who won this event in 1984, 1988, 1992, and 1996, won the gold in 1996 with an 8.50-meter jump, 22 cm less than his own gold-winning jump of 8.72 m in 1988. The gold mark has thus clearly been lowered in this event in recent years. The data for all of the Olympic gold-winning jumps, going back to 1896, may be found in the Wikipedia article. Only the recent trends, going back to 1956, preceding and following the record jumps by

Bob Beamon (1968) and Mike Powell (1991), are considered here in Figure 9. It is of interest to note that the slope h = -0.01429 if we use the winning data for 1968 and 1996, which is virtually the same as h = -0.01425 for the 1968 and 2008 data. However, as discussed in the context of the traffic fatality data, the appearance of a Type III trend usually signifies the existence of an earlier Type I (or Type II) behavior. We cannot extrapolate the Type III trends backwards indefinitely, or even forwards. The Type III equation, D = -0.014t + 37.01 (deduced from the 1968 and 2008 data), implies that if we extrapolate to earlier years, at time t = 0 the gold medal winning jump distance D would be a ridiculously high 37 m. Or, in 2592, anyone could show up to claim the gold, since the winning jump distance D = 0 in that landmark year!
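These two-point fits are easy to reproduce. A minimal sketch (the helper line_through is ours, not from the article) using the 1968 and 1996 distances quoted above; the zero-crossing year it finds, about 2591, differs slightly from the 2592 quoted in the text because that figure was based on the 1968/2008 fit:

```python
# Two-point linear fits for the long jump Type III trend.
# Gold medal distances from the text: 8.90 m (1968), 8.50 m (1996).
def line_through(p1, p2):
    """Slope and intercept of the straight line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    h = (y2 - y1) / (x2 - x1)
    c = y1 - h * x1
    return h, c

h, c = line_through((1968, 8.90), (1996, 8.50))
print(round(h, 5))    # -0.01429, essentially the -0.014 Type III slope
print(round(c, 2))    # 37.01, the intercept quoted in the text
print(round(-c / h))  # year where D extrapolates to zero, about 2591
```

The same helper applied to (1956, 7.83) and (1968, 8.90) recovers the Type I equation D = 0.089t - 166.6 discussed with Figure 9.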

[Figure 9 chart: Olympic gold winning long jump (m), 7.60 to 9.20, versus Time, t (calendar year), 1952 to 2032. Marked points: Beamon, 1968, 8.90 m; Powell, 1991, 8.95 m. Marked trend: Type III, D = -0.014t + 37.01.]

Figure 9: The gold medal winning long jump distance D versus time t in years. Although a Type III trend has been established in recent years, since the first peak in 1968, the data for the earlier years reveals a Type I equation, D = 0.089t - 166.6. Gregory Bell won the gold in 1956 with a 7.83 m jump, well under the 8 m mark.

This Type I equation was deduced using the 1956 and the record 1968 data. A smaller Type I slope can be deduced using the 1956 and 1991 data. The negative intercept of the Type I trend means that the ratio D/t = 0.089 - (166.6/t) was increasing with each succeeding year. In other words, Olympians were indeed making the effort to beat the records held by their predecessors. Why then the recent Type III trend? This brings us to the second reason why this data is being highlighted here. As discussed in the Slate magazine article, the reason Olympians are NOT jumping as long may be the lack of lucrative post-Olympic monetary rewards. The long jump is not in the same league as other athletic events. The key to being a successful long jumper (running long jump, as opposed to standing) is to have world-class speed (to gain momentum before jumping), but the athlete can make more money being a world-class 100-meter runner than training for the long jump. Usain Bolt, whose 100-meter race is eagerly awaited as of this writing (on August 5, 2012), signed a lucrative three-year contract with Puma, rumored at $32 million. Being the world's fastest man apparently has greater commercial value than being the world's longest jumper! And so, it is argued that Olympians are just NOT making the effort, in other words working hard, to improve the record held by their predecessors! Work done, effort made: this is exactly what we mean by the work function W, or the nonzero intercept c in the law y = hx + c. The transition from Type I to Type III behavior that we see in the Olympic long jump records (which incidentally implies the existence of a maximum point on this graph) is a manifestation of the nonzero intercept c, or the generalization of Einstein's idea of a work function W, well beyond physics. As noted earlier (see also the discussion in Refs.
[5, 8] cited in the bibliography), Einstein uses a simplified version of Max Planck's radiation law, which can be written in its most generalized form as:

y = [m x^n e^(-ax)/(1 + b e^(-ax))] + c   (1)

This is a power-exponential law, with the power-law term x^n multiplying the exponential term e^(-ax). Hence, the x-y graph reveals a maximum point. In Planck's law b = -1 and c = 0, i.e., the intercept is taken to be zero. Einstein uses the simplified version (with b = 0, c = 0), y = m x^n e^(-ax), which also reveals a maximum point (the maximum point occurs when n = ax, or x = n/a), to deliberate on the property called the entropy of light. Indeed, entropy is the starting point of Planck's discussion in developing quantum physics. The reader is referred to the references cited. Of interest to us here is the following expression for the entropy S, which is the very first step taken by Planck in his history-making December 1900 paper. Planck writes (following Boltzmann's statistical arguments about the entropy of a system of N particles)

S = k ln Ω + unknown constant   (2)

Planck was interested in the problem of how a fixed total energy U_N = NU can be distributed among N particles (which he envisioned as being oscillators: charged particles which vibrate about a fixed position, radiating electromagnetic energy in the process). The expression for the average energy U derived by Planck marks the beginning of quantum physics. There are many different ways in which a fixed total energy can be distributed between N particles. This gives rise to the entropy S, which is a measure of the extent of disorder, or chaos, in the system. The parameter Ω in equation 2 above is the number of ways and can be determined using the laws of permutations and combinations. This involves factorials of large numbers. Hence, instead of a linear law, we now have a logarithmic relation between S and Ω. The proportionality constant in this relation is k, which Planck refers to as the Boltzmann constant, in honor of Ludwig Boltzmann, who spent all of his professional life developing the field that we now call statistical mechanics. In fact, we find the above entropy equation carved on Boltzmann's tombstone. (Sadly, Boltzmann's ideas were not widely appreciated by his peers. He suffered from bouts of severe depression and ultimately committed suicide, just before he was about to be vindicated, such as by Planck's use of the above entropy equation to develop quantum physics.)
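The counting behind equation 2 can be made concrete with a small sketch. For P identical energy quanta distributed among N distinguishable oscillators, the number of ways is the binomial coefficient (N + P - 1 choose P), which is Planck's counting of "complexions"; the function names below are ours, for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def complexions(N, P):
    """Number of ways Omega to distribute P identical energy quanta
    among N distinguishable oscillators:
    Omega = (N + P - 1)! / (P! (N - 1)!)."""
    return math.comb(N + P - 1, P)

def entropy(N, P, S0=0.0):
    """S = k ln(Omega) + S0; the additive constant S0 is the
    'unknown constant' Planck was careful to carry along."""
    return k_B * math.log(complexions(N, P)) + S0

# One oscillator holding all the quanta: only one way, ln(1) = 0, S = S0
print(complexions(1, 5))   # 1
print(complexions(3, 2))   # 6 ways to split 2 quanta among 3 oscillators
```

With Ω = 1 the logarithm vanishes and the entropy collapses to the intercept S0, which is exactly the point made about the nonzero intercept below.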

Notice how Planck is careful to introduce an unknown constant into equation 2. This is the nonzero intercept made by the S-Ω graph. We can rewrite this as S = k ln Ω + S0. When Ω = 1, i.e., when there is only one way to distribute the energy (as when there is only one particle, or when only one particle has all the energy), the natural logarithm ln Ω = 0 and the entropy S = S0. What is S0? This is a question that was later settled by physicists by actually formulating a new law of thermodynamics, the Third law, which states that the entropy of a PERFECT crystal, at the Absolute Zero temperature, will be exactly equal to ZERO. This is NOT a proof. It is more like a postulate. Planck recognizes the importance of the nonzero intercept S0 when he takes the first steps to develop quantum physics. Likewise, Einstein recognizes the importance of the nonzero intercept in the photoelectric law K = E - W = hf - W = h(f - f0). The cut-off frequency f0 = W/h observed by experimental researchers before Einstein cannot be explained if the work function W is zero. The cut-off frequency is actually a manifestation of the nonzero intercept, or the work function W. In Einstein's law, W represents the work that must be done to overcome the forces that bind the electron within the metal. This work, or energy used up to produce the electron, cannot be calculated a priori and will depend on the metal. Einstein calls it W, and it must be deduced for each metal experimentally. The purpose of the discussion here is to highlight the importance of the nonzero intercept in the real world, using the Type III behavior observed in the Olympic long jump record as an interesting example. There is a maximum point on this graph. It is the effort, or the work, that must be done by the Olympian that is subtly manifested in the nonzero intercept, and hence also in the maximum point, since Type III must give way to Type I at earlier times.
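The cut-off behavior of the photoelectric law is simple to sketch numerically. The work function value below (W = 2.3 eV, roughly that of sodium) is an assumed, illustrative number; only the shape of the law matters here:

```python
# Einstein's photoelectric law K = hf - W, with no electron (K = 0)
# below the cut-off frequency f0 = W/h.
h_planck = 4.135667e-15  # Planck constant in eV*s

W = 2.3                  # work function in eV (assumed, for illustration)
f0 = W / h_planck        # cut-off frequency, Hz

def kinetic_energy(f):
    """Maximum kinetic energy of the ejected electron, in eV."""
    return max(0.0, h_planck * f - W)

print(kinetic_energy(0.5 * f0))  # below cut-off: no electron, K = 0
print(kinetic_energy(2.0 * f0))  # above cut-off: K = 2W - W, about 2.3 eV
```

Changing W shifts the cut-off f0 but not the slope h: experiments on different metals trace out parallel K-f lines, the point taken up again below.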
Like Planck and Einstein, we must recognize the importance of this nonzero intercept whenever we analyze (x, y) data, as discussed here. We make observations and use numbers to quantify these observations. One of the variables x is usually taken as the independent variable, or the stimulus function. This gives

rise to the second observation, the dependent variable y, or the response function. The most general relationship between x and y is y = hx + c, not y = mx. This nonzero intercept also affects the unemployment problem (one that engages our attention because of the severe jobs crisis now faced in the USA) and the contentious problem of labor productivity.

Labor productivity = y/x = Number of units produced / Number of labor hours

Is there a nonzero intercept c that affects labor productivity? The potential existence of a nonzero intercept c means we must be careful when we use the ratio y/x = m to draw conclusions and formulate policies (as is done routinely by management using labor productivity data for various manufacturing plants, or to decide which retail stores to close, using per-store statistics in the retail industry). The ratio y/x does not tell us anything about the rate of change of y as x increases or decreases: y/x = m = h + (c/x). The slope h is the rate of change, and h = m if and only if the intercept c = 0. If not, we must be careful to consider what may be called the size effect, the dependence of the ratio y/x on the value of x. The implications of the nonzero c have been discussed for the unemployment problem, for the profits-revenues problem, for the traffic-fatality problem, and for the teenage pregnancy problem (see Ref. [26]). The nonzero c is Einstein's work function outside physics. Planck's ideas about entropy and the radiation law, generalized as equation 1, can also be applied well beyond physics. We have just found a maximum point in the most unlikely of places, the Olympic long jump record, this morning, August 5, 2012! Quantum physics was conceived to explain the appearance of such a maximum point on the radiation curve for a heated body. Einstein's law and the expression relating the average entropy S and the average energy U, derived by Planck, can be generalized and applied beyond physics.
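The size effect is easy to demonstrate in a few lines. The numbers below (h = 0.5, c = 10) are purely illustrative:

```python
# A perfect straight line y = hx + c with a nonzero intercept:
# the slope dy/dx is constant, but the ratio y/x is not.
h, c = 0.5, 10.0

for x in [10, 20, 40, 80]:
    y = h * x + c
    m = y / x          # m = h + c/x, NOT the rate of change
    print(x, y, round(m, 3))
# The slope stays 0.5 everywhere, but the ratio y/x falls from
# 1.5 toward 0.5 as x grows: a decreasing "rate" on a line whose
# true rate of change never changes.
```

With c < 0 the same arithmetic makes the ratio rise with x, which is the Type I behavior; with c > 0 it falls, the Type II behavior.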
Finally, Einstein's photoelectric law K = E - W = hf - W implies that the K-f graph is a series of parallel lines, if we perform experiments with different metals, each having its own work function W. Examples of such movement along parallels can be found

in the financial data (e.g., the articles on Microsoft, Refs. [17, 18], and Kia, Ref. [15]). We see a similar movement along essentially parallel lines when we consider all of the earlier Olympic long jump records, going back to 1896. This is illustrated in Figure 10. The historical data seems to segregate along three parallel Type I lines.

[Figure 10 chart: Olympic winning jump, D (meters), 6.00 to 9.50, versus Time, t (calendar years), 1860 to 2060. Marked trends: Type III, D = -0.014t + 37; Type I, D = 0.035t - 60.8 (Line C: 1896 and 1968).]

Figure 10: Historical Olympic long jump records, 1896-2012, with a few World Records added, like Mike Powell's 1991 Tokyo World Record of 8.95 m. The Type III trend established since 1968 was preceded by a Type I trend over many years. Line A, joining 1900 to 1912: D = 0.035t - 58.5. Line B, joining 1923 to 1935: D = 0.037t - 62.82. Line C, joining 1896 to 1968: D = 0.035t - 60.8. Notice that the Type I Lines A and C have EXACTLY the same slope; the Line B slope differs very slightly. The transitions from Lines A to B to C were not always chronological, with a jump from C to A between 1896 and 1900, then a movement along A, then a jump back to C, and then to B. Nonetheless, it is remarkable that these jumps signify something like a work function. A fourth Type I line can be added (1956 and 1991, with slope h = 0.032), but this has not been done here.

7. Appendix II: Derivative of the Generalized Planck Curve

As discussed in Appendix I, Planck's radiation law, which marks the birth of quantum physics, can be generalized and rewritten as follows:

y = m x^n [e^(-ax)/(1 + b e^(-ax))] + c   (1)

This is a power-exponential law. Hence, the x-y graph exhibits a maximum point at a finite value of x as x increases. This means the derivative dy/dx > 0 for small values of x, up to the maximum point x = xm, and dy/dx < 0 for larger values. For the moment, let the nonzero intercept c = 0. The expression for the derivative dy/dx can be readily deduced for various special cases, such as b = 0 (the simplified radiation law used by Einstein, also called Wien's law), a = 0 and b = 0 (the power law, which is also called the Rayleigh-Jeans law), and a = 0, b = 0, n = 1 (the simple linear law). For a = 0, b = 0, c = 0, y = m x^n and there is no maximum point. In his December 1900 paper, Planck derives the expression for the average energy U of N oscillators (by invoking the statistical arguments of Boltzmann) and thus provides a theoretical justification for the expression [e^(-ax)/(1 + b e^(-ax))] which appears within the square bracket in equation 1. It is indeed a simple exercise to derive the expression for dy/dx for the most general case. One only needs to apply the rule for the derivative of the product of several simple functions. Thus,

dy/dx = [(y - c)/x] [n - ax + ax(1 - g)]   (3)

where g = 1/(1 + b e^(-ax))   (4)

The function g is defined for convenience of differentiation and appears in the denominator of the expression for y. Now let us consider the special cases. For b = 0, g = 1 and

dy/dx = [(y - c)/x] [n - ax]   (5)

Now, it is easy to see that there is a maximum point when n = ax, or x = n/a. Also, the slope of the graph goes to zero as y → c, the nonzero intercept. For b = 0 and a = 0,

dy/dx = n(y - c)/x   (6)

This is the power law case and there is no maximum point. The derivative dy/dx is not equal to zero for any finite value of x. The slope of the graph goes to zero, as before, as y → c. For b = 0, a = 0, n = 1,

dy/dx = (y - c)/x = m   (7)

This is the simple case of the linear law, y = mx + c, and the ratio y/x = m + (c/x) equals the slope dy/dx = m if and only if the intercept c = 0. For nonzero c, the ratio y/x will depend on the numerical value of c. The purpose here is NOT to provide an expression for the location of the maximum point x = xm on the generalized Planck curve; a graph can readily be prepared to find this maximum point. Rather, the purpose here is to show (once again, using the general expression for the derivative of the Planck curve) that, just as dy/dx varies at different points along a curve, the ratio y/x also varies at different points along a straight line if the straight line does NOT pass through the origin.
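The general derivative expression can be checked numerically. The sketch below uses illustrative parameter values (m = 1, n = 2, a = 1, b = 0.5, c = 3) and compares the analytic slope [(y - c)/x][n - ax + ax(1 - g)], i.e. equation 3 with the minus signs written out, against a central finite difference, and confirms that the slope vanishes at the maximum x = n/a in the b = 0 (Wien) special case:

```python
import math

def planck_like(x, m=1.0, n=2.0, a=1.0, b=0.5, c=3.0):
    """Generalized Planck curve: y = m x^n e^(-ax)/(1 + b e^(-ax)) + c."""
    return m * x**n * math.exp(-a * x) / (1.0 + b * math.exp(-a * x)) + c

def analytic_slope(x, m=1.0, n=2.0, a=1.0, b=0.5, c=3.0):
    """dy/dx = [(y - c)/x][n - ax + ax(1 - g)], g = 1/(1 + b e^(-ax))."""
    g = 1.0 / (1.0 + b * math.exp(-a * x))
    y = planck_like(x, m, n, a, b, c)
    return (y - c) / x * (n - a * x + a * x * (1.0 - g))

# Check against a central finite difference at a few points
for x in [0.5, 1.0, 2.0, 4.0]:
    eps = 1e-6
    numeric = (planck_like(x + eps) - planck_like(x - eps)) / (2 * eps)
    assert abs(numeric - analytic_slope(x)) < 1e-6

# Special case b = 0 (Wien): the slope vanishes at the maximum x = n/a
assert abs(analytic_slope(2.0, b=0.0)) < 1e-12
print("derivative checks passed")
```

Note that the bracket simplifies to (n - axg), so setting g = 1 recovers equation 5 directly.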

8. Appendix III: The US Teen Pregnancy Problem

According to a recent Slate magazine article, the US teenage pregnancy rate has now reached a forty-year low. The pregnancy rate is defined as the ratio y/x, where the numerator y is the total pregnancies (in the relevant age-group) and the denominator x is the female population (for the age-group of interest). For convenience, this fraction is multiplied by 1000 and expressed as the pregnancy rate per 1000 females. This is just like multiplying by 100 to convert a fraction into a percent (e.g., the profit margin for a company). A careful study of the US teen pregnancy data for the years 1972-2008 (see Refs. [1, 2]) confirms the linear law relating x and y in this problem; see, for example, Figures 5, 6 and 10 in Ref. [2] below.

1. Published August 2, 2012.

2. Published August 8, 2012.

Of particular interest is a rather unique Type I relation observed in this problem, for the years 1980 to 1987; see Figure 11 below.

[Figure 11 chart: Total pregnancies, y (in 000s), versus Female population, x (in 000s). Marked trend line: INVERSE Type I, y = 0.143x - 330 = 0.143(x - 2311).]

Figure 11: The teen female population (ages 15-19) decreased between 1980 and 1987 and the total pregnancies also decreased, yielding a positive slope h and thus what may be called an INVERSE Type I behavior. Notice the NEAR PERFECT linearity observed here. The decreasing pregnancy rate between 1980 and 1987 is due to the fact that both the teen female population x and the number of pregnancies y decreased simultaneously. For example, the overall change in the teen female population (for ages 15-19) is Δx = (x2 - x1) = (9,139 - 10,381) = -1,242 and the change in the number of teen pregnancies is Δy = (y2 - y1) = (974,580 - 1,151,850) = -177,270. Since both Δx and Δy are negative,

the slope of the line joining these two points on the x-y graph, h = Δy/Δx = (-177,270)/(-1,242,000) = 0.143 > 0, is positive. Furthermore, the pregnancy rate is decreasing, not increasing, on this Type I line. As the female population x decreases, the pregnancies y decrease, and the ratio y/x also decreases, with x and y maintaining the Type I relation with a positive slope. This is illustrated by the calculations presented in Table 4 for the years 1980 to 1987 (the exceptions are the years 1985 and 1986).

Table 4: US Teen Pregnancy Data revealing an INVERSE Type I relation

Year   Female population    Total           Pregnancy rate   Change Δx   Change Δy   Slope
       x (in 000s)          pregnancies y   1000(y/x)                                h = Δy/Δx
1980   10,381               1,151,850       110.96
1981   10,096               1,109,540       109.90           -285        -42,310     148.46
1982    9,809               1,077,120       109.81           -287        -32,420     112.96
1983    9,515               1,039,600       109.26           -294        -37,520     127.62
1984    9,287               1,002,370       107.93           -228        -37,230     163.29
1985    9,174               1,000,110       109.02
1986    9,206                 982,450       106.72
1987    9,139                 974,580       106.64           -1,242      -177,270    142.73

(The 1987 row gives the overall change from 1980 to 1987. Since x is in thousands while y is an actual count, the slopes h tabulated here are 1,000 times the consistent-unit value; e.g., 142.73 corresponds to h = 0.143.)

Data source: US Teen Pregnancy Trends since 1972, Table 2.1, Ages 15-19. The average of the five values of the slope h, calculated from the changes in x and y, is 139, consistent with the overall slope for the line joining 1980 to 1987 (142.73, i.e., h = 0.143). The data for 1985 and 1986 were ignored as representing a fluctuation from the linear trend.

Although one might usually associate a Type I trend with increasing x and y values, here we witness an interesting INVERSE Type I trend, with decreasing x and y values and a positive slope h. The normal Type I is observed in the teen pregnancy data between 1972 and 1980. Note: When I first looked at the data for the 1980-1987 period, with the decreasing pregnancy rates 1000(y/x) and decreasing x (the year 1980 represents a minor peak in the pregnancy rate when plotted versus years), I was expecting a Type II trend, similar to that observed in the Ohio unemployment problem. The INVERSE Type I relation, yielding the decreasing pregnancy rates, is thus quite unique indeed.
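The Table 4 arithmetic can be reproduced directly from the 1980 and 1987 endpoints. A minimal sketch (figures taken from the table above; note the mixed units, x in thousands and y in actual counts, so the per-1000 rate is simply y/x):

```python
# Endpoint slope and rate calculations behind Table 4.
data = {  # year: (female population x in 000s, total pregnancies y)
    1980: (10_381, 1_151_850),
    1987: (9_139, 974_580),
}
(x1, y1), (x2, y2) = data[1980], data[1987]

rate_1980 = y1 / x1                   # 110.96 per 1000
rate_1987 = y2 / x2                   # 106.64 per 1000
h = (y2 - y1) / ((x2 - x1) * 1000)    # 0.143, a positive slope

print(round(rate_1980, 2), round(rate_1987, 2), round(h, 3))
```

Both changes are negative, so the slope h comes out positive even as the rate y/x falls, which is the INVERSE Type I behavior in a nutshell.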


9. Appendix IV: Bibliography

Related Internet articles posted at this website since the Facebook IPO on May 18, 2012
3. Current article, with all others above cited for completeness. Published June 4, 2012, with several revisions incorporating more examples.
4. Basic discussion of three types of companies. Published May 24, 2012. Examples of Google, Facebook, ExxonMobil, Best Buy, Ford, Universal Insurance Holdings.
5. Detailed discussion of Apple Inc. data. Published June 7, 2012.
6. Ford Motor Company graph illustrating a pronounced maximum point. Published May 29, 2012.
7. Generalization of Planck's law. Published May 30, 2012.
8. Facebook and Google data are compared here. Published May 21, 2012.
9. Published May 19, 2012 (the day after the IPO launch on Friday, May 18, 2012).
10. Discussion of the meaning of entropy (using an example given by Boltzmann in 1877, later also used by Planck to develop quantum physics in 1900). The example shows that the concepts of entropy S and energy U (and the derivative T = dU/dS) can be extended beyond physics, with energy = money, or any property of interest. Published June 3, 2012.

11. The Future of Southwest Airlines. Completed June 14, 2012 (to be published).
12. The AirTran Story: An Important Link to the Future of Southwest Airlines. Completed June 27, 2012 (to be published).
13. Annie's Inc.: A Single-Product Company Analyzed using a New Methodology. Published June 29, 2012.
14. Google Inc.: A Lovable One-Trick Pony. Another single-product company analyzed using the new methodology. Published July 1, 2012.
15. GT Advanced Technologies, Inc.: Analysis of Recent Financial Data. Completed July 4, 2012 (to be published).
16. Disappearing Brands: Research in Motion Limited. An interesting type of maximum point on the profits-revenues graph. Published July 5, 2012.
17. Kia Motor Company: A Disappearing Brand. Published July 6, 2012.
18. The Perfect Apple-II: Taking a Second Bite. A simple methodology for revenues predictions. Completed July 8, 2012; published July 30, 2012.
19. Microsoft after the quarterly loss. Published July 25, 2012.
20. Published July 30, 2012.

21. Single universal value of h for the US, Canada and Japan in the unemployment law y = hx + c. Published July 24, 2012.
22. Published July 24, 2012.
23. Published July 24, 2012.
24. Published July 22, 2012.
25. Published July 19, 2012.
26. Published July 12, 2012.
27. Published July 10, 2012.
28. Published August 2, 2012.
29. Published August 4, 2012.
30. Published August 4, 2012.
31. Published August 4, 2012.
32. Published August 2, 2012.


33. Published August 8, 2012.

About the author

V. Laxmanan, Sc. D.

The author obtained his Bachelor's degree (B. E.) in Mechanical Engineering from the University of Poona and his Master's degree (M. E.), also in Mechanical Engineering, from the Indian Institute of Science, Bangalore, followed by Master's (S. M.) and Doctoral (Sc. D.) degrees in Materials Engineering from the Massachusetts Institute of Technology, Cambridge, MA, USA. He then spent his entire professional career at leading US research institutions (MIT, Allied Chemical Corporate R&D, now part of Honeywell, NASA, Case Western Reserve University (CWRU), and the General Motors Research and Development Center in Warren, MI). He holds four patents in materials processing, has co-authored two books, and has published several scientific papers in leading peer-reviewed international journals. His expertise includes developing simple mathematical models to explain the behavior of complex systems. While at NASA and CWRU, he was responsible for developing materials processing experiments to be performed aboard the space shuttle, and he developed a simple mathematical model to explain the growth of Christmas-tree-like, or snowflake-like, structures (called dendrites) widely observed in many types of liquid-to-solid phase transformations (e.g., the freezing of all commercial metals and alloys, the freezing of water, and, yes, the production of snowflakes!). This led to a simple model to explain the growth of dendritic structures in both the ground-based experiments and the space shuttle experiments. More recently, he has been interested in the analysis of the large volumes of data from financial and economic systems and has developed what may be called the

Quantum Business Model (QBM). This extends (to financial and economic systems) the mathematical arguments used by Max Planck to develop quantum physics, using the analogy Energy = Money, i.e., energy in physics is like money in economics. Einstein applied Planck's ideas to describe the photoelectric effect (by treating light as being composed of particles called photons, each with the fixed quantum of energy conceived by Planck). The mathematical law deduced by Planck, referred to here as the generalized power-exponential law, might actually have many applications far beyond the blackbody radiation studies where it was first conceived. Einstein's photoelectric law is a simple linear law, as we see here, and was deduced from Planck's non-linear law for describing blackbody radiation. It appears that financial and economic systems can be modeled using a similar approach. Finance, business, economics, and the management sciences now essentially seem to operate like astronomy and physics before the advent of Kepler and Newton.

Cover page of AirTran 2000 Annual Report
