
Statistics as Legal Evidence

… ensure that his results are not overvalued.

Terms such as statistical significance are easily and frequently misunderstood to imply a finding of practical significance; levels of significance are all too often interpreted as posterior probabilities, for example, in the guise that, if a DNA profile occurs only 1 in 10,000 times, then the chances are 9,999 to 1 that an individual having a matching profile is the source of the profile. As a result, individuals may tend to focus on the quantitative elements of a case, thereby overlooking qualitative elements that may in fact be more germane.
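The transposition error described here can be made concrete with a short calculation. The population size below, and the assumption that every member is equally likely a priori to be the source, are hypothetical; the point is only that a 1-in-10,000 random-match probability does not by itself yield 9,999-to-1 odds.

```python
# A minimal sketch of why a rare DNA profile does not imply near-certain
# identification: in a large enough population, many innocent people match.

def posterior_source_probability(match_prob, population, true_sources=1):
    """P(individual is the source | profile matches), assuming every member
    of the population is equally likely a priori to be the source."""
    # Expected number of people who match the profile by chance alone:
    innocent_matches = (population - true_sources) * match_prob
    return true_sources / (true_sources + innocent_matches)

p = posterior_source_probability(match_prob=1 / 10_000, population=1_000_000)
print(round(p, 3))  # roughly 0.01, nowhere near 0.9999
```

With a hypothetical population of one million, about a hundred people would match by chance, so a match alone makes the probability of being the source about one in a hundred, not 9,999 to 1.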
Harr J 1996 A Civil Action. Vintage Books, New York
Lagakos S W, Wessen B J, Zelen M 1986 An analysis of contaminated well water and health effects in Woburn, Massachusetts. Journal of the American Statistical Association 81: 583–96
Meier P, Zabell S 1980 Benjamin Peirce and the Howland Will. Journal of the American Statistical Association 75: 497–506
Tribe L H 1971 Trial by mathematics. Harvard Law Review 84: 1329–48
Zeisel H, Kaye D H 1997 Prove It With Figures: Empirical Methods in Law and Litigation. Springer-Verlag, New York

S. L. Zabell

9. Other Applications
The last decades of the twentieth century saw a remarkable variety of legal applications of statistics: attempting to determine the possible deterrent effects of capital punishment, estimating damages in antitrust litigation, and epidemiological questions of the type raised in the book and movie A Civil Action. Some of the books listed in the bibliography discuss a number of these.

See also: Juries; Legal Reasoning and Argumentation; Legal Reasoning Models; Linear Hypothesis: Fallacies and Interpretive Problems (Simpson's Paradox); Sample Surveys: The Field

Statistics, History of
The word statistics today has several different meanings. For the public, and even for many people specializing in social studies, it designates numbers and measurements relating to the social world: population, gross national product, and unemployment, for instance. For academics in statistics departments, however, it designates a branch of applied mathematics making it possible to build models in any area featuring large numbers, not necessarily dealing with society. History alone can explain this dual meaning.

Statistics appeared at the beginning of the nineteenth century as meaning a quantified description of human-community characteristics. It brought together two previous traditions: that of German Statistik and that of English political arithmetic (Lazarsfeld 1977). When it originated in Germany in the seventeenth and eighteenth centuries, the Statistik of Hermann Conring (1606–81) and Gottfried Achenwall (1719–72) was a means for classifying the knowledge needed by kings. This science of the state included history, law, political science, economics, and geography, that is, a major part of what later became the subjects of social studies, but presented from the point of view of their utility for the state. These various forms of knowledge did not necessarily entail quantitative measurements: they were basically associated with the description of specific territories in all their aspects. This territorial connotation of the word statistics would subsist for a long time in the nineteenth century.

Independently of the German Statistik, the English tradition of political arithmetic had developed methods for analyzing numbers and calculations, on the basis of parochial records of baptisms, marriages, and deaths. Such methods, originally developed by John Graunt (1620–74) in his work on bills of mortality, were then systematized by William Petty (1623–87). They were used, among others, to assess the population of a

Bibliography
Bickel P, Hammel E, O'Connell J W 1975 Is there a sex bias in graduate admissions? Data from Berkeley. Science 187: 398–404. Reprinted in: Fairley W B, Mosteller F (eds.) Statistics and Public Policy. Addison-Wesley, New York, pp. 113–30
Chaiken J, Chaiken M, Rhodes W 1994 Predicting violent behavior and classifying violent offenders. In: Understanding and Preventing Violence, Vol. 4: Consequences and Control. National Academy Press, Washington, DC, pp. 217–95
DeGroot M H, Fienberg S E, Kadane J B 1987 Statistics and the Law. Wiley, New York
Evett I W, Weir B S 1998 Interpreting DNA Evidence: Statistical Genetics for Forensic Scientists
Federal Judicial Center 1994 Reference Manual on Scientific Evidence. McGraw-Hill
Fienberg S E 1971 Randomization and social affairs: The 1970 draft lottery. Science 171: 255–61
Fienberg S E (ed.) 1989 The Evolving Role of Statistical Assessments as Evidence in the Courts. Springer-Verlag, New York
Finkelstein M O, Levin B 1990 Statistics for Lawyers. Springer-Verlag, New York
Gastwirth J L 1988 Statistical Reasoning in Law and Public Policy, Vol. 1: Statistical Modeling and Decision Science; Vol. 2: Tort Law, Evidence and Health. Academic Press, Boston
Gastwirth J L 2000 Statistical Science in the Courtroom. Springer-Verlag, New York


kingdom and to draw up the first forms of life insurance. They constitute the origin of modern demography.

Nineteenth-century statistics were therefore a fairly informal combination of these two traditions: taxonomy and numbering. At the beginning of the twentieth century, statistics further became a mathematical method for the analysis of facts (social or not) involving large numbers and for inference on the basis of such collections of facts. This branch of mathematics is generally associated with probability theory, developed in the seventeenth and eighteenth centuries, which, in the nineteenth century, influenced many branches of both the social and natural sciences (Gigerenzer et al. 1989). The word statistics is still used in both ways today, and both of these uses are still related, of course, insofar as quantitative social studies use, in varied proportions, inference tools provided by mathematical statistics. The probability calculus, for its part, grounds the credibility of statistical measurements resulting from surveys, together with random sampling and confidence intervals.

The diversity of meanings of the word statistics, maintained until today, has been heightened by the massive development, beginning in the 1830s, of bureaus of statistics, administrative institutions distinct from universities, in charge of collecting, processing, and transmitting quantified information on the population, the economy, employment, living conditions, etc. Starting in the 1940s, these bureaus of statistics became important data suppliers for empirical social studies, then in full growth. Their history is therefore an integral part of these sciences, especially in the second half of the twentieth century, during which the mathematical methods developed by university statisticians were increasingly used by the so-called official statisticians.
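The survey logic mentioned above, random sampling whose credibility is grounded by a confidence interval, can be sketched with a short calculation; the sample size and observed count below are invented for illustration.

```python
# A minimal sketch of how a confidence interval qualifies a survey estimate:
# the normal approximation for a sampled proportion.
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a proportion from a
    simple random sample (normal approximation)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical survey: 520 of 1,000 sampled households report some attribute.
low, high = proportion_ci(successes=520, n=1000)
print(f"{low:.3f} to {high:.3f}")  # about 0.489 to 0.551
```

The interval width shrinks with the square root of the sample size, which is why random sampling can substitute for exhaustive enumeration.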
Consequently the history of statistics is a crossroads history connecting very different fields, which are covered in other articles of this encyclopedia: general problems of the quantification of social studies (Theodore Porter), mathematical sampling methods (Stephen E. Fienberg and J. M. Tanur), survey research (Martin Bulmer), demography (Simon Szreter), econometrics, etc. The history of these different fields was the object of much research in the 1980s and 1990s, some examples of which are indicated in the Bibliography. Its main interest is to underscore the increasingly close connections between the so-called internal dimensions (history of the technical tools), the external ones (history of the institutions), and those related to the construction of social studies objects, born from the interaction between the three foci constituted by university research, administrations in charge of social problems, and bureaus of statistics. This co-construction of objects makes it possible to join historiographies that not long ago were distinct. Three key moments of this history will be mentioned here: Adolphe Quetelet and the average man (1830s), Karl Pearson and correlation (1890s), and the establishment of large systems for the production and processing of statistics.

1. Quetelet, the Average Man, and Moral Statistics


The cognitive history of statistics can be presented as that of the tension and sliding between two foci: the measurement of uncertainty (Stigler 1986), resulting from the work of eighteenth-century astronomers and physicists, and the reduction of diversity, which would be taken up by social studies. Statistics is a way of taming chance (Hacking 1990) in two different ways: chance and uncertainty related to protocols of observation, and chance and dispersion related to the diversity and the indetermination of the world itself. The Belgian astronomer and statistician Adolphe Quetelet (1796–1874) is the essential character in the transition between the world of uncertain measurement of the probability proponents (Carl Friedrich Gauss, Pierre-Simon de Laplace) and that of the regularities resulting from diversity, thanks to his having transferred, around 1830, the concept of average from the natural sciences to the human sciences, through the construction of a new being, the average man.

As early as the eighteenth century, specificities appeared from observations in large numbers: drawing balls out of urns, gambling, successive measurements of the position of a star, sex ratios (male and female births), or the mortality resulting from preventive smallpox inoculation, for instance. The radical innovation of this century was to connect these very different phenomena thanks to the common perspective provided by the law of large numbers formulated by Jacques Bernoulli in 1713. If draws from a constant urn containing white and black balls are repeated a large number of times, the observed share of white balls converges toward that actually contained by the urn. Considered by some as a mathematical theorem and by others as an experimental result, this law was at the crossroads of the two currents in epistemological science: one hypothetical-deductive, the other empirical-inductive.
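Bernoulli's urn, as stated above, can be watched numerically: the observed share of white balls converges toward the urn's true proportion as draws accumulate. The urn composition and draw counts below are arbitrary choices for illustration.

```python
# A simulation sketch of the law of large numbers: repeated draws from a
# constant urn, with the observed share converging to the true share.
import random

random.seed(42)
TRUE_SHARE = 0.3  # assumed proportion of white balls actually in the urn

def observed_share(n_draws):
    """Fraction of white balls observed in n_draws simulated draws."""
    white = sum(1 for _ in range(n_draws) if random.random() < TRUE_SHARE)
    return white / n_draws

for n in (100, 10_000, 1_000_000):
    print(n, round(observed_share(n), 3))  # gap from 0.3 shrinks as n grows
```

Whether one reads this as a theorem or as an experimental fact is exactly the ambiguity the paragraph above describes: the simulation exhibits the convergence, while the mathematics guarantees it.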
Beginning in the 1830s, the statistical description of observations in large numbers became a regular activity of the state. Previously reserved for princes, this information, henceforth available to enlightened men, was related to the population; to births, marriages, and deaths; to suicides and crimes; to epidemics; to foreign trade, schools, jails, and hospitals. It was generally a by-product of administrative activity, not the result of special surveys. Only the population census, the showcase product of nineteenth-century statistics, was the object of regular surveys. These statistics were published in volumes with heterogeneous contents, but their very existence

suggests that the characteristics of society were henceforth a matter of scientific law and no longer of judicial law, that is, of observed regularity and not of the normative decisions of political power. Quetelet was the man who orchestrated this new way of thinking the social world. In the 1830s and 1840s, he set up administrative and social networks for the production of statistics and established, until the beginning of the twentieth century, how statistics were to be interpreted. This interpretation is the result of the combination of two ideas developed from the law of large numbers: the generality of the normal distribution (or, in Quetelet's vocabulary, the law of possibilities) and the regularity of certain yearly statistics.

As early as 1738, Abraham de Moivre, seeking to determine the convergence conditions for the law of large numbers, had formulated the mathematical expression of the future Gaussian law as the limit of a binomial distribution. Then Laplace (1749–1827) had shown that this law constituted a good representation of the distribution of measurement errors in astronomy, hence the name that Quetelet and his contemporaries also used to designate it: the law of errors (the expression normal law, under which it is known today, would not be introduced until the late nineteenth century, by Karl Pearson). Quetelet's daring intellectual masterstroke was to bring together two forms: on the one hand the law of errors in observation, and on the other, the law of distribution of certain body measurements of individuals in a population, such as the height of conscripts in a regiment. The similar Gaussian look of these two distributions justified the invention of a new being with a promise of notable posterity in social studies: the average man.
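De Moivre's limit result can be checked numerically: for a large number of trials, the binomial probability at a point is closely approximated by the Gaussian "law of errors" with matching mean and spread. The parameters below are illustrative choices, not anything from de Moivre's text.

```python
# A numerical sketch of de Moivre's result: the binomial distribution
# approaches the Gaussian as the number of trials grows.
import math

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mean, sd):
    """Gaussian density, the 'law of errors'."""
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

n, p = 1000, 0.5
mean, sd = n * p, math.sqrt(n * p * (1 - p))
k = 510
print(binom_pmf(k, n, p))       # exact binomial probability at k = 510
print(normal_pdf(k, mean, sd))  # Gaussian approximation at the same point
```

The two printed values agree to several decimal places, which is the numerical content of treating the Gaussian as the limiting shape of the binomial.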
Thus, Quetelet restricted the calculation and the legitimate use of averages to cases where the distribution of the observations had a Gaussian shape, analogous to that of the distribution of the astronomical observations of a star. Reasoning on that basis, just as behind that distribution there was a real star (the cause of the Gaussian-shaped distribution), behind the equally Gaussian distribution of the height of conscripts there was a being of a reality comparable to the existence of the star. Quetelet's average man is thus the constant cause, prior to the observed controlled variability. He is a sort of model, of which specific individuals are imperfect copies.

The second part of this cognitive construction, which is so important in the later uses of statistics in social studies, is the attention drawn by the remarkable regularity of series of statistics, such as those of marriages, suicides, or crimes. Just as series of draws from an urn reveal a regularity in the observed frequency of white balls, the regularity in the rates of suicide or crime can be interpreted as resulting from series of draws from a population, some of the members of which are affected with a propensity to suicide or crime. The average man is therefore endowed not only with physical attributes but also moral ones, such as these propensities. Here again, just as the average heights of conscripts are stable whereas individual heights are dispersed, crime or suicide rates are just as stable, whereas these acts are eminently individual and unpredictable. This form of statistics, then called moral statistics, signaled the beginning of sociology, a science of society radically distinct from a science of the individual, such as psychology (Porter 1986). Quetelet's reasoning would ground the one developed by Durkheim in Suicide: A Study in Sociology (1897).
This way of using statistical regularity to back the idea of the existence of a society ruled by specific laws, distinct from those governing individual behavior, dominated nineteenth-century and, in part, twentieth-century social studies. Around 1900, however, another approach appeared, this one centered on two ideas: the distribution (no longer just the average) of observations, and the correlation between two or several variables, observed in individuals (no longer just in groups, such as territories).

2. Distribution, Correlation, and Causality


This shift of interest from the average individual to the distributions and hierarchies among individuals was connected to the rise, in late-century Victorian England, of a eugenicist and hereditarian current of thought inspired by Darwin (MacKenzie 1981). Its two advocates were Francis Galton (1822–1911), a cousin of Darwin, and Karl Pearson (1857–1936). In their attempt to measure biological heredity, which was central to their political construction, they created a radically new statistical tool that made it possible to conceive of partial causality. Such causality had been absent from all previous forms of thought, for which A either is or is not the cause of B, but cannot be so somewhat or incompletely. Yet Galton's research on heredity led to such a formulation: the parents' height explains the children's, but does not entirely determine it. The taller fathers are, the taller are their sons on average, but, for a father's given height, the sons' height dispersion is great. This formalization of heredity led to the two related ideas of regression and correlation, later to be extensively used in social studies as symptoms of causality.

Pearson, however, greatly influenced by the antirealist philosophy of the physicist Ernst Mach, challenged the idea of causality, which according to him was metaphysical, and stuck to that of correlation, which he described with the help of contingency tables (Pearson 1911, Chap. 5). For him, scientific laws are only summaries, brief descriptions in mental stenography, abridged formulas, a condensation of perception routines for future use and forecasting. Such formulas are the limits of observations that never perfectly respect the strict functional laws. The correlation coefficient makes it possible to measure the strength of the connection, between zero (independence) and one (strict dependence). Thus, in this conception of science, associated by Pearson with the budding field of mathematical statistics, the reality of things can only be invoked for pragmatic ends and provided that the perception routines are maintained. Similarly, causality can only exist insofar as it is a proven correlation, therefore predictable with a fairly high probability. Pearson's pointed formulations would constitute, in the early twentieth century, one of the foci of the epistemology of statistics applied to social studies. Others, in contrast, would seek to give new meaning to the concepts of reality and causality by defining them differently. These discussions were strongly related to the aims of the statistical work, strained between scientific knowledge and decisions.

Current mathematical statistics proceed from the works of Karl Pearson and his successors: his son Egon Pearson (1895–1980), the Polish mathematician Jerzy Neyman (1894–1981), the statistician pioneering in agricultural experimentation Ronald Fisher (1890–1962), and finally the engineer and beer brewer William Gosset, alias Student (1876–1937). These developments were the result of an increasingly thorough integration of so-called inferential statistics into probabilistic models. The interpretation of these constructions is always stretched between two perspectives: that of science, which aims to prove or test hypotheses, with truth as its goal, and that of action, which aims to make the best decision, with efficiency as its goal. This tension explains a number of controversies that opposed the founders of inferential statistics in the 1930s.
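Galton's formulation, taller fathers have taller sons on average yet with great dispersion, can be sketched with simulated heights. The population mean, standard deviation, and correlation of 0.5 below are assumed values chosen for illustration, not Galton's data.

```python
# A simulation sketch of regression toward the mean and correlation,
# assuming jointly Gaussian father/son heights with correlation R.
import random
import statistics

random.seed(0)
MEAN, SD, R = 175.0, 7.0, 0.5  # assumed population parameters (cm)

fathers = [random.gauss(MEAN, SD) for _ in range(20_000)]
# A son's height: pulled toward the mean, plus independent variation chosen
# so that sons have the same marginal spread as fathers.
sons = [MEAN + R * (f - MEAN) + random.gauss(0.0, SD * (1 - R**2) ** 0.5)
        for f in fathers]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

tall_sons = [s for f, s in zip(fathers, sons) if f > MEAN + SD]
print(round(correlation(fathers, sons), 2))  # near the assumed 0.5
print(round(statistics.mean(tall_sons), 1))  # above MEAN, below the fathers'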
In effect, the essential innovations were often directly generated within the framework of research applied to economic issues, for instance in the cases of Gosset and Fisher. Gosset was employed in a brewery. He developed product quality-control techniques based on a small number of samples, and so needed to appraise the variances and laws of distribution of parameters calculated on observations too few in number for the law of large numbers to apply. Fisher, who worked in an agricultural research center, could only carry out a limited number of controlled tests. He mitigated this limitation by artificially creating a randomness, itself controlled, for variables other than those whose effect he was trying to measure. This randomization technique thus introduced probabilistic chance into the very heart of the experimental process.

Unlike Karl Pearson, Gosset and Fisher used distinct notations to designate, on the one hand, the theoretical parameter of a probability distribution (a mean, a variance, a correlation) and, on the other, the estimate of this parameter, calculated on the basis of observations so few in number that it was impossible to disregard the gap between these two values, theoretical and estimated. This new system of notation marked a decisive turning point: it enabled an inferential statistics based on probabilistic models. This form of statistics was developed in two directions. The estimation of parameters, which took into account a set of recorded data, presupposed that the model was true. The information produced by the model was combined with the data, but nothing indicated whether the model and the data were in agreement. In contrast, hypothesis tests allowed this agreement to be tested and, if necessary, the model to be modified: this was the inventive part of inferential statistics.
In wondering whether a set of events could plausibly have occurred if a model were true, one compared these events, explicitly or otherwise, to those that would have occurred if the model were true, and made a judgment about the gap between these two sets of events. This judgment could itself be made according to two different perspectives, which were the object of vivid controversy between Fisher on the one hand, and Neyman and Egon Pearson on the other. Fisher's test was placed in a perspective of truth and science: a theoretical hypothesis was judged plausible or was rejected, after consideration of the observed data. Neyman and Pearson's test, in contrast, was aimed at decision making and action. One evaluated the respective costs of rejecting a true hypothesis and of accepting a false one, described as errors of Type I and Type II. These two different aims, truth and economy, although supported by close probabilistic formalisms, led to practically incommensurable argumentative worlds, as was shown by the dialogue of the deaf between Fisher on one side, and Neyman and Pearson on the other (Gigerenzer et al. 1989, pp. 90–109).
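The comparison described here, judging observed events against those that would occur "if the model were true", can be sketched with a Fisher-style randomization test of the kind his agricultural work introduced. The two small samples below are invented for illustration; the "model" is the hypothesis that treatment labels are arbitrary.

```python
# A sketch of Fisher-style randomization: if treatment had no effect, any
# relabeling of the eight observations would be equally likely, so we compare
# the observed gap to the gaps under every possible relabeling.
import itertools
import statistics

treated = [31, 30, 29, 33]  # invented yields under treatment
control = [26, 25, 28, 27]  # invented yields under control
observed_gap = statistics.mean(treated) - statistics.mean(control)

pooled = treated + control
count = more_extreme = 0
for assignment in itertools.combinations(range(len(pooled)), len(treated)):
    group = [pooled[i] for i in assignment]
    rest = [pooled[i] for i in range(len(pooled)) if i not in assignment]
    gap = statistics.mean(group) - statistics.mean(rest)
    count += 1
    if gap >= observed_gap:
        more_extreme += 1

print(more_extreme, "of", count)  # prints "1 of 70"
```

Only 1 of the 70 possible assignments produces a gap at least as large as the observed one, so under the "no effect" model the observed events are implausible; whether one then speaks of plausibility (Fisher) or of decision costs (Neyman and Pearson) is the controversy described above.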

3. Official Statistics and the Construction of the State


At the same time as mathematical statistics were developing, so-called official statistics were also being developed in bureaus of statistics, for a long time on an independent course. These latter did not use the new mathematical tools until the 1930s in the United States and the 1950s in Europe, in particular when the random sample-survey method was used to study employment or household budgets. Yet in the 1840s, Quetelet had already actively pushed for such bureaus to be set up in different countries, and for their scientification with the tools of the time. In 1853, he had begun organizing meetings of the International Congress of Statistics, which led to the establishment in 1885 of the International Statistical Institute (which still exists and includes mathematicians and official statisticians). One could write the history of these bureaus as an aspect of the more general history of the construction of the state, insofar as they developed and legitimized a common language specifically combining the authority of science and that of

the state (Anderson 1988, Desrosières 1998, Patriarca 1996). More precisely, every period of the history of a state could be characterized by the list of questions socially judged important that were consequently put on the agenda of official statistics. So were co-constructed three interdependent foci: representation, action, and statistics; that is, a way of describing and interpreting social phenomena (to which social studies would increasingly contribute), a method for determining state intervention and public action, and finally, a list of statistical variables and procedures aimed at measuring them.

Thus, for example, in England in the second third of the nineteenth century, poverty, mortality, and epidemic morbidity were followed closely in terms of a detailed geographical distribution (counties) by the General Register Office (GRO), set up by William Farr in 1837. England's economic liberalism and the Poor Law Amendment Act of 1834 (which led to the creation of workhouses) were consistent with this form of statistics. In the 1880s and 1890s, Galton and Pearson's hereditarian eugenics would compete with this environmentalism, which explained poverty in terms of social and territorial contexts. This new social philosophy was reflected in new forms of political action, and of statistics. Thus the social classification in five differentiated groups used by British statisticians throughout the twentieth century is marked by the political and cognitive configuration of the beginning of the century (Szreter 1996).

In all the important countries (including Great Britain) of the 1890s and 1900s, however, the work of the bureaus of statistics was guided by labor-related issues: employment, wages, workers' budgets, subsistence costs, etc. The modern idea of unemployment emerged, but its definition and its measurement were not yet standardized.
This interest in labor statistics was linked to the fairly general development of a specific labor law and the first so-called social welfare legislation, such as Bismarck's in Germany, or that developed in the Nordic countries in the 1890s. It is significant that the application of the sample-survey method (then called representative survey) was first tested in Norway in 1895, precisely in view of preparing a new law enacting general pension funds and invalidity insurance: this suggests the consistency of the political, technical, and cognitive dimensions of this co-construction.

These forms of consistency are found in the statistics systems that were extensively developed, at a different scale, after 1945. At that time, public policies were governed by a number of issues: the regulation of the macroeconomic balance as seen through the Keynesian model, the reduction of social inequalities and the struggle against unemployment thanks to social-welfare systems, the democratization of schooling, etc. Some people then spoke of a revolution in government statistics (Duncan and Shelton 1978), and underscored its four components, which have largely shaped the present statistics systems. National accounting, a vast construction integrating a large number of statistics from different sources, was the instrument on which the macroeconomic models resulting from the Keynesian analysis were based. Sample surveys made it possible to study a much broader range of issues and to accumulate quantitative descriptions of the social world, which were unthinkable at a time when observation techniques were limited to censuses and monographs. Statistical coordination, an apparently strictly administrative affair, was indispensable to make consistent the observations resulting from different fields. Finally, beginning in 1960, the generalization of computer data processing radically transformed the activity of bureaus of statistics.
So official statistics, placed at the junction between social studies, mathematics, and information on public policies, has become an important research component in the social studies. Given, however, that from the institutional standpoint it is generally placed outside, it is often hardly perceived by those who seek to draw up a panorama of these sciences. In fact, the way bureaus of statistics operate and are integrated into administrative and scientific contexts varies a lot from one country to another, so a history and a sociology of social studies cannot omit examining these institutions, which are often perceived as mere suppliers of data assumed to reflect reality, when they are actually places where this reality is instituted through co-constructed operations of social representation, public action, and statistical measurement.

See also: Estimation: Point and Interval; Galton, Sir Francis (1822–1911); Neyman, Jerzy (1894–1981); Pearson, Karl (1857–1936); Probability and Chance: Philosophical Aspects; Quantification in History; Quantification in the History of the Social Sciences; Quetelet, Adolphe (1796–1874); Statistical Methods, History of: Post-1900; Statistical Methods, History of: Pre-1900; Statistics: The Field

Bibliography
Anderson M J 1988 The American Census: A Social History. Yale University Press, New Haven, CT
Desrosières A 1998 The Politics of Large Numbers: A History of Statistical Reasoning. Harvard University Press, Cambridge, MA
Duncan J W, Shelton W C 1978 Revolution in United States Government Statistics, 1926–1976. US Department of Commerce, Washington, DC
Gigerenzer G et al. 1989 The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge University Press, Cambridge, UK
Hacking I 1990 The Taming of Chance. Cambridge University Press, Cambridge, UK
Klein J L 1997 Statistical Visions in Time: A History of Time Series Analysis, 1662–1938. Cambridge University Press, Cambridge, UK



Lazarsfeld P 1977 Notes in the history of quantification in sociology: Trends, sources and problems. In: Kendall M, Plackett R L (eds.) Studies in the History of Statistics and Probability. Griffin, London, Vol. 2, pp. 213–69
MacKenzie D 1981 Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge. Edinburgh University Press, Edinburgh, UK
Patriarca S 1996 Numbers and Nationhood: Writing Statistics in Nineteenth-century Italy. Cambridge University Press, Cambridge, UK
Pearson K 1911 The Grammar of Science, 3rd edn, rev. and enl. A. and C. Black, London
Porter T 1986 The Rise of Statistical Thinking, 1820–1900. Princeton University Press, Princeton, NJ
Stigler S M 1986 The History of Statistics: The Measurement of Uncertainty Before 1900. Belknap Press of Harvard University Press, Cambridge, MA
Szreter S 1996 Fertility, Class and Gender in Britain, 1860–1940. Cambridge University Press, Cambridge, UK

(c) Drawing formal inferences from empirical data through the use of probability.
(d) Communicating the results of statistical investigations to others, including scientists, policy makers, and the public.
This article describes a number of these elements, and the historical context out of which they grew. It provides a broad overview of the field that can serve as a starting point to many of the other statistical entries in this encyclopedia.

2. The Origins of the Field


The word statistics is related to the word state, and the original activity that was labeled as statistics was social in nature and related to elements of society through the organization of economic, demographic, and political facts. Paralleling this work to some extent was the development of the probability calculus and the theory of errors, typically associated with the physical sciences (see Statistical Methods, History of: Pre-1900). These traditions came together in the nineteenth century and led to the notion of statistics as a collection of methods for the analysis of scientific data and the drawing of inferences therefrom. As Hacking (1990) has noted: "By the end of the century chance had attained the respectability of a Victorian valet, ready to be the logical servant of the natural, biological and social sciences" (p. 2). At the beginning of the twentieth century, we see the emergence of statistics as a field under the leadership of Karl Pearson, George Udny Yule, Francis Y. Edgeworth, and others of the English statistical school. As Stigler (1986) suggests:
Before 1900 we see many scientists of different fields developing and using techniques we now recognize as belonging to modern statistics. After 1900 we begin to see identifiable statisticians developing such techniques into a unified logic of empirical science that goes far beyond its component parts. There was no sharp moment of birth; but with Pearson and Yule and the growing number of students in Pearson's laboratory, the infant discipline may be said to have arrived. (p. 361)

A. Desrosières

Copyright © 2001 Elsevier Science Ltd. All rights reserved.

Statistics: The Field


Statistics is a term used to refer to both a field of scientific inquiry and a body of quantitative methods. The field of statistics has a 350-year intellectual history rooted in the origins of probability and the rudimentary tools of political arithmetic of the seventeenth century. Statistics came of age as a separate discipline with the development of formal inferential theories in the twentieth century. This article briefly traces some of this historical development and discusses current methodological and inferential approaches as well as some cross-cutting themes in the development of new statistical methods.

1. Introduction
Statistics is a body of quantitative methods associated with empirical observation. A primary goal of these methods is coping with uncertainty. Most formal statistical methods rely on probability theory to express this uncertainty and to provide a formal mathematical basis for data description and for analysis. The notion of variability associated with data, expressed through probability, plays a fundamental role in this theory. As a consequence, much statistical effort is focused on how to control and measure variability and/or how to assign it to its sources. Almost all characterizations of statistics as a field include the following elements:
(a) Designing experiments, surveys, and other systematic forms of empirical study.
(b) Summarizing and extracting information from data.

Pearson's laboratory at University College, London quickly became the first statistics department in the world, and it was to influence subsequent developments in a profound fashion for the next three decades. Pearson and his colleagues founded the first methodologically oriented statistics journal, Biometrika, and they stimulated the development of new approaches to statistical methods. What remained, before statistics could legitimately take on the mantle of a field of inquiry separate from mathematics or the use of statistical approaches in other fields, was the development of the formal foundations of theories of inference from observations, rooted in an axiomatic theory of probability.

International Encyclopedia of the Social & Behavioral Sciences

ISBN: 0-08-043076-7
