CALIFORNIA INSTITUTE OF TECHNOLOGY
EARTHQUAKE ENGINEERING RESEARCH LABORATORY

LOW FREQUENCY DIGITIZATION ERRORS AND A NEW METHOD FOR ZERO BASELINE CORRECTION OF STRONG-MOTION ACCELEROGRAMS

by M. D. Trifunac

Report No. EERL 70-07

A Report on Research Conducted Under a Grant From the National Science Foundation

Pasadena, California
September, 1970

ABSTRACT

A study of the random digitizing errors introduced by the operator indicates that these errors are relatively small in the double-integrated digital data up to a period of about 16 seconds for a typical accelerograph record. A new method is proposed for standard baseline correction of accelerograms. It is based on high-pass filtering of the uncorrected data. Unlike the parabolic baseline correction, the new method has well defined frequency transfer function properties which are largely independent of the record length.

INTRODUCTION

A detailed quantitative knowledge of the nature of ground motion close to the source of an earthquake energy release comes from strong-motion accelerographs (e.g., see Hudson, 1963, 1965; Fremd, 1963; Halverson, 1965, 1970; Cloud, 1964; and Trifunac and Hudson, 1970). Several hundred such instruments are presently installed in southern California and are maintained by the U. S. Coast and Geodetic Survey. The only other comparable network has evolved during the last 15 years in Japan. Since the first strong-motion accelerogram was obtained some forty years ago during the 1933 Long Beach, California earthquake, more than one hundred strong-motion accelerograms have been recorded that are of special value for earthquake engineering and strong-motion seismology research. In order to extract maximum information from these records, two basic tasks must be considered.
The first task is related to the evaluation and full understanding of the methods of operation and field performance of the strong-motion accelerographs (Trifunac and Hudson, 1970). The second task is concerned with the methods of optimum accelerograph data digitizing and processing. In this paper we consider digitization errors and the baseline correction problem.

An accelerograph transducer, usually a single-degree-of-freedom, 60 percent damped oscillator with a high natural frequency of 12 cps to 25 cps, is used to record ground acceleration. The recorded instrument response approximates ground acceleration up to about 5 cps to 10 cps. With only a few exceptions (Jenschke and Penzien, 1964; McLennan, 1969; and Trifunac and Hudson, 1970) the accelerograph record is usually taken to represent actual ground acceleration, and no instrument correction is applied to it.

A digitized accelerogram cannot be integrated immediately in an attempt to determine ground velocity and displacement, for the following reasons. The initial velocity and displacement, and the actual zero baseline for the digitized accelerogram, are not known. Of these three unknowns, finding the zero baseline of the accelerogram is the most important step in accelerogram data processing.

There have been many significant contributions to the general problem of digitization, baseline correction and double integration of the accelerogram. Some of these investigations were by McComb, Ruge and Neumann, 1943; Housner, 1947; Hershberger, 1955; Berg and Housner, 1961; Berg, 196 ; Amin and Ang, 1966; Brady, 1966; Schiff and Bogdanoff, 1967; Poppitz, 1968; Hudson, Nigam and Trifunac, 1969; and Boyce, 1970. A majority of these investigators assume the zero acceleration baseline to be of the parabolic form.
Although no physical justification may be given for a curve of higher degree than the first, a straight line (Hudson et al., 1969), it has been generally agreed that the errors in the digitized accelerogram caused by the warping of the record and other similar distortions are in most cases corrected by the parabolic baseline. Also it has been shown (Hudson et al., 1969) that the spurious periods introduced into the accelerogram by the parabolic baseline correction do not seriously affect the accuracy of the response spectra calculations for periods up to about 5 seconds. The effects of the parabolic baseline correction on the ground velocity and displacement of relatively short record durations, assuming zero initial conditions, have been found to be relatively minor (Hudson et al., 1969).

For intermediate and long accelerograms, however, a parabolic baseline is not adequate. This is because errors in ground displacement introduced by digitization, warping and the transverse play of the accelerograph paper in an instrument mainly affect intermediate and long periods. At the present time an increasing number of tall buildings with long fundamental periods of vibration, and studies related to the nature of the earthquake energy release, require that the accuracy of the digitized accelerograms be extended to periods for which the parabolic baseline correction is not adequate. For this reason, in this paper the character of different errors in the accelerogram is first studied in order to determine the longest period up to which the accelerograph data can be employed. It is clear that our present estimate of the long period limit beyond which data will be considered as inaccurate depends on the presently available means for accelerogram digitization, and on the use of the digitized data.
Future advances in the methods of digitization will certainly improve the accuracies involved and will also extend our present long period limits. In the second part of this paper a new method of accelerogram baseline correction is proposed which, unlike the parabolic baseline, has frequency properties independent of the record length.

UNCORRECTED ACCELEROGRAMS

An extensive strong-motion earthquake accelerogram data processing program has been under way at the Earthquake Engineering Research Laboratory of the California Institute of Technology since 1960. The purpose of this program is to provide digitized accelerograph data processed by uniform methods. Such a program will enable future investigators to begin with the same basic data and will in this way allow meaningful comparisons of independent results (Hudson, Brady and Trifunac, 1969).

Original accelerograms, recorded on photographic paper and recently on 70 mm and 35 mm film, are kept in the Washington office of the USC&GS. From the original records, full-size contact film negatives are prepared by the staff of the Seismological Field Survey of the U. S. Coast and Geodetic Survey in San Francisco. Using these negatives, contact prints are made on frosted, translucent Mylar-based film which is mechanically strong, dimensionally stable, and gives excellent optical contrast for accurate setting of cross-hairs during digitization.

The digitizing of this translucent film is performed on a Benson-Lehner 099D data reducer. The maximum resolution of this unit is 312 digital counts per centimeter. The table length of 60 cm can accommodate about 30 seconds of record at the usual recording speeds. The acceleration records are digitized with unequally spaced data points in time, with an average of 10 to 40 points per second. The density of digitization depends on the frequency content of the record. Most records have "fixed traces" recorded by a light beam reflected from fixed mirrors attached to the accelerograph frame.
The recorded fixed traces often depart from a straight line because of paper distortion and transverse motions in the paper drive mechanism. By digitizing the fixed traces at intervals of about 1 point per second, or more, these long period errors that are introduced into the digitized accelerogram are for the most part eliminated. Digitized data of the fixed traces are first smoothed by a running average scheme using coefficients slightly different from the usual 1/4, 1/2, 1/4 for equi-spaced data, by weighting them according to the actual distance from the mid-point, and then subtracted from the accelerogram trace. Timing marks are also digitized and smoothed by 1/4, 1/2, 1/4 to form the basic time coordinates.

As the final step in processing of the uncorrected data, the zero axis is translated to pass through a point which makes the integral of the digitized acceleration curve zero. This is physically equivalent to making the change in ground velocity from the beginning to the end of the record equal to zero. Although not exactly correct physically, this procedure represents a simple, reproducible method of accurately defining the zero line. The above-described digitization and processing techniques do not in any way distort the original accelerogram in the domain of intermediate and long frequencies. The resulting data will be referred to as the basic "uncorrected accelerograms."

ERRORS IN THE DIGITIZED ACCELEROGRAMS

A study has been made of the long period errors contained in the digitized data in order to assess the accuracy of the data processing methods in preparing the uncorrected data, and to determine the frequency domain over which digitized data accurately represent the actual strong-motion accelerogram. Most of the long period errors can be divided into several groups:

1. Errors caused by the transverse play of the recording paper or film in the drive mechanism.
2.
Errors due to the warping of the records caused by aging, and errors introduced by photographic processing of contact negatives and Mylar translucent copies.
3. Errors caused by the enlargement of 70 mm and 35 mm film negatives.
4. Systematic errors due to imperfect mechanical traverse mechanisms of the cross-hair system on the digitizing table.
5. Random errors generated during the digitization process and caused by the inadequate resolution of the human eye.
6. Errors involved in the transducer element.

From the above list of errors it is clear that one is referring here mainly to instruments using light sensitive paper or film as the recording medium. In future applications, when the use of magnetic tape instruments becomes more frequent, other errors will become increasingly important. Since this study is aimed at the baseline correction of presently available records, our concern will be only with the accelerograms optically recorded on paper or film. We now consider, one by one, the above mentioned errors in an attempt to describe their nature and also, if possible, to show how these errors may be eliminated.

The errors caused by the transverse play of the recording paper

The errors caused by the transverse play of the recording paper or film in the instrument paper drive mechanism vary from instrument to instrument (Trifunac and Hudson, 1970) and may have amplitudes up to several millimeters on the recording paper. These errors, however, do not cause much difficulty unless there is no fixed trace in the instrument. Most instruments do have one or several fixed traces, so that all irregular transverse motions of the paper are recorded on them. By simultaneously digitizing the accelerogram and the fixed trace and subtracting, most of these errors are eliminated. For this reason, digitizing of the fixed trace is a standard procedure applied to all uncorrected accelerograms.
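A sketch of the two numerical steps described above, the distance-weighted three-point smoothing of the fixed trace and the zero-axis translation that makes the integral of the acceleration vanish, is given below. The unequal-spacing weights are an illustrative assumption, since the report does not list its exact coefficients; this scheme only reduces to the stated 1/4, 1/2, 1/4 for equi-spaced data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def zero_area_baseline(t, a):
    """Translate the zero axis so the integral of acceleration vanishes,
    i.e. the net velocity change over the record is zero."""
    return a - trapz(a, t) / (t[-1] - t[0])

def weighted_3pt_smooth(t, y):
    """Three-point running average; reduces to 1/4, 1/2, 1/4 weights for
    equi-spaced data.  The unequal-spacing weights below are a plausible
    distance-based variant, not the report's exact coefficients."""
    out = np.asarray(y, dtype=float).copy()
    for i in range(1, len(y) - 1):
        dl, dr = t[i] - t[i - 1], t[i + 1] - t[i]
        wl = 0.5 * dr / (dl + dr)               # weight for the left point
        wr = 0.5 * dl / (dl + dr)               # weight for the right point
        out[i] = wl * y[i - 1] + (1.0 - wl - wr) * y[i] + wr * y[i + 1]
    return out

t = np.linspace(0.0, 30.0, 3001)
a = np.sin(2 * np.pi * 1.5 * t) + 0.05          # trace with a constant offset
a0 = zero_area_baseline(t, a)
print(abs(trapz(a0, t)) < 1e-9)                 # → True
```

After the shift the integral of the digitized acceleration is zero by construction, which is the reproducible definition of the zero line used for the uncorrected data.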
The errors caused by the warping of film negatives

Errors due to warping of the film negatives and translucent Mylar copies, caused by chemical processing and later by aging, are estimated to be very small (Hudson et al., 1969). These errors are also eliminated by subtracting the digitized fixed trace from the accelerogram, since the two are nearly parallel and only a few centimeters apart, so that any distortion in the recording paper or negative affects both curves in essentially the same way.

The errors caused by printing processes

The errors caused by printing 70 mm or 35 mm film negatives are introduced during optical enlargement and come from two possible sources. The first source is lens imperfection, and the second is that the planes of the original negative and the projected image may not be parallel. With the equipment presently used at the California Institute of Technology, the second source appears to be the only significant one, and was estimated during the experiment described later in this section.

Systematic and random digitization errors

We consider now systematic errors caused by an imperfect mechanical traverse mechanism of the digitizer cross-hair system, together with random errors generated during digitization. These errors are simultaneously introduced and are therefore considered together. In order to determine the accuracy with which a typical group of operators can digitize an accelerogram, it was decided to digitize a "straight" line. To make a line as straight as possible, a 0.002 in. diameter copper wire about three feet long was stretched to the yield point a few centimeters above a Mylar translucent film. The shadow of the wire was projected on the film by a point light source some six feet above the flat table supporting the translucent film.
The film was exposed and processed by the same company that processes all the translucencies used in the Earthquake Engineering Research Laboratory to make record copies for standard accelerogram processing. When placed on the digitizer, the straight line extended from the lower left corner to the upper right corner of the rectangular digitizing area. A straight line with a slope was chosen because it was desired that both the x- and y-coordinates of the cross hair be changed by the operator from one point to the next. In this way, it can be expected that for every point a pair (x, y) is chosen independently.

Five repeated digitizations of the straight line were made by the staff presently participating in the earthquake data processing at the California Institute of Technology. The five traces, resulting from four different operators, are identified by arabic numerals. Before further analysis, each trace was translated and rotated by a least-squares-fitted straight line to a horizontal position. The five sets of data thus obtained are shown in Figure 1. As may be seen in Figure 1, the errors are of two different types. First, there are long period components in each line, some of which are common to all traces. This suggests that there is a systematic error introduced not by the operators but by the digitizing machine. Superimposed are intermediate and high frequency errors, here called random digitization errors, generated by the operators. Individual digitizations of the straight line consisted of about 1000 unequally spaced points per 60 cm of the horizontal projection of the straight line. To generate uniform data, 4096 equally spaced points were interpolated to each original sequence, assuming a straight line between successive digitized points.
If it is assumed that the digitizing errors are approximately normally distributed across the ensemble of five sequences, with a zero mean, then by averaging five independent digitizations one may approximately eliminate random errors and obtain an estimate of the systematic errors, here interpreted as coming from the mechanical imperfections of the digitizer. The average of the five digitizations z̄ is

[Figure 1: Straight line digitization; five traces z_1 through z_5 by four operators (operator No. 2 also digitized with a magnifying glass), considered as acceleration in pts/cm², together with their average z̄.]

given at the bottom of Figure 1 and displays long period fluctuations. The difference between the average and each individual digitization then represents the random errors introduced by the operators. The distribution of the amplitudes of these differences is considered next. For each difference (z_i − z̄) the standard deviation σ_i, i = 1, ..., 5, has been determined. By counting how many times data from the total of 4096 points lie between two specified levels, and taking these levels as a fraction of the standard deviation σ_i, a histogram may be constructed. If a great number of points is used, such a histogram can give an excellent approximation to the probability density distribution function. The amplitude level intervals used to compute the histograms have been chosen as σ_i/10. Five histograms are plotted in Figure 2 by connecting the frequency amplitudes in each interval with a continuous straight line. The curve specified by the circular dots in Figure 2 is the normal Gaussian distribution with zero mean and variance equal to unity. From the histograms of Figure 2, one can derive two important conclusions.
First, the random errors in digitizing a point on the data reducer are nearly normally distributed. Second, the average standard deviation for the random digitization errors is equal to one point. The importance of this result is that it shows that the apparent errors of digitization are essentially of the same order of magnitude as the resolving capability of the digitizer. As may be seen in Figure 1, the digitized points do not fall on discrete levels 1 point apart, as they would had a horizontal instead of a sloping straight line been digitized.

[Figure 2: Distribution of random digitization errors; histograms of (z_i − z̄)/σ_i for each operator, with σ_1 through σ_5 close to one point, compared with the Gaussian distribution of zero mean and unit variance.]

To determine the predominant frequencies of the random digitization errors, five Fourier amplitude spectra were computed from the five differences obtained by subtracting the average z̄ from the individual digitizations z_i of Figure 1. The average of these five Fourier amplitude spectra is plotted in Figure 3, which shows that the errors for the intermediate frequencies are small. The largest errors occur at low frequencies. Figure 4 is the Fourier amplitude spectrum of the averaged digitization z̄ of the straight line, plotted versus period measured in cm on the digitizer table. These spectrum amplitudes are associated with the systematic long period errors representing imperfections in the mechanical construction of the digitizer.

Effect of accelerogram errors 4 and 5 on displacement

In order to determine the nature of probable errors in the ground displacement determined from the "accelerograms" of Figure 1, the five traces were double integrated assuming zero initial conditions. The units on the acceleration axis are now considered to be pts/cm².
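The double integration with zero initial conditions can be sketched with a cumulative trapezoidal rule. The example below, with an assumed constant baseline error e, shows why baseline errors matter: e grows into the parabolic displacement drift e·t²/2, the kind of systematic trend that appears in double-integrated data.

```python
import numpy as np

def double_integrate(t, a):
    """Integrate acceleration twice, with zero initial velocity and zero
    initial displacement, using the cumulative trapezoidal rule."""
    dt = np.diff(t)
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    d = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return v, d

# a constant baseline error e (in pts/cm^2, say) integrates into the
# displacement drift e*t^2/2
t = np.linspace(0.0, 60.0, 6001)
e = 0.01
v, d = double_integrate(t, np.full_like(t, e))
print(round(d[-1], 3), round(e * t[-1] ** 2 / 2, 3))   # → 18.0 18.0
```

The trapezoidal rule is exact here because the velocity of a constant acceleration is linear, so the computed drift matches e·t²/2 to rounding error.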
The result is shown in Figure 5. All the displacement curves, with units of pts, have very similar shapes and clearly demonstrate that there is a systematic trend, apparently caused by the systematic errors z̄ (Figure 1) in the data reducer. We might expect that the average x̄ of the five displacement curves would give a fair estimate of this systematic error, while the fluctuations of the individual curves about this average would give an estimate of the random digitizing errors. Figure 6 gives the standard deviation of these fluctuations. The predominant frequency components in the fluctuations of the double-integrated displacement curves (x_i − x̄) are shown in Figure 7.

[Figure 3: Average Fourier amplitude spectrum of the five digitization differences, in pts/cm versus frequency in rad/cm.]

[Figure 4: Fourier amplitude spectrum of the average digitization z̄ of the straight line, versus period in cm.]

[Figure 5: Double-integrated data x_i from the digitizations z_i of the straight line; displacement in pts.]

[Figure 6: Standard deviation of the fluctuations of the double-integrated curves about their average x̄; displacement in pts.]

[Figure 7: Fourier amplitude spectra of x_i − x̄, the double-integrated random digitization errors, in pts·cm versus period in cm.]

[Figure 8: Speed histogram of earthquake records, μ = 1.11 cm/sec.]

[Figure 9: Sensitivity histogram of earthquake records, μ = 16.5 cm/g.]

The difference curves (x_i − x̄) represent the displacement that would be obtained from digitizing a zero acceleration curve with an ideal data reducer.
Here an "ideal data reducer" is to be understood as a system which is free of errors in its mechanical system, but which still has the same resolution of 312 points per centimeter and is subject to the usual personal errors of the operator. We now analyze the Fourier spectra in Figure 7, in which the Fourier amplitude in units of pts·cm is plotted versus period expressed in centimeters on the digitizer table. Although there are some differences in the long period part, all the spectra indicate the same general features. The average spectral amplitudes slowly increase up to about 60 pts·cm at a period of about 18 cm, and then rapidly build up to amplitudes of 300-400 pts·cm.

For the purpose of this analysis, we shall characterize the population of about 100 past accelerograph records (Hudson, Brady and Trifunac, 1969) by the mean speed of the recording paper in cm/second and the mean sensitivity in cm/g. Both parameters are to be measured from the records actually used for digitization in the processing of the uncorrected accelerograph data, and thus may differ from the actual instrument settings. The histogram showing the frequency of occurrence of various speeds is shown in Figure 8. The calculated mean for this two-population distribution is μ = 1.11 cm/sec. In future work μ will probably gradually increase, because the printed records enlarged from 70 mm or 35 mm film negatives will have greater apparent speeds by the factors introduced through the enlargement. As may be seen in Figure 9, the distribution of the sensitivity is spread from 5 cm/g to 35 cm/g, with the mean value μ = 16.5 cm/g.

Figures 1 through 7 are plotted by employing units measured on the digitizer, in order to maintain the generality of the results. With a typical instrument sensitivity of 16.5 cm/g on the digitizer table, we find that with 312 points per centimeter, 5150 points correspond to 981 cm/sec².
The mean paper velocity for a typical record indicates that horizontally one second is equivalent to 1.11 cm. Hence, vertically 5150 points correspond to 798 cm/cm², or 6.45 points per 1 cm/cm², and for the double integrated curves in Figure 5, 6.45 pts/cm. A record length of 60 cm (Figure 5), whose Fourier amplitudes are shown in Figure 7, would have amplitudes of sinusoidal components up to about 2 pts for periods less than 18 cm, and amplitudes of the order of 10 pts for periods of about 30 cm. Since the average record speed is 1.11 cm/sec, the period of 18 cm on the digitizer corresponds to a period of about 16 seconds. Thus the ground displacement obtained by double integration of a typical accelerogram will have errors in amplitude of about 3 mm for waves with periods up to about 16 seconds. For waves with a 30-second period, these errors will be of the order of 2 cm.

From Figure 7, it may be concluded that the expected average errors in the ground displacement are relatively small up to about a 16 second period, and then rapidly build up for the longer periods. Thus we decide to filter out these long-period errors from the accelerogram, starting at about a 16 second period. Whether 16 seconds should be chosen as the limiting period beyond which the errors are to be considered serious is a delicate question. It certainly depends on the expected use of the accelerogram, and in particular on the required accuracy of the computed ground motion. If one could expect long period ground displacements of several meters, then errors of several centimeters would certainly be acceptable in the computed ground displacements, and it would be possible to extend the validity of twice-integrated accelerograms up to perhaps 30 or even 50 second periods.
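The unit conversions above can be verified with a few lines of arithmetic; the values below differ from the report's figures only in rounding (5148 vs 5150 pts per g, 796 vs 798 cm/cm²):

```python
# digitizer-to-physical unit conversions, using the report's mean values
pts_per_cm = 312            # digitizer resolution
sens_cm_per_g = 16.5        # mean record sensitivity, cm per g
speed_cm_per_s = 1.11       # mean paper speed, cm per second
g = 981.0                   # cm/s^2

pts_per_g = sens_cm_per_g * pts_per_cm          # ≈ 5150 pts for 1 g
g_table = g / speed_cm_per_s ** 2               # 1 g in table units, cm/cm^2
pts_per_table_unit = pts_per_g / g_table        # ≈ 6.45 pts per cm/cm^2
table_cm_for_16s = 16.0 * speed_cm_per_s        # ≈ 18 cm period on the table
print(pts_per_g, round(g_table), round(pts_per_table_unit, 2),
      round(table_cm_for_16s, 1))
```

Note that the time axis on the table is in cm, which is why 1 g must be divided by the square of the paper speed before converting to points.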
For the present analysis, we shall adopt the cut-off period of 16 seconds, because this range covers most of the periods of engineering structures; only in exceptional cases, such as long suspension bridges, would the fundamental period exceed this limit. A particular sinusoidal component of the computed ground displacement curves should then on the average be within 3 mm of the actual ground amplitudes, and we believe that this will provide sufficient accuracy for most engineering applications of the computed ground displacements. Whenever longer period motions are required for special research, one can always go back to the original uncorrected accelerogram data and, if possible, extend the information beyond 16 seconds according to the particular needs of that study.

A test of the acceleration transducer at low frequencies

In order to examine the behavior of the acceleration transducer in the low frequency domain, four experiments were performed. The SMA-1 Kinemetrics accelerograph was mounted on a horizontal table constrained to move along a horizontal line. The table was moved by hand, following approximately a single cycle of a sinusoidal curve, with the fundamental period increasing in successive tests from about 10 seconds to about 35 seconds. The motion was monitored and measured by a laboratory displacement meter which recorded on a Brush pen recorder. The longitudinal accelerograph transducer of the SMA-1 was aligned with the direction of table motion. The displacements for the four experiments were calculated from the accelerations recorded on the 70 mm film in the accelerograph. The negatives were enlarged to prints 24 inches long and digitized in the same way as the standard uncorrected accelerograms. In order to increase the usual accuracy of digitization, a magnifying glass was used on the cross hair of the digitizer. Most of the uncorrected accelerograms have been processed without a magnifying glass on the digitizer.
The use of the magnifying glass is mainly a matter of convenience, although it has been found to slightly increase the digitizing accuracy. A sloping zero acceleration baseline was inserted in the four digitized accelerograms by minimizing the root mean square of the acceleration, and the accelerations were double-integrated in the usual way to obtain displacement. The measured and calculated displacements have been plotted on the same graphs (with a displaced ordinate) to allow comparison (Figures 10, 11, 12 and 13). A timing discrepancy of about 3 per cent can be seen in these records, but it does not affect the results of this experiment in any way. In Figures 10 to 13, the computed and measured traces are displaced by 5 inches to allow a comparison.

The agreement between the double-integrated accelerograms recorded on the SMA-1 and the measured displacement curves for periods of about 10 seconds (Figure 10) and 12 seconds (Figure 11) is excellent. The agreement for the period close to 20 seconds (Figure 12) is still very good, but indicates small long-period drifts of several inches in amplitude which are very similar to the fluctuations caused by the random digitization errors. For experiment No. 4 (Figure 13), with a predominant period of about 35 seconds, the agreement between the computed and measured table displacements is poor.

These four experiments, designed to examine the performance of the acceleration transducer of a typical accelerograph for low frequency motions, demonstrate that the instrument performance is very good, and that the long-period fluctuations in these experiments most probably come from the random digitization errors only. These tests further indicate that excellent agreement between computed and recorded displacement may be obtained for periods of 10 and 12 seconds, with digitization errors becoming noticeable at about a 20-second period.
This independent experiment is in good agreement with our ol qunort spu0d’s- SNL Ol 8 -27- G3YuNSV3N: VON LNSWikadx3 GaivinotWws ed Ol sayou! - |NSW0V1dSI0 i gana Spuoses- SII. -28- aaunsvaw cON LNAWIY3dxd seyoul- |NAWSOVIdSIG zi gunn spuoces - NIL vt zl oO 8 a t. 2 Jo -29- ‘ ——— G3LV1NDWO —— 40 aaunsvan loz © ON LNAWINSdx4 seyoul- | NANFOV1dS!0 -30- el auno1g spuooas- NIL 02 aalvanoiwo G3uNSV3N: PON LN3WIM3dx4 ol 02 of Ov seyoul- | NSWIOV1dSI0 -31- previous results which indicate that accurate computed displacement curves may be obtained for periods up to about 16 seconds. The sloping baselines that were fitted to the accelerograms for the above four experiments showed a marked similarity, All four tests showed that the baseline was rotated about the center of the 24 in. record by 3.1210" radian or about 0,1 mm in 12 ins. This indicates that the fixed trace and the undeflected acceleration trace were not parallel to this extent. After a detailed investigation, it was concluded that this could have occurred during the enlargement process if the negative and print planes were not parallel. It would seem, therefore, that any acceleration trace obtained through an enlargement process would have to be corrected by means of a sloping baseline inserted by minimizing the root mean square of acceleration as described above. ‘The error analysis presented here has been concerned with long- period errors in the digitized accelerograms. It was motivated by the need to establish low frequency limits above which optically digitized accelerograms may be considered as sufficiently accurate for calculations of the ground displacement. High frequency errors in the digitized accelerograms are important for the treatment of the instrument correction of the accelerograms, but are not considered in this paper. These errors are smoothed by the double integration process and do not significantly affect the accuracy of the computed ground motion. 
ACCELEROGRAM BASELINE CORRECTION

In the preceding sections it was demonstrated that the parabolic baseline correction does not meet all the requirements for a standard baseline correction, and that it depends largely on the record length. It is clear that, depending on the record length, low frequency components will be treated differently from one accelerogram to another if a parabolic baseline correction is used. This would be an undesirable feature for what is hoped to be a "standard" data processing procedure. The analysis of the errors present in the uncorrected accelerograms has indicated the main sources of error, and in particular how these errors are distributed in frequency space. The criteria for a baseline correction can then be clearly formulated from the physical point of view as follows. The baseline corrected digitized accelerogram data should be processed in such a way that most of the actual, physically real signals are preserved, while the long-period noise introduced into the data by the digitization process is eliminated.

It would be easy to achieve the separation of the long-period digitization errors if they were confined to a well defined frequency band. Although sharp boundaries cannot be unequivocally determined, the analysis of the preceding section shows that for periods longer than about 16 seconds digitization noise may give excessively large displacement amplitudes. In the decision process of determining this cut-off period, our preference would be to filter out some of the possibly realistic seismic signals rather than to contaminate the baseline corrected data with processing errors. As already mentioned, it should be clear that the final choice of the cut-off period of 16 seconds is a consequence of the presently available digitizing equipment at the California Institute of Technology and the accuracy of the operators who are currently digitizing accelerograph records.
The advancement of digitizing technology will, no doubt, change data processing capabilities, and the present limits will be changed accordingly. The proposed standard baseline correction procedure will consequently consist of filtering out all periods longer than 16 seconds present in the uncorrected data. We now proceed to describe the details of this procedure, which is briefly summarized in the flow chart in Figure 14.

Prior to filtering, a straight zero acceleration baseline is least square fitted to the uncorrected accelerogram a1(ti) in order to eliminate distortions caused by the photographic enlargement of the film negatives. Although not essential, this step is expected to improve the accuracy of the filtering procedures later applied to the accelerogram a2(nDt). This step should actually be applied only to records enlarged from 35 mm and 70 mm negatives and is not required for directly recorded paper records. Since for the paper recording instruments this rotation would in any event be small compared to the effects of the filtering process, it has been decided to apply this step uniformly to all digitized uncorrected data prior to the filtering, irrespective of the type of the original record.

[Figure 14: flow chart of the standard accelerogram baseline correction procedure.]

Before filtering, still another correction is made. It consists of fitting a straight line to the velocity obtained by integrating the a1(ti) accelerogram. The linear term in this correction introduces a constant v1 translation in the baseline of a1(ti). The purpose of this correction is described later. Since digital filtering requires equally spaced data, 50 points per second are next interpolated to the unequally digitized accelerogram to give a2(nDt), with Dt = 0.02 seconds.
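The interpolation step just described can be sketched as follows; this is an illustrative Python fragment (linear interpolation is assumed, and the unequally spaced trace is synthetic), not the report's actual program.

```python
import numpy as np

def interpolate_to_50sps(t_uneq, a_uneq, dt=0.02):
    """Interpolate unequally spaced digitized acceleration onto an
    equally spaced grid of 50 points per second (dt = 0.02 s)."""
    t_eq = np.arange(t_uneq[0], t_uneq[-1] + dt / 2, dt)
    return t_eq, np.interp(t_eq, t_uneq, a_uneq)

# Illustrative unequally spaced digitization of a smooth trace.
rng = np.random.default_rng(0)
t_uneq = np.sort(rng.uniform(0.0, 10.0, 400))
t_uneq[0], t_uneq[-1] = 0.0, 10.0          # pin the record endpoints
a_uneq = np.cos(2 * np.pi * 0.5 * t_uneq)  # a 2 s period test signal
t_eq, a_eq = interpolate_to_50sps(t_uneq, a_uneq)
```

For a smooth low-frequency trace the linear interpolation error is small compared with the digitization noise discussed earlier.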
The removal of long periods is performed by low-pass filtering the accelerogram with an Ormsby digital filter (Ormsby, 1961) and then subtracting the low-pass filtered accelerogram (equivalent to a new zero baseline) from the original accelerogram. As is well known, direct application of the Ormsby filter to the acceleration data would not be economical, since for cut-off periods of 16 seconds the filtering window must be as long as the whole accelerogram, or even longer. To accomplish more economical filtering, we first low-pass filter the accelerogram using the equally weighted running mean filter with a window width Tw = 0.40 sec. The transfer function H1(f) of such a digital filter is given approximately by (Holloway, 1958)

    H1(f) = sin(pi f Tw) / (pi f Tw) .                                   (1)

For f = 0 we have H1(f) = 1, and for an increasing frequency H1(f) decreases towards its first zero at f Tw = 1. For Tw = 0.40 sec., this first zero is at f = 2.5 cps. It is also easy to show that this filter does not introduce any phase shifts into the original unfiltered data.

[Figure 15: transfer function H1(f) for the equally-weighted running mean filter, averaging interval Tw = 0.40 sec, plotted for frequencies up to 0.10 cps.]

After filtering, the low-pass filtered data a3(nDt) (Figure 14) are decimated and only every tenth point is kept. The new Dt1 thus becomes 0.20 seconds. The new data have the Nyquist frequency fN = 2.5 cps, and preserve all the long-period components up to about f = 0.10 cps with essentially undistorted amplitudes. This is shown in Figure 15, in which H1(f) is plotted for low frequencies up to f = 0.10 cps.

The low-pass filtered accelerogram data a3(nDt1) are now used as an input to a low-pass Ormsby filter. The result will be a4(nDt1) (Figure 14). Following Ormsby (1961), we define lambda = w/ws, lambda_c = wc/ws and lambda_T = (wT - wc)/ws, where wc = cut-off frequency, wT = filter roll-off termination frequency and ws = sampling frequency. The roll-off from wc to wT is linear.
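The running-mean prefilter and the decimation step described above can be sketched as follows. This is an illustrative Python fragment, not the report's program; a 20-sample window is assumed so that the window spans Tw = 0.40 s at dt = 0.02 s and the first transfer-function zero falls at 2.5 cps (an even-length window introduces a half-sample shift, ignored here).

```python
import numpy as np

def running_mean(a, dt=0.02, tw=0.40):
    """Equally weighted running-mean low-pass filter whose window spans
    tw seconds; transfer function ~ sin(pi*f*tw)/(pi*f*tw), first zero
    at f = 1/tw = 2.5 cps for tw = 0.40 s."""
    n = int(round(tw / dt))        # 20 samples span 0.40 s
    w = np.ones(n) / n
    return np.convolve(a, w, mode="same")

dt = 0.02
t = np.arange(0.0, 200.0, dt)
low = running_mean(np.sin(2 * np.pi * 0.05 * t))   # 20 s period: passes
high = running_mean(np.sin(2 * np.pi * 2.5 * t))   # 2.5 cps: at first zero
a_decimated = low[::10]                            # keep every tenth point
```

Away from the record ends, the 2.5 cps component is annihilated (each window averages exactly one full period) while the 20 s component passes with amplitude very close to unity, consistent with equation (1).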
With time coordinate tn = n Dt1 and Dt1 = 1/fs, the weights for the Ormsby filter become

    bn = [cos(2 pi lambda_c n) - cos(2 pi (lambda_c + lambda_T) n)] / (2 pi^2 lambda_T n^2) ,
                                       n = 0, +-1, +-2, ..., +-N ,       (2)

with b0 given by the limit 2 lambda_c + lambda_T. The quantity lambda_T = (wT - wc)/ws specifies the sharpness of the roll-off after lambda_c and, together with the finite number of weights N, measures the resultant accuracy of the filter. The accuracy is reduced for lower lambda_T and/or lower N. For N -> infinity, the transfer function H2(f) approaches the ideal H2bar(f) given by

    H2bar(lambda) = 1                                for 0 <= lambda <= lambda_c
    H2bar(lambda) = 1 - (lambda - lambda_c)/lambda_T for lambda_c <= lambda <= lambda_c + lambda_T   (3)
    H2bar(lambda) = 0                                for lambda >= lambda_c + lambda_T .

[Figure 16: transfer function H2(f) for the Ormsby filter, fc = 0.05 cps, fT = 0.07 cps, number of weights N = 250, filtering interval T = 100 sec.]

[Figure 17: combined transfer function H(f) = H1(f) H2(f), plotted for frequencies up to 0.10 cps.]

The dependence of the error

    e(lambda_T, N) = H2(f) - H2bar(f)                                    (4)

on lambda_T and N is given in closed form by Ormsby (1961). This expression is independent of lambda_c and f, and is adopted in the computer program for the baseline correction. It determines N, and thus the width of the filtering window, for the given lambda_T such that e < 1.2 per cent. The computed transfer function H2(f), based on N = 250 filter weights for lambda_T = 0.004, is plotted in Figure 16. The resultant transfer function H(f) of the combined running mean filter and Ormsby filter is then given by the product of the transfer functions H1(f) and H2(f), and is plotted in Figure 17. Since for the frequencies considered H1(f) is nearly unity, the resulting transfer function in Figure 17 is almost the same as that in Figure 16 for H2(f). The resulting low-pass filtered data a4(nDt1) now represent the zero baseline, apart from an additive constant.

The preliminary baseline corrected accelerogram a5(ti) is next obtained by subtracting the low-pass filtered accelerogram a4(ti) from the original unequally spaced data a1(ti).
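A sketch of the Ormsby weights of equation (2) follows, using the parameter values quoted in the text (fc = 0.05 cps, fT = 0.07 cps, fs = 5 cps, N = 250). This is an illustrative Python fragment assuming the trapezoidal (linear roll-off) spectral window of Ormsby (1961); the normalization of the weights to unity sum, enforcing unit gain at f = 0, is our assumption and not necessarily the report's convention.

```python
import numpy as np

def ormsby_weights(fc, ft, fs, N):
    """Symmetric low-pass Ormsby filter weights (eq. 2): linear spectral
    roll-off from fc to ft; fs is the sampling rate, 2N+1 weights."""
    lc, lt = fc / fs, (ft - fc) / fs
    n = np.arange(1, N + 1)
    b = (np.cos(2 * np.pi * lc * n) - np.cos(2 * np.pi * (lc + lt) * n)) / (
        2 * np.pi ** 2 * lt * n ** 2)
    b = np.concatenate([b[::-1], [2 * lc + lt], b])   # b0 = limit at n = 0
    return b / b.sum()                                # unity gain at f = 0

fs, fc, ft, N = 5.0, 0.05, 0.07, 250                  # values used in the text

def gain(b, f, fs):
    """Evaluate the (real, zero-phase) transfer function at frequency f."""
    n = np.arange(-(len(b) // 2), len(b) // 2 + 1)
    return np.sum(b * np.cos(2 * np.pi * f / fs * n))

b = ormsby_weights(fc, ft, fs, N)
```

The realized transfer function is near unity in the passband, near one half at the middle of the roll-off (0.06 cps), and near zero well into the stopband, in line with the quoted accuracy of about 1.2 per cent.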
When these data are integrated once to get the ground velocity and twice to get the ground displacement, it will be seen that two additional constants are required, namely the initial velocity v0 and the initial displacement d0. The following simple physical reasons will indicate the best way to determine these constants.

As may be recalled, the baseline corrected data are obtained by subtracting the low-pass filtered accelerogram a4(ti) from the accelerogram a1(ti), from which a least square fitted straight line and a constant v1 were subtracted, as described earlier. Although the slope of this straight line approximately eliminates distortions caused by photographic enlargement, the constant term c0 in such a correction may well be in error, because there is no physical reason for the mean value of the ground acceleration over the time interval considered to be zero. Since the mean value of the ground acceleration is actually very small, as an initial step the uncorrected data are processed assuming that the mean of the acceleration is zero.

It is further clear from the physical nature of strong earthquake ground motion that for large time the velocity must tend to zero. From these considerations it may be concluded that a straight line error is present in the computed velocity and that, under these conditions, it will not tend to zero for t -> infinity. A simple procedure is then to eliminate this error by fitting a two-parameter straight line v0 + v1 t to the computed ground velocity. Such a procedure also gives an estimate of the initial velocity v0.

The correction v1, the mean of the ground acceleration curve, applied before the filtering process, introduces a DC change in the as yet uncorrected baseline accelerogram a1(ti), and thus may require some small additional changes. We now consider reasons for these additional corrections. The accelerogram to be analyzed is given in digitized form over the time interval from 0 to T and is undefined outside that interval.
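The velocity-trend removal described above (integrate the acceleration, fit v0 + v1 t, subtract the line) can be sketched as follows. This is an illustrative Python fragment with a synthetic acceleration carrying a deliberate 0.01 baseline offset, which integrates to a linear drift in velocity; the units and values are assumptions for the example only.

```python
import numpy as np

def correct_velocity(a, dt=0.02):
    """Integrate acceleration to velocity (trapezoidal rule), then
    least-squares fit v0 + v1*t and remove it: v0 estimates the initial
    velocity, v1 the residual mean of the acceleration."""
    t = np.arange(len(a)) * dt
    v = np.concatenate([[0.0], np.cumsum((a[1:] + a[:-1]) / 2) * dt])
    v1, v0 = np.polyfit(t, v, 1)           # slope, then intercept
    return v - (v0 + v1 * t), v0, v1

dt = 0.02
t = np.arange(0.0, 40.0, dt)
a = np.sin(2 * np.pi * 0.5 * t) + 0.01     # 0.01 baseline offset
v_corr, v0, v1 = correct_velocity(a, dt)
```

The fitted slope v1 recovers the imposed acceleration offset, and the corrected velocity no longer drifts away from zero at large time.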
It may be assumed that the instrument was triggered by one of the first arrivals of earthquake waves, so that only a very short and actually insignificant portion of the accelerogram is lost, over a short interval of time, say from -tau to 0. In this way we have extended the accelerogram outside the interval 0 to T. If the time window over which filtering is performed is designated by Tw, then the accelerogram to be filtered must be extended from the 0 to T interval to the -Tw/2 to T + Tw/2 interval. If the extended accelerogram is taken to be zero outside 0 to T, and if after the first low-pass filtering this zero level is changed, it may be necessary to correct the zero baseline for these additional boundary effects. An additional change in the zero level of the accelerogram is introduced by subtracting the low-pass filtered data from the original data used to compute c0, c1 and v1 prior to filtering. The simplest way of carrying out this correction is to put the once corrected accelerogram through the whole procedure again, several times, until the difference between two successive baseline corrections is negligibly small. This iterative technique is indicated in the flow chart of Figure 14 with a dashed line.

To demonstrate how the above described baseline correction method filters out long-period components in the accelerogram and ground displacement, we consider the N21E component of acceleration recorded at Taft, California, during the 21 July 1952 earthquake. Figure 18 gives a plot of this accelerogram (Hudson, Brady and Trifunac, 1969). We first least square fit a straight sloping line to the accelerogram, and then least square fit a straight line to the velocity, which gives an estimate of the initial velocity v0. By using this estimate of v0 and double integrating a1(t), one obtains the ground displacement shown in Figure 19.
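The iterative repetition of the correction pass mentioned above (the dashed loop in Figure 14) can be sketched generically as follows. This is an illustrative Python fragment: `correct_once`, the tolerance, and the toy de-meaning pass are all hypothetical stand-ins for one complete filtering pass of the report's procedure.

```python
import numpy as np

def iterate_baseline(a, correct_once, tol=1e-6, max_iter=5):
    """Repeat the full correction pass until two successive baseline
    corrections differ by a negligible amount."""
    prev = a
    for _ in range(max_iter):
        cur = correct_once(prev)
        if np.max(np.abs(cur - prev)) < tol:
            return cur
        prev = cur
    return prev

# Toy stand-in for one correction pass: remove the current mean.
demean = lambda x: x - x.mean()
x = np.array([1.0, 2.0, 3.0, 4.0])
y = iterate_baseline(x, demean)
```

With the toy de-meaning pass, the loop converges on the second pass, since a second de-meaning leaves the data unchanged.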
These procedures correspond to steps 1, 2, 3 and 4 shown in Figure 14. This ground displacement, derived from the uncorrected data prior to any filtering, contains long-period random fluctuations introduced by digitization. If the accelerogram a1(ti) is now processed once through the standard filtering procedure, following steps 5, 6, ..., 14, 15 shown in Figure 14, the resulting displacement is as shown in Figure 19. We shall call this sequence of processing steps 5, 6, ..., 14, 15 the "First Iteration". Likewise, we will call steps 16, 6, ..., 14, 15 the "Second Iteration", and so on (see Figure 14). As described above, the fact that step 3 (Figure 14) is performed on the unfiltered data may cause small changes in the new values of v0 (step no. 12) obtained upon subtracting the low-pass
