1. urtapt analysis by campaign, with alternatives concerning drift and weighting, plus some new post-processing utilities for data extraction from tp-files, for statistics and plots
2. urtapm
3. merged solution by urtapt
Below we explain:
(Note: do_merging_job can be used also after urtap-merging-pp if directive U remains unused in do_merging_job.)
1. urtapt for each campaign separately
2. urtapm to merge design matrices and control parameters
2.1 recycle urtapm for correct tp-indexing (mandatory!) and, optionally, a-priori offsets
3. urtapt on the merged matrix
3.1 do-merging-job example; will generate result tables and residual files for all projects
3.2 the script make-tse4urtapm
4. preparing a plot
make-urtap-ins -f -for-merge -cn z -C 201405a -with-drift-var DSYN -keep-outliers
Better:
set mark = ( a b c d e f g h i j k l m n o p )
or (note: we generate one more mark than necessary; the above was used before 201806a):
set mark = ( `awk -v n=$#campaigns 'BEGIN{for(i=0;i<=n;i++){printf "%c ",i+97};print "\n"}'` )
For a sequence of campaigns (avoid -slopes-from-start !):
rm -f campaigns4merge.lst
@ i = 0
foreach c ( $campaigns )
@ i ++
make-urtap-ins -f -for-merge -cn $mark[$i] -C $c -with-drift -keep-outliers
end
urtapt @ urtap-<campaign>-mrg.ins >! urtap-<campaign>-mrg.log
If you have an urtapm.ins file already,
set campaigns = ( `sed -n '/ \^ /s/[^-]*.\([^\.-]*\).*/\1/p' urtapm.ins` )
which takes the campaign names from the file names (between `t/urtap-´ and `.cdmp´); else
set campaigns = ( `cat campaigns4merge.lst` )
foreach c ( `cat campaigns4merge.lst` )
urtapt @ urtap-$c-mrg.ins >! urtap-$c-mrg.log
end
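To see what the sed command in the urtapm.ins alternative extracts, here is a small self-contained illustration; the two open-file lines are fabricated:

```shell
# Fabricated open-file lines as they appear in urtapm.ins; the sed command
# picks the campaign string between `urtap-' and the following `.' or `-'.
printf '11 ^ t/urtap-201405a.cdmp\n12 ^ t/urtap-201502b.cdmp\n' > demo-urtapm.ins
sed -n '/ \^ /s/[^-]*.\([^\.-]*\).*/\1/p' demo-urtapm.ins
# -> 201405a
#    201502b
```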
09 B t/features.tbl
If the number of cdmp-files is too large, specify
11 ^ t/urtap-201405a.cdmp
12 ^ t/urtap-201502b.cdmp [{meter|-}] [options]
...
29 ^ t/urtap-yymmddz.cdmp last possible
31 < t/urtap-merged.cdmp
32 < t/urtap-merged.y.ts optional output, merged AG DATA input series (use UNW!)
33 < t/urtap-merged.dw.ts - optional output, merged AG SIGMA series
Q
GRA UNW IUNTBL=9
SYMBOLS: 7
SCG => SCG
AAS. => AASA
ACN. => ACNA
ACS. => ACSA
ASN. => ASNA
ASS. => ASSA
-W => -W
BEGIN_HS: 01 02
01: SCG
02: A...,SL..
END
ADD COLUMNS
= ( From 2014 04 05 01 02 03 000 To 2014 04 08 23 59 59 999 ) = F233 : FG5-233 "FG5-233 Instrument offset all campaigns"
/ ( From 2009 06 30 00 00 00 000 To 2016 09 30 23 59 59 999 ) = SSCG "Residual slope all campaigns"
S d/syn-drift.ts = DRFT : "Drift all time"
R /home/hgs/TD/d/G1_garb_######-1s.mc SCL=1/-774.421
END
/*
Coding rules follow below
d/syn-drift.ts
is a symlink to a drift file. In mc4campaign a range of options is available; in the example below, however, we use the result of expfit.
For synthesizing a drift-ts file, consider prl2tse, esp. the bespoke example
The advantage w.r.t. using the .ph04.ts file is that the start and end times can be extended.
Consider especially urtapm-big-O.ins
The resulting drift-ts needs the resample-and-create-column directive D in urtapm's ins-file. Example in
~/TD/a/Allcamps/urtapm-big-OO-ndr-fsg-H-ochy.ins :
D d/recent-urtap$DVAR.ts COL=y SCL=1.0 ADD=0.0
where d/recent-urtap$DVAR.ts is a symlink to the synthetic drift-ts file and $DVAR is -ochy
urtapm @ urtapm-big-O.ins :U >! urtapm$VAR.log
For two reasons, the first urtapm job must be recycled. First, to
use the feature table for a correct time-to-index set for bias
parameters (orientation-dependent offsets to be estimated).
Second, you might have a priori values for time-dependent
instrument offsets to be subtracted in a series of campaigns or
projects.
Your pwd is ~/TD/a/Allcamps , and $UVAR has been set before the first run:
source urtapm-<your-choice-of-analysis-variants>.ins
If features change, run
make-tse4urtapm -new urtapm-big$UVAR.ins
where the ins-file is associated with the particular analysis variant.
The details are covered under Drift above and in chapters
Writing tse-files for the E-command and
Writing ts-files for the A-command ( make-offsets-ts
-h ; make-ts4VAR -h )
It's the files appearing in the E and
A commands that must be renewed.
rm -f o/scg-cal-merged$VAR.tse
touch o/scg-cal-merged$VAR.tse
urtapt @ urtap-merged.ins >! urtap-merged$VAR.log
The steps urtapm and urtapt come first in the super-script do-merging-job :
do-merging-job -u urtap-merged-O.ins -i urtapm-big-O.ins D
The files this job creates are
-rw-rw-r-- 1 hgs hgs    3902 Apr 15 11:12 evaluate-tp-O-expf.tsj
-rw-rw-r-- 1 hgs hgs   28252 Apr 15 11:12 evaluate-tp-O-expf.tbl
-rw-rw-r-- 1 hgs hgs   18018 Apr 15 11:12 evaluate-tp-O-expf.dat
-rw-rw-r-- 1 hgs hgs   16926 Apr 15 11:12 evaluate-tp-O-expf.rsl
-rw-rw-r-- 1 hgs hgs    1510 Apr 15 11:11 xtp-projects.dat
-rw-rw-r-- 1 hgs hgs    3754 Apr 15 11:11 evaluate-tp-O-expf.sol
-rw-rw-r-- 1 hgs hgs    1519 Apr 15 11:11 projects-in-urtapm-O-expf.lst
and in o/ :
-rw-rw-r-- 1 hgs hgs 2177180 Apr 15 11:12 o/scg-cal-merged-O-expf.dc.mc
-rw-rw-r-- 1 hgs hgs 77324 Apr 15 11:12 o/xtp-Onsala_AC_20150509a-O-expf.ra.ts
-rw-rw-r-- 1 hgs hgs 75968 Apr 15 11:12 o/xtp-Onsala_AC_20150508a-O-expf.ra.ts
etc for all projects
prl2tse o/scg-cal-merged.prl MERGED >! tmp.tse
Edit tmp.tse and select the lines you want to combine, e.g. to generate only BoxCars from BoxCars and Slopes.
tslist _1920 -Bidate,time -r0.1 -Etmp.tse,M -I -o my-first-prediction.ts
Studying the variance-covariance matrix:
plot-vcvf -zfloor 0.01 -ps urtap-merged-OO-ndr-H-ochy.ps \
-size 0.36 -chs 12 -P ~/www/4me/ag-superc/ -X 3.0 \
-ft t/feature-OO-ndr-H.tbl \
-corr 1,2,3,4,5,84,85,86,87,88 1,2,3,4,5,84,85,86,87,88 \
t/urtap-merged-OO-ndr-H-ochy.vcvf
uu ^ cdmp-filename [{meter|-}] [:campaign] [{+|-}FHS ] [WS=val]
where option +FHS indicates that the dump has been written with hand-selected sets of variable size,
i.e. the condumps of urtapt jobs that used namelist parameter q_hs_as_needed=.true.
(option -FHS is needed in the fixed-size case when the default has been changed with the MANYHS option/command).
Similarly, also for the output cdmp on unit 31, the {+|-}FHS option can be given in the comment part of the open-file line.
:campaign - If the file name contains the name of the campaign, this is the preferred method to communicate it to the program.
If a campaign is given in the file comment, the file name will not be parsed for it.
Example:
20 ^ t/urtap-201405b.cdmp FG5-220
will use 201405b as the campaign name, the rule being that the campaign string is coded between `-´ and `-´ or between `-´ and `.´
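That naming rule can be sketched with a one-line sed substitution; POSIX sh here (the surrounding examples use csh), and parse_campaign is a hypothetical helper, not part of the toolchain:

```shell
# Extract the campaign string: everything between the first `-' and the
# next `-' or `.' in the cdmp file name.
parse_campaign() { printf '%s\n' "$1" | sed 's/[^-]*-\([^.-]*\).*/\1/'; }
parse_campaign t/urtap-201405b.cdmp     # -> 201405b
parse_campaign t/urtap-yymmddz-x.cdmp   # -> yymmddz
```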
WS=u - this option can be used to change the a priori weights: the series of weights w is multiplied by u
and the (weighted) measurements are divided by u.
; text - a comment line, ignored in processing
[NO]GRA [UNW] [IUNTBL=#u] [DTS=#dt] [DBG] [MANYHS] [DCOPT={W|U|I}[,<symb>]]
GRA or NOGRA must appear first on the line, else the rest is ignored.
Options:
UNW - undo the weighting of the dump series after processing and writing of the merged dump.
Necessary for writing the AG input series (file output on unit 32)
IUNTBL=#u - u = unit number for the features table (output). A file instruction must appear
in the open-file block. Default output is to STDOUT.
DTS=#dt - enforce a sampling interval [10s]. Important only if consistent input and output in this respect is required;
see the OUTPUT command.
DBG - Debug, extensive printing (current version of urtapu8.f does excessive printing, 2016-11-30)
MANYHS - Change default from fixed to variable size of hand-selected sets in cdmp file input.
DCOPT=c - Remove DC-components in the system matrix. Method U ignores any weighting, which is
the preferred method given that the matrix columns contain "smooth" (i.e. deterministic) signals.
An all-and-pure DC-column must keep its values. The symbol for this column is `WWWW´ by default.
You can alter it by appending `,<symb>´
WEED: #ythresh #wthresh #wlow
- sets matrix elements Bij to zero and sigmai to wlow if |Bij| > ythresh or wi > wthresh
SIGMA*#s - multiplies sigmas (weights = 1/sigma) with s
TRANSLATE: #n - specifies the number of instructions following (pure comments not counted)
ABCE => ABCD - Input symbol ABCE is changed to ABCD
-W => AANA
SYMBOLS: #n - specifies the number of instructions following (pure comments not counted).
Features not declared in this table will not be merged even though they might have identical names.
ABCD => WXYZ - The following two features are not combined under the name WXYZ
ABCE => WXYZ - although r.h.s. symbols may be synonymous, they result in different columns.
ABCD,ABCE[,...] => WXYZ
- Combine columns; may contain wildcards on l.h.s. Up to 10 column symbols can be given on the l.h.s.
DE.. => DEAB - All features having DE in common are combined
.E.. => DEAC - All features having E in the second place are combined
XYZ. => forget - Drop all features starting with XYZ (not fully implemented; collides with feature handling)
AA..[,...] !> AAAA
- The `!´ sign requests that overlapping features are poked together, not added
`&´ is synonymous with `!´ . Recommended for the global constant:
-W.. !> WWWW
-W.. !> #### - Combine symbols starting with -W and throw away.
DEL[ETE:] symb[,symb[,...]]
- Delete these columns and their associated features. Comma-separated and four characters
for each symbol. Blanks are significant. Trailing (!) wildcard character(s): `.´, e.g. -W..
Comments starting with `;´ are possible in separate lines or beyond the instructions.
The purpose of the SYMBOLS section is to collect segments of the signal model ("features") under one column, i.e. to estimate one parameter
in the urtapt merged-dump stage. TRANSLATE may help in the combination.
BEGIN_HS: 01 02
This example opens two categories (01, 02) to include the SCG-series: measurement and drift in 01, and the project biases and slopes in 02.
01: SCG.,DRFT
02: A...,SL..
END
ADD COLUMNS - This is a method to add instrument-specific offsets to the design matrix.
= ( From YYYY MM DD HH MM SS FFF To YYYY MM DD HH MM SS FFF ) = TSYS : meter ["explain"]
/ ( From YYYY MM DD HH MM SS FFF To YYYY MM DD HH MM SS FFF ) = TSYS [ : meter ] ["explain"]
S filename = TSYS : ["explain"]
R /home/hgs/TD/d/G1_garb_######-1s.mc [ ADD=#v] [ SCL=[1/]#cal] [ COL=#c] [ TSE=<target>]
R filename-model [ ADD=#v] [ SCL=[1/]#v] [ COL=#c] [ TSE=<target>]
D filename [ ADD=#v] [ SCL=[1/]#v] [ COL=#c] [ TSE=<target>] [O>#nn]
A ts-file [ ADD=#v] [ SCL=[1/]#v] [ COL=#c]
A ts-file [ ADD=#v] [ SCL=[1/]#v] COL=0 = TSYS [ : meter ] ["explain"]
E tse-file,trg COL=0
E tse-file,trg [ ADD=#v] [ SCL=[1/]#v] COL=#c = TSYS ["explain"]
...
END
E file.tse,TARGET = TSYS : ["explain"]
Use the log-output of urtapm to find the indices:
TSF EDIT FG233SOUTH
ADD 1. From #1194 To #1392
ADD 1. From #41279 To #42476
...
END
(You need to remove the FG233 experiment of 2010!)
awk '/ACSA/||/AASA/||/ASSA/||/ANSA/{print $0}' urtapm-big-AUNO.log |\
fgrep -v DumMer | fgrep Onsala_ | sort -n -k5 |\
awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
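To make the pipeline above concrete, here is a run on a fabricated three-line log snippet; the field layout (start and end indices in fields 5 and 6) is an assumption about the urtapm log format:

```shell
# Fabricated urtapm log lines; the DumMer line is a summary to be discarded.
cat > demo-urtapm.log <<'EOF'
Col ACSA Onsala_AC_20150509a x 1194 1392
DumMer ACSA summary x 0 0
Col AASA Onsala_AA_20150506a x 41279 42476
EOF
awk '/ACSA/||/AASA/||/ASSA/||/ANSA/{print $0}' demo-urtapm.log |
  fgrep -v DumMer | fgrep Onsala_ | sort -n -k5 |
  awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
# -> ADD 1.0 From #1194 To #1392
#    ADD 1.0 From #41279 To #42476
```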
(Remember: all added columns must also go through weighting.)
awk '/ACSA/||/AASA/||/ASSA/||/ANSA/{print $0}' urtapm-big-AUNO.log |\
fgrep -v DumMer | grep -e ' A._2' | sort -n -k5 |\
awk '{printf "ADD 1.0 From #%i To #%i\n",$5,$6}' | uniq
OUTPUT [COLUMNS U=#uu] [DT=#dt] [+UNW] [{+|-}FHS[+DBG]] [+MJD]
where symbols symb select matrix columns for output on file unit uu .
symb
symb
...
END
rm -f t/urtap-merged$VAR.jd.mc
tslist t/urtap-merged$VAR.jd.ts -I -O:MJD t/urtap-merged$VAR.jd.mc
tslist t/urtap-merged$UVAR.y.ts -I -O:Y t/urtap-merged$VAR.jd.mc
List projects and their settings using
tplist-project -qq 201707a ALL t/urtap-merged$VAR.jd.mc
Find the MJD's:
Onsala_AA_20170705a.drop.txt FG5-233 AA N -LWG 172971 176767
Onsala_AA_20170706a.drop.txt FG5-233 AA S -LWG 176768 180882
Example, for a tsf-edit section for FG-233 in S orientation:
tslist o/scg-cal-merged-O-sdr-dsyn.jd.mc -qqq -j172970 -Un176767 -Ft1,f13.6 | awk '(NR==1){print} {s=$0} END{print s}'
57573.214086
57940.724456
tslist o/scg-cal-merged-O-sdr-dsyn.jd.mc -qqq -j176768 -Un180882 -Ft1,f13.6 | awk '(NR==1){print} {s=$0} END{print s}'
57940.782905
57941.650266
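The awk tail in these commands is a generic first-and-last-line filter, independent of tslist:

```shell
# Print the first and the last line of a stream (here: three sample MJDs).
printf '57573.214086\n57700.000000\n57940.724456\n' |
  awk '(NR==1){print} {s=$0} END{print s}'
# -> 57573.214086
#    57940.724456
```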
There is a script for that. Try
rm -f cprj$VAR.lst; touch cprj$VAR.lst
foreach c ( $campaigns )
tplist-project -qq $c ALL t/urtap-merged$VAR.jd.mc | tee -a cprj$VAR.lst
end
awk '/-233/&&/ S /{print "ADD 1.0 From #"$6,"To #"$7,"; 233 S ::",$1}' cprj$VAR.lst
awk '\! /^;/&&/-220/&&/ S /{print "ADD 1.0 From #"$6,"To #"$7,"; 220 S ::",$1}' cprj$VAR.lst
foreach c ( $campaigns )
tplist-project -qq $c ALL t/urtap-merged$VAR.jd.mc | grep -v -e '; ' |\
awk -v n=$c '(NR==1){s=$6} END{print "From #"s,"To #"$7,"::",n}'
end
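The per-campaign reduction in the loop above keeps field 6 of the first line and field 7 of the last line; demonstrated on two fabricated tplist-project lines:

```shell
# Fabricated tplist-project output: start/end indices in fields 6 and 7.
cat > demo-prj.lst <<'EOF'
Onsala_AA_20170705a.drop.txt FG5-233 AA N -LWG 172971 176767
Onsala_AA_20170706a.drop.txt FG5-233 AA S -LWG 176768 180882
EOF
awk -v n=201707a '(NR==1){s=$6} END{print "From #"s,"To #"$7,"::",n}' demo-prj.lst
# -> From #172971 To #180882 :: 201707a
```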
make-tse4urtapm [-new] [<urtapm-ins-file>]
Edit the result if necessary and move it to the subdir as required (usually the urtapm launching dir).
Ascii file tmp/scgjump.tse contains:
TSF EDIT JUMP
ADD -110.0 From 2020 02 07 19 50 00 000
END
In the open file section, add
61 B tmp/scgjump.tse
(the unit number 61 is hard-wired).
The R command includes option TSE=<target> :
R lpfu/G1_g_######-1s.ts TSE=JUMP SCL=1/-774.421 ADD=-464.d0 = SCG : SCG-GRA "SCG-054 LP filtered observations"
The file cprj$VAR.lst can be used:
echo "TSF EDIT ADD" >! tmp/adding.tse
set c=201505a
tplist-project -v -OO-ndr -qq $c ALL t/urtap-merged-OO-ndr.jd.mc |\
awk -v c=$c '{s=$1;sub(/\..*/,"",s);print "ADD ${Z_"s":[0.0]} From #"$6,"To #"$7;if (! b){b=$6}; e=$7} END{print "ADD ${C_"c":[0.0]} From #"b,"To #"e}' \
>> tmp/adding.tse
echo "END" >> tmp/adding.tse
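Run on the same kind of fabricated tplist-project lines, the awk program above emits one per-project ADD line with a ${Z_...} environment placeholder, plus a closing per-campaign ${C_...} line:

```shell
# Fabricated tplist-project output; field 1 carries the project name,
# fields 6 and 7 the start/end indices (an assumed layout).
cat > demo-prj.lst <<'EOF'
Onsala_AA_20150506a.drop.txt FG5-233 AA N -LWG 172971 176767
Onsala_AA_20150507a.drop.txt FG5-233 AA S -LWG 176768 180882
EOF
awk -v c=201505a '{s=$1;sub(/\..*/,"",s);print "ADD ${Z_"s":[0.0]} From #"$6,"To #"$7;if (! b){b=$6}; e=$7} END{print "ADD ${C_"c":[0.0]} From #"b,"To #"e}' demo-prj.lst
# -> ADD ${Z_Onsala_AA_20150506a:[0.0]} From #172971 To #176767
#    ADD ${Z_Onsala_AA_20150507a:[0.0]} From #176768 To #180882
#    ADD ${C_201505a:[0.0]} From #172971 To #180882
```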
setenv Z_Onsala_AA_20150506a #value
setenv C_201505a #value
...
set ndata=`tslq -v t/urtap-merged-OO-ndr.jd.ts | awk '{print $3}'`
set begd=`tslq -b t/urtap-merged-OO-ndr.jd.ts`
tslist _$ndata -I -B$begd -rs5.0 -E tmp/adding.tse,ADD -o the-afile.ts
There's a script for this purpose:
make-ts4VAR
Examples:
make-ts4VAR +using tmp/offsets2env -E tmp/urtapm$VAR.tse,YOFFS -o tmp/meters-180526$VAR.ts
Analysing the skewness of the drop residuals by project using stats-of-merged DE , o/yoffs4prjs.tse , producing a series of corrections:
[ setenv YOFFS_SCALE value ;] make-ts4VAR -E 'o/yoffs4prjs.tse,],Skip' -o tmp/tmp$VAR.ts
C Analyze merged calibration campaigns
11 < t/urtap.dmp
12 < t/urtap-merged.evs
13 B t/urtap.trs
14 B o/scg-cal-merged.prl
17 < t/urtap-merged.cdmp
Q
&param
iun_condump=17, opt_condump='GET UNW'
qbatch=.true., use_window_residual='F', use_eigenvalues='W 6.0 G '
cause='TGP', demodulation_tide='2,1,1,-1,0,0,0,0'
q_get_dump=.false.
idate=2014,05,27
iuin=21, trg='BIN', dtut=57.d0, scale_y=1.0, y_units='[nm/s^2]$'
rec_mrs=-99999.9, fmt='L:A|V'
q_tsf_edit=.true., tsf_edit_name='OS '
lpef=0
dff=1.0d0,-0.9d0, l_dff=0, l_filter=0
xsite=11.9260,ysite=57.3964
nrjd=0
nx_extra=1500
ls_psp=128, wt_psp='HANN', mxmiss_psp=32
display='A@W', q_shut_graphics=.true.
tsfile_names='o/scg-cal-merged '
q_frq_tst=.false., frq_tst=1.0d0
q_filterb_remdc=.false., q_keep_y_dc=.true.
q_downw=.false.
q_allow_dc=.true., q_weights_dc=.true.
q_remdc_wmean=.true., q_ra_with_dc=.true.
q_conmrs_remdc=.false.
q_tref_slopes_start=.true.
detrend='N'
opt_hand_select_trend = 'No....'
mxlpef=0
core_model='S_GRAVITY'
npal1=99, npal2=1
qhindsight=.true.
qpr_drift=.true.
outlier_criterion=-4.0, iun_outliers=31
q_outliers_filtered=.true.
q_del_miss_obs=.true.
q_tref_slopes_start=.true., q_offset_slopes=.true.
quit_offset_slopes=.false., tsys_slopes='SL..'
&end
End of Instructions _______________________________________________
TSF EDIT OS
; +21,'BIN',-99999.0,'L:S|D',3,0,-1, -1.0
CONT 31 O o/scg-cal-merged.tse
END
In Sep. 2018:
ldr - a late bias, for the skewed distribution of drop residuals in 2016
ldr3 - excludes t/urtap-201106b$DVAR.cdmp
lsdr - none of the input cdmp's is drift-reduced. Instead, a "synth" drift series is sampled and an admittance parameter is estimated
sdr - like lsdr , but with a bias for the campaign in 2009
ndr - the ins file issues a comment: "Please setenv VAR -O-expfit-ndr" . A bias for the problematic campaign in 2016 is estimated.
      With this strategy we assume the drift series is fixed (note: the data is a correction).
      What is the preferred application, subtracting it from the AG observations or adding it to the SG?
      Studying the vcv-plots, it turns out that the covariances change, especially those with meter-related offsets;
      the covariance of the scale factor with the offset of FG5-220 is near zero!
      And the scale factor is closer to the value from the GAIN campaign if we do the latter, i.e. add the drift to the SG.
dsyn - synth drift, composed of the bias+slope+decay parameters from the SG standard analysis
dwwf - synth drift, composed of the bias+slope+decay parameters from the SG extended analysis
ochy - synth drift, composed of the bias+slope+decay parameters from the SG extended analysis with ECCO1 and ERAI in the regression
expf - synth drift with exponents, biases and slopes estimated in jobs running expfitbm
source do-offset-variations
The super-script contains a few one-liners to grab the results from the prl-files.
make-tse4urtapm
To include these offsets in the analysis, urtapm must include a command
A tmp/meters${OVAR}.ts
More offsets can be appended by setting, in the environment,
setenv CONT_YOFFS "CONT ?? B tsi-file"
e.g. o/yoffs4prjs.tse
(This motivates another script.)
awk '/> Campaign/{n++;print "${COMMAND} ${ADD"n":[0.0]} From #"$6," To #"$6+$7-1,"::",$9,$5}' \
<feature-table> >! tmp/additional_campaign-offs.tsi
cat tmp/additional_campaign-offs.tsi
# study the output and...
setenv COMMAND ADD
setenv ADD6 20.0
setenv ADD15 20.0
setenv CONT_YOFFS "36 R tmp/additional_campaign-offs.tsi"
set ndata = `tslqn t/urtap-merged$VAR.jd.ts`
set today = `date "+%y%m%d"`
tslist _$ndata -B2009,6,30 -rs5. -Etmp/urtapm$VAR.tse,YOFFS -I -o tmp/meters-ovspec.ts
Introducing TP-MC-files
These are the prerequisites for an MC-file to be a TP-MC-file:
MJD in column 1,
data of other kinds pertaining to the same point of time in columns 2...
tp2tsf ra
creates an mc-file o/scg-cal-merged.ra.mc
tslist o/scg-cal-merged.mc -qqq -LM -LRA -N -n1 -Ff14.7,f10.1
1.000000 55015.3890046 -25.5
tslist o/scg-cal-merged.ph01.ts -I -O:PH01 o/scg-cal-merged.ra.mc
and a tsf-file o/scg-cal-merged.ra.tsf
tp2ts
USAGE:
tp2date 52944 52945
Q.: How to get the begin and end of a feature in terms of the array index?
2014 05 31 00 05 30
2014 05 31 00 06 00
0.000000 9.7065E+01
Explanations:
0.000000 9.7065E+01
>
1.281019 4.2204E+01
1.281019 4.2204E+01
The primary purpose of multi-campaign analysis is the determination of a secular rate of gravity.
A.: Case 2,
Subtracting the SCG and adding back the instrumental drift is the method in mind, but the drift terms as they are determined in the adjustment may cancel out physical components in the form of a linear signal. Examples:
1. Nodal tides. This tide, if subtracted from the SCG series, removes also the linear ramp implied in the incomplete cycle. The signal to subtract should comprise all nodal effects, i.e. harmonic degrees 2, 3, 4...
We had a problem when including the degree > 2 tides in the least-squares adjustment: it admits these effects
at much greater magnitudes, since the implicated slope makes a good fit to the SCG drift.
2. Time series of non-tidal effects, which participate in the adjustment. In some cases the admittance coefficient is not known in advance even at one order of magnitude. Pre-filtering should result in a fit dominated by the temporal variations, and Wiener filtering is supposed to improve the fit by attenuating short-period incoherent features. Still, there is potential leaking of secular terms in these signals into the drift terms.
Consider the possibility to split off secular rates from the effects that are adjusted. Then, by extrapolation of gain spectra, you might obtain a reliable coefficient to account for the secular parts at the instances of urtapm or urtap-merged. You'll get ranges for biases - which would also have to be considered in AG-only analyses. Using sasm06 on the residual, with a candidate effect added back in, would suggest an admittance extrapolated to very long periods. A more comprehensive approach, however, would apply partial cross-spectrum analysis (à la Jenkins and Watts) involving almost ten simultaneous time series - something for the future!
Case 1,
The q_analyse_slopes option in urtap tide analysis provides a budget of the slope-times-admittance contributions implicated in the system matrix. In the case of the environmental effects, the true contributions of the slope parts --- and thus the drift bias --- might be underestimated. However, the bias will not exceed what a total neglect of these effects would incur in AG-only analyses.
As an example,
fgrep '<ASl' ~/Ttide/SCG/logs/urtap-openend-ochy-asl.log
(But note: this is a version where the nodal tide is not subtracted; the drift parameters are still biased.)
cd ~/TD
rm -f d/gnt${begd}-OPNEND-1h.ts
run_urtip -t t/nodal-2-3-4.trs -BM d/gnt${begd}-OPNEND-1h.ts d/g${begd}-OPNEND-1h.mc
subtract in urtapt using
49 ^ d/gnt${begd}-OPNEND-1h.ts
...
+49,'BIN',-99999.0,'U',0,0,-1,-1.0d0
This script creates
tmp/cprj$VAR.lst
If these files won't change for different VAR parameters, mv or cp them to a $VAR-less name and create symlinks for each $VAR .
tmp/urtapm$VAR.tse
tmp/urtapm$VAR.tse contains a tslist processing command that can be activated with
tsex -x tmp/urtapm$VAR.tse,WMEAN
to print the weighted means of each campaign. Look at this command to make sure the environment parameters are set appropriately:
tsex -t tmp/urtapm$VAR.tse,WMEAN
(defaults make use of environment parameters)
plot-campaign-means -h
Two examples showing the -ndr strategy; "-alt" means the drift correction is added to the SG, otherwise it is subtracted from the AG:
http://barre.oso.chalmers.se/hgs/4me/ag-superc/urtap-merged-OO-ndr-H-ochy.png
http://barre.oso.chalmers.se/hgs/4me/ag-superc/urtap-merged-OO-ndr-H-ochy-alt.png
Nowadays we use the "-alt" alternative without denoting it.
The command is
plot-vcvf -zfloor 0.01 -ps urtap-merged-OO-ndr-H-ochy.ps \
-size 0.36 -chs 12 -P ~/www/4me/ag-superc/ -X 3.0 \
-ft t/feature-OO-ndr-H.tbl \
-corr 1,2,3,4,5,84,85,86,87,88 1,2,3,4,5,84,85,86,87,88 \
t/urtap-merged-OO-ndr-H-ochy.vcvf
Example:
plot-merged-residual -DC o/scg-cal-merged-O-15.dc.mc o/scg-cal-merged-O-15.ra.mc
Predictions are calculated in urtap, and what is collected for the plot should be a partial prediction.
The instructions are given to urtapm.
The keyword is HS = Hand-selected sets.
First you must create the tp-dated mc-file:
setenv TP2TSF_OFFSET `xtp-platform-dcl`
where the first line is needed if the drift model was not zero-mean.
tp2tsf ph02
Then:
plot-merged-predictions -R -A o/scg-cal-merged$VAR.ph02.mc
After every re-analysis with urtapm and urtap-merged, the -R option must be given;
otherwise the tp-tsf-files can be re-used.