2018-11-14

Call Busts and Reverse Busts: CQ WW 2017

This is the thirteenth in a series of posts on busts and reverse busts in the CQ WW contests. These posts are based on the various public CQ WW logs (cq-ww-2005--2017-augmented.xz; see here for details of the augmented format).

In this post, we include only verified QSOs; that is, QSOs for which both parties submitted a log. This is a change from earlier years' posts.

First, the tables for the 2017 SSB data:

2017 SSB -- Most Busts
Position Call QSOs Busts % Busts
1 CN3A 10,192 183 1.8
2 LZ9W 8,143 162 2.0
3 PJ2T 7,168 158 2.2
4 HI3K 4,927 153 3.1
5 EI9E 5,055 152 3.0
6 A44A 6,508 135 2.1
7 CU4DX 7,020 133 1.9
8 V26B 6,595 132 2.0
9 OT5A 6,158 131 2.1
10 HG7T 6,488 128 2.0

2017 SSB -- Most Reverse Busts
Position Call QSOs Reverse Busts % Reverse Busts
1 CN2R 8,757 247 2.8
2 SJ0X 5,256 233 4.4
3 IK2YCW 5,201 213 4.1
4 DF0HQ 7,444 172 2.3
5 HC0E 3,981 168 4.2
6 TM3R 3,819 168 4.4
7 6Y1LZ 5,229 153 2.9
8 VE3JM 4,991 140 2.8
9 EC2DX 7,228 137 1.9
10 W3LPL 5,693 131 2.3

2017 SSB -- Highest Percentage of Busts (≥100 QSOs)
Position Call QSOs % Busts
1 YB0ZZD 134 12.7
2 XR3L 111 12.6
3 EA1HTF 438 12.6
4 PY2TED 128 12.5
5 4Z5FW 120 12.5
6 AJ2E 100 12.0
7 UA6BFE 200 12.0
8 K2JMY 303 11.9
9 YO8RKP 144 11.1
10 PD5ISW 136 11.0

2017 SSB -- Highest Percentage of Reverse Busts (≥100 QSOs)
Position Call QSOs % Reverse Busts
1 VR20C 112 69.6
2 VR20FUN 134 25.4
3 TC10K 108 19.4
4 KM4EOG 201 16.9
5 XR3L 111 16.2
6 BG2VIA 224 16.1
7 IW0HLZ 212 14.2
8 BI4IV 405 13.6
9 BG8SXC 130 10.8
10 LU9DDJ 701 10.4

In tables of reverse busts, one sometimes finds what seems like an unreasonable number of reverse busts (as, in this table, for VR20C). This is generally caused by a discrepancy between the call actually sent by the listed station and the one recorded as being sent in at least some QSOs in the log, although it can, of course, be due to an unusual callsign structure (as appears to be the cause in the instant case) or poor audio quality.

Now the tables for the 2017 CW data:

2017 CW -- Most Busts
Position Call QSOs Busts % Busts
1 PS0F 3,259 155 4.8
2 ZW8T 2,635 153 5.8
3 6Y0W 8,074 142 1.8
4 EW6W 7,641 141 1.8
5 F6ENO 2,838 134 4.7
6 EI3KE 1,881 130 6.9
7 ZW5B 2,411 119 4.9
8 TO2SP 6,095 112 1.8
9 CN2R 8,885 112 1.3
10 PJ2T 9,571 112 1.2

2017 CW -- Most Reverse Busts
Position Call QSOs Reverse Busts % Reverse Busts
1 IB9T 7,488 315 4.2
2 9571 8,986 304 3.4
3 DF0HQ 8,475 303 3.6
4 ED8X 8,780 295 3.4
5 TI7W 7,639 232 3.0
6 EF8R 9,574 225 2.4
7 V47T 9,234 222 2.4
8 P40L 9,103 212 2.3
9 JS3CTQ 2,050 206 10.0
10 SN8B 5,334 186 3.5

2017 CW -- Highest Percentage of Busts (≥100 QSOs)
Position Call QSOs % Busts
1 W2UDT 216 27.3
2 BD3MV 130 23.8
3 DJ5UZ 209 23.0
4 AE6JV 100 22.0
5 CT1ZQ 108 19.4
6 IQ5PO 130 18.5
7 W2OF 126 18.3
8 WA2JQK 266 18.0
9 PY1CC 134 17.2
10 PY4RL 101 16.8

2017 CW -- Highest Percentage of Reverse Busts (≥100 QSOs)
Position Call QSOs % Reverse Busts
1 CR5SSB 288 34.0
2 EA7/OG55W 226 21.2
3 OE3BKC 259 17.0
4 R4WBD 135 16.3
5 R2AHH 249 14.1
6 JL3ZHU 122 13.9
7 LU5DDX 141 12.8
8 K7HI 181 12.7
9 PY2LPM 206 12.1
10 IS0URA 134 11.9

Now we look at the tables that integrate ten years' data.

For SSB:

2008 to 2017 SSB -- Most Busts
Position Call QSOs Busts % Busts
1 OT5A 69,479 1,381 2.0
2 LZ9W 81,222 1,350 1.7
3 PJ2T 68,720 1,293 1.9
4 A73A 59,609 1,214 2.0
5 CN3A 78,609 1,206 1.5
6 HG1S 46,753 979 2.1
7 JA7YRR 29,688 826 2.8
8 PT2CM 26,606 766 2.9
9 LY7A 50,053 755 1.5
10 RT6A 41,974 750 1.8

2008 to 2017 SSB -- Most Reverse Busts
Position Call QSOs Reverse Busts % Reverse Busts
1 DF0HQ 78,630 2,244 2.9
2 JA3YBK 40,551 1,228 3.0
3 K3LR 68,781 1,077 1.6
4 WE3C 39,647 1,073 2.7
5 CN2R 60,225 1,032 1.7
6 HG1S 46,753 939 2.0
7 S52ZW 34,604 895 2.6
8 GM2T 37,394 861 2.3
9 CN3A 78,609 813 1.0
10 W3LPL 52,637 790 1.5

2008 to 2017 SSB -- Highest Percentage of Busts (≥500 QSOs)
Position Call QSOs % Busts
1 PV8ADI 858 12.6
2 K2JMY 2,062 12.2
3 EA1HTF 854 10.9
4 EA7JQT 572 10.9
5 PU1MMZ 544 9.5
6 PU2TRX 868 9.3
7 UR5ZDZ 502 8.7
8 K8TS 732 8.7
9 YB9KA 557 8.6
10 DS5FNE 1,396 7.6

2008 to 2017 SSB -- Highest Percentage of Reverse Busts (≥500 QSOs)
Position Call QSOs % Reverse Busts
1 CW90A 1,370 30.9
2 BA8AG 752 16.4
3 BW2/KU1CW 940 12.3
4 ZP6DYA 1,226 11.3
5 V84SCQ 806 10.5
6 LU9DDJ 701 10.4
7 BI8FZA 654 10.4
8 BV55D 919 10.2
9 JG3SVP 1,270 10.1
10 PP5BS 895 9.5

And for CW:

2008 to 2017 CW -- Most Busts
Position Call QSOs Busts % Busts
1 PJ2T 86,693 1,232 1.4
2 PV8ADI 8,127 1,177 14.5
3 LZ9W 91,852 1,098 1.2
4 PI4CC 49,201 965 2.0
5 HG1S 39,874 924 2.3
6 D4C 64,215 837 1.3
7 PJ4A 68,099 810 1.2
8 NR4M 52,432 742 1.4
9 9A1A 87,797 742 0.8
10 RW0A 44,026 723 1.6

2008 to 2017 CW -- Most Reverse Busts
Position Call QSOs Reverse Busts % Reverse Busts
1 DF0HQ 84,331 3,055 3.6
2 JS3CTQ 22,349 2,905 13.0
3 ES9C 49,467 1,512 3.1
4 W2FU 55,507 1,415 2.5
5 DR1A 54,949 1,396 2.5
6 K3LR 71,723 1,395 1.9
7 IR4X 43,912 1,325 3.0
8 W0AIH 31,204 1,246 4.0
9 NR4M 52,432 1,129 2.2
10 V26K 50,123 1,122 2.2

2008 to 2017 CW -- Highest Percentage of Busts (≥500 QSOs)
Position Call QSOs % Busts
1 BD3MV 954 19.8
2 AD7XG 911 18.7
3 W2UDT 989 17.5
4 DJ5UZ 508 16.9
5 YO7LYM 1,461 16.7
6 JA3AHY 554 16.6
7 WP3Y 570 16.3
8 AE3D 1,016 15.8
9 YU1NIM 619 15.8
10 N8WS 524 14.9

2008 to 2017 CW -- Highest Percentage of Reverse Busts (≥500 QSOs)
Position Call QSOs % Reverse Busts
1 HA8FW 900 100.0
2 G3RWF 943 99.9
3 RZ3VO 1,792 49.9
4 YT65A 1,149 37.2
5 OG55W 2,569 32.2
6 5K0A 1,853 32.1
7 DP65HSC 516 16.9
8 5J1E 1,523 15.6
9 SB0A 1,102 15.5
10 YP0HQ 1,787 14.0

Statistics from 2017 CQ WW SSB and CQ WW CW logs

A huge number of analyses can be performed with the various public CQ WW logs (cq-ww-2005--2017-augmented.xz; see here for details of the augmented format) for the period from 2005 to 2017.

There follow a few analyses that have interested me. There is plenty of scope to use the files for further analyses.

 

Number of Logs


The raw number of submitted logs for SSB has been relatively flat for several years; the number of logs submitted for CW continues to show a fairly steady annual increase:


One not infrequently reads statements to the effect that the popularity of contests such as CQ WW has been increasing for the past several years. Certainly it is true that, for CW, the number of logs is still increasing, but the above plot shows that the same cannot be truthfully said for SSB, for which the number of logs has shown no systematic variation for the last lustrum or so.

 

Popularity


By definition, popularity requires some measure of people (or, in our case, the simple proxy of callsigns) -- there is no reason to believe, a priori, that the number of received logs as shown above is related in any particular way to the popularity of a contest.

So we look at the number of calls in the logs as a function of time, rather than positing any kind of well-defined positively correlated relationship between log submission and popularity (actually, the posts I have seen don't even bother to posit such a relationship: they are silent on the matter, thereby simply seeming to presume that the reader will assume one). 

However, the situation isn't as simple as it might be, because of the presence of busted calls in logs. If a call appears in the logs just once (or some small number of times), it is more likely to be a bust than an actual participant. Where to set a cut-off a priori in order to discriminate between busts and actual calls is unclear; but we can plot the results of choosing several such values.
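As a concrete illustration of the thresholding idea (the data layout here is hypothetical; the real analysis works from the full public logs), one can count, for each call, the number of logs in which it appears, and then tally how many calls survive each candidate cut-off:

```python
# Sketch (hypothetical data layout): estimate the number of distinct
# participants as a function of the minimum number of logs in which a
# call must appear before we treat it as legitimate rather than a bust.
from collections import Counter

def calls_by_threshold(logs, thresholds=(1, 2, 5, 10)):
    """logs: iterable of sets, each the distinct calls worked in one log.
    Using sets ensures a call is counted at most once per log."""
    seen_in = Counter()
    for log in logs:
        for call in log:
            seen_in[call] += 1
    return {t: sum(1 for n in seen_in.values() if n >= t) for t in thresholds}

# Toy example: "N0BUST" appears in only one log and is dropped at t=2.
logs = [{"K3LR", "W3LPL", "N0BUST"}, {"K3LR", "W3LPL"}, {"K3LR"}]
print(calls_by_threshold(logs, thresholds=(1, 2, 3)))  # → {1: 3, 2: 2, 3: 1}
```

Raising the threshold from 1 to 2 drops the lone-appearance call in the toy data, which mimics the way the plotted curves diverge at low thresholds.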

First, for SSB:


Regardless of how many logs a call has to appear in before we regard it as a legitimate callsign, the popularity of CQ WW SSB in the past couple of years has fallen to a level rarely (if ever) seen in the public logs.

Conditions were quite bad for the 2016 running of the contest; nevertheless, while poor conditions might reasonably be expected to have a major bearing on metrics related to the number of QSOs, it is not obvious, a priori, that it should have a substantial effect on the raw popularity as estimated by this metric. It seems that there were simply fewer stations QRV for the contest in 2016 than in earlier years. This seems to be borne out by the numbers for 2017, when conditions were considerably improved over 2016, and yet the number of distinct callsigns seems to be more or less the same as in the prior year. It is certainly difficult to argue, on the basis of the above plot, that this contest is now more popular than it was at a similar point in the last solar cycle -- indeed, it appears, on its face, that the opposite is true.

[I note that a reasonable argument can be made that the number of uniques will be more or less proportional to the number of QSOs made (I have not tested that hypothesis; I leave it as an exercise for the interested reader to determine whether it is true), but there is no obvious reason why the same would be true for, for example, callsigns that appear in, say, ten or more logs.]


Moving to CW:

we see a similar story to SSB, except that any decrease in participation since the same point in the last cycle appears to be very small: participation in the CW event in the current inter-cycle doldrums seems to be more or less the same as at the corresponding point in the last cycle.

 

Geographical Participation


How has the geographical distribution of entries changed over time?

Again, looking at SSB first:


Zone 28 seems to be continuing to show a slow but sustained increase in the number of logs submitted. Nevertheless, compared to the behemoths like zones 14 and 15, the number of logs from zones such as 11 or 28 is minuscule. This can be seen more clearly if we plot the percentage of logs received from each zone as a function of time:


On CW, most zones evidence a long-term increase:


But the relative increase seems to be spread more or less evenly across all zones, with the percentages of logs from each zone barely changing over the years 2005 to 2017:


 

Activity


Total activity in a contest depends both on the number of people who participate and on how many QSOs each of those people makes. We can use the public logs to count the total number of distinct QSOs in the logs (that is, each QSO is counted only once, even if both participants have submitted a log).
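As a hypothetical sketch of what "counting each QSO only once" involves (the actual matching used for the augmented logs is more careful about times and partial busts), a logged QSO can be reduced to a key in which the pair of calls is unordered, so both parties' entries collapse to a single record:

```python
# Sketch: count each QSO once even when both parties logged it, by
# reducing a logged QSO to an unordered pair of calls plus band and a
# coarse time bucket (field names and layout are hypothetical).
def qso_key(call1, call2, band, minute, tolerance=5):
    pair = frozenset((call1, call2))       # unordered: (A, B) == (B, A)
    return (pair, band, minute // tolerance)

logged = [
    ("K3LR", "DF0HQ", "20m", 731),   # K3LR's log
    ("DF0HQ", "K3LR", "20m", 732),   # DF0HQ's log: the same QSO
    ("K3LR", "PJ2T", "15m", 740),    # only one side submitted a log
]
distinct = {qso_key(*q) for q in logged}
print(len(distinct))  # → 2
```

Bucketing the time by integer division is a simplification -- two logs of the same QSO can straddle a bucket boundary -- but it illustrates the idea.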

For SSB:


The total number of distinct QSOs in the current inter-cycle doldrums is essentially the same as at the same point in the last solar cycle.

And for CW:


On this mode there appears to be an underlying upward trend (on which the effect of the solar cycle is superimposed), although this suggestion must be regarded as merely tentative, as it is essentially based on the minor uptick in 2017. Still, it is noteworthy that, despite the claims I see that CW is an obsolete technology in serious decline, the actual evidence, at least from this, the largest contest of the year, is quite the opposite. (This is a good reminder that when someone makes a claim whose truth is not self-evident, one should examine the underlying data for oneself. I have found that all too often it transpires that no defensible evidence has been put forward for the conclusion being drawn.)

 

Running and Calling


On SSB, the ongoing gradual shift towards stations strongly favouring either running or calling, rather than splitting their effort between the two types of operation, continued in 2017:



I have not investigated the cause of the continued decrease in the percentage of stations strongly favouring running, although the public logs could readily be used to distinguish possibilities that spring to mind, such as more SO2R operation, more multi-operator stations, and/or a reluctance of stations to forego the perceived advantages of spots from cluster networks.

On CW, 2017 saw no real change in the percentage of stations that strongly favoured running; there was a very slight decrease in the percentage that strongly favoured calling (I leave the determination of statistical [in]significance of the 2017 numbers to the interested reader). The split between callers and runners continues to be much less bimodal on CW than on SSB, although there is no room for doubt that the long-term trend on both modes is towards what used to be "search and pounce" but is now "point and click":



2018-11-12

FT8 and the Reverse Beacon Network (RBN)

The rise of FT8 over the past year or so leads one to wonder how FT8 activity compares to other modes. I thought that the RBN would be an ideal vehicle for making an objective comparison between, say, FT8 and CW. After all, although the RBN documentation is rather vague on the subject of FT8, the web site does contain clear instructions as to how to upload FT8 posts to the RBN. The web-based spot filter also allows one to select FT8 posts:


Using the historical, archived RBN data, it's easy to create a plot showing the percentage of spots in particular modes as a function of time, from the beginning of the RBN's existence in 2009 to the present day (late 2018):

One immediately sees that there is a major problem with this plot: although FT8 is present, it appears for only a short span in the first half of 2018.

This prompts us to look at the actual historical data, to see when FT8 activity has been recorded.

The first FT8 post was on the 19th of February, 2018:

[HN:2018] rbncat 2018 | grep ",FT8," | head
W3OA,K,NA,7074,40m,K0ERE,K,NA,CQ,8,2018-02-19 23:42:16,6,FT8,20180219,1519083736
W3OA,K,NA,7074,40m,K8SIA,K,NA,CQ,13,2018-02-19 23:42:16,6,FT8,20180219,1519083736
W3OA,K,NA,7074,40m,WS9V,K,NA,CQ,13,2018-02-19 23:42:16,6,FT8,20180219,1519083736
W3OA,K,NA,7074,40m,K5MAF,K,NA,CQ,17,2018-02-19 23:42:31,6,FT8,20180219,1519083751
W3OA,K,NA,7074,40m,KI1P,K,NA,CQ,6,2018-02-19 23:42:31,6,FT8,20180219,1519083751
W3OA,K,NA,7074,40m,N1KDO,K,NA,CQ,22,2018-02-19 23:42:31,6,FT8,20180219,1519083751
W3OA,K,NA,7074,40m,WX2U,K,NA,CQ,7,2018-02-19 23:42:31,6,FT8,20180219,1519083751
W3OA,K,NA,7074,40m,YV2GAW,YV,SA,CQ,8,2018-02-19 23:42:31,6,FT8,20180219,1519083751
W3OA,K,NA,7074,40m,N4XPZ,K,NA,CQ,2,2018-02-19 23:43:01,6,FT8,20180219,1519083781
W3OA,K,NA,7074,40m,VA2FW,VE,NA,CQ,3,2018-02-19 23:43:06,6,FT8,20180219,1519083786
[HN:2018]


This aligns with the start of the yellow blip in the plot above, and presumably coincides with the date when the RBN first permitted FT8 posts.

However, the most recent FT8 post (I am writing this on the 12th of November, 2018) was on the 13th of June, 2018:

[HN:2018] rbncat 2018 | grep ",FT8," | tail
KM3T-2,K,NA,7074,40m,AC9HP,K,NA,CQ,-20,2018-05-29 11:47:01,6,FT8,20180529,1527594421
W3OA,K,NA,7074,40m,KK9G,K,NA,CQ,-12,2018-06-13 11:02:02,6,FT8,20180613,1528887722
WZ7I,K,NA,10136,30m,EW8W,EU,EU,CQ,-21,2018-06-13 11:02:03,6,FT8,20180613,1528887723
WZ7I,K,NA,10136,30m,VA3HP,VE,NA,CQ,-16,2018-06-13 11:02:03,6,FT8,20180613,1528887723
WZ7I,K,NA,7074,40m,KK9G,K,NA,CQ,0,2018-06-13 11:02:03,6,FT8,20180613,1528887723
WZ7I,K,NA,14074,20m,GW0DSJ,GW,EU,CQ,-4,2018-06-13 11:02:23,6,FT8,20180613,1528887743
WZ7I,K,NA,14074,20m,PD3WDK,PA,EU,CQ,-18,2018-06-13 11:02:23,6,FT8,20180613,1528887743
WZ7I,K,NA,14074,20m,SP7FFY,SP,EU,CQ,-16,2018-06-13 11:02:23,6,FT8,20180613,1528887743
WZ7I,K,NA,14074,20m,UA6FZ,UA,EU,CQ,-12,2018-06-13 11:02:23,6,FT8,20180613,1528887743
WZ7I,K,NA,18100,17m,GI3SG,GI,EU,CQ,-11,2018-06-13 11:02:23,6,FT8,20180613,1528887743
[HN:2018]


(And we see that prior to the 13th of June, the next most recent post was on the 29th of May.)

It seems that, after a relatively brief period during which FT8 posts were allowed, the controllers of the RBN silently decided to ban them -- even though one is still permitted to select such (now non-existent) posts on the web interface. Consequently, at least as of the current date, the RBN appears to be essentially useless as a resource for determining any metric related to current or historical FT8 activity.

2018-11-04

Summary File for RBN data, 2009 to 2017

The complete set of RBN data for 2009 to the end of 2017, after uncompression, is some 60GB in size. As not all analyses need the complete dataset, I have constructed a summary file (rbn-summary-data.xz) that contains an overview of the data and which is sufficient for many kinds of analysis that do not depend on the details of individual posts to the RBN. (The basic script used to generate this summary file may be found here; the actual summary file is created by running the basic script for each individual year from 2009 to 2017 and concatenating the results after removing the header line from all except the first year.)

The summary file, after being uncompressed, comprises a single large table of values separated by white space. The name of each column (there are twelve columns in all) is on the first row. The columns are:
  1. band: a string that identifies the band pertaining to this row. Typical values are "15m" or "160m"; if a row contains data that are not distinguished by band, then the characters "NA" are used.
  2. mode: a string that identifies the mode pertaining to this row. Typical values are "CW" or "RTTY"; if a row contains data that are not distinguished by mode, then the characters "NA" are used.
  3. type: a single character that identifies whether the data on this row are for a period of a year ("A"), a month ("M") or a day ("D").
  4. year: the numeric four-digit value of the year to which the current row pertains.
  5. month: the numeric value of the month (January = 1, etc.) of the data in this row. If the data are of type A or D, then this element has the value "NA".
  6. doy: the numeric value of the day number of the year (January 1st = 1, etc.). The maximum value in each year is 366 (even if the year is not a leap year). In the event that the year is not a leap year, the data in columns 7, 8 and 9 will be set to 0 when doy is 366. If the data are of type A or M, then this element has the value "NA".
  7. posts: the total number of posts recorded by the RBN for the band, mode and period identified by the first six columns. 
  8. calls: the total number of distinguishable calls recorded by the RBN for the band, mode and period identified by the first six columns. 
  9. posters: the total number of distinguishable posters recorded by the RBN for the band, mode and period identified by the first six columns. 
  10. scatter: the value of a scatter metric that characterises the geography of the RBN for the band, mode and period identified by the first six columns. The scatter metric is the sum of the distances (measured in km) between all possible pairs of good posters, divided by the number of such pairs.
  11. good posters: the total number of distinguishable posters recorded by the RBN for the band, mode and period identified by the first six columns, and for which location data are available from the RBN.
  12.  grid metric: the total number of G(15, 100) grid cells that contain good posters.
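The scatter metric as described is simply the mean pairwise distance between good posters. A sketch of the computation, assuming (latitude, longitude) pairs for the good posters and great-circle (haversine) distances:

```python
# Sketch of the scatter metric: the mean, over all pairs of good
# posters, of the great-circle distance between them, in km.
from itertools import combinations
from math import asin, cos, radians, sin, sqrt

def haversine_km(p, q, r=6371.0):
    """Great-circle distance between two (lat, lon) points, in km."""
    (lat1, lon1), (lat2, lon2) = [tuple(map(radians, x)) for x in (p, q)]
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r * asin(sqrt(a))

def scatter(posters):
    """posters: list of (lat, lon) for the good posters; result in km."""
    pairs = list(combinations(posters, 2))
    return sum(haversine_km(p, q) for p, q in pairs) / len(pairs)
```

For example, two antipodal posters give a scatter of about 20,015 km (half the Earth's circumference), while co-located posters give zero.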
For example, the first two lines of the summary file are (presented here as a table, in order to make it easier to view on more devices):

band NA
mode NA
type A
year 2009
month NA
doy NA
posts 5007040
calls 143724
posters 151
scatter 5541
good_posters 150
grid_metric 11

This tells us that the first line of actual data in the file comprises annual data for the year 2009, with no separation by band or mode. In 2009, we see that there were 5,007,040 posts of 143,724 callsigns by 151 posters; the scatter metric, which is a measure of the geographic dispersion of the posters on the RBN, was 5,541; and 150 good posters (those for which location data are available) were spread across 11 distinct G(15, 100) grid cells.
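Given the whitespace-separated layout, a minimal reader for the summary file is straightforward (all values are kept as strings here; convert to numbers as needed):

```python
# Minimal reader for the summary-file layout described above:
# whitespace-separated columns, with the column names on the first row.
def read_summary(lines):
    it = iter(lines)
    header = next(it).split()
    return [dict(zip(header, line.split())) for line in it if line.strip()]

rows = read_summary([
    "band mode type year month doy posts calls posters scatter good_posters grid_metric",
    "NA NA A 2009 NA NA 5007040 143724 151 5541 150 11",
])
print(rows[0]["posts"])  # → 5007040
```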

The summary file allows rather rapid analysis of the RBN. For example, this plot in this post was originally generated rather tediously from the entire 50GB dataset. Using the summary file (but now extended to include 2017), the same plot --
 -- can be generated in about five seconds. From this plot, for example, we can immediately see that the largest number of daily posts occurred during the 2017 running of the CQ WW CW contest in late November (the second-highest cluster of peaks is for the CQ WPX contest, and the third is for the ARRL DX CW contest); also, the burst of activity that coincides with weekends is unmistakable.

The summary file can be used to generate a simple plot of the growth of the RBN since its inception just as quickly:


2018-11-03

Y82V audio

The operators at the WRTC station Y82V have kindly made their audio available. (The file contains the audio from both rigs, one in each stereo channel.) The size of the file is roughly 108MB.

World Radio Team Championship (WRTC) logs

The logs for the 63 participant stations in WRTC 2018 are available:

  1. In Cabrillo format;
  2. In ordinary ASCII text format.

2018-02-08

Experiences With The Array Solutions SAL-30 Mark II

I purchased an Array Solutions SAL-30 Mark II for use over this past winter, having decided that it was finally time to become active on 160m. Lacking the room for a beverage, this seemed like the best practical RX antenna I could install. For those who just want my conclusion, based on experience this past winter season, here it is: save your money.

Here is the longer story.

1. Even though the antenna was marked as in stock when I placed the order, I was informed a few days later that it was not in fact in stock, and Array Solutions were awaiting parts. I finally received the antenna about a month after placing the order.

2. The instruction manual appears to have been written by someone whose third language is English (I'm not joking; actually, I'm being kind). As I started on the task of putting the antenna together, I found myself, several times, having to backtrack because of a lack of clear instructions. All told, the antenna took about 20 hours of work to put together in a way that satisfied me.

3. Some of the parts in the antenna kit had to be replaced: the four-foot double-walled aluminium tube had no pre-drilled holes in it. To give them their due, Array Solutions shipped me a correct replacement part as soon as I notified them. However, they said that there would be a shipping label included with the replacement part that would allow me to return the part they had originally shipped, but there was no such label in the package that contained the replacement part.

4. The screws that ship with the antenna and go into the pre-drilled holes are of abysmal quality. The second or third one I installed simply broke when I was installing it (with just a manual screwdriver). So I went to a hardware store and purchased a replacement set of longer screws and some additional nuts, and used these parts instead, using two nuts on each bolt to ensure that they would not come loose.

5. The manual gives you no clue about how careful you have to be when installing the wires at the top and bottom of the mast, to make sure that they will not become tangled when the mast is finally erected. It also gives no clue as to how to deal with the issue that, because of the way that the holes are pre-drilled, the length of mast between two of the holes is different from the length between the other pair (by a couple of inches). There doesn't seem to be any way around this problem, so I simply shrugged and assumed that it would not make any difference to the performance of the antenna.

6. I installed the antenna as far away as possible from my tower, and also from the fence that surrounds the field in which it is placed. Lining the antenna up exactly NE-SE-SW-NW took a lot longer than I expected, but in the end I was satisfied that the loops were orthogonal and closely aligned with these four directions.

7. The acid test, though, is how the antenna performs, and this is where it gets really disappointing. For comparison, the TX antennas on 80m and 160m are inverted vees in parallel, with the feed points at 90 feet. In other words, it shouldn't be hard for a real receive antenna to thoroughly outperform the transmit antennas on these bands.

8. I went through the set-up procedure as outlined in the manual, using a couple of local AM broadcast transmitters. Everything seemed to work more or less as expected. In particular, as I "rotated" the antenna, the signals changed as expected. The front-to-back was several S-points, so things looked reasonable.

9. But when it comes to on-the-air use, my experience is that the antenna is nothing short of thoroughly disappointing. A quick (but accurate) description is that the front-to-back is great, but only because it is even deafer off the back than it is off the front. (By "deaf", I don't mean that the signal was weak -- that, of course, is to be expected with a receive antenna -- but that the signal-to-noise ratio is considerably worse than on the TX antenna, even in the RX antenna's forward direction, and regardless of the direction of the station to which I'm listening.) After use through most of the winter season, I don't think I've ever heard more than a couple of signals that were easier to copy on the receive antenna than on the TX antenna. Conversely, there have been many, many occasions when signals that are perfectly copyable on the TX antenna have been much harder to copy, or even simply inaudible, on the RX antenna. JA on 160m is a good test from my QTH, and I quickly discovered that if I can hear a couple of difficult-to-copy JA stations on the SAL-30 responding to a CQ, then by switching to the TX antenna I'll be able to hear, and copy easily, perhaps twice as many stations calling me. This is consistently true, not just an occasional aberration.

10. I expect to dismantle the whole expensive mistake over the summer, and add the aluminium and wire to my stash of spare bits and pieces -- while pondering what to do to really improve 160m reception for next winter.