2018-02-08

Experiences With The Array Solutions SAL-30 Mark II

I purchased an Array Solutions SAL-30 Mark II for use over this past winter, having decided that it was finally time to become active on 160m. Lacking the room for a Beverage, this seemed like the best practical RX antenna I could install. For those who just want my conclusion, based on my experience over this past winter season, here it is: save your money.

Here is the longer story.

1. Even though the antenna was marked as in stock when I placed the order, I was informed a few days later that it was not in fact in stock, and Array Solutions were awaiting parts. I finally received the antenna about a month after placing the order.

2. The instruction manual appears to have been written by someone whose third language is English (I'm not joking; actually, I'm being kind). As I started on the task of putting the antenna together, I found myself, several times, having to backtrack because of a lack of clear instructions. All told, the antenna took about 20 hours of work to put together in a way that satisfied me.

3. Some of the parts in the antenna kit had to be replaced: the four-foot double-walled aluminium tube had no pre-drilled holes in it. To give them their due, Array Solutions shipped me a correct replacement part as soon as I notified them. However, they said that a shipping label would be included with the replacement part, which would allow me to return the part they had originally shipped; there was no such label in the package that contained the replacement part.

4. The screws that ship with the antenna and go into the pre-drilled holes are of abysmal quality: the second or third one broke as I was installing it (with just a manual screwdriver). So I went to a hardware store and purchased a set of longer screws and some additional nuts, and used those instead, with two nuts on each screw to ensure that they would not come loose.

5. The manual gives you no clue about how careful you have to be when installing the wires at the top and bottom of the mast, to make sure that they will not become tangled when the mast is finally erected. It also gives no clue as to how to deal with the issue that, because of the way that the holes are pre-drilled, the length of mast between two of the holes is different from the length between the other pair (by a couple of inches). There doesn't seem to be any way around this problem, so I simply shrugged and assumed that it would not make any difference to the performance of the antenna.

6. I installed the antenna as far away as possible from my tower, and also from the fence that surrounds the field in which it is placed. Aligning the antenna exactly NE-SE-SW-NW took a lot longer than I expected, but in the end I was satisfied that the loops were orthogonal and closely aligned with these four directions.

7. The acid test, though, is how the antenna performs, and this is where it gets really disappointing. For comparison, the TX antennas on 80m and 160m are inverted vees in parallel, with the feed points at 90 feet; antennas like these are hardly optimised for low-band reception. In other words, it shouldn't be hard for a real receive antenna to outperform the transmit antennas thoroughly on these bands.

8. I went through the set-up procedure as outlined in the manual, using a couple of local AM broadcast transmitters. Everything seemed to work more or less as expected: in particular, as I "rotated" the antenna, the signals changed appropriately, and the front-to-back was several S-points, so things looked reasonable.

9. But when it comes to on-the-air use, my experience is that the antenna is nothing short of thoroughly disappointing. A quick (but accurate) description is that the front-to-back is great, but only because the antenna is even deafer off the back than it is off the front. (By "deaf", I don't mean that the signal is weak -- that, of course, is to be expected with a receive antenna -- but that the signal-to-noise ratio is considerably worse than on the TX antenna, even in the RX antenna's forward direction, and regardless of the direction of the station to which I'm listening.) After use through most of the winter season, I don't think I've heard more than a couple of signals that were easier to copy on the receive antenna than on the TX antenna. Conversely, there have been many, many occasions when signals that were perfectly copyable on the TX antenna were much harder to copy, or even simply inaudible, on the RX antenna. JA on 160m is a good test from my QTH, and I quickly discovered that if I could hear a couple of difficult-to-copy JA stations responding to a CQ on the SAL-30, then switching to the TX antenna would let me hear, and copy easily, perhaps twice as many stations calling me. This is consistently true, not just an occasional aberration.

10. I expect to dismantle the whole expensive mistake over the summer, and add the aluminium and wire to my stash of spare bits and pieces -- while pondering what to do to really improve 160m reception for next winter.


2018-01-13

Most-Logged Stations in CQ WW CW and SSB Contests, 2017

The public CQ WW CW and SSB logs allow us easily to tabulate the stations that appear in the largest number of entrants' logs. For 2017, the ten stations with the largest number of appearances in CQ WW SSB logs were:

Callsign Appearances % logs
CN3A 10,347 62
EF8R 9,099 56
CN2R 8,649 55
LZ9W 8,251 53
CN2AA 8,090 54
ES9C 7,941 53
M6T 7,856 52
A73A 7,501 47
PZ5K 7,481 45
DF0HQ 7,372 50

The first column in the table is the callsign. The second column is the total number of times that the call appears in logs. That is, if a station worked CN3A on six bands, that will increment the value in the second column of the CN3A row by six. The third column is the percentage of logs that contain the callsign at least once.
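
For those who want to experiment with the public logs, here is a minimal sketch of how such a tabulation might be generated. It assumes a directory containing one Cabrillo file per entrant, and the standard CQ WW QSO: line layout in which the ninth whitespace-separated field is the worked station's callsign; the directory name is hypothetical.

  # Count how often each callsign appears in a set of public CQ WW logs,
  # and the percentage of logs in which it appears at least once.
  from collections import Counter
  from pathlib import Path

  LOG_DIR = Path('logs')        # hypothetical directory of public logs

  appearances = Counter()       # total appearances of each call
  logs_containing = Counter()   # number of logs containing each call

  log_files = list(LOG_DIR.iterdir())
  for log_file in log_files:
      calls_in_this_log = set()
      for line in log_file.read_text(errors='replace').splitlines():
          fields = line.split()
          if len(fields) >= 11 and fields[0] == 'QSO:':
              worked = fields[8].upper()
              appearances[worked] += 1
              calls_in_this_log.add(worked)
      logs_containing.update(calls_in_this_log)

  for call, n in appearances.most_common(10):
      pct = round(100 * logs_containing[call] / len(log_files))
      print(f'{call:10} {n:7,} {pct}%')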

For comparison, here is the equivalent table for 2016:

Callsign Appearances % logs
CN3A 8,696 57
CN2R 8,333 55
9A1A 8,111 53
LZ9W 8,072 52
PJ2T 7,729 45
EF8R 7,677 53
YT8A 7,629 50
CN2AA 7,268 51
PJ4X 7,087 44
DF0HQ 6,752 47

Similarly, the ten stations with the largest number of appearances in CQ WW CW 2017 were:

Callsign Appearances % logs
TK0C 10,719 63
9A1A 10,594 65
M6T 9,884 62
CR3W 9,783 61
YT5A 9,692 63
PJ2T 9,661 53
EF8R 9,538 60
LZ9W 9,257 60
V47T 9,128 55
CN2AA 9,092 58

And the equivalent table for 2016:

Callsign Appearances % logs
HK1NA 10,277 59
9A1A 9,921 63
TK0C 9,682 61
CN2R 9,675 63
PJ2T 9,569 53
CR3W 9,354 60
LZ9W 9,091 59
CN2AA 8,912 59
P33W 8,661 56
EF8R 8,409 57

We can also perform the same analysis for, say, a ten-year span, to show which stations have most consistently appeared in other stations' logs. So, for CQ WW SSB for the period 2008 to 2017, we find:

Callsign Appearances % logs
LZ9W 82,463 57
CN3A 79,752 59
DF0HQ 77,755 56
OT5A 71,370 53
PJ2T 69,695 47
K3LR 69,026 51
P33W 65,432 50
A73A 60,563 46
CN2R 60,356 48
DR1A 56,425 40

And for CW over the same span:

Callsign Appearances % logs
LZ9W 92,842 66
9A1A 88,714 62
PJ2T 87,728 57
DF0HQ 82,412 62
P33W 72,498 54
W3LPL 71,859 52
K3LR 71,438 52
LX7I 69,457 53
PJ4A 68,654 52
D4C 65,049 43

2018-01-06

2017 RBN data

All the postings to the Reverse Beacon Network in 2017, along with the postings from prior years, are now available in the directory https://www.adrive.com/public/cQwkEB/rbn.

Some simple annual statistics for the period 2009 to 2017 follow (the 2009 numbers cover only part of the year, as the RBN came into existence partway through 2009).

Total posts:
2009:   5,007,040
2010:  25,116,810
2011:  49,705,539
2012:  71,584,195
2013:  92,875,152
2014:  108,862,505
2015:  116,385,762
2016:  111,027,068
2017:  117,973,111
Total posting stations:
2009: 151
2010: 265
2011: 320
2012: 420
2013: 473
2014: 515
2015: 511
2016: 590
2017: 625
Total posted distinct callsigns:
2009: 143,724
2010: 266,189
2011: 271,133
2012: 308,010
2013: 353,952
2014: 398,293
2015: 433,197
2016: 375,613
2017: 356,461
Obviously, statistics that are considerably more comprehensive may be derived rather easily from the files in the directory.
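
By way of illustration, a sketch along the following lines would reproduce the three simple statistics above for a single year. It assumes that a year's postings live in one CSV file whose columns include 'callsign' (the posting station) and 'dx' (the posted station); both the filename and the column names are assumptions that should be checked against the actual files.

  # Total posts, distinct posting stations and distinct posted
  # callsigns for one year of RBN data.
  import csv

  def annual_stats(filename):
      posts = 0
      posters = set()
      posted = set()
      with open(filename, newline='') as f:
          for row in csv.DictReader(f):
              posts += 1
              posters.add(row['callsign'])
              posted.add(row['dx'])
      return posts, len(posters), len(posted)

  print(annual_stats('2017.csv'))   # hypothetical filename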

Note that if you intend to use the database's reported signal strengths in an analysis, you should be sure that you understand the ramifications of what the RBN means by SNR.

2018-01-05

Video Maps of CQ WW QSOs, 2005 to 2017

I have updated the set of CQ WW video maps on my YouTube channel (channel N7DR) to include the logs from the 2017 events. These video maps cover all the years for which public CQ WW logs are currently available (2005 to 2017).

To access individual videos directly:

SSB
CW

2018-01-04

Cleaned, Augmented and Submitted Logs for 2017 CQ WW CW and SSB Contests

Now available are copies of the public logs for CQ WW CW and SSB for 2017, as well as cleaned and augmented versions of the logs for the period 2005 to 2017.

The copies of the public logs for 2017 may be downloaded here. Links to the cleaned and augmented logs may be followed here.

The cleaned logs are the result of processing the QSO: lines from the entrants' submitted Cabrillo files to ensure that all fields contain valid values and all the data match the format required in the rules. Any line containing illegal data in a field (for example, a zone number greater than 40, or a date/time stamp that is outside the contest period) has simply been removed. Also, only the QSO: lines are retained, so that each line in the file can be processed easily.
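
A minimal sketch of this kind of cleaning might look like the following. The field positions assume the standard CQ WW Cabrillo QSO: layout, and the contest period shown is just an example (the 2017 CW weekend).

  # Keep only QSO: lines whose zones and date/time stamp hold legal values.
  from datetime import datetime

  CONTEST_START = datetime(2017, 11, 25, 0, 0)    # example period only
  CONTEST_END   = datetime(2017, 11, 26, 23, 59)

  def clean(lines):
      for line in lines:
          fields = line.split()
          if len(fields) < 11 or fields[0] != 'QSO:':
              continue
          try:
              stamp = datetime.strptime(fields[3] + fields[4], '%Y-%m-%d%H%M')
              sent_zone, rcvd_zone = int(fields[7]), int(fields[10])
          except ValueError:
              continue
          if (CONTEST_START <= stamp <= CONTEST_END
                  and 1 <= sent_zone <= 40 and 1 <= rcvd_zone <= 40):
              yield line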

The augmented logs contain the same information as the cleaned logs, with the addition of some useful information on each line. (A sketch showing how the added fields might be parsed appears after the notes below.) The information added to each line comprises:

  1. The letter "A" or "U" indicating "assisted" or "unassisted"
  2. A four-digit number representing the time of the contact in minutes, measured from the start of the contest. (I realise that this can be calculated from the other information on the line, but it saves a lot of time to have the number readily available in the file without having to calculate it each time.)
  3. Band
  4. A set of eleven flags, each -- apart from column k -- encoded as T/F: 
    • a. QSO is confirmed by a log from the second party 
    • b. QSO is a reverse bust (i.e., the second party appears to have bust the call of the first party) 
    • c. QSO is an ordinary bust (i.e., the first party appears to have bust the call of the second party) 
    • d. the call of the second party is unique 
    • e. QSO appears to be a NIL 
    • f. QSO is with a station that did not send in a log, but who did make 20 or more QSOs in the contest 
    • g. QSO appears to be a country mult 
    • h. QSO appears to be a zone mult 
    • i. QSO is a zone bust (i.e., the received zone appears to be a bust)
    • j. QSO is a reverse zone bust (i.e. the second party appears to have bust the zone of the first party)
    • k. This entry has three possible values rather than just T/F:
      • T: QSO appears to be made during a run by the first party
      • F: QSO appears not to be made during a run by the first party
      • U: the run status is unknown because insufficient frequency information is available in the first party's log 
  5. If the QSO is a reverse bust, the call logged by the second party; otherwise, the placeholder "-"
  6. If the QSO is an ordinary bust, the correct call that should have been logged by the first party; otherwise, the placeholder "-"
  7. If the QSO is a reverse zone bust, the zone logged by the second party; otherwise, the placeholder "-"
  8. If the QSO is an ordinary zone bust, the correct zone that should have been logged by the first party; otherwise, the placeholder "-"
Notes:
  • The encoding of some of the flags requires subjective decisions to be made as to whether the flag should be true or false; consequently, and because CQ has yet to understand the importance of making their scoring code public, the value of a flag for a specific QSO line in some circumstances might not match the value that CQ would assign. (Also, CQ has more data available in the form of check logs, which are not made public.)
  • I made no attempt to deduce the run status of a QSO in the second party's log (if such exists), regardless of the status in the first party's log. This allows one cleanly to perform correct statistical analyses anent the number of QSOs made by running stations merely by excluding QSOs marked with a U in column k.
  • No attempt is made to detect the case in which both participants of a QSO bust the other station's call. This is a problematic situation because of the relatively high probability of a false positive unless both stations log the frequency as opposed to the band. (Also, on bands on which split-frequency QSOs are common, the absence of both transmit and receive frequency is a problem.) Because of the likelihood of false positives, it seems better, given the presumed rarity of double-bust QSOs, that no attempt be made to mark them.
  • The entries for the zones in the case of zone or reverse zone busts are normalised to two-digit values.
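
As mentioned above, here is a sketch of how the added fields on an augmented line might be read. It assumes that the eight additions are appended, whitespace-separated and in the order listed, after the ten standard QSO: fields; the exact layout should be verified against the actual files before relying on this.

  # Extract the augmented fields from one line of an augmented log.
  def augmented_fields(line):
      extra = line.split()[11:]     # fields after the standard QSO: layout
      return {
          'assisted':   extra[0] == 'A',     # item 1: 'A' or 'U'
          'minute':     int(extra[1]),       # item 2: minutes from start
          'band':       extra[2],            # item 3
          'flags':      extra[3:14],         # item 4: flags a to k
          'rbust_call': extra[14],           # item 5: '-' if not applicable
          'bust_call':  extra[15],           # item 6
          'rbust_zone': extra[16],           # item 7
          'bust_zone':  extra[17],           # item 8
      }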

2017-10-16

Channel Energy Ranges in the LANL GPS Charged Particle Dataset

The documentation for the CXD instrument mentions eleven electron channels and five proton channels. Some details are given in a referenced paper. Unfortunately, that paper is reproduced in black-and-white, and the response curves contained therein require colour in order to be interpreted correctly; in addition, a lot of details are still absent from the paper.

So here is the information I have been able to gather so far regarding the channels of the CXD instrument. Thanks to John Sullivan of LANL for this information.

I note that there are minor inconsistencies in the publicly available documentation. These inconsistencies, however, would seem likely to be less than the accuracy and reproducibility limits for the instrumentation. What follows is my best interpretation of the information I have been able to gather.

This paper and this paper provide nominal energy ranges for the 11 electron channels. Adding the equivalent values of γ to that information gives us this table:

CXD Electron Channel Energy Ranges (MeV)
Channel Detector Min Energy Min γ Max Energy Max γ
E1 LEP 0.14 1.27 0.23 1.45
E2 LEP 0.23 1.45 0.41 1.80
E3 LEP 0.41 1.80 0.77 2.51
E4 LEP 0.77 2.51 1.25 3.45
E5 LEP 1.26 3.46 68 134
E6 HEP 1.3 3.54 1.7 4.33
E7 HEP 1.7 4.33 2.2 5.30
E8 HEP 2.2 5.30 3.0 6.87
E9 HEP 3.0 6.87 4.1 9.02
E10 HEP 4.1 9.02 5.8 12.35
E11 HEP 5.8 12.35
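
The γ values in this table (and in the proton table below) follow directly from the tabulated kinetic energies: γ = 1 + E/(mc²), where the rest-mass energy mc² is 0.511 MeV for electrons and 938.272 MeV for protons. A couple of quick checks:

  # Lorentz factor corresponding to a given kinetic energy in MeV.
  ELECTRON_REST_MEV = 0.511
  PROTON_REST_MEV = 938.272

  def gamma(kinetic_mev, rest_mev=ELECTRON_REST_MEV):
      return 1 + kinetic_mev / rest_mev

  print(round(gamma(0.14), 2))                  # 1.27: channel E1 minimum
  print(round(gamma(6, PROTON_REST_MEV), 2))    # 1.01: channel P1 minimum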

There are two distinct detectors on the instrument, the low-energy detector (often denoted "LEP") and the high-energy detector ("HEP"). As shown above, electron channels 1 to 5 are from the LEP, while channels 6 to 11 are from the HEP.

The LEP and HEP detectors respond to both electrons and protons (and photons, which we will ignore for now). For protons the LEP detector has two channels, with threshold energies of "about" 6 MeV and 10 MeV, with an upper limit of 70 MeV. This paper gives the complete ranges for the proton channels as:

CXD Proton Channel Energy Ranges (MeV)
Channel Detector Minimum Energy Min γ Maximum Energy Max γ
P1 LEP 6 1.01 10 1.01
P2 LEP 10 1.01 50 1.05
P3 HEP 16 1.02 128 1.14
P4 HEP 57 1.06 75 1.08
P5 HEP 75 1.08

I am informed that the lower limits in these tables are reasonably accurate; however, the upper limits are rather soft, as the channels typically have some response to particles of higher energies.

The detailed transfer function between actual particle energy and the measured flux values is currently available only for the eleven electron channels (i.e., not the proton channels), and only for the satellites carrying the CXD instrument. There are two sets of transfer functions, one for SVN 53 through 61, and one for SVN 62 through 73. The two sets of numerical coefficients that define the transfer functions are available in a spreadsheet file in OpenDocument format here (the coefficients for SVN 53 to 61 are on the first sheet, the coefficients for the remaining satellites on the second).

Plotting the transfer functions graphically, as below, gives us a better feel for the responses in the various channels.

[Figure: electron-channel response curves for SVN 53 through 61]
In the same way, we can plot the response curves for the remaining satellites:

[Figure: electron-channel response curves for SVN 62 through 73]
These curves are a far cry from the ideal curves of a perfect instrument. In particular, note that all channels, even the low-energy ones, have a larger response to high-energy electrons than to low-energy ones; the chief difference between the high-energy channels and the low-energy ones is that the high-energy channels effectively suppress any response to low-energy electrons.

Thus, a notional stream of high-energy particles would be detected by all channels. In a more realistic stream with a low-energy component, the high-energy component might well swamp the contribution from the low-energy particles, even in the (notionally) low-energy channels; but the high-energy components could be determined from the high-energy channels (which are effectively immune to contamination from low-energy electrons), and then removed prior to the analysis of the low-energy channels.
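
To make that procedure concrete, here is a toy numerical sketch (emphatically not the actual LANL algorithm), in which counts = R · flux for a made-up response matrix R: the high-energy bins are solved first, from the channels that have no low-energy response, and their predicted contribution is subtracted before the low-energy channels are solved.

  import numpy as np

  rng = np.random.default_rng(0)

  n_lo, n_hi = 4, 3                            # made-up energy bins
  R_lo    = rng.uniform(0.5, 1.0, (4, n_lo))   # low channels see low E...
  R_lo_hi = rng.uniform(1.0, 2.0, (4, n_hi))   # ...but respond MORE to high E
  R_hi    = rng.uniform(0.5, 1.0, (3, n_hi))   # high channels see only high E

  true_lo = rng.uniform(10, 20, n_lo)
  true_hi = rng.uniform(1, 5, n_hi)

  counts_lo = R_lo @ true_lo + R_lo_hi @ true_hi
  counts_hi = R_hi @ true_hi

  # Step 1: high-energy flux from the uncontaminated channels.
  est_hi = np.linalg.lstsq(R_hi, counts_hi, rcond=None)[0]

  # Step 2: remove its contribution, then solve the low-energy channels.
  est_lo = np.linalg.lstsq(R_lo, counts_lo - R_lo_hi @ est_hi, rcond=None)[0]

  print(np.allclose(est_hi, true_hi), np.allclose(est_lo, true_lo))  # True True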

Obtaining actual flux values from the instrument is therefore not a trivial task, and will be examined in more detail in a subsequent post.