UA9BA's 160m Signal

Willy, UA9BA, has posted a brief article about his 160m antenna. Therein, he compares his 160m signal to those of RG9A, R8TT and UN7LZ in this year's CQ 160m contest.

Unfortunately, the graphs he presents are impossible to interpret quantitatively and with statistically meaningful confidence limits, a situation exacerbated by the RBN's unique use of the term SNR.

A much clearer picture emerges if we issue the command:
  compare-multiple-sigs 20170128 20170128 160m EU UA9BA RG9A R8TT UN7LZ
which generates this plot:

This tells us immediately that, for the periods in which UA9BA and one of the other stations could be heard by the European portion of the RBN, there is no evidence that there was a substantive difference between the signals from UA9BA and RG9A or between the signals from UA9BA and UN7LZ. We can, however, be 99% certain that UA9BA was stronger than R8TT in Europe, with a signal difference of a few dB.

I note that although Willy refers to the other stations as his "local rivals", they are actually rather distant from him and (if the maps on qrz.com are correct) considerably closer to Europe, which, of course, makes his signal even more impressive.

NB: The graphs in Willy's post show that there are periods when not all the stations generated posts at a given RBN receiver. This is hardly surprising, given the relatively large separation of the stations and the nature of propagation on 160m. A statistically meaningful signal comparison requires that enough pairwise measurements be made at about the same time that a statistical analysis with usefully narrow 99% confidence limits can be performed.
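The pairing requirement above can be made concrete. The following sketch (with made-up SNR readings, not the actual RBN data for these stations) shows how a paired comparison with a normal-approximation 99% confidence interval might be computed:

```python
import math
import statistics

def paired_snr_difference(a, b, z=2.576):
    """Given paired SNR readings (dB) for two stations, taken at the same
    receiver at about the same time, return the mean difference and a 99%
    confidence interval (normal approximation, z = 2.576)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = statistics.mean(diffs)
    sem = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean, (mean - z * sem, mean + z * sem)

# Hypothetical paired readings (dB) at the same European skimmers
ua9ba = [22, 25, 19, 24, 21, 23, 20, 26]
r8tt  = [18, 22, 17, 20, 18, 19, 17, 21]

mean, (lo, hi) = paired_snr_difference(ua9ba, r8tt)
# If lo > 0, the first station is stronger at the 99% level
```

If the lower confidence limit is above zero, one can assert (at the 99% level) a real difference between the two signals, which is exactly the kind of statement the graphs in Willy's post cannot support.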


The Growth of the RBN

In this post, as an example of the utility of a summary file of RBN data, I showed a simple graph of the RBN's numerical growth since inception in 2009:


If we look at the original data rather than the summary, we can create a more useful series of pictures showing the geographic growth (and, in some areas, the lack of such growth) of the RBN. In these images, the size and colour of each ring represents the number of posts from each RBN poster in the named year.

We can see that, apart from the near-saturation geographical coverage in most of Europe and parts of the U.S. and Canada, RBN posters are scattered thinly around the world. In particular, Africa, South America, the Pacific, most of Australia, and a large swath of Asia are nearly devoid of reporting stations -- and even the stations that do exist in those places make few postings to the network (although part of this may be simply because of a relative paucity of stations heard).

So, although the growth in raw numbers in the original summary graph looks quite impressive, the distribution maps show that there is a long way to go before reasonably dense geographical coverage is in place over most of the globe. Since it is not even necessary to possess a transmitting license to establish a useful RBN receiver, it is particularly dispiriting that so many densely populated, technologically modern countries around the world have not even a single location posting received signals to the RBN.

The same data can be plotted in a less-cluttered manner that perhaps makes the lack of posting stations across much of the world even clearer:


New augmented logs for CQ WW SSB from 2005 to 2016

I have added another column to the flags on each line of the CQ WW SSB augmented logs for 2005 to 2016 (the MD5 hash of the file is: 491e48051f23ff6a67f700333668c4fe).

The additions to the standard CQ WW Cabrillo QSO line are now as follows (the change being the addition of flag k):
  1. The letter "A" or "U" indicating "assisted" or "unassisted"
  2. A four-digit number representing the time of the contact in minutes, measured from the start of the contest. (I realise that this can be calculated from the other information on the line, but it saves a lot of time to have the number readily available in the file without having to calculate it each time.)
  3. Band
  4. A set of eleven flags, each encoded as T/F: 
    • a. QSO is confirmed by a log from the second party 
    • b. QSO is a reverse bust (i.e., the second party appears to have bust the call of the first party) 
    • c. QSO is an ordinary bust (i.e., the first party appears to have bust the call of the second party) 
    • d. the call of the second party is unique 
    • e. QSO appears to be a NIL 
    • f. QSO is with a station that did not send in a log, but who did make 20 or more QSOs in the contest 
    • g. QSO appears to be a country mult 
    • h. QSO appears to be a zone mult 
    • i. QSO is a zone bust (i.e., the received zone appears to be a bust)
    • j. QSO is a reverse zone bust (i.e. the second party appears to have bust the zone of the first party)
    • k. QSO appears to be made during a run by the first party 
Note that the encoding of some of the flags requires subjective decisions to be made as to whether the flag should be true or false; consequently, and because CQ has yet to understand the importance of making their scoring code public, the value of a flag for a specific QSO line in some circumstances might not match the value that CQ would assign. (Also, CQ has more data available in the form of check logs, which are not made public.)
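As an illustration of item 2 above, the minutes-from-start value can be recomputed from the ordinary Cabrillo date and time fields along these lines (a sketch; the field formats assumed are the standard Cabrillo ones):

```python
from datetime import datetime

def minutes_from_start(qso_date, qso_time, contest_start):
    """Minutes elapsed between the contest start and a QSO, computed from
    the Cabrillo date (yyyy-mm-dd) and time (hhmm) fields."""
    qso = datetime.strptime(f"{qso_date} {qso_time}", "%Y-%m-%d %H%M")
    return int((qso - contest_start).total_seconds() // 60)

# CQ WW runs from 0000Z Saturday; e.g. for the 2016 SSB running:
start = datetime(2016, 10, 29, 0, 0)
minutes_from_start("2016-10-29", "0132", start)   # 92
```

Having the value precomputed in the augmented file simply avoids repeating this conversion for every one of the millions of QSO lines.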


Why I Don't Use a Word Processor (1)

It's been far too long since I wrote an entry here on the intended principal subject of this blog: writing.

The trouble is that when one is in the process of writing, there really doesn't seem to be much to talk about: the work goes on, either quickly or slowly, but, above all, in private. It would seem odd (to me, anyway) to share any details of the work in progress, if only because it all might change before the work sees the light of day (if, indeed, it ever does). Characters change all the time, and even the basic story has a habit of ending up being quite different from what I thought I was writing.

So, I wondered, what could I post about if not the work in progress?  After pondering for a while I realised that it might be worthwhile to talk about the subject I have given as a title to this post, particularly as the subject should be big enough to occupy several posts. (Is there a word for a blog entry? "Post" seems too staid for the twenty-first century; "blentry" seems like it would be the obvious candidate, but I don't recall ever seeing it used.)

These days it seems that one can't throw the proverbial rock without hitting someone who has taken advantage of one of the many self-publishing companies that convert a computer file into a printed book (I'll leave e-books for another time... they might easily turn out to be the subject of several harangues). Regardless of the quality of the writing, the people I know who have taken advantage of these services (the cynic in me wonders which side is taking advantage of which) have all used a word processor to generate and format the content of their work. Whenever the subject of formatting has come up in conversation, they seem uniformly puzzled when I confess that I use no word processor either when actively writing or when formatting the finished product. The usual response, to put words in their mouths, is more or less along the lines of, "How is it possible not to use a word processor when you write a book?  You don't write it out in longhand, do you?" The possibility that a word processor might not be the best kind of program -- indeed, might not even be an appropriate program at all -- seems never to be questioned by most people.

There is little doubt that word processors are perfectly acceptable instruments if one intends to write a business letter or the annual Christmas missive. But for anything more complicated, it is often worth spending a not-inconsiderable amount of effort on the decision of which tool to use.

Writers are far from the only people who simply fire up the word processor because it is a convenient tool for transferring words from the brain to the screen (and, ultimately, the page). Lawyers, for example, do the same thing: legal briefs are often insanely complicated documents that include figures, tables, appendices, cross references, citations ad nauseam and other esoterica -- and, to make matters worse, are often co-authored by an entire team, members of which often spend an inordinate amount of time trying to manage a document they have received from a colleague but which doesn't want to display correctly on their computer, or which causes their word processor to crash when performing some function like accepting a change from another person on the team.

One would think (if one is as naïf as I) that the sheer pain of such a process would cause someone to call a time-out and instigate an investigation as to whether there isn't a better way. But that never seems to happen.

Authors have a different problem from that experienced by a legal team: the author's goal is to produce a hardcopy book, rather than a document with complex internal structure. It sounds simple: how hard can it be to lay out words on a page? But rare is the author who thinks at the outset about whether a word processor is really the right tool for producing beautiful text.

Like many things in life, the process of producing attractive text seems like it should be easy -- and turns out to be anything but, demanding attention to details that aren't even in a word processor's vocabulary.

Commercial publishers used to employ specialist typographers (I imagine that some still do, but they seem to have disappeared from the large, mainstream publishing houses who now routinely produce books of whose typography they would once surely have been ashamed). The principal obvious job of a typographer is to make the text look attractive; but a less-obvious task is to make the text easy to read, minimizing the fatigue that is caused in a reader who has to work harder than necessary to convert the shapes on the paper to words in his head. Reading a badly-typeset book is at best more fatiguing than necessary, and at worst so aggravating that the reader might well give up altogether. (The most annoying of all is when the content of the book grasps the reader's attention, but the typography wears him down.)

In later posts (should I say "blentries"? is that any more of an abomination than "blog", which is now common currency?) I shall describe in some detail the various minuscule changes that a good typographer (or good typography software) can make to text in order to make it more readable. Unless the reader knows what to look for, he will probably never consciously notice any of these changes: but they make all the difference between a shoddy "cheap"-looking book produced from a word processor, and a book formatted to professional standards; also, the difference between smoothly-flowing text that the brain can interpret easily as it scans the page, and shapes that cause the reader to have to stop scanning because the transition from shapes on a page to a word in the head isn't as seamless as it should be.

And, egotistical though it may sound, when I've put in all the work needed to write a book, I don't want the reader to be distracted from the story by some awkwardness in the layout of letters and words on the page.


2016 Kernow Callsigns and the RBN

During 2016, stations in Cornwall, England were permitted to apply for a Notice of Variation ("NoV") from Ofcom allowing them to change their prefix, at their option, at any time during the year. A couple of limitations applied: the main address of the station had to be in Cornwall, and the operation had to occur in Cornwall.

The Kernow prefixes were:

Ordinary Prefix Kernow Prefix
G GK
M MK
2E 2K

Thus, for example, the active Cornish station G4AMT sometimes signed GK4AMT during 2016.

The RBN recorded a total of 16,120 posts of 192 stations using Kernow prefixes. Some of these posts, however, were miscopies of callsigns, and some were of stations that do not appear to have met the criteria for legitimate use of a Kernow callsign. The RBN reports all occasions on which the call was recorded calling CQ on CW and most digital modes (a single CQ results in multiple posts if more than one station on the network copies it). While postings on the RBN are not an ideal measurement of activity, they are the best we have, and they do have the distinct merit of being objective.
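For anyone wishing to perform a similar tally, the per-call counting is straightforward; a sketch, assuming each RBN post has been reduced to a (poster, spotted call) pair (the callsigns below are illustrative only):

```python
from collections import Counter

def posts_per_call(posts):
    """Tally RBN posts by spotted callsign. Each record is a
    (poster, spotted_call) pair; a single CQ copied by several skimmers
    therefore counts once per skimmer, exactly as on the RBN."""
    return Counter(call for _poster, call in posts)

posts = [("G4ZFE-#", "GK4AMT"), ("DL9GTB-#", "GK4AMT"), ("G4ZFE-#", "GK3UCQ")]
posts_per_call(posts)   # Counter({'GK4AMT': 2, 'GK3UCQ': 1})
```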

The 192 callsigns with Kernow prefixes reported by the RBN (including many obvious errors) were:


Unfortunately, there appears to be no definitive list of stations that were issued an NoV; but, comparing the listed calls with stations with Cornish main addresses on QRZ.com, we find that the following stations appear to be the only legitimate Kernow callsigns posted by the RBN:


We can create a table showing the number of posts for each of these stations in 2015 and 2016:

Base call 2015 G 2016 K 2016 Σ 2016
G0ANM 1 1 1 2
G0DLV 0 0 4 4
G0PNM 7,895 3,542 1,107 4,649
G1LQT 60 1 64 65
G3KDP 2,160 632 134 766
G3LAI 0 1 12 13
G3LNW 918 81 15 96
G3MPD 0 1 553 554
G3PLE 2 20 140 160
G3UCQ 3,299 304 6,717 7,021
G3WPP 0 0 1 1
G4AMT 651 2,273 5,449 7,722
G4BPJ 1,753 2,429 690 3,119
G4DTD 409 142 42 184
G4EOG 482 151 82 233
G4MYY 0 0 43 43
G4PBN 8 0 4 4
G7KFQ 0 0 28 28
G7OGX 32 3 127 130
M0BKV 58 2 1 3
M0BUI 112 4 1 5
M0ORS 0 1 8 9
TOTAL 17,840 9,587 14,670 24,257

The columns show:
  1. the station's ordinary callsign;
  2. the number of times that the station's ordinary G call was posted by the RBN in 2015;
  3. the number of times that the station's ordinary G call was posted by the RBN in 2016;
  4. the number of times that the station's Kernow call was posted (in 2016);
  5. The number of times that either of the station's calls were posted in 2016.
Note that G[K]3MPD is a club station, and there was, apparently, no requirement by Ofcom that the operator of the station hold an NoV even when signing with the GK callsign (for example, GK3MPD entered a log in the CQ WW SSB contest, but the operator was from Scotland, not Cornwall). Therefore it seems reasonable to exclude GK3MPD as a fully legitimate GK callsign in the analysis below, since its use on at least some occasions appears to go against the spirit of the NoV, whose intent was clearly not to include operators who were temporarily in Cornwall (otherwise, the NoV would have been made available directly to such operators; it was not). The station of the other major club in Cornwall, the Cornish Radio Amateur Club, GX4CRC, appears to be inactive, if not moribund: GX4CRC was not spotted by the RBN in 2015 or 2016, nor was GK4CRC spotted in 2016.

Thus the TOTAL line in the above table does not include data from G[K]3MPD. In the tables below, values for G[K]3MPD are included for informational purposes in the individual rows, but they are not used in calculations of totals, etc.

Just looking at the raw numbers, several things are obvious:
  • Stations with an NoV showed a large percentage of activity in 2016 using their ordinary call (roughly 40%).
  • The total amount of activity due to these stations appeared to increase from 2015 to 2016, by about 35%.
  • The apparent increase in activity was therefore not due merely to operations with the K calls.
  • The vast bulk of activity was from a mere handful of stations: just two stations, GK3UCQ and GK4AMT, accounted for more than 80%(!!) of the posts of GK calls. Any station looking to work GK stations would likely be frustrated by this statistic. I know that I was. 
  • Indeed, if G[K]3UCQ and G[K]4AMT had not been active, the activity level would have decreased by more than 30% between 2015 and 2016.
We can improve our understanding by looking at the numbers in the context of the total annual number of posts of stations in England (this number will reflect both the activity of the stations and the dynamic nature of the RBN -- the latter is important, and we need to take it into account when we try to determine whether the Cornish stations were really more active in 2016; presumably the dynamic nature of the RBN affects both the reported G activity in toto and the reported activity from our Cornish stations in an unbiased manner):

Year G Posts
2009 100,579
2010 729,257
2011 1,418,474
2012 2,164,217
2013 2,667,757
2014 2,978,974
2015 2,957,711
2016 2,889,584

We can use these figures to determine what percentage of the total English activity is represented by the stations that were active with a K callsign in 2016 (splitting the resulting table into two so as to allow it to fit more easily on the web page).

Base call 2009 2010 2011 2012
G0ANM 0 1 1 0
G0DLV 0 2 2 0
G0PNM 0 0 0 0
G1LQT 0 0 129 26
G3KDP 15 135 568 1,603
G3LAI 2 0 0 0
G3LNW 2 4 52 20
G3MPD 0 4 0 554
G3PLE 0 4 0 2
G3UCQ 79 455 1,370 1,372
G3WPP 0 147 103 0
G4AMT 250 2,989 3,781 2,922
G4BPJ 69 172 785 845
G4DTD 5 80 223 163
G4EOG 1 67 113 414
G4MYY 0 0 0 0
G4PBN 0 0 0 0
G7KFQ 0 0 0 0
G7OGX 0 22 33 23
M0BKV 0 1 21 17
M0BUI 0 0 0 0
M0ORS 0 0 0 0
TOTAL 423 4,083 7,181 7,407
ALL Gs 100,579 729,257 1,418,474 2,164,217
% Cornish 0.42 0.56 0.51 0.34

Base call 2013 2014 2015 G 2016 K 2016 Σ 2016
G0ANM 2 7 1 1 1 2
G0DLV 0 0 0 0 4 4
G0PNM 940 7,376 7,895 3,542 1,107 4,649
G1LQT 33 2 60 1 64 65
G3KDP 1,235 2,163 2,160 632 134 766
G3LAI 0 0 0 1 12 13
G3LNW 34 0 918 81 15 96
G3MPD 0 51 0 1 553 554
G3PLE 51 258 2 20 140 160
G3UCQ 2,852 2,870 3,299 304 6,717 7,021
G3WPP 106 112 0 0 1 1
G4AMT 3,687 4,619 651 2,273 5,449 7,722
G4BPJ 1,454 4,955 1,753 2,429 690 3,119
G4DTD 363 296 409 142 42 184
G4EOG 129 297 482 151 82 233
G4MYY 0 0 0 0 43 43
G4PBN 0 1 8 0 4 4
G7KFQ 0 0 0 0 28 28
G7OGX 10 61 32 3 127 130
M0BKV 26 12 58 2 1 3
M0BUI 0 0 112 4 1 5
M0ORS 0 0 0 1 8 9
TOTAL 10,922 23,080 17,840 9,587 14,670 24,257
ALL Gs 2,667,757 2,978,974 2,957,711 2,889,584 2,889,584 2,889,584
% Cornish 0.41 0.77 0.60 0.33 0.51 0.83
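The "% Cornish" rows in the tables above are simply the Cornish totals expressed as a percentage of all G posts; for example:

```python
def percent_cornish(cornish_posts, all_g_posts):
    """Percentage of all English (G) RBN posts due to the Kernow-NoV
    stations, rounded to two decimal places as in the tables above."""
    return round(100 * cornish_posts / all_g_posts, 2)

percent_cornish(423, 100_579)       # the 2009 figure: 0.42
percent_cornish(7_407, 2_164_217)   # the 2012 figure: 0.34
```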

Thus, we see that the activity of our Cornish stations as compared to the activity of G stations as a whole varied tremendously in the years leading up to 2016, representing between 0.34% and 0.77% of the total, depending on the year.

There are insufficient data to determine the long-term distribution of the percentage of activity due to the Cornish stations, but we can see that the total activity of these stations in 2016, including both GK activity and non-GK activity, is not much different from the peak activity in the few years prior to 2016 (0.83% as against 0.77%). (Indeed, there are so few annual data that the standard error of the mean is ~0.05, so that the 99% confidence limit for the mean of the underlying percentage distribution covers the rather wide range from 0.41 to 0.62.)
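The quoted standard error of the mean can be reproduced from the seven annual percentages for 2009 to 2015 (only the SEM is computed here, since the exact confidence limits depend on the multiplier one chooses):

```python
import math
import statistics

# "% Cornish" for 2009 through 2015, from the tables above
annual_pct = [0.42, 0.56, 0.51, 0.34, 0.41, 0.77, 0.60]

mean = statistics.mean(annual_pct)                              # ~0.52
sem = statistics.stdev(annual_pct) / math.sqrt(len(annual_pct)) # ~0.05
```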

In summary, the data about enhanced activity in 2016 are not completely clear-cut: but that in itself tells us something important: that the availability of GK callsigns did not make a substantial (i.e., statistically unambiguous) difference to the activity from Cornwall. To the extent that there was an increase in activity, it was of a somewhat peculiar secondary type: the stations with an NoV still exhibited a high percentage of activity with their ordinary callsigns, and it is only if both types of callsigns are included that there is even a possibility that activity increased a little above historical levels.

Another clear result is that the availability of the GK calls did very little to encourage relatively inactive stations to become more active. The four most active stations in 2016 were G[K]0PNM, G[K]3UCQ, G[K]4AMT and G[K]4BPJ. These four stations accounted for 22,511, or 93%, of the posts in 2016. For the most part, these are the same four stations as have been historically active (the exception being G0PNM, who seems to have been inactive prior to 2013; similarly, G3KDP, who was historically one of the more active stations, has become much less active in the past couple of years). Once one looks past the top four stations in a given year, the remaining stations contribute almost nothing to the activity total. These would be the very stations that one might expect to have been encouraged to become more active in 2016 with their special callsigns; but their activity remained at a very low level despite their occasional use of their Kernow callsigns.

Departing from the objective data to make a few concluding subjective statements: this analysis confirms my own impression of Kernow activity in 2016, to which I had looked forward with considerable enthusiasm and expectation: GK4AMT was active and workable on several bands over the course of the year (including thrice on 80m); GK3UCQ was heard and worked on 20m and 17m; the other stations I worked were a struggle, and were heard only very rarely. Without the RBN to alert me to their presence, I would have worked no Kernow stations except GK4AMT and, possibly, GK3UCQ. I especially appreciated GK4AMT's activity throughout the year; in particular, he was one of only two EU stations I worked on 80m during the CQ WW CW contest.

The absence of GK4CRC throughout the year was a disappointment (especially as the club sponsored an award that essentially required a QSO with GK4CRC), as was the lack of focused activity on St. Piran's Day (5 March), when I naïvely expected several GK stations to be workable. Despite the fact that the day coincided with the ARRL SSB DX contest, I heard not a single Kernow station. Some GK stations were spotted by the RBN on that day, but none was outside the small group that was active throughout the year.

Another disappointment took the form of the small number of workable Cornish stations who, for whatever reason, had not applied for a Kernow call. I was quite saddened that the special station GB0GLD, whose callsign was taken from the call of Land's End Radio (GLD), where I took my Morse test long ago, chose not to use a Kernow call.

I am grateful to all those who put in the effort to bring about the 2016 K-for-Kernow NoVs, and to GK3UCQ in particular, not only for his sustained efforts in that cause but also for being the manager for the beautiful Kernow award.


Summary File for RBN data 2009 to 2016

The complete set of RBN data for 2009 to the end of 2016, after uncompression, is some 50GB in size. As not all analyses need the complete dataset, I have constructed a summary file that contains an overview of the data and which is sufficient for many kinds of analysis that do not depend on the details of individual posts to the RBN. (The script used to generate this summary file may be found here.)

The summary file, after being uncompressed, comprises a single large table of values separated by white space. The name of each column (there are nine columns in all) is on the first row. The columns are:
  1. band: a string that identifies the band pertaining to this row. Typical values are "15m" or "160m"; if a row contains data that are not distinguished by band, then the characters "NA" are used.
  2. mode: a string that identifies the mode pertaining to this row. Typical values are "CW" or "RTTY"; if a row contains data that are not distinguished by mode, then the characters "NA" are used.
  3. type: a single character that identifies whether the data on this row are for a period of a year ("A"), a month ("M") or a day ("D").
  4. year: the numeric four-digit value of the year to which the current row pertains.
  5. month: the numeric value of the month (January = 1, etc.) of the data in this row. If the data are of type A or D, then this element has the value "NA".
  6. doy: the numeric value of the day number of the year (January 1st = 1, etc.). The maximum value in each year is 366 (even if the year is not a leap year). In the event that the year is not a leap year, the data in columns 7, 8 and 9 will be set to 0 when doy is 366. If the data are of type A or M, then this element has the value "NA".
  7. posts: the total number of posts recorded by the RBN for the band, mode and period identified by the first six columns. 
  8. calls: the total number of distinguishable calls recorded by the RBN for the band, mode and period identified by the first six columns. 
  9. posters: the total number of distinguishable posters recorded by the RBN for the band, mode and period identified by the first six columns.
 For example, the first two lines of the summary file are:
      band     mode     type     year    month      doy    posts    calls  posters
     NA       NA        A     2009       NA       NA  5007040   143724      151

This tells us that the first line of actual data comprises annual data for the year 2009, with no separation by band or mode. In 2009, we see that there were 5,007,040 posts of 143,724 callsigns by 151 posters.

The summary file allows rather rapid analysis of the RBN. For example, the plot in this post was originally generated rather tediously from the entire 50GB dataset. Using the summary file, the same plot --

 -- can be generated in less than ten seconds.

The summary file can be used to generate a simple plot of the growth of the RBN since its inception just as quickly:


Most-Logged Stations in CQ WW SSB 2016

The public CQ WW SSB logs allow us easily to tabulate the stations that appear in the largest number of entrants' logs. For 2016, the ten stations with the largest number of appearances were:

Callsign Appearances % logs
CN3A 8696 57
CN2R 8333 55
9A1A 8111 53
LZ9W 8072 52
PJ2T 7729 45
EF8R 7677 53
YT8A 7629 50
CN2AA 7268 51
PJ4X 7087 44
DF0HQ 6752 47

The first column in the table is the callsign. The second column is the total number of times that the call appears in logs. That is, if a station worked CN3A on six bands, that will increment the value in the second column of the CN3A row by six. The third column is the percentage of logs that contain the callsign at least once.
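Both columns can be derived with a single pass over the logs; a sketch, assuming each log has been reduced to the list of callsigns it contains (one entry per QSO; the calls shown are illustrative):

```python
from collections import Counter

def appearance_table(logs):
    """Given a dict mapping each entrant's call to the list of calls in
    that entrant's log (one entry per QSO), return (appearances, pct):
    the total number of appearances of each call, and the percentage of
    logs in which it appears at least once."""
    appearances = Counter()
    containing = Counter()
    for worked in logs.values():
        appearances.update(worked)        # every QSO line counts
        containing.update(set(worked))    # each log counts at most once
    n = len(logs)
    pct = {call: round(100 * c / n) for call, c in containing.items()}
    return appearances, pct

logs = {"K1AA": ["CN3A", "CN3A", "9A1A"], "K2BB": ["CN3A"]}
appearance_table(logs)
# CN3A: 3 appearances, in 100% of logs; 9A1A: 1 appearance, in 50%
```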

For comparison, here is the equivalent table for 2015:

Callsign Appearances % logs
CN2AA 15100 78
CN2R 10017 63
9A1A 9872 61
CN3A 9693 61
DF0HQ 9237 59
LZ9W 9045 56
HK1NA 8936 54
PJ2T 8456 51
K3LR 8310 54
EF8R 8254 57

We can also perform the same analysis for, say, a ten-year span, to show which stations have most consistently appeared in other stations' logs. So, for CQ WW SSB for the period 2007 to 2016, we find:

Callsign Appearances % logs
LZ9W 80063 57
DF0HQ 76871 57
CN3A 74097 58
OT5A 71180 55
K3LR 66814 51
PJ2T 66702 47
P33W 64957 52
V26B 55209 43
LY7A 55161 44
DR1A 53237 46

 For comparison, the table for the period 2006 to 2015 is:

Callsign Appearances % logs
DF0HQ 76024 59
LZ9W 71994 54
CN3A 69456 58
OT5A 66397 53
K3LR 65974 52
DR1A 65622 49
PJ2T 63768 47
LY7A 58901 49
P33W 58520 49
V26B 56197 45

Tables relating to earlier years are here.


Summary of RBN data, 2009 - 2016

A simple plot of the RBN data from inception to the end of 2016 shows a few noteworthy features:

  1. The hebdomadal periodicity caused by increased activity at weekends is obvious.
  2. The weekends of major SSB contests (CQ WW, CQ WPX, and, to a lesser extent, ARRL DX) show little or no increase over the activity during the surrounding week.
  3. The weekend with the largest activity is consistently the weekend of CQ WW CW. The next-most-active weekend is the weekend of CQ WPX. In 2016, for the first time, the RDXC contest engendered more activity (just) than did ARRL DX CW.


Some CQ WW SSB Analyses

A huge number of analyses can be performed with the various CQ WW SSB files. There follow a few that interested me. There is plenty of scope for further analyses (some of which are suggested below).

Geographical Participation

How has the geographical distribution of entries changed over time? 

Zones 11 and 28 seem to show a slow but sustained increase in the number of logs submitted. Nevertheless, compared to the behemoths like zones 14 and 15, the number of logs from these areas is minuscule. This can be seen more clearly if we plot the percentage of logs received from each zone as a function of time:

It is also worth noting that after a lustrum or so of somewhat increased participation, zone 3 has declined perceptibly in relative participation in recent years.


Popularity

By definition, popularity requires some measure of people (or, in our case, the simple proxy of callsigns). So we can look at the number of calls in the logs as a function of time:

Regardless of how many logs a call has to appear in before we regard it as a legitimate callsign, the popularity of CQ WW SSB dropped considerably in 2016, to a level rarely (if ever) seen in the public logs.

Conditions were quite bad for the 2016 running of the contest; nevertheless, while poor conditions might reasonably be expected to have a major bearing on metrics related to the number of QSOs, it is not obvious, a priori, that they should have a substantial effect on the raw popularity as estimated by this metric. It seems that there were simply fewer stations QRV for the contest in 2016 than in earlier years.

I note that a reasonable argument can be made that the number of uniques will be more or less proportional to the number of QSOs made (I have not tested that hypothesis; I leave it as an exercise for the interested reader to determine whether it is true), but there is no obvious reason why the same would be true for, for example, callsigns that appear in, say, ten or more logs.


Activity

Total activity in a contest depends both on the number of people who participate and on how many QSOs each of those people makes. We can use the public logs to count the total number of distinct QSOs in the logs (that is, each QSO is counted only once, even if both participants have submitted a log).
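The de-duplication can be done by mapping each QSO to a canonical key; a sketch (matching real logs must also tolerate busts and clock skew, which this ignores):

```python
def qso_key(call1, call2, band, minute):
    """Canonical key for a QSO: order the two calls so that the same
    contact logged by both parties maps to the same key. The time is
    assumed already rounded to the minute for simplicity."""
    a, b = sorted((call1, call2))
    return (a, b, band, minute)

# The same contact as it appears in each party's log:
qsos = [("K1AA", "CN3A", "20m", 512), ("CN3A", "K1AA", "20m", 512)]
len({qso_key(*q) for q in qsos})   # 1 distinct QSO
```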

As was the case for popularity, there was a precipitous decline in activity in the 2016 CQ WW SSB contest, with both 10m and 15m showing major decreases. Activity on the other bands was down slightly from each band's peak year, but was more or less in line with activity for the past few years. The overall activity, though, was down nearly a third from 2015, a greater decline even than the decline in popularity: thus, fewer stations were active, and, on the average, the stations that were active made fewer QSOs.

Activity would be reasonably expected to be down in 2016, partly because the pool of stations to work was smaller in 2016 (i.e., the contest was less popular in that year), and partly because propagation between the stations that were active was worse.

(Everything else being equal, one might naïvely think that the number of QSOs would scale roughly as the square of the number of active stations; however, given the highly non-random nature of the geographical distribution of stations in CQ WW and the propagation paths between stations, there is no reason to expect, a priori and without further analysis, that such a relationship would necessarily pertain. And, in any case, everything is not equal from year to year: a reasonable initial hypothesis is that the amount of activity would scale roughly as the product of the square of the number of active stations and some linear measure of the ease with which a "typical" QSO can be completed. There are, I think, sufficient data in the logs to perform a more quantitative analysis along these lines, should anyone be so inclined.)


New CQ WW SSB video maps

I have updated the set of CQ WW SSB video maps on my youtube channel (channel N7DR). These video maps cover all the years for which public CQ WW SSB logs are currently available (2005 to 2016).

To access individual videos directly:


CQ WW SSB 2016 logs available

The logs for the 2016 running of the CQ WW SSB contest are now available from CQ. The logs for the CW contest are not yet available.

The logs are also available in compressed form that may be more easily and quickly downloaded here.

I have created a compressed file that contains all the cleaned QSO lines from the Cabrillo files from all the SSB logs for all the years for which data are available. Currently, the file covers all the SSB QSOs in the years from 2005 to 2016. The MD5 checksum of this file is: ac555db15417fa76466ed636936e0efe.

I have also created an augmented file, in compressed format, that adds useful data to each QSO. Each QSO line in the augmented file includes an additional four columns, with the following meanings:

  1. The letter "A" or "U" indicating "assisted" or "unassisted"
  2. A four-digit number representing the time of the contact in minutes, measured from the start of the contest. (I realise that this can be calculated from the other information on the line, but it saves a lot of time to have the number readily available in the file without having to calculate it each time.)
  3. Band
  4. A set of ten flags, each encoded as T/F: 
    • a. QSO is confirmed by a log from the second party 
    • b. QSO is a reverse bust (i.e., the second party appears to have bust the call of the first party) 
    • c. QSO is an ordinary bust (i.e., the first party appears to have bust the call of the second party) 
    • d. the call of the second party is unique 
    • e. QSO appears to be a NIL 
    • f. QSO is with a station that did not send in a log, but who did make 20 or more QSOs in the contest 
    • g. QSO appears to be a country mult 
    • h. QSO appears to be a zone mult 
    • i. the QSO is a zone bust (i.e., the received zone appears to be a bust)
    • j. the QSO is a reverse zone bust (i.e. the second party appears to have bust the zone of the first party)
The MD5 checksum of this file is: 4acae023af4ea41903249f54446dd79a.
Note that the flags in the augmented data are calculated from the raw data independently of the CQ contest committee. This is because:
  1. CQ still does not make the actual scoring code available;
  2. the checklogs are not public, and hence represent additional data that CQ can use in determining the values of the flags.


2016 RBN data

All the postings to the Reverse Beacon Network in 2016, along with the postings from prior years, are now available in the directory https://www.adrive.com/public/cQwkEB/rbn.

Some simple annual statistics for the period 2009 to 2016 follow (the 2009 numbers cover only part of that year, as the RBN was instantiated partway through that year).

Total posts:
2009:   5,007,040
2010:  25,116,810
2011:  49,705,539
2012:  71,584,195
2013:  92,875,152
2014:  108,862,505
2015:  116,385,762
2016:  111,027,068
 Total posting stations:
2009: 151
2010: 265
2011: 320
2012: 420
2013: 473
2014: 515
2015: 511
2016: 590
 Total posted callsigns:
2009: 143,724
2010: 266,189
2011: 271,133
2012: 308,010
2013: 353,952
2014: 398,293
2015: 433,197
2016: 375,613
Obviously, statistics that are considerably more comprehensive may be derived rather easily from the files in the directory.

Note that if you intend to use the database's reported signal strengths in an analysis, you should be sure that you understand the ramifications of what the RBN means by SNR.