Some CQ WW SSB Analyses

A huge number of analyses can be performed with the various CQ WW SSB files. There follow a few that interested me. There is plenty of scope for further analyses (some of which are suggested below).

Geographical Participation

How has the geographical distribution of entries changed over time? 

Zones 11 and 28 seem to show a slow but sustained increase in the number of logs submitted. Nevertheless, compared to the behemoths like zones 14 and 15, the number of logs from these areas is minuscule. This can be seen more clearly if we plot the percentage of logs received from each zone as a function of time:
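A minimal sketch of how such per-zone percentages might be computed, assuming we have extracted one (year, zone) pair per submitted log from the public files; the data below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical input: one (year, zone) pair per submitted log, as might
# be extracted from the headers of the public CQ WW SSB log files.
logs = [
    (2014, 14), (2014, 15), (2014, 11),
    (2015, 14), (2015, 15), (2015, 28),
    (2016, 14), (2016, 28),
]

per_year = Counter(year for year, _ in logs)   # total logs per year
per_year_zone = Counter(logs)                  # logs per (year, zone)

def zone_percentage(year, zone):
    """Percentage of that year's logs submitted from the given zone."""
    return 100.0 * per_year_zone[(year, zone)] / per_year[year]

print(zone_percentage(2016, 28))  # → 50.0 (1 of the 2 invented 2016 logs)
```

Plotting `zone_percentage` for each zone across the years would then reproduce the kind of relative-participation chart described above.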

It is also worth noting that, after a lustrum or so of somewhat increased participation, zone 3 has declined perceptibly in relative terms in recent years.

Popularity
By definition, popularity requires some measure of people (or, in our case, the simple proxy of callsigns). So we can look at the number of calls in the logs as a function of time:

Regardless of how many logs a call has to appear in before we regard it as a legitimate callsign, the popularity of CQ WW SSB dropped considerably in 2016, to a level rarely (if ever) seen in the public logs.
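The threshold idea — requiring a call to appear in some minimum number of logs before treating it as legitimate — might be sketched as follows, with invented example logs:

```python
from collections import Counter

# Hypothetical input: for each submitted log, the set of callsigns it
# contains. A call must appear in at least `min_logs` logs before we
# regard it as a legitimate callsign rather than a probable bust.
logs_2016 = [
    {"K3LR", "DL1ABC", "JA1XYZ"},
    {"K3LR", "DL1ABC"},
    {"K3LR", "P5DX"},          # P5DX appears only once: likely busted
]

def popularity(logs, min_logs=2):
    """Number of distinct calls appearing in at least `min_logs` logs."""
    counts = Counter(call for log in logs for call in log)
    return sum(1 for n in counts.values() if n >= min_logs)

print(popularity(logs_2016, min_logs=1))  # → 4: every call, uniques included
print(popularity(logs_2016))              # → 2: probable busts dropped
```

Repeating this count for each year, at several values of `min_logs`, gives the family of popularity curves discussed above.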

Conditions were quite bad for the 2016 running of the contest; nevertheless, while poor conditions might reasonably be expected to have a major bearing on metrics related to the number of QSOs, it is not obvious, a priori, that it should have a substantial effect on the raw popularity as estimated by this metric. It seems that there were simply fewer stations QRV for the contest in 2016 than in earlier years.

I note that a reasonable argument can be made that the number of uniques will be more or less proportional to the number of QSOs made (I have not tested that hypothesis; I leave it as an exercise for the interested reader to determine whether it is true), but there is no obvious reason why the same would be true for, for example, callsigns that appear in, say, ten or more logs.
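For a reader inclined to take up that exercise, one crude way to probe the hypothesis is to check whether the ratio of uniques to total QSOs is roughly constant from year to year. The per-year totals below are invented placeholders, not real contest figures:

```python
# Hypothetical per-year totals: (total QSOs in the logs, unique calls).
# If uniques scale linearly with QSOs, this ratio should be roughly
# constant across years; large swings would argue against the hypothesis.
years = {
    2013: (4_100_000, 31_000),
    2014: (4_300_000, 32_500),
    2015: (4_200_000, 31_900),
    2016: (2_900_000, 22_100),
}

for year, (qsos, uniques) in sorted(years.items()):
    print(year, round(uniques / qsos * 1000, 2), "uniques per 1000 QSOs")
```

The same check applied to calls appearing in ten or more logs would show whether that metric, too, tracks QSO volume.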

Activity
Total activity in a contest depends both on the number of people who participate and on how many QSOs each of those people makes. We can use the public logs to count the total number of distinct QSOs in the logs (that is, each QSO is counted only once, even if both participants have submitted a log).
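Counting each QSO once requires canonicalising the two records that arise when both participants submit logs. One plausible approach (the records and the five-minute window below are assumptions for illustration, not the method actually used) is to sort the pair of calls and bucket the logged times:

```python
# Hypothetical QSO records: (time in minutes, band, own call, worked call).
# A QSO can appear twice, once in each participant's log, so we
# canonicalise each record before counting: sort the two calls and
# bucket the time into a small window to absorb clock differences.
# (Bucketing has edge cases at window boundaries; a real implementation
# would want a tolerance-based match rather than a simple bucket.)
raw_qsos = [
    (751, "20m", "K3LR", "DL1ABC"),   # from K3LR's log
    (753, "20m", "DL1ABC", "K3LR"),   # same QSO, from DL1ABC's log
    (901, "40m", "K3LR", "JA1XYZ"),
]

WINDOW = 5  # minutes

def canonical(qso):
    t, band, a, b = qso
    return (t // WINDOW, band) + tuple(sorted((a, b)))

distinct = {canonical(q) for q in raw_qsos}
print(len(distinct))  # → 2: the duplicated 20m QSO is counted once
```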

As was the case for popularity, there was a precipitous decline in activity in the 2016 CQ WW SSB contest, with both 10m and 15m showing major decreases. Activity on the other bands was down slightly from each band's peak year, but was more or less in line with activity for the past few years. The overall activity, though, was down nearly a third from 2015, a greater decline even than the decline in popularity: thus, fewer stations were active, and, on the average, the stations that were active made fewer QSOs.

Activity would be reasonably expected to be down in 2016, partly because the pool of stations to work was smaller in 2016 (i.e., the contest was less popular in that year), and partly because propagation between the stations that were active was worse.

(Everything else being equal, one might naïvely think that the number of QSOs would scale roughly as the square of the number of active stations; however, given the highly non-random nature of the geographical distribution of stations in CQ WW and the propagation paths between stations, there is no reason to expect, a priori and without further analysis, that such a relationship would necessarily hold. And, in any case, everything is not equal from year to year: a reasonable initial hypothesis is that the amount of activity would scale roughly as the product of the square of the number of active stations and some linear measure of the ease with which a "typical" QSO can be completed. There are, I think, sufficient data in the logs to perform a more quantitative analysis along these lines, should anyone be so inclined.)
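Under that hypothesis, Q ≈ k·N²·p, the quantity Q/N² is (up to the constant k) an estimate of p, the ease of completing a typical QSO, and comparing it across years separates the propagation effect from the participation effect. The figures below are invented placeholders:

```python
# Hypothetical per-year figures: N = active stations, Q = distinct QSOs.
# Under the model Q ≈ k * N**2 * p, the ratio Q / N**2 estimates p
# (the ease of a "typical" QSO) up to the unknown constant k.
data = {
    2014: (33_000, 4_300_000),
    2015: (32_000, 4_200_000),
    2016: (24_000, 2_900_000),
}

for year, (n, q) in sorted(data.items()):
    print(year, f"Q/N^2 = {q / n**2:.2e}")
```

A year whose Q/N² is markedly lower than its neighbours' would, under this model, indicate that conditions (rather than participation alone) suppressed activity.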
