## 2023-03-23

### Zone-Based Analyses from 2022 CQ WW SSB and CQ WW CW logs

A large number of analyses can be performed with the various public CQ WW logs (cq-ww-2005--2022-augmented.xz; see here for details of the augmented format) for the period from 2005 to 2022.

As usual, there follow a few analyses that interest me. There is, of course, plenty of scope to use the augmented files for further analyses.

Below are some simple zone-based analyses from the logs.

### Zones and Distance

As in prior years, we can examine the distribution of distance for QSOs as a function of zone.

Below is a series of figures showing this distribution integrated over all bands and, separately, band by band for the CQ WW SSB and CQ WW CW contests for 2022.

Each plot shows a colour-coded distribution of the distance of QSOs for each zone, with the data for SSB appearing above the data for CW within each zone.

For every half-QSO in a given zone, the distance of the QSO is calculated; in this way, the total number of half-QSOs in bins of width 500 km is accumulated. Once all the QSOs for a particular contest have been binned in this manner, the distribution for each zone is normalised to total 100% and the result coded by colour and plotted. The mean distance for each zone and mode is denoted by a small white rectangle added to the underlying distance distribution.

Only QSOs for which logs have been provided by both parties, and which show no bust of either callsign or zone number are included. Bins coloured black are those for which no QSOs are present at the relevant distance.
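The binning and normalisation procedure can be sketched as follows (a minimal illustration; the function name and the assumption that qualifying QSOs have already been reduced to (zone1, zone2, distance_km) tuples are mine, not from the augmented-file format):

```python
from collections import defaultdict

BIN_KM = 500  # bin width in kilometres

def distance_distribution(qsos):
    """Accumulate half-QSO counts per (zone, distance-bin), then
    normalise each zone's distribution so that it totals 100%.

    `qsos` is an iterable of (zone1, zone2, distance_km) tuples,
    one per QSO already filtered to two-way, bust-free entries."""
    counts = defaultdict(lambda: defaultdict(int))
    for zone1, zone2, distance_km in qsos:
        b = int(distance_km // BIN_KM)   # 0 -> 0-499 km, 1 -> 500-999 km, ...
        counts[zone1][b] += 1            # one half-QSO accrues to each zone
        counts[zone2][b] += 1
    percentages = {}
    for zone, bins in counts.items():
        total = sum(bins.values())
        percentages[zone] = {b: 100 * n / total for b, n in bins.items()}
    return percentages
```

The mean distance marked by the white rectangle would then be a weighted average over each zone's bins.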

The resulting plots are reproduced below. I find that they display in a compact format a wealth of data that is informative and often unexpected.

### Zone Pairs

As in prior years, we can examine the number of QSOs for pairs of zones from the 2022 contests using the augmented file.

The procedure is simple. We consider only QSOs that meet the following criteria:
1. marked as "two-way" QSOs (i.e., both parties submitted a log containing the QSO);
2. no callsign or zone is bust by either party.

A counter is maintained for every pair of zones (i.e., 1-1, 1-2, 1-3 ... 40-39, 40-40) and the pertinent counter is incremented once for each distinct QSO between stations in those zones.
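The counting step can be sketched like this (a minimal illustration, assuming the distinct qualifying QSOs have been reduced to (zone1, zone2) tuples; the function name is hypothetical):

```python
from collections import Counter

def count_zone_pairs(qsos):
    """Count distinct QSOs per unordered zone pair (1-1, 1-2 ... 40-40).

    `qsos` is an iterable of (zone1, zone2) tuples, one per distinct
    two-way, bust-free QSO.  Storing each pair with the lower zone
    first ensures that (5, 14) and (14, 5) increment the same counter."""
    pairs = Counter()
    for z1, z2 in qsos:
        pairs[(min(z1, z2), max(z1, z2))] += 1
    return pairs
```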

Separate figures are provided for each band, led by a figure integrating QSOs on all bands. The figures are constructed in such a way as to show the results for both the SSB and CW contests on a single figure. (Any zone pair with no QSOs that meet the above criteria appears in black on the figures.)

It is clear from these figures, as from those for earlier years, that CQ WW is principally a contest for intra-EU QSOs, and secondarily one for QSOs between EU and the East Coast of North America. This format is undoubtedly popular, as CQ WW, in both its SSB and CW incarnations, would seem by any reasonable measure to be the most popular contest of the year. But one does wonder whether there isn't some other format that would more strongly encourage participation from other parts of the world, instead of concentrating activity in these limited areas.

The much-reduced activity from zone 16 in 2022 is clearly visible when one compares these plots to those from, say, 2021.

### Non-Zero Zone Pairs

The activity between pairs of zones in the CW and SSB CQ WW contests over the period from 2005 to 2022 may be usefully summarised in a single figure:

There are 820 possible zone pairs: (z1, z1), (z1, z2) ... (z1, z40), (z2, z2), (z2, z3) ... (z39, z39), (z39, z40), (z40, z40). The above figure shows the number of different zone pairs actually present in the public logs, for each mode and for each year for which data are available, separated on a band-by-band basis and presented in the form of percentages of the maximum possible count (i.e., 820).

The top two lines require some additional explication: the line marked "MEAN" is the arithmetic mean of the results for the six separate bands for the relevant year and mode. The line marked "ANY" is also constructed from the data for the individual bands, but such that a given zone pair need be present on just one (or more, of course) of the individual bands to be included on the "ANY" line.
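The construction of the per-band percentages and of the "MEAN" and "ANY" lines can be sketched as follows (an illustration only; the function and the band-to-pairs mapping are hypothetical):

```python
from itertools import combinations_with_replacement

# All 820 possible unordered zone pairs: (1,1), (1,2) ... (40,40).
N_PAIRS = sum(1 for _ in combinations_with_replacement(range(1, 41), 2))

def pair_percentages(pairs_by_band):
    """`pairs_by_band` maps each band to the set of zone pairs with at
    least one qualifying QSO on that band.  Returns the per-band
    percentages of the 820 possible pairs, the arithmetic mean over
    the bands ("MEAN"), and the percentage of pairs present on at
    least one band ("ANY")."""
    pct = {band: 100 * len(p) / N_PAIRS for band, p in pairs_by_band.items()}
    mean = sum(pct.values()) / len(pct)
    any_pairs = set().union(*pairs_by_band.values())
    return pct, mean, 100 * len(any_pairs) / N_PAIRS
```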

### Half-QSOs Per Zone for CQ WW CW and SSB, 2005 to 2022

A simple way to display the activity in the CQ WW contests is to count the number of half-QSOs in each zone. Each valid QSO requires the exchange of two zones, so we simply count the total number of times that each zone appears, making sure to include each valid QSO only once.
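In code, the count might look like this (a sketch under the assumption that the augmented file has been reduced to one (zone1, zone2) tuple per distinct valid QSO; the function name is hypothetical):

```python
from collections import Counter

def half_qsos_per_zone(qsos):
    """Each distinct QSO contributes one half-QSO to each of the two
    zones it involves (a QSO within a single zone therefore adds two
    half-QSOs to that zone).  `qsos` holds one (zone1, zone2) tuple
    per distinct valid QSO, so no QSO is counted twice."""
    counts = Counter()
    for z1, z2 in qsos:
        counts[z1] += 1
        counts[z2] += 1
    return counts
```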

If we do this for the entire contest without taking the individual bands into account, we obtain this figure:

The plot shows data for both SSB and CW contests over the period from 2005 to 2022. As in earlier posts, I include only QSOs for which both parties submitted a log and neither party bust either the zone or the call of the other party. The black triangles represent contests in which no half-QSOs were made from (or to) a particular zone. By far the most striking feature of this plot is the way in which activity in EU overwhelms that in the rest of the world.

We can, of course, generate equivalent plots on a band-by-band basis:

The activity from zones 14 and 15 so overwhelms these figures that in order to get a feel for the activity elsewhere, we need to move to a logarithmic scale:

The figures speak for themselves.

## 2023-03-17

### arrl.net and the ARRL -- or: How Not to Treat Your Members

(I apologise for the inconsistent size of the font in the main text of this post. I wasted twenty minutes of my life trying to force blogger.com to eliminate most of its auto-generated convoluted HTML so that I could then manually rationalise the look of the text, but I was forced to admit defeat. I refrain, with some difficulty, from ranting about the blogger.com interface for creating blog entries.)

### The arrl.net Service

The February, 1999 issue of the ARRL's journal QST, included the following item (the above link may require membership in ARRL for access) in the Happenings column:

[I note that the ARRL attempts a limitation --

-- that is grossly over-reaching and, in particular, makes no allowance for fair use -- a legal concept that, while a little blurry around the edges, would certainly apply to, as here, the duplication for noncommercial purposes of a small part of one column on one page of a magazine of well over 100 pages.]

Like many active hams, I saw the value of being able to use an e-mail address of the form <callsign>@arrl.net for day-to-day e-mail, and I signed up for the service.

For more than twenty years, the address N7DR@arrl.net has been my usual e-mail address, used for communication with friends, family and many other categories of correspondent.

[In passing, I should note that a couple of people have recently pointed out that the above announcement made no mention of sending e-mail from the arrl.net domain. However, the distinction is rather disingenuous. In 1999, there was no expectation, or even general notion, that a valid e-mail address could not be used as a matter of course for both sending and receiving e-mail. Indeed, even now, it is hard to see the practical purpose of an address that can be used solely for receiving personal e-mail on a day-to-day basis. Had there been any expectation in 1999 that an address would be so limited, it's hard to envisage many members signing up. Or, indeed, that the ARRL would not have included a specific warning not to use the address for sending e-mails. And, in practice, the arrl.net addresses worked perfectly fine from the time of the service's introduction for both sending and receiving communication. And it's also difficult to understand how it could have been otherwise: the technical method that was eventually introduced to implement the limiting of e-mails in this manner was not even introduced by the IETF until the publication of RFC 4408 in April, 2006 -- and even then the RFC was explicitly stated as being experimental (i.e., explicitly not an IETF standard).]

So, for more than twenty years, arrl.net addresses worked as one would expect: e-mails sent to your arrl.net address would be forwarded to a server from which they could be downloaded using whatever protocol(s) that server supported (typically POP3 and/or IMAP); and, if you configured your e-mail program (technically, a Mail User Agent, or MUA) so that the From: header contained your arrl.net address, you had every expectation that the associated e-mail would be delivered correctly by whatever SMTP server one used to relay outbound e-mails.

And then came 2023, and an unannounced policy change by the ARRL (at least, I have not been able to find any warning, explanation, discussion or announcement of the change -- or even an after-the-fact description of it, to say nothing of any kind of motivation for it).

### The Symptom(s)

So what were the symptoms of the problem that arose because of this change?

The first indication of a problem I saw was that an e-mail for a gmail.com user was bounced by that domain's e-mail server. Now, for gmail to bounce a legitimate e-mail is hardly unknown [as far as I can tell, despite the fact that they almost certainly scan the contents of every e-mail they handle, they don't actually use that information for determining whether a given e-mail is "spam" -- technically, what we generally refer to as "spam" is "Unsolicited Commercial E-mail", or UCE]. Historically, gmail would occasionally decide that all e-mails from the server through which I route outbound e-mail are UCE, even though a glance at any of my personal e-mails would have been sufficient to determine that, while they are indeed e-mails (!), they are generally not "unsolicited commercial" e-mails. This would happen perhaps every year or so, usually for a couple of days, before gmail would revert to accepting e-mails from the server. So it was annoying, and demonstrative of sloppy coding, but not really a major issue.

But now the symptom was different. The bounce message from gmail.com said:

  550-5.7.26 The MAIL FROM domain [arrl.net] has an SPF record with a hard fail
550-5.7.26 policy (-all) but it fails to pass SPF checks with the ip:
550-5.7.26 [64.62.234.38]. To best protect our users from spam and phishing,
550-5.7.26 the message has been blocked. Please visit
550 5.7.26 information. g32-20020a9d12a3000000b0068cd21180c7si2618456otg.140 - gsmtp

As usual with bounce messages from gmail, the included link was useless. But the text of the message is sufficient to describe the problem in detail: the SPF record for arrl.net was suddenly indicating that a legitimate e-mail from the arrl.net domain could come only from particular pre-defined servers. First, though, what exactly is an SPF record?

RFC 7208, published in April, 2014, is the IETF standards-track document defining the Sender Policy Framework (SPF). SPF is intended to allow the owner of a domain to place restrictions on the servers from which e-mail claiming to be from that domain may legitimately originate. In other words, a company can use SPF to inform the recipient server whether an e-mail purporting to come from its domain was actually received from a server that is permitted to send e-mails from that domain. Great for a company... but hardly the milieu in which the ARRL's e-mail system operates, as there are hundreds (thousands?) of members legitimately sending their @arrl.net e-mails from a multitude of equally-legitimate SMTP servers, none of which are known to, or under the control of, the ARRL.

### Sender Policy Framework (SPF)

SPF is defined in RFC 7208; a good less-technical explanation of the parts that matter to us may be found here.

The basic idea is that the recipient e-mail server may (there is no requirement that it do this, and many e-mail servers are configured not to do so) issue a Domain Name System (DNS) query against the domain of the e-mail's envelope sender (the MAIL FROM address, which for personal e-mail generally matches the From: header). The response from DNS may include an SPF record (again, there is no requirement that such a record be included in a domain's DNS configuration).

So, suppose that an e-mail server receives an e-mail from N7DR@arrl.net. It may query DNS and receive, as part of the response, the following entry:

arrl.net        text = "v=spf1 include:spf.protection.outlook.com include:pobox.com -all"

This tells the e-mail server that the owner of the arrl.net domain (i.e., the ARRL) has included a record complying with SPF version 1 (as of this writing, that is the only extant version).

Skipping some details, the two "include" fields tell the server to query those domains as well; typically, that query informs the e-mail server that arrl.net emails received from machines in those domains are to be accepted. Unless one happens to be using outlook.com or pobox.com as one's outbound e-mail server, these "include" fields may be ignored for our purposes.
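A record like the one above can be pulled apart mechanically. The sketch below is a simplified parser for illustration only (it handles just the version, "include" mechanisms, and the "all" qualifier; it is nothing like a conforming RFC 7208 implementation):

```python
def parse_spf(record):
    """Split a simplified SPF TXT record into its version string, the
    'include' domains, and the qualifier attached to the 'all'
    mechanism: '-' fail, '~' soft-fail, '+' pass, '?' neutral."""
    terms = record.split()
    version = terms[0]                                   # e.g. "v=spf1"
    includes = [t.split(':', 1)[1] for t in terms if t.startswith('include:')]
    qualifier = None
    for t in terms:
        if t in ('all', '+all', '-all', '~all', '?all'):
            qualifier = t[0] if t[0] in '+-~?' else '+'  # bare "all" means "+all"
    return version, includes, qualifier

# The record published for arrl.net, as quoted above:
version, includes, qualifier = parse_spf(
    "v=spf1 include:spf.protection.outlook.com include:pobox.com -all")
```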

The important part is that last field: -all.

The above-referenced document describes that field well. It says, in part:

"-all " is the part of the record that indicates what is recommended to do if the sending IP address does not match any of the ones in the record. This is determined by whomever publishes the SPF portion to the DNS record, such as the owner of the domain.

So, in our case, this is what the ARRL has decided should be done with @arrl.net e-mails that do not come from outlook.com or pobox.com.

The document continues by describing what that field means:

1. Types of rejection levels:
• -all (reject or fail them - don't deliver the email if anything does not match)
• ~all (soft-fail them - accept them, but mark it as 'suspicious')
• +all (pass regardless of match - accept anything from the domain)
• ?all (neutral - accept it, nothing can be said about the validity if there isn't an IP match)
Most records will have a "~all" listed in the SPF record because the domain owner leaves room for the possibility of a new server getting created and might forget to update the SPF record with the new IP address of that server. This also allows for regular machines to send email without causing too much of an interruption.

Very large domains such as gmail.com have "?all" in their records to leave it up to the recipient to determine what to do with the email when received.

So, since the ARRL has no idea about which machines are originating the e-mails from all their members who use @arrl.net addresses, the one thing they should NOT be doing is to set that value to "-all". As the ARRL really can't say anything about the legitimacy of any particular @arrl.net e-mail "?all" would seem to be the correct value; but "+all" would seem to be a reasonable alternative; less reasonable would be "~all", since a recipient system might well regard anything marked as "suspicious" as to be treated differently from normal e-mail, and there is really no basis for suspicion -- certainly no basis for the ARRL to suggest a priori that it is a suspicious e-mail.

But, as we can see, the ARRL is in fact claiming that @arrl.net e-mails from other than outlook.com and pobox.com are to be summarily rejected.

No wonder, then, that suddenly e-mail servers are (surprise, surprise) rejecting legitimate personal @arrl.net e-mails.

Following the first appearance of a problem sending e-mails to @gmail.com accounts, I sent test e-mails to a number of other domains, and found that results were all over the show.

Some e-mails were delivered as normal. Presumably the destination servers at these addresses did not perform SPF checking.

Some e-mails were delivered, but with subject lines amended so as to indicate that the e-mail was spam, even though the e-mail could not possibly be regarded as being of an unsolicited commercial nature. [Why it is legal for a server to amend the contents of an e-mail subject line is a mystery to me, but apparently it is.] These servers seemed to perform an SPF check, but then ignored the ARRL's explicit instruction to drop the e-mail, instead treating the "-all" in the SPF record as if it were "~all". One can see the argument for this behaviour, even though it is non-conformant.

And some e-mails were simply silently dropped. Which has the sole merit of being in accordance with the ARRL's misguided instructions.

So the end result of the experiment was that, basically, an @arrl.net e-mail may or may not be delivered, may or may not be bounced, and may or may not be treated as spam. In other words, the address becomes, at best, unreliable -- and, in practice, more or less incompatible with ordinary day-to-day use.

So, what does one do in this situation? It turns out that what one does NOT do is to contact the ARRL to try to get them to fix this mess (of their own making).

### Interaction with the ARRL

Well, the niggling concern that there was no reason a priori to associate competence at dealing with circulation matters with that required for handling e-mail issues was quickly proven to be well founded. I'm sure that the person at circulation@arrl.org meant well, but unfortunately the best of intentions are no substitute for competence. A response was quickly forthcoming. Unfortunately, that response consisted of the following:

Thank you for reaching out to us.  I just sent  a test message and it did not
bounce back.  Please confirm receipt of that message in your xxx@xxxx.xxx account.

In other words, to respond to an issue regarding the sending of e-mails, they tested the reception of an e-mail. I explained the issue again, and this time a week went by without response. By this time I had worked my way through the details of the problem as described above, so I sent a new e-mail, suggesting that the problem be put before someone who understood SPF records. In response, I was told merely:

I am following up with IT

with no further details, nor any explanation as to why I had heard nothing in the meantime.

Shortly afterwards, I received an e-mail from ARRL support at the rather peculiar and hardly-confidence-inducing non-ARRL e-mail address: support@22454831.hubspot-inbox.com, to confirm that a support request had been filed and it was being "reviewed". Like the e-mail from circulation@arrl.org, it gave no details as to the contents of the request for support.

So at this point, I was left dangling, with no real confidence that a competent support request had been filed. Nor with whom it had been filed. Nor how I could have filed a support request myself, thereby skipping the step that involved a person at ARRL who, it seemed, did not really understand the issue.

Three days later I received an e-mail from the support address:

Your ticket has been closed. Thanks for contacting our support team.
We hope that your issue has been resolved to your satisfaction.

So: no information about the wording of the original support request (so I wasn't particularly confident that the problem had been reported correctly); no information about what the problem was determined to be; no information about how the problem was resolved; no information about any testing that had been performed to confirm that the issue had been corrected. In fact, not a thing about that brief communication seemed remotely commensurate with the professional competence that one would reasonably expect after reporting a technical issue.

Obviously, the thing to do was to test their "resolution" of the issue. So I sent an e-mail to a gmail account and... surprise, surprise, no change at all. The e-mail bounced, with exactly the same error message as before. Not to my surprise, looking at the SPF entry for arrl.net confirmed that no change had been made.

The e-mail from the support people contained a weird non-sentence that said simply "Request details" on a line by itself. I'm not sure how one is supposed to interpret those two words, but I took them to mean that I could request details of what had been done to resolve the ticket. So I sent an e-mail that included, in the relevant part, this text:

Yes, I request details. You say that the ticket is closed, but provide no
explanation of what you have done, or why you think that something has changed.

I just sent a test e-mail here, and the behaviour is precisely as I reported
before, and the problem remains exactly as it was.

As far as I can tell here, absolutely no change has taken place in the course
of the past week.

I don't intend to be rude, but I have received nothing from the ARRL that
gives me any confidence that anyone there either understands the problem or
how to fix it, despite my providing what I believe was a clear description of
the problem, along with its cause and what needed to be done to eliminate the
problem.

In response to that, I received exactly the same e-mail that they'd sent when the original request for support had been filed from circulation@arrl.org, saying:

Your request has been received and is being reviewed by our support team.

Four days later (you can guess where this is going), I received an e-mail precisely the same as the one I'd received before, saying:

Your ticket has been closed. Thanks for contacting our support team.
We hope that your issue has been resolved to your satisfaction.

And, of course, testing the situation revealed no change whatsoever. So I sent an e-mail to the support address, with a copy to circulation@arrl.org:

Why do I get the impression that there is no human there at all?

1. I requested details as to what action you had taken. You did not provide any.

2. You say: We hope that your issue has been resolved to your satisfaction.
But I already informed you that there was no change in status at all.

I just performed another test, and the problem remains. It is evident that --
if you changed anything at all -- you did not go to the trouble of testing
whatever you changed 🙁

Do I sound frustrated? I am. How do I escalate this to a real live human being
who can actually understand and fix the problem???

I never heard anything further; nor have I to this day.

### The Work-Around

Given the utter lack of meaningful response from the ARRL, the last, desperate remedy is to purchase an account at one of the companies that are permitted by the Sender Policy record to forward e-mail from an arrl.net address. I purchased a basic account at pobox.com ($20 per year), reconfigured my SMTP server to forward the e-mails from my arrl.net address to pobox.com, and, hey presto! suddenly I could send e-mails to all the addresses that had become problematic when I was forwarding through my normal service (I continue to use the original service for all my other (i.e., non-arrl.net) e-mail addresses).

Of course, this will last only until such time as the ARRL unilaterally changes its policy again and removes the "include:pobox.com" field in the SPF record; which, as precedent has shown, is liable to happen without warning or explanation at any time.

How most members using the service were ever supposed to figure any of this out is a mystery. As are so many things about this whole mess; springing immediately to mind, for example, are the questions:

1. Why did the ARRL decide to change the SPF record so as to eliminate all e-mail servers except those run by two particular commercial entities?

2. Who at the ARRL actually made that decision?

3. Why were those particular entities chosen?

4. Why was the change made without warning?

5. Why was the change made without any notification or explanation in QST?

6. Why was no detailed step-by-step how-to guide issued to help members understand what the ARRL has done and how to work around the problems their action has caused?

7. Why is the designated contact person for e-mail issues at the ARRL the person responsible for QST circulation, an entirely unrelated area?

8. Who actually is responsible for support of this service at ARRL?

9. Why is there a way in which ARRL staff can open support tickets, but no method is provided that allows members to do so?

10. Why are members not permitted to see the contents of support tickets filed by staff in response to issues raised by members?

11. Does the technical support actually do anything to fix problems, and, if so, do they test the solution? (Obviously, the answer was "No" to at least the latter question in my case.)

12. Why do e-mails from the technical support staff come from an address that has nothing to do with the ARRL (so that many people might easily automatically filter such e-mails unknowingly to a bit bucket without ever seeing them)?

13. When the technical support staff close a ticket, why do they not describe what they have done and how they have tested the purported solution?

14. Why do the technical support staff simply ignore requests from members for details as to the remediation they have performed when they claim to have fixed an issue?

I note that throughout the whole saga, not even a hint of a vague explanation was provided to me as to why the ARRL precipitated this problem. Maybe there was a legitimate reason. In fact, I hope there was.

(I note that the only reason that springs immediately to my mind is gratuitously pernicious: that they have put themselves in a position to offer a separate and distinct service to members to process outbound arrl.net e-mails -- thereby working around a block that they have themselves emplaced -- and for which they could charge an additional fee.)

I'm afraid that, as a long-time member of the ARRL, I find the combination of incompetence and arrogance displayed by its reaction to a member's legitimate issues with a service that has functioned correctly for more than twenty years to be inexcusable, even if it transpires that there was in fact a good reason for the change in their Sender Policy that precipitated all the problems. From my conversations with other amateurs preceding these events, it is clear that the ARRL has enough problems with its image and perceived competence among rank-and-file members (and non-members), without the need to create more.

## 2023-03-16

### EU QSOs in CQ WW, 2005 to 2022

Mostly because I wondered about the effect of the Russian invasion of Ukraine and the last-minute decision of the CQ WW committee to permit entrants from Russia and Belarus, I took a look at the percentage of QSOs (half-QSOs, really) with the different EU countries over time.

I find the results interesting, not only for clarification of the above issue, but also in regard to the difference in activity from different countries between CW and SSB, and over time:

## 2023-03-01

### Statistics from 2022 CQ WW SSB and CQ WW CW logs

A huge number of analyses can be performed with the various public CQ WW logs (cq-ww-2005--2022-augmented.xz; see here for details of the augmented format) for the period from 2005 to 2022.

As in prior years, there follow a few basic analyses that interest me. There is, of course, plenty of scope to use the log files for further analyses, some of which are suggested by the figures below.

Below are some simple analyses of basic statistics from the logs. The 2022 versions of the contests were, of course, run under unusual circumstances: much of the world was still in thrall to the COVID-19 pandemic; and then there was the unprecedented (at least in modern times) invasion of Ukraine. The CQ WW organisers vacillated on the latter topic. At the time of the contest, logs could be submitted by Russia and Belarus (hereinafter "UA" and "EW" respectively), and QSOs with stations in those countries counted for points. However, this change was announced rather late. And, of course, many people who were active in prior years protested this relaxation of the organisers' prior stance by refusing to take part in this year's contests (or, having taken part, declining to submit a log). So we can expect the data for 2022 to be unlike those for any other year.

### Number of Logs

Until 2020, the raw number of submitted logs for SSB had been relatively flat for several years; the logs submitted for CW showed a fairly steady annual increase. In 2020, unsurprisingly, the number of logs in both modes increased to new records, probably because of the pandemic; CQ WW SSB 2021 set another record; on CW, the number of logs decreased slightly, but would still have been a record were it not for 2020. 2022 was another year of unusual circumstances: not only was the pandemic still in evidence in much of the world, but the Russian invasion of Ukraine, along with the CQ WW committee's vacillation on how to proceed in light of that invasion -- and then the protest against the committee's position as of the contest dates -- was always going to lead to a reduction in the number of submitted logs.

One not infrequently reads statements to the effect that the popularity of contests such as CQ WW has long been increasing. This plot suggests that this claim had not been true for a number of years prior to 2020 (and even when it was true, there are alternative explanations for the year-on-year increase, such as increasing ease of electronic log submission). The circumstances for 2020, 2021 and 2022 have been so unusual that it would seem to be an error to try to predict what will happen in the next two or three years or to discern any reliable pattern based on log submissions from those three years.

### Popularity

By definition, popularity requires some measure of people (or, in our case, the simple proxy of callsigns) -- there is no reason to believe, a priori, that the number of received logs as shown above is related in any particular way to the popularity of a contest, despite non-infrequent conclusory statements to the contrary.

So we look at the number of calls in the logs as a function of time, rather than positing any kind of well-defined positively correlated relationship between log submission and popularity (actually, the posts I have seen don't even bother to posit such a relationship: they are silent on the matter, thereby simply seeming to presume that the reader will assume one).

However, the situation isn't as simple as it might be, because of the presence of busted calls in logs. If a call appears in the logs just once (or some small number of times), it is more likely to be a bust than an actual participant. Where to set a cut-off a priori in order to discriminate between busts and actual calls is unclear; but we can plot the results of choosing several such values.
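The thresholding can be sketched as follows (an illustration only; the function name and the representation of a log as the collection of callsigns it contains are assumptions of mine):

```python
from collections import Counter

def calls_appearing_in(logs, min_logs):
    """Count callsigns that appear in at least `min_logs` separate
    logs.  `logs` is an iterable of logs, each an iterable of the
    callsigns worked in that log; calls below the cut-off are treated
    as probable busts rather than actual participants."""
    appearances = Counter()
    for log in logs:
        appearances.update(set(log))   # count each call at most once per log
    return sum(1 for n in appearances.values() if n >= min_logs)
```

Plotting the result for several values of `min_logs` (1, 2, 5, 10, say) gives the family of curves described above.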

First, for SSB:

Regardless of how many logs a call has to appear in before we regard it as a legitimate callsign, the popularity of CQ WW SSB since the start of the pandemic has surely increased from the doldrums of the prior few years. Whether this contest is more popular than it was at a similar point in the last solar cycle is unclear, but it does seem to have held its own. Vitiating this effect in 2022 is, of course, the reduction in participation that is (presumably) due to the Russian invasion of Ukraine.

[I note that a reasonable argument can be made that the number of uniques will be more or less proportional to the number of QSOs made (I have not tested that hypothesis; I leave it as an exercise for the interested reader to determine whether it is true), but there is no obvious reason why the same would be true for, for example, callsigns that appear in, say, ten or more logs. The interested reader might also consider basing a similar analysis on eXtended Super Check Partial files as created by the drscp program.]

Moving to CW:

On CW, we see that in 2022 the reduction due (presumably) to the Russian invasion of Ukraine has led to the number of active calls being the lowest of all the years for which data are available.

### Geographical Participation

How has the geographical distribution of entries changed over time?

Again looking at SSB first:

The big news is the precipitous drop in entrants from zone 16, with smaller but still considerable drops from zone 15 and zone 14 -- none of which, of course, is surprising. Zone 28 continues to show an increase in the number of logs submitted, to the point where it is now not dissimilar to the number from zone 25. Still, the number of logs from zones outside EU or the US continues to be very small. This can be seen more clearly if we plot the percentage of logs received from each zone as a function of time:

In 2022, entrants from zone 16 dropped from the historical value of around 10% to close to zero. The slack was taken up principally by the US zones and zones 11 and 25.
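The per-zone percentage plot rests on a straightforward calculation, sketched below under the assumption that each submitted log has been reduced to the zone of its entrant; the toy data are hypothetical.

```python
from collections import Counter

def zone_percentages(logs_by_year):
    """For each year, the percentage of submitted logs from each zone.

    `logs_by_year` maps a year to an iterable containing the zone of
    each submitted log for that year.
    """
    result = {}
    for year, zones in logs_by_year.items():
        counts = Counter(zones)
        total = sum(counts.values())
        result[year] = {z: 100.0 * n / total for z, n in counts.items()}
    return result

# Hypothetical toy data: four logs in 2022, half of them from zone 5.
toy = {2022: [5, 5, 14, 16]}
```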

On CW, most zones evidence a sustained long-term increase:

Again we see the expected drop in entries from zone 16, but otherwise trends continue more or less as before, with the relative increase spread fairly evenly across all zones and the percentages of logs from each zone barely changing, except during the pandemic years:

It is, I think, of some interest that the change in participation in zone 28 that is obvious on SSB is only gradually making itself felt on CW. Zone 24 is gradually becoming more common, although it is still far behind the powerhouse that is zone 25.

### Activity

Total activity in a contest depends both on the number of people who participate and on how many QSOs each of those people makes. We can use the public logs to count the total number of distinct QSOs in the logs (that is, each QSO is counted only once, even if both participants have submitted a log).
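The deduplication step can be sketched like this. It is a simplified illustration (a real implementation would also need to allow for timestamps, since the same two stations may work each other more than once per band and mode); the record layout and calls are assumed for the example.

```python
def count_distinct_qsos(records):
    """Count each QSO once, even when both parties logged the contact.

    `records` is an iterable of (band, mode, call1, call2) tuples, one
    per logged QSO line; a two-way QSO appears twice, once from each
    side, so the pair of calls is canonicalised before counting.
    """
    seen = set()
    for band, mode, call1, call2 in records:
        seen.add((band, mode, frozenset((call1, call2))))
    return len(seen)

# Hypothetical example: the 20m QSO between K1AA and G3XYZ appears in
# both logs but is counted only once; the 40m QSO appears in one log.
recs = [("20m", "CW", "K1AA", "G3XYZ"),
        ("20m", "CW", "G3XYZ", "K1AA"),
        ("40m", "CW", "K1AA", "G3XYZ")]
```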

For SSB:

Ignoring the anomalous nature of 2022, the total number of distinct QSOs is essentially the same as at the same point in the last solar cycle.

And for CW:

As on SSB, it is probably best not to read much into the data point for 2022, as the circumstances were so different, particularly in Europe, which is the source of a large proportion of the QSOs in this contest (well, in almost every international contest, really).

To repeat what I said last year: on this mode there continues to be, it seems, a long-lived underlying upward trend (on which the effect of the solar cycle is superimposed), perhaps augmented somewhat by the pandemic in 2020 (but not in 2021 for some reason). Despite the claims I see that CW is an obsolete technology in serious decline, the actual evidence, at least from this, the largest contest of the year, continues to be quite the opposite. (This is a good reminder that when someone makes a claim whose truth is not self-evident, one should examine the underlying data for oneself. I have found that all too often it transpires that no defensible evidence has been put forward for the conclusion being drawn.) The evidence certainly seems to indicate that CW activity is healthy, at least insofar as CQ WW is concerned.

It is worth noting that, during the 2021 running of the SSB contest, it is quite clear that cycle 25 had an impact, whereas a month later on CW conditions had returned to the doldrums.

### Running and Calling

On SSB, the ongoing gradual shift towards stations strongly favouring either running or calling, rather than splitting their effort between the two types of operation, finally appears to have reached some kind of equilibrium. There was essentially no change between 2018 and 2019, and even a (very) slight reversal of the trend in 2020 and 2021. 2022, however, for the first time saw more than 30% of entrants making no run QSOs at all:

I have not investigated the cause of the decrease in the percentage of stations strongly favouring running, although the public logs could readily be used to distinguish possibilities that spring to mind, such as more SO2R operation, more multi-operator stations, and/or a reluctance of stations to forego the perceived advantages of spots from cluster networks.
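One plausible heuristic for separating run QSOs from call QSOs (not necessarily the one used to generate the plots here) is to treat a QSO as a run QSO when the station stayed on essentially the same frequency as for its previous QSO. A hedged sketch, with an invented frequency sequence:

```python
def run_fraction(freqs, tolerance_hz=500):
    """Fraction of a station's QSOs classified as run QSOs.

    Heuristic: a QSO is a run QSO if it is made within `tolerance_hz`
    of the frequency of the previous QSO, i.e. the station stayed put
    and let others call in. The first QSO is never counted as a run QSO.

    `freqs` is a time-ordered list of QSO frequencies in Hz.
    """
    if len(freqs) < 2:
        return 0.0
    run = sum(1 for prev, cur in zip(freqs, freqs[1:])
              if abs(cur - prev) <= tolerance_hz)
    return run / len(freqs)

# Hypothetical: three QSOs around 14.025 MHz (running), then a jump up
# the band to call another station.
freqs = [14_025_000, 14_025_100, 14_024_900, 14_210_000]
```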

On CW, the split between callers and runners continues to be much less bimodal than on SSB (as mentioned above, on SSB fully 30% of entrants have no run QSOs; on CW, the equivalent number is below 10%). Indeed, the difference in call/run behaviour on the two modes (and the difference in the way that the behaviour has changed over time) is profound, and probably worthy of further investigation. CW continues to exhibit what would seem to be a much healthier split between the two operating styles:

### Assisted and Unassisted

We can see how the relative popularity of the assisted and unassisted categories has changed since they were introduced:

On CW, there continue to be more or less equal numbers of assisted and unassisted logs, while on SSB the number of unassisted logs handily exceeds the number of assisted logs. My guess, for what it's worth, is that CW assistance is more widespread partly because it (partially) absolves stations from actually being able to copy at high speed, and partly because the RBN is so effective that essentially all CQing stations are spotted.

I find it particularly interesting that the number of CWU logs has remained essentially unchanged ever since the unassisted category was created.

Looking at the number of QSOs appearing in the unassisted and assisted logs:

(The lines are for the median number of QSOs; the vertical bars run from 10% to 90%, 20% to 80%, 30% to 70%, and 40% to 60%, with opacity increasing in that order.)
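The nested percentile spans drawn as vertical bars can be computed as sketched below; the linear-interpolation percentile and the toy QSO counts are assumptions for the example, not the plotting code itself.

```python
def percentile(sorted_vals, p):
    """Linear-interpolation percentile of a pre-sorted list (0 <= p <= 100)."""
    k = (len(sorted_vals) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def bar_spans(qso_counts):
    """Median plus the nested percentile spans used in the plot.

    Returns the median and the (10, 90), (20, 80), (30, 70), (40, 60)
    percentile pairs, innermost span last.
    """
    vals = sorted(qso_counts)
    spans = [(percentile(vals, lo), percentile(vals, 100 - lo))
             for lo in (10, 20, 30, 40)]
    return percentile(vals, 50), spans

# Hypothetical per-log QSO counts: the integers 1 to 11.
counts = list(range(1, 12))
```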

A long-term downward trend in the numbers of QSOs in the assisted logs ceased in 2016, and since then the median number of QSOs in the assisted logs has remained essentially unchanged. A more or less constant difference of roughly one hundred QSOs between the median CW and SSB logs (in favour of CW) continues.

### Inter-Zone QSOs

We can show the number of inter-zone QSOs, both band-by-band and in total. In these plots, the number of QSOs is accumulated every ten minutes, so there are six points per hour.
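The ten-minute accumulation can be sketched as below, assuming the QSO timestamps have been parsed into `datetime` objects; the sample times are invented.

```python
from collections import Counter
from datetime import datetime

def ten_minute_bins(timestamps):
    """Accumulate QSO counts into ten-minute bins (six points per hour).

    `timestamps` is an iterable of `datetime` objects, one per QSO;
    each is truncated to the start of its ten-minute interval.
    """
    bins = Counter()
    for t in timestamps:
        bins[t.replace(minute=t.minute - t.minute % 10,
                       second=0, microsecond=0)] += 1
    return bins

# Hypothetical: three QSOs, two of which fall in the 12:00-12:10 bin.
ts = [datetime(2022, 10, 29, 12, 3),
      datetime(2022, 10, 29, 12, 9),
      datetime(2022, 10, 29, 12, 14)]
```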

The new cycle is starting. Unfortunately, the CW event suffers by taking place a month later in the year than the SSB event. [I do not understand why the CQ WW committee do not alternate the weekends of the SSB and CW modes; but then, I don't understand a lot of what they do or don't do.]

2022 saw fairly ordinary 15m participation on SSB, probably because of signs of activity on 10m. CW saw more activity, presumably because 10m did not cooperate to the same extent as it did on SSB.

There was much less activity on 20m in both modes, partly because of better conditions on 10m and 15m, but also because of the decrease in activity in general, probably caused by the Russian invasion of Ukraine.

As always, CW dominates on 40m; and, within that mode, intra-EU QSOs further dominate. The effect of the Russian invasion (presumably that is the cause of the decreased activity) made itself felt after the first few hours and persisted through the rest of the weekend.

80m is always dominated by CW; but this year saw what appears to be a record low level of activity, presumably because of the invasion.

160m paints a similar story to 80m, although the raw QSO counts are much lower, and appear to have sunk to a record low.

The overall picture shows the influence of the new solar cycle; but it is clear that the normal ramp-up has not appeared in this cycle, probably due to the invasion.