Just from casual observation at low levels (approx. below S3), the decoder seems to default to a center frequency that appears different from the 500 Hz center of the CWN filter, and this may contribute to the poor decoding at those levels. Unfortunately there's not enough time to check further today... it looks about 50 Hz different.
It's difficult to tell 100% without using a local CW test signal and an attenuator, but subjectively I'd agree with Benson: it does seem to decode slightly better when the CW tone is in the 400-450 Hz range, and less so when it is higher in frequency.
The code was originally hard-wired to a "sidetone" frequency of 750 Hz. I added code to change that value to the center of the current passband so it would track as required. Perhaps that value is not actually the true center frequency of the filter in the program? Or maybe I've made some other mistake.
As a quick test I set up four instances of the decoder with 30 Hz BW centered on approx. 430, 460, 490 & 520 Hz, all decoding 4XZ on 4331 kHz.
Although the messages are coded, it was still possible to see that the two lowest frequencies gave the most consistent decodes as they were almost identical. The two higher frequencies had segments missing and didn't fully match.
I haven't spent any more time to see if the decodes on even lower frequencies are as good or better, but it does indicate that there is definitely a difference.
When you do this, is the signal always at 500 Hz relative to the passband centers? That is, visually, when the pb is centered at 430 (30 Hz wide) it doesn't even contain the signal at 500 Hz?
Or are you changing the passband and signal together so the filter center frequency is simply lower? (lower BFO in effect)
v1.229 has some additions to help diagnose this problem:
The cw extension parameter will lock the frequency of the cw decoder filter and keep it from tracking with changes to the Kiwi passband. E.g. "my_kiwi:8073/?ext=cw,500" will lock the cw filter at 500 Hz.
New ":" notation to specify the passband center frequency in the URL and frequency entry box. So the complete syntax is now:
/pbw
/pb_lo,pb_hi
:pbc
:pbc,pbw
Where pbw is the passband width (Hz), pbc is the passband center frequency (Hz), and pb_lo and pb_hi are the passband limit frequencies (Hz).
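For anyone scripting against a Kiwi URL, the new notation is easy to handle programmatically. Here's a minimal Python sketch of a parser for the four forms above (my own illustration, not the actual Kiwi code):

```python
def parse_passband(spec):
    """Parse the '/' and ':' passband notations described above.

    "/pbw"       -> width only
    "/lo,hi"     -> explicit low/high edges
    ":pbc"       -> center only
    ":pbc,pbw"   -> center and width
    Returns a dict of whichever values the spec provides or implies.
    """
    parts = [float(p) for p in spec[1:].split(",")]
    if spec.startswith("/"):
        if len(parts) == 1:
            return {"pbw": parts[0]}
        lo, hi = parts
        return {"pb_lo": lo, "pb_hi": hi, "pbc": (lo + hi) / 2, "pbw": hi - lo}
    if spec.startswith(":"):
        if len(parts) == 1:
            return {"pbc": parts[0]}
        return {"pbc": parts[0], "pbw": parts[1]}
    raise ValueError("spec must start with '/' or ':'")

print(parse_passband(":425,50"))  # {'pbc': 425.0, 'pbw': 50.0}
```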
Hi agn, OK on the new input parameters, but they're time consuming. What about using the mouse roller to move the settings? I.e. use the zoom '+' button with the right mouse button held and the roller to move pbc, and the zoom '-' button with the right mouse button held and the roller to move pbw. The BFO/width settings would be displayed as usual. Of course this wouldn't work only with the CW extension. Phil
Tried combinations of the above parameters and this time used a GPS disciplined cw rf test signal. My initial suspicion that the CWN filter pass-band and the decoder center frequency did not match was incorrect and caused by a frequency error in the RF signal I used previously.
It seems the decoder has a reasonable tone tracking range for stronger signals down to about -108 dBm. Reducing the signal below that, decoding starts to fail. "Training" can be forced by clicking the threshold correction button, but in my tests that seldom recovers the situation.
Still not a bad result although it would certainly be nice to decode even weaker signals that are still readable on the waterfall. In any case as others already mentioned it is hard to beat morse code processing by our brains, especially when it comes to hand keyed signals with imperfect timing in the presence of noise and other signals.
I used a virtual audio cable and compared the results with Multipsk in CW QRP mode; that program copies down to about -115 dBm. These numbers are with daytime band noise present at 14 MHz. CW Skimmer or MRP40 will probably be even better for weak signals.
That's interesting. I'm not able to perform my own tests at the moment, but I did wonder if enabling the DSP noise reduction on the KiWi in conjunction with the CW decoder would help any further when signals start to get down to near the noise floor?
I had planned to set up eight receive channels spaced at 30 Hz BFO offset intervals with a slight overlap between each, and then run a CW test sequence through a generator with a calibrated output level. It wouldn't simulate atmospheric impulse noise etc., but it would at least be repeatable.
I personally find it easier to use the new keyboard interface. If I know I want the pbc to be 425 Hz I type ":425" (since the frequency entry box is always selected by default) and I'm done. No tedious shift-dragging the yellow passband image around to get it exactly to 425.
I didn't see anything obviously wrong with their implementation. There is the question of the decision threshold "slicer" of the output (deciding if the tone is present or not). And of the sample size fed to the filter which determines bin size and time-resolution. But I've seen the decoder work fine at 50 wpm and the Kiwi passband filter should make up for the bin width being too large.
Several years ago, I was one of the main people behind the code on the mcHF SDR transceiver and I ran across a "problem" with the Goertzel that I'd encountered when I'd used it before: trying to make sense out of the detection value it produced. In this case it was the detection of the subaudible tone transmitted by another station. What I ended up doing was running three Goertzels at a time: one on the desired frequency and two others - one just above and another just below, separated from the desired frequency by a value commensurate with the effective BW of the Goertzel.
By taking the ratio of the on-frequency decoder to the algebraic sum of the two "off" frequency decoders I not only narrowed the effective detection bandwidth, but made detection far more reliable and much faster - particularly in the presence of noise/modulation as the need to determine an absolute detection level became irrelevant. To be sure, it helps to know with certainty the precise frequency to be detected!
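That ratio scheme is straightforward to sketch in Python. This is my own illustration of the idea, not the mcHF code; the 50 Hz offset and 4:1 decision ratio are arbitrary illustrative constants:

```python
import math

def goertzel_power(samples, fs, f):
    """Power of a single frequency bin via the standard Goertzel recurrence."""
    w = 2 * math.pi * f / fs
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def tone_present(samples, fs, f, offset=50.0, ratio=4.0):
    """Three-Goertzel detector: compare the on-frequency power against the
    sum of two off-frequency bins, so no absolute threshold is needed."""
    on = goertzel_power(samples, fs, f)
    off = goertzel_power(samples, fs, f - offset) + goertzel_power(samples, fs, f + offset)
    return on / (off + 1e-12) > ratio

# Quick check with a synthetic 500 Hz tone, 40 ms block at fs = 12 kHz:
fs = 12000
tone = [math.sin(2 * math.pi * 500 * i / fs) for i in range(480)]
print(tone_present(tone, fs, 500))  # tone detected
```

Because the decision is a ratio of bins, level changes from fading or AGC cancel out, which is why it behaves so much better in noise than a fixed slicing level.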
I suspect that this may not be too relevant on this application, but I thought that I'd mention it in the event it was useful for something else.
* * *
One assumption made by the CW decoder that doesn't occur in the real world (unless one is using a straight key, which is comparatively rare - and most people are remarkably stable WPM-wise, when using an SK, anyway) is that it seems to assume that the average person sending will have one hand on their paddle and the other on the speed adjust: In other words, after a run of good copy, a static crash or a bit of QSB will cause the speed to go out into the weeds and strings of characters will be lost.
I would suppose that once the detector has good confidence in that it has the right speed, it should be a bit less wont to change it immediately - but I do know that determining this sort of thing is tricky. Having written CW decoders before (assembly, Z80 was the first) I know how tough it is to come up with something that is even semi-good (this one is at least "semi-good") but go beyond that, so I know full well that this is and always will be a work in progress... just like life.
Thank you again for your very hard work!
73, Clint KA7OEI
(I see that the error count will reset itself after a period of no errors. Might a bit higher threshold with a "leaky bucket" be more realistic?)
Oddly enough, the variable enabling the automatic threshold correction code is called "use_3_goertzels". But that's not what the code does. It's just doing some running decayed averaging before the decision slicing. So maybe performing multiple Goertzels was someone's intent but they just never got there. I like the idea but it assumes the surrounding frequency choices will be quiet. The most obvious failure mode of that is what happens to the band during a contest/rare-DX situation (i.e. wall-to-wall signals).
I had some private messages with Benson and we agreed that it would be useful to display the Goertzel output in a graph just like the S-meter extension so the threshold dynamics could be studied. So I'll try and do that. Also, OOK amplitude slicing is a well known problem and there are lots of algorithms that could be implemented that would likely perform better. Like Martin I did some testing with 4XZ 4331/6007 kHz when it was sending a fixed pattern and S4-5 signal strength. Automatic threshold mode seemed to perform worse in that case.
The comment block at the top of the code says "Read Hand Sent Morse Code (tolerant of considerable jitter)", or "swing" as I guess we'd call it. I have found this to be the case except for the word spacing problem. I've also found that I can tune from one signal being successfully decoded to another at a different wpm and it locks-on fairly quickly without requiring a full retraining.
But static crashes or a noisy band that gets through the narrow passband filtering cause real trouble. There are a number of error correction procedures in the code, but there is only so much that can be done, I suppose. I added the error counter business. Before then, the very first error correction failure would trigger a full re-train, which delayed subsequent output. One simple static crash could cause this, and I thought that was too pessimistic. So the code now waits for 4 error correction failures to occur, with the count being cleared if there is an 8 second period of no errors. These numbers are completely arbitrary.
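The counter logic described here is simple to state precisely. A sketch (the class and method names are mine; the 4-failure and 8-second constants are the ones from the description above):

```python
class RetrainGate:
    """Decide when accumulated error-correction failures should force a
    full decoder retrain. A sketch of the described logic, not the Kiwi source."""
    MAX_FAILURES = 4      # failures needed to trigger a retrain
    CLEAR_AFTER_S = 8.0   # quiet period that clears the count

    def __init__(self):
        self.count = 0
        self.last_error = None

    def error(self, now):
        """Record one error-correction failure at time `now` (seconds).
        Returns True when a full retrain should be triggered."""
        # A long enough quiet period wipes the slate before counting this one.
        if self.last_error is not None and now - self.last_error >= self.CLEAR_AFTER_S:
            self.count = 0
        self.last_error = now
        self.count += 1
        if self.count >= self.MAX_FAILURES:
            self.count = 0
            return True
        return False
```

Clint's "leaky bucket" suggestion would instead decrement the count gradually over time, rather than clearing it all at once after a fixed quiet period.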
Extremely rushed, but v1.230 has a graph of the Goertzel filter output along with the current threshold value (which can also be adjusted). Plotted in dB because it was easier.
The graph is nice and seems to be helpful - thanks!
* * *
There *is* something odd that occurs that is most apparent when using narrow filters to copy CW signals. (I don't think that it has much to do with the CW extension.) As with many oddities, it doesn't *always* happen. Strangely, it seems that reloading the web page will sometimes "fix" this problem, making me wonder where this phenomenon really is in the signal path.
What I hear is a sharp "clicking" in the background audio that follows the signal strength. For example, at key-down of a strong CW signal there are several clicks in quick succession - and then again at key-up. To me it sounds like some sort of stepped AGC system that is operating discontinuously on the audio. It's much more difficult to spot at SSB bandwidth as it seems to be "beyond" the audio filter (e.g. the spectral energy of these clicks is outside the passband of, say, a narrow CW filter). Even on noise, one can hear the random "ticking" in the background as whatever it is seems to be tracking the audio level.
I ran across a very similar-sounding problem when I was coding the "inner" AGC loop on the mcHF (i.e. the AGC loop that scaled the A/D input gain according to the signal levels in the entire 48 kHz A/D passband to minimize clipping and also to keep too few A/D bits from being "lit up" and degrading performance) and I needed to change the scaling on all sorts of things at once (e.g. audio, AGC, S-Meter, etc.). The only way that I could "fix" this clicking (probably from the VGA on the ADC) was to slow the loop response way down (the overall energy in 48 kHz of BW really doesn't change very fast!) - but it could still be heard if one listened carefully to a pure CW note that was varying rapidly in signal level - such as that from a signal generator on which the RF output level was being twiddled.
'Dunno if this is the same sort of effect - but it kinda sounds like it - but it also occurs to me that it could be due to a bit of overshoot in the AGC response causing A/D clipping on the ADPCM (or whatever it is) stream that is sent to the client.
The two signals are AM DSB (like an NDB sending a morse ID).
To read the morse you have to tune using CW to one of the two sidebands spaced 2 kHz either side of each main carrier signal, and not the carrier itself.
I have had to use a low modulation index in order to prevent my audio source from overdriving the signal generator external modulation input which has some ALC at higher levels.
I can modify the test signals as required if folks need any other scenarios setting up.
Here's a quick comparison using one of my test signals of the KiWi CW decoder vs. CwGet and WD6CNF's CW decoder, all running near their limit.
The CW bandwidth is 100 Hz, threshold 42, AGC set to -100 dB / 6 dB slope / 500 ms / no hang. Note that the AGC threshold is the most important setting. I also tried manual gain but this was even less successful. I think the CW decode threshold level has to be maintained fairly closely in order to obtain the best results.
WD6CNF's CW decoder is about the best, with the KiWi second and CwGet in third place.
I tried various combinations of LMS filter settings to see if I could improve the signal level for good decodes, but it seemed to make very little difference.
I also tried varying the CW filter bandwidth and found that anything narrower than 40 Hz or wider than approximately 120 Hz resulted in much worse decodes. 50-60 Hz BW seems to be a good 'sweet spot' and this was also apparent on the threshold / slicer graph.
I'm also a bit concerned that my test source may not be as good as it could be, so I'm currently experimenting with a few different ways of generating the CW sequence, which I hope will reduce some of the remaining unwanted artifacts (such as key clicks) that may be slightly impacting the CW decoding process. Obviously a lot more unwanted noise, fading and other artifacts would be present on off-air signals, but I think it's important to try and minimise these as much as possible for test and comparison purposes.
Some further testing with a new (cleaner) CW source on 4548 kHz.
In order to achieve this I had to adjust the threshold level down to 35 which is not really practical for an off air signal.
The CW bandwidth is 50 Hz, and the AGC settings are also critical as seen before.
AGC threshold set to -100 dB / 6 dB slope / 500 ms / no hang.
I tried using manual gain, but the AGC is required to maintain the correct threshold on off-air (and even on my previous CW test) signals.
I tweaked the RF level down to the point at which I was getting about 50% accurate decodes on the KiWi.
This is now at a level of <<S1 (-124 dBm) and is very much at the limit of the S/N ratio possible with a clean signal.
At this point the WD6CNF CW decoder gave about 90% accurate decodes and CwGet gave zero decodes.
Even a change as small as ~1dB in signal level could in some cases result in 100% or zero accuracy on all the decoders.
Based on this, the threshold setting required to optimise the decoder slice level seems to be the key parameter. Maybe some further automation of this process, to help it track the incoming decoder S/N ratio rather than relying on the KiWi AGC to maintain it at the optimum slice point, would help in this respect?
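One possible shape for that automation: derive the slice level from running estimates of the key-down peak and the key-up noise floor, so it follows the decoder's own S/N instead of depending on the AGC holding the level at one spot. This is purely a sketch of the suggestion, not an existing Kiwi feature; all names and constants are illustrative:

```python
def adaptive_threshold(magnitudes, attack=0.1, decay=0.001):
    """Slice an OOK magnitude stream midway between fast-attack/slow-decay
    estimates of the key-down peak and key-up floor. Sketch only; the
    attack/decay rates would need tuning against real signals."""
    peak = floor = magnitudes[0]
    bits = []
    for m in magnitudes:
        peak += (attack if m > peak else decay) * (m - peak)     # tracks tone level
        floor += (attack if m < floor else decay) * (m - floor)  # tracks noise floor
        bits.append(m > (peak + floor) / 2.0)                    # slice at the midpoint
    return bits
```

Because both estimates move with the incoming level, a slow gain change shifts the slice point along with the signal, while the midpoint keeps the decision centered in the available S/N.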
The bottom line is that given a clean signal and low background noise (no static crashes to pump the AGC up and down) the decoder is capable of good results.
The test KiWi is still running if anyone else wishes to give it a try.
Does anyone know if this decoder incorporates neural network (NN) processing? Yesterday, while demoing the CW decoder to a non-ham NN researcher friend, he suggested a simple NN engine would easily decode the stream perfectly. He sees this as an interesting project, but I have no idea if NN processing is already incorporated here or in other CW decoder applications. Does that seem a worthwhile project?
I will be helping my friend Raul work on this. He needs data sets which consist of wav files and the source text they are transmitting. For data I can use kiwirecord.py to capture streams, but I then need a clean copy of the CW session's source. I can get that on Saturdays from KPH, but can anyone suggest another CW transmitting station where I could find the source sooner than Saturday? I don't want the data set to contain bad source or the NN will be mis-trained.
Actually, NNs need impaired audio but a clean version of the text being sent. The larger the data set the better. I can use KPH and get impaired versions from remote Kiwis, but that would be only one block of clear text and many versions of the received signal.
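One way around the clean-source problem is to synthesize the training pairs: generate the keying from known text and impair the audio afterwards, so the label is clean by construction. A minimal Python sketch (the code table, names and values are my own illustration; a real set would need the full alphabet plus fading, QRM and timing jitter, not just additive noise):

```python
import math
import random

# Tiny demo code table; a real generator would cover the full alphabet.
MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-.", "S": "..."}

def make_training_pair(text, wpm=20, fs=8000, tone=500.0, noise=0.3):
    """Return one (impaired audio, clean text) pair for NN training."""
    n_dit = fs * 60 // (50 * wpm)         # samples per dit (PARIS timing)
    key = []                              # on/off keying pattern in dit units
    for ch in text:
        for sym in MORSE[ch]:
            key += [1] * (1 if sym == "." else 3) + [0]  # element + gap
        key += [0, 0]                     # pad the gap out to a letter space
    samples = []
    for k in key:
        for _ in range(n_dit):
            s = k * math.sin(2 * math.pi * tone * len(samples) / fs)
            samples.append(s + random.gauss(0.0, noise))  # additive channel noise
    return samples, text

audio, label = make_training_pair("TEST")  # 24 dit units -> 11520 samples at 8 kHz
```

This also solves the one-block-of-clear-text issue: each call can impair the same text differently, or key arbitrary new text.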
Actually there are some open source projects around using so-called "deep learning 3D convolutional neural nets" for signal classification and speech recognition.
For audio, 2D or 3D spectrograms of a very large number of training signals are used to be able to identify signals with high confidence. It looks like morse code identification and decoding would be doable if the training set covers sufficiently low SNR cases and a wide variety of QRM/QRN situations.
I looked at some of the projects about half a year ago, but ran out of time to solve all the installation and incompatibility issues using Python, Anaconda and TensorFlow to get going. Interesting stuff. Here are a few links for those interested. There is a lot more....
OK thanks, that fixed it.
Regards,
Martin - G8JNJ
Or are you changing the passband and signal together so the filter center frequency is simply lower? (lower BFO in effect)
I'm changing the tuning and moving the filter passband edges to match so that they are +/- 15Hz relative to the new center frequency.
So I'm changing the BFO frequency.
Regards,
Martin - G8JNJ
I have a test setup running on http://g8jnj.proxy.kiwisdr.com:8073/ with two AM signals being modulated by a CW recording.
TEST TEST TEST 1234567890 THE QUICK BROWN FOX JUMPED OVER THE LAZY DOG
The two signals are at different levels:
3598.00 kHz @ S3
4548.00 kHz @ S5
I can get good decodes (just) on 4548.00 with 50 Hz BW and a threshold of 42, but 3598.00 is too weak.
I'll leave it running so please feel free to use it for your own tests as required.
Regards,
Martin - G8JNJ
It was a quick lashup to test the idea.
Regards,
Martin - G8JNJ
http://g8jnj.proxy.kiwisdr.com:8073/?f=4548.00cwnz14
Ignore any other signals and just use 4548 kHz.
Regards,
Martin - G8JNJ
It's all just timing-based. A little bit of smarts with error correction, but definitely no NN type stuff.
http://dataconomy.com/2017/04/history-neural-networks/
https://github.com/randaller/cnn-rtlsdr
https://www.rtl-sdr.com/deep-learning-neural-network-based-signal-identification-software-for-the-rtl-s
Ben.