On 2016-01-04 14:10, Paul-Erik Törrönen wrote:
>> The D<n> command is multiple bytes, right?
> Yes. As I wrote, I tried the single-char commands from the terminal software, and they did not work.
>> I think you are trying to re-use a bit too much from the backend that you used as an example. How best to read the response depends on how the protocol works: is the length fixed or variable, do we know it in advance or not, etc.
> Ok. As I may have mentioned in the beginning, this is totally unknown territory for me. From my understanding, though, the code logic should be pretty simple: we make some base assumptions about the buffer size, then attempt to read it full, and if the device sends less data, we shrink the buffer to match the actual amount.
A very important question is: how do you know the response has been received completely? Some communication protocols use fixed-size packets, so you know the length in advance. If the length is variable, then very often it's communicated somewhere in the first bytes of the response. And if that's not the case, then you have to rely on a timeout (e.g. you assume the response is complete if no byte arrives within a certain time). That's also the most error-prone method, because you can't distinguish a complete response from a timeout caused by some error.
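As a rough illustration, the timeout-based variant could look something like this in plain POSIX C (the helper name, the 500 ms value used later, and the buffer handling are assumptions for the sketch, not code from any existing backend):

    #include <stddef.h>
    #include <sys/select.h>
    #include <unistd.h>

    /* Read up to 'size' bytes, stopping as soon as no byte arrives within
     * 'timeout_ms'. Returns the number of bytes actually received. This is
     * the error-prone variant: a short response and a slow device look
     * exactly the same. */
    static size_t
    read_with_timeout (int fd, unsigned char *data, size_t size, int timeout_ms)
    {
        size_t nbytes = 0;

        while (nbytes < size) {
            fd_set fds;
            FD_ZERO (&fds);
            FD_SET (fd, &fds);

            struct timeval tv;
            tv.tv_sec = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;

            /* Wait until at least one byte is available, or the timeout expires. */
            int rc = select (fd + 1, &fds, NULL, NULL, &tv);
            if (rc <= 0)
                break; /* Timeout (or error): assume the response is complete. */

            ssize_t n = read (fd, data + nbytes, size - nbytes);
            if (n <= 0)
                break;
            nbytes += (size_t) n;
        }

        return nbytes;
    }

The fixed-size and length-prefixed cases are easier: you simply keep reading until the known number of bytes has arrived.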
> Going back here: when I successfully get the universal to send the request byte (even this seems to be unstable at times), it does receive what seems to be an appropriate response.
I'm not sure if this is relevant, but I've noticed that many dive computers need some time between setting up the serial line and starting the communication by sending bytes. If you look at the other backends, you'll notice many sleep calls with delays of several hundred milliseconds. So that's certainly worth trying.
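Something as simple as this right after opening and configuring the port can make the difference (purely illustrative; the 300 ms value is a guess and differs per device):

    #include <unistd.h>

    /* Give the dive computer some time to settle after the serial port has
     * been opened and configured, before the first byte is sent. */
    usleep (300 * 1000); /* 300 ms, illustrative; tune per device. */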
> But I think I understand your point: it could be that the chipset on the USB cable pretending to be a serial device has some assumptions about how the client reads from it, and this has to be matched by the client software?
Probably not the usb-serial chipset, but the dive computer software (or hardware) behind it. The usb-serial chipset is used in many different applications. It only needs to care about being compatible with RS-232 on one side and USB on the other side. But the dive computer behind it can do whatever it wants.
> FWIW, is it meaningful that the portmon log seems to indicate single-byte reads most of the time, or is it just an artifact of how portmon logs the communication?
Portmon has nothing to do with this. It's the application that is reading single bytes.
> Here's a snippet of the portmon log (I grepped for IRP_MJ for clarity):
> 87  0.00000698 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 3: 50 50 50
> 92  0.00001509 ProLink2010oct. IRP_MJ_CLEANUP Silabser0 SUCCESS
> 93  0.00320152 ProLink2010oct. IRP_MJ_CLOSE   Silabser0 SUCCESS
> 94  0.02598515 ProLink2010oct. IRP_MJ_CREATE  Silabser0 SUCCESS Options: Open
> 121 0.00000503 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 50
> 130 0.00000531 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 50
> 139 0.00000531 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 50
> 145 0.00093783 ProLink2010oct. IRP_MJ_WRITE   Silabser0 SUCCESS Length 1: 4D
> 155 0.00000559 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 64
> 164 0.00000363 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 0D
> 173 0.00000363 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 0A
> 182 0.00000363 ProLink2010oct. IRP_MJ_READ    Silabser0 SUCCESS Length 1: 76
> 0x50 equals P and seems to be the standby byte sent periodically.
> So this translates to:
>
> read: PPP
> read: P
> read: P
> read: P
> send: M
> read: d
> read: \r
> read: \n
> read: v
> ...
This is right after opening the serial port, so it could be that the dive computer is telling you "I'm ready to receive commands". Maybe it keeps sending this byte until it receives a command? This might also be the reason why the communication is unstable: if you try to send too fast, the dive computer isn't ready yet, and your command might get lost.
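If that's what is happening, the handshake could be made more robust by draining the standby bytes first and only then sending the command. A rough sketch, reusing the read_with_timeout() helper from above (the standby byte 0x50 and the command byte 0x4D, 'M', come straight from your log; the 500 ms timeout and the reply handling are assumptions):

    #include <unistd.h>

    /* Wait for the periodic standby byte (0x50, 'P'), then send a single
     * command byte and collect the reply. */
    static ssize_t
    send_command (int fd, unsigned char cmd, unsigned char *answer, size_t size)
    {
        unsigned char byte = 0;

        /* Drain input until the "I'm ready" standby byte shows up. */
        do {
            if (read_with_timeout (fd, &byte, 1, 500) != 1)
                return -1; /* Nothing arrives: device not ready. */
        } while (byte != 0x50);

        /* Send the command byte. */
        if (write (fd, &cmd, 1) != 1)
            return -1;

        /* Read the reply, relying on the timeout to detect the end. Note
         * that standby bytes already in flight may still precede the real
         * answer; a robust version would skip leading 0x50 bytes here too. */
        return (ssize_t) read_with_timeout (fd, answer, size, 500);
    }

For the session in your log, that would be something like send_command (fd, 0x4D, answer, sizeof (answer)).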
Jef