On Thu, Aug 10, 2017 at 10:57 AM, Jef Driesen <jef@libdivecomputer.org> wrote:
On 2017-07-15 22:39, John Van Ostrand wrote:
For previously supported Cochran computers high-speed read of
log and profile data started at byte 0. Older models that lack the
high-speed transfer function use the standard speed read commands
and so the log and profile data are read at higher addresses.

I don't really understand the reason for this change. With this change you are only downloading the memory area containing the logbook and profile ringbuffers, and not a full memory dump. Can you explain why you changed this?

I figured I should start staging my changes as smaller patches that are easier to review, in a way that keeps the tree building cleanly at every commit. This patch is a staging step intended to pave the way for Commander TM support. It also fixes an assumption I made in _device_dump.

Prior to this patch, the read commands that fetched the log and profile data always started at address 0: the log data began at 0 and the profile data followed immediately after, so layout->rb_logbook_begin was 0x0 for every supported computer. In _device_dump I should have used that variable, but I hardcoded a 0 instead. Using the variable makes more sense.

In effect, this patch doesn't change what the dump contains for the computers supported so far: a dump taken after the patch matches one taken before it.

As for downloading *all* memory, I assumed the dump was never meant to cover the full address space, only the log and profile data.

There may be two reasons I thought device_dump was only intended to dump the user data, i.e. the logbook and profile data. First, prior to my patches currently under review, the code used what I now realize is a high-speed read function, which seems intended to download only the log and profile data, probably because of their large size. I was stuck on the older Commander TM models because the high-speed download commands didn't work on them, and I had no reference for how to download the data (i.e. no vendor program to observe). So I recently decided to brute-force it: I took the DC apart, obtained the IC specs, and tried a wide range of commands. It turned out that the read commands I had been using to access some data on the newer models were generic low-speed reads, and I could use them to access all memory areas on the older computer. Because the SRAM held randomized logbook and profile data, it took a while to find the user data; it starts at 0x10000.

I could read and dump all the data on the old computer, but that would also include 32K of RAM and the ROM, which I figure would only clutter the dump without helping the user much. The RAM, being RAM, changes constantly.

That said, I've started experimenting with a simulator that works from a full memory dump, so there might be some benefit to changing this. That simulator would be simpler and might be more robust when exercised by a less predictable program like the vendor's software.

--
John Van Ostrand
At large on sabbatical