RFC: add larger pagesize read to Atom2 backend

Jef Driesen jef at libdivecomputer.org
Sun Oct 26 12:36:53 PDT 2014


On 19-10-14 17:14, Dirk Hohndel wrote:
> So after all the back and forth I'm not 100% sure where we wanted to end
> up with the larger pagesize read support.
>
> Here's something really simple and straightforward (I think) that
> should not impact any of the other backends (unless there's a bug in my
> code, always a possibility).
>
> It should be trivial to extend this for other models that have a 128 byte
> pagesize and support unaligned reads.
>
> It doesn't mess with the multipage code, and it doesn't change any of the
> calling code to do alignment for us - so on average this will use about
> twice the number of reads that better-optimized code would use, but still
> only about 1/8 of the reads that the current code would use.
>
> Let me know what you think.
>
> And yes, this turns into patch #5 after the four existing A300CS patches
> and should be applied together with the four of them, in case you like it.

I'm fine with the general idea behind your patch, but I wonder why you moved all 
4 settings (pagesize, crc_size, read_cmd and alignment) into the device 
structure. These are all related in the sense that using the Bx command requires 
matching values for pagesize, crc_size and alignment. It's not that you can 
choose them independently. Thus to enable the bigpage code for a device, you now 
need to set all 4 fields. I think it will be easier to have just one "bigpage" 
field that you need to set to 16 (for the B8 command) or 8 (for the B4 command), 
and then have the code automatically pick the corresponding alignment and 
checksum size. See the first attached patch. This just combines your patch with 
some earlier code from you, so feel free to stick your name on it.
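Roughly what I have in mind is something like the sketch below. This is 
illustrative only, not the attached patch itself: the helper name, the struct 
fields and the 0xB1 command byte for the classic read are assumptions, so 
double-check them against the real code (0xB4 and 0xB8 are the commands 
mentioned above).

#define PAGESIZE 16

/* Derive the read command, packet size and checksum length from a
 * single "bigpage" multiplier (1, 8 or 16). Sketch only; the function
 * name and the 2 byte checksum for bigpage reads are assumptions. */
static void
oceanic_atom2_get_layout (unsigned int bigpage,
    unsigned char *cmd, unsigned int *size, unsigned int *crc_size)
{
    if (bigpage == 16) {
        *cmd = 0xB8;      /* 16 pages = 256 bytes per read */
        *crc_size = 2;    /* assumed: bigpage reads use a 2 byte checksum */
    } else if (bigpage == 8) {
        *cmd = 0xB4;      /* 8 pages = 128 bytes per read */
        *crc_size = 2;
    } else {
        *cmd = 0xB1;      /* classic single page read (assumed command byte) */
        *crc_size = 1;
    }
    *size = bigpage * PAGESIZE;
}

With something like that, enabling the bigpage code for a device is just a 
matter of setting bigpage to 8 or 16, and everything else falls out 
automatically.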

One thing I don't understand is how this reduces the number of reads. For every 
16 byte request, you now read 256 bytes and discard the excess bytes. Since the 
calling code still issues one request per 16 byte page, each request still maps 
to one (larger) read, so as far as I can see this doesn't reduce the number of 
reads at all?

Anyway, the reason why I like the idea is that this code allows for a very 
simple cache implementation. Instead of caching the entire memory range, we can 
just cache the last "bigpage". See the second patch for a proof of concept.
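The proof of concept boils down to roughly the code below. Again this is a 
simplified sketch, not the attached patch: the oceanic_atom2_read_page helper 
and the cache/cached/cached_address/bigpage fields are illustrative stand-ins.

#include <string.h>

/* Serve a small read from a one-entry cache holding the last bigpage.
 * Sketch only: the helper and the struct fields are assumptions. This
 * also assumes callers only request single 16 byte pages, which by
 * construction never straddle a bigpage boundary. */
static dc_status_t
oceanic_atom2_read_cached (oceanic_atom2_device_t *device,
    unsigned int address, unsigned char data[], unsigned int size)
{
    unsigned int pagesize = device->bigpage * PAGESIZE;
    unsigned int base = (address / pagesize) * pagesize;

    if (!device->cached || base != device->cached_address) {
        /* Cache miss: fetch the whole bigpage containing the address. */
        dc_status_t rc = oceanic_atom2_read_page (device, base,
            device->cache, pagesize);
        if (rc != DC_STATUS_SUCCESS)
            return rc;
        device->cached_address = base;
        device->cached = 1;
    }

    /* Serve the request from the cached bigpage. */
    memcpy (data, device->cache + (address - base), size);
    return DC_STATUS_SUCCESS;
}

That way consecutive 16 byte requests into the same bigpage cost only one 
actual transfer, which is where the reduction in reads really comes from.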

I think something like this should address everything we need: a simple 
implementation that is also very efficient. What do you think?

I also added a third patch to pass the checksum length explicitly to the 
oceanic_atom2_transfer function instead of relying on some magic packet size. 
That makes the code a bit more readable if you ask me.
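In other words, instead of inferring the checksum length from the packet size 
inside the function, the caller states it explicitly. The signatures below are 
illustrative, not the exact diff:

/* Before: the checksum length was derived from the packet size. */
static dc_status_t
oceanic_atom2_transfer (oceanic_atom2_device_t *device,
    const unsigned char command[], unsigned int csize,
    unsigned char answer[], unsigned int asize);

/* After: the caller passes the checksum length explicitly,
 * e.g. 1 for the classic commands, 2 for the bigpage reads. */
static dc_status_t
oceanic_atom2_transfer (oceanic_atom2_device_t *device,
    const unsigned char command[], unsigned int csize,
    unsigned char answer[], unsigned int asize,
    unsigned int crc_size);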

Jef
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-bigpage.patch
Type: text/x-patch
Size: 3811 bytes
Desc: not available
URL: <http://libdivecomputer.org/pipermail/devel/attachments/20141026/12b51fbb/attachment.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0002-cache.patch
Type: text/x-patch
Size: 3051 bytes
Desc: not available
URL: <http://libdivecomputer.org/pipermail/devel/attachments/20141026/12b51fbb/attachment-0001.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0003-checksum.patch
Type: text/x-patch
Size: 3510 bytes
Desc: not available
URL: <http://libdivecomputer.org/pipermail/devel/attachments/20141026/12b51fbb/attachment-0002.bin>

