On Wed, Nov 5, 2014 at 8:00 AM, Jef Driesen <jef@libdivecomputer.org> wrote:
> This is mainly a matter of personal preference.
That's one way of looking at it. It's an odd way.
Compact code without unnecessary syntactic overhead is generally considered universally a good thing. Not "personal preference". Sure, if your employer ends up giving bonuses by number of lines written, it's a bad thing, so I guess you can call it "personal preference" at some level, but people actually design whole programming languages on the principle that concise representation of the problem is a good thing.
No, C is not one of those concise languages. With great power comes great number of lines, paraphrasing uncle Ben. C has all that syntactic noise for doing almost everything manually and describing the solution in sometimes very tedious detail indeed.
So conciseness is not the only - or even the primary - issue, but when you have two solutions that are otherwise completely semantically identical, the concise solution without unnecessary syntactic sugar is generally considered the superior one. That is *not* some subjective thing. That's a very objective statement.
It's also - despite all the manual work you have to do in C - how the C language and its base runtime are designed. Compare C to contemporary languages that did similar things, and notice how C comes from a background of being fairly information-dense. You can use assignments as part of expressions, and a lot of library functions return redundant information because that often helps make the expressions involving them denser, etc etc. So even C - despite being a very verbose language that requires the programmer to specify every little detail manually - actually tries to be fairly dense in other ways.
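Think of the classic read loop, where the assignment, the function call and the loop test all live in one expression (just an illustration of the idiom, not code from this thread):

    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* The assignment to 'n' happens right inside the loop
         * condition, and read()'s return value doubles as both the
         * byte count and the termination test. */
        while ((n = read(0, buf, sizeof(buf))) > 0)
            write(1, buf, n);
        return 0;
    }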
Dense, information-rich code is better code.
Sure, you can take it a bit *too* far, and make it so dense that it's unreadable (APL, traditional dense "write-only" perl code, C obfuscation contest etc), but my examples certainly didn't do that. I didn't try to cram things onto a single line, for example.
But I definitely stand by my point: "designing things so that they are unnecessarily verbose" is universally considered a *bad* thing.
> This is getting even more off topic, but there is one last thing I'm just curious about. When you say that returning -errno is superior to returning -1 and a separate errno, I absolutely agree. But isn't that somewhat contradictory with your preference for using the report_error() function? With report_error(), you can only return -1, and not some more descriptive errno value. Unless you also pass the errno as a parameter:
Correct. But if I return "-errno", I do it because I expect the *caller* to report the error. So I expect the caller to do something like
    if (rc < 0) fprintf(stderr, "Could not open file %s: %s\n", filename, strerror(-rc));
(the above ignores the fact that there may be several layers of functions that just return the error code back up - only the "top-most" caller generally does the actual error reporting, and the "direct" caller often just returns the error number without any commentary on it).
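Spelled out, the shape I have in mind is roughly this (all the function and file names below are made up for the sketch):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Lowest layer: returns the fd on success, -errno on failure. */
    static int open_log(const char *filename)
    {
        int fd = open(filename, O_RDONLY);
        if (fd < 0)
            return -errno;
        return fd;
    }

    /* Middle layer: no commentary, just pass the error back up. */
    static int load_log(const char *filename)
    {
        int fd = open_log(filename);
        if (fd < 0)
            return fd;
        /* ... parse the file here ... */
        close(fd);
        return 0;
    }

    int main(int argc, char **argv)
    {
        const char *filename = argc > 1 ? argv[1] : "dives.xml";
        int rc = load_log(filename);

        /* Only the top-most caller actually reports. */
        if (rc < 0)
            fprintf(stderr, "Could not open file %s: %s\n",
                    filename, strerror(-rc));
        return rc < 0;
    }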
So that is the common interface for library routines. They return error codes so that the callers ("the real code") can decide how - and whether - reporting an error is appropriate at all. Maybe the caller doesn't even consider the "error" to be an error at all - the caller just wanted to first *try* to do something, and if that fails, it has a different fallback. See? That's the normal thing a library would do: not enforce any logging model at all, because not all errors should necessarily even be logged in the first place.
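Reusing the open_log() from the sketch above, such a caller could look like this (both paths are made up):

    /* Try the user's own log first, and silently fall back to a
     * default.  Nothing gets logged here, because from this caller's
     * point of view a missing user file simply isn't an error. */
    static int open_any_log(void)
    {
        int fd = open_log("/home/user/dives.xml");
        if (fd < 0)
            fd = open_log("/usr/share/divelog/default.xml");
        return fd;
    }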
So you can do two models:
(a) return an error code that contains the error information (it is a *negative* error, because positive or zero contains information about the successful case)
This is the normal C library model.
(b) do the whole logging thing, and then just return the fact that errors happened (again, negative error, because the non-error case wants to return data too)
This is obviously what you do
but doing *both* (a) and (b) is just redundant. If you log an error message, you might as well just say "-1". You've already done the descriptive thing, much more so than some individual error code.
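To put the two models side by side (the report_error() stub below is my own stand-in - I'm assuming a printf-style function that logs the message and returns -1, which is how it has been used in this discussion):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for report_error(): logs the message and returns -1. */
    static int report_error(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        return -1;
    }

    /* Model (a): return the error information, and let the caller
     * decide whether and how to report it. */
    static int open_dive_a(const char *path)
    {
        int fd = open(path, O_RDONLY);
        return fd < 0 ? -errno : fd;
    }

    /* Model (b): report right here, and return only the fact that it
     * failed.  Returning a specific error code on top of this would
     * tell the caller nothing the log line didn't already say. */
    static int open_dive_b(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return report_error("Could not open %s: %s",
                                path, strerror(errno));
        return fd;
    }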
Also, quite frankly, while the whole "-errno" thing is how we do things inside the kernel, and while I think it's the superior model, in user space I would generally do the whole "-1" thing, just because it's what people are more used to. That's especially true since you would tend to have to mix your library code with *other* library code, and the -1 model is the norm.
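That "-1 plus errno" convention is what the standard system calls already follow:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* The traditional user-space convention: -1 on failure, with
         * the detail left in errno for whoever wants to look at it. */
        int fd = open("no-such-file", O_RDONLY);
        if (fd < 0) {
            fprintf(stderr, "open: %s\n", strerror(errno));
            return 1;
        }
        close(fd);
        return 0;
    }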
In the kernel, we don't have that issue. We don't use random libraries. We _can't_ use random libraries. Even when we end up using interfaces that *look* like standard libraries, we have to reimplement them by hand (or rely on the compiler just doing it automatically for us). So the kernel can use the more efficient "-errno" model without confusion.
Linus