Message ID: cover.1739674648.git.herbert@gondor.apana.org.au
Series: Multibuffer hashing take two
On Tue, Feb 18, 2025 at 06:10:36PM +0800, Herbert Xu wrote:
> On Sun, Feb 16, 2025 at 11:51:29AM -0800, Eric Biggers wrote:
> >
> > But of course, there is no need to go there in the first place.  Cryptographic
> > APIs should be simple and not include unnecessary edge cases.  It seems you
> > still have a misconception that your more complex API would make my work useful
> > for IPsec, but again that is still incorrect, as I've explained many times.  The
> > latest bogus claims that you've been making, like that GHASH is not
> > parallelizable, don't exactly inspire confidence either.
>
> Sure, everyone hates complexity.  But you're not removing it.

I'm avoiding adding it in the first place.

> You're simply pushing the complexity into the algorithm implementation
> and more importantly, the user.  With your interface the user has to
> jump through unnecessary hoops to get multiple requests going, which
> is probably why you limited it to just 2.
>
> If anything we should be pushing the complexity into the API code
> itself and away from the algorithm implementation.  Why?  Because
> it's shared and therefore the test coverage works much better.
>
> Look over the years at how many buggy edge cases such as block
> left-overs we have had in arch crypto code.  Now if those edge
> cases were moved into shared API code it would be much better.
> Sure it could still be buggy, but it would affect everyone
> equally and that means it's much easier to catch.

You cannot ignore complexity in the API, as that is the worst kind.  In
addition, your (slower) solution has a large amount of complexity in the
per-algorithm glue code, making it still more lines of code *per
algorithm* than my (faster) solution, which you're ignoring.  Also,
users still have to queue up multiple requests anyway.  There are no
"unnecessary hoops" with my patches -- just a faster, simpler, easier to
use, and less error-prone API.

> Memory allocations can always fail, but they *rarely* do.
> Resolve
> the OOM case by using a stack request as a fallback.

Fallback paths that only run in extremely rare OOM situations, and that
won't be covered by xfstests?  No thank you.  Why would you even think
that would be reasonable?

Anyway, I am getting tired of responding to all your weird arguments
that don't bring anything new to the table.  Please continue to treat
your patches as nacked and don't treat silence as agreement.  I am just
tired of this.

- Eric