
Talk:SHA-3/Archive 1

From Wikipedia, the free encyclopedia

Presentation of the Block Permutation


The presentation of the block permutation differs from that in the reference. The mapping between bits in the state and the matrix is specified on page 8 as follows:

"The mapping between the bits of s and those of a is s[w(5y + x) + z] = a[x][y][z]."

The Wikipedia page basically switches the first and second coordinates:

"Let a[i][j][k] be bit (i×5 + j)×w + k of the input,[..]"

The following description of the algorithm is correct, but confusing for those comparing the reference, the implementation guidelines and other sources. I think it would be helpful to stay closer to the reference in this regard, especially as I don't see an advantage in presenting it with the coordinates switched.

If there is no sign of disagreement, I will come back and change the section accordingly. — Preceding unsigned comment added by Deejaydarvin (talkcontribs) 10:12, 25 June 2013 (UTC)[reply]
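(For readers comparing the two conventions side by side, a minimal sketch in Python; the helper name and the choice of w = 64 are just for the example, not anything from the specification:)

    # Reference convention from page 8 of the submission:
    #   s[w*(5*y + x) + z] = a[x][y][z]
    w = 64  # lane size for Keccak-f[1600]

    def flat_index(x, y, z):
        # Position in the flat 1600-bit state s of bit a[x][y][z].
        return w * (5 * y + x) + z

    # The article's current wording instead takes bit (i*5 + j)*w + k of the
    # input as a[i][j][k], i.e. it swaps the roles of the first two coordinates.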

Requested move

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was moved. --BDD (talk) 19:19, 11 October 2012 (UTC) (non-admin closure)[reply]

Keccak → SHA-3 – Now that Keccak is the official SHA-3 algorithm, this article should be moved to SHA-3 (and perhaps recreate Keccak as a redirect to SHA-3 if it's felt warranted.) moof (talk) 16:56, 4 October 2012 (UTC)[reply]

Support, just like Rijndael redirects to Advanced Encryption Standard (and not Advanced Encryption Standard process) -- intgr [talk] 17:01, 4 October 2012 (UTC)[reply]
Support move. @moof: A move will automatically leave a redirect from Keccak. Nageh (talk) 12:35, 5 October 2012 (UTC)[reply]
Support. SHA-3 will become the much more commonly used name for this algorithm, like AES. Make Keccak a redirect here, and include it in the history as the origin of SHA-3 —fudoreaper (talk) 06:09, 9 October 2012 (UTC)[reply]
I was just about to suggest this. BrokenSegue 21:14, 10 October 2012 (UTC)[reply]
The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.


Not yet finalized


Update: SHA-3 was added to the Secure Hash Standard by NIST today. (http://www.nist.gov/customcf/get_pdf.cfm?pub_id=919060) — Preceding unsigned comment added by Rbrightwell (talkcontribs) 17:50, 5 August 2015 (UTC)[reply]

The SHA-3 standard does not exist yet: the Secure Hash Standard (SHS) has not yet been updated. The only thing which is 100% sure is that SHA-3 will be based on Keccak. This fact was pointed out by the Keccak authors at FOSDEM 2013 (https://fosdem.org/2013/schedule/event/security_keccak/).

At what time of the video do they make that statement? I am watching, but the video is pretty long. —fudoreaper (talk) 03:07, 13 February 2013 (UTC)[reply]
Ha, i just found it. 40:45 is the time when he mentions this clearly. We may need to modify this article then... —fudoreaper (talk) 03:19, 13 February 2013 (UTC)[reply]
I tried to see if Wikipedia has a template for upcoming standards or similar but couldn't find one. If such a template doesn't exist then perhaps something along the lines of:
As of [date] NIST hasn't yet published the final SHA-3 specification. Contents of this article are subject to change once the final standard is published. Woupsi (talk) 22:06, 13 February 2013 (UTC)[reply]
Yes, something like this should be clearly stated in the beginning. What happened was the article called Keccak was moved to SHA-3, so a lot of the text comes from the days it was only talking about Keccak. Go ahead and make some changes! —fudoreaper (talk) 08:12, 19 February 2013 (UTC)[reply]
Updated the article to not mention any particular variants like "SHA3-256", because the standard is not published, and so it is not final! -- Sverdrup (talk) 16:04, 18 February 2013 (UTC)[reply]

news on finalization https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE 80.98.89.22 (talk) 22:04, 27 August 2013 (UTC)[reply]

Can we delete the rhash sample data? SHA-3 is not standardized and the trickle of changes during the standardization process is changing the test values; moreover, the current text suggests they're from a non-standard rhash utility. — Preceding unsigned comment added by 216.113.160.81 (talk) 00:52, 6 December 2014 (UTC)[reply]

I think it could be reasonable to remove the RHASH algorithm. I changed it, because when researching the algorithm, I found this example only matched the FIPS standard and the competition output on an empty string. The original block simply gave the examples as if there was no difference. I could give sample data to prove the point of the small change for the FIPS standard, but I'm not aware of it being published, so I hesitate to do so. I think that is why the RHASH algorithm has become so popular, because they have a far greater list of example inputs and outputs than the FIPS examples that consist of a few bits. I've already seen the RHASH algorithm popping up in other applications, probably because it is easier to test. It may be useful to instead have a section that shows the difference between the standards, the problem still exists however that the only published input example that is the same between all three is the empty string, and that just happens to be the worst example to use. It will be interesting if early adopters of SHA-3 mostly get it wrong, simply because RHASH got it wrong. The fact that it is wrong seems to be important.74.200.48.5 (talk) 14:35, 1 May 2015 (UTC)[reply]

seconded. examples should be kept to a minimum, and only official values, the latest draft in our case. Krisztián Pintér (talk) 14:39, 1 May 2015 (UTC)[reply]
removed the RHASH. also the "keccak" examples, unclear what they were. kept only the standardized ones, plus added the obligatory avalanche showcase. Pintér Krisztián (talk) 20:26, 15 August 2015 (UTC)[reply]

reopen the case for separate keccak article


in the light of recent documents, i suggest that keccak and sha-3 be separated. rationale: in this document http://keccak.noekeon.org/NoteSoftwareInterface.pdf the authors suggest a wide array of uses for keccak outside the scope of a hash function. also there are different usage modes, namely overwrite-mode absorbing (as opposed to the xor method), reduced rounds for the first Keccak-f in special cases like keyed mode, and sakura tree hashing with special padding. as of now, it is impossible to incorporate these into wikipedia, because they are not related to SHA-3, and there is no keccak article. 178.21.48.247 (talk) 14:32, 26 July 2013 (UTC)[reply]

It's not necessary to create a separate article for that, just create a subsection about the non-SHA features and make that clear in text. As an example, the Advanced Encryption Standard article also discusses the Rijndael-specific block and key lengths which are not in AES. -- intgr [talk] 06:37, 28 July 2013 (UTC)[reply]
not necessary but reasonable 178.21.48.155 (talk) 11:16, 29 July 2013 (UTC)[reply]
The variant of Keccak now being proposed by NIST for SHA-3 standardization is a specific implementation of Keccak (http://keccak.noekeon.org/NoteSoftwareInterface.pdf). I believe separating Keccak and SHA-3 into two articles would be wise. Even if that can't be accommodated, the differences between Keccak as a family of primitives, Keccak as suggested for use as a hash function, and SHA-3 as defined by NIST should really be clarified. Now that the standardization process is nearing completion, they are diverging and are no longer equivalent. Rbpolsen ·

more: CAESAR contestants ketje http://competitions.cr.yp.to/round1/ketjev1.pdf and keyak http://competitions.cr.yp.to/round1/keyakv1.pdf are based on smaller state and reduced round keccak. (i am 178.21.48.247 above) Krisztián Pintér (talk) 12:34, 19 March 2014 (UTC)[reply]

now what? now we have the SHAKE's as well. where to put it? Krisztián Pintér (talk) 18:59, 7 May 2014 (UTC)[reply]

The name SHA-3 seems to be rather restrictive to SHA3-{224,256,384,512}, while there is much more than that: the SHAKE's indeed, but also the new functions standardized by NIST in SP 800-185 (cSHAKE, KMAC, etc.), Ketje, Keyak, the STROBE protocol... So I think the current redirect does not help in showing the bigger picture, which would be a nice added value of this Wikipedia page. 62.147.254.143 (talk) 03:04, 4 August 2017 (UTC)[reply]

on the apropos of User:Samboy adding kanga, i would like to mention kravatte here to be added to the keccak zoo. all this would have been solved by having a keccak article and sha3 redirecting to a section of it. Krisztián Pintér (talk) 14:11, 20 August 2018 (UTC)[reply]

controversy section


added a little bit of info about the fuss that is going on. sadly, due to US government ineptness, i can't cite the djb mail from the NIST mailing list, it is not available. — Preceding unsigned comment added by 80.98.89.22 (talk) 16:53, 13 October 2013 (UTC)[reply]

turns out that it is a registration only site, and they don't seem to hand out accounts as easily as they claim. does anyone have an alternative source? 176.63.52.22 (talk) 22:14, 3 November 2013 (UTC)[reply]

Removing statements by Paul Crowley


I've never heard of "Paul Crowley", and he doesn't have a Wikipedia article (in contrast to e.g. Bruce Schneier, who is cited in the same section). A Google search for "Paul Crowley" doesn't turn up any cryptologist (there's an Irish football player, and a lawyer who comes up on the first page). The citation itself seems to be a blog site. I'm being bold and removing the statement, particularly considering the controversy around the weakening of Keccak by NIST. We need to be careful who is being cited and their weight in the cryptologic community. Please cite who he is before adding him back. 83.248.146.73 (talk) 14:16, 16 February 2014 (UTC)[reply]

Paul Crowley cryptanalyzed Salsa20 and was awarded a prize for it. His comments on the controversy are technically substantiated and can bring another light to the controversy, so they are worth adding back. 82.220.1.204 (talk) 16:17, 3 March 2014 (UTC)[reply]
i personally have no objection, but i made it a little shorter Krisztián Pintér (talk) 17:22, 3 March 2014 (UTC)[reply]

Problem with the third item of the references


I think there is a little problem with item three of the "References" section, because it is showing the following string in red:

"|first1= missing |last1= in Authors list (help)"


I'm sorry, but I don't know how to fix it, so I'm reporting it here.


Regards,

Lp.vitor (talk) 22:39, 21 October 2014 (UTC)[reply]

Free implementations available already?


For the previous hash algorithms, there have been free implementations available under the BSD license. Is there such an implementation available for SHA-3 already? Schily (talk) 15:56, 18 August 2015 (UTC)[reply]

tweaks


i suggest the deletion of the tweaks section, or merging it into the history section. now that the standard is out, the tweaks do not seem all that relevant. Krisztián Pintér (talk) 08:51, 8 September 2015 (UTC)[reply]

I think deletion is not good, but I support moving it to the history. --mafutrct (talk) 10:59, 8 October 2015 (UTC)[reply]

padding


AFAIK the padding is 1...0...1 for Keccak, 011...0...1 for SHA3 and 11111...0...1 for SHAKE. — Preceding unsigned comment added by 95.111.59.55 (talk) 12:13, 11 October 2015 (UTC)[reply]

those bits are not part of the padding, though obviously it will be implemented as one step in libraries. for example the rule for sha3 is: M || 01 || 10*1. this is practically equivalent to M || 0110*1. this is already covered in the article. Krisztián Pintér (talk) 12:36, 11 October 2015 (UTC)[reply]
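(For what it's worth, a byte-oriented sketch of how that combined suffix-plus-pad10*1 step typically looks in libraries; the function name is made up for the example, and it assumes byte-aligned input with bits read LSB-first within each byte, as in the Keccak convention:)

    def sha3_pad(message: bytes, rate_bytes: int) -> bytes:
        # SHA-3 domain suffix "01" followed by pad10*1, at byte granularity.
        # 0x06 encodes the suffix bits plus the first padding 1; 0x80 carries
        # the final padding 1 (they merge into 0x86 if only one byte is free).
        padded = bytearray(message)
        q = rate_bytes - (len(padded) % rate_bytes)
        if q == 1:
            padded.append(0x86)
        else:
            padded.append(0x06)
            padded.extend(b"\x00" * (q - 2))
            padded.append(0x80)
        return bytes(padded)

    # SHA3-256 absorbs r = 1088 bits = 136 bytes per permutation call, so
    # sha3_pad(b"", 136) is a single all-padding block: 0x06, then 134 zero
    # bytes, then 0x80. For SHAKE the first padding byte would be 0x1F instead.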

I have a question: The article now says that a maximum of r-1 0's are added, but given that there is a 1 in front and a 1 at the end, and that the total number of bits should be r, it seems to me that the maximal number of 0's is r-2 instead of r-1. This should occur when padding the empty message.

r-1 zeros are needed if there is only one bit of space left in the final block. the 1 goes to that block, followed by another block with r-1 zeros and a 1. this never happens in byte-based implementations. Krisztián Pintér (talk) 11:18, 26 July 2017 (UTC)[reply]

Is the example section correct?


I made an implementation of SHA3, and ran SHA3-224(""), and I get: d46b7676dc1570d228520b1b9dd5caa1319ec425e2775ec399f96fb7.

I've noticed that there's no in-line citation in the examples section, so I doubt it's correct.

Do we have a reference implementation we can defer to? — Preceding unsigned comment added by Dannyniu (talkcontribs) 12:20, 14 May 2016 (UTC)[reply]

i'm not really in the mood to track down more, but specifically empty-224 is here: http://csrc.nist.gov/groups/ST/toolkit/documents/Examples/SHA3-224_Msg0.pdf
Krisztián Pintér (talk) 13:40, 14 May 2016 (UTC)[reply]
and some more empty here http://csrc.nist.gov/groups/ST/toolkit/examples.html#aHashing Krisztián Pintér (talk) 13:41, 14 May 2016 (UTC)[reply]
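(For anyone who wants to sanity-check an implementation locally: Python's hashlib has shipped the final FIPS 202 variants since Python 3.6, so a minimal sketch like the following, assuming a reasonably recent Python, can be compared against the NIST example vector linked above.)

    import hashlib

    # Print SHA3-224 of the empty message and compare it by eye with the
    # value in the NIST "SHA3-224_Msg0" example document linked above.
    print(hashlib.sha3_224(b"").hexdigest())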

NIST controversy, capacity setting, pre-image and collision resistance.


I think some technical details may be worth mentioning. I can draft sentences on this, but finding references may be beyond me.

Because of the birthday attack, the collision resistance of any hash function is always half of its pre-image resistance. As NIST was questioning the necessity of having pre-image resistance twice that minimal target security level, many in the field (cite Daniel J. Bernstein, Bruce Schneier, quotes here) raised concerns over the move. While favorable in performance, the tweak would cause the SHA-3 functions to have significantly less pre-image resistance than their SHA-2 counterparts. — Preceding unsigned comment added by Dannyniu (talkcontribs) 09:22, 31 May 2017 (UTC)[reply]

Also, there's the trivia that quantum computers can pre-image a sponge function in capacity/3 compared to outlen/2 on true random oracles. Dannyniu (talk) 09:30, 31 May 2017 (UTC)[reply]

@Dannyniu and Ciphergoth: I just added a paragraph explaining the capacity change:
The hash function competition called for hash functions with a d-bit output to have d/2-bit resistance to collision attacks and d-bit resistance to preimage attacks, the maximum achievable for d bits of output. Keccak's security proof allows an adjustable level of security based on a "capacity" c, providing c/2-bit resistance to both collision and preimage attacks. To meet the original competition rules, Keccak's authors proposed c=2d. The announced change was to accept the same d/2-bit security for all forms of attack and standardize c=d. This would have sped up Keccak by allowing an additional d bits of input to be hashed each round.
I didn't see this discussion until after writing the above, but I think it addresses the issue. Review and edits solicited! I considered adding wording of the form "without weakening the hash in typical applications where the output size is determined by the required collision resistance", but after adding all the necessary caveats and clarifications, it just got too ponderous. Maybe someone else can do better? 104.153.72.218 (talk) 20:57, 12 July 2017 (UTC)[reply]
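(For readers who want the arithmetic behind that paragraph spelled out, a short sketch; these are the generic sponge security claims in terms of output length d and capacity c, not anything specific to a particular attack:)

    \text{collision} \approx \min(d/2,\ c/2), \qquad \text{preimage} \approx \min(d,\ c/2)
    c = 2d:\quad \text{collision} = d/2, \quad \text{preimage} = d \quad \text{(original proposal)}
    c = d:\quad \text{collision} = d/2, \quad \text{preimage} = d/2 \quad \text{(announced change)}
    r = 1600 - c, \text{ so moving from } c = 2d \text{ to } c = d \text{ frees } d \text{ extra bits of rate per permutation call.}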
to my knowledge, NIST did not explicitly call for preimage/collision resistances, but rather the requirement was to provide the same security level as the four SHA2 variants. and that happened to be b and b/2. the current text suggests that they actually set specific requirements and then changed their minds. the reality is that it did not occur to them that these can be separately tuned. if anyone has any source for the opposite, please share. i suggest reverting edit 02:19, 13 July 2017. Krisztián Pintér (talk) 07:00, 13 July 2017 (UTC)[reply]
I couldn't find a reference for the claim that "quantum computers can pre-image a sponge function in capacity/3 compared to outlen/2 on true random oracles", and I suspect that it is confusing pre-images and collisions, and the restriction to sponge functions seems wrong as well (the result applies to any hash function), see User talk:Dannyniu#NIST controversy, capacity setting, pre-image and collision resistance. If anyone has a reference, please provide it. --rtc (talk) 21:55, 16 July 2017 (UTC)[reply]
Ref seems to be [1], which doesn't give an academic source either. Anyway, capacity = 2*outlen, so capacity/3 = 2*outlen/3 = 2/3*outlen > outlen/2. --rtc (talk) 13:26, 22 July 2017 (UTC)[reply]

Speed in lede


I added the following to the lede:

SHA-3 has failed to find widespread adoption. While fast in hardware, it has been criticized for being slow in software; SHA2-512 is more than twice as fast as SHA3-512, and SHA-1 (though broken and with a much lower security claim) more than three times as fast, on a Skylake processor at 3.2 GHz.[1] On the other hand, SHA2-512 is susceptible to length extension attacks, so it is insecure for some applications, and some are thus recommending using SHA3-512 instead.[2]

This was removed by User:Krisztián Pintér with the comment "removing irrelevant paragraph from lead. speed section suffices." I don't think so. The topic raises the question of why hardly anybody is using SHA-3. The lede should give an explanation and give the facts. I don't think that this crucially relevant information should be buried deep down in the article. --rtc (talk) 12:05, 17 July 2017 (UTC)[reply]

whether SHA3 is slow is not a central point that would belong to the lead. it would be if speed were an actual issue, but your very link explains that this is mostly just warm air coming from half educated blake2 fanboys. there is a section for explaining the facts, that is more than enough. its adoption is not important either, considering that it was never meant to be used in the first place. NIST itself said that it is just a backup plan in case something happens to SHA2. i support putting a paragraph about this latter status. but i don't think you can successfully argue that the first thing a reader needs to know about sha3 is that it "failed" to spread and it is criticized for speed. the lead is a place for priority information, mostly in the "what the heck is this" domain, not minutiae. Krisztián Pintér (talk) 12:22, 17 July 2017 (UTC)[reply]
There are two crucial properties of cryptographic primitives: Their security and their speed. It is easy to design something that is secure but slow, and it is easy to design something that is fast but insecure. And an encyclopedia article in its lead should report the crucial properties of the subject. Software speed IS an actual issue for SHA-3. The alternatives that the authors provide are not disputing this fact, and thus there is no "warm air". It's simple, SHA3-512 is slow compared to SHA2-512. The authors say a) use hardware (not an option, it's simply not available in standard CPUs), b) reduce the security parameters (SHAKE), c) use cSHAKE to exploit parallelism (which is NOT a SHA-3, but a derived algorithm -- this article is about SHA-3), d) reduce the security parameters even more (KangarooTwelve) (again, NOT a SHA-3). yeah, "what the heck is this" -- it's a SLOW algorithm. That's what it is and that's what the lead should say. The authors say: "the initially-intended outcome of the competition is a set of four functions called SHA3-224, SHA3-256, SHA3-384, and SHA3-512. If 'SHA-3' means these four functions, then indeed SHA3-512 is unnecessarily slowed down by an excessive security parameter." That's not a fair statement. SHA2 has the same specifications, and it isn't as slow. The specifications are not to blame. --rtc (talk) 21:04, 17 July 2017 (UTC)[reply]
it is debatable how important speed is, but it is definitely not important if the primitive in question is not extraordinarily slow or fast. despite the noise, keccak is not slow. bear in mind that the fair comparison is shake256 vs sha512, even if the former is less secure, but it is not a practical difference, and the standard itself recommends shakes. your own link says that the speed is comparable to primitives used in the industry. once more, the lead is there to summarize the key points. if you truly believe that a major characteristic of keccak is its slow speed, you didn't pay attention. i would suggest putting something there, because now it is short. in my view, we should include its status as a plan-b, not something you should transition to. and maybe before that, some brief mention of the sponge construction, which is the main novelty of the design. try to argue that "slow speed" is more important than these two. Krisztián Pintér (talk) 21:20, 17 July 2017 (UTC)[reply]
If SHA-2 is twice as fast as SHA-3, that MEANS that SHA-3 is extraordinarily slow. As you say, shake256 is less secure than SHA512. Thus it is not a fair comparison. We do not have to discuss whether "it is not a practical difference" (though quantum computing + someone finding a fast way to compute the Keccak permutation function + moore's law + advances in cryptography is certainly something one has to consider. I remember articles in computer magazines of the 90s which claimed that a 1024 bit RSA key will not be within the reach of breaking for the expected age of the universe); the difference is simply there. SHA-3 is not a plan b. It was designed specifically to address the undesirable properties, not to say weaknesses, of the MD construction. See Merkle–Damgård_construction#Security_characteristics. "Crypto 2004 Conference: The Sky Falls ... Joux shows a surprising property in Merkle-Damgaard hashes"[3] That advances were made against SHA-1 etc., leading to the suspicion that they might be advanced to attack SHA-2 as well one day, only added to that. I am NOT saying that "a major characteristic of keccak is its slow speed", I am saying that SHA-3 (not Keccak with its many non-SHA3 or SHA3-derived variants) has been criticized for its slow speed in software. --rtc (talk) 21:51, 17 July 2017 (UTC)[reply]
it is your task to show that speed is a problem. and it should be well sourced, because the lead is the most important part of an article. if you really believe that 20% slower than sha2 is such a serious point that it is more important than its novel construction or intended use, you need to find some very strong evidence for that. but i bet you can only find blogposts. Krisztián Pintér (talk) 22:02, 17 July 2017 (UTC)[reply]
For the same security parameters, it is more than 50% slower. The source was given, it is the authors themselves. --rtc (talk) 22:08, 17 July 2017 (UTC)[reply]
this is going nowhere. shake is the main instance, and recommended by NIST. the speed of SHA3-s does not matter, you will only use them in rare cases, where the bigger preimage resistance matters. but i really don't care about your opinion. it is not a forum, it is an encyclopedia. if you think speed is a huge issue, prove it. not that it is slower, but that it is so important that it should be in the lead. or show me proof that the speed is hindering its acceptance or adoption. your judgement is not relevant. Krisztián Pintér (talk) 22:14, 17 July 2017 (UTC)[reply]
Your judgement is not relevant. I am merely reporting what the authors admit -- that SHA-3 has been criticized for being slow in software. --rtc (talk) 22:17, 17 July 2017 (UTC)[reply]
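(For anyone checking the throughput arithmetic behind claims like "more than 50% slower", a rough sketch using only the rates from FIPS 202; it ignores per-call constants, short-message overhead, and anything implementation-specific:)

    r_{\text{SHA3-256}} = 1600 - 2 \cdot 256 = 1088 \text{ bits} = 136 \text{ bytes absorbed per call of Keccak-}f
    r_{\text{SHA3-512}} = 1600 - 2 \cdot 512 = 576 \text{ bits} = 72 \text{ bytes absorbed per call of Keccak-}f
    1088 / 576 \approx 1.9

So for long messages SHA3-512 needs nearly twice as many permutation calls as SHA3-256, which is where most of the gap relative to the 256-bit variants comes from; how it compares to SHA-2 additionally depends on how fast one Keccak-f call is relative to one SHA-2 compression on the platform in question.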

[outdent]

I’ve been following this discussion today. The Wikipedia policy is pretty clear: We should only put things in the lede which reliable sources say about the topic in question. If reliable sources give extensive coverage to SHA-3’s software performance, then we put that there. But, that means a reliable source: A book about the topic, or a peer reviewed paper which gives an overview of modern cryptographic hash functions. Keep in mind that Adam Langley’s blog is not a reliable source as per our guidelines on what is and is not a reliable source. To say that SHA-3 is slow just because some people on some random Reddit discussion board feel it’s slow is not a reliable source; remember, the Pizzagate conspiracy theory started as a thread in Reddit. Since reliable sources do not give extensive discussion about SHA-3 being slower in software (in fact, they say SHA-3 is incredibly fast when implemented in hardware), to put any discussion based on a Reddit or Ycombinator thread violates WP:UNDUE. Samboy (talk) 02:12, 18 July 2017 (UTC)[reply]

It is not putting "any discussion based on a Reddit or Ycombinator", but based on the authors themselves, who react to such criticism. Obviously it is relevant enough for them to respond, and so it should be relevant enough for the lead. I find it quite ridiculous what specious arguments you and Krisztián use to try to keep the fundamental truth about SHA-3 out of the lead. The authors say, yes, it's slow compared to SHA-2, but only because "SHA3-512 is unnecessarily slowed down by an excessive security parameter". Well, that this is unnecessary is the authors' opinion; however, it does not change the fact, implied by the authors ("indeed"), that SHA-3 is slow compared to SHA-2. At least they admit that the decision to "unnecessarily" "slow" SHA-3 was made after "a fierce controversy in 2013" SHA-3#Capacity_change_controversy Apparently one can have a different opinion about the "slowing down" being necessary. But all that is irrelevant; what is relevant is that it is slow. --rtc (talk) 06:24, 18 July 2017 (UTC)[reply]
bear in mind that nobody debates the data points. the speeds are known. the debate is about how important these are, and whether it is actually impeding its adoption in a significant manner. the fact that it is X% slower than X other hash is not something you put in the lead unless it is important for some reason. show that it did not make it into tls for this reason. or some other important protocol or software chose another primitive as it was faster. something of that nature. Krisztián Pintér (talk) 07:26, 18 July 2017 (UTC)[reply]
“based on the authors themselves” Which is as per Wikipedia policy (observe the line “Articles should be based on reliable, third-party, published sources” emphasis mine), not the most reliable of source. “which specious arguments you and Krisztián try to keep the fundamental thruth about SHA-3 out of the lead” You claim our argument is “specious”, but you have failed to tell us which Wikipedia policy makes our argument specious. The argument, again, is that there is not enough independent, reliable third party discussion about SHA-3’s speed when rendered in software — the same old tired argument which only Blake2 advocates really care about now that SHA-3 is the most recent NIST hashing standard — for this discussion to be placed in the lede. If you disagree, please link to this third party discussion; please do not pretend we’re not making a reasonable valid argument just because you disagree with its conclusion. Samboy (talk) 10:04, 18 July 2017 (UTC)[reply]
Another point is that the general consensus among the cryptographic community is that anything over 256 bits of security is redundant. 256 bits of security means that if every single atom in the solar system was making 10,000,000 instances of SHAKE256 a second, it would still take 350,000 years to brute force through all possible 256-bit combinations. SHA-3-512 is only more secure than SHAKE256 in an abstract manner. I’ve removed the non-neutral wording implying SHAKE256 is somehow, in a real world manner, less secure than SHA3-512. Samboy (talk) 10:41, 18 July 2017 (UTC)[reply]
It is NOT the general consensus that anything over 256 bits of security is redundant. Quantum computing can cut security in half. For instance, SHAKE128 only has 64 bits of security against quantum attacks, not that much. Also, advances in cryptography can cut down security. Further, there may be applications that use SHA2-512 in such a way that only, say, 256 bits of security are left (not far fetched at all: using the hash as a key for Disk_encryption_theory#XTS). And finally, there's moore's law. You may believe that it cannot apply forever, but who knows? Advances in science and technology are often unexpected. Therefore, SHA-3-512 is not more secure only in an abstract manner. The wording was giving the facts and added to understanding of the article, please do not remove those facts; it biases the article. Just accept that those who were advocating removal of 512 security in the debate lost. --rtc (talk) 15:47, 18 July 2017 (UTC)[reply]
I removed your edits [2] and [3] because they are spin-doctoring the article. You claim they "imply" something you don't like, but in fact they are just giving the facts. The fact that the XOR is fast compared to the permutation. The fact that the hash becomes more secure the higher the capacity is chosen. The fact that preimage resistance was cut in half. All this is valuable information in the article. You removed "The authors have reacted to this criticism by suggesting to use SHAKE128 and SHAKE256 instead of SHA3-256 and SHA3-512, at the expense of cutting the preimage resistance in half (but while keeping the collision resistance)." because you find that "256 bits of security is every atom in the solar system making 10,000,000 SHAKE256 calculations a second for over 350,000 years; let’s not imply that cutting preimage resistance down to 256 bits decreases real-world security" Even if it were true, your removal is inappropriate, as this is about SHA3-512 AND SHA3-256, and half of SHA3-256 security is 128, not 256. You also removed "the less efficient, but more secure, the hashing becomes, since more bits of the message can be XORed into the state (a quick operation) before each application of the complex and slow f." with the comment "Removing implication that 512 bits of security is somehow, in any real world sense, more secure than 256 bits, and removing non-neutral wording" Again, this has nothing to do with 256 bits. It applies to any number of bits and is an important property of the cipher. And there's no neutrality problem with the text; on the contrary, it is very important to stress that f is complex and slow while xoring is quick and simple. That is the crucial point of how keccak weighs speed against security in its variants. The article gives due weight to all views, there is no need to censor facts just because you want the article to support only the one view you prefer. If you are desperate about your 350,000 years argument, find a reliable source for it and add it to the article, but please do not remove something from it. --rtc (talk) 16:01, 18 July 2017 (UTC)[reply]
it was probably a misunderstanding. actually i also first interpreted the text in the way Samboy did, namely that it is slow in some absolute sense. probably the fact that you are pushing this speed agenda hard does not help. anyway, i changed the text so it is more precise and cannot be interpreted incorrectly. Krisztián Pintér (talk) 22:15, 18 July 2017 (UTC)[reply]

References

Speed redux


Since changes to this section have resulted in heated discussion from one editor, here are the latest changes I have made to the speed section (in bold)

The lower r is (and, conversely, the higher c = br = 1600 – r), the less efficient, but more theoretically secure, the hashing becomes, since fewer bits of the message can be XORed into the state (a quick operation) before each application of the computationally expensive f.

a) Theoretically secure, for two reasons:

The first reason is that, once we get to 256 bits of security (or, likewise, 128 bits of security against an imaginary quantum computer), the numbers just don’t make sense in the real world. Since that one editor didn’t like my own back-of-the-envelope calculation of making every atom in the solar system a brute-force cryptographic calculating computer four times as fast as my core i7-7600U, which would require over 350,000 years to brute force 256 bits, let me quote Applied Cryptography to give an idea of how hard it would be to brute force only 128 bits:

Assume the typical algae cell is the size of a cube 10 microns on a side (this is probably a large estimate), then 10 ** 15 of them can fill a cubic meter. Pump them into the ocean and cover 200 square miles (518 square kilometers) of water to a meter deep (you figure out how to do it—I’m just the idea man), and you’d have 10 ** 23 (over a hundred billion gallons) of them floating in the ocean. (For comparison, the Exxon Valdez spilled 10 million gallons of oil.) If each of them can try a million keys per second, they will recover the key for a 128-bit algorithm in just over 100 years. (The resulting algae bloom is your problem.)

For 256 bits to be brute forced with a quantum computer, imagine this same algae bloom — but this time with each algae cell somehow being a quantum computer.

The other reason is that a cryptographic sponge would have to be indistinguishable from a random permutation to actually get more than 256 bits of security. Any cryptographic weakness in Keccak’s round function, no matter how small, would make it so having a huge capacity like 1024 bits won’t actually give us 512 bits of security (remember, with a sponge, the theoretical security level is the capacity in bits divided by two).

b) fewer bits

“bits” is a countable noun, not a mass noun, so it is grammatically correct to say “fewer bits”, not “less bits.”

Samboy (talk) 07:21, 24 July 2017 (UTC)[reply]

I still think you are getting it wrong about what that statement says. It is not about increasing the security above 256 bits or 128 bits into a range which you deem only "theoretically" more secure. It is a general property of the algorithm, which applies to any r; r can take any value from 1 to 1600. Which of the resulting security levels are theoretically relevant and which are practically relevant is completely unrelated to what the statement says, and raising the question there only confuses the reader. Please remove the "theoretical" in this place and raise the matter somewhere else in the article, if it is so important to you. Preferably, it belongs in the "Capacity change controversy" section.
About your other reason, I simply don't get how 256 bits are relevant here. A weakness in the permutation would decrease the security for all security levels, not merely 256 bits or 512 bits. And it would actually be an argument against the "theoretical", since if the algorithm is weaker than announced, then increasing the capacity (even above 512, which you deem the "theoretical" security range) may be a reasonable way to mitigate that. However, again, it is something completely unrelated to what that statement says. --rtc (talk) 08:07, 24 July 2017 (UTC)[reply]
I haven't been following the discussion too closely, but I'm inclined to agree with rtc. In the real world, given enough time and effort, weaknesses will be found (which may be a cryptanalytic breakthrough, quantum computers or an advanced alien race attacking our communications ;) ). Looking at past breakages of ciphers and hash functions, a longer digest size (or key size) always gives you a larger security margin against attacks.
So let's not make any judgments in the article whether above 256 bits is useless/theoretical or not, because that may yet change in the future. -- intgr [talk] 10:55, 24 July 2017 (UTC)[reply]
the whole section is weird in my view. i would expect some data there; instead, i get some lengthy explanation about xor, the authors' response to criticism, which does not seem to exist, and a bare mention of how shakes are supposed to fix it despite the fact that shakes are the proposed main modes in the standard. Krisztián Pintér (talk) 07:23, 25 July 2017 (UTC)[reply]
correction: i clearly remember reading that shakes are the main instances, but apparently it is either wrong or was dropped at some point. fips202 explicitly forbids shakes to be used as hash functions. on the other hand, NIST SP 800-185 exclusively uses shakes. Krisztián Pintér (talk) 10:06, 25 July 2017 (UTC)[reply]
The section does contain some data. The explanation about xors is crucial to understand how the sponge construction allows balancing speed versus security. And the criticism as well as the authors' reaction to the criticism should be mentioned. --rtc (talk) 17:48, 25 July 2017 (UTC)[reply]


Security against quantum attacks


It seems that the table on the security against quantum attacks is not correct for the SHAKEs. IMHO, the resistance should be:

  • Collision (Brassard et al.): min(d/3, c/3)
  • Collision (Bernstein), preimages: min(d/2, c/2)

62.147.254.143 (talk) 21:18, 30 July 2017 (UTC)[reply]
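(For reference, and hedged accordingly: the classical claims these quantum figures would be derived from are usually stated, for capacity c and output length d, as below; whether the quantum column should then use c/2, c/3 or c/4 is exactly the open question in this thread.)

    \text{collision} = \min(d/2,\ c/2), \qquad \text{preimage} = \min(d,\ c/2), \qquad \text{2nd preimage} = \min(d,\ c/2)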

Just implemented it now. 62.147.254.143 (talk) 02:58, 4 August 2017 (UTC)[reply]

Why should the resistance be like that? You write "Noting that here n can be either the output d bits or the c inner state bits", but the classical preimage resistance security is at most 2^(c/2), not 2^c, for Keccak. This gives min(d/2, c/4) for the quantum preimage attack etc. There is an error though: the collision resistance depends on d, not on n. n is merely an upper bound (since, if you can find second preimages, you can also find collisions). Fixed that. --rtc (talk) 13:16, 5 August 2017 (UTC)[reply]
Ok I see there was another error; like the quantum collision attack, the quantum preimage attack depends on d, not on n. However I am not sure if there is no quantum attack where it depends on n. This can probably only be answered by a quantum cryptography expert. --rtc (talk) 13:54, 5 August 2017 (UTC)[reply]

Should be renamed to Keccak


Keccak is a family of cryptographic algorithms (built around the cryptographic sponge or another construction and the Keccak permutation with a chosen number of rounds), some of which have been standardized as "SHA-3".

This article is currently unnecessarily confusing because of having SHA-3 as the main topic, instead of just a subsection or two being devoted to SHA-3.

I propose changing this page so as to

  • Rename it to "Keccak" or "Keccak cryptographic algorithm family" or something like that.
  • Introduce with an overview of notable schemes/algorithms that use the Keccak permutation (the NIST standardized ones, KangarooTwelve, Ketje, Keyak, Kravatte ...).
  • Describe the Keccak permutation.
  • Describe how the Keccak permutation can be used with sponge, duplex, and Farfalle constructions.
  • Describe how different cryptographic schemes (mentioned in the lead) are instantiated or built from the described permutation and constructions. Here we would mention which schemes are standardized by which standard (i.e.; FIPS 202/SHA-3, SP 800-185, TUAK).

If you do not agree with some details, take this as just a general direction. Although an amount of work would be necessary for additions to the article (like Farfalle), that stuff must be postponed (because it would currently be off-topic). What needs to be done as soon as possible is to rename and reorganize the existing article so as to emphasize the Keccak permutation, and de-emphasize the SHA-3 and cSHAKE functions (which are simply instances of the sponge constructed with the Keccak permutation), because the current structure is unnatural and confusing. 77.216.237.71 (talk) 16:39, 7 January 2019 (UTC)[reply]

see also section 4 above. note: farfalle is a general construction. kravatte is the keccak derivative. similarly duplex, monkey duplex are general constructions. Krisztián Pintér (talk) 16:54, 7 January 2019 (UTC)[reply]

Updated software speed


Looking at the tables in https://keccak.team/sw_performance.html, it seems that the software speed figures in this page are somewhat behind the latest implementations on Skylake. And the statement "12.6 cpb on a typical x86-64-based machine" looks outdated. Do you confirm? DemusHok (talk) 15:00, 19 October 2019 (UTC)[reply]

About Kravatte


@Krisztián Pintér: Thanks for adding Kravatte to the page!

Actually, I found the sentences "The original instance was found weak. An update was released in 2018 to address these issues." somewhat misleading. We indeed initially posted a weak instance on ePrint. However, we fixed it and Kravatte as published in ToSC 2017 is fine: The attacks by Guo et al. and Chaigneau et al. are on the earlier versions, see Appendix A for the version history. As for the 2018 update, that does not concern Kravatte itself but the modes on top of it (SANE and SANSE vs SAE and SIV). --Gilles Van Assche (talk) 14:27, 2 November 2019 (UTC)[reply]

noted. you could make it more accurate. i will myself, whenever i find the time and willpower to read your reference in appropriate detail. maybe a suggestion? Krisztián Pintér (talk) 15:11, 13 November 2019 (UTC)[reply]
I think they did not change it because it might be seen as WP:COI. BernardoSulzbach (talk) 22:58, 13 November 2019 (UTC)[reply]
how about not getting lost in the details and just include the results. article updated. Krisztián Pintér (talk) 14:08, 17 November 2019 (UTC)[reply]

XOF security properties


I've marked this (particularly the last sentence) as failed verification:

SHAKE will generate as many bits from its sponge as requested, called XOFs (Extendable Output Functions). For example, SHAKE128(M, 256) can be used as a hash function with a 256 character bitstream with 128-bit security strength. Arbitrarily large lengths can be used as pseudo-random number generators. Alternately, SHAKE256(M, 128) can be used as a hash function with a 128-bit length and 128-bit resistance, but unlike truncated output of MD and SHA family functions, including SHA-3, will retain its security properties at any given size. SHAKE functions require that every bit of output be as strong as the last, whereas other hashes only require that the entire hash be strong, while a subset may be weak.

I think it needs rewriting, but I don't understand what it's trying to say about MD and SHA functions. SHA explicitly allows for truncation of the hash, and the concept that "every bit of output be as strong as the last" doesn't make any cryptographic sense to me. The closest the source contains is A.2:

when two different output lengths are chosen for a common message, the two outputs are closely related: the longer output is an extension of the shorter output

But this is a drawback of XOFs, not a criticism of MD and SHA functions. As a result, I've reverted User:SilverbackNet's revert of my edit. If anyone can explain a different interpretation of either the article or the source, please let me know before I spend time rewriting. Ms7821 (talk) 21:52, 22 December 2021 (UTC)[reply]
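(To make the quoted A.2 behaviour concrete, a small illustration assuming Python 3.6+ with its hashlib SHAKE support; the message is arbitrary:)

    import hashlib

    # Two output lengths for the same message: per FIPS 202 A.2, the longer
    # output is simply an extension of the shorter one.
    short = hashlib.shake_128(b"example message").hexdigest(32)   # 32 bytes
    longer = hashlib.shake_128(b"example message").hexdigest(64)  # 64 bytes
    assert longer.startswith(short)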

Agreed. The whole bit where it says “but unlike truncated output of MD and SHA family functions, including SHA-3, [XOFs] will retain its security properties at any given size. SHAKE functions require that every bit of output be as strong as the last, whereas other hashes only require that the entire hash be strong, while a subset may be weak.” isn’t what the reference says. And, indeed, SHA-512/256 is a 256 bit hash with 256 bits of security (128 bits against collision attacks because of the birthday paradox), even though the state used to generate the hash is 512 bits in size. A XOF allows us to use a hash to make as many secure random bits as needed, which allows it to be used for things like RSA padding, a stream cipher, or a cryptographic secure random number generator, but it doesn’t somehow make the output bits more secure compared to a fixed output length hash like SHA2-256. Well, OK, sha2-256 is vulnerable to length extension attacks since its entire state is visible, meaning we need to use a HMAC-SHA256 special construction where that matters—but that’s not what failed verification and needed to be deleted. Samboy (talk) 14:35, 23 December 2021 (UTC)[reply]

Please explain the meaning of ℓ


In the description of the differences between Keccak and SHA3 there is this text: "12 + ℓ to 12 + 2ℓ", where nowhere in the article is it mentioned what ℓ refers to... Can somebody who knows what this means please add it?--2A02:8070:6394:7A00:A553:6D74:9F04:1A8D (talk) 13:52, 3 April 2022 (UTC)[reply]

Described in The block permutation: "It is defined for any power-of-two word size, w = 2ℓ bits. The main SHA-3 submission uses 64-bit words, ℓ = 6.". Maybe some more clarity would be useful up there. Krisztián Pintér (talk) 07:32, 25 May 2022 (UTC)[reply]
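(Concretely, from that definition:)

    w = 2^{\ell}; \quad \text{SHA-3 uses } w = 64, \text{ so } \ell = 6, \text{ giving } 12 + 2\ell = 24 \text{ rounds, versus } 12 + \ell = 18.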

Contradiction?


The lede says "The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, ..." while the History section says "SHA-3 is not meant to replace SHA-2, as no significant attack on SHA-2 has been demonstrated". Those statements might both be true in some technical sense, but they sound somewhat contradictory. I want to "harmonize" these two phrases, but I am not sure what is best. Any thoughts? David10244 (talk) 13:41, 24 December 2022 (UTC)[reply]

i agree that this is confusing. just a few points that need to be respected: 1) sha3 was started out of concerns about sha2 (and md in general), but now people are more relaxed. 2) sha3 had the explicit requirement to provide drop-in versions for sha2 (same security guarantees at 224, 256, 384, 512), so protocols don't have to be redesigned. 3) sha3 actually is more than just replacements, there are also the shakes. Krisztián Pintér (talk) 13:50, 24 December 2022 (UTC)[reply]
Interesting information, thanks. If "sha3 had the explicit requirement to provide drop-in versions for sha2", then the History part that says "SHA-3 is not meant to replace SHA-2" might be wrong. Maybe, as I think you imply, that piece should say "SHA-3 is no longer meant to replace SHA-2..."
Would it be good to clarify some of that in the article? David10244 (talk) 14:06, 24 December 2022 (UTC)[reply]
Also, there's this: "With the following definitions ... SHA-3 instances are drop-in replacements for SHA-2, intended to have identical security properties" which also makes the "not meant to replace" statement kind of weird. I realize that the phrase in the lede is a summary of the later statement that I just mentioned. David10244 (talk) 14:24, 24 December 2022 (UTC)[reply]
the thinking was: if there is an urgent need for a new hash, then it must be ready to be activated with the flip of a switch. but that "if" never materialized, as trust in MD resumed. Krisztián Pintér (talk) 14:32, 24 December 2022 (UTC)[reply]
@Krisztián Pintér Could you make the article say that? David10244 (talk) 06:20, 27 December 2022 (UTC)[reply]
unfortunately the original doesn't include a reference, and i'm not changing it without finding the exact wording of the call for submissions, and also adding a ref. 5 minutes google wasn't sufficient, but if anyone can find, please do -- Krisztián Pintér (talk) 17:55, 29 December 2022 (UTC)[reply]

Permutation


The term permutation is indeed confusing, but the fact of confusion does not belong here and should be dealt with somewhere else (I have started a discussion at Talk:Substitution-permutation network#Meaning of "permutation"). The regular (bit) permutation is frequently called "transposition" to avoid this confusion. Dimawik (talk) 23:32, 9 October 2023 (UTC)[reply]