Message ID: 20210531130658.804599277@linuxfoundation.org
State: New
Hi!

> From: Mathy Vanhoef <Mathy.Vanhoef@kuleuven.be>
>
> commit 94034c40ab4a3fcf581fbc7f8fdf4e29943c4a24 upstream.
>
> Simultaneously prevent mixed key attacks (CVE-2020-24587) and fragment
> cache attacks (CVE-2020-24586). This is accomplished by assigning a
> unique color to every key (per interface) and using this to track which
> key was used to decrypt a fragment. When reassembling frames, it is
> now checked whether all fragments were decrypted using the same key.
>
> To assure that fragment cache attacks are also prevented, the ID that is
> assigned to keys is unique even over (re)associations and (re)connects.
> This means fragments separated by a (re)association or (re)connect will
> not be reassembled. Because mac80211 now also prevents the reassembly of
> mixed encrypted and plaintext fragments, all cache attacks are
> prevented.

> --- a/net/mac80211/key.c
> +++ b/net/mac80211/key.c
> @@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_
>  		       struct ieee80211_sub_if_data *sdata,
>  		       struct sta_info *sta)
>  {
> +	static atomic_t key_color = ATOMIC_INIT(0);
>  	struct ieee80211_key *old_key;

This is nice and simple, but does not include any kind of overflow
handling. sparc32 moved away from 24-bit atomics, which is good, I
guess. OTOH if this is incremented 10 times a second, we'll still
overflow in 6 years or so. Can an attacker make it overflow?

Should this have a note on why overflow is not possible / why it is not
a problem?

Best regards,
								Pavel
-- 
http://www.livejournal.com/~pavelmachek
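For a rough check of those numbers: a 24-bit counter wraps after 2^24 increments and a signed 32-bit atomic_t overflows after about 2^31, so at 10 increments per second that is roughly 19 days and 6.8 years respectively. A throwaway userspace sketch, not kernel code; the 10-per-second rate is only the assumption made above:

	#include <stdio.h>

	int main(void)
	{
		const double per_second = 10.0;			/* assumed key-install rate */
		const double secs_per_day = 3600.0 * 24.0;

		/* a 24-bit counter (old sparc32-style atomics) wraps after 2^24 */
		printf("24-bit: %.1f days\n",
		       (double)(1u << 24) / per_second / secs_per_day);

		/* a signed 32-bit atomic_t overflows after about 2^31 increments */
		printf("32-bit: %.1f years\n",
		       2147483648.0 / per_second / (secs_per_day * 365.0));

		return 0;
	}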
Hello Pavel,

Good remark. In practice this doesn't look like a problem: the overflow
would need to happen in less than two seconds. If it takes longer, the
mixed key and cache attack cannot be performed, because the previous
fragment(s) will be discarded; see
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/mac80211/rx.c?h=v5.13-rc4&id=8124c8a6b35386f73523d27eacb71b5364a68c4c#n2202

It may be useful to add this as a comment to the code.

Cheers,
Mathy

On 6/1/21 1:26 PM, Pavel Machek wrote:
> Hi!
> 
>> From: Mathy Vanhoef <Mathy.Vanhoef@kuleuven.be>
>>
>> commit 94034c40ab4a3fcf581fbc7f8fdf4e29943c4a24 upstream.
>>
>> Simultaneously prevent mixed key attacks (CVE-2020-24587) and fragment
>> cache attacks (CVE-2020-24586). This is accomplished by assigning a
>> unique color to every key (per interface) and using this to track which
>> key was used to decrypt a fragment. When reassembling frames, it is
>> now checked whether all fragments were decrypted using the same key.
>>
>> To assure that fragment cache attacks are also prevented, the ID that is
>> assigned to keys is unique even over (re)associations and (re)connects.
>> This means fragments separated by a (re)association or (re)connect will
>> not be reassembled. Because mac80211 now also prevents the reassembly of
>> mixed encrypted and plaintext fragments, all cache attacks are
>> prevented.
> 
>> --- a/net/mac80211/key.c
>> +++ b/net/mac80211/key.c
>> @@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_
>>  		       struct ieee80211_sub_if_data *sdata,
>>  		       struct sta_info *sta)
>>  {
>> +	static atomic_t key_color = ATOMIC_INIT(0);
>>  	struct ieee80211_key *old_key;
> 
> This is nice and simple, but does not include any kind of overflow
> handling. sparc32 moved away from 24-bit atomics, which is good, I
> guess. OTOH if this is incremented 10 times a second, we'll still
> overflow in 6 years or so. Can an attacker make it overflow?
> 
> Should this have a note on why overflow is not possible / why it is not
> a problem?
> 
> Best regards,
> 								Pavel
> 
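The check Mathy links to is the fragment-cache expiry in net/mac80211/rx.c, which stops matching (and purges) cached fragments after roughly two seconds. Below is a self-contained plain-C model of that argument, with made-up names rather than the mac80211 code: once the lifetime has passed, a stale entry can no longer be reused, so a wrapped key color could only collide with a live one if the 32-bit counter wrapped inside that two-second window, i.e. at around 2 * 10^9 key installs per second.

	#include <stdbool.h>
	#include <stdio.h>
	#include <time.h>

	#define FRAG_LIFETIME_SEC 2	/* the ~2 s limit referenced above */

	struct frag_entry {
		time_t first_frag_time;
		unsigned int key_color;
	};

	/* analogue of the expiry test: entries older than the lifetime are
	 * never matched again */
	static bool entry_expired(const struct frag_entry *e, time_t now)
	{
		return now > e->first_frag_time + FRAG_LIFETIME_SEC;
	}

	int main(void)
	{
		time_t now = time(NULL);
		struct frag_entry e = { .first_frag_time = now, .key_color = 1 };

		printf("expired immediately?  %d\n", entry_expired(&e, now));
		printf("expired after 3 secs? %d\n", entry_expired(&e, now + 3));

		/* wrapping 2^32 color values inside the 2 s lifetime would take
		 * roughly 2 * 10^9 key installs per second */
		printf("installs/s needed to wrap in time: %.1e\n",
		       4294967296.0 / FRAG_LIFETIME_SEC);
		return 0;
	}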
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -97,6 +97,7 @@ struct ieee80211_fragment_entry {
 	u8 rx_queue;
 	bool check_sequential_pn; /* needed for CCMP/GCMP */
 	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
+	unsigned int key_color;
 };
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -799,6 +799,7 @@ int ieee80211_key_link(struct ieee80211_
 		       struct ieee80211_sub_if_data *sdata,
 		       struct sta_info *sta)
 {
+	static atomic_t key_color = ATOMIC_INIT(0);
 	struct ieee80211_key *old_key;
 	int idx = key->conf.keyidx;
 	bool pairwise = key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE;
@@ -850,6 +851,12 @@ int ieee80211_key_link(struct ieee80211_
 	key->sdata = sdata;
 	key->sta = sta;
 
+	/*
+	 * Assign a unique ID to every key so we can easily prevent mixed
+	 * key and fragment cache attacks.
+	 */
+	key->color = atomic_inc_return(&key_color);
+
 	increment_tailroom_need_count(sdata);
 
 	ret = ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
--- a/net/mac80211/key.h
+++ b/net/mac80211/key.h
@@ -128,6 +128,8 @@ struct ieee80211_key {
 	} debugfs;
 #endif
 
+	unsigned int color;
+
 	/*
 	 * key config, must be last because it contains key
 	 * material as variable length member
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -2265,6 +2265,7 @@ ieee80211_rx_h_defragment(struct ieee802
 		 * next fragment has a sequential PN value.
 		 */
 		entry->check_sequential_pn = true;
+		entry->key_color = rx->key->color;
 		memcpy(entry->last_pn,
 		       rx->key->u.ccmp.rx_pn[queue],
 		       IEEE80211_CCMP_PN_LEN);
@@ -2302,6 +2303,11 @@ ieee80211_rx_h_defragment(struct ieee802
 
 		if (!requires_sequential_pn(rx, fc))
 			return RX_DROP_UNUSABLE;
+
+		/* Prevent mixed key and fragment cache attacks */
+		if (entry->key_color != rx->key->color)
+			return RX_DROP_UNUSABLE;
+
 		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
 		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
 			pn[i]++;
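To illustrate what the patch enforces, here is a minimal self-contained model in plain C, using hypothetical types and helpers rather than the mac80211 API: the first fragment records the color of the key that decrypted it, and a later fragment decrypted under a key with a different color is refused, which is what defeats the mixed key and fragment cache attacks across a rekey or (re)association.

	#include <stdbool.h>
	#include <stdio.h>

	struct key        { unsigned int color; };
	struct frag_entry { unsigned int key_color; bool in_use; };

	static unsigned int next_color;		/* models the static atomic_t in key.c */

	/* models ieee80211_key_link() assigning key->color via atomic_inc_return() */
	static struct key install_key(void)
	{
		struct key k = { .color = ++next_color };
		return k;
	}

	/* first fragment: remember which key decrypted it */
	static void start_defrag(struct frag_entry *e, const struct key *k)
	{
		e->key_color = k->color;
		e->in_use = true;
	}

	/* later fragment: only accept it if the same key (color) was used */
	static bool accept_fragment(const struct frag_entry *e, const struct key *k)
	{
		return e->in_use && e->key_color == k->color;
	}

	int main(void)
	{
		struct frag_entry entry = { 0 };

		struct key old_key = install_key();
		start_defrag(&entry, &old_key);		/* fragment 0 under the old key */

		struct key new_key = install_key();	/* rekey / (re)association */

		/* fragment 1 decrypted under the new key: colors differ -> drop */
		printf("fragment accepted: %s\n",
		       accept_fragment(&entry, &new_key) ? "yes" : "no (dropped)");
		return 0;
	}

Because the counter only ever increases, a key installed after a (re)association can never share a color with one installed before it, short of the 32-bit wrap discussed above.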