From patchwork Thu Mar 19 13:00:36 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 229130
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Jann Horn, Linus Torvalds
Subject: [PATCH 4.4 92/93] mm: slub: add missing TID bump in kmem_cache_alloc_bulk()
Date: Thu, 19 Mar 2020 14:00:36 +0100
Message-Id: <20200319123953.547695284@linuxfoundation.org>
In-Reply-To: <20200319123924.795019515@linuxfoundation.org>
References: <20200319123924.795019515@linuxfoundation.org>

From: Jann Horn

commit fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8 upstream.

When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
freelist of length M, and N > M > 0, it will first remove the M elements
from the percpu freelist, then call ___slab_alloc() to allocate the next
element and repopulate the percpu freelist. ___slab_alloc() can re-enable
IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
to properly commit the freelist head change.

Fix it by unconditionally bumping c->tid when entering the slowpath.

Cc: stable@vger.kernel.org
Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 mm/slub.c |    9 +++++++++
 1 file changed, 9 insertions(+)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2932,6 +2932,15 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 		if (unlikely(!object)) {
 			/*
+			 * We may have removed an object from c->freelist using
+			 * the fastpath in the previous iteration; in that case,
+			 * c->tid has not been bumped yet.
+			 * Since ___slab_alloc() may reenable interrupts while
+			 * allocating memory, we should bump c->tid now.
+			 */
+			c->tid = next_tid(c->tid);
+
+			/*
 			 * Invoking slow path likely have side-effect
 			 * of re-populating per CPU c->freelist
 			 */