From patchwork Thu Oct 16 09:34:23 2014
X-Patchwork-Submitter: Edward Nevill
X-Patchwork-Id: 38800
Message-ID: <1413452063.17920.24.camel@localhost.localdomain>
Subject: RFR: JDK9 merge up to jdk9-b35
From: Edward Nevill
Reply-To: edward.nevill@linaro.org
To: "aarch64-port-dev@openjdk.java.net"
Cc: Patch Tracking
Date: Thu, 16 Oct 2014 10:34:23 +0100
Organization: Linaro
Hi,

The following webrevs merge the aarch64 jdk9 forest up to revision jdk9-b35 from jdk9-b27.

http://openjdk.linaro.org/webrev/141015/corba/
http://openjdk.linaro.org/webrev/141015/hotspot/
http://openjdk.linaro.org/webrev/141015/jaxp/
http://openjdk.linaro.org/webrev/141015/jaxws/
http://openjdk.linaro.org/webrev/141015/jdk/
http://openjdk.linaro.org/webrev/141015/jdk9/
http://openjdk.linaro.org/webrev/141015/langtools/
http://openjdk.linaro.org/webrev/141015/nashorn/

The following webrev contains the aarch64 specific changes (also pasted inline below).

http://openjdk.linaro.org/webrev/141015/aarch64_hotspot/

I have tested the following builds:

Cross compile:  client: release
Cross compile:  server: release
Cross compile:  zero:   release
Native compile: server: release
Native compile: server: fastdebug
Native compile: server: slowdebug

I have run the JTreg hotspot and langtools suites and compared the results against aarch64 jdk9-b27 and x86 jdk9-b35. In the following, the numbers in brackets are (aarch64 jdk9-b35/aarch64 jdk9-b27/x86 jdk9-b35):

server/hotspot:    Pass: (639/587/640),    Fail: (7/7/5), Error: (18/11/19)
langtools/hotspot: Pass: (3084/3053/3086), Fail: (0/1/0), Error: (29/29/26)

OK to push?

Ed.
--- CUT HERE ---
# HG changeset patch
# User Edward Nevill edward.nevill@linaro.org
# Date 1413450007 -3600
#      Thu Oct 16 10:00:07 2014 +0100
# Node ID 0d8213bd2a8ff054e18f69a485ac3fdd3dc3919a
# Parent  2f147fc9cff1015dd36bd7fb3361211a90c9f99c
aarch64 specific merge up to jdk9-b35

diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/c1_globals_aarch64.hpp
--- a/src/cpu/aarch64/vm/c1_globals_aarch64.hpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/c1_globals_aarch64.hpp	Thu Oct 16 10:00:07 2014 +0100
@@ -57,6 +57,9 @@
 define_pd_global(intx, NewSizeThreadIncrease, 4*K );
 define_pd_global(intx, InitialCodeCacheSize, 160*K);
 define_pd_global(intx, ReservedCodeCacheSize, 32*M );
+define_pd_global(intx, NonProfiledCodeHeapSize, 13*M );
+define_pd_global(intx, ProfiledCodeHeapSize, 14*M );
+define_pd_global(intx, NonMethodCodeHeapSize, 5*M );
 define_pd_global(bool, ProfileInterpreter, false);
 define_pd_global(intx, CodeCacheExpansionSize, 32*K );
 define_pd_global(uintx, CodeCacheMinBlockLength, 1);
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/c2_globals_aarch64.hpp
--- a/src/cpu/aarch64/vm/c2_globals_aarch64.hpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/c2_globals_aarch64.hpp	Thu Oct 16 10:00:07 2014 +0100
@@ -75,6 +75,9 @@
 define_pd_global(bool, OptoBundling, false);
 
 define_pd_global(intx, ReservedCodeCacheSize, 48*M);
+define_pd_global(intx, NonProfiledCodeHeapSize, 21*M);
+define_pd_global(intx, ProfiledCodeHeapSize, 22*M);
+define_pd_global(intx, NonMethodCodeHeapSize, 5*M );
 define_pd_global(uintx, CodeCacheMinBlockLength, 4);
 define_pd_global(uintx, CodeCacheMinimumUseSpace, 400*K);
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/frame_aarch64.cpp
--- a/src/cpu/aarch64/vm/frame_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/frame_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -689,6 +689,13 @@
   return NULL;
 }
 
+#ifndef PRODUCT
+// This is a generic constructor which is only used by pns() in debug.cpp.
+frame::frame(void* sp, void* fp, void* pc) {
+  frame((intptr_t*)sp, (intptr_t*)fp, (address)pc);
+}
+#endif
+
 intptr_t* frame::real_fp() const {
   if (_cb != NULL) {
     // use the frame size if valid
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/interpreterGenerator_aarch64.hpp
--- a/src/cpu/aarch64/vm/interpreterGenerator_aarch64.hpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/interpreterGenerator_aarch64.hpp	Thu Oct 16 10:00:07 2014 +0100
@@ -43,8 +43,9 @@
   address generate_abstract_entry(void);
   address generate_math_entry(AbstractInterpreter::MethodKind kind);
   void generate_transcendental_entry(AbstractInterpreter::MethodKind kind, int fpargs);
-  address generate_empty_entry(void);
-  address generate_accessor_entry(void);
+  address generate_jump_to_normal_entry(void);
+  address generate_accessor_entry(void) { return generate_jump_to_normal_entry(); }
+  address generate_empty_entry(void) { return generate_jump_to_normal_entry(); }
   address generate_Reference_get_entry();
   address generate_CRC32_update_entry();
   address generate_CRC32_updateBytes_entry(AbstractInterpreter::MethodKind kind);
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/interpreter_aarch64.cpp
--- a/src/cpu/aarch64/vm/interpreter_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/interpreter_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -237,6 +237,17 @@
   __ blrt(rscratch1, gpargs, fpargs, rtype);
 }
 
+// Jump to the normal entry for accessor and empty methods. The "fast"
+// entries do not update the invocation counter and can therefore
+// prevent inlining of these methods, which ought to be inlined.
+address InterpreterGenerator::generate_jump_to_normal_entry(void) {
+  address entry_point = __ pc();
+
+  assert(Interpreter::entry_for_kind(Interpreter::zerolocals) != NULL, "should already be generated");
+  __ b(RuntimeAddress(Interpreter::entry_for_kind(Interpreter::zerolocals)));
+  return entry_point;
+}
+
 // Abstract method entry
 // Attempt to execute abstract method. Throw exception
 address InterpreterGenerator::generate_abstract_entry(void) {
@@ -261,43 +272,6 @@
   return entry_point;
 }
 
-
-// Empty method, generate a very fast return.
-
-address InterpreterGenerator::generate_empty_entry(void) {
-  // rmethod: Method*
-  // r13: sender sp must set sp to this value on return
-
-  if (!UseFastEmptyMethods) {
-    return NULL;
-  }
-
-  address entry_point = __ pc();
-
-  // If we need a safepoint check, generate full interpreter entry.
-  Label slow_path;
-  {
-    unsigned long offset;
-    assert(SafepointSynchronize::_not_synchronized == 0,
-           "SafepointSynchronize::_not_synchronized");
-    __ adrp(rscratch2, SafepointSynchronize::address_of_state(), offset);
-    __ ldrw(rscratch2, Address(rscratch2, offset));
-    __ cbnz(rscratch2, slow_path);
-  }
-
-  // do nothing for empty methods (do not even increment invocation counter)
-  // Code: _return
-  //  _return
-  //  return w/o popping parameters
-  __ mov(sp, r13); // Restore caller's SP
-  __ br(lr);
-
-  __ bind(slow_path);
-  (void) generate_normal_entry(false);
-  return entry_point;
-
-}
-
 void Deoptimization::unwind_callee_save_values(frame* f, vframeArray* vframe_array) {
 
   // This code is sort of the equivalent of C2IAdapter::setup_stack_frame back in
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp
--- a/src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/templateInterpreter_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -660,12 +660,6 @@
 //
 //
 //
-// Call an accessor method (assuming it is resolved, otherwise drop
-// into vanilla (slow path) entry
-address InterpreterGenerator::generate_accessor_entry(void) {
-  return NULL;
-}
-
 // Method entry for java.lang.ref.Reference.get.
 address InterpreterGenerator::generate_Reference_get_entry(void) {
   return NULL;
@@ -1461,50 +1455,6 @@
 //      ...
 //      [ parameter 1      ] <--- rlocals
 
-address AbstractInterpreterGenerator::generate_method_entry(
-                                        AbstractInterpreter::MethodKind kind) {
-  // determine code generation flags
-  bool synchronized = false;
-  address entry_point = NULL;
-
-  switch (kind) {
-  case Interpreter::zerolocals             : break;
-  case Interpreter::zerolocals_synchronized: synchronized = true; break;
-  case Interpreter::native                 : entry_point = ((InterpreterGenerator*) this)->generate_native_entry(false); break;
-  case Interpreter::native_synchronized    : entry_point = ((InterpreterGenerator*) this)->generate_native_entry(true);  break;
-  case Interpreter::empty                  : entry_point = ((InterpreterGenerator*) this)->generate_empty_entry();       break;
-  case Interpreter::accessor               : entry_point = ((InterpreterGenerator*) this)->generate_accessor_entry();    break;
-  case Interpreter::abstract               : entry_point = ((InterpreterGenerator*) this)->generate_abstract_entry();    break;
-
-  case Interpreter::java_lang_math_sin     : // fall thru
-  case Interpreter::java_lang_math_cos     : // fall thru
-  case Interpreter::java_lang_math_tan     : // fall thru
-  case Interpreter::java_lang_math_abs     : // fall thru
-  case Interpreter::java_lang_math_log     : // fall thru
-  case Interpreter::java_lang_math_log10   : // fall thru
-  case Interpreter::java_lang_math_sqrt    : // fall thru
-  case Interpreter::java_lang_math_pow     : // fall thru
-  case Interpreter::java_lang_math_exp     : entry_point = ((InterpreterGenerator*) this)->generate_math_entry(kind);    break;
-  case Interpreter::java_lang_ref_reference_get
-                                           : entry_point = ((InterpreterGenerator*)this)->generate_Reference_get_entry(); break;
-  case Interpreter::java_util_zip_CRC32_update
-                                           : entry_point = ((InterpreterGenerator*)this)->generate_CRC32_update_entry();  break;
-  case Interpreter::java_util_zip_CRC32_updateBytes
-                                           : // fall thru
-  case Interpreter::java_util_zip_CRC32_updateByteBuffer
-                                           : entry_point = ((InterpreterGenerator*)this)->generate_CRC32_updateBytes_entry(kind); break;
-  default                                  : ShouldNotReachHere(); break;
-  }
-
-  if (entry_point) {
-    return entry_point;
-  }
-
-  return ((InterpreterGenerator*) this)->
-                                generate_normal_entry(synchronized);
-}
-
-
 // These should never be compiled since the interpreter will prefer
 // the compiled version to the intrinsic version.
 bool AbstractInterpreter::can_be_compiled(methodHandle m) {
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/templateTable_aarch64.cpp
--- a/src/cpu/aarch64/vm/templateTable_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/templateTable_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -1761,11 +1761,9 @@
     // r2: scratch
     __ cbz(r0, dispatch);     // test result -- no osr if null
     // nmethod may have been invalidated (VM may block upon call_VM return)
-    __ ldrw(r2, Address(r0, nmethod::entry_bci_offset()));
-    // InvalidOSREntryBci == -2 which overflows cmpw as unsigned
-    // use cmnw against -InvalidOSREntryBci which does the same thing
-    __ cmn(r2, -InvalidOSREntryBci);
-    __ br(Assembler::EQ, dispatch);
+    __ ldrb(r2, Address(r0, nmethod::state_offset()));
+    __ cmpw(r2, nmethod::in_use);
+    __ br(Assembler::NE, dispatch);
 
     // We have the address of an on stack replacement routine in r0
     // We need to prepare to execute the OSR method. First we must
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/cpu/aarch64/vm/vm_version_aarch64.cpp
--- a/src/cpu/aarch64/vm/vm_version_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/cpu/aarch64/vm/vm_version_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -136,6 +136,17 @@
   if (FLAG_IS_DEFAULT(UseCRC32Intrinsics)) {
     UseCRC32Intrinsics = true;
   }
+
+  if (UseSHA) {
+    warning("SHA instructions are not implemented");
+    FLAG_SET_DEFAULT(UseSHA, false);
+  }
+  if (UseSHA1Intrinsics || UseSHA256Intrinsics || UseSHA512Intrinsics) {
+    warning("SHA intrinsics are not implemented");
+    FLAG_SET_DEFAULT(UseSHA1Intrinsics, false);
+    FLAG_SET_DEFAULT(UseSHA256Intrinsics, false);
+    FLAG_SET_DEFAULT(UseSHA512Intrinsics, false);
+  }
 }
 
 void VM_Version::initialize() {
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/os/linux/vm/os_linux.cpp
--- a/src/os/linux/vm/os_linux.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/os/linux/vm/os_linux.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -1400,7 +1400,7 @@
 
 #ifndef SYS_clock_getres
 #if defined(IA32) || defined(AMD64)
-#define SYS_clock_getres IA32_ONLY(266) AMD64_ONLY(229) AARCH64_ONLY(114)
+  #define SYS_clock_getres IA32_ONLY(266) AMD64_ONLY(229) AARCH64_ONLY(114)
 #define sys_clock_getres(x,y) ::syscall(SYS_clock_getres, x, y)
 #else
 #warning "SYS_clock_getres not defined for this platform, disabling fast_thread_cpu_time"
@@ -1980,11 +1980,11 @@
     static Elf32_Half running_arch_code=EM_MIPS;
   #elif (defined M68K)
     static Elf32_Half running_arch_code=EM_68K;
-  #elif (defined AARCH64)
-    static Elf32_Half running_arch_code=EM_AARCH64;
+#elif (defined AARCH64)
+    static Elf32_Half running_arch_code=EM_AARCH64;
   #else
   #error Method os::dll_load requires that one of following is defined:\
-       IA32, AMD64, IA64, __sparc, __powerpc__, ARM, S390, ALPHA, MIPS, MIPSEL, PARISC, M68K, AARCH64
+      IA32, AMD64, IA64, __sparc, __powerpc__, ARM, S390, ALPHA, MIPS, MIPSEL, PARISC, M68K, AARCH64
   #endif
 
   // Identify compatability class for VM's architecture and library's architecture
@@ -5876,11 +5876,11 @@
 extern char** environ;
 
 #ifndef __NR_fork
-#define __NR_fork IA32_ONLY(2) IA64_ONLY(not defined) AMD64_ONLY(57) AARCH64_ONLY(1079)
+  #define __NR_fork IA32_ONLY(2) IA64_ONLY(not defined) AMD64_ONLY(57) AARCH64_ONLY(1079)
 #endif
 
 #ifndef __NR_execve
-#define __NR_execve IA32_ONLY(11) IA64_ONLY(1033) AMD64_ONLY(59) AARCH64_ONLY(221)
+  #define __NR_execve IA32_ONLY(11) IA64_ONLY(1033) AMD64_ONLY(59) AARCH64_ONLY(221)
 #endif
 
 // Run the specified command in a separate process. Return its exit value,
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp
--- a/src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/os_cpu/linux_aarch64/vm/os_linux_aarch64.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -690,6 +690,10 @@
 }
 #endif
 
+int os::extra_bang_size_in_bytes() {
+  return 0;
+}
+
 extern "C" {
   int SpinPause() {
   }
diff -r 2f147fc9cff1 -r 0d8213bd2a8f src/share/vm/runtime/arguments.cpp
--- a/src/share/vm/runtime/arguments.cpp	Wed Oct 15 14:19:15 2014 +0100
+++ b/src/share/vm/runtime/arguments.cpp	Thu Oct 16 10:00:07 2014 +0100
@@ -1148,7 +1148,7 @@
     if (FLAG_IS_DEFAULT(ReservedCodeCacheSize)) {
       FLAG_SET_ERGO(uintx, ReservedCodeCacheSize, ReservedCodeCacheSize * 5);
       // The maximum B/BL offset range on AArch64 is 128MB
-      AARCH64_ONLY(FLAG_SET_DEFAULT(ReservedCodeCacheSize, MIN2(ReservedCodeCacheSize, 128*M)));
+      AARCH64_ONLY(FLAG_SET_ERGO(uintx, ReservedCodeCacheSize, MIN2(ReservedCodeCacheSize, 128*M)));
     }
     // Enable SegmentedCodeCache if TieredCompilation is enabled and ReservedCodeCacheSize >= 240M
     if (FLAG_IS_DEFAULT(SegmentedCodeCache) && ReservedCodeCacheSize >= 240*M) {
--- CUT HERE ---