Most of the JDK tests use the JTReg test framework. Make sure that your configuration knows where to find your installation of JTReg. If this is not picked up automatically, use the --with-jtreg=<path to jtreg home> option to point to the JTReg framework. Note that this option should point to the JTReg home, i.e. the top directory containing lib/jtreg.jar etc.
The Adoption Group provides recent builds of jtreg here. Download the latest .tar.gz file, unpack it, and point --with-jtreg to the jtreg directory that you just unpacked.
Building the HotSpot Gtest suite requires the source code of the Google Test framework. The top directory, which contains both the googletest and googlemock directories, should be specified via --with-gtest. The supported version of Google Test is 1.8.1, whose source code can be obtained by downloading and unpacking the source bundle from here.
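For example (a sketch; the download URL and unpack paths are assumptions, substitute wherever you keep your jtreg installation and the Google Test sources):

```bash
# Fetch Google Test 1.8.1 sources (tag name assumed; verify against the googletest repo).
wget https://github.com/google/googletest/archive/release-1.8.1.tar.gz
tar xzf release-1.8.1.tar.gz

# Point configure at the jtreg home and the Google Test top directory.
bash configure \
    --with-jtreg=/opt/jtreg \
    --with-gtest=$PWD/googletest-release-1.8.1
```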
diff --git a/doc/building.md b/doc/building.md
index 2f9a0026e2892..835a7d99413a4 100644
--- a/doc/building.md
+++ b/doc/building.md
@@ -848,7 +848,7 @@ containing `lib/jtreg.jar` etc.
The [Adoption Group](https://wiki.openjdk.java.net/display/Adoption) provides
recent builds of jtreg [here](
-https://ci.adoptopenjdk.net/view/Dependencies/job/jtreg/lastSuccessfulBuild/artifact).
+https://ci.adoptopenjdk.net/view/Dependencies/job/dependency_pipeline/lastSuccessfulBuild/artifact/jtreg/).
Download the latest `.tar.gz` file, unpack it, and point `--with-jtreg` to the
`jtreg` directory that you just unpacked.
diff --git a/doc/testing.html b/doc/testing.html
index 49227421dcf68..effc6f0c446a9 100644
--- a/doc/testing.html
+++ b/doc/testing.html
@@ -27,6 +27,7 @@
All functionality is available using the test make target. In this use case, the test or tests to be executed are controlled using the TEST variable. To speed up subsequent test runs with no source code changes, test-only can be used instead, which does not depend on the source and test image build.
For some common top-level tests, direct make targets have been generated. This includes all JTReg test groups, the hotspot gtest, and custom tests (if present). This means that make test-tier1 is equivalent to make test TEST="tier1", but the latter is more tab-completion friendly. For more complex test runs, the test TEST="x" solution needs to be used.
The test specifications given in TEST are parsed into fully qualified test descriptors, which clearly and unambiguously show which tests will be run. As an example, :tier1 will expand to jtreg:$(TOPDIR)/test/hotspot/jtreg:tier1 jtreg:$(TOPDIR)/test/jdk:tier1 jtreg:$(TOPDIR)/test/langtools:tier1 jtreg:$(TOPDIR)/test/nashorn:tier1 jtreg:$(TOPDIR)/test/jaxp:tier1. You can always submit a list of fully qualified test descriptors in the TEST variable if you want to shortcut the parser.
+Common Test Groups
+Ideally, all tests are run for every change, but this may not be practical due to the limited testing resources, the scope of the change, etc.
+The source tree currently defines a few common test groups in the relevant TEST.groups files. There are test groups that cover a specific component, for example hotspot_gc. It is a good idea to look into TEST.groups files to get a sense of what tests are relevant to a particular JDK component.
+Component-specific tests may miss some unintended consequences of a change, so other tests should also be run. Again, it might be impractical to run all tests, and therefore tiered test groups exist. Tiered test groups are not component-specific, but rather cover the significant parts of the entire JDK.
+Multiple tiers allow balancing test coverage and testing costs. Lower test tiers are supposed to contain the simpler, quicker and more stable tests. Higher tiers are supposed to contain progressively more thorough, slower, and sometimes less stable tests, or the tests that require special configuration.
+Contributors are expected to run the tests for the areas that are changed, and the first N tiers they can afford to run, but at least tier1.
+A brief description of the tiered test groups:
+tier1: This is the lowest test tier. Multiple developers run these tests every day. Because of the widespread use, the tests in tier1 are carefully selected and optimized to run fast, and to run in the most stable manner. Test failures in tier1 are usually followed up on quickly, either with fixes, or by adding relevant tests to the problem list. GitHub Actions workflows, if enabled, run tier1 tests.
+tier2: This test group covers even more ground. It contains, among other things, tests that either run for too long to be in tier1, or may require special configuration, or tests that are less stable, or that cover a broader range of non-core JVM and JDK features/components (for example, XML).
+tier3: This test group includes more stressful tests, tests for corner cases not covered by previous tiers, plus tests that require GUIs. As such, this suite should either be run with low concurrency (TEST_JOBS=1), or without headful tests (JTREG_KEYWORDS=\!headful), or both.
+tier4: This test group includes every other test not covered by previous tiers. It includes, for example, the vmTestbase suites for Hotspot, which run for many hours even on large machines. It also runs GUI tests, so the same TEST_JOBS and JTREG_KEYWORDS caveats apply.
JTReg
JTReg tests can be selected either by picking a JTReg test group, or a selection of files or directories containing JTReg tests.
JTReg test groups can be specified either without a test root, e.g. :tier1 (or tier1, the initial colon is optional), or with, e.g. hotspot:tier1, test/jdk:jdk_util or $(TOPDIR)/test/hotspot/jtreg:hotspot_all. The test root can be specified either as an absolute path, or a path relative to the JDK top directory, or the test directory. For simplicity, the hotspot JTReg test root, which really is hotspot/jtreg, can be abbreviated as just hotspot.
@@ -179,7 +193,9 @@
LAUNCHER_OPTIONS
AOT_MODULES
Generate AOT modules before testing for the specified module, or set of modules. If multiple modules are specified, they should be separated by space (or, to help avoid quoting issues, the special value %20).
RETRY_COUNT
-Retry failed tests up to a set number of times. Defaults to 0.
+Retry failed tests up to a set number of times, until they pass. This allows tests with intermittent failures to pass. Defaults to 0.
+REPEAT_COUNT
+Repeat the tests up to a set number of times, stopping at the first failure. This helps to reproduce intermittent test failures. Defaults to 0.
Gtest keywords
REPEAT
The number of times to repeat the tests (--gtest_repeat).
diff --git a/doc/testing.md b/doc/testing.md
index 0d09491be6e62..241bc08b40b9e 100644
--- a/doc/testing.md
+++ b/doc/testing.md
@@ -64,6 +64,52 @@ jtreg:$(TOPDIR)/test/nashorn:tier1 jtreg:$(TOPDIR)/test/jaxp:tier1`. You can
always submit a list of fully qualified test descriptors in the `TEST` variable
if you want to shortcut the parser.
+### Common Test Groups
+
+Ideally, all tests are run for every change, but this may not be practical due to the limited
+testing resources, the scope of the change, etc.
+
+The source tree currently defines a few common test groups in the relevant `TEST.groups`
+files. There are test groups that cover a specific component, for example `hotspot_gc`.
+It is a good idea to look into `TEST.groups` files to get a sense of what tests are relevant
+to a particular JDK component.
+
+Component-specific tests may miss some unintended consequences of a change, so other
+tests should also be run. Again, it might be impractical to run all tests, and therefore
+_tiered_ test groups exist. Tiered test groups are not component-specific, but rather cover
+the significant parts of the entire JDK.
+
+Multiple tiers allow balancing test coverage and testing costs. Lower test tiers are supposed to
+contain the simpler, quicker and more stable tests. Higher tiers are supposed to contain
+progressively more thorough, slower, and sometimes less stable tests, or the tests that require
+special configuration.
+
+Contributors are expected to run the tests for the areas that are changed, and the first N tiers
+they can afford to run, but at least tier1.
+
+A brief description of the tiered test groups:
+
+- `tier1`: This is the lowest test tier. Multiple developers run these tests every day.
+Because of the widespread use, the tests in `tier1` are carefully selected and optimized to run
+fast, and to run in the most stable manner. Test failures in `tier1` are usually followed up
+on quickly, either with fixes, or by adding relevant tests to the problem list. GitHub Actions workflows,
+if enabled, run `tier1` tests.
+
+- `tier2`: This test group covers even more ground. It contains, among other things,
+tests that either run for too long to be in `tier1`, or may require special configuration,
+or tests that are less stable, or that cover a broader range of non-core JVM and JDK features/components
+(for example, XML).
+
+- `tier3`: This test group includes more stressful tests, tests for corner cases
+not covered by previous tiers, plus tests that require GUIs. As such, this suite
+should either be run with low concurrency (`TEST_JOBS=1`), or without headful tests
+(`JTREG_KEYWORDS=\!headful`), or both.
+
+- `tier4`: This test group includes every other test not covered by previous tiers. It includes,
+for example, `vmTestbase` suites for Hotspot, which run for many hours even on large
+machines. It also runs GUI tests, so the same `TEST_JOBS` and `JTREG_KEYWORDS` caveats
+apply.
+
### JTReg
JTReg tests can be selected either by picking a JTReg test group, or a selection
@@ -373,7 +419,15 @@ modules. If multiple modules are specified, they should be separated by space
#### RETRY_COUNT
-Retry failed tests up to a set number of times. Defaults to 0.
+Retry failed tests up to a set number of times, until they pass.
+This allows tests with intermittent failures to pass.
+Defaults to 0.
+
+#### REPEAT_COUNT
+
+Repeat the tests up to a set number of times, stopping at the first failure.
+This helps to reproduce intermittent test failures.
+Defaults to 0.
### Gtest keywords
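To illustrate how the tier groups and the new keyword combine with the existing selection syntax, a few example invocations (the test path is a hypothetical placeholder; the `JTREG="..."` keyword syntax is the one this document already uses):

```bash
# Run the lowest tier; equivalent to: make test TEST="tier1"
make test-tier1

# Reproduce an intermittent failure: repeat up to 50 times, stop at the first failure.
make test TEST="jtreg:test/jdk/java/util/SomeFlakyTest.java" JTREG="REPEAT_COUNT=50"

# Tolerate intermittent failures: retry each failing test up to 3 times until it passes.
make test TEST="hotspot:tier1" JTREG="RETRY_COUNT=3"
```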
diff --git a/make/Main.gmk b/make/Main.gmk
index cad3ecb3203b7..b1df835746bcd 100644
--- a/make/Main.gmk
+++ b/make/Main.gmk
@@ -324,7 +324,7 @@ $(eval $(call SetupTarget, vscode-project-ccls, \
# aren't built until after libjava and libjvm are available to link to.
$(eval $(call SetupTarget, demos-jdk, \
MAKEFILE := CompileDemos, \
- DEPS := java.base-libs exploded-image, \
+ DEPS := java.base-libs exploded-image buildtools-jdk, \
))
$(eval $(call SetupTarget, test-image-demos-jdk, \
@@ -383,12 +383,12 @@ bootcycle-images:
$(eval $(call SetupTarget, zip-security, \
MAKEFILE := ZipSecurity, \
- DEPS := java.base-java java.security.jgss-java java.security.jgss-libs, \
+ DEPS := buildtools-jdk java.base-java java.security.jgss-java java.security.jgss-libs, \
))
$(eval $(call SetupTarget, zip-source, \
MAKEFILE := ZipSource, \
- DEPS := gensrc, \
+ DEPS := buildtools-jdk gensrc, \
))
$(eval $(call SetupTarget, jrtfs-jar, \
@@ -508,13 +508,13 @@ $(eval $(call SetupTarget, docs-jdk-index, \
$(eval $(call SetupTarget, docs-zip, \
MAKEFILE := Docs, \
TARGET := docs-zip, \
- DEPS := docs-jdk, \
+ DEPS := docs-jdk buildtools-jdk, \
))
$(eval $(call SetupTarget, docs-specs-zip, \
MAKEFILE := Docs, \
TARGET := docs-specs-zip, \
- DEPS := docs-jdk-specs, \
+ DEPS := docs-jdk-specs buildtools-jdk, \
))
$(eval $(call SetupTarget, update-build-docs, \
diff --git a/make/RunTests.gmk b/make/RunTests.gmk
index f1da577de6a4b..72dc41c237433 100644
--- a/make/RunTests.gmk
+++ b/make/RunTests.gmk
@@ -200,7 +200,7 @@ $(eval $(call SetTestOpt,FAILURE_HANDLER_TIMEOUT,JTREG))
$(eval $(call ParseKeywordVariable, JTREG, \
SINGLE_KEYWORDS := JOBS TIMEOUT_FACTOR FAILURE_HANDLER_TIMEOUT \
TEST_MODE ASSERT VERBOSE RETAIN MAX_MEM RUN_PROBLEM_LISTS \
- RETRY_COUNT MAX_OUTPUT, \
+ RETRY_COUNT REPEAT_COUNT MAX_OUTPUT, \
STRING_KEYWORDS := OPTIONS JAVA_OPTIONS VM_OPTIONS KEYWORDS \
EXTRA_PROBLEM_LISTS LAUNCHER_OPTIONS, \
))
@@ -744,6 +744,15 @@ define SetupRunJtregTestBody
JTREG_RETAIN ?= fail,error
JTREG_RUN_PROBLEM_LISTS ?= false
JTREG_RETRY_COUNT ?= 0
+ JTREG_REPEAT_COUNT ?= 0
+
+ ifneq ($$(JTREG_RETRY_COUNT), 0)
+ ifneq ($$(JTREG_REPEAT_COUNT), 0)
+ $$(info Error: Cannot use both JTREG_RETRY_COUNT and JTREG_REPEAT_COUNT together.)
+ $$(info Please choose one or the other.)
+ $$(error Cannot continue)
+ endif
+ endif
ifneq ($$(JTREG_LAUNCHER_OPTIONS), )
$1_JTREG_LAUNCHER_OPTIONS += $$(JTREG_LAUNCHER_OPTIONS)
@@ -866,6 +875,18 @@ define SetupRunJtregTestBody
done
endif
+ ifneq ($$(JTREG_REPEAT_COUNT), 0)
+ $1_COMMAND_LINE := \
+ for i in {1..$$(JTREG_REPEAT_COUNT)}; do \
+ $$(PRINTF) "\nRepeating Jtreg run: $$$$i out of $$(JTREG_REPEAT_COUNT)\n"; \
+ $$($1_COMMAND_LINE); \
+ if [ "`$$(CAT) $$($1_EXITCODE)`" != "0" ]; then \
+ $$(PRINTF) "\nFailures detected, no more repeats.\n"; \
+ break; \
+ fi; \
+ done
+ endif
+
run-test-$1: pre-run-test clean-workdir-$1
$$(call LogWarn)
$$(call LogWarn, Running test '$$($1_TEST)')
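Stripped of the make-level escaping, the added `REPEAT_COUNT` logic is a plain shell loop. A minimal standalone sketch (the test runner and exit-code file are hypothetical stand-ins):

```bash
#!/bin/bash
# Sketch of the repeat loop above; ./run_jtreg.sh and exitcode.txt are placeholders.
REPEAT_COUNT=5
for i in $(seq 1 "$REPEAT_COUNT"); do     # the makefile uses {1..N}, expanded by make
  printf "\nRepeating JTReg run: %s out of %s\n" "$i" "$REPEAT_COUNT"
  ./run_jtreg.sh                          # writes its exit code to exitcode.txt
  if [ "$(cat exitcode.txt)" != "0" ]; then
    printf "\nFailures detected, no more repeats.\n"
    break
  fi
done
```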
diff --git a/make/ToolsJdk.gmk b/make/ToolsJdk.gmk
index 99e8bd9727c61..50ac056eda3ed 100644
--- a/make/ToolsJdk.gmk
+++ b/make/ToolsJdk.gmk
@@ -80,6 +80,8 @@ TOOL_GENERATECACERTS = $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_class
TOOL_GENERATEEMOJIDATA = $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_classes \
build.tools.generateemojidata.GenerateEmojiData
+TOOL_MAKEZIPREPRODUCIBLE = $(JAVA_SMALL) -cp $(BUILDTOOLS_OUTPUTDIR)/jdk_tools_classes \
+ build.tools.makezipreproducible.MakeZipReproducible
# TODO: There are references to the jdwpgen.jar in jdk/make/netbeans/jdwpgen/build.xml
# and nbproject/project.properties in the same dir. Needs to be looked at.
diff --git a/make/autoconf/basic.m4 b/make/autoconf/basic.m4
index 60b4097cba90e..e7fdab5305071 100644
--- a/make/autoconf/basic.m4
+++ b/make/autoconf/basic.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -94,11 +94,6 @@ AC_DEFUN_ONCE([BASIC_SETUP_PATHS],
# Locate the directory of this script.
AUTOCONF_DIR=$TOPDIR/make/autoconf
-
- # Setup username (for use in adhoc version strings etc)
- # Outer [ ] to quote m4.
- [ USERNAME=`$ECHO "$USER" | $TR -d -c '[a-z][A-Z][0-9]'` ]
- AC_SUBST(USERNAME)
])
###############################################################################
diff --git a/make/autoconf/build-aux/config.guess b/make/autoconf/build-aux/config.guess
index 3e10c6e91103e..d589529f35aad 100644
--- a/make/autoconf/build-aux/config.guess
+++ b/make/autoconf/build-aux/config.guess
@@ -102,6 +102,15 @@ if [ "x$OUT" = x ]; then
fi
fi
+# Test and fix LoongArch64.
+if [ "x$OUT" = x ]; then
+ if [ `uname -s` = Linux ]; then
+ if [ `uname -m` = loongarch64 ]; then
+ OUT=loongarch64-unknown-linux-gnu
+ fi
+ fi
+fi
+
# Test and fix cpu on macos-aarch64, uname -p reports arm, buildsys expects aarch64
echo $OUT | grep arm-apple-darwin > /dev/null 2> /dev/null
if test $? != 0; then
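On a LoongArch64 Linux host, `uname -s` and `uname -m` report `Linux` and `loongarch64`, so the new branch can be sanity-checked by running the script directly:

```bash
# On Linux/LoongArch64 this should now print: loongarch64-unknown-linux-gnu
bash make/autoconf/build-aux/config.guess
```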
diff --git a/make/autoconf/flags-other.m4 b/make/autoconf/flags-other.m4
index 14bb3f5b52fc6..0a0429c354bba 100644
--- a/make/autoconf/flags-other.m4
+++ b/make/autoconf/flags-other.m4
@@ -89,11 +89,11 @@ AC_DEFUN([FLAGS_SETUP_ASFLAGS],
# Fix linker warning.
# Code taken from make/autoconf/flags-cflags.m4 and adapted.
- JVM_BASIC_ASFLAGS+="-DMAC_OS_X_VERSION_MIN_REQUIRED=$MACOSX_VERSION_MIN_NODOTS \
+ JVM_BASIC_ASFLAGS+=" -DMAC_OS_X_VERSION_MIN_REQUIRED=$MACOSX_VERSION_MIN_NODOTS \
-mmacosx-version-min=$MACOSX_VERSION_MIN"
if test -n "$MACOSX_VERSION_MAX"; then
- JVM_BASIC_ASFLAGS+="$OS_CFLAGS \
+ JVM_BASIC_ASFLAGS+=" $OS_CFLAGS \
-DMAC_OS_X_VERSION_MAX_ALLOWED=$MACOSX_VERSION_MAX_NODOTS"
fi
fi
diff --git a/make/autoconf/help.m4 b/make/autoconf/help.m4
index 7de6398bbd6e0..b28218d66aa2e 100644
--- a/make/autoconf/help.m4
+++ b/make/autoconf/help.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -42,21 +42,21 @@ AC_DEFUN([HELP_MSG_MISSING_DEPENDENCY],
PKGHANDLER_COMMAND=
case $PKGHANDLER in
- apt-get)
+ *apt-get)
apt_help $MISSING_DEPENDENCY ;;
- yum)
+ *yum)
yum_help $MISSING_DEPENDENCY ;;
- brew)
+ *brew)
brew_help $MISSING_DEPENDENCY ;;
- port)
+ *port)
port_help $MISSING_DEPENDENCY ;;
- pkgutil)
+ *pkgutil)
pkgutil_help $MISSING_DEPENDENCY ;;
- pkgadd)
+ *pkgadd)
pkgadd_help $MISSING_DEPENDENCY ;;
- zypper)
+ *zypper)
zypper_help $MISSING_DEPENDENCY ;;
- pacman)
+ *pacman)
pacman_help $MISSING_DEPENDENCY ;;
esac
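The leading `*` matters because a `case` pattern must match the whole string: if `$PKGHANDLER` holds an absolute path (as it presumably does when located via the build's tool lookup), a bare `apt-get` pattern never matches. A small illustration:

```bash
# Bare patterns fail on full paths; '*apt-get' matches both forms.
PKGHANDLER=/usr/bin/apt-get
case $PKGHANDLER in
  apt-get)  echo "bare name only" ;;      # not taken for a full path
  *apt-get) echo "full path matches" ;;   # taken
esac
```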
diff --git a/make/autoconf/jdk-options.m4 b/make/autoconf/jdk-options.m4
index 299f76bd1e63c..4bd35d609c077 100644
--- a/make/autoconf/jdk-options.m4
+++ b/make/autoconf/jdk-options.m4
@@ -169,6 +169,23 @@ AC_DEFUN_ONCE([JDKOPT_SETUP_JDK_OPTIONS],
fi
AC_SUBST(CACERTS_FILE)
+ # Choose cacerts source folder for user provided PEM files
+ AC_ARG_WITH(cacerts-src, [AS_HELP_STRING([--with-cacerts-src],
+ [specify alternative cacerts source folder containing certificates])])
+ CACERTS_SRC=""
+ AC_MSG_CHECKING([for cacerts source])
+ if test "x$with_cacerts_src" == x; then
+ AC_MSG_RESULT([default])
+ else
+ CACERTS_SRC=$with_cacerts_src
+ if test ! -d "$CACERTS_SRC"; then
+ AC_MSG_RESULT([fail])
+ AC_MSG_ERROR([Specified cacerts source folder "$CACERTS_SRC" does not exist])
+ fi
+ AC_MSG_RESULT([$CACERTS_SRC])
+ fi
+ AC_SUBST(CACERTS_SRC)
+
# Enable or disable unlimited crypto
UTIL_ARG_ENABLE(NAME: unlimited-crypto, DEFAULT: true, RESULT: UNLIMITED_CRYPTO,
DESC: [enable unlimited crypto policy])
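Usage of the new option is then straightforward (the certificate folder path is a placeholder):

```bash
# Build the cacerts keystore from a local folder of PEM certificates.
bash configure --with-cacerts-src=/path/to/my-pem-certs
```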
diff --git a/make/autoconf/jdk-version.m4 b/make/autoconf/jdk-version.m4
index 092e7a6f490a0..b75d239298071 100644
--- a/make/autoconf/jdk-version.m4
+++ b/make/autoconf/jdk-version.m4
@@ -69,6 +69,17 @@ AC_DEFUN_ONCE([JDKVER_SETUP_JDK_VERSION_NUMBERS],
AC_SUBST(JDK_RC_PLATFORM_NAME)
AC_SUBST(HOTSPOT_VM_DISTRO)
+ # Setup username (for use in adhoc version strings etc)
+ AC_ARG_WITH([build-user], [AS_HELP_STRING([--with-build-user],
+ [build username to use in version strings])])
+ if test "x$with_build_user" != x; then
+ USERNAME="$with_build_user"
+ else
+ # Outer [ ] to quote m4.
+ [ USERNAME=`$ECHO "$USER" | $TR -d -c '[a-z][A-Z][0-9]'` ]
+ fi
+ AC_SUBST(USERNAME)
+
# Set the JDK RC name
AC_ARG_WITH(jdk-rc-name, [AS_HELP_STRING([--with-jdk-rc-name],
[Set JDK RC name. This is used for FileDescription and ProductName properties
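With this, CI systems can pin the username embedded in adhoc version strings instead of inheriting whatever `$USER` happens to be:

```bash
# Fix the build username (the value is an arbitrary example).
bash configure --with-build-user=ci
```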
diff --git a/make/autoconf/jvm-features.m4 b/make/autoconf/jvm-features.m4
index a4d0bf62ec2c2..906a285787721 100644
--- a/make/autoconf/jvm-features.m4
+++ b/make/autoconf/jvm-features.m4
@@ -307,7 +307,8 @@ AC_DEFUN_ONCE([JVM_FEATURES_CHECK_SHENANDOAHGC],
JVM_FEATURES_CHECK_AVAILABILITY(shenandoahgc, [
AC_MSG_CHECKING([if platform is supported by Shenandoah])
if test "x$OPENJDK_TARGET_CPU_ARCH" = "xx86" || \
- test "x$OPENJDK_TARGET_CPU" = "xaarch64" ; then
+ test "x$OPENJDK_TARGET_CPU" = "xaarch64" || \
+ test "x$OPENJDK_TARGET_CPU" = "xppc64le"; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no, $OPENJDK_TARGET_CPU])
@@ -357,6 +358,13 @@ AC_DEFUN_ONCE([JVM_FEATURES_CHECK_ZGC],
AC_MSG_RESULT([no, $OPENJDK_TARGET_OS-$OPENJDK_TARGET_CPU])
AVAILABLE=false
fi
+ elif test "x$OPENJDK_TARGET_CPU" = "xppc64le"; then
+ if test "x$OPENJDK_TARGET_OS" = "xlinux"; then
+ AC_MSG_RESULT([yes])
+ else
+ AC_MSG_RESULT([no, $OPENJDK_TARGET_OS-$OPENJDK_TARGET_CPU])
+ AVAILABLE=false
+ fi
else
AC_MSG_RESULT([no, $OPENJDK_TARGET_OS-$OPENJDK_TARGET_CPU])
AVAILABLE=false
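With the availability checks extended, a linux-ppc64le build can request these collectors explicitly; a sketch, assuming the standard `--with-jvm-features` switch:

```bash
# On linux-ppc64le, Shenandoah and ZGC should now pass the availability checks.
bash configure --with-jvm-features=shenandoahgc,zgc
```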
diff --git a/make/autoconf/platform.m4 b/make/autoconf/platform.m4
index 2dd13d0d5e207..205d64f566d93 100644
--- a/make/autoconf/platform.m4
+++ b/make/autoconf/platform.m4
@@ -72,6 +72,12 @@ AC_DEFUN([PLATFORM_EXTRACT_VARS_FROM_CPU],
VAR_CPU_BITS=64
VAR_CPU_ENDIAN=little
;;
+ loongarch64)
+ VAR_CPU=loongarch64
+ VAR_CPU_ARCH=loongarch
+ VAR_CPU_BITS=64
+ VAR_CPU_ENDIAN=little
+ ;;
m68k)
VAR_CPU=m68k
VAR_CPU_ARCH=m68k
diff --git a/make/autoconf/spec.gmk.in b/make/autoconf/spec.gmk.in
index 9b1b512a34a59..9d105b37acffc 100644
--- a/make/autoconf/spec.gmk.in
+++ b/make/autoconf/spec.gmk.in
@@ -409,6 +409,8 @@ GTEST_FRAMEWORK_SRC := @GTEST_FRAMEWORK_SRC@
# Source file for cacerts
CACERTS_FILE=@CACERTS_FILE@
+# Source folder for user provided cacerts PEM files
+CACERTS_SRC=@CACERTS_SRC@
# Enable unlimited crypto policy
UNLIMITED_CRYPTO=@UNLIMITED_CRYPTO@
diff --git a/make/autoconf/toolchain.m4 b/make/autoconf/toolchain.m4
index 7889588809587..99c780532ee87 100644
--- a/make/autoconf/toolchain.m4
+++ b/make/autoconf/toolchain.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -221,6 +221,12 @@ AC_DEFUN_ONCE([TOOLCHAIN_DETERMINE_TOOLCHAIN_TYPE],
AC_ARG_WITH(toolchain-type, [AS_HELP_STRING([--with-toolchain-type],
[the toolchain type (or family) to use, use '--help' to show possible values @<:@platform dependent@:>@])])
+ # Linux x86_64 needs higher binutils after 8265783
+ # (this really is a dependency on as version, but we take ld as a check for a general binutils version)
+ if test "x$OPENJDK_TARGET_CPU" = "xx86_64"; then
+ TOOLCHAIN_MINIMUM_LD_VERSION_gcc="2.25"
+ fi
+
# Use indirect variable referencing
toolchain_var_name=VALID_TOOLCHAINS_$OPENJDK_BUILD_OS
VALID_TOOLCHAINS=${!toolchain_var_name}
@@ -228,7 +234,7 @@ AC_DEFUN_ONCE([TOOLCHAIN_DETERMINE_TOOLCHAIN_TYPE],
if test "x$OPENJDK_TARGET_OS" = xmacosx; then
if test -n "$XCODEBUILD"; then
# On Mac OS X, default toolchain to clang after Xcode 5
- XCODE_VERSION_OUTPUT=`"$XCODEBUILD" -version 2>&1 | $HEAD -n 1`
+ XCODE_VERSION_OUTPUT=`"$XCODEBUILD" -version | $HEAD -n 1`
$ECHO "$XCODE_VERSION_OUTPUT" | $GREP "Xcode " > /dev/null
if test $? -ne 0; then
AC_MSG_NOTICE([xcodebuild output: $XCODE_VERSION_OUTPUT])
@@ -677,9 +683,10 @@ AC_DEFUN_ONCE([TOOLCHAIN_DETECT_TOOLCHAIN_CORE],
TOOLCHAIN_PREPARE_FOR_LD_VERSION_COMPARISONS
if test "x$TOOLCHAIN_MINIMUM_LD_VERSION" != x; then
+ AC_MSG_NOTICE([comparing linker version to minimum version $TOOLCHAIN_MINIMUM_LD_VERSION])
TOOLCHAIN_CHECK_LINKER_VERSION(VERSION: $TOOLCHAIN_MINIMUM_LD_VERSION,
IF_OLDER_THAN: [
- AC_MSG_WARN([You are using a linker older than $TOOLCHAIN_MINIMUM_LD_VERSION. This is not a supported configuration.])
+ AC_MSG_ERROR([You are using a linker older than $TOOLCHAIN_MINIMUM_LD_VERSION. This is not a supported configuration.])
]
)
fi
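Since an old linker is now a hard configure error for gcc builds on linux-x64, it may save a failed configure run to check the installed binutils first:

```bash
# Should report GNU ld 2.25 or newer for gcc/linux-x64 builds after this change.
ld --version | head -n 1
```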
diff --git a/make/autoconf/toolchain_microsoft.m4 b/make/autoconf/toolchain_microsoft.m4
index 940f73a9d22c7..2600b431cfb79 100644
--- a/make/autoconf/toolchain_microsoft.m4
+++ b/make/autoconf/toolchain_microsoft.m4
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -25,7 +25,7 @@
################################################################################
# The order of these defines the priority by which we try to find them.
-VALID_VS_VERSIONS="2019 2017"
+VALID_VS_VERSIONS="2019 2017 2022"
VS_DESCRIPTION_2017="Microsoft Visual Studio 2017"
VS_VERSION_INTERNAL_2017=141
@@ -56,6 +56,21 @@ VS_SDK_PLATFORM_NAME_2019=
VS_SUPPORTED_2019=true
VS_TOOLSET_SUPPORTED_2019=true
+VS_DESCRIPTION_2022="Microsoft Visual Studio 2022"
+VS_VERSION_INTERNAL_2022=143
+VS_MSVCR_2022=vcruntime140.dll
+VS_VCRUNTIME_1_2022=vcruntime140_1.dll
+VS_MSVCP_2022=msvcp140.dll
+VS_ENVVAR_2022="VS170COMNTOOLS"
+VS_USE_UCRT_2022="true"
+VS_VS_INSTALLDIR_2022="Microsoft Visual Studio/2022"
+VS_EDITIONS_2022="BuildTools Community Professional Enterprise"
+VS_SDK_INSTALLDIR_2022=
+VS_VS_PLATFORM_NAME_2022="v143"
+VS_SDK_PLATFORM_NAME_2022=
+VS_SUPPORTED_2022=true
+VS_TOOLSET_SUPPORTED_2022=true
+
################################################################################
AC_DEFUN([TOOLCHAIN_CHECK_POSSIBLE_VISUAL_STUDIO_ROOT],
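Note that 2022 is appended last in `VALID_VS_VERSIONS`, so autodetection still prefers 2019; to build with the newly supported toolchain it presumably has to be requested explicitly:

```bash
# Explicitly select Visual Studio 2022 on Windows.
bash configure --with-toolchain-version=2022
```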
diff --git a/make/common/ZipArchive.gmk b/make/common/ZipArchive.gmk
index efa013f501ecf..592d1a60aa0f5 100644
--- a/make/common/ZipArchive.gmk
+++ b/make/common/ZipArchive.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2011, 2019, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -26,6 +26,9 @@
ifndef _ZIP_ARCHIVE_GMK
_ZIP_ARCHIVE_GMK := 1
+# Depends on build tools for MakeZipReproducible
+include ../ToolsJdk.gmk
+
ifeq (,$(_MAKEBASE_GMK))
$(error You must include MakeBase.gmk prior to including ZipArchive.gmk)
endif
@@ -51,6 +54,8 @@ endif
# FOLLOW_SYMLINKS - Set to explicitly follow symlinks. Affects performance of
# finding files.
# ZIP_OPTIONS extra options to pass to zip
+# REPRODUCIBLE override ENABLE_REPRODUCIBLE_BUILD (to make zip reproducible or not)
+
SetupZipArchive = $(NamedParamsMacroTemplate)
define SetupZipArchiveBody
@@ -124,6 +129,10 @@ define SetupZipArchiveBody
) \
)
+ ifeq ($$($1_REPRODUCIBLE), )
+ $1_REPRODUCIBLE := $$(ENABLE_REPRODUCIBLE_BUILD)
+ endif
+
# Use a slightly shorter name for logging, but with enough path to identify this zip.
$1_NAME:=$$(subst $$(OUTPUTDIR)/,,$$($1_ZIP))
@@ -134,6 +143,8 @@ define SetupZipArchiveBody
# dir is very small.
# If zip has nothing to do, it returns 12 and would fail the build. Check for 12
# and only fail if it's not.
+ # For reproducible builds set the zip access & modify times to SOURCE_DATE_EPOCH
+ # by using a ziptmp folder to generate final zip from using MakeZipReproducible.
$$($1_ZIP) : $$($1_ALL_SRCS) $$($1_EXTRA_DEPS)
$$(call LogWarn, Updating $$($1_NAME))
$$(call MakeTargetDir)
@@ -163,7 +174,18 @@ define SetupZipArchiveBody
$$($1_ZIP_EXCLUDES_$$s) \
|| test "$$$$?" = "12" \
))$$(NEWLINE) \
- ) true \
+ ) true
+ ifeq ($$($1_REPRODUCIBLE), true)
+ $$(call ExecuteWithLog, \
+ $$(SUPPORT_OUTPUTDIR)/makezipreproducible/$$(patsubst $$(OUTPUTDIR)/%,%, $$@), \
+ ($(RM) $$(SUPPORT_OUTPUTDIR)/ziptmp/$1/tmp.zip && \
+ $(MKDIR) -p $$(SUPPORT_OUTPUTDIR)/ziptmp/$1 && \
+ $(TOOL_MAKEZIPREPRODUCIBLE) -f $$(SUPPORT_OUTPUTDIR)/ziptmp/$1/tmp.zip \
+ -t $(SOURCE_DATE_EPOCH) $$@ && \
+ $(RM) $$@ && \
+ $(MV) $$(SUPPORT_OUTPUTDIR)/ziptmp/$1/tmp.zip $$@ \
+ ))
+ endif
$(TOUCH) $$@
# Add zip to target list
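The recipe above builds the zip as usual, then rewrites it into a timestamp-normalized copy and swaps it in. In plain shell terms (paths are placeholders and the tool's classpath is elided):

```bash
# Sketch of the post-processing step: normalize entry times to SOURCE_DATE_EPOCH.
rm -f ziptmp/tmp.zip && mkdir -p ziptmp
java build.tools.makezipreproducible.MakeZipReproducible \
    -f ziptmp/tmp.zip -t "$SOURCE_DATE_EPOCH" bundle.zip
rm bundle.zip && mv ziptmp/tmp.zip bundle.zip
```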
diff --git a/make/conf/javadoc.conf b/make/conf/javadoc.conf
index 6c92e40329afa..f13d882b96185 100644
--- a/make/conf/javadoc.conf
+++ b/make/conf/javadoc.conf
@@ -1,4 +1,4 @@
-# Copyright (c) 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2020, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,9 +23,9 @@
#
# URLs
-JAVADOC_BASE_URL=https://docs.oracle.com/pls/topic/lookup?ctx=javase$(VERSION_NUMBER)&id=homepage
+JAVADOC_BASE_URL=https://docs.oracle.com/pls/topic/lookup?ctx=javase$(VERSION_FEATURE)&id=homepage
BUG_SUBMIT_URL=https://bugreport.java.com/bugreport/
COPYRIGHT_URL=legal/copyright.html
-LICENSE_URL=https://www.oracle.com/java/javase/terms/license/java$(VERSION_NUMBER)speclicense.html
+LICENSE_URL=https://www.oracle.com/java/javase/terms/license/java$(VERSION_FEATURE)speclicense.html
REDISTRIBUTION_URL=https://www.oracle.com/technetwork/java/redist-137594.html
OTHER_JDK_VERSIONS_URL=https://docs.oracle.com/en/java/javase/index.html
diff --git a/make/conf/jib-profiles.js b/make/conf/jib-profiles.js
index 0140dee7d5cb6..b16475c2f0869 100644
--- a/make/conf/jib-profiles.js
+++ b/make/conf/jib-profiles.js
@@ -249,7 +249,7 @@ var getJibProfilesCommon = function (input, data) {
dependencies: ["boot_jdk", "gnumake", "jtreg", "jib", "autoconf", "jmh", "jcov"],
default_make_targets: ["product-bundles", "test-bundles", "static-libs-bundles"],
configure_args: concat("--enable-jtreg-failure-handler",
- "--with-exclude-translations=de,es,fr,it,ko,pt_BR,sv,ca,tr,cs,sk,ja_JP_A,ja_JP_HA,ja_JP_HI,ja_JP_I,zh_TW,zh_HK",
+ "--with-exclude-translations=es,fr,it,ko,pt_BR,sv,ca,tr,cs,sk,ja_JP_A,ja_JP_HA,ja_JP_HI,ja_JP_I,zh_TW,zh_HK",
"--disable-manpages",
"--disable-jvm-feature-shenandoahgc",
versionArgs(input, common))
diff --git a/make/conf/test-dependencies b/make/conf/test-dependencies
index 4619ab744471c..a95357a56da2c 100644
--- a/make/conf/test-dependencies
+++ b/make/conf/test-dependencies
@@ -30,14 +30,14 @@ JTREG_VERSION=6
JTREG_BUILD=1
GTEST_VERSION=1.8.1
-LINUX_X64_BOOT_JDK_FILENAME=openjdk-16_linux-x64_bin.tar.gz
-LINUX_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk16/7863447f0ab643c585b9bdebf67c69db/36/GPL/openjdk-16_linux-x64_bin.tar.gz
-LINUX_X64_BOOT_JDK_SHA256=e952958f16797ad7dc7cd8b724edd69ec7e0e0434537d80d6b5165193e33b931
+LINUX_X64_BOOT_JDK_FILENAME=openjdk-17.0.1_linux-x64_bin.tar.gz
+LINUX_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_linux-x64_bin.tar.gz
+LINUX_X64_BOOT_JDK_SHA256=1c0a73cbb863aad579b967316bf17673b8f98a9bb938602a140ba2e5c38f880a
-WINDOWS_X64_BOOT_JDK_FILENAME=openjdk-16_windows-x64_bin.zip
-WINDOWS_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk16/7863447f0ab643c585b9bdebf67c69db/36/GPL/openjdk-16_windows-x64_bin.zip
-WINDOWS_X64_BOOT_JDK_SHA256=a78bdeaad186297601edac6772d931224d7af6f682a43372e693c37020bd37d6
+WINDOWS_X64_BOOT_JDK_FILENAME=openjdk-17.0.1_windows-x64_bin.zip
+WINDOWS_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_windows-x64_bin.zip
+WINDOWS_X64_BOOT_JDK_SHA256=329900a6673b237b502bdcf77bc334da34bc91355c5fd2d457fc00f53fd71ef1
-MACOS_X64_BOOT_JDK_FILENAME=openjdk-16_osx-x64_bin.tar.gz
-MACOS_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk16/7863447f0ab643c585b9bdebf67c69db/36/GPL/openjdk-16_osx-x64_bin.tar.gz
-MACOS_X64_BOOT_JDK_SHA256=16f3e39a31e86f3f51b0b4035a37494a47ed3c4ead760eafc6afd7afdf2ad9f2
+MACOS_X64_BOOT_JDK_FILENAME=openjdk-17.0.1_macos-x64_bin.tar.gz
+MACOS_X64_BOOT_JDK_URL=https://download.java.net/java/GA/jdk17.0.1/2a2082e5a09d4267845be086888add4f/12/GPL/openjdk-17.0.1_macos-x64_bin.tar.gz
+MACOS_X64_BOOT_JDK_SHA256=6ccb35800e723cabe15af60e67099d1a07c111d2d3208aa75523614dde68bee1
diff --git a/make/conf/version-numbers.conf b/make/conf/version-numbers.conf
index 0db8f8f771d35..9b8a7b64155e4 100644
--- a/make/conf/version-numbers.conf
+++ b/make/conf/version-numbers.conf
@@ -28,15 +28,15 @@
DEFAULT_VERSION_FEATURE=17
DEFAULT_VERSION_INTERIM=0
-DEFAULT_VERSION_UPDATE=0
+DEFAULT_VERSION_UPDATE=3
DEFAULT_VERSION_PATCH=0
DEFAULT_VERSION_EXTRA1=0
DEFAULT_VERSION_EXTRA2=0
DEFAULT_VERSION_EXTRA3=0
-DEFAULT_VERSION_DATE=2021-09-14
+DEFAULT_VERSION_DATE=2022-04-19
DEFAULT_VERSION_CLASSFILE_MAJOR=61 # "`$EXPR $DEFAULT_VERSION_FEATURE + 44`"
DEFAULT_VERSION_CLASSFILE_MINOR=0
DEFAULT_VERSION_DOCS_API_SINCE=11
DEFAULT_ACCEPTABLE_BOOT_VERSIONS="16 17"
DEFAULT_JDK_SOURCE_TARGET_VERSION=17
-DEFAULT_PROMOTED_VERSION_PRE=ea
+DEFAULT_PROMOTED_VERSION_PRE=
diff --git a/make/data/cacerts/globalsignr2ca b/make/data/cacerts/globalsignr2ca
deleted file mode 100644
index 746d1fab98519..0000000000000
--- a/make/data/cacerts/globalsignr2ca
+++ /dev/null
@@ -1,29 +0,0 @@
-Owner: CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R2
-Issuer: CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R2
-Serial number: 400000000010f8626e60d
-Valid from: Fri Dec 15 08:00:00 GMT 2006 until: Wed Dec 15 08:00:00 GMT 2021
-Signature algorithm name: SHA1withRSA
-Subject Public Key Algorithm: 2048-bit RSA key
-Version: 3
------BEGIN CERTIFICATE-----
-MIIDujCCAqKgAwIBAgILBAAAAAABD4Ym5g0wDQYJKoZIhvcNAQEFBQAwTDEgMB4G
-A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjIxEzARBgNVBAoTCkdsb2JhbFNp
-Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDYxMjE1MDgwMDAwWhcNMjExMjE1
-MDgwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEG
-A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
-hvcNAQEBBQADggEPADCCAQoCggEBAKbPJA6+Lm8omUVCxKs+IVSbC9N/hHD6ErPL
-v4dfxn+G07IwXNb9rfF73OX4YJYJkhD10FPe+3t+c4isUoh7SqbKSaZeqKeMWhG8
-eoLrvozps6yWJQeXSpkqBy+0Hne/ig+1AnwblrjFuTosvNYSuetZfeLQBoZfXklq
-tTleiDTsvHgMCJiEbKjNS7SgfQx5TfC4LcshytVsW33hoCmEofnTlEnLJGKRILzd
-C9XZzPnqJworc5HGnRusyMvo4KD0L5CLTfuwNhv2GXqF4G3yYROIXJ/gkwpRl4pa
-zq+r1feqCapgvdzZX99yqWATXgAByUr6P6TqBwMhAo6CygPCm48CAwEAAaOBnDCB
-mTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUm+IH
-V2ccHsBqBt5ZtJot39wZhi4wNgYDVR0fBC8wLTAroCmgJ4YlaHR0cDovL2NybC5n
-bG9iYWxzaWduLm5ldC9yb290LXIyLmNybDAfBgNVHSMEGDAWgBSb4gdXZxwewGoG
-3lm0mi3f3BmGLjANBgkqhkiG9w0BAQUFAAOCAQEAmYFThxxol4aR7OBKuEQLq4Gs
-J0/WwbgcQ3izDJr86iw8bmEbTUsp9Z8FHSbBuOmDAGJFtqkIk7mpM0sYmsL4h4hO
-291xNBrBVNpGP+DTKqttVCL1OmLNIG+6KYnX3ZHu01yiPqFbQfXf5WRDLenVOavS
-ot+3i9DAgBkcRcAtjOj4LaR0VknFBbVPFd5uRHg5h6h+u/N5GJG79G+dwfCMNYxd
-AfvDbbnvRG15RjF+Cv6pgsH/76tuIMRQyV+dTZsXjAzlAcmgQWpzU/qlULRuJQ/7
-TBj0/VLZjmmx6BEP3ojY+x1J96relc8geMJgEtslQIxq/H5COEBkEveegeGTLg==
------END CERTIFICATE-----
diff --git a/make/data/cacerts/identrustdstx3 b/make/data/cacerts/identrustdstx3
deleted file mode 100644
index 87a0d0c4f60f0..0000000000000
--- a/make/data/cacerts/identrustdstx3
+++ /dev/null
@@ -1,27 +0,0 @@
-Owner: CN=DST Root CA X3, O=Digital Signature Trust Co.
-Issuer: CN=DST Root CA X3, O=Digital Signature Trust Co.
-Serial number: 44afb080d6a327ba893039862ef8406b
-Valid from: Sat Sep 30 21:12:19 GMT 2000 until: Thu Sep 30 14:01:15 GMT 2021
-Signature algorithm name: SHA1withRSA
-Subject Public Key Algorithm: 2048-bit RSA key
-Version: 3
------BEGIN CERTIFICATE-----
-MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/
-MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
-DkRTVCBSb290IENBIFgzMB4XDTAwMDkzMDIxMTIxOVoXDTIxMDkzMDE0MDExNVow
-PzEkMCIGA1UEChMbRGlnaXRhbCBTaWduYXR1cmUgVHJ1c3QgQ28uMRcwFQYDVQQD
-Ew5EU1QgUm9vdCBDQSBYMzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
-AN+v6ZdQCINXtMxiZfaQguzH0yxrMMpb7NnDfcdAwRgUi+DoM3ZJKuM/IUmTrE4O
-rz5Iy2Xu/NMhD2XSKtkyj4zl93ewEnu1lcCJo6m67XMuegwGMoOifooUMM0RoOEq
-OLl5CjH9UL2AZd+3UWODyOKIYepLYYHsUmu5ouJLGiifSKOeDNoJjj4XLh7dIN9b
-xiqKqy69cK3FCxolkHRyxXtqqzTWMIn/5WgTe1QLyNau7Fqckh49ZLOMxt+/yUFw
-7BZy1SbsOFU5Q9D8/RhcQPGX69Wam40dutolucbY38EVAjqr2m7xPi71XAicPNaD
-aeQQmxkqtilX4+U9m5/wAl0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNV
-HQ8BAf8EBAMCAQYwHQYDVR0OBBYEFMSnsaR7LHH62+FLkHX/xBVghYkQMA0GCSqG
-SIb3DQEBBQUAA4IBAQCjGiybFwBcqR7uKGY3Or+Dxz9LwwmglSBd49lZRNI+DT69
-ikugdB/OEIKcdBodfpga3csTS7MgROSR6cz8faXbauX+5v3gTt23ADq1cEmv8uXr
-AvHRAosZy5Q6XkjEGB5YGV8eAlrwDPGxrancWYaLbumR9YbK+rlmM6pZW87ipxZz
-R8srzJmwN0jP41ZL9c8PDHIyh8bwRLtTcm1D9SZImlJnt1ir/md2cXjbDaJWFBM5
-JDGFoqgCWjBH4d1QB7wCCZAA62RjYJsWvIjJEubSfZGL+T0yjWW06XyxV3bqxbYo
-Ob8VZRzI9neWagqNdwvYkQsEjgfbKbYK7p2CNTUQ
------END CERTIFICATE-----
diff --git a/make/data/currency/CurrencyData.properties b/make/data/currency/CurrencyData.properties
index a4ad6d613b85b..c4400bbd00f10 100644
--- a/make/data/currency/CurrencyData.properties
+++ b/make/data/currency/CurrencyData.properties
@@ -32,7 +32,7 @@ formatVersion=3
# Version of the currency code information in this class.
# It is a serial number that accompanies with each amendment.
-dataVersion=169
+dataVersion=170
# List of all valid ISO 4217 currency codes.
# To ensure compatibility, do not remove codes.
@@ -54,7 +54,7 @@ all=ADP020-AED784-AFA004-AFN971-ALL008-AMD051-ANG532-AOA973-ARS032-ATS040-AUD036
SBD090-SCR690-SDD736-SDG938-SEK752-SGD702-SHP654-SIT705-SKK703-SLL694-SOS706-\
SRD968-SRG740-SSP728-STD678-STN930-SVC222-SYP760-SZL748-THB764-TJS972-TMM795-TMT934-TND788-TOP776-\
TPE626-TRL792-TRY949-TTD780-TWD901-TZS834-UAH980-UGX800-USD840-USN997-USS998-UYI940-\
- UYU858-UZS860-VEB862-VEF937-VES928-VND704-VUV548-WST882-XAF950-XAG961-XAU959-XBA955-\
+ UYU858-UZS860-VEB862-VED926-VEF937-VES928-VND704-VUV548-WST882-XAF950-XAG961-XAU959-XBA955-\
XBB956-XBC957-XBD958-XCD951-XDR960-XFO000-XFU000-XOF952-XPD964-XPF953-\
XPT962-XSU994-XTS963-XUA965-XXX999-YER886-YUM891-ZAR710-ZMK894-ZMW967-ZWD716-ZWL932-\
ZWN942-ZWR935
diff --git a/make/data/jdwp/jdwp.spec b/make/data/jdwp/jdwp.spec
index b55a286aaa1ed..51fe13cd9ef7f 100644
--- a/make/data/jdwp/jdwp.spec
+++ b/make/data/jdwp/jdwp.spec
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1998, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1998, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -134,9 +134,9 @@ JDWP "Java(tm) Debug Wire Protocol"
"All event requests are cancelled. "
"All threads suspended by the thread-level "
-"resume command "
+"suspend command "
"or the VM-level "
-"resume command "
+"suspend command "
"are resumed as many times as necessary for them to run. "
"Garbage collection is re-enabled in all cases where it was "
"disabled "
diff --git a/make/data/tzdata/VERSION b/make/data/tzdata/VERSION
index 71632a7bb6131..3a40c9103336d 100644
--- a/make/data/tzdata/VERSION
+++ b/make/data/tzdata/VERSION
@@ -21,4 +21,4 @@
# or visit www.oracle.com if you need additional information or have any
# questions.
#
-tzdata2021a
+tzdata2021e
diff --git a/make/data/tzdata/africa b/make/data/tzdata/africa
index 5de2e5f4ab1b0..0f367713ea900 100644
--- a/make/data/tzdata/africa
+++ b/make/data/tzdata/africa
@@ -53,9 +53,6 @@
# Milne J. Civil time. Geogr J. 1899 Feb;13(2):173-94.
# https://www.jstor.org/stable/1774359
#
-# A reliable and entertaining source about time zones is
-# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997).
-#
# European-style abbreviations are commonly used along the Mediterranean.
# For sub-Saharan Africa abbreviations were less standardized.
# Previous editions of this database used WAT, CAT, SAT, and EAT
@@ -176,8 +173,9 @@ Zone Africa/Ndjamena 1:00:12 - LMT 1912 # N'Djamena
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Africa/Abidjan -0:16:08 - LMT 1912
0:00 - GMT
+Link Africa/Abidjan Africa/Accra # Ghana
Link Africa/Abidjan Africa/Bamako # Mali
-Link Africa/Abidjan Africa/Banjul # Gambia
+Link Africa/Abidjan Africa/Banjul # The Gambia
Link Africa/Abidjan Africa/Conakry # Guinea
Link Africa/Abidjan Africa/Dakar # Senegal
Link Africa/Abidjan Africa/Freetown # Sierra Leone
@@ -404,93 +402,8 @@ Zone Africa/Cairo 2:05:09 - LMT 1900 Oct
# Gabon
# See Africa/Lagos.
-# Gambia
-# See Africa/Abidjan.
-
+# The Gambia
# Ghana
-
-# From P Chan (2020-11-20):
-# Interpretation Amendment Ordinance, 1915 (No.24 of 1915) [1915-11-02]
-# Ordinances of the Gold Coast, Ashanti, Northern Territories 1915, p 69-71
-# https://books.google.com/books?id=ErA-AQAAIAAJ&pg=PA70
-# This Ordinance added "'Time' shall mean Greenwich Mean Time" to the
-# Interpretation Ordinance, 1876.
-#
-# Determination of the Time Ordinance, 1919 (No. 18 of 1919) [1919-11-24]
-# Ordinances of the Gold Coast, Ashanti, Northern Territories 1919, p 75-76
-# https://books.google.com/books?id=MbA-AQAAIAAJ&pg=PA75
-# This Ordinance removed the previous definition of time and introduced DST.
-#
-# Time Determination Ordinance (Cap. 214)
-# The Laws of the Gold Coast (including Togoland Under British Mandate)
-# Vol. II (1937), p 2328
-# https://books.google.com/books?id=Z7M-AQAAIAAJ&pg=PA2328
-# Revised edition of the 1919 Ordinance.
-#
-# Time Determination (Amendment) Ordinance, 1940 (No. 9 of 1940) [1940-04-06]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1940, p 22
-# https://books.google.com/books?id=1ao-AQAAIAAJ&pg=PA22
-# This Ordinance changed the forward transition from September to May.
-#
-# Defence (Time Determination Ordinance Amendment) Regulations, 1942
-# (Regulations No. 6 of 1942) [1942-01-31, commenced on 1942-02-08]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1942, p 48
-# https://books.google.com/books?id=Das-AQAAIAAJ&pg=PA48
-# These regulations advanced the [standard] time by thirty minutes.
-#
-# Defence (Time Determination Ordinance Amendment (No.2)) Regulations,
-# 1942 (Regulations No. 28 of 1942) [1942-04-25]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1942, p 87
-# https://books.google.com/books?id=Das-AQAAIAAJ&pg=PA87
-# These regulations abolished DST and changed the time to GMT+0:30.
-#
-# Defence (Revocation) (No.4) Regulations, 1945 (Regulations No. 45 of
-# 1945) [1945-10-24, commenced on 1946-01-06]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1945, p 256
-# https://books.google.com/books?id=9as-AQAAIAAJ&pg=PA256
-# These regulations revoked the previous two sets of Regulations.
-#
-# Time Determination (Amendment) Ordinance, 1945 (No. 18 of 1945) [1946-01-06]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1945, p 69
-# https://books.google.com/books?id=9as-AQAAIAAJ&pg=PA69
-# This Ordinance abolished DST.
-#
-# Time Determination (Amendment) Ordinance, 1950 (No. 26 of 1950) [1950-07-22]
-# Annual Volume of the Laws of the Gold Coast:
-# Containing All Legislation Enacted During Year 1950, p 35
-# https://books.google.com/books?id=e60-AQAAIAAJ&pg=PA35
-# This Ordinance restored DST but with thirty minutes offset.
-#
-# Time Determination Ordinance (Cap. 264)
-# The Laws of the Gold Coast, Vol. V (1954), p 380
-# https://books.google.com/books?id=Mqc-AQAAIAAJ&pg=PA380
-# Revised edition of the Time Determination Ordinance.
-#
-# Time Determination (Amendment) Ordinance, 1956 (No. 21 of 1956) [1956-08-29]
-# Annual Volume of the Ordinances of the Gold Coast Enacted During the
-# Year 1956, p 83
-# https://books.google.com/books?id=VLE-AQAAIAAJ&pg=PA83
-# This Ordinance abolished DST.
-
-# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
-Rule Ghana 1919 only - Nov 24 0:00 0:20 +0020
-Rule Ghana 1920 1942 - Jan 1 2:00 0 GMT
-Rule Ghana 1920 1939 - Sep 1 2:00 0:20 +0020
-Rule Ghana 1940 1941 - May 1 2:00 0:20 +0020
-Rule Ghana 1950 1955 - Sep 1 2:00 0:30 +0030
-Rule Ghana 1951 1956 - Jan 1 2:00 0 GMT
-
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Africa/Accra -0:00:52 - LMT 1915 Nov 2
- 0:00 Ghana %s 1942 Feb 8
- 0:30 - +0030 1946 Jan 6
- 0:00 Ghana %s
-
# Guinea
# See Africa/Abidjan.
@@ -755,7 +668,7 @@ Zone Indian/Mauritius 3:50:00 - LMT 1907 # Port Louis
# See Africa/Nairobi.
# Morocco
-# See the 'europe' file for Spanish Morocco (Africa/Ceuta).
+# See Africa/Ceuta for Spanish Morocco.
# From Alex Krivenyshev (2008-05-09):
# Here is an article that Morocco plan to introduce Daylight Saving Time between
@@ -1405,23 +1318,21 @@ Zone Africa/Lagos 0:13:35 - LMT 1905 Jul 1
0:13:35 - LMT 1914 Jan 1
0:30 - +0030 1919 Sep 1
1:00 - WAT
-Link Africa/Lagos Africa/Bangui # Central African Republic
-Link Africa/Lagos Africa/Brazzaville # Rep. of the Congo
-Link Africa/Lagos Africa/Douala # Cameroon
-Link Africa/Lagos Africa/Kinshasa # Dem. Rep. of the Congo (west)
-Link Africa/Lagos Africa/Libreville # Gabon
-Link Africa/Lagos Africa/Luanda # Angola
-Link Africa/Lagos Africa/Malabo # Equatorial Guinea
-Link Africa/Lagos Africa/Niamey # Niger
-Link Africa/Lagos Africa/Porto-Novo # Benin
+Link Africa/Lagos Africa/Bangui # Central African Republic
+Link Africa/Lagos Africa/Brazzaville # Rep. of the Congo
+Link Africa/Lagos Africa/Douala # Cameroon
+Link Africa/Lagos Africa/Kinshasa # Dem. Rep. of the Congo (west)
+Link Africa/Lagos Africa/Libreville # Gabon
+Link Africa/Lagos Africa/Luanda # Angola
+Link Africa/Lagos Africa/Malabo # Equatorial Guinea
+Link Africa/Lagos Africa/Niamey # Niger
+Link Africa/Lagos Africa/Porto-Novo # Benin
# Réunion
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Indian/Reunion 3:41:52 - LMT 1911 Jun # Saint-Denis
4:00 - +04
#
-# Crozet Islands also observes Réunion time; see the 'antarctica' file.
-#
# Scattered Islands (Îles Éparses) administered from Réunion are as follows.
# The following information about them is taken from
# Îles Éparses (, 1997-07-22,
@@ -1513,8 +1424,8 @@ Rule SA 1943 1944 - Mar Sun>=15 2:00 0 -
Zone Africa/Johannesburg 1:52:00 - LMT 1892 Feb 8
1:30 - SAST 1903 Mar
2:00 SA SAST
-Link Africa/Johannesburg Africa/Maseru # Lesotho
-Link Africa/Johannesburg Africa/Mbabane # Eswatini
+Link Africa/Johannesburg Africa/Maseru # Lesotho
+Link Africa/Johannesburg Africa/Mbabane # Eswatini
#
# Marion and Prince Edward Is
# scientific station since 1947
@@ -1550,12 +1461,13 @@ Zone Africa/Khartoum 2:10:08 - LMT 1931
3:00 - EAT 2017 Nov 1
2:00 - CAT
+# South Sudan
+
# From Steffen Thorsen (2021-01-18):
# "South Sudan will change its time zone by setting the clock back 1
# hour on February 1, 2021...."
# from https://eyeradio.org/south-sudan-adopts-new-time-zone-makuei/
-# South Sudan
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Africa/Juba 2:06:28 - LMT 1931
2:00 Sudan CA%sT 2000 Jan 15 12:00
@@ -1660,7 +1572,7 @@ Rule Tunisia 2005 only - Sep 30 1:00s 0 -
Rule Tunisia 2006 2008 - Mar lastSun 2:00s 1:00 S
Rule Tunisia 2006 2008 - Oct lastSun 2:00s 0 -
-# See Europe/Paris for PMT-related transitions.
+# See Europe/Paris commentary for PMT-related transitions.
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Africa/Tunis 0:40:44 - LMT 1881 May 12
0:09:21 - PMT 1911 Mar 11 # Paris Mean Time
diff --git a/make/data/tzdata/antarctica b/make/data/tzdata/antarctica
index 509fadc29a964..13f024ef9bc4c 100644
--- a/make/data/tzdata/antarctica
+++ b/make/data/tzdata/antarctica
@@ -171,7 +171,7 @@ Zone Antarctica/Mawson 0 - -00 1954 Feb 13
#
# Alfred Faure, Possession Island, Crozet Islands, -462551+0515152, since 1964;
# sealing & whaling stations operated variously 1802/1911+;
-# see Indian/Reunion.
+# see Asia/Dubai.
#
# Martin-de-Viviès, Amsterdam Island, -374105+0773155, since 1950
# Port-aux-Français, Kerguelen Islands, -492110+0701303, since 1951;
@@ -185,17 +185,7 @@ Zone Indian/Kerguelen 0 - -00 1950 # Port-aux-Français
5:00 - +05
#
# year-round base in the main continent
-# Dumont d'Urville, Île des Pétrels, -6640+14001, since 1956-11
-# (2005-12-05)
-#
-# Another base at Port-Martin, 50km east, began operation in 1947.
-# It was destroyed by fire on 1952-01-14.
-#
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Antarctica/DumontDUrville 0 - -00 1947
- 10:00 - +10 1952 Jan 14
- 0 - -00 1956 Nov
- 10:00 - +10
+# Dumont d'Urville - see Pacific/Port_Moresby.
# France & Italy - year-round base
# Concordia, -750600+1232000, since 2005
@@ -211,20 +201,7 @@ Zone Antarctica/DumontDUrville 0 - -00 1947
# Zuchelli, Terra Nova Bay, -744140+1640647, since 1986
# Japan - year-round bases
-# Syowa (also known as Showa), -690022+0393524, since 1957
-#
-# From Hideyuki Suzuki (1999-02-06):
-# In all Japanese stations, +0300 is used as the standard time.
-#
-# Syowa station, which is the first antarctic station of Japan,
-# was established on 1957-01-29. Since Syowa station is still the main
-# station of Japan, it's appropriate for the principal location.
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Antarctica/Syowa 0 - -00 1957 Jan 29
- 3:00 - +03
-# See:
-# NIPR Antarctic Research Activities (1999-08-17)
-# http://www.nipr.ac.jp/english/ara01.html
+# See Asia/Riyadh.
# S Korea - year-round base
# Jang Bogo, Terra Nova Bay, -743700+1641205 since 2014
diff --git a/make/data/tzdata/asia b/make/data/tzdata/asia
index 143d8e8fdc387..cfe48745e2459 100644
--- a/make/data/tzdata/asia
+++ b/make/data/tzdata/asia
@@ -57,9 +57,6 @@
# Byalokoz EL. New Counting of Time in Russia since July 1, 1919.
# (See the 'europe' file for a fuller citation.)
#
-# A reliable and entertaining source about time zones is
-# Derek Howse, Greenwich time and longitude, Philip Wilson Publishers (1997).
-#
# The following alphabetic abbreviations appear in these tables
# (corrections are welcome):
# std dst
@@ -2257,6 +2254,14 @@ Zone Asia/Tokyo 9:18:59 - LMT 1887 Dec 31 15:00u
# From Paul Eggert (2013-12-11):
# As Steffen suggested, consider the past 21-month experiment to be DST.
+# From Steffen Thorsen (2021-09-24):
+# The Jordanian Government announced yesterday that they will start DST
+# in February instead of March:
+# https://petra.gov.jo/Include/InnerPage.jsp?ID=37683&lang=en&name=en_news (English)
+# https://petra.gov.jo/Include/InnerPage.jsp?ID=189969&lang=ar&name=news (Arabic)
+# From the Arabic version, it seems to say it would be at midnight
+# (assume 24:00) on the last Thursday in February, starting from 2022.
+
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Jordan 1973 only - Jun 6 0:00 1:00 S
Rule Jordan 1973 1975 - Oct 1 0:00 0 -
@@ -2287,8 +2292,9 @@ Rule Jordan 2004 only - Oct 15 0:00s 0 -
Rule Jordan 2005 only - Sep lastFri 0:00s 0 -
Rule Jordan 2006 2011 - Oct lastFri 0:00s 0 -
Rule Jordan 2013 only - Dec 20 0:00 0 -
-Rule Jordan 2014 max - Mar lastThu 24:00 1:00 S
+Rule Jordan 2014 2021 - Mar lastThu 24:00 1:00 S
Rule Jordan 2014 max - Oct lastFri 0:00s 0 -
+Rule Jordan 2022 max - Feb lastThu 24:00 1:00 S
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Asia/Amman 2:23:44 - LMT 1931
2:00 Jordan EE%sT
@@ -2763,7 +2769,8 @@ Rule NBorneo 1935 1941 - Dec 14 0:00 0 -
#
# peninsular Malaysia
# taken from Mok Ly Yng (2003-10-30)
-# http://www.math.nus.edu.sg/aslaksen/teaching/timezone.html
+# https://web.archive.org/web/20190822231045/http://www.math.nus.edu.sg/~mathelmr/teaching/timezone.html
+# This agrees with Singapore since 1905-06-01.
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Asia/Kuala_Lumpur 6:46:46 - LMT 1901 Jan 1
6:55:25 - SMT 1905 Jun 1 # Singapore M.T.
@@ -3402,11 +3409,6 @@ Zone Asia/Karachi 4:28:12 - LMT 1907
# shall [end] on Oct 24th 2020 at 01:00AM by delaying the clock by 60 minutes.
# http://www.palestinecabinet.gov.ps/portal/Meeting/Details/51584
-# From Tim Parenti (2020-10-20):
-# Predict future fall transitions at 01:00 on the Saturday preceding October's
-# last Sunday (i.e., Sat>=24). This is consistent with our predictions since
-# 2016, although the time of the change differed slightly in 2019.
-
# From Pierre Cashon (2020-10-20):
# The summer time this year started on March 28 at 00:00.
# https://wafa.ps/ar_page.aspx?id=GveQNZa872839351758aGveQNZ
@@ -3419,6 +3421,17 @@ Zone Asia/Karachi 4:28:12 - LMT 1907
# For now, guess spring-ahead transitions are at 00:00 on the Saturday
# preceding March's last Sunday (i.e., Sat>=24).
+# From P Chan (2021-10-18):
+# http://wafa.ps/Pages/Details/34701
+# Palestine winter time will start from midnight 2021-10-29 (Thursday-Friday).
+#
+# From Heba Hemad, Palestine Ministry of Telecom & IT (2021-10-20):
+# ... winter time will begin in Palestine from Friday 10-29, 01:00 AM
+# by 60 minutes backwards.
+#
+# From Paul Eggert (2021-10-20):
+# Guess future fall transitions on October's last Friday at 01:00.
+
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule EgyptAsia 1957 only - May 10 0:00 1:00 S
Rule EgyptAsia 1957 1958 - Oct 1 0:00 0 -
@@ -3454,7 +3467,8 @@ Rule Palestine 2016 2018 - Oct Sat>=24 1:00 0 -
Rule Palestine 2019 only - Mar 29 0:00 1:00 S
Rule Palestine 2019 only - Oct Sat>=24 0:00 0 -
Rule Palestine 2020 max - Mar Sat>=24 0:00 1:00 S
-Rule Palestine 2020 max - Oct Sat>=24 1:00 0 -
+Rule Palestine 2020 only - Oct 24 1:00 0 -
+Rule Palestine 2021 max - Oct lastFri 1:00 0 -
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Asia/Gaza 2:17:52 - LMT 1900 Oct
@@ -3523,6 +3537,12 @@ Zone Asia/Hebron 2:20:23 - LMT 1900 Oct
# influence of the sources. There is no current abbreviation for DST,
# so use "PDT", the usual American style.
+# From P Chan (2021-05-10):
+# Here's a fairly comprehensive article in Japanese:
+# https://wiki.suikawiki.org/n/Philippine%20Time
+# From Paul Eggert (2021-05-10):
+# The info in the Japanese table has not been absorbed (yet) below.
+
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Phil 1936 only - Nov 1 0:00 1:00 D
Rule Phil 1937 only - Feb 1 0:00 0 S
@@ -3589,12 +3609,13 @@ Link Asia/Qatar Asia/Bahrain
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Asia/Riyadh 3:06:52 - LMT 1947 Mar 14
3:00 - +03
+Link Asia/Riyadh Antarctica/Syowa
Link Asia/Riyadh Asia/Aden # Yemen
Link Asia/Riyadh Asia/Kuwait
# Singapore
# taken from Mok Ly Yng (2003-10-30)
-# http://www.math.nus.edu.sg/aslaksen/teaching/timezone.html
+# https://web.archive.org/web/20190822231045/http://www.math.nus.edu.sg/~mathelmr/teaching/timezone.html
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Asia/Singapore 6:55:25 - LMT 1901 Jan 1
6:55:25 - SMT 1905 Jun 1 # Singapore M.T.
diff --git a/make/data/tzdata/australasia b/make/data/tzdata/australasia
index e28538e0c84e0..a6ecb3af59327 100644
--- a/make/data/tzdata/australasia
+++ b/make/data/tzdata/australasia
@@ -408,9 +408,22 @@ Zone Indian/Cocos 6:27:40 - LMT 1900
# "Minister for Employment, Parveen Bala says they had never thought of
# stopping daylight saving. He says it was just to decide on when it should
# start and end. Bala says it is a short period..."
-# Since the end date is still in line with our ongoing predictions, assume for
-# now that the later-than-usual start date is a one-time departure from the
-# recent second Sunday in November pattern.
+#
+# From Tim Parenti (2021-10-11), per Jashneel Kumar (2021-10-11) and P Chan
+# (2021-10-12):
+# https://www.fiji.gov.fj/Media-Centre/Speeches/English/PM-BAINIMARAMA-S-COVID-19-ANNOUNCEMENT-10-10-21
+# https://www.fbcnews.com.fj/news/covid-19/curfew-moved-back-to-11pm/
+# In a 2021-10-10 speech concerning updated Covid-19 mitigation measures in
+# Fiji, prime minister Josaia Voreqe "Frank" Bainimarama announced the
+# suspension of DST for the 2021/2022 season: "Given that we are in the process
+# of readjusting in the midst of so many changes, we will also put Daylight
+# Savings Time on hold for this year. It will also make the reopening of
+# scheduled commercial air service much smoother if we don't have to be
+# concerned shifting arrival and departure times, which may look like a simple
+# thing but requires some significant logistical adjustments domestically and
+# internationally."
+# Assume for now that DST will resume with the recent pre-2020 rules for the
+# 2022/2023 season.
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Fiji 1998 1999 - Nov Sun>=1 2:00 1:00 -
@@ -422,10 +435,11 @@ Rule Fiji 2011 only - Mar Sun>=1 3:00 0 -
Rule Fiji 2012 2013 - Jan Sun>=18 3:00 0 -
Rule Fiji 2014 only - Jan Sun>=18 2:00 0 -
Rule Fiji 2014 2018 - Nov Sun>=1 2:00 1:00 -
-Rule Fiji 2015 max - Jan Sun>=12 3:00 0 -
+Rule Fiji 2015 2021 - Jan Sun>=12 3:00 0 -
Rule Fiji 2019 only - Nov Sun>=8 2:00 1:00 -
Rule Fiji 2020 only - Dec 20 2:00 1:00 -
-Rule Fiji 2021 max - Nov Sun>=8 2:00 1:00 -
+Rule Fiji 2022 max - Nov Sun>=8 2:00 1:00 -
+Rule Fiji 2023 max - Jan Sun>=12 3:00 0 -
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Pacific/Fiji 11:55:44 - LMT 1915 Oct 26 # Suva
12:00 Fiji +12/+13
@@ -487,7 +501,7 @@ Link Pacific/Guam Pacific/Saipan # N Mariana Is
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Pacific/Tarawa 11:32:04 - LMT 1901 # Bairiki
12:00 - +12
-Zone Pacific/Enderbury -11:24:20 - LMT 1901
+Zone Pacific/Kanton 0 - -00 1937 Aug 31
-12:00 - -12 1979 Oct
-11:00 - -11 1994 Dec 31
13:00 - +13
@@ -620,13 +634,46 @@ Link Pacific/Auckland Antarctica/McMurdo
# was probably like Pacific/Auckland
# Cook Is
-# From Shanks & Pottenger:
+#
+# From Alexander Krivenyshev (2021-03-24):
+# In 1899 the Cook Islands celebrated Christmas twice to correct the calendar.
+# According to the old books, missionaries were unaware of
+# the International Date line, when they came from Sydney.
+# Thus the Cook Islands were one day ahead....
+# http://nzetc.victoria.ac.nz/tm/scholarly/tei-KloDisc-t1-body-d18.html
+# ... Appendix to the Journals of the House of Representatives, 1900
+# https://atojs.natlib.govt.nz/cgi-bin/atojs?a=d&d=AJHR1900-I.2.1.2.3
+# (page 20)
+#
+# From Michael Deckers (2021-03-24):
+# ... in the Cook Island Act of 1915-10-11, online at
+# http://www.paclii.org/ck/legis/ck-nz_act/cia1915132/
+# "651. The hour of the day shall in each of the islands included in the
+# Cook Islands be determined in accordance with the meridian of that island."
+# so that local (mean?) time was still used in Rarotonga (and Niue) in 1915.
+# This was changed in the Cook Island Amendment Act of 1952-10-16 ...
+# http://www.paclii.org/ck/legis/ck-nz_act/ciaa1952212/
+# "651 (1) The hour of the day in each of the islands included in the Cook
+# Islands, other than Niue, shall be determined as if each island were
+# situated on the meridian one hundred and fifty-seven degrees thirty minutes
+# West of Greenwich. (2) The hour of the day in the Island of Niue shall be
+# determined as if that island were situated on the meridian one hundred and
+# seventy degrees West of Greenwich."
+# This act does not state when it takes effect, so one has to assume it
+# applies since 1952-10-16. But there is the possibility that the act just
+# legalized prior existing practice, as we had seen with the Guernsey law of
+# 1913-06-18 for the switch in 1909-04-19.
+#
+# From Paul Eggert (2021-03-24):
+# Transitions after 1952 are from Shanks & Pottenger.
+#
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Cook 1978 only - Nov 12 0:00 0:30 -
Rule Cook 1979 1991 - Mar Sun>=1 0:00 0 -
Rule Cook 1979 1990 - Oct lastSun 0:00 0:30 -
# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Pacific/Rarotonga -10:39:04 - LMT 1901 # Avarua
+Zone Pacific/Rarotonga 13:20:56 - LMT 1899 Dec 26 # Avarua
+ -10:39:04 - LMT 1952 Oct 16
-10:30 - -1030 1978 Nov 12
-10:00 Cook -10/-0930
@@ -634,10 +681,18 @@ Zone Pacific/Rarotonga -10:39:04 - LMT 1901 # Avarua
# Niue
+# See Pacific/Rarotonga comments for the 1952 transition.
+#
+# From Tim Parenti (2021-09-13):
+# Consecutive contemporaneous editions of The Air Almanac listed -11:20 for
+# Niue as of Apr 1964 but -11 as of Aug 1964:
+# Apr 1964: https://books.google.com/books?id=_1So677Y5vUC&pg=SL1-PA23
+# Aug 1964: https://books.google.com/books?id=MbJloqd-zyUC&pg=SL1-PA23
+# Without greater specificity, guess 1964-07-01 for this transition.
+
# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Pacific/Niue -11:19:40 - LMT 1901 # Alofi
- -11:20 - -1120 1951
- -11:30 - -1130 1978 Oct 1
+Zone Pacific/Niue -11:19:40 - LMT 1952 Oct 16 # Alofi
+ -11:20 - -1120 1964 Jul
-11:00 - -11
# Norfolk
@@ -661,6 +716,7 @@ Zone Pacific/Palau -15:02:04 - LMT 1844 Dec 31 # Koror
Zone Pacific/Port_Moresby 9:48:40 - LMT 1880
9:48:32 - PMMT 1895 # Port Moresby Mean Time
10:00 - +10
+Link Pacific/Port_Moresby Antarctica/DumontDUrville
#
# From Paul Eggert (2014-10-13):
# Base the Bougainville entry on the Arawa-Kieta region, which appears to have
@@ -765,13 +821,17 @@ Link Pacific/Pago_Pago Pacific/Midway # in US minor outlying islands
# From Paul Eggert (2014-07-08):
# That web page currently lists transitions for 2012/3 and 2013/4.
# Assume the pattern instituted in 2012 will continue indefinitely.
+#
+# From Geoffrey D. Bennett (2021-09-20):
+# https://www.mcil.gov.ws/storage/2021/09/MCIL-Scan_20210920_120553.pdf
+# DST has been cancelled for this year.
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule WS 2010 only - Sep lastSun 0:00 1 -
Rule WS 2011 only - Apr Sat>=1 4:00 0 -
Rule WS 2011 only - Sep lastSat 3:00 1 -
-Rule WS 2012 max - Apr Sun>=1 4:00 0 -
-Rule WS 2012 max - Sep lastSun 3:00 1 -
+Rule WS 2012 2021 - Apr Sun>=1 4:00 0 -
+Rule WS 2012 2020 - Sep lastSun 3:00 1 -
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Pacific/Apia 12:33:04 - LMT 1892 Jul 5
-11:26:56 - LMT 1911
@@ -818,8 +878,8 @@ Rule Tonga 2001 2002 - Jan lastSun 2:00 0 -
Rule Tonga 2016 only - Nov Sun>=1 2:00 1:00 -
Rule Tonga 2017 only - Jan Sun>=15 3:00 0 -
# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone Pacific/Tongatapu 12:19:20 - LMT 1901
- 12:20 - +1220 1941
+Zone Pacific/Tongatapu 12:19:12 - LMT 1945 Sep 10
+ 12:20 - +1220 1961
13:00 - +13 1999
13:00 Tonga +13/+14
@@ -1761,6 +1821,23 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901
# One source for this is page 202 of: Bartky IR. One Time Fits All:
# The Campaigns for Global Uniformity (2007).
+# Kanton
+
+# From Paul Eggert (2021-05-27):
+# Kiribati's +13 timezone is represented by Kanton, its only populated
+# island. (It was formerly spelled "Canton", but Gilbertese lacks "C".)
+# Kanton was settled on 1937-08-31 by two British radio operators
+# ;
+# Americans came the next year and built an airfield, partly to
+# establish airline service and perhaps partly anticipating the
+# next war. Aside from the war, the airfield was used by commercial
+# airlines until long-range jets became standard; although currently
+# for emergency use only, China says it is considering rebuilding the
+# airfield for high-end niche tourism. Kanton has about two dozen
+# people, caretakers who rotate in from the rest of Kiribati in 2-5
+# year shifts, and who use some of the leftover structures
+# .
+
# Kwajalein
# From an AP article (1993-08-22):
@@ -2044,6 +2121,17 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901
# Tonga
+# From Paul Eggert (2021-03-04):
+# In 1943 "The standard time kept is 12 hrs. 19 min. 12 sec. fast
+# on Greenwich mean time." according to the Admiralty's Hydrographic
+# Dept., Pacific Islands Pilot, Vol. II, 7th ed., 1943, p 360.
+
+# From Michael Deckers (2021-03-03):
+# [Ian R Bartky: "One Time Fits All: The Campaigns for Global Uniformity".
+# Stanford University Press. 2007. p. 255]:
+# On 10 September 1945 Tonga adopted a standard time 12 hours,
+# 20 minutes in advance of Greenwich.
+
# From Paul Eggert (1996-01-22):
# Today's _Wall Street Journal_ (p 1) reports that "Tonga has been plotting
# to sneak ahead of [New Zealanders] by introducing daylight-saving time."
@@ -2072,9 +2160,26 @@ Zone Pacific/Wallis 12:15:20 - LMT 1901
# The Crown Prince, presented an unanswerable argument: "Remember that
# on the World Day of Prayer, you would be the first people on Earth
# to say your prayers in the morning."
-
-# From Paul Eggert (2006-03-22):
-# Shanks & Pottenger say the transition was on 1968-10-01; go with Mundell.
+#
+# From Tim Parenti (2021-09-13), per Paul Eggert (2006-03-22) and Michael
+# Deckers (2021-03-03):
+# Mundell places the transition from +12:20 to +13 in 1941, while Shanks &
+# Pottenger say the transition was on 1968-10-01.
+#
+# The Air Almanac published contemporaneous tables of standard times,
+# which listed +12:20 as of Nov 1960 and +13 as of Mar 1961:
+# Nov 1960: https://books.google.com/books?id=bVgtWM6kPZUC&pg=SL1-PA19
+# Mar 1961: https://books.google.com/books?id=W2nItAul4g0C&pg=SL1-PA19
+# (Thanks to P Chan for pointing us toward these sources.)
+# This agrees with Bartky, who writes that "since 1961 [Tonga's] official time
+# has been thirteen hours in advance of Greenwich time" (p. 202) and further
+# writes in an endnote that this was because "the legislation was amended" on
+# 1960-10-19. (p. 255)
+#
+# Without greater specificity, presume that Bartky and the Air Almanac point to
+# a 1961-01-01 transition, as Tāufaʻāhau Tupou IV was still Crown Prince in
+# 1961 and this still jibes with the gist of Mundell's telling, and go with
+# this over Shanks & Pottenger.
# From Eric Ulevik (1999-05-03):
# Tonga's director of tourism, who is also secretary of the National Millennium
diff --git a/make/data/tzdata/backward b/make/data/tzdata/backward
index 48482b74d301a..59c125623e2a1 100644
--- a/make/data/tzdata/backward
+++ b/make/data/tzdata/backward
@@ -26,8 +26,10 @@
# This file is in the public domain, so clarified as of
# 2009-05-17 by Arthur David Olson.
-# This file provides links between current names for timezones
-# and their old names. Many names changed in late 1993.
+# This file provides links from old or merged timezone names to current ones.
+# Many names changed in late 1993. Several of these names are
+# also present in the file 'backzone', which has data important only
+# for pre-1970 timestamps and so is out of scope for tzdb proper.
# Link TARGET LINK-NAME
Link Africa/Nairobi Africa/Asmera
@@ -36,7 +38,7 @@ Link America/Argentina/Catamarca America/Argentina/ComodRivadavia
Link America/Adak America/Atka
Link America/Argentina/Buenos_Aires America/Buenos_Aires
Link America/Argentina/Catamarca America/Catamarca
-Link America/Atikokan America/Coral_Harbour
+Link America/Panama America/Coral_Harbour
Link America/Argentina/Cordoba America/Cordoba
Link America/Tijuana America/Ensenada
Link America/Indiana/Indianapolis America/Fort_Wayne
@@ -51,7 +53,7 @@ Link America/Rio_Branco America/Porto_Acre
Link America/Argentina/Cordoba America/Rosario
Link America/Tijuana America/Santa_Isabel
Link America/Denver America/Shiprock
-Link America/Port_of_Spain America/Virgin
+Link America/Puerto_Rico America/Virgin
Link Pacific/Auckland Antarctica/South_Pole
Link Asia/Ashgabat Asia/Ashkhabad
Link Asia/Kolkata Asia/Calcutta
@@ -126,6 +128,7 @@ Link Pacific/Auckland NZ
Link Pacific/Chatham NZ-CHAT
Link America/Denver Navajo
Link Asia/Shanghai PRC
+Link Pacific/Kanton Pacific/Enderbury
Link Pacific/Honolulu Pacific/Johnston
Link Pacific/Pohnpei Pacific/Ponape
Link Pacific/Pago_Pago Pacific/Samoa
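The practical effect of such links is that retired IDs keep resolving; a minimal sketch, assuming the running JDK bundles this tzdata update so that Pacific/Enderbury now carries Kanton's data:

import java.time.Instant;
import java.time.ZoneId;

public class BackwardLinkProbe {
    public static void main(String[] args) {
        // The old spelling still parses; since the end of 1994 the offset is +13:00.
        ZoneId old = ZoneId.of("Pacific/Enderbury");
        System.out.println(old + " -> " + old.getRules().getOffset(Instant.now()));
    }
}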
diff --git a/make/data/tzdata/europe b/make/data/tzdata/europe
index eb9056e92d584..9b0b64aa3ebec 100644
--- a/make/data/tzdata/europe
+++ b/make/data/tzdata/europe
@@ -91,7 +91,6 @@
# 0:00 GMT BST BDST Greenwich, British Summer
# 0:00 GMT IST Greenwich, Irish Summer
# 0:00 WET WEST WEMT Western Europe
-# 0:19:32.13 AMT* NST* Amsterdam, Netherlands Summer (1835-1937)
# 1:00 BST British Standard (1968-1971)
# 1:00 IST GMT Irish Standard (1968-) with winter DST
# 1:00 CET CEST CEMT Central Europe
@@ -845,7 +844,7 @@ Zone Europe/Andorra 0:06:04 - LMT 1901
# Shanks & Pottenger give 02:00, the BEV 00:00. Go with the BEV,
# and guess 02:00 for 1945-04-12.
-# From Alois Triendl (2019-07-22):
+# From Alois Treindl (2019-07-22):
# In 1946 the end of DST was on Monday, 7 October 1946, at 3:00 am.
# Shanks had this right. Source: Die Weltpresse, 5. Oktober 1946, page 5.
@@ -1759,19 +1758,22 @@ Zone Atlantic/Reykjavik -1:28 - LMT 1908
# advanced to sixty minutes later starting at hour two on 1944-04-02; ...
# Starting at hour three on the date 1944-09-17 standard time will be resumed.
#
-# From Alois Triendl (2019-07-02):
+# From Alois Treindl (2019-07-02):
# I spent 6 Euros to buy two archive copies of Il Messaggero, a Roman paper,
# for 1 and 2 April 1944. The edition of 2 April has this note: "Tonight at 2
# am, put forward the clock by one hour. Remember that in the night between
# today and Monday the 'ora legale' will come in force again." That makes it
# clear that in Rome the change was on Monday, 3 April 1944 at 2 am.
#
-# From Paul Eggert (2016-10-27):
+# From Paul Eggert (2021-10-05):
# Go with INRiM for DST rules, except as corrected by Inglis for 1944
# for the Kingdom of Italy. This is consistent with Renzo Baldini.
# Model Rome's occupation by using C-Eur rules from 1943-09-10
# to 1944-06-04; although Rome was an open city during this period, it
-# was effectively controlled by Germany.
+# was effectively controlled by Germany. Using C-Eur is consistent
+# with Treindl's comment about Rome in April 1944, as the "Rule Italy"
+# lines during German occupation do not affect Europe/Rome
+# (though they do affect Europe/Malta).
#
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Italy 1916 only - Jun 3 24:00 1:00 S
@@ -1823,6 +1825,10 @@ Zone Europe/Rome 0:49:56 - LMT 1866 Dec 12
1:00 Italy CE%sT 1980
1:00 EU CE%sT
+# Kosovo
+# See Europe/Belgrade.
+
+
Link Europe/Rome Europe/Vatican
Link Europe/Rome Europe/San_Marino
@@ -2173,6 +2179,10 @@ Zone Europe/Monaco 0:29:32 - LMT 1892 Jun 1
# The data entries before 1945 are taken from
# https://www.staff.science.uu.nl/~gent0113/wettijd/wettijd.htm
+# From Paul Eggert (2021-05-09):
+# I invented the abbreviations AMT for Amsterdam Mean Time and NST for
+# Netherlands Summer Time, used in the Netherlands from 1835 to 1937.
+
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
Rule Neth 1916 only - May 1 0:00 1:00 NST # Netherlands Summer Time
Rule Neth 1916 only - Oct 1 0:00 0 AMT # Amsterdam Mean Time
@@ -2399,12 +2409,10 @@ Rule Port 1943 1945 - Aug Sat>=25 22:00s 1:00 S
Rule Port 1944 1945 - Apr Sat>=21 22:00s 2:00 M
Rule Port 1946 only - Apr Sat>=1 23:00s 1:00 S
Rule Port 1946 only - Oct Sat>=1 23:00s 0 -
-Rule Port 1947 1949 - Apr Sun>=1 2:00s 1:00 S
-Rule Port 1947 1949 - Oct Sun>=1 2:00s 0 -
-# Shanks & Pottenger say DST was observed in 1950; go with Whitman.
+# Whitman says DST was not observed in 1950; go with Shanks & Pottenger.
# Whitman gives Oct lastSun for 1952 on; go with Shanks & Pottenger.
-Rule Port 1951 1965 - Apr Sun>=1 2:00s 1:00 S
-Rule Port 1951 1965 - Oct Sun>=1 2:00s 0 -
+Rule Port 1947 1965 - Apr Sun>=1 2:00s 1:00 S
+Rule Port 1947 1965 - Oct Sun>=1 2:00s 0 -
Rule Port 1977 only - Mar 27 0:00s 1:00 S
Rule Port 1977 only - Sep 25 0:00s 0 -
Rule Port 1978 1979 - Apr Sun>=1 0:00s 1:00 S
@@ -2641,7 +2649,7 @@ Zone Europe/Bucharest 1:44:24 - LMT 1891 Oct
# Although Shanks lists 1945-01-01 as the date for transition from
# +01/+02 to +02/+03, more likely this is a placeholder. Guess that
# the transition occurred at 1945-04-10 00:00, which is about when
-# Königsberg surrendered to Soviet troops. (Thanks to Alois Triendl.)
+# Königsberg surrendered to Soviet troops. (Thanks to Alois Treindl.)
# From Paul Eggert (2016-03-18):
# The 1989 transition is from USSR act No. 227 (1989-03-14).
@@ -3706,6 +3714,9 @@ Zone Atlantic/Canary -1:01:36 - LMT 1922 Mar # Las Palmas de Gran C.
#
# Source: The newspaper "Dagens Nyheter", 1916-10-01, page 7 upper left.
+# An extra-special abbreviation style is SET for Swedish Time (svensk
+# normaltid) 1879-1899, 3° west of the Stockholm Observatory.
+
# Zone NAME STDOFF RULES FORMAT [UNTIL]
Zone Europe/Stockholm 1:12:12 - LMT 1879 Jan 1
1:00:14 - SET 1900 Jan 1 # Swedish Time
diff --git a/make/data/tzdata/leapseconds b/make/data/tzdata/leapseconds
index 6f1941601d3b9..cc514561ff177 100644
--- a/make/data/tzdata/leapseconds
+++ b/make/data/tzdata/leapseconds
@@ -95,11 +95,11 @@ Leap 2016 Dec 31 23:59:60 + S
# Any additional leap seconds will come after this.
# This Expires line is commented out for now,
# so that pre-2020a zic implementations do not reject this file.
-#Expires 2021 Dec 28 00:00:00
+#Expires 2022 Jun 28 00:00:00
# POSIX timestamps for the data in this file:
#updated 1467936000 (2016-07-08 00:00:00 UTC)
-#expires 1640649600 (2021-12-28 00:00:00 UTC)
+#expires 1656374400 (2022-06-28 00:00:00 UTC)
-# Updated through IERS Bulletin C61
-# File expires on: 28 December 2021
+# Updated through IERS Bulletin C62
+# File expires on: 28 June 2022
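The two "#expires" representations above encode the same instant; a one-line java.time check (sketch):

import java.time.Instant;

public class LeapExpiryCheck {
    public static void main(String[] args) {
        // 1656374400 is the POSIX "#expires" stamp above.
        System.out.println(Instant.ofEpochSecond(1656374400L));
        // Prints 2022-06-28T00:00:00Z, matching the commented date.
    }
}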
diff --git a/make/data/tzdata/northamerica b/make/data/tzdata/northamerica
index 610c606c01a10..411099713c302 100644
--- a/make/data/tzdata/northamerica
+++ b/make/data/tzdata/northamerica
@@ -752,7 +752,11 @@ Zone America/Adak 12:13:22 - LMT 1867 Oct 19 12:44:35
-11:00 US B%sT 1983 Oct 30 2:00
-10:00 US AH%sT 1983 Nov 30
-10:00 US H%sT
-# The following switches don't quite make our 1970 cutoff.
+# The following switches don't make our 1970 cutoff.
+#
+# Kiska observed Tokyo date and time during Japanese occupation from
+# 1942-06-06 to 1943-07-29, and similarly for Attu from 1942-06-07 to
+# 1943-05-29 (all dates American). Both islands are now uninhabited.
#
# Shanks writes that part of southwest Alaska (e.g. Aniak)
# switched from -11:00 to -10:00 on 1968-09-22 at 02:00,
@@ -848,6 +852,8 @@ Zone America/Phoenix -7:28:18 - LMT 1883 Nov 18 11:31:42
-7:00 - MST 1967
-7:00 US M%sT 1968 Mar 21
-7:00 - MST
+Link America/Phoenix America/Creston
+
# From Arthur David Olson (1988-02-13):
# A writer from the Inter Tribal Council of Arizona, Inc.,
# notes in private correspondence dated 1987-12-28 that "Presently, only the
@@ -993,7 +999,7 @@ Zone America/Indiana/Vincennes -5:50:07 - LMT 1883 Nov 18 12:09:53
-5:00 US E%sT
#
# Perry County, Indiana, switched from eastern to central time in April 2006.
-# From Alois Triendl (2019-07-09):
+# From Alois Treindl (2019-07-09):
# The Indianapolis News, Friday 27 October 1967 states that Perry County
# returned to CST. It went again to EST on 27 April 1969, as documented by the
# Indianapolis star of Saturday 26 April.
@@ -1616,24 +1622,7 @@ Zone America/Moncton -4:19:08 - LMT 1883 Dec 9
# From Paul Eggert (2020-01-10):
# See America/Toronto for most of Quebec, including Montreal.
# See America/Halifax for the Îles de la Madeleine and the Listuguj reserve.
-#
-# Matthews and Vincent (1998) also write that Quebec east of the -63
-# meridian is supposed to observe AST, but residents as far east as
-# Natashquan use EST/EDT, and residents east of Natashquan use AST.
-# The Quebec department of justice writes in
-# "The situation in Minganie and Basse-Côte-Nord"
-# https://www.justice.gouv.qc.ca/en/department/ministre/functions-and-responsabilities/legal-time-in-quebec/the-situation-in-minganie-and-basse-cote-nord/
-# that the coastal strip from just east of Natashquan to Blanc-Sablon
-# observes Atlantic standard time all year round.
-# This common practice was codified into law as of 2007; see Legal Time Act,
-# CQLR c T-5.1 .
-# For lack of better info, guess this practice began around 1970, contra to
-# Shanks & Pottenger who have this region observing AST/ADT.
-
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Blanc-Sablon -3:48:28 - LMT 1884
- -4:00 Canada A%sT 1970
- -4:00 - AST
+# See America/Puerto_Rico for east of Natashquan.
# Ontario
@@ -1672,54 +1661,6 @@ Zone America/Blanc-Sablon -3:48:28 - LMT 1884
# time became a comic failure in Orillia. Toronto Star 2017-07-08.
# https://www.thestar.com/news/insight/2017/07/08/bold-attempt-at-daylight-saving-time-became-a-comic-failure-in-orillia.html
-# From Paul Eggert (1997-10-17):
-# Mark Brader writes that an article in the 1997-10-14 Toronto Star
-# says that Atikokan, Ontario currently does not observe DST,
-# but will vote on 11-10 whether to use EST/EDT.
-# He also writes that the Ontario Time Act (1990, Chapter T.9)
-# http://www.gov.on.ca/MBS/english/publications/statregs/conttext.html
-# says that Ontario east of 90W uses EST/EDT, and west of 90W uses CST/CDT.
-# Officially Atikokan is therefore on CST/CDT, and most likely this report
-# concerns a non-official time observed as a matter of local practice.
-#
-# From Paul Eggert (2000-10-02):
-# Matthews and Vincent (1998) write that Atikokan, Pickle Lake, and
-# New Osnaburgh observe CST all year, that Big Trout Lake observes
-# CST/CDT, and that Upsala and Shebandowan observe EST/EDT, all in
-# violation of the official Ontario rules.
-#
-# From Paul Eggert (2006-07-09):
-# Chris Walton (2006-07-06) mentioned an article by Stephanie MacLellan in the
-# 2005-07-21 Chronicle-Journal, which said:
-#
-# The clocks in Atikokan stay set on standard time year-round.
-# This means they spend about half the time on central time and
-# the other half on eastern time.
-#
-# For the most part, the system works, Mayor Dennis Brown said.
-#
-# "The majority of businesses in Atikokan deal more with Eastern
-# Canada, but there are some that deal with Western Canada," he
-# said. "I don't see any changes happening here."
-#
-# Walton also writes "Supposedly Pickle Lake and Mishkeegogamang
-# [New Osnaburgh] follow the same practice."
-
-# From Garry McKinnon (2006-07-14) via Chris Walton:
-# I chatted with a member of my board who has an outstanding memory
-# and a long history in Atikokan (and in the telecom industry) and he
-# can say for certain that Atikokan has been practicing the current
-# time keeping since 1952, at least.
-
-# From Paul Eggert (2006-07-17):
-# Shanks & Pottenger say that Atikokan has agreed with Rainy River
-# ever since standard time was introduced, but the information from
-# McKinnon sounds more authoritative. For now, assume that Atikokan
-# switched to EST immediately after WWII era daylight saving time
-# ended. This matches the old (less-populous) America/Coral_Harbour
-# entry since our cutoff date of 1970, so we can move
-# America/Coral_Harbour to the 'backward' file.
-
# From Mark Brader (2010-03-06):
#
# Currently the database has:
@@ -1850,6 +1791,7 @@ Zone America/Toronto -5:17:32 - LMT 1895
-5:00 Canada E%sT 1946
-5:00 Toronto E%sT 1974
-5:00 Canada E%sT
+Link America/Toronto America/Nassau
Zone America/Thunder_Bay -5:57:00 - LMT 1895
-6:00 - CST 1910
-5:00 - EST 1942
@@ -1865,11 +1807,7 @@ Zone America/Rainy_River -6:18:16 - LMT 1895
-6:00 Canada C%sT 1940 Sep 29
-6:00 1:00 CDT 1942 Feb 9 2:00s
-6:00 Canada C%sT
-Zone America/Atikokan -6:06:28 - LMT 1895
- -6:00 Canada C%sT 1940 Sep 29
- -6:00 1:00 CDT 1942 Feb 9 2:00s
- -6:00 Canada C%sT 1945 Sep 30 2:00
- -5:00 - EST
+# For Atikokan see America/Panama.
# Manitoba
@@ -2021,7 +1959,7 @@ Zone America/Swift_Current -7:11:20 - LMT 1905 Sep
# Alberta
-# From Alois Triendl (2019-07-19):
+# From Alois Treindl (2019-07-19):
# There was no DST in Alberta in 1967... Calgary Herald, 29 April 1967.
# 1969, no DST, from Edmonton Journal 18 April 1969
#
@@ -2060,60 +1998,6 @@ Zone America/Edmonton -7:33:52 - LMT 1906 Sep
# Shanks & Pottenger write that since 1970 most of this region has
# been like Vancouver.
# Dawson Creek uses MST. Much of east BC is like Edmonton.
-# Matthews and Vincent (1998) write that Creston is like Dawson Creek.
-
-# It seems though that (re: Creston) is not entirely correct:
-
-# From Chris Walton (2011-12-01):
-# There are two areas within the Canadian province of British Columbia
-# that do not currently observe daylight saving:
-# a) The Creston Valley (includes the town of Creston and surrounding area)
-# b) The eastern half of the Peace River Regional District
-# (includes the cities of Dawson Creek and Fort St. John)
-
-# Earlier this year I stumbled across a detailed article about the time
-# keeping history of Creston; it was written by Tammy Hardwick who is the
-# manager of the Creston & District Museum. The article was written in May 2009.
-# http://www.ilovecreston.com/?p=articles&t=spec&ar=260
-# According to the article, Creston has not changed its clocks since June 1918.
-# i.e. Creston has been stuck on UT-7 for 93 years.
-# Dawson Creek, on the other hand, changed its clocks as recently as April 1972.
-
-# Unfortunately the exact date for the time change in June 1918 remains
-# unknown and will be difficult to ascertain. I e-mailed Tammy a few months
-# ago to ask if Sunday June 2 was a reasonable guess. She said it was just
-# as plausible as any other date (in June). She also said that after writing
-# the article she had discovered another time change in 1916; this is the
-# subject of another article which she wrote in October 2010.
-# http://www.creston.museum.bc.ca/index.php?module=comments&uop=view_comment&cm+id=56
-
-# Here is a summary of the three clock change events in Creston's history:
-# 1. 1884 or 1885: adoption of Mountain Standard Time (GMT-7)
-# Exact date unknown
-# 2. Oct 1916: switch to Pacific Standard Time (GMT-8)
-# Exact date in October unknown; Sunday October 1 is a reasonable guess.
-# 3. June 1918: switch to Pacific Daylight Time (GMT-7)
-# Exact date in June unknown; Sunday June 2 is a reasonable guess.
-# note 1:
-# On Oct 27/1918 when daylight saving ended in the rest of Canada,
-# Creston did not change its clocks.
-# note 2:
-# During WWII when the Federal Government legislated a mandatory clock change,
-# Creston did not oblige.
-# note 3:
-# There is no guarantee that Creston will remain on Mountain Standard Time
-# (UTC-7) forever.
-# The subject was debated at least once this year by the town Council.
-# http://www.bclocalnews.com/kootenay_rockies/crestonvalleyadvance/news/116760809.html
-
-# During a period WWII, summer time (Daylight saying) was mandatory in Canada.
-# In Creston, that was handled by shifting the area to PST (-8:00) then applying
-# summer time to cause the offset to be -7:00, the same as it had been before
-# the change. It can be argued that the timezone abbreviation during this
-# period should be PDT rather than MST, but that doesn't seem important enough
-# (to anyone) to further complicate the rules.
-
-# The transition dates (and times) are guesses.
# From Matt Johnson (2015-09-21):
# Fort Nelson, BC, Canada will cancel DST this year. So while previously they
@@ -2130,7 +2014,7 @@ Zone America/Edmonton -7:33:52 - LMT 1906 Sep
#
# From Paul Eggert (2019-07-25):
# Shanks says Fort Nelson did not observe DST in 1946, unlike Vancouver.
-# Alois Triendl confirmed this on 07-22, citing the 1946-04-27 Vancouver Daily
+# Alois Treindl confirmed this on 07-22, citing the 1946-04-27 Vancouver Daily
# Province. He also cited the 1946-09-28 Victoria Daily Times, which said
# that Vancouver, Victoria, etc. "change at midnight Saturday"; for now,
# guess they meant 02:00 Sunday since 02:00 was common practice in Vancouver.
@@ -2167,10 +2051,7 @@ Zone America/Fort_Nelson -8:10:47 - LMT 1884
-8:00 Vanc P%sT 1987
-8:00 Canada P%sT 2015 Mar 8 2:00
-7:00 - MST
-Zone America/Creston -7:46:04 - LMT 1884
- -7:00 - MST 1916 Oct 1
- -8:00 - PST 1918 Jun 2
- -7:00 - MST
+# For Creston see America/Phoenix.
# Northwest Territories, Nunavut, Yukon
@@ -2952,64 +2833,61 @@ Zone America/Tijuana -7:48:04 - LMT 1922 Jan 1 0:11:56
# Anguilla
# Antigua and Barbuda
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
-# Bahamas
-#
-# For 1899 Milne gives -5:09:29.5; round that.
-#
-# From P Chan (2020-11-27, corrected on 2020-12-02):
-# There were two periods of DST observed in 1942-1945: 1942-05-01
-# midnight to 1944-12-31 midnight and 1945-02-01 to 1945-10-17 midnight.
-# "midnight" should mean 24:00 from the context.
-#
-# War Time Order 1942 [1942-05-01] and War Time (No. 2) Order 1942 [1942-09-29]
-# Appendix to the Statutes of 7 George VI. and the Year 1942. p 34, 43
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA3-PA34
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA3-PA43
-#
-# War Time Order 1943 [1943-03-31] and War Time Order 1944 [1943-12-29]
-# Appendix to the Statutes of 8 George VI. and the Year 1943. p 9-10, 28-29
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA4-PA9
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA4-PA28
-#
-# War Time Order 1945 [1945-01-31] and the Order which revoke War Time Order
-# 1945 [1945-10-16] Appendix to the Statutes of 9 George VI. and the Year
-# 1945. p 160, 247-248
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA6-PA160
-# https://books.google.com/books?id=5rlNAQAAIAAJ&pg=RA6-PA247
-#
-# From Sue Williams (2006-12-07):
-# The Bahamas announced about a month ago that they plan to change their DST
-# rules to sync with the U.S. starting in 2007....
-# http://www.jonesbahamas.com/?c=45&a=10412
+# The Bahamas
+# See America/Toronto.
-# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
-Rule Bahamas 1942 only - May 1 24:00 1:00 W
-Rule Bahamas 1944 only - Dec 31 24:00 0 S
-Rule Bahamas 1945 only - Feb 1 0:00 1:00 W
-Rule Bahamas 1945 only - Aug 14 23:00u 1:00 P # Peace
-Rule Bahamas 1945 only - Oct 17 24:00 0 S
-Rule Bahamas 1964 1975 - Oct lastSun 2:00 0 S
-Rule Bahamas 1964 1975 - Apr lastSun 2:00 1:00 D
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Nassau -5:09:30 - LMT 1912 Mar 2
- -5:00 Bahamas E%sT 1976
- -5:00 US E%sT
# Barbados
# For 1899 Milne gives -3:58:29.2; round that.
+# From P Chan (2020-12-09 and 2020-12-11):
+# Standard time of GMT-4 was adopted in 1911.
+# Definition of Time Act, 1911 (1911-7) [1911-08-28]
+# 1912, Laws of Barbados (5 v.), OCLC Number: 919801291, Vol. 4, Image No. 522
+# 1944, Laws of Barbados (5 v.), OCLC Number: 84548697, Vol. 4, Image No. 122
+# http://llmc.com/browse.aspx?type=2&coll=85&div=297
+#
+# DST was observed in 1942-44.
+# Defence (Daylight Saving) Regulations, 1942, 1942-04-13
+# Defence (Daylight Saving) (Repeal) Regulations, 1942, 1942-08-22
+# Defence (Daylight Saving) Regulations, 1943, 1943-04-16
+# Defence (Daylight Saving) (Repeal) Regulations, 1943, 1943-09-01
+# Defence (Daylight Saving) Regulations, 1944, 1944-03-21
+# [Defence (Daylight Saving) (Amendment) Regulations 1944, 1944-03-28]
+# Defence (Daylight Saving) (Repeal) Regulations, 1944, 1944-08-30
+#
+# 1914-, Subsidiary Legis., Annual Vols. OCLC Number: 226290591
+# 1942: Image Nos. 527-528, 555-556
+# 1943: Image Nos. 178-179, 198
+# 1944: Image Nos. 113-115, 129
+# http://llmc.com/titledescfull.aspx?type=2&coll=85&div=297&set=98437
+#
+# From Tim Parenti (2021-02-20):
+# The transitions below are derived from P Chan's sources, except that the 1977
+# through 1980 transitions are from Shanks & Pottenger since we have no better
+# data there. Of particular note, the 1944 DST regulation only advanced the
+# time to "exactly three and a half hours later than Greenwich mean time", as
+# opposed to "three hours" in the 1942 and 1943 regulations.
+
# Rule NAME FROM TO - IN ON AT SAVE LETTER/S
+Rule Barb 1942 only - Apr 19 5:00u 1:00 D
+Rule Barb 1942 only - Aug 31 6:00u 0 S
+Rule Barb 1943 only - May 2 5:00u 1:00 D
+Rule Barb 1943 only - Sep 5 6:00u 0 S
+Rule Barb 1944 only - Apr 10 5:00u 0:30 -
+Rule Barb 1944 only - Sep 10 6:00u 0 S
Rule Barb 1977 only - Jun 12 2:00 1:00 D
Rule Barb 1977 1978 - Oct Sun>=1 2:00 0 S
Rule Barb 1978 1980 - Apr Sun>=15 2:00 1:00 D
Rule Barb 1979 only - Sep 30 2:00 0 S
Rule Barb 1980 only - Sep 25 2:00 0 S
# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Barbados -3:58:29 - LMT 1924 # Bridgetown
- -3:58:29 - BMT 1932 # Bridgetown Mean Time
+Zone America/Barbados -3:58:29 - LMT 1911 Aug 28 # Bridgetown
+ -4:00 Barb A%sT 1944
+ -4:00 Barb AST/-0330 1945
-4:00 Barb A%sT
# Belize
@@ -3171,6 +3049,9 @@ Zone Atlantic/Bermuda -4:19:18 - LMT 1890 # Hamilton
-4:00 Canada A%sT 1976
-4:00 US A%sT
+# Caribbean Netherlands
+# See America/Puerto_Rico.
+
# Cayman Is
# See America/Panama.
@@ -3399,7 +3280,7 @@ Zone America/Havana -5:29:28 - LMT 1890
-5:00 Cuba C%sT
# Dominica
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
# Dominican Republic
@@ -3451,7 +3332,7 @@ Zone America/El_Salvador -5:56:48 - LMT 1921 # San Salvador
# Guadeloupe
# St Barthélemy
# St Martin (French part)
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
# Guatemala
#
@@ -3638,7 +3519,7 @@ Zone America/Martinique -4:04:20 - LMT 1890 # Fort-de-France
-4:00 - AST
# Montserrat
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
# Nicaragua
#
@@ -3710,6 +3591,7 @@ Zone America/Managua -5:45:08 - LMT 1890
Zone America/Panama -5:18:08 - LMT 1890
-5:19:36 - CMT 1908 Apr 22 # Colón Mean Time
-5:00 - EST
+Link America/Panama America/Atikokan
Link America/Panama America/Cayman
# Puerto Rico
@@ -3719,10 +3601,29 @@ Zone America/Puerto_Rico -4:24:25 - LMT 1899 Mar 28 12:00 # San Juan
-4:00 - AST 1942 May 3
-4:00 US A%sT 1946
-4:00 - AST
+Link America/Puerto_Rico America/Anguilla
+Link America/Puerto_Rico America/Antigua
+Link America/Puerto_Rico America/Aruba
+Link America/Puerto_Rico America/Curacao
+Link America/Puerto_Rico America/Blanc-Sablon # Quebec (Lower North Shore)
+Link America/Puerto_Rico America/Dominica
+Link America/Puerto_Rico America/Grenada
+Link America/Puerto_Rico America/Guadeloupe
+Link America/Puerto_Rico America/Kralendijk # Caribbean Netherlands
+Link America/Puerto_Rico America/Lower_Princes # Sint Maarten
+Link America/Puerto_Rico America/Marigot # St Martin (French part)
+Link America/Puerto_Rico America/Montserrat
+Link America/Puerto_Rico America/Port_of_Spain # Trinidad & Tobago
+Link America/Puerto_Rico America/St_Barthelemy # St Barthélemy
+Link America/Puerto_Rico America/St_Kitts # St Kitts & Nevis
+Link America/Puerto_Rico America/St_Lucia
+Link America/Puerto_Rico America/St_Thomas # Virgin Islands (US)
+Link America/Puerto_Rico America/St_Vincent
+Link America/Puerto_Rico America/Tortola # Virgin Islands (UK)
# St Kitts-Nevis
# St Lucia
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
# St Pierre and Miquelon
# There are too many St Pierres elsewhere, so we'll use 'Miquelon'.
@@ -3733,7 +3634,10 @@ Zone America/Miquelon -3:44:40 - LMT 1911 May 15 # St Pierre
-3:00 Canada -03/-02
# St Vincent and the Grenadines
-# See America/Port_of_Spain.
+# See America/Puerto_Rico.
+
+# Sint Maarten
+# See America/Puerto_Rico.
# Turks and Caicos
#
@@ -3804,8 +3708,8 @@ Zone America/Grand_Turk -4:44:32 - LMT 1890
-5:00 US E%sT
# British Virgin Is
-# Virgin Is
-# See America/Port_of_Spain.
+# US Virgin Is
+# See America/Puerto_Rico.
# Local Variables:
diff --git a/make/data/tzdata/southamerica b/make/data/tzdata/southamerica
index 566dabfadb46e..503ed65f58036 100644
--- a/make/data/tzdata/southamerica
+++ b/make/data/tzdata/southamerica
@@ -597,7 +597,7 @@ Zone America/Argentina/Ushuaia -4:33:12 - LMT 1894 Oct 31
-3:00 - -03
# Aruba
-Link America/Curacao America/Aruba
+# See America/Puerto_Rico.
# Bolivia
# Zone NAME STDOFF RULES FORMAT [UNTIL]
@@ -1392,35 +1392,14 @@ Zone America/Bogota -4:56:16 - LMT 1884 Mar 13
# no information; probably like America/Bogota
# Curaçao
-
-# Milne gives 4:35:46.9 for Curaçao mean time; round to nearest.
-#
-# From Paul Eggert (2006-03-22):
-# Shanks & Pottenger say that The Bottom and Philipsburg have been at
-# -4:00 since standard time was introduced on 1912-03-02; and that
-# Kralendijk and Rincon used Kralendijk Mean Time (-4:33:08) from
-# 1912-02-02 to 1965-01-01. The former is dubious, since S&P also say
-# Saba Island has been like Curaçao.
-# This all predates our 1970 cutoff, though.
-#
-# By July 2007 Curaçao and St Maarten are planned to become
-# associated states within the Netherlands, much like Aruba;
-# Bonaire, Saba and St Eustatius would become directly part of the
-# Netherlands as Kingdom Islands. This won't affect their time zones
-# though, as far as we know.
+# See America/Puerto_Rico.
#
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Curacao -4:35:47 - LMT 1912 Feb 12 # Willemstad
- -4:30 - -0430 1965
- -4:00 - AST
-
# From Arthur David Olson (2011-06-15):
# use links for places with new iso3166 codes.
# The name "Lower Prince's Quarter" is both longer than fourteen characters
-# and contains an apostrophe; use "Lower_Princes" below.
-
-Link America/Curacao America/Lower_Princes # Sint Maarten
-Link America/Curacao America/Kralendijk # Caribbean Netherlands
+# and contains an apostrophe; use "Lower_Princes"....
+# From Paul Eggert (2021-09-29):
+# These backward-compatibility links now are in the 'northamerica' file.
# Ecuador
#
@@ -1563,11 +1542,40 @@ Zone America/Cayenne -3:29:20 - LMT 1911 Jul
-3:00 - -03
# Guyana
+
+# From P Chan (2020-11-27):
+# https://books.google.com/books?id=5-5CAQAAMAAJ&pg=SA1-PA547
+# The Official Gazette of British Guiana. (New Series.) Vol. XL. July to
+# December, 1915, p 1547, lists as several notes:
+# "Local Mean Time 3 hours 52 mins. 39 secs. slow of Greenwich Mean Time
+# (Georgetown.) From 1st August, 1911, British Guiana Standard Mean Time 4
+# hours slow of Greenwich Mean Time, by notice in Official Gazette on 1st July,
+# 1911. From 1st March, 1915, British Guiana Standard Mean Time 3 hours 45
+# mins. 0 secs. slow of Greenwich Mean Time, by notice in Official Gazette on
+# 23rd January, 1915."
+#
+# https://parliament.gov.gy/documents/acts/10923-act_no._27_of_1975_-_interpretation_and_general_clauses_(amendment)_act_1975.pdf
+# Interpretation and general clauses (Amendment) Act 1975 (Act No. 27 of 1975)
+# [dated 1975-07-31]
+# "This Act...shall come into operation on 1st August, 1975."
+# "...where any expression of time occurs...the time referred to shall signify
+# the standard time of Guyana which shall be three hours behind Greenwich Mean
+# Time."
+#
+# Circular No. 10/1992 dated 1992-03-20
+# https://dps.gov.gy/wp-content/uploads/2018/12/1992-03-20-Circular-010.pdf
+# "...cabinet has decided that with effect from Sunday 29th March, 1992, Guyana
+# Standard Time would be re-established at 01:00 hours by adjusting the hands
+# of the clock back to 24:00 hours."
+# Legislated in the Interpretation and general clauses (Amendment) Act 1992
+# (Act No. 6 of 1992) [passed 1992-03-27, published 1992-04-18]
+# https://parliament.gov.gy/documents/acts/5885-6_of_1992_interpretation_and_general_clauses_(amendment)_act_1992.pdf
+
# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Guyana -3:52:40 - LMT 1915 Mar # Georgetown
- -3:45 - -0345 1975 Jul 31
- -3:00 - -03 1991
-# IATA SSIM (1996-06) says -4:00. Assume a 1991 switch.
+Zone America/Guyana -3:52:39 - LMT 1911 Aug 1 # Georgetown
+ -4:00 - -04 1915 Mar 1
+ -3:45 - -0345 1975 Aug 1
+ -3:00 - -03 1992 Mar 29 1:00
-4:00 - -04
# Paraguay
@@ -1708,24 +1716,7 @@ Zone America/Paramaribo -3:40:40 - LMT 1911
-3:00 - -03
# Trinidad and Tobago
-# Zone NAME STDOFF RULES FORMAT [UNTIL]
-Zone America/Port_of_Spain -4:06:04 - LMT 1912 Mar 2
- -4:00 - AST
-
-# These all agree with Trinidad and Tobago since 1970.
-Link America/Port_of_Spain America/Anguilla
-Link America/Port_of_Spain America/Antigua
-Link America/Port_of_Spain America/Dominica
-Link America/Port_of_Spain America/Grenada
-Link America/Port_of_Spain America/Guadeloupe
-Link America/Port_of_Spain America/Marigot # St Martin (French part)
-Link America/Port_of_Spain America/Montserrat
-Link America/Port_of_Spain America/St_Barthelemy # St Barthélemy
-Link America/Port_of_Spain America/St_Kitts # St Kitts & Nevis
-Link America/Port_of_Spain America/St_Lucia
-Link America/Port_of_Spain America/St_Thomas # Virgin Islands (US)
-Link America/Port_of_Spain America/St_Vincent
-Link America/Port_of_Spain America/Tortola # Virgin Islands (UK)
+# See America/Puerto_Rico.
# Uruguay
# From Paul Eggert (1993-11-18):
diff --git a/make/data/tzdata/zone.tab b/make/data/tzdata/zone.tab
index 28db0745e08be..0420a6934c92b 100644
--- a/make/data/tzdata/zone.tab
+++ b/make/data/tzdata/zone.tab
@@ -26,7 +26,7 @@
# This file is in the public domain, so clarified as of
# 2009-05-17 by Arthur David Olson.
#
-# From Paul Eggert (2018-06-27):
+# From Paul Eggert (2021-09-20):
# This file is intended as a backward-compatibility aid for older programs.
# New programs should use zone1970.tab. This file is like zone1970.tab (see
# zone1970.tab's comments), but with the following additional restrictions:
@@ -39,6 +39,9 @@
# clocks have agreed since 1970; this is a narrower definition than
# that of zone1970.tab.
#
+# Unlike zone1970.tab, a row's third column can be a Link from
+# 'backward' instead of a Zone.
+#
# This table is intended as an aid for users, to help them select timezones
# appropriate for their practical needs. It is not intended to take or
# endorse any position on legal or territorial claims.
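For a consumer's-eye view of the format, here is a minimal sketch that reads zone.tab rows (tab-separated: country code, ISO 6709 coordinates, timezone ID, optional comment), skipping '#' comment lines; the file path is illustrative:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ZoneTabReader {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Path.of("make/data/tzdata/zone.tab"))) {
            if (line.isBlank() || line.startsWith("#")) continue;
            String[] cols = line.split("\t");
            // Per the comment above, cols[2] may now be a Link from 'backward'
            // (e.g. America/Port_of_Spain) rather than a Zone.
            System.out.println(cols[0] + " -> " + cols[2] + " @ " + cols[1]);
        }
    }
}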
@@ -251,7 +254,7 @@ KE -0117+03649 Africa/Nairobi
KG +4254+07436 Asia/Bishkek
KH +1133+10455 Asia/Phnom_Penh
KI +0125+17300 Pacific/Tarawa Gilbert Islands
-KI -0308-17105 Pacific/Enderbury Phoenix Islands
+KI -0247-17143 Pacific/Kanton Phoenix Islands
KI +0152-15720 Pacific/Kiritimati Line Islands
KM -1141+04316 Indian/Comoro
KN +1718-06243 America/St_Kitts
@@ -414,7 +417,7 @@ TK -0922-17114 Pacific/Fakaofo
TL -0833+12535 Asia/Dili
TM +3757+05823 Asia/Ashgabat
TN +3648+01011 Africa/Tunis
-TO -2110-17510 Pacific/Tongatapu
+TO -210800-1751200 Pacific/Tongatapu
TR +4101+02858 Europe/Istanbul
TT +1039-06131 America/Port_of_Spain
TV -0831+17913 Pacific/Funafuti
diff --git a/make/devkit/createJMHBundle.sh b/make/devkit/createJMHBundle.sh
index 9e0b9c06e4f1f..059fb7acb6fb9 100644
--- a/make/devkit/createJMHBundle.sh
+++ b/make/devkit/createJMHBundle.sh
@@ -1,6 +1,6 @@
#!/bin/bash -e
#
-# Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2018, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -26,7 +26,7 @@
# Create a bundle in the build directory, containing what's needed to
# build and run JMH microbenchmarks from the OpenJDK build.
-JMH_VERSION=1.28
+JMH_VERSION=1.34
COMMONS_MATH3_VERSION=3.2
JOPT_SIMPLE_VERSION=4.6
diff --git a/make/hotspot/gensrc/GensrcAdlc.gmk b/make/hotspot/gensrc/GensrcAdlc.gmk
index ba8165c2ff036..25c132729142e 100644
--- a/make/hotspot/gensrc/GensrcAdlc.gmk
+++ b/make/hotspot/gensrc/GensrcAdlc.gmk
@@ -149,12 +149,14 @@ ifeq ($(call check-jvm-feature, compiler2), true)
ifeq ($(call check-jvm-feature, shenandoahgc), true)
AD_SRC_FILES += $(call uniq, $(wildcard $(foreach d, $(AD_SRC_ROOTS), \
$d/cpu/$(HOTSPOT_TARGET_CPU_ARCH)/gc/shenandoah/shenandoah_$(HOTSPOT_TARGET_CPU).ad \
+ $d/cpu/$(HOTSPOT_TARGET_CPU_ARCH)/gc/shenandoah/shenandoah_$(HOTSPOT_TARGET_CPU_ARCH).ad \
)))
endif
ifeq ($(call check-jvm-feature, zgc), true)
AD_SRC_FILES += $(call uniq, $(wildcard $(foreach d, $(AD_SRC_ROOTS), \
$d/cpu/$(HOTSPOT_TARGET_CPU_ARCH)/gc/z/z_$(HOTSPOT_TARGET_CPU).ad \
+ $d/cpu/$(HOTSPOT_TARGET_CPU_ARCH)/gc/z/z_$(HOTSPOT_TARGET_CPU_ARCH).ad \
)))
endif
diff --git a/make/hotspot/lib/CompileGtest.gmk b/make/hotspot/lib/CompileGtest.gmk
index 03c4de783cd94..cb2bbccc1686a 100644
--- a/make/hotspot/lib/CompileGtest.gmk
+++ b/make/hotspot/lib/CompileGtest.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -101,7 +101,7 @@ $(eval $(call SetupJdkLibrary, BUILD_GTEST_LIBJVM, \
CFLAGS_windows := -EHsc, \
CFLAGS_macosx := -DGTEST_OS_MAC=1, \
DISABLED_WARNINGS_gcc := $(DISABLED_WARNINGS_gcc) \
- undef, \
+ undef stringop-overflow, \
DISABLED_WARNINGS_clang := $(DISABLED_WARNINGS_clang) \
undef switch format-nonliteral tautological-undefined-compare \
self-assign-overloaded, \
diff --git a/make/jdk/src/classes/build/tools/depend/Depend.java b/make/jdk/src/classes/build/tools/depend/Depend.java
index 2a2a568ef368f..74df2af2d575f 100644
--- a/make/jdk/src/classes/build/tools/depend/Depend.java
+++ b/make/jdk/src/classes/build/tools/depend/Depend.java
@@ -102,7 +102,7 @@ public void init(JavacTask jt, String... args) {
private final MessageDigest apiHash;
{
try {
- apiHash = MessageDigest.getInstance("MD5");
+ apiHash = MessageDigest.getInstance("SHA-256");
} catch (NoSuchAlgorithmException ex) {
throw new IllegalStateException(ex);
}
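The digest switch is drop-in because MessageDigest takes the algorithm name as a string; a standalone sketch of the same pattern (HexFormat needs JDK 17+):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class ApiHashDemo {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Same pattern as Depend.java's apiHash, fed a stand-in string here.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest("public void init()".getBytes(StandardCharsets.UTF_8));
        System.out.println(HexFormat.of().formatHex(digest)); // 64 hex chars
    }
}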
diff --git a/make/jdk/src/classes/build/tools/makezipreproducible/MakeZipReproducible.java b/make/jdk/src/classes/build/tools/makezipreproducible/MakeZipReproducible.java
new file mode 100644
index 0000000000000..48f08541d6731
--- /dev/null
+++ b/make/jdk/src/classes/build/tools/makezipreproducible/MakeZipReproducible.java
@@ -0,0 +1,231 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+package build.tools.makezipreproducible;
+
+import java.io.*;
+import java.nio.file.*;
+import java.nio.file.attribute.BasicFileAttributes;
+import java.nio.channels.Channels;
+import java.nio.channels.FileChannel;
+import java.util.*;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipException;
+import java.util.zip.ZipFile;
+import java.util.zip.ZipInputStream;
+import java.util.zip.ZipOutputStream;
+
+/**
+ * Generate a zip file in a "reproducible" manner from the input zip file.
+ * Standard zip tools rely on OS file list querying whose ordering can vary
+ * by platform architecture, this class ensures the zip entries are ordered
+ * and also supports SOURCE_DATE_EPOCH timestamps.
+ */
+public class MakeZipReproducible {
+ String input_file = null;
+ String fname = null;
+ String zname = "";
+ long timestamp = -1L;
+ boolean verbose = false;
+
+ // Keep a sorted map of ZipEntrys to be processed, so that the zip is reproducible
+ SortedMap<String, ZipEntry> entries = new TreeMap<>();
+
+ private boolean ok;
+
+ public MakeZipReproducible() {
+ }
+
+ public synchronized boolean run(String args[]) {
+ ok = true;
+ if (!parseArgs(args)) {
+ return false;
+ }
+ try {
+ zname = fname.replace(File.separatorChar, '/');
+ if (zname.startsWith("./")) {
+ zname = zname.substring(2);
+ }
+
+ if (verbose) System.out.println("Input zip file: " + input_file);
+
+ File inFile = new File(input_file);
+ if (!inFile.exists()) {
+ error("Input zip file does not exist");
+ ok = false;
+ } else {
+ File zipFile = new File(fname);
+ // Check archive to create does not exist
+ if (!zipFile.exists()) {
+ // Process input ZipEntries
+ ok = processInputEntries(inFile);
+ if (ok) {
+ try (FileOutputStream out = new FileOutputStream(fname)) {
+ ok = create(inFile, new BufferedOutputStream(out, 4096));
+ }
+ }
+ } else {
+ error("Target zip file "+fname+" already exists.");
+ ok = false;
+ }
+ }
+ } catch (IOException e) {
+ fatalError(e);
+ ok = false;
+ } catch (Error ee) {
+ ee.printStackTrace();
+ ok = false;
+ } catch (Throwable t) {
+ t.printStackTrace();
+ ok = false;
+ }
+ return ok;
+ }
+
+ boolean parseArgs(String args[]) {
+ try {
+ int count = 0;
+ while(count < args.length) {
+ if (args[count].startsWith("-")) {
+ String flag = args[count].substring(1);
+ switch (flag.charAt(0)) {
+ case 'f':
+ fname = args[++count];
+ break;
+ case 't':
+ // SOURCE_DATE_EPOCH timestamp specified
+ timestamp = Long.parseLong(args[++count]) * 1000;
+ break;
+ case 'v':
+ verbose = true;
+ break;
+ default:
+ error(String.format("Illegal option -%s", String.valueOf(flag.charAt(0))));
+ usageError();
+ return false;
+ }
+ } else {
+ // input zip file
+ if (input_file != null) {
+ error("Input zip file already specified");
+ usageError();
+ return false;
+ }
+ input_file = args[count];
+ }
+ count++;
+ }
+ } catch (ArrayIndexOutOfBoundsException e) {
+ usageError();
+ return false;
+ } catch (NumberFormatException e) {
+ usageError();
+ return false;
+ }
+ if (fname == null) {
+ error("-f must be specified");
+ usageError();
+ return false;
+ }
+ // If no files specified then default to current directory
+ if (input_file == null) {
+ error("No input zip file specified");
+ usageError();
+ return false;
+ }
+
+ return true;
+ }
+
+ // Process input zip file and add to sorted entries set
+ boolean processInputEntries(File inFile) throws IOException {
+ try (ZipFile zipFile = new ZipFile(inFile)) {
+ zipFile.stream().forEach(entry -> entries.put(entry.getName(), entry));
+ }
+
+ return true;
+ }
+
+ // Create new zip from entries
+ boolean create(File inFile, OutputStream out) throws IOException
+ {
+ try (ZipFile zipFile = new ZipFile(inFile);
+ ZipOutputStream zos = new ZipOutputStream(out)) {
+ for (Map.Entry<String, ZipEntry> entry : entries.entrySet()) {
+ ZipEntry zipEntry = entry.getValue();
+ if (zipEntry.getSize() > 0) {
+ try (InputStream eis = zipFile.getInputStream(zipEntry)) {
+ addEntry(zos, zipEntry, eis);
+ }
+ } else {
+ addEntry(zos, zipEntry, null);
+ }
+ }
+ }
+ return true;
+ }
+
+ // Add Entry and data to Zip
+ void addEntry(ZipOutputStream zos, ZipEntry entry, InputStream entryInputStream) throws IOException {
+ if (verbose) {
+ System.out.println("Adding: "+entry.getName());
+ }
+
+ // Set to specified timestamp if set otherwise leave as original lastModified time
+ if (timestamp != -1L) {
+ entry.setTime(timestamp);
+ }
+
+ zos.putNextEntry(entry);
+ if (entry.getSize() > 0 && entryInputStream != null) {
+ entryInputStream.transferTo(zos);
+ }
+ zos.closeEntry();
+ }
+
+ void usageError() {
+ error(
+ "Usage: MakeZipReproducible [-v] [-t ] -f \n" +
+ "Options:\n" +
+ " -v verbose output\n" +
+ " -f specify archive file name to create\n" +
+ " -t specific SOURCE_DATE_EPOCH value to use for timestamps\n" +
+ " input_zip_file re-written as a reproducible zip output_zip_file.\n");
+ }
+
+ void fatalError(Exception e) {
+ e.printStackTrace();
+ }
+
+ protected void error(String s) {
+ System.err.println(s);
+ }
+
+ public static void main(String args[]) {
+ MakeZipReproducible z = new MakeZipReproducible();
+ System.exit(z.run(args) ? 0 : 1);
+ }
+}
+
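A usage sketch derived from parseArgs above; the file names and epoch value are placeholders:

public class MakeZipReproducibleDemo {
    public static void main(String[] args) {
        // Equivalent to:
        //   java build.tools.makezipreproducible.MakeZipReproducible \
        //       -v -t 1640000000 -f out.zip in.zip
        // -t takes SOURCE_DATE_EPOCH in seconds; run() converts it to millis.
        build.tools.makezipreproducible.MakeZipReproducible.main(new String[] {
            "-v", "-t", "1640000000", "-f", "out.zip", "in.zip"
        });
    }
}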
diff --git a/make/modules/java.base/Gendata.gmk b/make/modules/java.base/Gendata.gmk
index f9c25c8c53fce..4b894eeae4a66 100644
--- a/make/modules/java.base/Gendata.gmk
+++ b/make/modules/java.base/Gendata.gmk
@@ -60,7 +60,11 @@ TARGETS += $(GENDATA_CURDATA)
################################################################################
-GENDATA_CACERTS_SRC := $(TOPDIR)/make/data/cacerts/
+ifneq ($(CACERTS_SRC), )
+ GENDATA_CACERTS_SRC := $(CACERTS_SRC)
+else
+ GENDATA_CACERTS_SRC := $(TOPDIR)/make/data/cacerts/
+endif
GENDATA_CACERTS := $(SUPPORT_OUTPUTDIR)/modules_libs/java.base/security/cacerts
$(GENDATA_CACERTS): $(BUILD_TOOLS_JDK) $(wildcard $(GENDATA_CACERTS_SRC)/*)
diff --git a/make/modules/java.desktop/lib/Awt2dLibraries.gmk b/make/modules/java.desktop/lib/Awt2dLibraries.gmk
index 4d0c0c00dbf0b..a0c4082554626 100644
--- a/make/modules/java.desktop/lib/Awt2dLibraries.gmk
+++ b/make/modules/java.desktop/lib/Awt2dLibraries.gmk
@@ -435,7 +435,7 @@ endif
ifeq ($(USE_EXTERNAL_HARFBUZZ), true)
LIBFONTMANAGER_EXTRA_SRC =
- BUILD_LIBFONTMANAGER_FONTLIB += $(LIBHARFBUZZ_LIBS)
+ BUILD_LIBFONTMANAGER_FONTLIB += $(HARFBUZZ_LIBS)
else
LIBFONTMANAGER_EXTRA_SRC = libharfbuzz
@@ -834,6 +834,19 @@ endif
################################################################################
+# MACOSX_METAL_VERSION_MIN specifies the lowest version of Macosx
+# that should be used to compile Metal shaders. We support Metal
+# pipeline only on Macosx >=10.14. For Macosx versions <10.14 even if
+# we enable Metal pipeline using -Dsun.java2d.metal=true, at
+# runtime we force it to use OpenGL pipeline. And MACOSX_VERSION_MIN
+# for aarch64 has always been >10.14, so we continue to use
+# MACOSX_VERSION_MIN for aarch64.
+ifeq ($(OPENJDK_TARGET_CPU_ARCH), xaarch64)
+ MACOSX_METAL_VERSION_MIN=$(MACOSX_VERSION_MIN)
+else
+ MACOSX_METAL_VERSION_MIN=10.14.0
+endif
+
ifeq ($(call isTargetOs, macosx), true)
SHADERS_SRC := $(TOPDIR)/src/java.desktop/macosx/native/libawt_lwawt/java2d/metal/shaders.metal
SHADERS_SUPPORT_DIR := $(SUPPORT_OUTPUTDIR)/native/java.desktop/libosxui
@@ -845,7 +858,9 @@ ifeq ($(call isTargetOs, macosx), true)
DEPS := $(SHADERS_SRC), \
OUTPUT_FILE := $(SHADERS_AIR), \
SUPPORT_DIR := $(SHADERS_SUPPORT_DIR), \
- COMMAND := $(METAL) -c -std=osx-metal2.0 -o $(SHADERS_AIR) $(SHADERS_SRC), \
+ COMMAND := $(METAL) -c -std=osx-metal2.0 \
+ -mmacosx-version-min=$(MACOSX_METAL_VERSION_MIN) \
+ -o $(SHADERS_AIR) $(SHADERS_SRC), \
))
$(eval $(call SetupExecute, metallib_shaders, \
diff --git a/make/modules/jdk.incubator.vector/Lib.gmk b/make/modules/jdk.incubator.vector/Lib.gmk
index a759b5b6745bc..bab2c9fe8a592 100644
--- a/make/modules/jdk.incubator.vector/Lib.gmk
+++ b/make/modules/jdk.incubator.vector/Lib.gmk
@@ -28,15 +28,15 @@ include LibCommon.gmk
################################################################################
ifeq ($(call isTargetOs, linux windows)+$(call isTargetCpu, x86_64)+$(INCLUDE_COMPILER2), true+true+true)
- $(eval $(call SetupJdkLibrary, BUILD_LIBSVML, \
- NAME := svml, \
+ $(eval $(call SetupJdkLibrary, BUILD_LIBJSVML, \
+ NAME := jsvml, \
CFLAGS := $(CFLAGS_JDKLIB), \
LDFLAGS := $(LDFLAGS_JDKLIB) \
$(call SET_SHARED_LIBRARY_ORIGIN), \
LDFLAGS_windows := -defaultlib:msvcrt, \
))
- TARGETS += $(BUILD_LIBSVML)
+ TARGETS += $(BUILD_LIBJSVML)
endif
################################################################################
diff --git a/make/modules/jdk.javadoc/Gendata.gmk b/make/modules/jdk.javadoc/Gendata.gmk
index c648df6e032f6..50ef87545a4cd 100644
--- a/make/modules/jdk.javadoc/Gendata.gmk
+++ b/make/modules/jdk.javadoc/Gendata.gmk
@@ -72,7 +72,9 @@ $(JDK_JAVADOC_DIR)/_element_lists.marker: \
$(MODULE_INFOS)
$(call MakeTargetDir)
$(call LogInfo, Creating javadoc element lists)
- $(RM) -r $(ELEMENT_LISTS_DIR)
+ $(RM) $(ELEMENT_LISTS_DIR)/element-list-{$(call CommaList, \
+ $(call sequence, $(GENERATE_SYMBOLS_FROM_JDK_VERSION), \
+ $(JDK_SOURCE_TARGET_VERSION)))}.txt
# Generate element-list files for JDK 11 to current-1
$(call ExecuteWithLog, $@_historic, \
$(JAVA_SMALL) $(INTERIM_LANGTOOLS_ARGS) \
diff --git a/make/test/BuildFailureHandler.gmk b/make/test/BuildFailureHandler.gmk
index d0a81a11eed2f..e69c9bf6fea68 100644
--- a/make/test/BuildFailureHandler.gmk
+++ b/make/test/BuildFailureHandler.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -60,24 +60,6 @@ $(eval $(call SetupJavaCompilation, BUILD_FAILURE_HANDLER, \
TARGETS += $(BUILD_FAILURE_HANDLER)
-################################################################################
-
-ifeq ($(call isTargetOs, windows), true)
-
- $(eval $(call SetupNativeCompilation, BUILD_LIBTIMEOUT_HANDLER, \
- NAME := timeoutHandler, \
- SRC := $(FH_BASEDIR)/src/windows/native/libtimeoutHandler, \
- OBJECT_DIR := $(FH_SUPPORT)/libtimeoutHandler, \
- OUTPUT_DIR := $(FH_SUPPORT), \
- CFLAGS := $(CFLAGS_JDKLIB), \
- LDFLAGS := $(LDFLAGS_JDKLIB), \
- OPTIMIZATION := LOW, \
- ))
-
- TARGETS += $(BUILD_LIBTIMEOUT_HANDLER)
-
-endif
-
################################################################################
# Targets for building test-image.
################################################################################
@@ -99,10 +81,6 @@ IMAGES_TARGETS += $(COPY_FH)
# Use JTREG_TESTS for jtreg tests parameter
#
RUN_DIR := $(FH_SUPPORT)/test
-# Add the dir of the dll to the path on windows
-ifeq ($(call isTargetOs, windows), true)
- export PATH := $(PATH);$(FH_SUPPORT)
-endif
test:
$(RM) -r $(RUN_DIR)
diff --git a/make/test/JtregNativeHotspot.gmk b/make/test/JtregNativeHotspot.gmk
index cfddb6fe0d962..58390110251d4 100644
--- a/make/test/JtregNativeHotspot.gmk
+++ b/make/test/JtregNativeHotspot.gmk
@@ -863,7 +863,7 @@ ifeq ($(call isTargetOs, linux), true)
BUILD_HOTSPOT_JTREG_EXECUTABLES_LIBS_exeFPRegs := -ldl
BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libAsyncGetCallTraceTest := -ldl
else
- BUILD_HOTSPOT_JTREG_EXCLUDE += libtest-rw.c libtest-rwx.c libTestJNI.c \
+ BUILD_HOTSPOT_JTREG_EXCLUDE += libtest-rw.c libtest-rwx.c \
exeinvoke.c exestack-gap.c exestack-tls.c libAsyncGetCallTraceTest.cpp
endif
@@ -871,7 +871,7 @@ BUILD_HOTSPOT_JTREG_EXECUTABLES_LIBS_exesigtest := -ljvm
ifeq ($(call isTargetOs, windows), true)
BUILD_HOTSPOT_JTREG_EXECUTABLES_CFLAGS_exeFPRegs := -MT
- BUILD_HOTSPOT_JTREG_EXCLUDE += exesigtest.c libterminatedThread.c
+ BUILD_HOTSPOT_JTREG_EXCLUDE += exesigtest.c libterminatedThread.c libTestJNI.c
BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libatExit := jvm.lib
else
BUILD_HOTSPOT_JTREG_LIBRARIES_LIBS_libbootclssearch_agent += -lpthread
diff --git a/make/test/JtregNativeJdk.gmk b/make/test/JtregNativeJdk.gmk
index 8ed5cbd2a58b8..270ff93d1475d 100644
--- a/make/test/JtregNativeJdk.gmk
+++ b/make/test/JtregNativeJdk.gmk
@@ -87,8 +87,9 @@ ifeq ($(call isTargetOs, macosx), true)
-framework Cocoa -framework SystemConfiguration
else
BUILD_JDK_JTREG_EXCLUDE += libTestMainKeyWindow.m
- BUILD_JDK_JTREG_EXCLUDE += exeJniInvocationTest.c
BUILD_JDK_JTREG_EXCLUDE += libTestDynamicStore.m
+ BUILD_JDK_JTREG_EXCLUDE += exeJniInvocationTest.c
+ BUILD_JDK_JTREG_EXCLUDE += exeLibraryCache.c
endif
ifeq ($(call isTargetOs, linux), true)
diff --git a/src/demo/share/java2d/J2DBench/Makefile b/src/demo/share/java2d/J2DBench/Makefile
index 16cbe5cd86ea2..04b0818a2c35b 100644
--- a/src/demo/share/java2d/J2DBench/Makefile
+++ b/src/demo/share/java2d/J2DBench/Makefile
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2002, 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2002, 2021, Oracle and/or its affiliates. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
@@ -80,10 +80,10 @@ SCM_DIRs = .hg .svn CVS RCS SCCS Codemgr_wsdata deleted_files
all: mkdirs J2DBench.jar J2DAnalyzer.jar
run: mkdirs J2DBench.jar
- java -jar J2DBench.jar
+ java -jar $(DIST)/J2DBench.jar
analyze: mkdirs J2DAnalyzer.jar
- java -jar J2DAnalyzer.jar
+ java -jar $(DIST)/J2DAnalyzer.jar
J2DBench.jar: \
$(J2DBENCH_CLASSES) $(J2DBENCH_RESOURCES) \
diff --git a/src/demo/share/jfc/SwingSet2/TableDemo.java b/src/demo/share/jfc/SwingSet2/TableDemo.java
index 48199c564a850..42a8825c6dd81 100644
--- a/src/demo/share/jfc/SwingSet2/TableDemo.java
+++ b/src/demo/share/jfc/SwingSet2/TableDemo.java
@@ -549,7 +549,10 @@ public JScrollPane createTable() {
public int getRowCount() { return data.length;}
public Object getValueAt(int row, int col) {return data[row][col];}
public String getColumnName(int column) {return names[column];}
- public Class<?> getColumnClass(int c) {return getValueAt(0, c).getClass();}
+ public Class<?> getColumnClass(int c) {
+ Object obj = getValueAt(0, c);
+ return obj != null ? obj.getClass() : Object.class;
+ }
public boolean isCellEditable(int row, int col) {return col != 5;}
public void setValueAt(Object aValue, int row, int column) { data[row][column] = aValue; }
};
@@ -738,4 +741,13 @@ void updateDragEnabled(boolean dragEnabled) {
footerTextField.setDragEnabled(dragEnabled);
}
+ @Override
+ public ImageIcon createImageIcon(String filename, String description) {
+ ImageIcon imageIcon = super.createImageIcon(filename, description);
+ AccessibleContext context = imageIcon.getAccessibleContext();
+ if (context != null) {
+ context.setAccessibleName(description);
+ }
+ return imageIcon;
+ }
}
diff --git a/src/hotspot/cpu/aarch64/aarch64.ad b/src/hotspot/cpu/aarch64/aarch64.ad
index 6738708d0eb2b..a030555c9ffe9 100644
--- a/src/hotspot/cpu/aarch64/aarch64.ad
+++ b/src/hotspot/cpu/aarch64/aarch64.ad
@@ -2373,6 +2373,8 @@ const bool Matcher::match_rule_supported(int opcode) {
bool ret_value = true;
switch (opcode) {
+ case Op_OnSpinWait:
+ return VM_Version::supports_on_spin_wait();
case Op_CacheWB:
case Op_CacheWBPreSync:
case Op_CacheWBPostSync:
@@ -2613,6 +2615,13 @@ const RegMask Matcher::method_handle_invoke_SP_save_mask() {
bool size_fits_all_mem_uses(AddPNode* addp, int shift) {
for (DUIterator_Fast imax, i = addp->fast_outs(imax); i < imax; i++) {
Node* u = addp->fast_out(i);
+ if (u->is_LoadStore()) {
+ // On AArch64, LoadStoreNodes (i.e. compare and swap
+ // instructions) only take register indirect as an operand, so
+ // any attempt to use an AddPNode as an input to a LoadStoreNode
+ // must fail.
+ return false;
+ }
if (u->is_Mem()) {
int opsize = u->as_Mem()->memory_size();
assert(opsize > 0, "unexpected memory operand size");
@@ -3841,7 +3850,7 @@ encode %{
// Try to CAS m->owner from NULL to current thread.
__ add(tmp, disp_hdr, (ObjectMonitor::owner_offset_in_bytes()-markWord::monitor_value));
__ cmpxchg(tmp, zr, rthread, Assembler::xword, /*acquire*/ true,
- /*release*/ true, /*weak*/ false, noreg); // Sets flags for result
+ /*release*/ true, /*weak*/ false, rscratch1); // Sets flags for result
// Store a non-null value into the box to avoid looking like a re-entrant
// lock. The fast-path monitor unlock code checks for
@@ -3850,6 +3859,15 @@ encode %{
__ mov(tmp, (address)markWord::unused_mark().value());
__ str(tmp, Address(box, BasicLock::displaced_header_offset_in_bytes()));
+ __ br(Assembler::EQ, cont); // CAS success means locking succeeded
+
+ __ cmp(rscratch1, rthread);
+ __ br(Assembler::NE, cont); // Check for recursive locking
+
+ // Recursive lock case
+ __ increment(Address(disp_hdr, ObjectMonitor::recursions_offset_in_bytes() - markWord::monitor_value), 1);
+ // flag == EQ still from the cmp above, checking if this is a reentrant lock
+
__ bind(cont);
// flag == EQ indicates success
// flag == NE indicates failure
@@ -3897,11 +3915,20 @@ encode %{
__ add(tmp, tmp, -(int)markWord::monitor_value); // monitor
__ ldr(rscratch1, Address(tmp, ObjectMonitor::owner_offset_in_bytes()));
__ ldr(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset_in_bytes()));
- __ eor(rscratch1, rscratch1, rthread); // Will be 0 if we are the owner.
- __ orr(rscratch1, rscratch1, disp_hdr); // Will be 0 if there are 0 recursions
- __ cmp(rscratch1, zr); // Sets flags for result
+
+ Label notRecursive;
+ __ cmp(rscratch1, rthread);
__ br(Assembler::NE, cont);
+ __ cbz(disp_hdr, notRecursive);
+
+ // Recursive lock
+ __ sub(disp_hdr, disp_hdr, 1u);
+ __ str(disp_hdr, Address(tmp, ObjectMonitor::recursions_offset_in_bytes()));
+ // flag == EQ was set in the ownership check above
+ __ b(cont);
+
+ __ bind(notRecursive);
__ ldr(rscratch1, Address(tmp, ObjectMonitor::EntryList_offset_in_bytes()));
__ ldr(disp_hdr, Address(tmp, ObjectMonitor::cxq_offset_in_bytes()));
__ orr(rscratch1, rscratch1, disp_hdr); // Will be 0 if both are 0.
@@ -14314,6 +14341,18 @@ instruct signumF_reg(vRegF dst, vRegF src, vRegF zero, vRegF one) %{
ins_pipe(fp_uop_d);
%}
+instruct onspinwait() %{
+ match(OnSpinWait);
+ ins_cost(INSN_COST);
+
+ format %{ "onspinwait" %}
+
+ ins_encode %{
+ __ spin_wait();
+ %}
+ ins_pipe(pipe_class_empty);
+%}
+
// ============================================================================
// Logical Instructions
@@ -16823,6 +16862,7 @@ instruct encode_iso_array(iRegP_R2 src, iRegP_R1 dst, iRegI_R3 len,
vRegD_V2 Vtmp3, vRegD_V3 Vtmp4,
iRegI_R0 result, rFlagsReg cr)
%{
+ predicate(!((EncodeISOArrayNode*)n)->is_ascii());
match(Set result (EncodeISOArray src (Binary dst len)));
effect(USE_KILL src, USE_KILL dst, USE_KILL len,
KILL Vtmp1, KILL Vtmp2, KILL Vtmp3, KILL Vtmp4, KILL cr);
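The aarch64.ad hunks above teach the inlined monitor fast path to handle recursive locking: on lock, a failed owner CAS falls through to an owner-equals-self check that bumps the recursion count; on unlock, a non-zero count is simply decremented. As a rough mental model, here is a schematic C++ sketch of that control flow (illustrative names, not HotSpot's actual implementation):

```cpp
#include <atomic>
#include <cstdint>

struct Monitor {
  std::atomic<void*> owner{nullptr};
  intptr_t recursions{0};
};

// Lock fast path: CAS owner NULL -> self; on failure, check for re-entry.
bool fast_lock(Monitor* m, void* self) {
  void* prev = nullptr;
  if (m->owner.compare_exchange_strong(prev, self)) return true;  // uncontended
  if (prev == self) { m->recursions++; return true; }             // recursive
  return false;                                                   // slow path
}

// Unlock fast path: a recursive exit just decrements the counter.
bool fast_unlock(Monitor* m, void* self) {
  if (m->owner.load() != self) return false;                // flag == NE case
  if (m->recursions > 0) { m->recursions--; return true; }  // recursive exit
  m->owner.store(nullptr);  // the real stub also inspects EntryList/cxq first
  return true;
}
```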
diff --git a/src/hotspot/cpu/aarch64/atomic_aarch64.hpp b/src/hotspot/cpu/aarch64/atomic_aarch64.hpp
index ac12ba9e23d7d..6f9425e43ac14 100644
--- a/src/hotspot/cpu/aarch64/atomic_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/atomic_aarch64.hpp
@@ -45,5 +45,9 @@ extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_8_impl;
extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_1_relaxed_impl;
extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_4_relaxed_impl;
extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_8_relaxed_impl;
+extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_4_release_impl;
+extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_8_release_impl;
+extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_4_seq_cst_impl;
+extern aarch64_atomic_stub_t aarch64_atomic_cmpxchg_8_seq_cst_impl;
#endif // CPU_AARCH64_ATOMIC_AARCH64_HPP
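The four new declarations extend the LSE atomic stub table with release and sequentially consistent orderings for 4- and 8-byte compare-and-exchange. In `std::atomic` terms, the intended semantics are roughly as follows (a sketch for exposition; only the declarations above are from the patch, the mapping here is my assumption):

```cpp
#include <atomic>
#include <cstdint>

// Returns the previous value, as the aarch64_atomic_* stubs do.
uint64_t cmpxchg_8_release(std::atomic<uint64_t>& dest,
                           uint64_t compare, uint64_t exchange) {
  uint64_t expected = compare;
  dest.compare_exchange_strong(expected, exchange,
                               std::memory_order_release,   // success order
                               std::memory_order_relaxed);  // failure order
  return expected;
}

uint64_t cmpxchg_8_seq_cst(std::atomic<uint64_t>& dest,
                           uint64_t compare, uint64_t exchange) {
  uint64_t expected = compare;
  dest.compare_exchange_strong(expected, exchange,
                               std::memory_order_seq_cst);
  return expected;
}
```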
diff --git a/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
index e8d0c537d6de7..f488e863a6851 100644
--- a/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
@@ -2837,7 +2837,7 @@ void LIR_Assembler::emit_profile_type(LIR_OpProfileType* op) {
}
#endif
// first time here. Set profile type.
- __ ldr(tmp, mdo_addr);
+ __ str(tmp, mdo_addr);
} else {
assert(ciTypeEntries::valid_ciklass(current_klass) != NULL &&
ciTypeEntries::valid_ciklass(current_klass) != exact_klass, "inconsistent");
@@ -2988,7 +2988,7 @@ void LIR_Assembler::membar_loadstore() { __ membar(MacroAssembler::LoadStore); }
void LIR_Assembler::membar_storeload() { __ membar(MacroAssembler::StoreLoad); }
void LIR_Assembler::on_spin_wait() {
- Unimplemented();
+ __ spin_wait();
}
void LIR_Assembler::get_thread(LIR_Opr result_reg) {
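The `ldr` → `str` change above is the substantive fix in this file: on the first visit, `tmp` already holds the freshly computed profile bits, so the code must write them into the MDO slot rather than read the slot back, which silently discarded the profile. A toy model of the intent, with illustrative types:

```cpp
#include <cstdint>

// First time a receiver type is seen: record it in the MethodData slot.
void record_first_profiled_type(intptr_t* mdo_slot, intptr_t klass_bits) {
  *mdo_slot = klass_bits;  // str; the old ldr read and threw the value away
}
```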
diff --git a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
index 6ce4af0372d6c..603efefe0e0a7 100644
--- a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
@@ -355,7 +355,7 @@ void C1_MacroAssembler::remove_frame(int framesize) {
}
-void C1_MacroAssembler::verified_entry() {
+void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
// If we have to make this method not-entrant we'll overwrite its
// first instruction with a jump. For this action to be legal we
// must ensure that this first instruction is a B, BL, NOP, BKPT,
diff --git a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
index bc3fc6355d018..a4a2b14203976 100644
--- a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
@@ -276,7 +276,7 @@ void BarrierSetAssembler::c2i_entry_barrier(MacroAssembler* masm) {
__ load_method_holder_cld(rscratch1, rmethod);
// Is it a strong CLD?
- __ ldr(rscratch2, Address(rscratch1, ClassLoaderData::keep_alive_offset()));
+ __ ldrw(rscratch2, Address(rscratch1, ClassLoaderData::keep_alive_offset()));
__ cbnz(rscratch2, method_live);
// Is it a weak but alive CLD?
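`ldr` → `ldrw` matters here because the keep-alive field is 32 bits wide: a 64-bit load folds the neighbouring field into the value tested by `cbnz`. A small illustration of the failure mode (the struct layout is hypothetical, purely to show the width bug):

```cpp
#include <cstdint>
#include <cstring>

struct CLDish { int32_t keep_alive; int32_t neighbour; };  // assumed layout

int64_t load64(const CLDish& c) {   // what ldr did: wrong width
  int64_t v;
  std::memcpy(&v, &c, sizeof v);    // mixes both fields into one value
  return v;                         // nonzero even when keep_alive == 0
}

int32_t load32(const CLDish& c) {   // what ldrw does: correct width
  return c.keep_alive;
}
```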
diff --git a/src/hotspot/cpu/aarch64/globals_aarch64.hpp b/src/hotspot/cpu/aarch64/globals_aarch64.hpp
index a3159db967a7b..3ccf7630b84f0 100644
--- a/src/hotspot/cpu/aarch64/globals_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/globals_aarch64.hpp
@@ -111,7 +111,15 @@ define_pd_global(intx, InlineSmallCode, 1000);
product(int, SoftwarePrefetchHintDistance, -1, \
"Use prfm hint with specified distance in compiled code." \
"Value -1 means off.") \
- range(-1, 4096)
+ range(-1, 4096) \
+ product(ccstr, OnSpinWaitInst, "none", DIAGNOSTIC, \
+ "The instruction to use to implement " \
+ "java.lang.Thread.onSpinWait()." \
+ "Options: none, nop, isb, yield.") \
+ product(uint, OnSpinWaitInstCount, 1, DIAGNOSTIC, \
+ "The number of OnSpinWaitInst instructions to generate." \
+ "It cannot be used with OnSpinWaitInst=none.") \
+ range(1, 99)
// end of ARCH_FLAGS
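Both new flags are DIAGNOSTIC, so exercising them requires unlocking diagnostic options first; a plausible invocation (illustrative only, and meaningful only on AArch64 builds carrying this patch; `MyApp` is a placeholder) would be:

```
java -XX:+UnlockDiagnosticVMOptions \
     -XX:OnSpinWaitInst=isb -XX:OnSpinWaitInstCount=2 MyApp
```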
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
index a0986200c8717..85ce4c44b7f0a 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
@@ -2027,15 +2027,6 @@ void MacroAssembler::increment(Address dst, int value)
str(rscratch1, dst);
}
-
-void MacroAssembler::pusha() {
- push(0x7fffffff, sp);
-}
-
-void MacroAssembler::popa() {
- pop(0x7fffffff, sp);
-}
-
// Push lots of registers in the bit set supplied. Don't push sp.
// Return the number of words pushed
int MacroAssembler::push(unsigned int bitset, Register stack) {
@@ -2677,7 +2668,7 @@ void MacroAssembler::pop_call_clobbered_registers_except(RegSet exclude) {
void MacroAssembler::push_CPU_state(bool save_vectors, bool use_sve,
int sve_vector_size_in_bytes) {
- push(0x3fffffff, sp); // integer registers except lr & sp
+ push(RegSet::range(r0, r29), sp); // integer registers except lr & sp
if (save_vectors && use_sve && sve_vector_size_in_bytes > 16) {
sub(sp, sp, sve_vector_size_in_bytes * FloatRegisterImpl::number_of_registers);
for (int i = 0; i < FloatRegisterImpl::number_of_registers; i++) {
@@ -2713,7 +2704,14 @@ void MacroAssembler::pop_CPU_state(bool restore_vectors, bool use_sve,
reinitialize_ptrue();
}
- pop(0x3fffffff, sp); // integer registers except lr & sp
+ // integer registers except lr & sp
+ pop(RegSet::range(r0, r17), sp);
+#ifdef R18_RESERVED
+ ldp(zr, r19, Address(post(sp, 2 * wordSize)));
+ pop(RegSet::range(r20, r29), sp);
+#else
+ pop(RegSet::range(r18_tls, r29), sp);
+#endif
}
/**
@@ -5354,3 +5352,21 @@ void MacroAssembler::verify_cross_modify_fence_not_required() {
}
}
#endif
+
+void MacroAssembler::spin_wait() {
+ for (int i = 0; i < VM_Version::spin_wait_desc().inst_count(); ++i) {
+ switch (VM_Version::spin_wait_desc().inst()) {
+ case SpinWait::NOP:
+ nop();
+ break;
+ case SpinWait::ISB:
+ isb();
+ break;
+ case SpinWait::YIELD:
+ yield();
+ break;
+ default:
+ ShouldNotReachHere();
+ }
+ }
+}
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
index b70e197de9807..dcfa239b66207 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
@@ -1123,10 +1123,6 @@ class MacroAssembler: public Assembler {
void push(Register src);
void pop(Register dst);
- // push all registers onto the stack
- void pusha();
- void popa();
-
void repne_scan(Register addr, Register value, Register count,
Register scratch);
void repne_scanw(Register addr, Register value, Register count,
@@ -1318,6 +1314,23 @@ class MacroAssembler: public Assembler {
Register zlen, Register tmp1, Register tmp2, Register tmp3,
Register tmp4, Register tmp5, Register tmp6, Register tmp7);
void mul_add(Register out, Register in, Register offs, Register len, Register k);
+ void ghash_multiply(FloatRegister result_lo, FloatRegister result_hi,
+ FloatRegister a, FloatRegister b, FloatRegister a1_xor_a0,
+ FloatRegister tmp1, FloatRegister tmp2, FloatRegister tmp3);
+ void ghash_reduce(FloatRegister result, FloatRegister lo, FloatRegister hi,
+ FloatRegister p, FloatRegister z, FloatRegister t1);
+ void ghash_processBlocks_wide(address p, Register state, Register subkeyH,
+ Register data, Register blocks, int unrolls);
+ void ghash_modmul (FloatRegister result,
+ FloatRegister result_lo, FloatRegister result_hi, FloatRegister b,
+ FloatRegister a, FloatRegister vzr, FloatRegister a1_xor_a0, FloatRegister p,
+ FloatRegister t1, FloatRegister t2, FloatRegister t3);
+
+ void aesenc_loadkeys(Register key, Register keylen);
+ void aesecb_encrypt(Register from, Register to, Register keylen,
+ FloatRegister data = v0, int unrolls = 1);
+ void aesecb_decrypt(Register from, Register to, Register key, Register keylen);
+ void aes_round(FloatRegister input, FloatRegister subkey);
// Place an ISB after code may have been modified due to a safepoint.
void safepoint_isb();
@@ -1397,6 +1410,9 @@ class MacroAssembler: public Assembler {
void cache_wb(Address line);
void cache_wbsync(bool is_pre);
+ // Code for java.lang.Thread::onSpinWait() intrinsic.
+ void spin_wait();
+
private:
// Check the current thread doesn't need a cross modify fence.
void verify_cross_modify_fence_not_required() PRODUCT_RETURN;
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64_aes.cpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64_aes.cpp
new file mode 100644
index 0000000000000..588ef67d7ad45
--- /dev/null
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64_aes.cpp
@@ -0,0 +1,685 @@
+/*
+ * Copyright (c) 2003, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2021, Red Hat Inc. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+
+#include "asm/assembler.hpp"
+#include "asm/assembler.inline.hpp"
+#include "macroAssembler_aarch64.hpp"
+#include "memory/resourceArea.hpp"
+#include "runtime/stubRoutines.hpp"
+
+void MacroAssembler::aesecb_decrypt(Register from, Register to, Register key, Register keylen) {
+ Label L_doLast;
+
+ ld1(v0, T16B, from); // get 16 bytes of input
+
+ ld1(v5, T16B, post(key, 16));
+ rev32(v5, T16B, v5);
+
+ ld1(v1, v2, v3, v4, T16B, post(key, 64));
+ rev32(v1, T16B, v1);
+ rev32(v2, T16B, v2);
+ rev32(v3, T16B, v3);
+ rev32(v4, T16B, v4);
+ aesd(v0, v1);
+ aesimc(v0, v0);
+ aesd(v0, v2);
+ aesimc(v0, v0);
+ aesd(v0, v3);
+ aesimc(v0, v0);
+ aesd(v0, v4);
+ aesimc(v0, v0);
+
+ ld1(v1, v2, v3, v4, T16B, post(key, 64));
+ rev32(v1, T16B, v1);
+ rev32(v2, T16B, v2);
+ rev32(v3, T16B, v3);
+ rev32(v4, T16B, v4);
+ aesd(v0, v1);
+ aesimc(v0, v0);
+ aesd(v0, v2);
+ aesimc(v0, v0);
+ aesd(v0, v3);
+ aesimc(v0, v0);
+ aesd(v0, v4);
+ aesimc(v0, v0);
+
+ ld1(v1, v2, T16B, post(key, 32));
+ rev32(v1, T16B, v1);
+ rev32(v2, T16B, v2);
+
+ cmpw(keylen, 44);
+ br(Assembler::EQ, L_doLast);
+
+ aesd(v0, v1);
+ aesimc(v0, v0);
+ aesd(v0, v2);
+ aesimc(v0, v0);
+
+ ld1(v1, v2, T16B, post(key, 32));
+ rev32(v1, T16B, v1);
+ rev32(v2, T16B, v2);
+
+ cmpw(keylen, 52);
+ br(Assembler::EQ, L_doLast);
+
+ aesd(v0, v1);
+ aesimc(v0, v0);
+ aesd(v0, v2);
+ aesimc(v0, v0);
+
+ ld1(v1, v2, T16B, post(key, 32));
+ rev32(v1, T16B, v1);
+ rev32(v2, T16B, v2);
+
+ bind(L_doLast);
+
+ aesd(v0, v1);
+ aesimc(v0, v0);
+ aesd(v0, v2);
+
+ eor(v0, T16B, v0, v5);
+
+ st1(v0, T16B, to);
+
+ // Preserve the address of the start of the key
+ sub(key, key, keylen, LSL, exact_log2(sizeof (jint)));
+}
+
+// Load expanded key into v17..v31
+void MacroAssembler::aesenc_loadkeys(Register key, Register keylen) {
+ Label L_loadkeys_44, L_loadkeys_52;
+ cmpw(keylen, 52);
+ br(Assembler::LO, L_loadkeys_44);
+ br(Assembler::EQ, L_loadkeys_52);
+
+ ld1(v17, v18, T16B, post(key, 32));
+ rev32(v17, T16B, v17);
+ rev32(v18, T16B, v18);
+ bind(L_loadkeys_52);
+ ld1(v19, v20, T16B, post(key, 32));
+ rev32(v19, T16B, v19);
+ rev32(v20, T16B, v20);
+ bind(L_loadkeys_44);
+ ld1(v21, v22, v23, v24, T16B, post(key, 64));
+ rev32(v21, T16B, v21);
+ rev32(v22, T16B, v22);
+ rev32(v23, T16B, v23);
+ rev32(v24, T16B, v24);
+ ld1(v25, v26, v27, v28, T16B, post(key, 64));
+ rev32(v25, T16B, v25);
+ rev32(v26, T16B, v26);
+ rev32(v27, T16B, v27);
+ rev32(v28, T16B, v28);
+ ld1(v29, v30, v31, T16B, post(key, 48));
+ rev32(v29, T16B, v29);
+ rev32(v30, T16B, v30);
+ rev32(v31, T16B, v31);
+
+ // Preserve the address of the start of the key
+ sub(key, key, keylen, LSL, exact_log2(sizeof (jint)));
+}
+
+// Neoverse(TM) N1 Software Optimization Guide:
+// Adjacent AESE/AESMC instruction pairs and adjacent AESD/AESIMC
+// instruction pairs will exhibit the performance characteristics
+// described in Section 4.6.
+void MacroAssembler::aes_round(FloatRegister input, FloatRegister subkey) {
+ aese(input, subkey); aesmc(input, input);
+}
+
+// KernelGenerator
+//
+// The abstract base class of an unrolled function generator.
+// Subclasses override generate(), length(), and next() to generate
+// unrolled and interleaved functions.
+//
+// The core idea is that a subclass defines a method which generates
+// the base case of a function and a method to generate a clone of it,
+// shifted to a different set of registers. KernelGenerator will then
+// generate several interleaved copies of the function, with each one
+// using a different set of registers.
+
+// The subclass must implement three methods: length(), which is the
+// number of instruction bundles in the intrinsic, generate(int n)
+// which emits the nth instruction bundle in the intrinsic, and next()
+// which takes an instance of the generator and returns a version of it,
+// shifted to a new set of registers.
+
+class KernelGenerator: public MacroAssembler {
+protected:
+ const int _unrolls;
+public:
+ KernelGenerator(Assembler *as, int unrolls)
+ : MacroAssembler(as->code()), _unrolls(unrolls) { }
+ virtual void generate(int index) = 0;
+ virtual int length() = 0;
+ virtual KernelGenerator *next() = 0;
+ int unrolls() { return _unrolls; }
+ void unroll();
+};
+
+void KernelGenerator::unroll() {
+ ResourceMark rm;
+ KernelGenerator **generators
+ = NEW_RESOURCE_ARRAY(KernelGenerator *, unrolls());
+
+ generators[0] = this;
+ for (int i = 1; i < unrolls(); i++) {
+ generators[i] = generators[i-1]->next();
+ }
+
+ for (int j = 0; j < length(); j++) {
+ for (int i = 0; i < unrolls(); i++) {
+ generators[i]->generate(j);
+ }
+ }
+}
+
+// An unrolled and interleaved generator for AES encryption.
+class AESKernelGenerator: public KernelGenerator {
+ Register _from, _to;
+ const Register _keylen;
+ FloatRegister _data;
+ const FloatRegister _subkeys;
+ bool _once;
+ Label _rounds_44, _rounds_52;
+
+public:
+ AESKernelGenerator(Assembler *as, int unrolls,
+ Register from, Register to, Register keylen, FloatRegister data,
+ FloatRegister subkeys, bool once = true)
+ : KernelGenerator(as, unrolls),
+ _from(from), _to(to), _keylen(keylen), _data(data),
+ _subkeys(subkeys), _once(once) {
+ }
+
+ virtual void generate(int index) {
+ switch (index) {
+ case 0:
+ if (_from != noreg) {
+ ld1(_data, T16B, _from); // get 16 bytes of input
+ }
+ break;
+ case 1:
+ if (_once) {
+ cmpw(_keylen, 52);
+ br(Assembler::LO, _rounds_44);
+ br(Assembler::EQ, _rounds_52);
+ }
+ break;
+ case 2: aes_round(_data, _subkeys + 0); break;
+ case 3: aes_round(_data, _subkeys + 1); break;
+ case 4:
+ if (_once) bind(_rounds_52);
+ break;
+ case 5: aes_round(_data, _subkeys + 2); break;
+ case 6: aes_round(_data, _subkeys + 3); break;
+ case 7:
+ if (_once) bind(_rounds_44);
+ break;
+ case 8: aes_round(_data, _subkeys + 4); break;
+ case 9: aes_round(_data, _subkeys + 5); break;
+ case 10: aes_round(_data, _subkeys + 6); break;
+ case 11: aes_round(_data, _subkeys + 7); break;
+ case 12: aes_round(_data, _subkeys + 8); break;
+ case 13: aes_round(_data, _subkeys + 9); break;
+ case 14: aes_round(_data, _subkeys + 10); break;
+ case 15: aes_round(_data, _subkeys + 11); break;
+ case 16: aes_round(_data, _subkeys + 12); break;
+ case 17: aese(_data, _subkeys + 13); break;
+ case 18: eor(_data, T16B, _data, _subkeys + 14); break;
+ case 19:
+ if (_to != noreg) {
+ st1(_data, T16B, _to);
+ }
+ break;
+ default: ShouldNotReachHere();
+ }
+ }
+
+ virtual KernelGenerator *next() {
+ return new AESKernelGenerator(this, _unrolls,
+ _from, _to, _keylen,
+ _data + 1, _subkeys, /*once*/false);
+ }
+
+ virtual int length() { return 20; }
+};
+
+// Uses expanded key in v17..v31
+// Returns encrypted values in inputs.
+// If to != noreg, store value at to; likewise from
+// Preserves key, keylen
+// Increments from, to
+// Input data in v0, v1, ...
+// unrolls controls the number of times to unroll the generated function
+void MacroAssembler::aesecb_encrypt(Register from, Register to, Register keylen,
+ FloatRegister data, int unrolls) {
+ AESKernelGenerator(this, unrolls, from, to, keylen, data, v17).unroll();
+}
+
+// ghash_multiply and ghash_reduce are the non-unrolled versions of
+// the GHASH function generators.
+void MacroAssembler::ghash_multiply(FloatRegister result_lo, FloatRegister result_hi,
+ FloatRegister a, FloatRegister b, FloatRegister a1_xor_a0,
+ FloatRegister tmp1, FloatRegister tmp2, FloatRegister tmp3) {
+ // Karatsuba multiplication performs a 128*128 -> 256-bit
+ // multiplication in three 128-bit multiplications and a few
+ // additions.
+ //
+ // (C1:C0) = A1*B1, (D1:D0) = A0*B0, (E1:E0) = (A0+A1)(B0+B1)
+ // (A1:A0)(B1:B0) = C1:(C0+C1+D1+E1):(D1+C0+D0+E0):D0
+ //
+ // Inputs:
+ //
+ // A0 in a.d[0] (subkey)
+ // A1 in a.d[1]
+ // (A1+A0) in a1_xor_a0.d[0]
+ //
+ // B0 in b.d[0] (state)
+ // B1 in b.d[1]
+
+ ext(tmp1, T16B, b, b, 0x08);
+ pmull2(result_hi, T1Q, b, a, T2D); // A1*B1
+ eor(tmp1, T16B, tmp1, b); // (B1+B0)
+ pmull(result_lo, T1Q, b, a, T1D); // A0*B0
+ pmull(tmp2, T1Q, tmp1, a1_xor_a0, T1D); // (A1+A0)(B1+B0)
+
+ ext(tmp1, T16B, result_lo, result_hi, 0x08);
+ eor(tmp3, T16B, result_hi, result_lo); // A1*B1+A0*B0
+ eor(tmp2, T16B, tmp2, tmp1);
+ eor(tmp2, T16B, tmp2, tmp3);
+
+ // Register pair <result_hi:result_lo> holds the result of carry-less multiplication
+ ins(result_hi, D, tmp2, 0, 1);
+ ins(result_lo, D, tmp2, 1, 0);
+}
+
+void MacroAssembler::ghash_reduce(FloatRegister result, FloatRegister lo, FloatRegister hi,
+ FloatRegister p, FloatRegister vzr, FloatRegister t1) {
+ const FloatRegister t0 = result;
+
+ // The GCM field polynomial f is z^128 + p(z), where p =
+ // z^7+z^2+z+1.
+ //
+ // z^128 === -p(z) (mod (z^128 + p(z)))
+ //
+ // so, given that the product we're reducing is
+ // a == lo + hi * z^128
+ // substituting,
+ // === lo - hi * p(z) (mod (z^128 + p(z)))
+ //
+ // we reduce by multiplying hi by p(z) and subtracting the result
+ // from (i.e. XORing it with) lo. Because p has no nonzero high
+ // bits we can do this with two 64-bit multiplications, lo*p and
+ // hi*p.
+
+ pmull2(t0, T1Q, hi, p, T2D);
+ ext(t1, T16B, t0, vzr, 8);
+ eor(hi, T16B, hi, t1);
+ ext(t1, T16B, vzr, t0, 8);
+ eor(lo, T16B, lo, t1);
+ pmull(t0, T1Q, hi, p, T1D);
+ eor(result, T16B, lo, t0);
+}
+
+class GHASHMultiplyGenerator: public KernelGenerator {
+ FloatRegister _result_lo, _result_hi, _b,
+ _a, _vzr, _a1_xor_a0, _p,
+ _tmp1, _tmp2, _tmp3;
+
+public:
+ GHASHMultiplyGenerator(Assembler *as, int unrolls,
+ FloatRegister result_lo, FloatRegister result_hi,
+ /* offsetted registers */
+ FloatRegister b,
+ /* non-offsetted (shared) registers */
+ FloatRegister a, FloatRegister a1_xor_a0, FloatRegister p, FloatRegister vzr,
+ /* offsetted (temp) registers */
+ FloatRegister tmp1, FloatRegister tmp2, FloatRegister tmp3)
+ : KernelGenerator(as, unrolls),
+ _result_lo(result_lo), _result_hi(result_hi), _b(b),
+ _a(a), _vzr(vzr), _a1_xor_a0(a1_xor_a0), _p(p),
+ _tmp1(tmp1), _tmp2(tmp2), _tmp3(tmp3) { }
+
+ static const int register_stride = 7;
+
+ virtual void generate(int index) {
+ // Karatsuba multiplication performs a 128*128 -> 256-bit
+ // multiplication in three 128-bit multiplications and a few
+ // additions.
+ //
+ // (C1:C0) = A1*B1, (D1:D0) = A0*B0, (E1:E0) = (A0+A1)(B0+B1)
+ // (A1:A0)(B1:B0) = C1:(C0+C1+D1+E1):(D1+C0+D0+E0):D0
+ //
+ // Inputs:
+ //
+ // A0 in a.d[0] (subkey)
+ // A1 in a.d[1]
+ // (A1+A0) in a1_xor_a0.d[0]
+ //
+ // B0 in b.d[0] (state)
+ // B1 in b.d[1]
+
+ switch (index) {
+ case 0: ext(_tmp1, T16B, _b, _b, 0x08); break;
+ case 1: pmull2(_result_hi, T1Q, _b, _a, T2D); // A1*B1
+ break;
+ case 2: eor(_tmp1, T16B, _tmp1, _b); // (B1+B0)
+ break;
+ case 3: pmull(_result_lo, T1Q, _b, _a, T1D); // A0*B0
+ break;
+ case 4: pmull(_tmp2, T1Q, _tmp1, _a1_xor_a0, T1D); // (A1+A0)(B1+B0)
+ break;
+
+ case 5: ext(_tmp1, T16B, _result_lo, _result_hi, 0x08); break;
+ case 6: eor(_tmp3, T16B, _result_hi, _result_lo); // A1*B1+A0*B0
+ break;
+ case 7: eor(_tmp2, T16B, _tmp2, _tmp1); break;
+ case 8: eor(_tmp2, T16B, _tmp2, _tmp3); break;
+
+ // Register pair <_result_hi:_result_lo> holds the _result of carry-less multiplication
+ case 9: ins(_result_hi, D, _tmp2, 0, 1); break;
+ case 10: ins(_result_lo, D, _tmp2, 1, 0); break;
+ default: ShouldNotReachHere();
+ }
+ }
+
+ virtual KernelGenerator *next() {
+ GHASHMultiplyGenerator *result
+ = new GHASHMultiplyGenerator(this, _unrolls, _result_lo, _result_hi,
+ _b, _a, _a1_xor_a0, _p, _vzr,
+ _tmp1, _tmp2, _tmp3);
+ result->_result_lo += register_stride;
+ result->_result_hi += register_stride;
+ result->_b += register_stride;
+ result->_tmp1 += register_stride;
+ result->_tmp2 += register_stride;
+ result->_tmp3 += register_stride;
+ return result;
+ }
+
+ virtual int length() { return 11; }
+};
+
+// Reduce the 256-bit product in hi:lo by the GCM field polynomial.
+// The FloatRegister argument called data is optional: if it is a
+// valid register, we interleave LD1 instructions with the
+// reduction. This is to reduce latency next time around the loop.
+class GHASHReduceGenerator: public KernelGenerator {
+ FloatRegister _result, _lo, _hi, _p, _vzr, _data, _t1;
+ int _once;
+public:
+ GHASHReduceGenerator(Assembler *as, int unrolls,
+ /* offsetted registers */
+ FloatRegister result, FloatRegister lo, FloatRegister hi,
+ /* non-offsetted (shared) registers */
+ FloatRegister p, FloatRegister vzr, FloatRegister data,
+ /* offsetted (temp) registers */
+ FloatRegister t1)
+ : KernelGenerator(as, unrolls),
+ _result(result), _lo(lo), _hi(hi),
+ _p(p), _vzr(vzr), _data(data), _t1(t1), _once(true) { }
+
+ static const int register_stride = 7;
+
+ virtual void generate(int index) {
+ const FloatRegister t0 = _result;
+
+ switch (index) {
+ // The GCM field polynomial f is z^128 + p(z), where p =
+ // z^7+z^2+z+1.
+ //
+ // z^128 === -p(z) (mod (z^128 + p(z)))
+ //
+ // so, given that the product we're reducing is
+ // a == lo + hi * z^128
+ // substituting,
+ // === lo - hi * p(z) (mod (z^128 + p(z)))
+ //
+ // we reduce by multiplying hi by p(z) and subtracting the _result
+ // from (i.e. XORing it with) lo. Because p has no nonzero high
+ // bits we can do this with two 64-bit multiplications, lo*p and
+ // hi*p.
+
+ case 0: pmull2(t0, T1Q, _hi, _p, T2D); break;
+ case 1: ext(_t1, T16B, t0, _vzr, 8); break;
+ case 2: eor(_hi, T16B, _hi, _t1); break;
+ case 3: ext(_t1, T16B, _vzr, t0, 8); break;
+ case 4: eor(_lo, T16B, _lo, _t1); break;
+ case 5: pmull(t0, T1Q, _hi, _p, T1D); break;
+ case 6: eor(_result, T16B, _lo, t0); break;
+ default: ShouldNotReachHere();
+ }
+
+ // Sprinkle load instructions into the generated instructions
+ if (_data->is_valid() && _once) {
+ assert(length() >= unrolls(), "not enough room for interleaved loads");
+ if (index < unrolls()) {
+ ld1((_data + index*register_stride), T16B, post(r2, 0x10));
+ }
+ }
+ }
+
+ virtual KernelGenerator *next() {
+ GHASHReduceGenerator *result
+ = new GHASHReduceGenerator(this, _unrolls,
+ _result, _lo, _hi, _p, _vzr, _data, _t1);
+ result->_result += register_stride;
+ result->_hi += register_stride;
+ result->_lo += register_stride;
+ result->_t1 += register_stride;
+ result->_once = false;
+ return result;
+ }
+
+ int length() { return 7; }
+};
+
+// Perform a GHASH multiply/reduce on a single FloatRegister.
+void MacroAssembler::ghash_modmul(FloatRegister result,
+ FloatRegister result_lo, FloatRegister result_hi, FloatRegister b,
+ FloatRegister a, FloatRegister vzr, FloatRegister a1_xor_a0, FloatRegister p,
+ FloatRegister t1, FloatRegister t2, FloatRegister t3) {
+ ghash_multiply(result_lo, result_hi, a, b, a1_xor_a0, t1, t2, t3);
+ ghash_reduce(result, result_lo, result_hi, p, vzr, t1);
+}
+
+// Interleaved GHASH processing.
+//
+// Clobbers all vector registers.
+//
+void MacroAssembler::ghash_processBlocks_wide(address field_polynomial, Register state,
+ Register subkeyH,
+ Register data, Register blocks, int unrolls) {
+ int register_stride = 7;
+
+ // Bafflingly, GCM uses little-endian for the byte order, but
+ // big-endian for the bit order. For example, the polynomial 1 is
+ // represented as the 16-byte string 80 00 00 00 | 12 bytes of 00.
+ //
+ // So, we must either reverse the bytes in each word and do
+ // everything big-endian or reverse the bits in each byte and do
+ // it little-endian. On AArch64 it's more idiomatic to reverse
+ // the bits in each byte (we have an instruction, RBIT, to do
+ // that) and keep the data in little-endian bit order throughout the
+ // calculation, bit-reversing the inputs and outputs.
+
+ assert(unrolls * register_stride < 32, "out of registers");
+
+ FloatRegister a1_xor_a0 = v28;
+ FloatRegister Hprime = v29;
+ FloatRegister vzr = v30;
+ FloatRegister p = v31;
+ eor(vzr, T16B, vzr, vzr); // zero register
+
+ ldrq(p, field_polynomial); // The field polynomial
+
+ ldrq(v0, Address(state));
+ ldrq(Hprime, Address(subkeyH));
+
+ rev64(v0, T16B, v0); // Bit-reverse words in state and subkeyH
+ rbit(v0, T16B, v0);
+ rev64(Hprime, T16B, Hprime);
+ rbit(Hprime, T16B, Hprime);
+
+ // Powers of H -> Hprime
+
+ Label already_calculated, done;
+ {
+ // The first time around we'll have to calculate H**2, H**3, etc.
+ // Look at the largest power of H in the subkeyH array to see if
+ // it's already been calculated.
+ ldp(rscratch1, rscratch2, Address(subkeyH, 16 * (unrolls - 1)));
+ orr(rscratch1, rscratch1, rscratch2);
+ cbnz(rscratch1, already_calculated);
+
+ orr(v6, T16B, Hprime, Hprime); // Start with H in v6 and Hprime
+ for (int i = 1; i < unrolls; i++) {
+ ext(a1_xor_a0, T16B, Hprime, Hprime, 0x08); // long-swap subkeyH into a1_xor_a0
+ eor(a1_xor_a0, T16B, a1_xor_a0, Hprime); // xor subkeyH into subkeyL (Karatsuba: (A1+A0))
+ ghash_modmul(/*result*/v6, /*result_lo*/v5, /*result_hi*/v4, /*b*/v6,
+ Hprime, vzr, a1_xor_a0, p,
+ /*temps*/v1, v3, v2);
+ rev64(v1, T16B, v6);
+ rbit(v1, T16B, v1);
+ strq(v1, Address(subkeyH, 16 * i));
+ }
+ b(done);
+ }
+ {
+ bind(already_calculated);
+
+ // Load the largest power of H we need into v6.
+ ldrq(v6, Address(subkeyH, 16 * (unrolls - 1)));
+ rev64(v6, T16B, v6);
+ rbit(v6, T16B, v6);
+ }
+ bind(done);
+
+ orr(Hprime, T16B, v6, v6); // Move H ** unrolls into Hprime
+
+ // Hprime contains (H ** 1, H ** 2, ... H ** unrolls)
+ // v0 contains the initial state. Clear the others.
+ for (int i = 1; i < unrolls; i++) {
+ int ofs = register_stride * i;
+ eor(ofs+v0, T16B, ofs+v0, ofs+v0); // zero each state register
+ }
+
+ ext(a1_xor_a0, T16B, Hprime, Hprime, 0x08); // long-swap subkeyH into a1_xor_a0
+ eor(a1_xor_a0, T16B, a1_xor_a0, Hprime); // xor subkeyH into subkeyL (Karatsuba: (A1+A0))
+
+ // Load #unrolls blocks of data
+ for (int ofs = 0; ofs < unrolls * register_stride; ofs += register_stride) {
+ ld1(v2+ofs, T16B, post(data, 0x10));
+ }
+
+ // Register assignments, replicated across 4 clones, v0 ... v23
+ //
+ // v0: input / output: current state, result of multiply/reduce
+ // v1: temp
+ // v2: input: one block of data (the ciphertext)
+ // also used as a temp once the data has been consumed
+ // v3: temp
+ // v4: output: high part of product
+ // v5: output: low part ...
+ // v6: unused
+ //
+ // Not replicated:
+ //
+ // v28: High part of H xor low part of H'
+ // v29: H' (hash subkey)
+ // v30: zero
+ // v31: Reduction polynomial of the Galois field
+
+ // Inner loop.
+ // Do the whole load/add/multiply/reduce over all our data except
+ // the last few rows.
+ {
+ Label L_ghash_loop;
+ bind(L_ghash_loop);
+
+ // Prefetching doesn't help here. In fact, on Neoverse N1 it's worse.
+ // prfm(Address(data, 128), PLDL1KEEP);
+
+ // Xor data into current state
+ for (int ofs = 0; ofs < unrolls * register_stride; ofs += register_stride) {
+ rbit((v2+ofs), T16B, (v2+ofs));
+ eor((v2+ofs), T16B, v0+ofs, (v2+ofs)); // bit-swapped data ^ bit-swapped state
+ }
+
+ // Generate fully-unrolled multiply-reduce in two stages.
+
+ (new GHASHMultiplyGenerator(this, unrolls,
+ /*result_lo*/v5, /*result_hi*/v4, /*data*/v2,
+ Hprime, a1_xor_a0, p, vzr,
+ /*temps*/v1, v3, /* reuse b*/v2))->unroll();
+
+ // NB: GHASHReduceGenerator also loads the next #unrolls blocks of
+ // data into v2, v2+ofs; the reduced states land in v0, v0+ofs.
+ (new GHASHReduceGenerator (this, unrolls,
+ /*result*/v0, /*lo*/v5, /*hi*/v4, p, vzr,
+ /*data*/v2, /*temp*/v3))->unroll();
+
+ sub(blocks, blocks, unrolls);
+ cmp(blocks, (unsigned char)(unrolls * 2));
+ br(GE, L_ghash_loop);
+ }
+
+ // Merge the #unrolls states. Note that the data for the next
+ // iteration has already been loaded into v2, v2+ofs, etc...
+
+ // First, we multiply/reduce each clone by the appropriate power of H.
+ for (int i = 0; i < unrolls; i++) {
+ int ofs = register_stride * i;
+ ldrq(Hprime, Address(subkeyH, 16 * (unrolls - i - 1)));
+
+ rbit(v2+ofs, T16B, v2+ofs);
+ eor(v2+ofs, T16B, ofs+v0, v2+ofs); // bit-swapped data ^ bit-swapped state
+
+ rev64(Hprime, T16B, Hprime);
+ rbit(Hprime, T16B, Hprime);
+ ext(a1_xor_a0, T16B, Hprime, Hprime, 0x08); // long-swap subkeyH into a1_xor_a0
+ eor(a1_xor_a0, T16B, a1_xor_a0, Hprime); // xor subkeyH into subkeyL (Karatsuba: (A1+A0))
+ ghash_modmul(/*result*/v0+ofs, /*result_lo*/v5+ofs, /*result_hi*/v4+ofs, /*b*/v2+ofs,
+ Hprime, vzr, a1_xor_a0, p,
+ /*temps*/v1+ofs, v3+ofs, /* reuse b*/v2+ofs);
+ }
+
+ // Then we sum the results.
+ for (int i = 0; i < unrolls - 1; i++) {
+ int ofs = register_stride * i;
+ eor(v0, T16B, v0, v0 + register_stride + ofs);
+ }
+
+ sub(blocks, blocks, (unsigned char)unrolls);
+
+ // And finally bit-reverse the state back to big endian.
+ rev64(v0, T16B, v0);
+ rbit(v0, T16B, v0);
+ st1(v0, T16B, state);
+}
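The KernelGenerator machinery in this new file is easiest to see with a toy: for `unrolls = 2` and `length() = 3`, `unroll()` emits bundle 0 of every clone, then bundle 1 of every clone, and so on, so the independent AES/GHASH chains hide each other's instruction latency. A stand-alone model of that emission order (illustrative only; the real class emits instructions, not text):

```cpp
#include <cstdio>

struct Gen {
  int copy;  // which register set this clone targets
  void generate(int bundle) { std::printf("copy %d, bundle %d\n", copy, bundle); }
};

int main() {
  const int unrolls = 2, length = 3;
  Gen gens[unrolls] = {{0}, {1}};        // next() would derive copy 1 from copy 0
  for (int j = 0; j < length; j++)       // outer loop: bundle index
    for (int i = 0; i < unrolls; i++)    // inner loop: interleave the clones
      gens[i].generate(j);
  // Prints copies 0 and 1 alternating per bundle, mirroring unroll() above.
}
```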
diff --git a/src/hotspot/cpu/aarch64/matcher_aarch64.hpp b/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
index f08c0d494aa65..5f93c841e4e86 100644
--- a/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/matcher_aarch64.hpp
@@ -155,4 +155,7 @@
return true;
}
+ // Implements a variant of EncodeISOArrayNode that encodes ASCII only
+ static const bool supports_encode_ascii_array = false;
+
#endif // CPU_AARCH64_MATCHER_AARCH64_HPP
diff --git a/src/hotspot/cpu/aarch64/pauth_aarch64.hpp b/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
index 6109964458fb8..e12a671daf1e2 100644
--- a/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/pauth_aarch64.hpp
@@ -22,8 +22,8 @@
*
*/
-#ifndef CPU_AARCH64_PAUTH_AARCH64_INLINE_HPP
-#define CPU_AARCH64_PAUTH_AARCH64_INLINE_HPP
+#ifndef CPU_AARCH64_PAUTH_AARCH64_HPP
+#define CPU_AARCH64_PAUTH_AARCH64_HPP
#include OS_CPU_HEADER_INLINE(pauth)
@@ -32,4 +32,4 @@ inline bool pauth_ptr_is_raw(address ptr) {
return ptr == pauth_strip_pointer(ptr);
}
-#endif // CPU_AARCH64_PAUTH_AARCH64_INLINE_HPP
+#endif // CPU_AARCH64_PAUTH_AARCH64_HPP
diff --git a/src/hotspot/cpu/aarch64/spin_wait_aarch64.hpp b/src/hotspot/cpu/aarch64/spin_wait_aarch64.hpp
new file mode 100644
index 0000000000000..4edce2642e9ff
--- /dev/null
+++ b/src/hotspot/cpu/aarch64/spin_wait_aarch64.hpp
@@ -0,0 +1,48 @@
+/*
+ * Copyright (c) 2021, Amazon.com Inc. or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef CPU_AARCH64_SPIN_WAIT_AARCH64_HPP
+#define CPU_AARCH64_SPIN_WAIT_AARCH64_HPP
+
+class SpinWait {
+public:
+ enum Inst {
+ NONE = -1,
+ NOP,
+ ISB,
+ YIELD
+ };
+
+private:
+ Inst _inst;
+ int _count;
+
+public:
+ SpinWait(Inst inst = NONE, int count = 0) : _inst(inst), _count(count) {}
+
+ Inst inst() const { return _inst; }
+ int inst_count() const { return _count; }
+};
+
+#endif // CPU_AARCH64_SPIN_WAIT_AARCH64_HPP
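A minimal consumer of this descriptor, assuming the header above is on the include path (the real consumer is `MacroAssembler::spin_wait()` earlier in this patch, which switches over `inst()` and emits `nop`/`isb`/`yield`):

```cpp
#include <cstdio>
#include "spin_wait_aarch64.hpp"

int main() {
  SpinWait desc(SpinWait::ISB, 2);           // e.g. from -XX:OnSpinWaitInst=isb
  for (int i = 0; i < desc.inst_count(); i++) {
    std::printf("emit instruction kind %d\n", desc.inst());
  }
}
```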
diff --git a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
index 331b29ea37259..6f9b320bc4816 100644
--- a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
@@ -2964,6 +2964,453 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
+ // CTR AES crypt.
+ // Arguments:
+ //
+ // Inputs:
+ // c_rarg0 - source byte array address
+ // c_rarg1 - destination byte array address
+ // c_rarg2 - K (key) in little endian int array
+ // c_rarg3 - counter vector byte array address
+ // c_rarg4 - input length
+ // c_rarg5 - saved encryptedCounter start
+ // c_rarg6 - saved used length
+ //
+ // Output:
+ // r0 - input length
+ //
+ address generate_counterMode_AESCrypt() {
+ const Register in = c_rarg0;
+ const Register out = c_rarg1;
+ const Register key = c_rarg2;
+ const Register counter = c_rarg3;
+ const Register saved_len = c_rarg4, len = r10;
+ const Register saved_encrypted_ctr = c_rarg5;
+ const Register used_ptr = c_rarg6, used = r12;
+
+ const Register offset = r7;
+ const Register keylen = r11;
+
+ const unsigned char block_size = 16;
+ const int bulk_width = 4;
+ // NB: bulk_width can be 4 or 8. 8 gives slightly faster
+ // performance with larger data sizes, but it also means that the
+ // fast path isn't used until you have at least 8 blocks, and up
+ // to 127 bytes of data will be executed on the slow path. For
+ // that reason, and also so as not to blow away too much icache, 4
+ // blocks seems like a sensible compromise.
+
+ // Algorithm:
+ //
+ // if (len == 0) {
+ // goto DONE;
+ // }
+ // int result = len;
+ // do {
+ // if (used >= blockSize) {
+ // if (len >= bulk_width * blockSize) {
+ // CTR_large_block();
+ // if (len == 0)
+ // goto DONE;
+ // }
+ // for (;;) {
+ // 16ByteVector v0 = counter;
+ // embeddedCipher.encryptBlock(v0, 0, encryptedCounter, 0);
+ // used = 0;
+ // if (len < blockSize)
+ // break; /* goto NEXT */
+ // 16ByteVector v1 = load16Bytes(in, offset);
+ // v1 = v1 ^ encryptedCounter;
+ // store16Bytes(out, offset);
+ // used = blockSize;
+ // offset += blockSize;
+ // len -= blockSize;
+ // if (len == 0)
+ // goto DONE;
+ // }
+ // }
+ // NEXT:
+ // out[outOff++] = (byte)(in[inOff++] ^ encryptedCounter[used++]);
+ // len--;
+ // } while (len != 0);
+ // DONE:
+ // return result;
+ //
+ // CTR_large_block()
+ // Wide bulk encryption of whole blocks.
+
+ __ align(CodeEntryAlignment);
+ StubCodeMark mark(this, "StubRoutines", "counterMode_AESCrypt");
+ const address start = __ pc();
+ __ enter();
+
+ Label DONE, CTR_large_block, large_block_return;
+ __ ldrw(used, Address(used_ptr));
+ __ cbzw(saved_len, DONE);
+
+ __ mov(len, saved_len);
+ __ mov(offset, 0);
+
+ // Compute #rounds for AES based on the length of the key array
+ __ ldrw(keylen, Address(key, arrayOopDesc::length_offset_in_bytes() - arrayOopDesc::base_offset_in_bytes(T_INT)));
+
+ __ aesenc_loadkeys(key, keylen);
+
+ {
+ Label L_CTR_loop, NEXT;
+
+ __ bind(L_CTR_loop);
+
+ __ cmp(used, block_size);
+ __ br(__ LO, NEXT);
+
+ // Maybe we have a lot of data
+ __ subsw(rscratch1, len, bulk_width * block_size);
+ __ br(__ HS, CTR_large_block);
+ __ BIND(large_block_return);
+ __ cbzw(len, DONE);
+
+ // Setup the counter
+ __ movi(v4, __ T4S, 0);
+ __ movi(v5, __ T4S, 1);
+ __ ins(v4, __ S, v5, 3, 3); // v4 contains { 0, 0, 0, 1 }
+
+ __ ld1(v0, __ T16B, counter); // Load the counter into v0
+ __ rev32(v16, __ T16B, v0);
+ __ addv(v16, __ T4S, v16, v4);
+ __ rev32(v16, __ T16B, v16);
+ __ st1(v16, __ T16B, counter); // Save the incremented counter back
+
+ {
+ // We have fewer than bulk_width blocks of data left. Encrypt
+ // them one by one until there is less than a full block
+ // remaining, being careful to save both the encrypted counter
+ // and the counter.
+
+ Label inner_loop;
+ __ bind(inner_loop);
+ // Counter to encrypt is in v0
+ __ aesecb_encrypt(noreg, noreg, keylen);
+ __ st1(v0, __ T16B, saved_encrypted_ctr);
+
+ // Do we have a remaining full block?
+
+ __ mov(used, 0);
+ __ cmp(len, block_size);
+ __ br(__ LO, NEXT);
+
+ // Yes, we have a full block
+ __ ldrq(v1, Address(in, offset));
+ __ eor(v1, __ T16B, v1, v0);
+ __ strq(v1, Address(out, offset));
+ __ mov(used, block_size);
+ __ add(offset, offset, block_size);
+
+ __ subw(len, len, block_size);
+ __ cbzw(len, DONE);
+
+ // Increment the counter, store it back
+ __ orr(v0, __ T16B, v16, v16);
+ __ rev32(v16, __ T16B, v16);
+ __ addv(v16, __ T4S, v16, v4);
+ __ rev32(v16, __ T16B, v16);
+ __ st1(v16, __ T16B, counter); // Save the incremented counter back
+
+ __ b(inner_loop);
+ }
+
+ __ BIND(NEXT);
+
+ // Encrypt a single byte, and loop.
+ // We expect this to be a rare event.
+ __ ldrb(rscratch1, Address(in, offset));
+ __ ldrb(rscratch2, Address(saved_encrypted_ctr, used));
+ __ eor(rscratch1, rscratch1, rscratch2);
+ __ strb(rscratch1, Address(out, offset));
+ __ add(offset, offset, 1);
+ __ add(used, used, 1);
+ __ subw(len, len, 1);
+ __ cbnzw(len, L_CTR_loop);
+ }
+
+ __ bind(DONE);
+ __ strw(used, Address(used_ptr));
+ __ mov(r0, saved_len);
+
+ __ leave(); // required for proper stackwalking of RuntimeStub frame
+ __ ret(lr);
+
+ // Bulk encryption
+
+ __ BIND(CTR_large_block);
+ assert(bulk_width == 4 || bulk_width == 8, "must be");
+
+ if (bulk_width == 8) {
+ __ sub(sp, sp, 4 * 16);
+ __ st1(v12, v13, v14, v15, __ T16B, Address(sp));
+ }
+ __ sub(sp, sp, 4 * 16);
+ __ st1(v8, v9, v10, v11, __ T16B, Address(sp));
+ RegSet saved_regs = (RegSet::of(in, out, offset)
+ + RegSet::of(saved_encrypted_ctr, used_ptr, len));
+ __ push(saved_regs, sp);
+ __ andr(len, len, -16 * bulk_width); // 8/4 encryptions, 16 bytes per encryption
+ __ add(in, in, offset);
+ __ add(out, out, offset);
+
+ // Keys should already be loaded into the correct registers
+
+ __ ld1(v0, __ T16B, counter); // v0 contains the first counter
+ __ rev32(v16, __ T16B, v0); // v16 contains byte-reversed counter
+
+ // AES/CTR loop
+ {
+ Label L_CTR_loop;
+ __ BIND(L_CTR_loop);
+
+ // Setup the counters
+ __ movi(v8, __ T4S, 0);
+ __ movi(v9, __ T4S, 1);
+ __ ins(v8, __ S, v9, 3, 3); // v8 contains { 0, 0, 0, 1 }
+
+ for (FloatRegister f = v0; f < v0 + bulk_width; f++) {
+ __ rev32(f, __ T16B, v16);
+ __ addv(v16, __ T4S, v16, v8);
+ }
+
+ __ ld1(v8, v9, v10, v11, __ T16B, __ post(in, 4 * 16));
+
+ // Encrypt the counters
+ __ aesecb_encrypt(noreg, noreg, keylen, v0, bulk_width);
+
+ if (bulk_width == 8) {
+ __ ld1(v12, v13, v14, v15, __ T16B, __ post(in, 4 * 16));
+ }
+
+ // XOR the encrypted counters with the inputs
+ for (int i = 0; i < bulk_width; i++) {
+ __ eor(v0 + i, __ T16B, v0 + i, v8 + i);
+ }
+
+ // Write the encrypted data
+ __ st1(v0, v1, v2, v3, __ T16B, __ post(out, 4 * 16));
+ if (bulk_width == 8) {
+ __ st1(v4, v5, v6, v7, __ T16B, __ post(out, 4 * 16));
+ }
+
+ __ subw(len, len, 16 * bulk_width);
+ __ cbnzw(len, L_CTR_loop);
+ }
+
+ // Save the counter back where it goes
+ __ rev32(v16, __ T16B, v16);
+ __ st1(v16, __ T16B, counter);
+
+ __ pop(saved_regs, sp);
+
+ __ ld1(v8, v9, v10, v11, __ T16B, __ post(sp, 4 * 16));
+ if (bulk_width == 8) {
+ __ ld1(v12, v13, v14, v15, __ T16B, __ post(sp, 4 * 16));
+ }
+
+ __ andr(rscratch1, len, -16 * bulk_width);
+ __ sub(len, len, rscratch1);
+ __ add(offset, offset, rscratch1);
+ __ mov(used, 16);
+ __ strw(used, Address(used_ptr));
+ __ b(large_block_return);
+
+ return start;
+ }
+
+ // Arguments:
+ //
+ // Inputs:
+ // c_rarg0 - byte[] source+offset
+ // c_rarg1 - int[] SHA.state
+ // c_rarg2 - int offset
+ // c_rarg3 - int limit
+ //
+ address generate_md5_implCompress(bool multi_block, const char *name) {
+ __ align(CodeEntryAlignment);
+ StubCodeMark mark(this, "StubRoutines", name);
+ address start = __ pc();
+
+ Register buf = c_rarg0;
+ Register state = c_rarg1;
+ Register ofs = c_rarg2;
+ Register limit = c_rarg3;
+ Register a = r4;
+ Register b = r5;
+ Register c = r6;
+ Register d = r7;
+ Register rscratch3 = r10;
+ Register rscratch4 = r11;
+
+ Label keys;
+ Label md5_loop;
+
+ __ BIND(md5_loop);
+
+ // Save hash values for addition after rounds
+ __ ldrw(a, Address(state, 0));
+ __ ldrw(b, Address(state, 4));
+ __ ldrw(c, Address(state, 8));
+ __ ldrw(d, Address(state, 12));
+
+#define FF(r1, r2, r3, r4, k, s, t) \
+ __ eorw(rscratch3, r3, r4); \
+ __ movw(rscratch2, t); \
+ __ andw(rscratch3, rscratch3, r2); \
+ __ addw(rscratch4, r1, rscratch2); \
+ __ ldrw(rscratch1, Address(buf, k*4)); \
+ __ eorw(rscratch3, rscratch3, r4); \
+ __ addw(rscratch3, rscratch3, rscratch1); \
+ __ addw(rscratch3, rscratch3, rscratch4); \
+ __ rorw(rscratch2, rscratch3, 32 - s); \
+ __ addw(r1, rscratch2, r2);
+
+#define GG(r1, r2, r3, r4, k, s, t) \
+ __ eorw(rscratch2, r2, r3); \
+ __ ldrw(rscratch1, Address(buf, k*4)); \
+ __ andw(rscratch3, rscratch2, r4); \
+ __ movw(rscratch2, t); \
+ __ eorw(rscratch3, rscratch3, r3); \
+ __ addw(rscratch4, r1, rscratch2); \
+ __ addw(rscratch3, rscratch3, rscratch1); \
+ __ addw(rscratch3, rscratch3, rscratch4); \
+ __ rorw(rscratch2, rscratch3, 32 - s); \
+ __ addw(r1, rscratch2, r2);
+
+#define HH(r1, r2, r3, r4, k, s, t) \
+ __ eorw(rscratch3, r3, r4); \
+ __ movw(rscratch2, t); \
+ __ addw(rscratch4, r1, rscratch2); \
+ __ ldrw(rscratch1, Address(buf, k*4)); \
+ __ eorw(rscratch3, rscratch3, r2); \
+ __ addw(rscratch3, rscratch3, rscratch1); \
+ __ addw(rscratch3, rscratch3, rscratch4); \
+ __ rorw(rscratch2, rscratch3, 32 - s); \
+ __ addw(r1, rscratch2, r2);
+
+#define II(r1, r2, r3, r4, k, s, t) \
+ __ movw(rscratch3, t); \
+ __ ornw(rscratch2, r2, r4); \
+ __ addw(rscratch4, r1, rscratch3); \
+ __ ldrw(rscratch1, Address(buf, k*4)); \
+ __ eorw(rscratch3, rscratch2, r3); \
+ __ addw(rscratch3, rscratch3, rscratch1); \
+ __ addw(rscratch3, rscratch3, rscratch4); \
+ __ rorw(rscratch2, rscratch3, 32 - s); \
+ __ addw(r1, rscratch2, r2);
+
+ // Round 1
+ FF(a, b, c, d, 0, 7, 0xd76aa478)
+ FF(d, a, b, c, 1, 12, 0xe8c7b756)
+ FF(c, d, a, b, 2, 17, 0x242070db)
+ FF(b, c, d, a, 3, 22, 0xc1bdceee)
+ FF(a, b, c, d, 4, 7, 0xf57c0faf)
+ FF(d, a, b, c, 5, 12, 0x4787c62a)
+ FF(c, d, a, b, 6, 17, 0xa8304613)
+ FF(b, c, d, a, 7, 22, 0xfd469501)
+ FF(a, b, c, d, 8, 7, 0x698098d8)
+ FF(d, a, b, c, 9, 12, 0x8b44f7af)
+ FF(c, d, a, b, 10, 17, 0xffff5bb1)
+ FF(b, c, d, a, 11, 22, 0x895cd7be)
+ FF(a, b, c, d, 12, 7, 0x6b901122)
+ FF(d, a, b, c, 13, 12, 0xfd987193)
+ FF(c, d, a, b, 14, 17, 0xa679438e)
+ FF(b, c, d, a, 15, 22, 0x49b40821)
+
+ // Round 2
+ GG(a, b, c, d, 1, 5, 0xf61e2562)
+ GG(d, a, b, c, 6, 9, 0xc040b340)
+ GG(c, d, a, b, 11, 14, 0x265e5a51)
+ GG(b, c, d, a, 0, 20, 0xe9b6c7aa)
+ GG(a, b, c, d, 5, 5, 0xd62f105d)
+ GG(d, a, b, c, 10, 9, 0x02441453)
+ GG(c, d, a, b, 15, 14, 0xd8a1e681)
+ GG(b, c, d, a, 4, 20, 0xe7d3fbc8)
+ GG(a, b, c, d, 9, 5, 0x21e1cde6)
+ GG(d, a, b, c, 14, 9, 0xc33707d6)
+ GG(c, d, a, b, 3, 14, 0xf4d50d87)
+ GG(b, c, d, a, 8, 20, 0x455a14ed)
+ GG(a, b, c, d, 13, 5, 0xa9e3e905)
+ GG(d, a, b, c, 2, 9, 0xfcefa3f8)
+ GG(c, d, a, b, 7, 14, 0x676f02d9)
+ GG(b, c, d, a, 12, 20, 0x8d2a4c8a)
+
+ // Round 3
+ HH(a, b, c, d, 5, 4, 0xfffa3942)
+ HH(d, a, b, c, 8, 11, 0x8771f681)
+ HH(c, d, a, b, 11, 16, 0x6d9d6122)
+ HH(b, c, d, a, 14, 23, 0xfde5380c)
+ HH(a, b, c, d, 1, 4, 0xa4beea44)
+ HH(d, a, b, c, 4, 11, 0x4bdecfa9)
+ HH(c, d, a, b, 7, 16, 0xf6bb4b60)
+ HH(b, c, d, a, 10, 23, 0xbebfbc70)
+ HH(a, b, c, d, 13, 4, 0x289b7ec6)
+ HH(d, a, b, c, 0, 11, 0xeaa127fa)
+ HH(c, d, a, b, 3, 16, 0xd4ef3085)
+ HH(b, c, d, a, 6, 23, 0x04881d05)
+ HH(a, b, c, d, 9, 4, 0xd9d4d039)
+ HH(d, a, b, c, 12, 11, 0xe6db99e5)
+ HH(c, d, a, b, 15, 16, 0x1fa27cf8)
+ HH(b, c, d, a, 2, 23, 0xc4ac5665)
+
+ // Round 4
+ II(a, b, c, d, 0, 6, 0xf4292244)
+ II(d, a, b, c, 7, 10, 0x432aff97)
+ II(c, d, a, b, 14, 15, 0xab9423a7)
+ II(b, c, d, a, 5, 21, 0xfc93a039)
+ II(a, b, c, d, 12, 6, 0x655b59c3)
+ II(d, a, b, c, 3, 10, 0x8f0ccc92)
+ II(c, d, a, b, 10, 15, 0xffeff47d)
+ II(b, c, d, a, 1, 21, 0x85845dd1)
+ II(a, b, c, d, 8, 6, 0x6fa87e4f)
+ II(d, a, b, c, 15, 10, 0xfe2ce6e0)
+ II(c, d, a, b, 6, 15, 0xa3014314)
+ II(b, c, d, a, 13, 21, 0x4e0811a1)
+ II(a, b, c, d, 4, 6, 0xf7537e82)
+ II(d, a, b, c, 11, 10, 0xbd3af235)
+ II(c, d, a, b, 2, 15, 0x2ad7d2bb)
+ II(b, c, d, a, 9, 21, 0xeb86d391)
+
+#undef FF
+#undef GG
+#undef HH
+#undef II
+
+ // write hash values back in the correct order
+ __ ldrw(rscratch1, Address(state, 0));
+ __ addw(rscratch1, rscratch1, a);
+ __ strw(rscratch1, Address(state, 0));
+
+ __ ldrw(rscratch2, Address(state, 4));
+ __ addw(rscratch2, rscratch2, b);
+ __ strw(rscratch2, Address(state, 4));
+
+ __ ldrw(rscratch3, Address(state, 8));
+ __ addw(rscratch3, rscratch3, c);
+ __ strw(rscratch3, Address(state, 8));
+
+ __ ldrw(rscratch4, Address(state, 12));
+ __ addw(rscratch4, rscratch4, d);
+ __ strw(rscratch4, Address(state, 12));
+
+ if (multi_block) {
+ __ add(buf, buf, 64);
+ __ add(ofs, ofs, 64);
+ __ cmp(ofs, limit);
+ __ br(Assembler::LE, md5_loop);
+ __ mov(c_rarg0, ofs); // return ofs
+ }
+
+ __ ret(lr);
+
+ return start;
+ }
+
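Each FF/GG/HH/II macro above is one step of RFC 1321's `a = b + rotl(a + f(b,c,d) + M[k] + T, s)`, with round 1's selection function rewritten via the identity `F(b,c,d) = ((c ^ d) & b) ^ d` — one instruction cheaper than the textbook `(b & c) | (~b & d)`, and exactly what the `eorw`/`andw`/`eorw` triple computes. In plain C++, as a reference sketch (not the stub itself):

```cpp
#include <cstdint>

static inline uint32_t rotl(uint32_t x, int s) {
  return (x << s) | (x >> (32 - s));
}

// One round-1 step: a = b + rotl(a + F(b,c,d) + M[k] + T, s).
// Note rorw(x, 32 - s) in the stub is the same rotation as rotl(x, s).
static inline void FF(uint32_t& a, uint32_t b, uint32_t c, uint32_t d,
                      uint32_t m_k, int s, uint32_t t) {
  uint32_t f = ((c ^ d) & b) ^ d;    // == (b & c) | (~b & d)
  a = b + rotl(a + f + m_k + t, s);
}
```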
// Arguments:
//
// Inputs:
@@ -5874,6 +6321,67 @@ class StubGenerator: public StubCodeGenerator {
return start;
}
+ address generate_ghash_processBlocks_wide() {
+ address small = generate_ghash_processBlocks();
+
+ StubCodeMark mark(this, "StubRoutines", "ghash_processBlocks_wide");
+ __ align(wordSize * 2);
+ address p = __ pc();
+ __ emit_int64(0x87); // The low-order bits of the field
+ // polynomial (i.e. p = z^7+z^2+z+1)
+ // repeated in the low and high parts of a
+ // 128-bit vector
+ __ emit_int64(0x87);
+
+ __ align(CodeEntryAlignment);
+ address start = __ pc();
+
+ Register state = c_rarg0;
+ Register subkeyH = c_rarg1;
+ Register data = c_rarg2;
+ Register blocks = c_rarg3;
+
+ const int unroll = 4;
+
+ __ cmp(blocks, (unsigned char)(unroll * 2));
+ __ br(__ LT, small);
+
+ if (unroll > 1) {
+ // Save state before entering routine
+ __ sub(sp, sp, 4 * 16);
+ __ st1(v12, v13, v14, v15, __ T16B, Address(sp));
+ __ sub(sp, sp, 4 * 16);
+ __ st1(v8, v9, v10, v11, __ T16B, Address(sp));
+ }
+
+ __ ghash_processBlocks_wide(p, state, subkeyH, data, blocks, unroll);
+
+ if (unroll > 1) {
+ // And restore state
+ __ ld1(v8, v9, v10, v11, __ T16B, __ post(sp, 4 * 16));
+ __ ld1(v12, v13, v14, v15, __ T16B, __ post(sp, 4 * 16));
+ }
+
+ __ cmp(blocks, zr);
+ __ br(__ GT, small);
+
+ __ ret(lr);
+
+ return start;
+ }
+
+ // Support for spin waits.
+ address generate_spin_wait() {
+ __ align(CodeEntryAlignment);
+ StubCodeMark mark(this, "StubRoutines", "spin_wait");
+ address start = __ pc();
+
+ __ spin_wait();
+ __ ret(lr);
+
+ return start;
+ }
+
#ifdef LINUX
// ARMv8.1 LSE versions of the atomic stubs used by Atomic::PlatformXX.
@@ -5954,6 +6462,10 @@ class StubGenerator: public StubCodeGenerator {
acquire = false;
release = false;
break;
+ case memory_order_release:
+ acquire = false;
+ release = true;
+ break;
default:
acquire = true;
release = true;
@@ -6035,6 +6547,20 @@ class StubGenerator: public StubCodeGenerator {
(_masm, &aarch64_atomic_cmpxchg_8_relaxed_impl);
gen_cas_entry(MacroAssembler::xword, memory_order_relaxed);
+ AtomicStubMark mark_cmpxchg_4_release
+ (_masm, &aarch64_atomic_cmpxchg_4_release_impl);
+ gen_cas_entry(MacroAssembler::word, memory_order_release);
+ AtomicStubMark mark_cmpxchg_8_release
+ (_masm, &aarch64_atomic_cmpxchg_8_release_impl);
+ gen_cas_entry(MacroAssembler::xword, memory_order_release);
+
+ AtomicStubMark mark_cmpxchg_4_seq_cst
+ (_masm, &aarch64_atomic_cmpxchg_4_seq_cst_impl);
+ gen_cas_entry(MacroAssembler::word, memory_order_seq_cst);
+ AtomicStubMark mark_cmpxchg_8_seq_cst
+ (_masm, &aarch64_atomic_cmpxchg_8_seq_cst_impl);
+ gen_cas_entry(MacroAssembler::xword, memory_order_seq_cst);
+
ICache::invalidate_range(first_entry, __ pc() - first_entry);
}
#endif // LINUX
@@ -7111,7 +7637,11 @@ class StubGenerator: public StubCodeGenerator {
// generate GHASH intrinsics code
if (UseGHASHIntrinsics) {
- StubRoutines::_ghash_processBlocks = generate_ghash_processBlocks();
+ if (UseAESCTRIntrinsics) {
+ StubRoutines::_ghash_processBlocks = generate_ghash_processBlocks_wide();
+ } else {
+ StubRoutines::_ghash_processBlocks = generate_ghash_processBlocks();
+ }
}
if (UseBASE64Intrinsics) {
@@ -7130,6 +7660,14 @@ class StubGenerator: public StubCodeGenerator {
StubRoutines::_cipherBlockChaining_decryptAESCrypt = generate_cipherBlockChaining_decryptAESCrypt();
}
+ if (UseAESCTRIntrinsics) {
+ StubRoutines::_counterMode_AESCrypt = generate_counterMode_AESCrypt();
+ }
+
+ if (UseMD5Intrinsics) {
+ StubRoutines::_md5_implCompress = generate_md5_implCompress(false, "md5_implCompress");
+ StubRoutines::_md5_implCompressMB = generate_md5_implCompress(true, "md5_implCompressMB");
+ }
if (UseSHA1Intrinsics) {
StubRoutines::_sha1_implCompress = generate_sha1_implCompress(false, "sha1_implCompress");
StubRoutines::_sha1_implCompressMB = generate_sha1_implCompress(true, "sha1_implCompressMB");
@@ -7152,6 +7690,8 @@ class StubGenerator: public StubCodeGenerator {
StubRoutines::_updateBytesAdler32 = generate_updateBytesAdler32();
}
+ StubRoutines::aarch64::_spin_wait = generate_spin_wait();
+
#ifdef LINUX
generate_atomic_entry_points();
@@ -7201,6 +7741,10 @@ DEFAULT_ATOMIC_OP(cmpxchg, 8, )
DEFAULT_ATOMIC_OP(cmpxchg, 1, _relaxed)
DEFAULT_ATOMIC_OP(cmpxchg, 4, _relaxed)
DEFAULT_ATOMIC_OP(cmpxchg, 8, _relaxed)
+DEFAULT_ATOMIC_OP(cmpxchg, 4, _release)
+DEFAULT_ATOMIC_OP(cmpxchg, 8, _release)
+DEFAULT_ATOMIC_OP(cmpxchg, 4, _seq_cst)
+DEFAULT_ATOMIC_OP(cmpxchg, 8, _seq_cst)
#undef DEFAULT_ATOMIC_OP
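Why `generate_ghash_processBlocks_wide` precomputes powers of H in the subkey table: GHASH is the serial recurrence Y_i = (Y_{i-1} ⊕ X_i) · H over GF(2^128), which looks unparallelizable, but unrolling four steps regroups it (a sketch; addition is XOR in this field):

```latex
% GHASH recurrence: Y_i = (Y_{i-1} \oplus X_i) \cdot H over GF(2^{128}).
% Unrolled four steps:
\[
Y_{i+4} = X_{i+4}H \oplus X_{i+3}H^{2} \oplus X_{i+2}H^{3}
          \oplus (Y_{i} \oplus X_{i+1})H^{4}
\]
```

Each of the four chains multiplies by a different power of H, so the interleaved generators can overlap their `pmull` latencies and XOR the partial states together at the end — matching `unroll = 4` in the stub above.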
diff --git a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
index 1cbc3ed21d16f..9e16a1f9f8812 100644
--- a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.cpp
@@ -57,6 +57,10 @@ address StubRoutines::aarch64::_string_indexof_linear_uu = NULL;
address StubRoutines::aarch64::_string_indexof_linear_ul = NULL;
address StubRoutines::aarch64::_large_byte_array_inflate = NULL;
address StubRoutines::aarch64::_method_entry_barrier = NULL;
+
+static void empty_spin_wait() { }
+address StubRoutines::aarch64::_spin_wait = CAST_FROM_FN_PTR(address, empty_spin_wait);
+
bool StubRoutines::aarch64::_completed = false;
/**
diff --git a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
index 7578791b5401c..295264b7aaf22 100644
--- a/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/stubRoutines_aarch64.hpp
@@ -36,7 +36,7 @@ static bool returns_to_call_stub(address return_pc) {
enum platform_dependent_constants {
code_size1 = 19000, // simply increase if too small (assembler will crash if too small)
- code_size2 = 28000 // simply increase if too small (assembler will crash if too small)
+ code_size2 = 45000 // simply increase if too small (assembler will crash if too small)
};
class aarch64 {
@@ -72,6 +72,8 @@ class aarch64 {
static address _method_entry_barrier;
+ static address _spin_wait;
+
static bool _completed;
public:
@@ -177,6 +179,10 @@ class aarch64 {
return _method_entry_barrier;
}
+ static address spin_wait() {
+ return _spin_wait;
+ }
+
static bool complete() {
return _completed;
}
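The `_spin_wait` stub pointer is pre-initialized to an empty function (see the `stubRoutines_aarch64.cpp` hunk above), so call sites can branch through it unconditionally even before the real stub has been generated. A minimal sketch of that pattern, with hypothetical names:

```cpp
// Default-stub pattern: initialize a code pointer to a no-op so callers
// never need a NULL check. Names are illustrative, not HotSpot's.
using stub_t = void (*)();

static void empty_stub() {}

static stub_t spin_wait_stub = empty_stub;  // replaced once the real stub exists

void on_spin_wait() {
  spin_wait_stub();  // always safe to call, before or after stub generation
}
```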
diff --git a/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
index db253fe5c2cd0..e20cffd57670b 100644
--- a/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
@@ -1397,11 +1397,12 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
__ cmp(rscratch1, (u1)StackOverflow::stack_guard_yellow_reserved_disabled);
__ br(Assembler::NE, no_reguard);
- __ pusha(); // XXX only save smashed registers
+ __ push_call_clobbered_registers();
__ mov(c_rarg0, rthread);
__ mov(rscratch2, CAST_FROM_FN_PTR(address, SharedRuntime::reguard_yellow_pages));
__ blr(rscratch2);
- __ popa(); // XXX only restore smashed registers
+ __ pop_call_clobbered_registers();
+
__ bind(no_reguard);
}
diff --git a/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp b/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
index 123c429fa0f9e..f9de8861358be 100644
--- a/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/vm_version_aarch64.cpp
@@ -46,6 +46,26 @@ int VM_Version::_dcache_line_size;
int VM_Version::_icache_line_size;
int VM_Version::_initial_sve_vector_length;
+SpinWait VM_Version::_spin_wait;
+
+static SpinWait get_spin_wait_desc() {
+ if (strcmp(OnSpinWaitInst, "nop") == 0) {
+ return SpinWait(SpinWait::NOP, OnSpinWaitInstCount);
+ } else if (strcmp(OnSpinWaitInst, "isb") == 0) {
+ return SpinWait(SpinWait::ISB, OnSpinWaitInstCount);
+ } else if (strcmp(OnSpinWaitInst, "yield") == 0) {
+ return SpinWait(SpinWait::YIELD, OnSpinWaitInstCount);
+ } else if (strcmp(OnSpinWaitInst, "none") != 0) {
+ vm_exit_during_initialization("The options for OnSpinWaitInst are nop, isb, yield, and none", OnSpinWaitInst);
+ }
+
+ if (!FLAG_IS_DEFAULT(OnSpinWaitInstCount) && OnSpinWaitInstCount > 0) {
+ vm_exit_during_initialization("OnSpinWaitInstCount cannot be used for OnSpinWaitInst 'none'");
+ }
+
+ return SpinWait{};
+}
+
void VM_Version::initialize() {
_supports_cx8 = true;
_supports_atomic_getset4 = true;
@@ -182,6 +202,14 @@ void VM_Version::initialize() {
if (FLAG_IS_DEFAULT(UseSIMDForMemoryOps)) {
FLAG_SET_DEFAULT(UseSIMDForMemoryOps, true);
}
+
+ if (FLAG_IS_DEFAULT(OnSpinWaitInst)) {
+ FLAG_SET_DEFAULT(OnSpinWaitInst, "isb");
+ }
+
+ if (FLAG_IS_DEFAULT(OnSpinWaitInstCount)) {
+ FLAG_SET_DEFAULT(OnSpinWaitInstCount, 1);
+ }
}
if (_cpu == CPU_ARM) {
@@ -237,6 +265,9 @@ void VM_Version::initialize() {
warning("UseAESIntrinsics enabled, but UseAES not, enabling");
UseAES = true;
}
+ if (FLAG_IS_DEFAULT(UseAESCTRIntrinsics)) {
+ FLAG_SET_DEFAULT(UseAESCTRIntrinsics, false);
+ }
} else {
if (UseAES) {
warning("AES instructions are not available on this CPU");
@@ -246,11 +277,10 @@ void VM_Version::initialize() {
warning("AES intrinsics are not available on this CPU");
FLAG_SET_DEFAULT(UseAESIntrinsics, false);
}
- }
-
- if (UseAESCTRIntrinsics) {
- warning("AES/CTR intrinsics are not available on this CPU");
- FLAG_SET_DEFAULT(UseAESCTRIntrinsics, false);
+ if (UseAESCTRIntrinsics) {
+ warning("AES/CTR intrinsics are not available on this CPU");
+ FLAG_SET_DEFAULT(UseAESCTRIntrinsics, false);
+ }
}
if (FLAG_IS_DEFAULT(UseCRC32Intrinsics)) {
@@ -270,9 +300,8 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseFMA, true);
}
- if (UseMD5Intrinsics) {
- warning("MD5 intrinsics are not available on this CPU");
- FLAG_SET_DEFAULT(UseMD5Intrinsics, false);
+ if (FLAG_IS_DEFAULT(UseMD5Intrinsics)) {
+ UseMD5Intrinsics = true;
}
if (_features & (CPU_SHA1 | CPU_SHA2 | CPU_SHA3 | CPU_SHA512)) {
@@ -448,5 +477,7 @@ void VM_Version::initialize() {
}
#endif
+ _spin_wait = get_spin_wait_desc();
+
UNSUPPORTED_OPTION(CriticalJNINatives);
}
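With these defaults, `Thread.onSpinWait()` lowers to a single `isb` on aarch64. A sketch of the busy-wait shape this hint targets; `spin_pause()` is a stand-in for the generated nop/isb/yield sequence:

```cpp
#include <atomic>

// Stand-in for the generated hint sequence selected by OnSpinWaitInst.
inline void spin_pause() { /* e.g. asm volatile("isb") on aarch64 */ }

void spin_until_set(std::atomic<bool>& flag) {
  while (!flag.load(std::memory_order_acquire)) {
    spin_pause();  // cheap hint to the core while we busy-wait
  }
}
```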
diff --git a/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp b/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
index 6817eed08e95e..61f422bd2d38b 100644
--- a/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/vm_version_aarch64.hpp
@@ -26,6 +26,7 @@
#ifndef CPU_AARCH64_VM_VERSION_AARCH64_HPP
#define CPU_AARCH64_VM_VERSION_AARCH64_HPP
+#include "spin_wait_aarch64.hpp"
#include "runtime/abstract_vm_version.hpp"
#include "utilities/sizes.hpp"
@@ -45,6 +46,8 @@ class VM_Version : public Abstract_VM_Version {
static int _icache_line_size;
static int _initial_sve_vector_length;
+ static SpinWait _spin_wait;
+
// Read additional info using OS-specific interfaces
static void get_os_cpu_info();
@@ -142,6 +145,10 @@ class VM_Version : public Abstract_VM_Version {
static void get_compatible_board(char *buf, int buflen);
+ static const SpinWait& spin_wait_desc() { return _spin_wait; }
+
+ static bool supports_on_spin_wait() { return _spin_wait.inst() != SpinWait::NONE; }
+
#ifdef __APPLE__
// Is the CPU running emulated (for example macOS Rosetta running x86_64 code on M1 ARM (aarch64))?
static bool is_cpu_emulated();
diff --git a/src/hotspot/cpu/arm/assembler_arm_32.cpp b/src/hotspot/cpu/arm/assembler_arm_32.cpp
index 0166d85a21ad0..46e74cfa49701 100644
--- a/src/hotspot/cpu/arm/assembler_arm_32.cpp
+++ b/src/hotspot/cpu/arm/assembler_arm_32.cpp
@@ -47,7 +47,7 @@
// Convert the raw encoding form into the form expected by the
// constructor for Address.
Address Address::make_raw(int base, int index, int scale, int disp, relocInfo::relocType disp_reloc) {
- RelocationHolder rspec;
+ RelocationHolder rspec = RelocationHolder::none;
if (disp_reloc != relocInfo::none) {
rspec = Relocation::spec_simple(disp_reloc);
}
diff --git a/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp b/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp
index 13a95b26db6fd..776f00977cba8 100644
--- a/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp
+++ b/src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp
@@ -1685,6 +1685,9 @@ void LIR_Assembler::logic_op(LIR_Code code, LIR_Opr left, LIR_Opr right, LIR_Opr
} else {
assert(right->is_constant(), "must be");
const uint c = (uint)right->as_constant_ptr()->as_jint();
+ if (!Assembler::is_arith_imm_in_range(c)) {
+ BAILOUT("illegal arithmetic operand");
+ }
switch (code) {
case lir_logic_and: __ and_32(res, lreg, c); break;
case lir_logic_or: __ orr_32(res, lreg, c); break;
@@ -1825,8 +1828,8 @@ void LIR_Assembler::comp_op(LIR_Condition condition, LIR_Opr opr1, LIR_Opr opr2,
__ teq(xhi, yhi);
__ teq(xlo, ylo, eq);
} else {
- __ subs(xlo, xlo, ylo);
- __ sbcs(xhi, xhi, yhi);
+ __ subs(Rtemp, xlo, ylo);
+ __ sbcs(Rtemp, xhi, yhi);
}
} else {
ShouldNotReachHere();
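Writing the `subs`/`sbcs` results to `Rtemp` keeps `xlo`/`xhi` intact; only the condition flags are needed for the comparison. A sketch of the borrow-chain idea in C++ (the unsigned flavour, for brevity):

```cpp
// A 64-bit compare done as a 32-bit subtract plus subtract-with-borrow,
// which is what the subs/sbcs pair computes. Only the "flags" matter,
// so the patch writes the numeric results to a temp register.
#include <cstdint>

bool less_than_u64(uint32_t xlo, uint32_t xhi, uint32_t ylo, uint32_t yhi) {
  uint32_t borrow = (xlo < ylo) ? 1u : 0u;        // borrow-out of 'subs'
  // 'sbcs' computes xhi - yhi - borrow; a final borrow means x < y.
  return (uint64_t)xhi < (uint64_t)yhi + borrow;
}
```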
diff --git a/src/hotspot/cpu/arm/c1_MacroAssembler_arm.cpp b/src/hotspot/cpu/arm/c1_MacroAssembler_arm.cpp
index 920e5cd27f472..0c77b1c82e76a 100644
--- a/src/hotspot/cpu/arm/c1_MacroAssembler_arm.cpp
+++ b/src/hotspot/cpu/arm/c1_MacroAssembler_arm.cpp
@@ -70,8 +70,8 @@ void C1_MacroAssembler::remove_frame(int frame_size_in_bytes) {
raw_pop(FP, LR);
}
-void C1_MacroAssembler::verified_entry() {
- if (C1Breakpoint) {
+void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
+ if (breakAtEntry) {
breakpoint();
}
}
diff --git a/src/hotspot/cpu/arm/matcher_arm.hpp b/src/hotspot/cpu/arm/matcher_arm.hpp
index 0d011a620f95d..c727b7c4022a1 100644
--- a/src/hotspot/cpu/arm/matcher_arm.hpp
+++ b/src/hotspot/cpu/arm/matcher_arm.hpp
@@ -147,4 +147,7 @@
return false;
}
+ // Implements a variant of EncodeISOArrayNode that encode ASCII only
+ static const bool supports_encode_ascii_array = false;
+
#endif // CPU_ARM_MATCHER_ARM_HPP
diff --git a/src/hotspot/cpu/ppc/assembler_ppc.hpp b/src/hotspot/cpu/ppc/assembler_ppc.hpp
index 3f8184f966e32..0d0c31177458b 100644
--- a/src/hotspot/cpu/ppc/assembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/assembler_ppc.hpp
@@ -47,6 +47,9 @@ class Address {
Address(Register b, address d = 0)
: _base(b), _index(noreg), _disp((intptr_t)d) {}
+ Address(Register b, ByteSize d)
+ : _base(b), _index(noreg), _disp((intptr_t)d) {}
+
Address(Register b, intptr_t d)
: _base(b), _index(noreg), _disp(d) {}
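The `ByteSize` overload lets call sites pass HotSpot's strongly typed byte offsets without a manual cast. A rough illustration (requires C++17; `ByteSize` here is a stand-in for HotSpot's type):

```cpp
// Why the overload helps: typed offsets convert directly at the call site.
#include <cstdint>

enum class ByteSize : intptr_t {};  // strong typedef for byte offsets

struct Address {
  intptr_t disp;
  explicit Address(intptr_t d) : disp(d) {}
  Address(ByteSize d) : disp(static_cast<intptr_t>(d)) {}  // the new overload
};

ByteSize field_offset() { return ByteSize{16}; }  // hypothetical offset

Address a{field_offset()};  // compiles with no in_bytes()/cast at the call site
```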
diff --git a/src/hotspot/cpu/ppc/c1_CodeStubs_ppc.cpp b/src/hotspot/cpu/ppc/c1_CodeStubs_ppc.cpp
index 8eedca7886cbd..3d18e7283633a 100644
--- a/src/hotspot/cpu/ppc/c1_CodeStubs_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c1_CodeStubs_ppc.cpp
@@ -81,8 +81,6 @@ void RangeCheckStub::emit_code(LIR_Assembler* ce) {
if (_info->deoptimize_on_exception()) {
address a = Runtime1::entry_for(Runtime1::predicate_failed_trap_id);
- // May be used by optimizations like LoopInvariantCodeMotion or RangeCheckEliminator.
- DEBUG_ONLY( __ untested("RangeCheckStub: predicate_failed_trap_id"); )
//__ load_const_optimized(R0, a);
__ add_const_optimized(R0, R29_TOC, MacroAssembler::offset_to_global_toc(a));
__ mtctr(R0);
diff --git a/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp b/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
index 47cee69549021..b8c304c4033a1 100644
--- a/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
@@ -87,8 +87,8 @@ void C1_MacroAssembler::build_frame(int frame_size_in_bytes, int bang_size_in_by
}
-void C1_MacroAssembler::verified_entry() {
- if (C1Breakpoint) illtrap();
+void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
+ if (breakAtEntry) illtrap();
// build frame
}
diff --git a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
index 69a29c02654e4..1ae5c13e62e97 100644
--- a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.cpp
@@ -41,17 +41,18 @@
// Compress char[] to byte[] by compressing 16 bytes at once.
void C2_MacroAssembler::string_compress_16(Register src, Register dst, Register cnt,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5,
- Label& Lfailure) {
+ Label& Lfailure, bool ascii) {
const Register tmp0 = R0;
+ const int byte_mask = ascii ? 0x7F : 0xFF;
assert_different_registers(src, dst, cnt, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5);
Label Lloop, Lslow;
// Check if cnt >= 8 (= 16 bytes)
- lis(tmp1, 0xFF); // tmp1 = 0x00FF00FF00FF00FF
+    lis(tmp1, byte_mask);           // tmp1 = 0x00FF00FF00FF00FF (non-ASCII case)
srwi_(tmp2, cnt, 3);
beq(CCR0, Lslow);
- ori(tmp1, tmp1, 0xFF);
+ ori(tmp1, tmp1, byte_mask);
rldimi(tmp1, tmp1, 32, 0);
mtctr(tmp2);
@@ -67,7 +68,7 @@ void C2_MacroAssembler::string_compress_16(Register src, Register dst, Register
rldimi(tmp4, tmp4, 2*8, 2*8); // _4_6_7_7
andc_(tmp0, tmp0, tmp1);
- bne(CCR0, Lfailure); // Not latin1.
+ bne(CCR0, Lfailure); // Not latin1/ascii.
addi(src, src, 16);
rlwimi(tmp3, tmp2, 0*8, 24, 31);// _____1_3
@@ -87,20 +88,49 @@ void C2_MacroAssembler::string_compress_16(Register src, Register dst, Register
}
// Compress char[] to byte[]. cnt must be positive int.
-void C2_MacroAssembler::string_compress(Register src, Register dst, Register cnt, Register tmp, Label& Lfailure) {
+void C2_MacroAssembler::string_compress(Register src, Register dst, Register cnt, Register tmp,
+ Label& Lfailure, bool ascii) {
+ const int byte_mask = ascii ? 0x7F : 0xFF;
Label Lloop;
mtctr(cnt);
bind(Lloop);
lhz(tmp, 0, src);
- cmplwi(CCR0, tmp, 0xff);
- bgt(CCR0, Lfailure); // Not latin1.
+ cmplwi(CCR0, tmp, byte_mask);
+ bgt(CCR0, Lfailure); // Not latin1/ascii.
addi(src, src, 2);
stb(tmp, 0, dst);
addi(dst, dst, 1);
bdnz(Lloop);
}
+void C2_MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
+ Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5,
+ Register result, bool ascii) {
+ Label Lslow, Lfailure1, Lfailure2, Ldone;
+
+ string_compress_16(src, dst, len, tmp1, tmp2, tmp3, tmp4, tmp5, Lfailure1, ascii);
+ rldicl_(result, len, 0, 64-3); // Remaining characters.
+ beq(CCR0, Ldone);
+ bind(Lslow);
+ string_compress(src, dst, result, tmp2, Lfailure2, ascii);
+ li(result, 0);
+ b(Ldone);
+
+ bind(Lfailure1);
+ mr(result, len);
+ mfctr(tmp1);
+ rldimi_(result, tmp1, 3, 0); // Remaining characters.
+ beq(CCR0, Ldone);
+ b(Lslow);
+
+ bind(Lfailure2);
+ mfctr(result); // Remaining characters.
+
+ bind(Ldone);
+ subf(result, result, len);
+}
+
// Inflate byte[] to char[] by inflating 16 bytes at once.
void C2_MacroAssembler::string_inflate_16(Register src, Register dst, Register cnt,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5) {
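`encode_iso_array` folds the 16-byte-wide and scalar compress loops together and reports progress when a character does not fit. A scalar reference for the semantics it computes (a sketch, not the PPC stub):

```cpp
// Copy UTF-16 chars to bytes while each fits the target charset; return how
// many were encoded before the first mismatch (len on full success).
#include <cstddef>
#include <cstdint>

size_t encode_iso_array(const uint16_t* src, uint8_t* dst, size_t len, bool ascii) {
  const uint16_t byte_mask = ascii ? 0x7F : 0xFF;  // ASCII vs ISO-8859-1
  for (size_t i = 0; i < len; i++) {
    if (src[i] > byte_mask) return i;  // position of first non-encodable char
    dst[i] = (uint8_t)src[i];
  }
  return len;
}
```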
diff --git a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
index 9f363f2b96217..9c4576f2eaf04 100644
--- a/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/c2_MacroAssembler_ppc.hpp
@@ -32,10 +32,16 @@
// Compress char[] to byte[] by compressing 16 bytes at once.
void string_compress_16(Register src, Register dst, Register cnt,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5,
- Label& Lfailure);
+ Label& Lfailure, bool ascii = false);
// Compress char[] to byte[]. cnt must be positive int.
- void string_compress(Register src, Register dst, Register cnt, Register tmp, Label& Lfailure);
+ void string_compress(Register src, Register dst, Register cnt, Register tmp,
+ Label& Lfailure, bool ascii = false);
+
+ // Encode UTF16 to ISO_8859_1 or ASCII. Return len on success or position of first mismatch.
+ void encode_iso_array(Register src, Register dst, Register len,
+ Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5,
+ Register result, bool ascii);
// Inflate byte[] to char[] by inflating 16 bytes at once.
void string_inflate_16(Register src, Register dst, Register cnt,
diff --git a/src/hotspot/cpu/ppc/frame_ppc.cpp b/src/hotspot/cpu/ppc/frame_ppc.cpp
index 36585185812e8..dae75944542d8 100644
--- a/src/hotspot/cpu/ppc/frame_ppc.cpp
+++ b/src/hotspot/cpu/ppc/frame_ppc.cpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012, 2021 SAP SE. All rights reserved.
+ * Copyright (c) 2000, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2022 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -52,7 +52,6 @@ void RegisterMap::check_location_valid() {
#endif // ASSERT
bool frame::safe_for_sender(JavaThread *thread) {
- bool safe = false;
address sp = (address)_sp;
address fp = (address)_fp;
address unextended_sp = (address)_unextended_sp;
@@ -70,28 +69,23 @@ bool frame::safe_for_sender(JavaThread *thread) {
// An fp must be within the stack and above (but not equal) sp.
bool fp_safe = thread->is_in_stack_range_excl(fp, sp);
- // An interpreter fp must be within the stack and above (but not equal) sp.
- // Moreover, it must be at least the size of the ijava_state structure.
+ // An interpreter fp must be fp_safe.
+  // Moreover, it must be at least the size of the ijava_state structure away from sp.
bool fp_interp_safe = fp_safe && ((fp - sp) >= ijava_state_size);
// We know sp/unextended_sp are safe, only fp is questionable here
// If the current frame is known to the code cache then we can attempt to
- // to construct the sender and do some validation of it. This goes a long way
+ // construct the sender and do some validation of it. This goes a long way
// toward eliminating issues when we get in frame construction code
if (_cb != NULL ){
- // Entry frame checks
- if (is_entry_frame()) {
- // An entry frame must have a valid fp.
- return fp_safe && is_entry_frame_valid(thread);
- }
- // Now check if the frame is complete and the test is
- // reliable. Unfortunately we can only check frame completeness for
- // runtime stubs and nmethods. Other generic buffer blobs are more
- // problematic so we just assume they are OK. Adapter blobs never have a
- // complete frame and are never OK
+ // First check if the frame is complete and the test is reliable.
+ // Unfortunately we can only check frame completeness for runtime stubs
+ // and nmethods. Other generic buffer blobs are more problematic
+ // so we just assume they are OK.
+ // Adapter blobs never have a complete frame and are never OK
if (!_cb->is_frame_complete_at(_pc)) {
if (_cb->is_compiled() || _cb->is_adapter_blob() || _cb->is_runtime_stub()) {
return false;
@@ -103,10 +97,23 @@ bool frame::safe_for_sender(JavaThread *thread) {
return false;
}
+ // Entry frame checks
+ if (is_entry_frame()) {
+ // An entry frame must have a valid fp.
+ return fp_safe && is_entry_frame_valid(thread);
+ }
+
if (is_interpreted_frame() && !fp_interp_safe) {
return false;
}
+ // At this point, there still is a chance that fp_safe is false.
+ // In particular, (fp == NULL) might be true. So let's check and
+ // bail out before we actually dereference from fp.
+ if (!fp_safe) {
+ return false;
+ }
+
abi_minframe* sender_abi = (abi_minframe*) fp;
intptr_t* sender_sp = (intptr_t*) fp;
address sender_pc = (address) sender_abi->lr;
@@ -287,9 +294,57 @@ void frame::patch_pc(Thread* thread, address pc) {
}
bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
- // Is there anything to do?
assert(is_interpreted_frame(), "Not an interpreted frame");
- return true;
+ // These are reasonable sanity checks
+ if (fp() == 0 || (intptr_t(fp()) & (wordSize-1)) != 0) {
+ return false;
+ }
+ if (sp() == 0 || (intptr_t(sp()) & (wordSize-1)) != 0) {
+ return false;
+ }
+ int min_frame_slots = (abi_minframe_size + ijava_state_size) / sizeof(intptr_t);
+ if (fp() - min_frame_slots < sp()) {
+ return false;
+ }
+ // These are hacks to keep us out of trouble.
+ // The problem with these is that they mask other problems
+ if (fp() <= sp()) { // this attempts to deal with unsigned comparison above
+ return false;
+ }
+
+ // do some validation of frame elements
+
+ // first the method
+
+ Method* m = *interpreter_frame_method_addr();
+
+ // validate the method we'd find in this potential sender
+ if (!Method::is_valid_method(m)) return false;
+
+ // stack frames shouldn't be much larger than max_stack elements
+ // this test requires the use of unextended_sp which is the sp as seen by
+  // the current frame, and not sp which is the "raw" sp, which could point
+ // further because of local variables of the callee method inserted after
+ // method arguments
+ if (fp() - unextended_sp() > 1024 + m->max_stack()*Interpreter::stackElementSize) {
+ return false;
+ }
+
+ // validate bci/bcx
+
+ address bcp = interpreter_frame_bcp();
+ if (m->validate_bci_from_bcp(bcp) < 0) {
+ return false;
+ }
+
+ // validate constantPoolCache*
+ ConstantPoolCache* cp = *interpreter_frame_cache_addr();
+ if (MetaspaceObj::is_valid(cp) == false) return false;
+
+ // validate locals
+
+ address locals = (address) *interpreter_frame_locals_addr();
+ return thread->is_in_stack_range_incl(locals, (address)fp());
}
BasicType frame::interpreter_frame_result(oop* oop_result, jvalue* value_result) {
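The new `is_interpreted_frame_valid` checks follow a common shape: reject unaligned, inverted, or implausibly small frames before dereferencing anything through them. A sketch with hypothetical names and constants:

```cpp
// Cheap structural sanity checks for a candidate frame, done before any
// dereference. min_frame_bytes stands in for abi_minframe_size + ijava_state_size.
#include <cstddef>
#include <cstdint>

bool plausible_frame(intptr_t* fp, intptr_t* sp, size_t min_frame_bytes) {
  if (fp == nullptr || ((uintptr_t)fp & (sizeof(intptr_t) - 1)) != 0) return false;
  if (sp == nullptr || ((uintptr_t)sp & (sizeof(intptr_t) - 1)) != 0) return false;
  // Stack grows down: fp must sit at least one full frame above sp.
  return (uintptr_t)fp >= (uintptr_t)sp + min_frame_bytes;
}
```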
diff --git a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
index 800b34e4ba736..c4b152a6db390 100644
--- a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
@@ -111,16 +111,28 @@ void BarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorators,
}
}
+// Generic implementation. GCs can provide an optimized one.
void BarrierSetAssembler::resolve_jobject(MacroAssembler* masm, Register value,
Register tmp1, Register tmp2,
MacroAssembler::PreservationLevel preservation_level) {
- Label done;
+ Label done, not_weak, verify;
__ cmpdi(CCR0, value, 0);
__ beq(CCR0, done); // Use NULL as-is.
- __ clrrdi(tmp1, value, JNIHandles::weak_tag_size);
- __ ld(value, 0, tmp1); // Resolve (untagged) jobject.
+ __ andi_(tmp1, value, JNIHandles::weak_tag_mask);
+ __ beq(CCR0, not_weak); // Test for jweak tag.
+ // Resolve (untagged) jobject.
+ __ clrrdi(value, value, JNIHandles::weak_tag_size);
+ load_at(masm, IN_NATIVE | ON_PHANTOM_OOP_REF, T_OBJECT,
+ value, (intptr_t)0, value, tmp1, tmp2, preservation_level);
+ __ b(verify);
+
+ __ bind(not_weak);
+ load_at(masm, IN_NATIVE, T_OBJECT,
+ value, (intptr_t)0, value, tmp1, tmp2, preservation_level);
+
+ __ bind(verify);
__ verify_oop(value, FILE_AND_LINE);
__ bind(done);
}
@@ -139,6 +151,8 @@ void BarrierSetAssembler::nmethod_entry_barrier(MacroAssembler* masm, Register t
assert_different_registers(tmp, R0);
+ __ block_comment("nmethod_entry_barrier (nmethod_entry_barrier) {");
+
// Load stub address using toc (fixed instruction size, unlike load_const_optimized)
__ calculate_address_from_global_toc(tmp, StubRoutines::ppc::nmethod_entry_barrier(),
true, true, false); // 2 instructions
@@ -155,6 +169,8 @@ void BarrierSetAssembler::nmethod_entry_barrier(MacroAssembler* masm, Register t
// Oops may have been changed; exploiting isync semantics (used as acquire) to make those updates observable.
__ isync();
+
+ __ block_comment("} nmethod_entry_barrier (nmethod_entry_barrier)");
}
void BarrierSetAssembler::c2i_entry_barrier(MacroAssembler *masm, Register tmp1, Register tmp2, Register tmp3) {
@@ -165,6 +181,8 @@ void BarrierSetAssembler::c2i_entry_barrier(MacroAssembler *masm, Register tmp1,
assert_different_registers(tmp1, tmp2, tmp3);
+ __ block_comment("c2i_entry_barrier (c2i_entry_barrier) {");
+
Register tmp1_class_loader_data = tmp1;
Label bad_call, skip_barrier;
@@ -178,7 +196,7 @@ void BarrierSetAssembler::c2i_entry_barrier(MacroAssembler *masm, Register tmp1,
__ ld(tmp1_class_loader_data, in_bytes(InstanceKlass::class_loader_data_offset()), tmp1);
// Fast path: If class loader is strong, the holder cannot be unloaded.
- __ ld(tmp2, in_bytes(ClassLoaderData::keep_alive_offset()), tmp1_class_loader_data);
+ __ lwz(tmp2, in_bytes(ClassLoaderData::keep_alive_offset()), tmp1_class_loader_data);
__ cmpdi(CCR0, tmp2, 0);
__ bne(CCR0, skip_barrier);
@@ -195,4 +213,6 @@ void BarrierSetAssembler::c2i_entry_barrier(MacroAssembler *masm, Register tmp1,
__ bctr();
__ bind(skip_barrier);
+
+ __ block_comment("} c2i_entry_barrier (c2i_entry_barrier)");
}
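The generic `resolve_jobject` now distinguishes jweak handles by their tag bit and routes them through a phantom-reference load. A sketch of the tagging scheme (constants mirror `JNIHandles` but are illustrative, and the GC barrier dispatch is elided):

```cpp
// jobject tagging: the low bit marks a jweak handle; clearing it yields the
// address of the handle slot. Barrier selection is only hinted at here.
#include <cstdint>

const uintptr_t weak_tag_mask = 1;

bool     is_weak(uintptr_t handle) { return (handle & weak_tag_mask) != 0; }
uintptr_t untag(uintptr_t handle)  { return handle & ~weak_tag_mask; }

uintptr_t resolve(uintptr_t handle) {
  if (handle == 0) return 0;                 // NULL passes through as-is
  uintptr_t* slot = (uintptr_t*)untag(handle);
  // A real VM loads with ON_PHANTOM_OOP_REF decorators when is_weak(handle);
  // this sketch shows only the tag dispatch.
  return *slot;
}
```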
diff --git a/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.cpp
index ed66c5f892918..1d1f923108f2a 100644
--- a/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.cpp
@@ -26,6 +26,7 @@
#include "precompiled.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "gc/shared/modRefBarrierSetAssembler.hpp"
+#include "runtime/jniHandles.hpp"
#define __ masm->
@@ -74,3 +75,17 @@ void ModRefBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet deco
preservation_level);
}
}
+
+void ModRefBarrierSetAssembler::resolve_jobject(MacroAssembler* masm, Register value,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level) {
+ Label done;
+ __ cmpdi(CCR0, value, 0);
+ __ beq(CCR0, done); // Use NULL as-is.
+
+ __ clrrdi(tmp1, value, JNIHandles::weak_tag_size);
+ __ ld(value, 0, tmp1); // Resolve (untagged) jobject.
+
+ __ verify_oop(value, FILE_AND_LINE);
+ __ bind(done);
+}
diff --git a/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.hpp b/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.hpp
index eec826212803c..5d105f6c0484f 100644
--- a/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/gc/shared/modRefBarrierSetAssembler_ppc.hpp
@@ -57,6 +57,10 @@ class ModRefBarrierSetAssembler: public BarrierSetAssembler {
Register base, RegisterOrConstant ind_or_offs, Register val,
Register tmp1, Register tmp2, Register tmp3,
MacroAssembler::PreservationLevel preservation_level);
+
+ virtual void resolve_jobject(MacroAssembler* masm, Register value,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level);
};
#endif // CPU_PPC_GC_SHARED_MODREFBARRIERSETASSEMBLER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/gc/shenandoah/c1/shenandoahBarrierSetC1_ppc.cpp b/src/hotspot/cpu/ppc/gc/shenandoah/c1/shenandoahBarrierSetC1_ppc.cpp
new file mode 100644
index 0000000000000..fc06e1b71e0b8
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/shenandoah/c1/shenandoahBarrierSetC1_ppc.cpp
@@ -0,0 +1,162 @@
+/*
+ * Copyright (c) 2018, 2021, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2012, 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+#include "asm/macroAssembler.inline.hpp"
+#include "c1/c1_LIRAssembler.hpp"
+#include "c1/c1_MacroAssembler.hpp"
+#include "gc/shenandoah/shenandoahBarrierSet.hpp"
+#include "gc/shenandoah/shenandoahBarrierSetAssembler.hpp"
+#include "gc/shenandoah/c1/shenandoahBarrierSetC1.hpp"
+
+#define __ masm->masm()->
+
+void LIR_OpShenandoahCompareAndSwap::emit_code(LIR_Assembler *masm) {
+  __ block_comment("LIR_OpShenandoahCompareAndSwap (shenandoahgc) {");
+
+ Register addr = _addr->as_register_lo();
+ Register new_val = _new_value->as_register();
+ Register cmp_val = _cmp_value->as_register();
+ Register tmp1 = _tmp1->as_register();
+ Register tmp2 = _tmp2->as_register();
+ Register result = result_opr()->as_register();
+
+ if (ShenandoahIUBarrier) {
+ ShenandoahBarrierSet::assembler()->iu_barrier(masm->masm(), new_val, tmp1, tmp2,
+ MacroAssembler::PRESERVATION_FRAME_LR_GP_FP_REGS);
+ }
+
+ if (UseCompressedOops) {
+ __ encode_heap_oop(cmp_val, cmp_val);
+ __ encode_heap_oop(new_val, new_val);
+ }
+
+ // Due to the memory barriers emitted in ShenandoahBarrierSetC1::atomic_cmpxchg_at_resolved,
+ // there is no need to specify stronger memory semantics.
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(masm->masm(), addr, cmp_val, new_val, tmp1, tmp2,
+ false, result);
+
+ if (UseCompressedOops) {
+ __ decode_heap_oop(cmp_val);
+ __ decode_heap_oop(new_val);
+ }
+
+  __ block_comment("} LIR_OpShenandoahCompareAndSwap (shenandoahgc)");
+}
+
+#undef __
+
+#ifdef ASSERT
+#define __ gen->lir(__FILE__, __LINE__)->
+#else
+#define __ gen->lir()->
+#endif
+
+LIR_Opr ShenandoahBarrierSetC1::atomic_cmpxchg_at_resolved(LIRAccess &access, LIRItem &cmp_value, LIRItem &new_value) {
+ BasicType bt = access.type();
+
+ if (access.is_oop()) {
+ LIRGenerator* gen = access.gen();
+
+ if (ShenandoahCASBarrier) {
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ membar();
+ } else {
+ __ membar_release();
+ }
+ }
+
+ if (ShenandoahSATBBarrier) {
+ pre_barrier(gen, access.access_emit_info(), access.decorators(), access.resolved_addr(),
+ LIR_OprFact::illegalOpr);
+ }
+
+ if (ShenandoahCASBarrier) {
+ cmp_value.load_item();
+ new_value.load_item();
+
+ LIR_Opr t1 = gen->new_register(T_OBJECT);
+ LIR_Opr t2 = gen->new_register(T_OBJECT);
+ LIR_Opr addr = access.resolved_addr()->as_address_ptr()->base();
+ LIR_Opr result = gen->new_register(T_INT);
+
+ __ append(new LIR_OpShenandoahCompareAndSwap(addr, cmp_value.result(), new_value.result(), t1, t2, result));
+
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ membar_acquire();
+ } else {
+ __ membar();
+ }
+
+ return result;
+ }
+ }
+
+ return BarrierSetC1::atomic_cmpxchg_at_resolved(access, cmp_value, new_value);
+}
+
+LIR_Opr ShenandoahBarrierSetC1::atomic_xchg_at_resolved(LIRAccess &access, LIRItem &value) {
+ LIRGenerator* gen = access.gen();
+ BasicType type = access.type();
+
+ LIR_Opr result = gen->new_register(type);
+ value.load_item();
+ LIR_Opr value_opr = value.result();
+
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ membar();
+ } else {
+ __ membar_release();
+ }
+
+ if (access.is_oop()) {
+ value_opr = iu_barrier(access.gen(), value_opr, access.access_emit_info(), access.decorators());
+ }
+
+ assert(type == T_INT || is_reference_type(type) LP64_ONLY( || type == T_LONG ), "unexpected type");
+ LIR_Opr tmp_xchg = gen->new_register(T_INT);
+ __ xchg(access.resolved_addr(), value_opr, result, tmp_xchg);
+
+ if (access.is_oop()) {
+ result = load_reference_barrier_impl(access.gen(), result, LIR_OprFact::addressConst(0),
+ access.decorators());
+
+ LIR_Opr tmp_barrier = gen->new_register(type);
+ __ move(result, tmp_barrier);
+ result = tmp_barrier;
+
+ if (ShenandoahSATBBarrier) {
+ pre_barrier(access.gen(), access.access_emit_info(), access.decorators(), LIR_OprFact::illegalOpr, result);
+ }
+ }
+
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ membar_acquire();
+ } else {
+ __ membar();
+ }
+
+ return result;
+}
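The membars emitted around the C1 atomics amount to a leading release (or full) fence and a trailing acquire (or full) fence, with the full fences used when `support_IRIW_for_not_multiple_copy_atomic_cpu` is set. Roughly, in `std::atomic` terms (a sketch, not the LIR emitted here):

```cpp
// Fence placement around an atomic exchange, mirroring the membar pattern in
// atomic_xchg_at_resolved. 'iriw_unsafe' stands in for the platform property.
#include <atomic>

template <typename T>
T xchg_with_barriers(std::atomic<T>& slot, T new_val, bool iriw_unsafe) {
  std::atomic_thread_fence(iriw_unsafe ? std::memory_order_seq_cst
                                       : std::memory_order_release);
  T old = slot.exchange(new_val, std::memory_order_relaxed);
  std::atomic_thread_fence(iriw_unsafe ? std::memory_order_acquire
                                       : std::memory_order_seq_cst);
  return old;
}
```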
diff --git a/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.cpp
new file mode 100644
index 0000000000000..5065c4cac0fa4
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.cpp
@@ -0,0 +1,1012 @@
+/*
+ * Copyright (c) 2018, 2021, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2012, 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+#include "asm/macroAssembler.inline.hpp"
+#include "gc/shared/gcArguments.hpp"
+#include "gc/shared/gc_globals.hpp"
+#include "macroAssembler_ppc.hpp"
+#include "gc/shenandoah/shenandoahBarrierSet.hpp"
+#include "gc/shenandoah/shenandoahBarrierSetAssembler.hpp"
+#include "gc/shenandoah/shenandoahForwarding.hpp"
+#include "gc/shenandoah/shenandoahHeap.hpp"
+#include "gc/shenandoah/shenandoahHeap.inline.hpp"
+#include "gc/shenandoah/shenandoahHeapRegion.hpp"
+#include "gc/shenandoah/shenandoahRuntime.hpp"
+#include "gc/shenandoah/shenandoahThreadLocalData.hpp"
+#include "gc/shenandoah/heuristics/shenandoahHeuristics.hpp"
+#include "interpreter/interpreter.hpp"
+#include "runtime/sharedRuntime.hpp"
+#include "runtime/thread.hpp"
+#include "utilities/globalDefinitions.hpp"
+#include "vm_version_ppc.hpp"
+
+#ifdef COMPILER1
+
+#include "c1/c1_LIRAssembler.hpp"
+#include "c1/c1_MacroAssembler.hpp"
+#include "gc/shenandoah/c1/shenandoahBarrierSetC1.hpp"
+
+#endif
+
+#define __ masm->
+
+void ShenandoahBarrierSetAssembler::satb_write_barrier(MacroAssembler *masm,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level) {
+ if (ShenandoahSATBBarrier) {
+ __ block_comment("satb_write_barrier (shenandoahgc) {");
+ satb_write_barrier_impl(masm, 0, base, ind_or_offs, tmp1, tmp2, tmp3, preservation_level);
+ __ block_comment("} satb_write_barrier (shenandoahgc)");
+ }
+}
+
+void ShenandoahBarrierSetAssembler::iu_barrier(MacroAssembler *masm,
+ Register val,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level,
+ DecoratorSet decorators) {
+ // IU barriers are also employed to avoid resurrection of weak references,
+ // even if Shenandoah does not operate in incremental update mode.
+ if (ShenandoahIUBarrier || ShenandoahSATBBarrier) {
+ __ block_comment("iu_barrier (shenandoahgc) {");
+ satb_write_barrier_impl(masm, decorators, noreg, noreg, val, tmp1, tmp2, preservation_level);
+ __ block_comment("} iu_barrier (shenandoahgc)");
+ }
+}
+
+void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler *masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level) {
+ if (ShenandoahLoadRefBarrier) {
+ __ block_comment("load_reference_barrier (shenandoahgc) {");
+ load_reference_barrier_impl(masm, decorators, base, ind_or_offs, dst, tmp1, tmp2, preservation_level);
+ __ block_comment("} load_reference_barrier (shenandoahgc)");
+ }
+}
+
+void ShenandoahBarrierSetAssembler::arraycopy_prologue(MacroAssembler *masm, DecoratorSet decorators, BasicType type,
+ Register src, Register dst, Register count,
+ Register preserve1, Register preserve2) {
+ __ block_comment("arraycopy_prologue (shenandoahgc) {");
+
+ Register R11_tmp = R11_scratch1;
+
+ assert_different_registers(src, dst, count, R11_tmp, noreg);
+ if (preserve1 != noreg) {
+ // Technically not required, but likely to indicate an error.
+ assert_different_registers(preserve1, preserve2);
+ }
+
+ /* ==== Check whether barrier is required (optimizations) ==== */
+ // Fast path: Component type of array is not a reference type.
+ if (!is_reference_type(type)) {
+ return;
+ }
+
+ bool dest_uninitialized = (decorators & IS_DEST_UNINITIALIZED) != 0;
+
+  // Fast path: No barrier is required if every barrier type is either disabled or would not record
+  // any useful information.
+ if ((!ShenandoahSATBBarrier || dest_uninitialized) && !ShenandoahIUBarrier && !ShenandoahLoadRefBarrier) {
+ return;
+ }
+
+ Label skip_prologue;
+
+ // Fast path: Array is of length zero.
+ __ cmpdi(CCR0, count, 0);
+ __ beq(CCR0, skip_prologue);
+
+ /* ==== Check whether barrier is required (gc state) ==== */
+ __ lbz(R11_tmp, in_bytes(ShenandoahThreadLocalData::gc_state_offset()),
+ R16_thread);
+
+ // The set of garbage collection states requiring barriers depends on the available barrier types and the
+ // type of the reference in question.
+  // For instance, SATB barriers may be skipped if it is certain that the overwritten values are not relevant
+  // to the garbage collector.
+ const int required_states = ShenandoahSATBBarrier && dest_uninitialized
+ ? ShenandoahHeap::HAS_FORWARDED
+ : ShenandoahHeap::HAS_FORWARDED | ShenandoahHeap::MARKING;
+
+ __ andi_(R11_tmp, R11_tmp, required_states);
+ __ beq(CCR0, skip_prologue);
+
+ /* ==== Invoke runtime ==== */
+ // Save to-be-preserved registers.
+ int highest_preserve_register_index = 0;
+ {
+ if (preserve1 != noreg && preserve1->is_volatile()) {
+ __ std(preserve1, -BytesPerWord * ++highest_preserve_register_index, R1_SP);
+ }
+ if (preserve2 != noreg && preserve2 != preserve1 && preserve2->is_volatile()) {
+ __ std(preserve2, -BytesPerWord * ++highest_preserve_register_index, R1_SP);
+ }
+
+ __ std(src, -BytesPerWord * ++highest_preserve_register_index, R1_SP);
+ __ std(dst, -BytesPerWord * ++highest_preserve_register_index, R1_SP);
+ __ std(count, -BytesPerWord * ++highest_preserve_register_index, R1_SP);
+
+ __ save_LR_CR(R11_tmp);
+ __ push_frame_reg_args(-BytesPerWord * highest_preserve_register_index,
+ R11_tmp);
+ }
+
+ // Invoke runtime.
+ address jrt_address = NULL;
+ if (UseCompressedOops) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::arraycopy_barrier_narrow_oop_entry);
+ } else {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::arraycopy_barrier_oop_entry);
+ }
+ assert(jrt_address != nullptr, "jrt routine cannot be found");
+
+ __ call_VM_leaf(jrt_address, src, dst, count);
+
+ // Restore to-be-preserved registers.
+ {
+ __ pop_frame();
+ __ restore_LR_CR(R11_tmp);
+
+ __ ld(count, -BytesPerWord * highest_preserve_register_index--, R1_SP);
+ __ ld(dst, -BytesPerWord * highest_preserve_register_index--, R1_SP);
+ __ ld(src, -BytesPerWord * highest_preserve_register_index--, R1_SP);
+
+ if (preserve2 != noreg && preserve2 != preserve1 && preserve2->is_volatile()) {
+ __ ld(preserve2, -BytesPerWord * highest_preserve_register_index--, R1_SP);
+ }
+ if (preserve1 != noreg && preserve1->is_volatile()) {
+ __ ld(preserve1, -BytesPerWord * highest_preserve_register_index--, R1_SP);
+ }
+ }
+
+ __ bind(skip_prologue);
+ __ block_comment("} arraycopy_prologue (shenandoahgc)");
+}
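Stripped of register bookkeeping, the prologue is a series of fast-path exits followed by a runtime hand-off. A minimal sketch with hypothetical names standing in for the `ShenandoahRuntime` entries:

```cpp
// Decision logic of arraycopy_prologue: bail out whenever the copy cannot
// need a barrier, otherwise hand src/dst/count to the GC runtime.
#include <cstddef>

// Stand-in for ShenandoahRuntime::arraycopy_barrier_*_entry.
static void arraycopy_barrier_runtime(void*, void*, size_t) { /* GC work */ }

void arraycopy_prologue(void* src, void* dst, size_t count,
                        bool is_reference_array, int gc_state, int required_states) {
  if (!is_reference_array) return;                // primitive arrays need no barrier
  if (count == 0) return;                         // empty copy: nothing to record
  if ((gc_state & required_states) == 0) return;  // no active phase needs this barrier
  arraycopy_barrier_runtime(src, dst, count);     // slow path: hand off to the GC
}
```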
+
+// The to-be-enqueued value can either be determined
+// - dynamically by passing the reference's address information (load mode) or
+// - statically by passing a register the value is stored in (preloaded mode)
+// - for performance optimizations in cases where the previous value is known (currently not implemented) and
+// - for incremental-update barriers.
+//
+// decorators: The previous value's decorator set.
+// In "load mode", the value must equal '0'.
+// base: Base register of the reference's address (load mode).
+// In "preloaded mode", the register must equal 'noreg'.
+// ind_or_offs: Index or offset of the reference's address (load mode).
+// If 'base' equals 'noreg' (preloaded mode), the passed value is ignored.
+// pre_val: Register holding the to-be-stored value (preloaded mode).
+// In "load mode", this register acts as a temporary register and must
+//              thus not be 'noreg'. In "preloaded mode", its contents are preserved.
+// tmp1/tmp2: Temporary registers, one of which must be non-volatile in "preloaded mode".
+void ShenandoahBarrierSetAssembler::satb_write_barrier_impl(MacroAssembler *masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register pre_val,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level) {
+ assert_different_registers(tmp1, tmp2, pre_val, noreg);
+
+ Label skip_barrier;
+
+ /* ==== Determine necessary runtime invocation preservation measures ==== */
+ const bool needs_frame = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR;
+ const bool preserve_gp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_REGS;
+ const bool preserve_fp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_FP_REGS;
+
+ // Check whether marking is active.
+ __ lbz(tmp1, in_bytes(ShenandoahThreadLocalData::gc_state_offset()), R16_thread);
+
+ __ andi_(tmp1, tmp1, ShenandoahHeap::MARKING);
+ __ beq(CCR0, skip_barrier);
+
+ /* ==== Determine the reference's previous value ==== */
+ bool preloaded_mode = base == noreg;
+ Register pre_val_save = noreg;
+
+ if (preloaded_mode) {
+    // The previous value has been passed to the method, so it need not be determined manually.
+ // In case 'pre_val' is a volatile register, it must be saved across the C-call
+ // as callers may depend on its value.
+    // Unless the general-purpose registers are saved anyway, one of the temporary registers
+    // (i.e., 'tmp1' or 'tmp2') is used to preserve 'pre_val'.
+ if (!preserve_gp_registers && pre_val->is_volatile()) {
+ pre_val_save = !tmp1->is_volatile() ? tmp1 : tmp2;
+ assert(!pre_val_save->is_volatile(), "at least one of the temporary registers must be non-volatile");
+ }
+
+ if ((decorators & IS_NOT_NULL) != 0) {
+#ifdef ASSERT
+ __ cmpdi(CCR0, pre_val, 0);
+ __ asm_assert_ne("null oop is not allowed");
+#endif // ASSERT
+ } else {
+ __ cmpdi(CCR0, pre_val, 0);
+ __ beq(CCR0, skip_barrier);
+ }
+ } else {
+    // Load from the reference address to determine the reference's current value (before the store is performed).
+ // Contrary to the given value in "preloaded mode", it is not necessary to preserve it.
+ assert(decorators == 0, "decorator set must be empty");
+ assert(base != noreg, "base must be a register");
+ assert(!ind_or_offs.is_register() || ind_or_offs.as_register() != noreg, "ind_or_offs must be a register");
+ if (UseCompressedOops) {
+ __ lwz(pre_val, ind_or_offs, base);
+ } else {
+ __ ld(pre_val, ind_or_offs, base);
+ }
+
+ __ cmpdi(CCR0, pre_val, 0);
+ __ beq(CCR0, skip_barrier);
+
+ if (UseCompressedOops) {
+ __ decode_heap_oop_not_null(pre_val);
+ }
+ }
+
+ /* ==== Try to enqueue the to-be-stored value directly into thread's local SATB mark queue ==== */
+ {
+ Label runtime;
+ Register Rbuffer = tmp1, Rindex = tmp2;
+
+ // Check whether the queue has enough capacity to store another oop.
+ // If not, jump to the runtime to commit the buffer and to allocate a new one.
+ // (The buffer's index corresponds to the amount of remaining free space.)
+ __ ld(Rindex, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()), R16_thread);
+ __ cmpdi(CCR0, Rindex, 0);
+ __ beq(CCR0, runtime); // If index == 0 (buffer is full), goto runtime.
+
+ // Capacity suffices. Decrement the queue's size by the size of one oop.
+ // (The buffer is filled contrary to the heap's growing direction, i.e., it is filled downwards.)
+ __ addi(Rindex, Rindex, -wordSize);
+ __ std(Rindex, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()), R16_thread);
+
+ // Enqueue the previous value and skip the invocation of the runtime.
+ __ ld(Rbuffer, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_buffer_offset()), R16_thread);
+ __ stdx(pre_val, Rbuffer, Rindex);
+ __ b(skip_barrier);
+
+ __ bind(runtime);
+ }
+
+ /* ==== Invoke runtime to commit SATB mark queue to gc and allocate a new buffer ==== */
+ // Save to-be-preserved registers.
+ int nbytes_save = 0;
+
+ if (needs_frame) {
+ if (preserve_gp_registers) {
+ nbytes_save = (preserve_fp_registers
+ ? MacroAssembler::num_volatile_gp_regs + MacroAssembler::num_volatile_fp_regs
+ : MacroAssembler::num_volatile_gp_regs) * BytesPerWord;
+ __ save_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers);
+ }
+
+ __ save_LR_CR(tmp1);
+ __ push_frame_reg_args(nbytes_save, tmp2);
+ }
+
+ if (!preserve_gp_registers && preloaded_mode && pre_val->is_volatile()) {
+ assert(pre_val_save != noreg, "nv_save must not be noreg");
+
+ // 'pre_val' register must be saved manually unless general-purpose are preserved in general.
+ __ mr(pre_val_save, pre_val);
+ }
+
+ // Invoke runtime.
+ __ call_VM_leaf(CAST_FROM_FN_PTR(address, ShenandoahRuntime::write_ref_field_pre_entry), pre_val, R16_thread);
+
+ // Restore to-be-preserved registers.
+ if (!preserve_gp_registers && preloaded_mode && pre_val->is_volatile()) {
+ __ mr(pre_val, pre_val_save);
+ }
+
+ if (needs_frame) {
+ __ pop_frame();
+ __ restore_LR_CR(tmp1);
+
+ if (preserve_gp_registers) {
+ __ restore_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers);
+ }
+ }
+
+ __ bind(skip_barrier);
+}
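The SATB fast path above enqueues into a per-thread buffer whose index counts the remaining free space and is filled downwards; the runtime is invoked only to flush a full buffer. The same logic as plain C++ (field names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

struct SATBQueue {
  size_t index;       // remaining free space in bytes (0 means the buffer is full)
  uintptr_t* buffer;  // active buffer, filled downwards
};

// Stand-in for the runtime call that flushes the buffer and retries.
static void satb_flush_and_enqueue(SATBQueue&, uintptr_t) { /* runtime work */ }

void satb_enqueue(SATBQueue& q, uintptr_t pre_val) {
  if (pre_val == 0) return;             // null previous values are never recorded
  if (q.index == 0) {                   // buffer full: take the slow path
    satb_flush_and_enqueue(q, pre_val);
    return;
  }
  q.index -= sizeof(uintptr_t);         // reserve one slot, growing downwards
  q.buffer[q.index / sizeof(uintptr_t)] = pre_val;
}
```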
+
+void ShenandoahBarrierSetAssembler::resolve_forward_pointer_not_null(MacroAssembler *masm, Register dst, Register tmp) {
+ __ block_comment("resolve_forward_pointer_not_null (shenandoahgc) {");
+
+ Register tmp1 = tmp,
+ R0_tmp2 = R0;
+ assert_different_registers(dst, tmp1, R0_tmp2, noreg);
+
+ // If the object has been evacuated, the mark word layout is as follows:
+ // | forwarding pointer (62-bit) | '11' (2-bit) |
+
+ // The invariant that stack/thread pointers have the lowest two bits cleared permits retrieving
+  // the forwarding pointer solely by inverting the lowest two bits.
+  // This invariant follows inevitably from HotSpot's minimal alignment.
+ assert(markWord::marked_value <= (unsigned long) MinObjAlignmentInBytes,
+ "marked value must not be higher than hotspot's minimal alignment");
+
+ Label done;
+
+ // Load the object's mark word.
+ __ ld(tmp1, oopDesc::mark_offset_in_bytes(), dst);
+
+ // Load the bit mask for the lock bits.
+ __ li(R0_tmp2, markWord::lock_mask_in_place);
+
+ // Check whether all bits matching the bit mask are set.
+ // If that is the case, the object has been evacuated and the most significant bits form the forward pointer.
+ __ andc_(R0_tmp2, R0_tmp2, tmp1);
+
+ assert(markWord::lock_mask_in_place == markWord::marked_value,
+ "marked value must equal the value obtained when all lock bits are being set");
+ if (VM_Version::has_isel()) {
+ __ xori(tmp1, tmp1, markWord::lock_mask_in_place);
+ __ isel(dst, CCR0, Assembler::equal, false, tmp1);
+ } else {
+ __ bne(CCR0, done);
+ __ xori(dst, tmp1, markWord::lock_mask_in_place);
+ }
+
+ __ bind(done);
+ __ block_comment("} resolve_forward_pointer_not_null (shenandoahgc)");
+}
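Decoding a forwarded object is then just a matter of recognizing the `11` tag in the lock bits and stripping it, which the `xori`/`isel` pair above does branchlessly when possible. As a sketch:

```cpp
// Forwarded mark word layout: | forwarding pointer (62-bit) | '11' (2-bit) |.
// When both lock bits are set, XOR-ing them away recovers the to-space address.
#include <cstdint>

const uintptr_t lock_mask = 0b11;  // stands in for markWord::lock_mask_in_place

uintptr_t resolve_forwardee(uintptr_t obj, uintptr_t mark_word) {
  if ((mark_word & lock_mask) == lock_mask) {  // evacuated: mark is fwdptr | 11
    return mark_word ^ lock_mask;              // strip the tag bits
  }
  return obj;                                  // not forwarded
}
```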
+
+// base: Base register of the reference's address.
+// ind_or_offs: Index or offset of the reference's address (load mode).
+// dst: Reference's address. In case the object has been evacuated, this is the to-space version
+// of that object.
+void ShenandoahBarrierSetAssembler::load_reference_barrier_impl(
+ MacroAssembler *masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level) {
+ if (ind_or_offs.is_register()) {
+ assert_different_registers(tmp1, tmp2, base, ind_or_offs.as_register(), dst, noreg);
+ } else {
+ assert_different_registers(tmp1, tmp2, base, dst, noreg);
+ }
+
+ Label skip_barrier;
+
+ bool is_strong = ShenandoahBarrierSet::is_strong_access(decorators);
+ bool is_weak = ShenandoahBarrierSet::is_weak_access(decorators);
+ bool is_phantom = ShenandoahBarrierSet::is_phantom_access(decorators);
+ bool is_native = ShenandoahBarrierSet::is_native_access(decorators);
+ bool is_narrow = UseCompressedOops && !is_native;
+
+ /* ==== Check whether heap is stable ==== */
+ __ lbz(tmp2, in_bytes(ShenandoahThreadLocalData::gc_state_offset()), R16_thread);
+
+ if (is_strong) {
+ // For strong references, the heap is considered stable if "has forwarded" is not active.
+ __ andi_(tmp1, tmp2, ShenandoahHeap::HAS_FORWARDED | ShenandoahHeap::EVACUATION);
+ __ beq(CCR0, skip_barrier);
+#ifdef ASSERT
+ // "evacuation" -> (implies) "has forwarded". If we reach this code, "has forwarded" must thus be set.
+ __ andi_(tmp1, tmp1, ShenandoahHeap::HAS_FORWARDED);
+ __ asm_assert_ne("'has forwarded' is missing");
+#endif // ASSERT
+ } else {
+ // For all non-strong references, the heap is considered stable if not any of "has forwarded",
+ // "root set processing", and "weak reference processing" is active.
+ // The additional phase conditions are in place to avoid the resurrection of weak references (see JDK-8266440).
+ Label skip_fastpath;
+ __ andi_(tmp1, tmp2, ShenandoahHeap::WEAK_ROOTS);
+ __ bne(CCR0, skip_fastpath);
+
+ __ andi_(tmp1, tmp2, ShenandoahHeap::HAS_FORWARDED | ShenandoahHeap::EVACUATION);
+ __ beq(CCR0, skip_barrier);
+#ifdef ASSERT
+ // "evacuation" -> (implies) "has forwarded". If we reach this code, "has forwarded" must thus be set.
+ __ andi_(tmp1, tmp1, ShenandoahHeap::HAS_FORWARDED);
+ __ asm_assert_ne("'has forwarded' is missing");
+#endif // ASSERT
+
+ __ bind(skip_fastpath);
+ }
+
+ /* ==== Check whether region is in collection set ==== */
+ if (is_strong) {
+    // Shenandoah stores metadata on regions in a contiguous area of memory in which a single byte corresponds to
+ // an entire region of the shenandoah heap. At present, only the least significant bit is of significance
+ // and indicates whether the region is part of the collection set.
+ //
+ // All regions are of the same size and are always aligned by a power of two.
+ // Any address can thus be shifted by a fixed number of bits to retrieve the address prefix shared by
+ // all objects within that region (region identification bits).
+ //
+ // | unused bits | region identification bits | object identification bits |
+ // (Region size depends on a couple of criteria, such as page size, user-provided arguments and the max heap size.
+ // The number of object identification bits can thus not be determined at compile time.)
+ //
+ // ------------------------------------------------------- <--- cs (collection set) base address
+ // | lost space due to heap space base address -> 'ShenandoahHeap::in_cset_fast_test_addr()'
+ // | (region identification bits contain heap base offset)
+ // |------------------------------------------------------ <--- cs base address + (heap_base >> region size shift)
+    //  | collection set proper                 -> shift: 'region_size_bytes_shift_jint()'
+ // |
+ // |------------------------------------------------------ <--- cs base address + (heap_base >> region size shift)
+ // + number of regions
+ __ load_const_optimized(tmp2, ShenandoahHeap::in_cset_fast_test_addr(), tmp1);
+ __ srdi(tmp1, dst, ShenandoahHeapRegion::region_size_bytes_shift_jint());
+ __ lbzx(tmp2, tmp1, tmp2);
+ __ andi_(tmp2, tmp2, 1);
+ __ beq(CCR0, skip_barrier);
+ }
+
+ /* ==== Invoke runtime ==== */
+ // Save to-be-preserved registers.
+ int nbytes_save = 0;
+
+ const bool needs_frame = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR;
+ const bool preserve_gp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_REGS;
+ const bool preserve_fp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_FP_REGS;
+
+ if (needs_frame) {
+ if (preserve_gp_registers) {
+ nbytes_save = (preserve_fp_registers
+ ? MacroAssembler::num_volatile_gp_regs + MacroAssembler::num_volatile_fp_regs
+ : MacroAssembler::num_volatile_gp_regs) * BytesPerWord;
+ __ save_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers);
+ }
+
+ __ save_LR_CR(tmp1);
+ __ push_frame_reg_args(nbytes_save, tmp1);
+ }
+
+ // Calculate the reference's absolute address.
+ __ add(R4_ARG2, ind_or_offs, base);
+
+ // Invoke runtime.
+ address jrt_address = nullptr;
+
+ if (is_strong) {
+ if (is_narrow) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_strong_narrow);
+ } else {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_strong);
+ }
+ } else if (is_weak) {
+ if (is_narrow) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_weak_narrow);
+ } else {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_weak);
+ }
+ } else {
+ assert(is_phantom, "only remaining strength");
+ assert(!is_narrow, "phantom access cannot be narrow");
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_phantom);
+ }
+ assert(jrt_address != nullptr, "jrt routine cannot be found");
+
+ __ call_VM_leaf(jrt_address, dst /* reference */, R4_ARG2 /* reference address */);
+
+ // Restore to-be-preserved registers.
+ if (preserve_gp_registers) {
+ __ mr(R0, R3_RET);
+ } else {
+ __ mr_if_needed(dst, R3_RET);
+ }
+
+ if (needs_frame) {
+ __ pop_frame();
+ __ restore_LR_CR(tmp1);
+
+ if (preserve_gp_registers) {
+ __ restore_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers);
+ __ mr(dst, R0);
+ }
+ }
+
+ __ bind(skip_barrier);
+}
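The collection-set test relied on above is one byte of region metadata indexed by the address shifted right by the region-size shift. Reduced to a sketch (the map and shift are parameters here; in the VM they come from `ShenandoahHeap::in_cset_fast_test_addr()` and `ShenandoahHeapRegion::region_size_bytes_shift_jint()`):

```cpp
#include <cstdint>

// One metadata byte per region; bit 0 set means "region is in the cset".
bool in_collection_set(const uint8_t* map, unsigned region_size_shift, uintptr_t obj) {
  return (map[obj >> region_size_shift] & 1) != 0;
}
```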
+
+// base: Base register of the reference's address.
+// ind_or_offs: Index or offset of the reference's address.
+// L_handle_null: An optional label that will be jumped to if the reference is null.
+void ShenandoahBarrierSetAssembler::load_at(
+ MacroAssembler *masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level, Label *L_handle_null) {
+  // Registers must not clash, except 'base' and 'dst'.
+ if (ind_or_offs.is_register()) {
+ if (base != noreg) {
+ assert_different_registers(tmp1, tmp2, base, ind_or_offs.register_or_noreg(), R0, noreg);
+ }
+ assert_different_registers(tmp1, tmp2, dst, ind_or_offs.register_or_noreg(), R0, noreg);
+ } else {
+    if (base != noreg) {
+ assert_different_registers(tmp1, tmp2, base, R0, noreg);
+ }
+ assert_different_registers(tmp1, tmp2, dst, R0, noreg);
+ }
+
+ /* ==== Apply load barrier, if required ==== */
+ if (ShenandoahBarrierSet::need_load_reference_barrier(decorators, type)) {
+ assert(is_reference_type(type), "need_load_reference_barrier must check whether type is a reference type");
+
+ // If 'dst' clashes with either 'base' or 'ind_or_offs', use an intermediate result register
+ // to keep the values of those alive until the load reference barrier is applied.
+ Register intermediate_dst = (dst == base || (ind_or_offs.is_register() && dst == ind_or_offs.as_register()))
+ ? tmp2
+ : dst;
+
+ BarrierSetAssembler::load_at(masm, decorators, type,
+ base, ind_or_offs,
+ intermediate_dst,
+ tmp1, noreg,
+ preservation_level, L_handle_null);
+
+ load_reference_barrier(masm, decorators,
+ base, ind_or_offs,
+ intermediate_dst,
+ tmp1, R0,
+ preservation_level);
+
+ __ mr_if_needed(dst, intermediate_dst);
+ } else {
+ BarrierSetAssembler::load_at(masm, decorators, type,
+ base, ind_or_offs,
+ dst,
+ tmp1, tmp2,
+ preservation_level, L_handle_null);
+ }
+
+ /* ==== Apply keep-alive barrier, if required (e.g., to inhibit weak reference resurrection) ==== */
+ if (ShenandoahBarrierSet::need_keep_alive_barrier(decorators, type)) {
+ iu_barrier(masm, dst, tmp1, tmp2, preservation_level);
+ }
+}
+
+// base: Base register of the reference's address.
+// ind_or_offs: Index or offset of the reference's address.
+// val: To-be-stored value/reference's new value.
+void ShenandoahBarrierSetAssembler::store_at(MacroAssembler *masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register val,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level) {
+ if (is_reference_type(type)) {
+ if (ShenandoahSATBBarrier) {
+ satb_write_barrier(masm, base, ind_or_offs, tmp1, tmp2, tmp3, preservation_level);
+ }
+
+ if (ShenandoahIUBarrier && val != noreg) {
+ iu_barrier(masm, val, tmp1, tmp2, preservation_level, decorators);
+ }
+ }
+
+ BarrierSetAssembler::store_at(masm, decorators, type,
+ base, ind_or_offs,
+ val,
+ tmp1, tmp2, tmp3,
+ preservation_level);
+}
+
+void ShenandoahBarrierSetAssembler::try_resolve_jobject_in_native(MacroAssembler *masm,
+ Register dst, Register jni_env, Register obj,
+ Register tmp, Label &slowpath) {
+ __ block_comment("try_resolve_jobject_in_native (shenandoahgc) {");
+
+ assert_different_registers(jni_env, obj, tmp);
+
+ Label done;
+
+ // Fast path: Reference is null (JNI tags are zero for null pointers).
+ __ cmpdi(CCR0, obj, 0);
+ __ beq(CCR0, done);
+
+ // Resolve jobject using standard implementation.
+ BarrierSetAssembler::try_resolve_jobject_in_native(masm, dst, jni_env, obj, tmp, slowpath);
+
+ // Check whether heap is stable.
+ __ lbz(tmp,
+ in_bytes(ShenandoahThreadLocalData::gc_state_offset() - JavaThread::jni_environment_offset()),
+ jni_env);
+
+ __ andi_(tmp, tmp, ShenandoahHeap::EVACUATION | ShenandoahHeap::HAS_FORWARDED);
+ __ bne(CCR0, slowpath);
+
+ __ bind(done);
+ __ block_comment("} try_resolve_jobject_in_native (shenandoahgc)");
+}
+
+// Special shenandoah CAS implementation that handles false negatives due
+// to concurrent evacuation. That is, the CAS operation is intended to succeed in
+// the following scenarios (success criteria):
+// s1) The reference pointer ('base_addr') equals the expected ('expected') pointer.
+// s2) The reference pointer refers to the from-space version of an already-evacuated
+// object, whereas the expected pointer refers to the to-space version of the same object.
+// Situations in which the reference pointer refers to the to-space version of an object
+// and the expected pointer refers to the from-space version of the same object can not occur due to
+// and the expected pointer refers to the from-space version of the same object cannot occur due to
+// Shenandoah's strong to-space invariant. This also implies that the reference stored in 'new_val'
+// cannot refer to the from-space version of an already-evacuated object.
+// To guarantee correct behavior in concurrent environments, two races must be addressed:
+// r1) A concurrent thread may heal the reference pointer (i.e., it is no longer referring to the
+// from-space version but to the to-space version of the object in question).
+// In this case, the CAS operation should succeed.
+// r2) A concurrent thread may mutate the reference (i.e., the reference pointer refers to an entirely different object).
+// In this case, the CAS operation should fail.
+//
+// By default, the value held in the 'result' register is zero to indicate failure of CAS,
+// non-zero to indicate success. If 'is_cae' is set, the result is the most recently fetched
+// value from 'base_addr' rather than a boolean success indicator.
+void ShenandoahBarrierSetAssembler::cmpxchg_oop(MacroAssembler *masm, Register base_addr,
+ Register expected, Register new_val, Register tmp1, Register tmp2,
+ bool is_cae, Register result) {
+ __ block_comment("cmpxchg_oop (shenandoahgc) {");
+
+ assert_different_registers(base_addr, new_val, tmp1, tmp2, result, R0);
+ assert_different_registers(base_addr, expected, tmp1, tmp2, result, R0);
+
+  // A potential clash between 'success_flag' and 'tmp' is accounted for below.
+ Register success_flag = is_cae ? noreg : result,
+ current_value = is_cae ? result : tmp1,
+ tmp = is_cae ? tmp1 : result,
+ initial_value = tmp2;
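+  // Note that in boolean mode ('!is_cae'), 'tmp' and 'success_flag' share the 'result' register:
+  // the flag is re-materialized via 'li(success_flag, 0)' below after 'tmp' has been clobbered.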
+
+ Label done, step_four;
+
+ __ bind(step_four);
+
+ /* ==== Step 1 ("Standard" CAS) ==== */
+ // Fast path: The values stored in 'expected' and 'base_addr' are equal.
+ // Given that 'expected' must refer to the to-space object of an evacuated object (strong to-space invariant),
+ // no special processing is required.
+ if (UseCompressedOops) {
+ __ cmpxchgw(CCR0, current_value, expected, new_val, base_addr, MacroAssembler::MemBarNone,
+ false, success_flag, true);
+ } else {
+ __ cmpxchgd(CCR0, current_value, expected, new_val, base_addr, MacroAssembler::MemBarNone,
+ false, success_flag, NULL, true);
+ }
+
+ // Skip the rest of the barrier if the CAS operation succeeds immediately.
+ // If it does not, the value stored at the address is either the from-space pointer of the
+ // referenced object (success criteria s2)) or simply another object.
+ __ beq(CCR0, done);
+
+ /* ==== Step 2 (Null check) ==== */
+ // The success criteria s2) cannot be matched with a null pointer
+ // (null pointers cannot be subject to concurrent evacuation). The failure of the CAS operation is thus legitimate.
+ __ cmpdi(CCR0, current_value, 0);
+ __ beq(CCR0, done);
+
+ /* ==== Step 3 (reference pointer refers to from-space version; success criteria s2)) ==== */
+ // To check whether the reference pointer refers to the from-space version, the forward
+ // pointer of the object referred to by the reference is resolved and compared against the expected pointer.
+  // If this check succeeds, another CAS operation is issued with the from-space pointer being the expected pointer.
+ //
+ // Save the potential from-space pointer.
+ __ mr(initial_value, current_value);
+
+ // Resolve forward pointer.
+ if (UseCompressedOops) { __ decode_heap_oop_not_null(current_value); }
+ resolve_forward_pointer_not_null(masm, current_value, tmp);
+ if (UseCompressedOops) { __ encode_heap_oop_not_null(current_value); }
+
+ if (!is_cae) {
+    // 'success_flag' was overwritten by the call to 'resolve_forward_pointer_not_null'.
+ // Load zero into register for the potential failure case.
+ __ li(success_flag, 0);
+ }
+ __ cmpd(CCR0, current_value, expected);
+ __ bne(CCR0, done);
+
+ // Discard fetched value as it might be a reference to the from-space version of an object.
+ if (UseCompressedOops) {
+ __ cmpxchgw(CCR0, R0, initial_value, new_val, base_addr, MacroAssembler::MemBarNone,
+ false, success_flag);
+ } else {
+ __ cmpxchgd(CCR0, R0, initial_value, new_val, base_addr, MacroAssembler::MemBarNone,
+ false, success_flag);
+ }
+
+ /* ==== Step 4 (Retry CAS with to-space pointer (success criteria s2) under race r1)) ==== */
+ // The reference pointer could have been healed whilst the previous CAS operation was being performed.
+ // Another CAS operation must thus be issued with the to-space pointer being the expected pointer.
+ // If that CAS operation fails as well, race r2) must have occurred, indicating that
+ // the operation failure is legitimate.
+ //
+  // To keep the code size small and thus improve instruction cache (icache) performance, this highly
+  // unlikely case should be handled by the smallest possible code. Instead of emitting a third,
+ // explicit CAS operation, the code jumps back and reuses the first CAS operation (step 1)
+ // (passed arguments are identical).
+ //
+ // A failure of the CAS operation in step 1 would imply that the overall CAS operation is supposed
+ // to fail. Jumping back to step 1 requires, however, that step 2 and step 3 are re-executed as well.
+ // It is thus important to ensure that a re-execution of those steps does not put program correctness
+ // at risk:
+ // - Step 2: Either terminates in failure (desired result) or falls through to step 3.
+ // - Step 3: Terminates if the comparison between the forwarded, fetched pointer and the expected value
+ // fails. Unless the reference has been updated in the meanwhile once again, this is
+ // guaranteed to be the case.
+ // In case of a concurrent update, the CAS would be retried again. This is legitimate
+ // in terms of program correctness (even though it is not desired).
+ __ bne(CCR0, step_four);
+
+ __ bind(done);
+ __ block_comment("} cmpxchg_oop (shenandoahgc)");
+}
+
+#undef __
+
+#ifdef COMPILER1
+
+#define __ ce->masm()->
+
+void ShenandoahBarrierSetAssembler::gen_pre_barrier_stub(LIR_Assembler *ce, ShenandoahPreBarrierStub *stub) {
+ __ block_comment("gen_pre_barrier_stub (shenandoahgc) {");
+
+ ShenandoahBarrierSetC1 *bs = (ShenandoahBarrierSetC1*) BarrierSet::barrier_set()->barrier_set_c1();
+ __ bind(*stub->entry());
+
+ // GC status has already been verified by 'ShenandoahBarrierSetC1::pre_barrier'.
+ // This stub is the slowpath of that function.
+
+ assert(stub->pre_val()->is_register(), "pre_val must be a register");
+ Register pre_val = stub->pre_val()->as_register();
+
+  // If 'do_load()' returns false, the previous value at the store address is already available in
+  // 'stub->pre_val()' ("preloaded mode" of the store barrier).
+ if (stub->do_load()) {
+ ce->mem2reg(stub->addr(), stub->pre_val(), T_OBJECT, stub->patch_code(), stub->info(), false, false /*unaligned*/);
+ }
+
+ // Fast path: Reference is null.
+ __ cmpdi(CCR0, pre_val, 0);
+ __ bc_far_optimized(Assembler::bcondCRbiIs1_bhintNoHint, __ bi0(CCR0, Assembler::equal), *stub->continuation());
+
+ // Argument passing via the stack.
+ __ std(pre_val, -8, R1_SP);
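+  // This slot is reloaded into R0 by 'generate_c1_pre_barrier_runtime_stub'.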
+
+ __ load_const_optimized(R0, bs->pre_barrier_c1_runtime_code_blob()->code_begin());
+ __ call_stub(R0);
+
+ __ b(*stub->continuation());
+ __ block_comment("} gen_pre_barrier_stub (shenandoahgc)");
+}
+
+void ShenandoahBarrierSetAssembler::gen_load_reference_barrier_stub(LIR_Assembler *ce,
+ ShenandoahLoadReferenceBarrierStub *stub) {
+ __ block_comment("gen_load_reference_barrier_stub (shenandoahgc) {");
+
+ ShenandoahBarrierSetC1 *bs = (ShenandoahBarrierSetC1*) BarrierSet::barrier_set()->barrier_set_c1();
+ __ bind(*stub->entry());
+
+ Register obj = stub->obj()->as_register();
+ Register res = stub->result()->as_register();
+ Register addr = stub->addr()->as_pointer_register();
+ Register tmp1 = stub->tmp1()->as_register();
+ Register tmp2 = stub->tmp2()->as_register();
+ assert_different_registers(addr, res, tmp1, tmp2);
+
+#ifdef ASSERT
+ // Ensure that 'res' is 'R3_ARG1' and contains the same value as 'obj' to reduce the number of required
+ // copy instructions.
+ assert(R3_RET == res, "res must be r3");
+ __ cmpd(CCR0, res, obj);
+ __ asm_assert_eq("result register must contain the reference stored in obj");
+#endif
+
+ DecoratorSet decorators = stub->decorators();
+
+ /* ==== Check whether region is in collection set ==== */
+ // GC status (unstable) has already been verified by 'ShenandoahBarrierSetC1::load_reference_barrier_impl'.
+ // This stub is the slowpath of that function.
+
+ bool is_strong = ShenandoahBarrierSet::is_strong_access(decorators);
+ bool is_weak = ShenandoahBarrierSet::is_weak_access(decorators);
+ bool is_phantom = ShenandoahBarrierSet::is_phantom_access(decorators);
+ bool is_native = ShenandoahBarrierSet::is_native_access(decorators);
+
+ if (is_strong) {
+ // Check whether object is in collection set.
+ __ load_const_optimized(tmp2, ShenandoahHeap::in_cset_fast_test_addr(), tmp1);
+ __ srdi(tmp1, obj, ShenandoahHeapRegion::region_size_bytes_shift_jint());
+ __ lbzx(tmp2, tmp1, tmp2);
+
+ __ andi_(tmp2, tmp2, 1);
+ __ bc_far_optimized(Assembler::bcondCRbiIs1_bhintNoHint, __ bi0(CCR0, Assembler::equal), *stub->continuation());
+ }
+
+ address blob_addr = nullptr;
+
+ if (is_strong) {
+ if (is_native) {
+ blob_addr = bs->load_reference_barrier_strong_native_rt_code_blob()->code_begin();
+ } else {
+ blob_addr = bs->load_reference_barrier_strong_rt_code_blob()->code_begin();
+ }
+ } else if (is_weak) {
+ blob_addr = bs->load_reference_barrier_weak_rt_code_blob()->code_begin();
+ } else {
+ assert(is_phantom, "only remaining strength");
+ blob_addr = bs->load_reference_barrier_phantom_rt_code_blob()->code_begin();
+ }
+
+ assert(blob_addr != nullptr, "code blob cannot be found");
+
+ // Argument passing via the stack. 'obj' is passed implicitly (as asserted above).
+ __ std(addr, -8, R1_SP);
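+  // This slot is reloaded into 'R4_ARG2' by 'generate_c1_load_reference_barrier_runtime_stub'.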
+
+ __ load_const_optimized(tmp1, blob_addr, tmp2);
+ __ call_stub(tmp1);
+
+ // 'res' is 'R3_RET'. The result is thus already in the correct register.
+
+ __ b(*stub->continuation());
+ __ block_comment("} gen_load_reference_barrier_stub (shenandoahgc)");
+}
+
+#undef __
+
+#define __ sasm->
+
+void ShenandoahBarrierSetAssembler::generate_c1_pre_barrier_runtime_stub(StubAssembler *sasm) {
+ __ block_comment("generate_c1_pre_barrier_runtime_stub (shenandoahgc) {");
+
+ Label runtime, skip_barrier;
+ BarrierSet *bs = BarrierSet::barrier_set();
+
+ // Argument passing via the stack.
+ const int caller_stack_slots = 3;
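+  // Covers the three 64-bit spill slots at -8, -16 and -24 relative to the caller's SP (see below).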
+
+ Register R0_pre_val = R0;
+ __ ld(R0, -8, R1_SP);
+ Register R11_tmp1 = R11_scratch1;
+ __ std(R11_tmp1, -16, R1_SP);
+ Register R12_tmp2 = R12_scratch2;
+ __ std(R12_tmp2, -24, R1_SP);
+
+ /* ==== Check whether marking is active ==== */
+ // Even though gc status was checked in 'ShenandoahBarrierSetAssembler::gen_pre_barrier_stub',
+ // another check is required as a safepoint might have been reached in the meantime (JDK-8140588).
+ __ lbz(R12_tmp2, in_bytes(ShenandoahThreadLocalData::gc_state_offset()), R16_thread);
+
+ __ andi_(R12_tmp2, R12_tmp2, ShenandoahHeap::MARKING);
+ __ beq(CCR0, skip_barrier);
+
+ /* ==== Add previous value directly to thread-local SATB mark queue ==== */
+ // Check queue's capacity. Jump to runtime if no free slot is available.
+ __ ld(R12_tmp2, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()), R16_thread);
+ __ cmpdi(CCR0, R12_tmp2, 0);
+ __ beq(CCR0, runtime);
+
+ // Capacity suffices. Decrement the queue's size by one slot (size of one oop).
+ __ addi(R12_tmp2, R12_tmp2, -wordSize);
+ __ std(R12_tmp2, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()), R16_thread);
+
+ // Enqueue the previous value and skip the runtime invocation.
+ __ ld(R11_tmp1, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_buffer_offset()), R16_thread);
+ __ stdx(R0_pre_val, R11_tmp1, R12_tmp2);
+ __ b(skip_barrier);
+
+ __ bind(runtime);
+
+ /* ==== Invoke runtime to commit SATB mark queue to gc and allocate a new buffer ==== */
+ // Save to-be-preserved registers.
+ const int nbytes_save = (MacroAssembler::num_volatile_regs + caller_stack_slots) * BytesPerWord;
+ __ save_volatile_gprs(R1_SP, -nbytes_save);
+ __ save_LR_CR(R11_tmp1);
+ __ push_frame_reg_args(nbytes_save, R11_tmp1);
+
+ // Invoke runtime.
+ __ call_VM_leaf(CAST_FROM_FN_PTR(address, ShenandoahRuntime::write_ref_field_pre_entry), R0_pre_val, R16_thread);
+
+ // Restore to-be-preserved registers.
+ __ pop_frame();
+ __ restore_LR_CR(R11_tmp1);
+ __ restore_volatile_gprs(R1_SP, -nbytes_save);
+
+ __ bind(skip_barrier);
+
+ // Restore spilled registers.
+ __ ld(R11_tmp1, -16, R1_SP);
+ __ ld(R12_tmp2, -24, R1_SP);
+
+ __ blr();
+ __ block_comment("} generate_c1_pre_barrier_runtime_stub (shenandoahgc)");
+}
+
+void ShenandoahBarrierSetAssembler::generate_c1_load_reference_barrier_runtime_stub(StubAssembler *sasm,
+ DecoratorSet decorators) {
+ __ block_comment("generate_c1_load_reference_barrier_runtime_stub (shenandoahgc) {");
+
+ // Argument passing via the stack.
+ const int caller_stack_slots = 1;
+
+ // Save to-be-preserved registers.
+ const int nbytes_save = (MacroAssembler::num_volatile_regs - 1 // 'R3_ARG1' is skipped
+ + caller_stack_slots) * BytesPerWord;
+ __ save_volatile_gprs(R1_SP, -nbytes_save, true, false);
+
+ // Load arguments from stack.
+ // No load required, as assured by assertions in 'ShenandoahBarrierSetAssembler::gen_load_reference_barrier_stub'.
+ Register R3_obj = R3_ARG1;
+ Register R4_load_addr = R4_ARG2;
+ __ ld(R4_load_addr, -8, R1_SP);
+
+ Register R11_tmp = R11_scratch1;
+
+ /* ==== Invoke runtime ==== */
+ bool is_strong = ShenandoahBarrierSet::is_strong_access(decorators);
+ bool is_weak = ShenandoahBarrierSet::is_weak_access(decorators);
+ bool is_phantom = ShenandoahBarrierSet::is_phantom_access(decorators);
+ bool is_native = ShenandoahBarrierSet::is_native_access(decorators);
+
+ address jrt_address = NULL;
+
+ if (is_strong) {
+ if (is_native) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_strong);
+ } else {
+ if (UseCompressedOops) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_strong_narrow);
+ } else {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_strong);
+ }
+ }
+ } else if (is_weak) {
+ assert(!is_native, "weak load reference barrier must not be called off-heap");
+ if (UseCompressedOops) {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_weak_narrow);
+ } else {
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_weak);
+ }
+ } else {
+ assert(is_phantom, "reference type must be phantom");
+ assert(is_native, "phantom load reference barrier must be called off-heap");
+ jrt_address = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_phantom);
+ }
+ assert(jrt_address != NULL, "load reference barrier runtime routine cannot be found");
+
+ __ save_LR_CR(R11_tmp);
+ __ push_frame_reg_args(nbytes_save, R11_tmp);
+
+ // Invoke runtime. Arguments are already stored in the corresponding registers.
+ __ call_VM_leaf(jrt_address, R3_obj, R4_load_addr);
+
+ // Restore to-be-preserved registers.
+ __ pop_frame();
+ __ restore_LR_CR(R11_tmp);
+ __ restore_volatile_gprs(R1_SP, -nbytes_save, true, false); // Skip 'R3_RET' register.
+
+ __ blr();
+ __ block_comment("} generate_c1_load_reference_barrier_runtime_stub (shenandoahgc)");
+}
+
+#undef __
+
+#endif // COMPILER1
diff --git a/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.hpp b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.hpp
new file mode 100644
index 0000000000000..cf55f505b2207
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoahBarrierSetAssembler_ppc.hpp
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) 2018, 2021, Red Hat, Inc. All rights reserved.
+ * Copyright (c) 2012, 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef CPU_PPC_GC_SHENANDOAH_SHENANDOAHBARRIERSETASSEMBLER_PPC_HPP
+#define CPU_PPC_GC_SHENANDOAH_SHENANDOAHBARRIERSETASSEMBLER_PPC_HPP
+
+#include "asm/macroAssembler.hpp"
+#include "gc/shared/barrierSetAssembler.hpp"
+#include "gc/shenandoah/shenandoahBarrierSet.hpp"
+
+#ifdef COMPILER1
+
+class LIR_Assembler;
+class ShenandoahPreBarrierStub;
+class ShenandoahLoadReferenceBarrierStub;
+class StubAssembler;
+
+#endif
+
+class StubCodeGenerator;
+
+class ShenandoahBarrierSetAssembler: public BarrierSetAssembler {
+private:
+
+ /* ==== Actual barrier implementations ==== */
+ void satb_write_barrier_impl(MacroAssembler* masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register pre_val,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level);
+
+ void load_reference_barrier_impl(MacroAssembler* masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level);
+
+ /* ==== Helper methods for barrier implementations ==== */
+ void resolve_forward_pointer_not_null(MacroAssembler* masm, Register dst, Register tmp);
+
+public:
+
+ /* ==== C1 stubs ==== */
+#ifdef COMPILER1
+
+ void gen_pre_barrier_stub(LIR_Assembler* ce, ShenandoahPreBarrierStub* stub);
+
+ void gen_load_reference_barrier_stub(LIR_Assembler* ce, ShenandoahLoadReferenceBarrierStub* stub);
+
+ void generate_c1_pre_barrier_runtime_stub(StubAssembler* sasm);
+
+ void generate_c1_load_reference_barrier_runtime_stub(StubAssembler* sasm, DecoratorSet decorators);
+
+#endif
+
+ /* ==== Available barriers (facades of the actual implementations) ==== */
+ void satb_write_barrier(MacroAssembler* masm,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level);
+
+ void iu_barrier(MacroAssembler* masm,
+ Register val,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level, DecoratorSet decorators = 0);
+
+ void load_reference_barrier(MacroAssembler* masm, DecoratorSet decorators,
+ Register base, RegisterOrConstant ind_or_offs,
+ Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level);
+
+ /* ==== Helper methods used by C1 and C2 ==== */
+ void cmpxchg_oop(MacroAssembler* masm, Register base_addr, Register expected, Register new_val,
+ Register tmp1, Register tmp2,
+ bool is_cae, Register result);
+
+ /* ==== Access api ==== */
+ virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register src, Register dst, Register count, Register preserve1, Register preserve2);
+
+ virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register val,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level);
+
+ virtual void load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level, Label* L_handle_null = NULL);
+
+ virtual void try_resolve_jobject_in_native(MacroAssembler* masm, Register dst, Register jni_env,
+ Register obj, Register tmp, Label& slowpath);
+};
+
+#endif // CPU_PPC_GC_SHENANDOAH_SHENANDOAHBARRIERSETASSEMBLER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/gc/shenandoah/shenandoah_ppc.ad b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoah_ppc.ad
new file mode 100644
index 0000000000000..4825ca9cf81cd
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/shenandoah/shenandoah_ppc.ad
@@ -0,0 +1,217 @@
+//
+// Copyright (c) 2018, 2021, Red Hat, Inc. All rights reserved.
+// Copyright (c) 2012, 2021 SAP SE. All rights reserved.
+// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+//
+// This code is free software; you can redistribute it and/or modify it
+// under the terms of the GNU General Public License version 2 only, as
+// published by the Free Software Foundation.
+//
+// This code is distributed in the hope that it will be useful, but WITHOUT
+// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+// version 2 for more details (a copy is included in the LICENSE file that
+// accompanied this code).
+//
+// You should have received a copy of the GNU General Public License version
+// 2 along with this work; if not, write to the Free Software Foundation,
+// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+//
+// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+// or visit www.oracle.com if you need additional information or have any
+// questions.
+//
+//
+
+source_hpp %{
+#include "gc/shenandoah/shenandoahBarrierSet.hpp"
+#include "gc/shenandoah/shenandoahBarrierSetAssembler.hpp"
+%}
+
+// Weak compareAndSwap operations are treated as strong compareAndSwap operations.
+// This is motivated by the retry logic of ShenandoahBarrierSetAssembler::cmpxchg_oop, which is hard to realize
+// using weak CAS operations.
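+// (A weak CAS is permitted to fail spuriously, which every step of that retry protocol would have to tolerate.)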
+
+instruct compareAndSwapP_shenandoah(iRegIdst res, indirect mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp1, iRegPdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndSwapP mem (Binary oldval newval)));
+ match(Set res (ShenandoahWeakCompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire
+ && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+
+ format %{ "CMPXCHG $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ false, $res$$Register
+ );
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndSwapN_shenandoah(iRegIdst res, indirect mem, iRegNsrc oldval, iRegNsrc newval,
+ iRegNdst tmp1, iRegNdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndSwapN mem (Binary oldval newval)));
+ match(Set res (ShenandoahWeakCompareAndSwapN mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire
+ && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+
+ format %{ "CMPXCHG $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ false, $res$$Register
+ );
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndSwapP_acq_shenandoah(iRegIdst res, indirect mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp1, iRegPdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndSwapP mem (Binary oldval newval)));
+ match(Set res (ShenandoahWeakCompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire
+ || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+
+ format %{ "CMPXCHGD acq $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ false, $res$$Register
+ );
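+    // Acquire semantics: emit the trailing memory barrier.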
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ isync();
+ } else {
+ __ sync();
+ }
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndSwapN_acq_shenandoah(iRegIdst res, indirect mem, iRegNsrc oldval, iRegNsrc newval,
+ iRegNdst tmp1, iRegNdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndSwapN mem (Binary oldval newval)));
+ match(Set res (ShenandoahWeakCompareAndSwapN mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire
+ || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+
+ format %{ "CMPXCHGD acq $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ false, $res$$Register
+ );
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ isync();
+ } else {
+ __ sync();
+ }
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndExchangeP_shenandoah(iRegPdst res, indirect mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp1, iRegPdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndExchangeP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire
+ && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+
+ format %{ "CMPXCHGD $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ true, $res$$Register
+ );
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndExchangeN_shenandoah(iRegNdst res, indirect mem, iRegNsrc oldval, iRegNsrc newval,
+ iRegNdst tmp1, iRegNdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndExchangeN mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire
+ && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+
+ format %{ "CMPXCHGD $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ true, $res$$Register
+ );
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndExchangePAcq_shenandoah(iRegPdst res, indirect mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp1, iRegPdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndExchangeP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire
+ || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+
+ format %{ "CMPXCHGD acq $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ true, $res$$Register
+ );
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ isync();
+ } else {
+ __ sync();
+ }
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct compareAndExchangeNAcq_shenandoah(iRegNdst res, indirect mem, iRegNsrc oldval, iRegNsrc newval,
+ iRegNdst tmp1, iRegNdst tmp2, flagsRegCR0 cr) %{
+ match(Set res (ShenandoahCompareAndExchangeN mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp1, TEMP tmp2, KILL cr);
+
+ predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire
+ || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+
+ format %{ "CMPXCHGD acq $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ ShenandoahBarrierSet::assembler()->cmpxchg_oop(
+ &_masm,
+ $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp1$$Register, $tmp2$$Register,
+ true, $res$$Register
+ );
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ isync();
+ } else {
+ __ sync();
+ }
+ %}
+ ins_pipe(pipe_class_default);
+%}
diff --git a/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp
new file mode 100644
index 0000000000000..26c3bf371f3fe
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp
@@ -0,0 +1,567 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+#include "asm/register.hpp"
+#include "precompiled.hpp"
+#include "asm/macroAssembler.inline.hpp"
+#include "code/codeBlob.hpp"
+#include "code/vmreg.inline.hpp"
+#include "gc/z/zBarrier.inline.hpp"
+#include "gc/z/zBarrierSet.hpp"
+#include "gc/z/zBarrierSetAssembler.hpp"
+#include "gc/z/zBarrierSetRuntime.hpp"
+#include "gc/z/zThreadLocalData.hpp"
+#include "memory/resourceArea.hpp"
+#include "register_ppc.hpp"
+#include "runtime/sharedRuntime.hpp"
+#include "utilities/globalDefinitions.hpp"
+#include "utilities/macros.hpp"
+#ifdef COMPILER1
+#include "c1/c1_LIRAssembler.hpp"
+#include "c1/c1_MacroAssembler.hpp"
+#include "gc/z/c1/zBarrierSetC1.hpp"
+#endif // COMPILER1
+#ifdef COMPILER2
+#include "gc/z/c2/zBarrierSetC2.hpp"
+#endif // COMPILER2
+
+#undef __
+#define __ masm->
+
+void ZBarrierSetAssembler::load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level, Label *L_handle_null) {
+ __ block_comment("load_at (zgc) {");
+
+ // Check whether a special gc barrier is required for this particular load
+ // (e.g. whether it's a reference load or not)
+ if (!ZBarrierSet::barrier_needed(decorators, type)) {
+ BarrierSetAssembler::load_at(masm, decorators, type, base, ind_or_offs, dst,
+ tmp1, tmp2, preservation_level, L_handle_null);
+ return;
+ }
+
+ if (ind_or_offs.is_register()) {
+ assert_different_registers(base, ind_or_offs.as_register(), tmp1, tmp2, R0, noreg);
+ assert_different_registers(dst, ind_or_offs.as_register(), tmp1, tmp2, R0, noreg);
+ } else {
+ assert_different_registers(base, tmp1, tmp2, R0, noreg);
+ assert_different_registers(dst, tmp1, tmp2, R0, noreg);
+ }
+
+ /* ==== Load the pointer using the standard implementation for the actual heap access
+ and the decompression of compressed pointers ==== */
+  // The result of 'load_at' (standard implementation) will be written back to 'dst'.
+  // As 'base' is still required for the C call, it must be preserved in case of a register clash.
+ Register saved_base = base;
+ if (base == dst) {
+ __ mr(tmp2, base);
+ saved_base = tmp2;
+ }
+
+ BarrierSetAssembler::load_at(masm, decorators, type, base, ind_or_offs, dst,
+ tmp1, noreg, preservation_level, L_handle_null);
+
+ /* ==== Check whether pointer is dirty ==== */
+ Label skip_barrier;
+
+ // Load bad mask into scratch register.
+ __ ld(tmp1, (intptr_t) ZThreadLocalData::address_bad_mask_offset(), R16_thread);
+
+  // The color bits of the to-be-tested pointer do not have to be equal to the 'bad_mask' testing bits.
+  // A pointer is classified as dirty if any of its color bits that also occur in the bad mask is set.
+  // Conversely, it follows that the bitwise AND of the bad mask and the pointer must be zero
+  // if the pointer is not dirty.
+  // Only dirty pointers must be processed by this barrier, so it can be skipped whenever that AND yields zero.
+ __ and_(tmp1, tmp1, dst);
+ __ beq(CCR0, skip_barrier);
+
+ /* ==== Invoke barrier ==== */
+ int nbytes_save = 0;
+
+ const bool needs_frame = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR;
+ const bool preserve_gp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_REGS;
+ const bool preserve_fp_registers = preservation_level >= MacroAssembler::PRESERVATION_FRAME_LR_GP_FP_REGS;
+
+ const bool preserve_R3 = dst != R3_ARG1;
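+  // If 'dst' is 'R3_ARG1', the C call's return value may simply overwrite it, so R3 need not be preserved.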
+
+ if (needs_frame) {
+ if (preserve_gp_registers) {
+ nbytes_save = (preserve_fp_registers
+ ? MacroAssembler::num_volatile_gp_regs + MacroAssembler::num_volatile_fp_regs
+ : MacroAssembler::num_volatile_gp_regs) * BytesPerWord;
+ nbytes_save -= preserve_R3 ? 0 : BytesPerWord;
+ __ save_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers, preserve_R3);
+ }
+
+ __ save_LR_CR(tmp1);
+ __ push_frame_reg_args(nbytes_save, tmp1);
+ }
+
+ // Setup arguments
+ if (saved_base != R3_ARG1) {
+ __ mr_if_needed(R3_ARG1, dst);
+ __ add(R4_ARG2, ind_or_offs, saved_base);
+ } else if (dst != R4_ARG2) {
+ __ add(R4_ARG2, ind_or_offs, saved_base);
+ __ mr(R3_ARG1, dst);
+ } else {
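+    // 'saved_base' occupies 'R3_ARG1' and 'dst' occupies 'R4_ARG2': stage the address in R0 to break the cycle.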
+ __ add(R0, ind_or_offs, saved_base);
+ __ mr(R3_ARG1, dst);
+ __ mr(R4_ARG2, R0);
+ }
+
+ __ call_VM_leaf(ZBarrierSetRuntime::load_barrier_on_oop_field_preloaded_addr(decorators));
+
+ Register result = R3_RET;
+ if (needs_frame) {
+ __ pop_frame();
+ __ restore_LR_CR(tmp1);
+
+ if (preserve_R3) {
+ __ mr(R0, R3_RET);
+ result = R0;
+ }
+
+ if (preserve_gp_registers) {
+ __ restore_volatile_gprs(R1_SP, -nbytes_save, preserve_fp_registers, preserve_R3);
+ }
+ }
+ __ mr_if_needed(dst, result);
+
+ __ bind(skip_barrier);
+ __ block_comment("} load_at (zgc)");
+}
+
+#ifdef ASSERT
+// The Z store barrier only verifies the pointers it operates on and is thus purely a debugging measure.
+void ZBarrierSetAssembler::store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register val,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level) {
+ __ block_comment("store_at (zgc) {");
+
+ // If the 'val' register is 'noreg', the to-be-stored value is a null pointer.
+ if (is_reference_type(type) && val != noreg) {
+ __ ld(tmp1, in_bytes(ZThreadLocalData::address_bad_mask_offset()), R16_thread);
+ __ and_(tmp1, tmp1, val);
+ __ asm_assert_eq("Detected dirty pointer on the heap in Z store barrier");
+ }
+
+ // Store value
+ BarrierSetAssembler::store_at(masm, decorators, type, base, ind_or_offs, val, tmp1, tmp2, tmp3, preservation_level);
+
+ __ block_comment("} store_at (zgc)");
+}
+#endif // ASSERT
+
+void ZBarrierSetAssembler::arraycopy_prologue(MacroAssembler *masm, DecoratorSet decorators, BasicType component_type,
+ Register src, Register dst, Register count,
+ Register preserve1, Register preserve2) {
+ __ block_comment("arraycopy_prologue (zgc) {");
+
+ /* ==== Check whether a special gc barrier is required for this particular load ==== */
+ if (!is_reference_type(component_type)) {
+ return;
+ }
+
+ Label skip_barrier;
+
+ // Fast path: Array is of length zero
+ __ cmpdi(CCR0, count, 0);
+ __ beq(CCR0, skip_barrier);
+
+ /* ==== Ensure register sanity ==== */
+ Register tmp_R11 = R11_scratch1;
+
+ assert_different_registers(src, dst, count, tmp_R11, noreg);
+ if (preserve1 != noreg) {
+    // Not technically required, but a clash is unlikely to be intended.
+ assert_different_registers(preserve1, preserve2);
+ }
+
+ /* ==== Invoke barrier (slowpath) ==== */
+ int nbytes_save = 0;
+
+ {
+ assert(!noreg->is_volatile(), "sanity");
+
+ if (preserve1->is_volatile()) {
+ __ std(preserve1, -BytesPerWord * ++nbytes_save, R1_SP);
+ }
+
+ if (preserve2->is_volatile() && preserve1 != preserve2) {
+ __ std(preserve2, -BytesPerWord * ++nbytes_save, R1_SP);
+ }
+
+ __ std(src, -BytesPerWord * ++nbytes_save, R1_SP);
+ __ std(dst, -BytesPerWord * ++nbytes_save, R1_SP);
+ __ std(count, -BytesPerWord * ++nbytes_save, R1_SP);
+
+ __ save_LR_CR(tmp_R11);
+ __ push_frame_reg_args(nbytes_save, tmp_R11);
+ }
+
+ // ZBarrierSetRuntime::load_barrier_on_oop_array_addr(src, count)
+ if (count == R3_ARG1) {
+ if (src == R4_ARG2) {
+ // Arguments are provided in reverse order
+ __ mr(tmp_R11, count);
+ __ mr(R3_ARG1, src);
+ __ mr(R4_ARG2, tmp_R11);
+ } else {
+ __ mr(R4_ARG2, count);
+ __ mr(R3_ARG1, src);
+ }
+ } else {
+ __ mr_if_needed(R3_ARG1, src);
+ __ mr_if_needed(R4_ARG2, count);
+ }
+
+ __ call_VM_leaf(ZBarrierSetRuntime::load_barrier_on_oop_array_addr());
+
+ __ pop_frame();
+ __ restore_LR_CR(tmp_R11);
+
+ {
+ __ ld(count, -BytesPerWord * nbytes_save--, R1_SP);
+ __ ld(dst, -BytesPerWord * nbytes_save--, R1_SP);
+ __ ld(src, -BytesPerWord * nbytes_save--, R1_SP);
+
+ if (preserve2->is_volatile() && preserve1 != preserve2) {
+ __ ld(preserve2, -BytesPerWord * nbytes_save--, R1_SP);
+ }
+
+ if (preserve1->is_volatile()) {
+ __ ld(preserve1, -BytesPerWord * nbytes_save--, R1_SP);
+ }
+ }
+
+ __ bind(skip_barrier);
+
+ __ block_comment("} arraycopy_prologue (zgc)");
+}
+
+void ZBarrierSetAssembler::try_resolve_jobject_in_native(MacroAssembler* masm, Register dst, Register jni_env,
+ Register obj, Register tmp, Label& slowpath) {
+ __ block_comment("try_resolve_jobject_in_native (zgc) {");
+
+ assert_different_registers(jni_env, obj, tmp);
+
+  // Resolve the pointer using the standard implementation for weak tag handling and pointer verification.
+ BarrierSetAssembler::try_resolve_jobject_in_native(masm, dst, jni_env, obj, tmp, slowpath);
+
+ // Check whether pointer is dirty.
+ __ ld(tmp,
+ in_bytes(ZThreadLocalData::address_bad_mask_offset() - JavaThread::jni_environment_offset()),
+ jni_env);
+
+ __ and_(tmp, obj, tmp);
+ __ bne(CCR0, slowpath);
+
+ __ block_comment("} try_resolve_jobject_in_native (zgc)");
+}
+
+#undef __
+
+#ifdef COMPILER1
+#define __ ce->masm()->
+
+// Code emitted by LIR node "LIR_OpZLoadBarrierTest" which in turn is emitted by ZBarrierSetC1::load_barrier.
+// The actual compare and branch instructions are represented as stand-alone LIR nodes.
+void ZBarrierSetAssembler::generate_c1_load_barrier_test(LIR_Assembler* ce,
+ LIR_Opr ref) const {
+ __ block_comment("load_barrier_test (zgc) {");
+
+ __ ld(R0, in_bytes(ZThreadLocalData::address_bad_mask_offset()), R16_thread);
+ __ andr(R0, R0, ref->as_pointer_register());
+ __ cmpdi(CCR5 /* as mandated by LIR node */, R0, 0);
+
+ __ block_comment("} load_barrier_test (zgc)");
+}
+
+// Code emitted by code stub "ZLoadBarrierStubC1" which in turn is emitted by ZBarrierSetC1::load_barrier.
+// Invokes the runtime stub which is defined just below.
+void ZBarrierSetAssembler::generate_c1_load_barrier_stub(LIR_Assembler* ce,
+ ZLoadBarrierStubC1* stub) const {
+ __ block_comment("c1_load_barrier_stub (zgc) {");
+
+ __ bind(*stub->entry());
+
+ /* ==== Determine relevant data registers and ensure register sanity ==== */
+ Register ref = stub->ref()->as_register();
+ Register ref_addr = noreg;
+
+ // Determine reference address
+ if (stub->tmp()->is_valid()) {
+ // 'tmp' register is given, so address might have an index or a displacement.
+ ce->leal(stub->ref_addr(), stub->tmp());
+ ref_addr = stub->tmp()->as_pointer_register();
+ } else {
+ // 'tmp' register is not given, so address must have neither an index nor a displacement.
+ // The address' base register is thus usable as-is.
+ assert(stub->ref_addr()->as_address_ptr()->disp() == 0, "illegal displacement");
+ assert(!stub->ref_addr()->as_address_ptr()->index()->is_valid(), "illegal index");
+
+ ref_addr = stub->ref_addr()->as_address_ptr()->base()->as_pointer_register();
+ }
+
+ assert_different_registers(ref, ref_addr, R0, noreg);
+
+ /* ==== Invoke stub ==== */
+ // Pass arguments via stack. The stack pointer will be bumped by the stub.
+ __ std(ref, (intptr_t) -1 * BytesPerWord, R1_SP);
+ __ std(ref_addr, (intptr_t) -2 * BytesPerWord, R1_SP);
+
+ __ load_const_optimized(R0, stub->runtime_stub());
+ __ call_stub(R0);
+
+ // The runtime stub passes the result via the R0 register, overriding the previously-loaded stub address.
+ __ mr_if_needed(ref, R0);
+ __ b(*stub->continuation());
+
+ __ block_comment("} c1_load_barrier_stub (zgc)");
+}
+
+#undef __
+#define __ sasm->
+
+// Code emitted by runtime code stub which in turn is emitted by ZBarrierSetC1::generate_c1_runtime_stubs.
+void ZBarrierSetAssembler::generate_c1_load_barrier_runtime_stub(StubAssembler* sasm,
+ DecoratorSet decorators) const {
+ __ block_comment("c1_load_barrier_runtime_stub (zgc) {");
+
+ const int stack_parameters = 2;
+ const int nbytes_save = (MacroAssembler::num_volatile_regs + stack_parameters) * BytesPerWord;
+
+ __ save_volatile_gprs(R1_SP, -nbytes_save);
+ __ save_LR_CR(R0);
+
+ // Load arguments back again from the stack.
+ __ ld(R3_ARG1, (intptr_t) -1 * BytesPerWord, R1_SP); // ref
+ __ ld(R4_ARG2, (intptr_t) -2 * BytesPerWord, R1_SP); // ref_addr
+
+ __ push_frame_reg_args(nbytes_save, R0);
+
+ __ call_VM_leaf(ZBarrierSetRuntime::load_barrier_on_oop_field_preloaded_addr(decorators));
+
+ __ verify_oop(R3_RET, "Bad pointer after barrier invocation");
+ __ mr(R0, R3_RET);
+
+ __ pop_frame();
+ __ restore_LR_CR(R3_RET);
+ __ restore_volatile_gprs(R1_SP, -nbytes_save);
+
+ __ blr();
+
+ __ block_comment("} c1_load_barrier_runtime_stub (zgc)");
+}
+
+#undef __
+#endif // COMPILER1
+
+#ifdef COMPILER2
+
+OptoReg::Name ZBarrierSetAssembler::refine_register(const Node* node, OptoReg::Name opto_reg) const {
+ if (!OptoReg::is_reg(opto_reg)) {
+ return OptoReg::Bad;
+ }
+
+ VMReg vm_reg = OptoReg::as_VMReg(opto_reg);
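+  // Each register is represented by two adjacent OptoReg names; only the first (even) name is of interest here.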
+ if ((vm_reg->is_Register() || vm_reg ->is_FloatRegister()) && (opto_reg & 1) != 0) {
+ return OptoReg::Bad;
+ }
+
+ return opto_reg;
+}
+
+#define __ _masm->
+
+class ZSaveLiveRegisters {
+
+ private:
+ MacroAssembler* _masm;
+ RegMask _reg_mask;
+ Register _result_reg;
+
+ public:
+ ZSaveLiveRegisters(MacroAssembler *masm, ZLoadBarrierStubC2 *stub)
+ : _masm(masm), _reg_mask(stub->live()), _result_reg(stub->ref()) {
+
+ const int total_regs_amount = iterate_over_register_mask(ACTION_SAVE);
+
+ __ save_LR_CR(R0);
+ __ push_frame_reg_args(total_regs_amount * BytesPerWord, R0);
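+    // The registers were spilled just below the caller's SP; the pushed frame covers those slots,
+    // so they survive the upcoming call.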
+ }
+
+ ~ZSaveLiveRegisters() {
+ __ pop_frame();
+ __ restore_LR_CR(R0);
+
+ iterate_over_register_mask(ACTION_RESTORE);
+ }
+
+ private:
+ enum IterationAction : int {
+ ACTION_SAVE = 0,
+ ACTION_RESTORE = 1
+ };
+
+ int iterate_over_register_mask(IterationAction action) {
+ int reg_save_index = 0;
+ RegMaskIterator live_regs_iterator(_reg_mask);
+
+    while (live_regs_iterator.has_next()) {
+ const OptoReg::Name opto_reg = live_regs_iterator.next();
+
+ // Filter out stack slots (spilled registers, i.e., stack-allocated registers).
+ if (!OptoReg::is_reg(opto_reg)) {
+ continue;
+ }
+
+ const VMReg vm_reg = OptoReg::as_VMReg(opto_reg);
+ if (vm_reg->is_Register()) {
+ Register std_reg = vm_reg->as_Register();
+
+        // '_result_reg' will hold the end result of the operation. Its content thus need not be preserved.
+ if (std_reg == _result_reg) {
+ continue;
+ }
+
+ if (std_reg->encoding() >= R2->encoding() && std_reg->encoding() <= R12->encoding()) {
+ reg_save_index++;
+
+ if (action == ACTION_SAVE) {
+ _masm->std(std_reg, (intptr_t) -reg_save_index * BytesPerWord, R1_SP);
+ } else if (action == ACTION_RESTORE) {
+ _masm->ld(std_reg, (intptr_t) -reg_save_index * BytesPerWord, R1_SP);
+ } else {
+ fatal("Sanity");
+ }
+ }
+ } else if (vm_reg->is_FloatRegister()) {
+ FloatRegister fp_reg = vm_reg->as_FloatRegister();
+ if (fp_reg->encoding() >= F0->encoding() && fp_reg->encoding() <= F13->encoding()) {
+ reg_save_index++;
+
+ if (action == ACTION_SAVE) {
+ _masm->stfd(fp_reg, (intptr_t) -reg_save_index * BytesPerWord, R1_SP);
+ } else if (action == ACTION_RESTORE) {
+ _masm->lfd(fp_reg, (intptr_t) -reg_save_index * BytesPerWord, R1_SP);
+ } else {
+ fatal("Sanity");
+ }
+ }
+ } else if (vm_reg->is_ConditionRegister()) {
+        // NOP. Condition registers are covered by 'save_LR_CR'.
+ } else {
+ if (vm_reg->is_VectorRegister()) {
+ fatal("Vector registers are unsupported. Found register %s", vm_reg->name());
+ } else if (vm_reg->is_SpecialRegister()) {
+ fatal("Special registers are unsupported. Found register %s", vm_reg->name());
+ } else {
+ fatal("Register type is not known");
+ }
+ }
+ }
+
+ return reg_save_index;
+ }
+};
+
+#undef __
+#define __ _masm->
+
+class ZSetupArguments {
+ private:
+ MacroAssembler* const _masm;
+ const Register _ref;
+ const Address _ref_addr;
+
+ public:
+ ZSetupArguments(MacroAssembler* masm, ZLoadBarrierStubC2* stub) :
+ _masm(masm),
+ _ref(stub->ref()),
+ _ref_addr(stub->ref_addr()) {
+
+ // Desired register/argument configuration:
+ // _ref: R3_ARG1
+ // _ref_addr: R4_ARG2
+
+ // '_ref_addr' can be unspecified. In that case, the barrier will not heal the reference.
+ if (_ref_addr.base() == noreg) {
+ assert_different_registers(_ref, R0, noreg);
+
+ __ mr_if_needed(R3_ARG1, _ref);
+ __ li(R4_ARG2, 0);
+ } else {
+ assert_different_registers(_ref, _ref_addr.base(), R0, noreg);
+ assert(!_ref_addr.index()->is_valid(), "reference addresses must not contain an index component");
+
+ if (_ref != R4_ARG2) {
+ // Calculate address first as the address' base register might clash with R4_ARG2
+ __ add(R4_ARG2, (intptr_t) _ref_addr.disp(), _ref_addr.base());
+ __ mr_if_needed(R3_ARG1, _ref);
+ } else if (_ref_addr.base() != R3_ARG1) {
+ __ mr(R3_ARG1, _ref);
+        __ add(R4_ARG2, (intptr_t) _ref_addr.disp(), _ref_addr.base()); // Clobbering _ref
+ } else {
+ // Arguments are provided in inverse order (i.e. _ref == R4_ARG2, _ref_addr == R3_ARG1)
+ __ mr(R0, _ref);
+ __ add(R4_ARG2, (intptr_t) _ref_addr.disp(), _ref_addr.base());
+ __ mr(R3_ARG1, R0);
+ }
+ }
+ }
+};
+
+#undef __
+#define __ masm->
+
+void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, ZLoadBarrierStubC2* stub) const {
+ __ block_comment("generate_c2_load_barrier_stub (zgc) {");
+
+ __ bind(*stub->entry());
+
+ Register ref = stub->ref();
+ Address ref_addr = stub->ref_addr();
+
+ assert_different_registers(ref, ref_addr.base());
+
+ {
+ ZSaveLiveRegisters save_live_registers(masm, stub);
+ ZSetupArguments setup_arguments(masm, stub);
+
+ __ call_VM_leaf(stub->slow_path());
+ __ mr_if_needed(ref, R3_RET);
+ }
+
+ __ b(*stub->continuation());
+
+ __ block_comment("} generate_c2_load_barrier_stub (zgc)");
+}
+
+#undef __
+#endif // COMPILER2
diff --git a/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.hpp b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.hpp
new file mode 100644
index 0000000000000..e2ff1bf53ae80
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.hpp
@@ -0,0 +1,86 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+#ifndef CPU_PPC_GC_Z_ZBARRIERSETASSEMBLER_PPC_HPP
+#define CPU_PPC_GC_Z_ZBARRIERSETASSEMBLER_PPC_HPP
+
+#include "code/vmreg.hpp"
+#include "oops/accessDecorators.hpp"
+#ifdef COMPILER2
+#include "opto/optoreg.hpp"
+#endif // COMPILER2
+
+#ifdef COMPILER1
+class LIR_Assembler;
+class LIR_OprDesc;
+typedef LIR_OprDesc* LIR_Opr;
+class StubAssembler;
+class ZLoadBarrierStubC1;
+#endif // COMPILER1
+
+#ifdef COMPILER2
+class Node;
+class ZLoadBarrierStubC2;
+#endif // COMPILER2
+
+class ZBarrierSetAssembler : public ZBarrierSetAssemblerBase {
+public:
+ virtual void load_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register dst,
+ Register tmp1, Register tmp2,
+ MacroAssembler::PreservationLevel preservation_level, Label *L_handle_null = NULL);
+
+#ifdef ASSERT
+ virtual void store_at(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register base, RegisterOrConstant ind_or_offs, Register val,
+ Register tmp1, Register tmp2, Register tmp3,
+ MacroAssembler::PreservationLevel preservation_level);
+#endif // ASSERT
+
+ virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
+ Register src, Register dst, Register count,
+ Register preserve1, Register preserve2);
+
+ virtual void try_resolve_jobject_in_native(MacroAssembler* masm, Register dst, Register jni_env,
+ Register obj, Register tmp, Label& slowpath);
+
+#ifdef COMPILER1
+ void generate_c1_load_barrier_test(LIR_Assembler* ce,
+ LIR_Opr ref) const;
+
+ void generate_c1_load_barrier_stub(LIR_Assembler* ce,
+ ZLoadBarrierStubC1* stub) const;
+
+ void generate_c1_load_barrier_runtime_stub(StubAssembler* sasm,
+ DecoratorSet decorators) const;
+#endif // COMPILER1
+
+#ifdef COMPILER2
+ OptoReg::Name refine_register(const Node* node, OptoReg::Name opto_reg) const;
+
+ void generate_c2_load_barrier_stub(MacroAssembler* masm, ZLoadBarrierStubC2* stub) const;
+#endif // COMPILER2
+};
+
+#endif // CPU_PPC_GC_Z_ZBARRIERSETASSEMBLER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.cpp b/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.cpp
new file mode 100644
index 0000000000000..93c2f9b4dc44e
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.cpp
@@ -0,0 +1,203 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+#include "precompiled.hpp"
+#include "gc/shared/gcLogPrecious.hpp"
+#include "gc/shared/gc_globals.hpp"
+#include "gc/z/zGlobals.hpp"
+#include "runtime/globals.hpp"
+#include "runtime/os.hpp"
+#include "utilities/globalDefinitions.hpp"
+#include "utilities/powerOfTwo.hpp"
+#include <cstddef>
+
+#ifdef LINUX
+#include <sys/mman.h>
+#endif // LINUX
+
+//
+// The overall memory layouts across different power platforms are similar and only differ with regard to
+// the position of the highest addressable bit; the position of the metadata bits and the size of the actual
+// addressable heap address space are adjusted accordingly.
+//
+// The following memory schema shows an exemplary layout in which bit '45' is the highest addressable bit.
+// It is assumed that this virtual memory address space layout is predominant on the power platform.
+//
+// Standard Address Space & Pointer Layout
+// ---------------------------------------
+//
+// +--------------------------------+ 0x00007FFFFFFFFFFF (128 TiB - 1)
+// . .
+// . .
+// . .
+// +--------------------------------+ 0x0000140000000000 (20 TiB)
+// | Remapped View |
+// +--------------------------------+ 0x0000100000000000 (16 TiB)
+// . .
+// +--------------------------------+ 0x00000c0000000000 (12 TiB)
+// | Marked1 View |
+// +--------------------------------+ 0x0000080000000000 (8 TiB)
+// | Marked0 View |
+// +--------------------------------+ 0x0000040000000000 (4 TiB)
+// . .
+// +--------------------------------+ 0x0000000000000000
+//
+// 6 4 4 4 4
+// 3 6 5 2 1 0
+// +--------------------+----+-----------------------------------------------+
+// |00000000 00000000 00|1111|11 11111111 11111111 11111111 11111111 11111111|
+// +--------------------+----+-----------------------------------------------+
+// | | |
+// | | * 41-0 Object Offset (42-bits, 4TB address space)
+// | |
+// | * 45-42 Metadata Bits (4-bits) 0001 = Marked0 (Address view 4-8TB)
+// | 0010 = Marked1 (Address view 8-12TB)
+// | 0100 = Remapped (Address view 16-20TB)
+// | 1000 = Finalizable (Address view N/A)
+// |
+// * 63-46 Fixed (18-bits, always zero)
+//
+
+// Maximum value as per spec (Power ISA v2.07): 2 ^ 60 bytes, i.e. 1 EiB (exbibyte)
+static const unsigned int MAXIMUM_MAX_ADDRESS_BIT = 60;
+
+// Most modern power processors provide an address space with no more than 45 addressable bits,
+// i.e. an address space of 32 TiB in size.
+static const unsigned int DEFAULT_MAX_ADDRESS_BIT = 45;
+
+// Minimum value returned, if probing fails: 64 GiB
+static const unsigned int MINIMUM_MAX_ADDRESS_BIT = 36;
+
+// Determines the highest addressable bit of the virtual address space (depends on platform)
+// by trying to interact with memory in that address range,
+// i.e. by syncing existing mappings (msync) or by temporarily mapping the memory area (mmap).
+// If one of those operations succeeds, it is proven that the targeted memory area is within the virtual address space.
+//
+// To reduce the number of required system calls to a bare minimum, the DEFAULT_MAX_ADDRESS_BIT is intentionally set
+// lower than what the ABI would theoretically permit.
+// Such an avoidance strategy, however, might impose unnecessary limits on processors that exceed this limit.
+// If DEFAULT_MAX_ADDRESS_BIT is addressable, the next higher bit will be tested as well to ensure that
+// the made assumption does not artificially restrict the memory availability.
+static unsigned int probe_valid_max_address_bit(size_t init_bit, size_t min_bit) {
+ assert(init_bit >= min_bit, "Sanity");
+ assert(init_bit <= MAXIMUM_MAX_ADDRESS_BIT, "Test bit is outside the assumed address space range");
+
+#ifdef LINUX
+ unsigned int max_valid_address_bit = 0;
+ void* last_allocatable_address = nullptr;
+
+ const unsigned int page_size = os::vm_page_size();
+
+ for (size_t i = init_bit; i >= min_bit; --i) {
+ void* base_addr = (void*) (((unsigned long) 1U) << i);
+
+ /* ==== Try msync-ing already mapped memory page ==== */
+ if (msync(base_addr, page_size, MS_ASYNC) == 0) {
+      // The page at the given address was synced by the Linux kernel and must thus be both mapped and valid.
+ max_valid_address_bit = i;
+ break;
+ }
+ if (errno != ENOMEM) {
+      // An unexpected error occurred, i.e. an error that does not indicate that the targeted memory page is
+      // unmapped, but points out another type of issue.
+      // Even though this should never happen, such issues may come up due to undefined behavior.
+#ifdef ASSERT
+ fatal("Received '%s' while probing the address space for the highest valid bit", os::errno_name(errno));
+#else // ASSERT
+ log_warning_p(gc)("Received '%s' while probing the address space for the highest valid bit", os::errno_name(errno));
+#endif // ASSERT
+ continue;
+ }
+
+ /* ==== Try mapping memory page on our own ==== */
+ last_allocatable_address = mmap(base_addr, page_size, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0);
+ if (last_allocatable_address != MAP_FAILED) {
+ munmap(last_allocatable_address, page_size);
+ }
+
+ if (last_allocatable_address == base_addr) {
+      // As the Linux kernel mapped exactly the page we requested, the address must be valid.
+ max_valid_address_bit = i;
+ break;
+ }
+
+ log_info_p(gc, init)("Probe failed for bit '%zu'", i);
+ }
+
+ if (max_valid_address_bit == 0) {
+    // Probing did not find any usable address bit.
+ // As an alternative, the VM evaluates the address returned by mmap as it is expected that the reserved page
+ // will be close to the probed address that was out-of-range.
+ // As per mmap(2), "the kernel [will take] [the address] as a hint about where to
+ // place the mapping; on Linux, the mapping will be created at a nearby page boundary".
+ // It should thus be a "close enough" approximation to the real virtual memory address space limit.
+ //
+ // This recovery strategy is only applied in production builds.
+ // In debug builds, an assertion in 'ZPlatformAddressOffsetBits' will bail out the VM to indicate that
+ // the assumed address space is no longer up-to-date.
+ if (last_allocatable_address != MAP_FAILED) {
+ const unsigned int bitpos = BitsPerSize_t - count_leading_zeros((size_t) last_allocatable_address) - 1;
+ log_info_p(gc, init)("Did not find any valid addresses within the range, using address '%u' instead", bitpos);
+ return bitpos;
+ }
+
+#ifdef ASSERT
+    fatal("Available address space cannot be determined");
+#else // ASSERT
+ log_warning_p(gc)("Cannot determine available address space. Falling back to default value.");
+ return DEFAULT_MAX_ADDRESS_BIT;
+#endif // ASSERT
+ } else {
+ if (max_valid_address_bit == init_bit) {
+      // A usable address bit was found immediately.
+      // To ensure that the entire virtual address space is exploited, the next higher bit is tested as well.
+ log_info_p(gc, init)("Hit valid address '%u' on first try, retrying with next higher bit", max_valid_address_bit);
+ return MAX2(max_valid_address_bit, probe_valid_max_address_bit(init_bit + 1, init_bit + 1));
+ }
+ }
+
+ log_info_p(gc, init)("Found valid address '%u'", max_valid_address_bit);
+ return max_valid_address_bit;
+#else // LINUX
+ return DEFAULT_MAX_ADDRESS_BIT;
+#endif // LINUX
+}
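The mmap-hint fallback above derives a bit position from whatever address the kernel actually returned. A minimal sketch of that arithmetic (illustrative; highest_set_bit is a hypothetical helper built on the GCC/Clang __builtin_clzll builtin, mirroring the count_leading_zeros call in the function):

#include <cstdint>
#include <cstdio>

static unsigned highest_set_bit(uint64_t addr) {
  return 63u - (unsigned)__builtin_clzll(addr); // precondition: addr != 0
}

int main() {
  // If the kernel placed the mapping at, say, 0x7fff00000000 (a 47-bit address space),
  // the highest set bit is 46, which is then taken as a "close enough" limit.
  printf("%u\n", highest_set_bit(0x7fff00000000ULL)); // prints 46
  return 0;
}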
+
+size_t ZPlatformAddressOffsetBits() {
+ const static unsigned int valid_max_address_offset_bits =
+ probe_valid_max_address_bit(DEFAULT_MAX_ADDRESS_BIT, MINIMUM_MAX_ADDRESS_BIT) + 1;
+ assert(valid_max_address_offset_bits >= MINIMUM_MAX_ADDRESS_BIT,
+ "Highest addressable bit is outside the assumed address space range");
+
+ const size_t max_address_offset_bits = valid_max_address_offset_bits - 3;
+ const size_t min_address_offset_bits = max_address_offset_bits - 2;
+ const size_t address_offset = round_up_power_of_2(MaxHeapSize * ZVirtualToPhysicalRatio);
+ const size_t address_offset_bits = log2i_exact(address_offset);
+
+ return clamp(address_offset_bits, min_address_offset_bits, max_address_offset_bits);
+}
+
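As a worked example of the clamp above (all values illustrative): if probing found bit 45, valid_max_address_offset_bits is 46, so the offset range is [41, 43] bits; a 16 GB heap with a virtual-to-physical ratio of 16 needs 2^38 bytes of address space, which clamps up to 41.

#include <algorithm>
#include <cstddef>
#include <cstdio>

int main() {
  const size_t valid_max = 45 + 1;        // probe found bit 45
  const size_t max_bits  = valid_max - 3; // 43
  const size_t min_bits  = max_bits - 2;  // 41
  const size_t heap_bits = 38;            // 16 GB heap * ratio 16 = 256 GB = 2^38
  printf("address offset bits = %zu\n", std::clamp(heap_bits, min_bits, max_bits)); // 41
  return 0;
}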
+size_t ZPlatformAddressMetadataShift() {
+ return ZPlatformAddressOffsetBits();
+}
diff --git a/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.hpp b/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.hpp
new file mode 100644
index 0000000000000..3657b16fc1aa6
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/z/zGlobals_ppc.hpp
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+#ifndef CPU_PPC_GC_Z_ZGLOBALS_PPC_HPP
+#define CPU_PPC_GC_Z_ZGLOBALS_PPC_HPP
+
+#include "globalDefinitions_ppc.hpp"
+const size_t ZPlatformGranuleSizeShift = 21; // 2MB
+const size_t ZPlatformHeapViews = 3;
+const size_t ZPlatformCacheLineSize = DEFAULT_CACHE_LINE_SIZE;
+
+size_t ZPlatformAddressOffsetBits();
+size_t ZPlatformAddressMetadataShift();
+
+#endif // CPU_PPC_GC_Z_ZGLOBALS_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/gc/z/z_ppc.ad b/src/hotspot/cpu/ppc/gc/z/z_ppc.ad
new file mode 100644
index 0000000000000..a8ce64ed1d9c1
--- /dev/null
+++ b/src/hotspot/cpu/ppc/gc/z/z_ppc.ad
@@ -0,0 +1,298 @@
+//
+// Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+// Copyright (c) 2021 SAP SE. All rights reserved.
+// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+//
+// This code is free software; you can redistribute it and/or modify it
+// under the terms of the GNU General Public License version 2 only, as
+// published by the Free Software Foundation.
+//
+// This code is distributed in the hope that it will be useful, but WITHOUT
+// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+// FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+// version 2 for more details (a copy is included in the LICENSE file that
+// accompanied this code).
+//
+// You should have received a copy of the GNU General Public License version
+// 2 along with this work; if not, write to the Free Software Foundation,
+// Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+//
+// Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+// or visit www.oracle.com if you need additional information or have any
+// questions.
+//
+
+source_hpp %{
+
+#include "gc/shared/gc_globals.hpp"
+#include "gc/z/c2/zBarrierSetC2.hpp"
+#include "gc/z/zThreadLocalData.hpp"
+
+%}
+
+source %{
+
+static void z_load_barrier(MacroAssembler& _masm, const MachNode* node, Address ref_addr, Register ref,
+ Register tmp, uint8_t barrier_data) {
+ if (barrier_data == ZLoadBarrierElided) {
+ return;
+ }
+
+ ZLoadBarrierStubC2* const stub = ZLoadBarrierStubC2::create(node, ref_addr, ref, tmp, barrier_data);
+ __ ld(tmp, in_bytes(ZThreadLocalData::address_bad_mask_offset()), R16_thread);
+ __ and_(tmp, tmp, ref);
+ __ bne_far(CCR0, *stub->entry(), MacroAssembler::bc_far_optimize_on_relocate);
+ __ bind(*stub->continuation());
+}
+
+static void z_load_barrier_slow_path(MacroAssembler& _masm, const MachNode* node, Address ref_addr, Register ref,
+ Register tmp) {
+ ZLoadBarrierStubC2* const stub = ZLoadBarrierStubC2::create(node, ref_addr, ref, tmp, ZLoadBarrierStrong);
+ __ b(*stub->entry());
+ __ bind(*stub->continuation());
+}
+
+static void z_compare_and_swap(MacroAssembler& _masm, const MachNode* node,
+ Register res, Register mem, Register oldval, Register newval,
+ Register tmp_xchg, Register tmp_mask,
+ bool weak, bool acquire) {
+ // z-specific load barrier requires strong CAS operations.
+ // Weak CAS operations are thus only emitted if the barrier is elided.
+ __ cmpxchgd(CCR0, tmp_xchg, oldval, newval, mem,
+ MacroAssembler::MemBarNone, MacroAssembler::cmpxchgx_hint_atomic_update(), res, NULL, true,
+ weak && node->barrier_data() == ZLoadBarrierElided);
+
+ if (node->barrier_data() != ZLoadBarrierElided) {
+ Label skip_barrier;
+
+ __ ld(tmp_mask, in_bytes(ZThreadLocalData::address_bad_mask_offset()), R16_thread);
+ __ and_(tmp_mask, tmp_mask, tmp_xchg);
+ __ beq(CCR0, skip_barrier);
+
+ // CAS must have failed because pointer in memory is bad.
+ z_load_barrier_slow_path(_masm, node, Address(mem), tmp_xchg, res /* used as tmp */);
+
+ __ cmpxchgd(CCR0, tmp_xchg, oldval, newval, mem,
+ MacroAssembler::MemBarNone, MacroAssembler::cmpxchgx_hint_atomic_update(), res, NULL, true, weak);
+
+ __ bind(skip_barrier);
+ }
+
+ if (acquire) {
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ // Uses the isync instruction as an acquire barrier.
+ // This exploits the compare and the branch in the z load barrier (load, compare and branch, isync).
+ __ isync();
+ } else {
+ __ sync();
+ }
+ }
+}
+
+static void z_compare_and_exchange(MacroAssembler& _masm, const MachNode* node,
+ Register res, Register mem, Register oldval, Register newval, Register tmp,
+ bool weak, bool acquire) {
+ // z-specific load barrier requires strong CAS operations.
+ // Weak CAS operations are thus only emitted if the barrier is elided.
+ __ cmpxchgd(CCR0, res, oldval, newval, mem,
+ MacroAssembler::MemBarNone, MacroAssembler::cmpxchgx_hint_atomic_update(), noreg, NULL, true,
+ weak && node->barrier_data() == ZLoadBarrierElided);
+
+ if (node->barrier_data() != ZLoadBarrierElided) {
+ Label skip_barrier;
+ __ ld(tmp, in_bytes(ZThreadLocalData::address_bad_mask_offset()), R16_thread);
+ __ and_(tmp, tmp, res);
+ __ beq(CCR0, skip_barrier);
+
+ z_load_barrier_slow_path(_masm, node, Address(mem), res, tmp);
+
+ __ cmpxchgd(CCR0, res, oldval, newval, mem,
+ MacroAssembler::MemBarNone, MacroAssembler::cmpxchgx_hint_atomic_update(), noreg, NULL, true, weak);
+
+ __ bind(skip_barrier);
+ }
+
+ if (acquire) {
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ // Uses the isync instruction as an acquire barrier.
+ // This exploits the compare and the branch in the z load barrier (load, compare and branch, isync).
+ __ isync();
+ } else {
+ __ sync();
+ }
+ }
+}
+
+%}
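The ld/and_/bne_far sequence emitted by z_load_barrier above corresponds roughly to the following pseudo-C++ fast path (a sketch under assumed names, not actual ZGC code):

#include <cstdint>

static void call_load_barrier_stub(uintptr_t& ref) { /* ZLoadBarrierStubC2 would heal ref here */ }

static void load_barrier(uintptr_t& ref, uintptr_t thread_bad_mask) {
  if ((ref & thread_bad_mask) != 0) { // ld + and_ + bne_far above
    call_load_barrier_stub(ref);      // slow path: heal the reference
  }
  // continuation: ref is now a good pointer for the current GC phase
}

int main() { uintptr_t r = 0; load_barrier(r, 0); return 0; }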
+
+instruct zLoadP(iRegPdst dst, memoryAlg4 mem, iRegPdst tmp, flagsRegCR0 cr0)
+%{
+ match(Set dst (LoadP mem));
+ effect(TEMP_DEF dst, TEMP tmp, KILL cr0);
+ ins_cost(MEMORY_REF_COST);
+
+ predicate((UseZGC && n->as_Load()->barrier_data() != 0)
+ && (n->as_Load()->is_unordered() || followed_by_acquire(n)));
+
+ format %{ "LD $dst, $mem" %}
+ ins_encode %{
+ assert($mem$$index == 0, "sanity");
+ __ ld($dst$$Register, $mem$$disp, $mem$$base$$Register);
+ z_load_barrier(_masm, this, Address($mem$$base$$Register, $mem$$disp), $dst$$Register, $tmp$$Register, barrier_data());
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+// Load Pointer Volatile
+instruct zLoadP_acq(iRegPdst dst, memoryAlg4 mem, iRegPdst tmp, flagsRegCR0 cr0)
+%{
+ match(Set dst (LoadP mem));
+ effect(TEMP_DEF dst, TEMP tmp, KILL cr0);
+ ins_cost(3 * MEMORY_REF_COST);
+
+ // Predicate on instruction order is implicitly present due to the predicate of the cheaper zLoadP operation
+ predicate(UseZGC && n->as_Load()->barrier_data() != 0);
+
+ format %{ "LD acq $dst, $mem" %}
+ ins_encode %{
+ __ ld($dst$$Register, $mem$$disp, $mem$$base$$Register);
+ z_load_barrier(_masm, this, Address($mem$$base$$Register, $mem$$disp), $dst$$Register, $tmp$$Register, barrier_data());
+
+ // Uses the isync instruction as an acquire barrier.
+ // This exploits the compare and the branch in the z load barrier (load, compare and branch, isync).
+ __ isync();
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndSwapP(iRegIdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp_xchg, iRegPdst tmp_mask, flagsRegCR0 cr0) %{
+ match(Set res (CompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp_xchg, TEMP tmp_mask, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && (((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*) n)->order() != MemNode::seqcst));
+
+ format %{ "CMPXCHG $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ z_compare_and_swap(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp_xchg$$Register, $tmp_mask$$Register,
+ false /* weak */, false /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndSwapP_acq(iRegIdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp_xchg, iRegPdst tmp_mask, flagsRegCR0 cr0) %{
+ match(Set res (CompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp_xchg, TEMP tmp_mask, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && (((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*) n)->order() == MemNode::seqcst));
+
+ format %{ "CMPXCHG acq $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ z_compare_and_swap(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp_xchg$$Register, $tmp_mask$$Register,
+ false /* weak */, true /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndSwapPWeak(iRegIdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp_xchg, iRegPdst tmp_mask, flagsRegCR0 cr0) %{
+ match(Set res (WeakCompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp_xchg, TEMP tmp_mask, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && ((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*) n)->order() != MemNode::seqcst);
+
+ format %{ "weak CMPXCHG $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ z_compare_and_swap(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp_xchg$$Register, $tmp_mask$$Register,
+ true /* weak */, false /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndSwapPWeak_acq(iRegIdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp_xchg, iRegPdst tmp_mask, flagsRegCR0 cr0) %{
+ match(Set res (WeakCompareAndSwapP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp_xchg, TEMP tmp_mask, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && (((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*) n)->order() == MemNode::seqcst));
+
+ format %{ "weak CMPXCHG acq $res, $mem, $oldval, $newval; as bool; ptr" %}
+ ins_encode %{
+ z_compare_and_swap(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register,
+ $tmp_xchg$$Register, $tmp_mask$$Register,
+ true /* weak */, true /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndExchangeP(iRegPdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp, flagsRegCR0 cr0) %{
+ match(Set res (CompareAndExchangeP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && (
+ ((CompareAndSwapNode*)n)->order() != MemNode::acquire
+ && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst
+ ));
+
+ format %{ "CMPXCHG $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ z_compare_and_exchange(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register, $tmp$$Register,
+ false /* weak */, false /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zCompareAndExchangeP_acq(iRegPdst res, iRegPdst mem, iRegPsrc oldval, iRegPsrc newval,
+ iRegPdst tmp, flagsRegCR0 cr0) %{
+ match(Set res (CompareAndExchangeP mem (Binary oldval newval)));
+ effect(TEMP_DEF res, TEMP tmp, KILL cr0);
+
+ predicate((UseZGC && n->as_LoadStore()->barrier_data() == ZLoadBarrierStrong)
+ && (
+ ((CompareAndSwapNode*)n)->order() == MemNode::acquire
+ || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst
+ ));
+
+ format %{ "CMPXCHG acq $res, $mem, $oldval, $newval; as ptr; ptr" %}
+ ins_encode %{
+ z_compare_and_exchange(_masm, this,
+ $res$$Register, $mem$$Register, $oldval$$Register, $newval$$Register, $tmp$$Register,
+ false /* weak */, true /* acquire */);
+ %}
+ ins_pipe(pipe_class_default);
+%}
+
+instruct zGetAndSetP(iRegPdst res, iRegPdst mem, iRegPsrc newval, iRegPdst tmp, flagsRegCR0 cr0) %{
+ match(Set res (GetAndSetP mem newval));
+ effect(TEMP_DEF res, TEMP tmp, KILL cr0);
+
+ predicate(UseZGC && n->as_LoadStore()->barrier_data() != 0);
+
+ format %{ "GetAndSetP $res, $mem, $newval" %}
+ ins_encode %{
+ __ getandsetd($res$$Register, $newval$$Register, $mem$$Register, MacroAssembler::cmpxchgx_hint_atomic_update());
+ z_load_barrier(_masm, this, Address(noreg, (intptr_t) 0), $res$$Register, $tmp$$Register, barrier_data());
+
+ if (support_IRIW_for_not_multiple_copy_atomic_cpu) {
+ __ isync();
+ } else {
+ __ sync();
+ }
+ %}
+ ins_pipe(pipe_class_default);
+%}
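Note that zGetAndSetP applies the barrier to the swapped-out value with Address(noreg, 0): after the exchange, the old value no longer has a memory location to heal in place, so the stub can only fix the register. Roughly (a sketch with assumed names):

#include <atomic>
#include <cstdint>

static void call_load_barrier_stub_no_addr(uintptr_t&) { /* heals the register only */ }

static uintptr_t get_and_set(std::atomic<uintptr_t>* field, uintptr_t newval, uintptr_t bad_mask) {
  uintptr_t old = field->exchange(newval); // getandsetd above
  if ((old & bad_mask) != 0) {
    call_load_barrier_stub_no_addr(old);   // Address(noreg, 0): nothing to store back
  }
  return old;                              // sync/isync then provides acquire semantics
}

int main() { std::atomic<uintptr_t> f{0}; return (int)get_and_set(&f, 1, 0); }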
diff --git a/src/hotspot/cpu/ppc/matcher_ppc.hpp b/src/hotspot/cpu/ppc/matcher_ppc.hpp
index cbcebc23ddc92..473f28f4b91c4 100644
--- a/src/hotspot/cpu/ppc/matcher_ppc.hpp
+++ b/src/hotspot/cpu/ppc/matcher_ppc.hpp
@@ -156,5 +156,7 @@
return VM_Version::has_fcfids();
}
+  // Implements a variant of EncodeISOArrayNode that encodes ASCII only
+ static const bool supports_encode_ascii_array = true;
#endif // CPU_PPC_MATCHER_PPC_HPP
diff --git a/src/hotspot/cpu/ppc/ppc.ad b/src/hotspot/cpu/ppc/ppc.ad
index 404e7646e1c1d..89565cdb87ca3 100644
--- a/src/hotspot/cpu/ppc/ppc.ad
+++ b/src/hotspot/cpu/ppc/ppc.ad
@@ -5525,7 +5525,7 @@ instruct loadN2P_klass_unscaled(iRegPdst dst, memory mem) %{
// Load Pointer
instruct loadP(iRegPdst dst, memoryAlg4 mem) %{
match(Set dst (LoadP mem));
- predicate(n->as_Load()->is_unordered() || followed_by_acquire(n));
+ predicate((n->as_Load()->is_unordered() || followed_by_acquire(n)) && n->as_Load()->barrier_data() == 0);
ins_cost(MEMORY_REF_COST);
format %{ "LD $dst, $mem \t// ptr" %}
@@ -5539,6 +5539,8 @@ instruct loadP_ac(iRegPdst dst, memoryAlg4 mem) %{
match(Set dst (LoadP mem));
ins_cost(3*MEMORY_REF_COST);
+ predicate(n->as_Load()->barrier_data() == 0);
+
format %{ "LD $dst, $mem \t// ptr acquire\n\t"
"TWI $dst\n\t"
"ISYNC" %}
@@ -5550,7 +5552,7 @@ instruct loadP_ac(iRegPdst dst, memoryAlg4 mem) %{
// LoadP + CastP2L
instruct loadP2X(iRegLdst dst, memoryAlg4 mem) %{
match(Set dst (CastP2X (LoadP mem)));
- predicate(_kids[0]->_leaf->as_Load()->is_unordered());
+ predicate(_kids[0]->_leaf->as_Load()->is_unordered() && _kids[0]->_leaf->as_Load()->barrier_data() == 0);
ins_cost(MEMORY_REF_COST);
format %{ "LD $dst, $mem \t// ptr + p2x" %}
@@ -7472,6 +7474,7 @@ instruct storeLConditional_regP_regL_regL(flagsReg crx, indirect mem_ptr, iRegLs
instruct storePConditional_regP_regP_regP(flagsRegCR0 cr0, indirect mem_ptr, iRegPsrc oldVal, iRegPsrc newVal) %{
match(Set cr0 (StorePConditional mem_ptr (Binary oldVal newVal)));
ins_cost(2*MEMORY_REF_COST);
+ predicate(n->as_LoadStore()->barrier_data() == 0);
format %{ "STDCX_ if ($cr0 = ($oldVal == *$mem_ptr)) *mem_ptr = $newVal; as bool" %}
ins_encode %{
@@ -7636,6 +7639,7 @@ instruct compareAndSwapL_regP_regL_regL(iRegIdst res, iRegPdst mem_ptr, iRegLsrc
instruct compareAndSwapP_regP_regP_regP(iRegIdst res, iRegPdst mem_ptr, iRegPsrc src1, iRegPsrc src2, flagsRegCR0 cr0) %{
match(Set res (CompareAndSwapP mem_ptr (Binary src1 src2)));
effect(TEMP_DEF res, TEMP cr0); // TEMP_DEF to avoid jump
+ predicate(n->as_LoadStore()->barrier_data() == 0);
format %{ "CMPXCHGD $res, $mem_ptr, $src1, $src2; as bool; ptr" %}
ins_encode %{
// CmpxchgX sets CCR0 to cmpX(src1, src2) and Rres to 'true'/'false'.
@@ -7858,7 +7862,7 @@ instruct weakCompareAndSwapL_acq_regP_regL_regL(iRegIdst res, iRegPdst mem_ptr,
instruct weakCompareAndSwapP_regP_regP_regP(iRegIdst res, iRegPdst mem_ptr, iRegPsrc src1, iRegPsrc src2, flagsRegCR0 cr0) %{
match(Set res (WeakCompareAndSwapP mem_ptr (Binary src1 src2)));
- predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+ predicate((((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst) && n->as_LoadStore()->barrier_data() == 0);
effect(TEMP_DEF res, TEMP cr0); // TEMP_DEF to avoid jump
format %{ "weak CMPXCHGD $res, $mem_ptr, $src1, $src2; as bool; ptr" %}
ins_encode %{
@@ -7872,7 +7876,7 @@ instruct weakCompareAndSwapP_regP_regP_regP(iRegIdst res, iRegPdst mem_ptr, iReg
instruct weakCompareAndSwapP_acq_regP_regP_regP(iRegIdst res, iRegPdst mem_ptr, iRegPsrc src1, iRegPsrc src2, flagsRegCR0 cr0) %{
match(Set res (WeakCompareAndSwapP mem_ptr (Binary src1 src2)));
- predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+ predicate((((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst) && n->as_LoadStore()->barrier_data() == 0);
effect(TEMP_DEF res, TEMP cr0); // TEMP_DEF to avoid jump
format %{ "weak CMPXCHGD acq $res, $mem_ptr, $src1, $src2; as bool; ptr" %}
ins_encode %{
@@ -8128,7 +8132,8 @@ instruct compareAndExchangeL_acq_regP_regL_regL(iRegLdst res, iRegPdst mem_ptr,
instruct compareAndExchangeP_regP_regP_regP(iRegPdst res, iRegPdst mem_ptr, iRegPsrc src1, iRegPsrc src2, flagsRegCR0 cr0) %{
match(Set res (CompareAndExchangeP mem_ptr (Binary src1 src2)));
- predicate(((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst);
+ predicate((((CompareAndSwapNode*)n)->order() != MemNode::acquire && ((CompareAndSwapNode*)n)->order() != MemNode::seqcst)
+ && n->as_LoadStore()->barrier_data() == 0);
effect(TEMP_DEF res, TEMP cr0);
format %{ "CMPXCHGD $res, $mem_ptr, $src1, $src2; as ptr; ptr" %}
ins_encode %{
@@ -8142,7 +8147,8 @@ instruct compareAndExchangeP_regP_regP_regP(iRegPdst res, iRegPdst mem_ptr, iReg
instruct compareAndExchangeP_acq_regP_regP_regP(iRegPdst res, iRegPdst mem_ptr, iRegPsrc src1, iRegPsrc src2, flagsRegCR0 cr0) %{
match(Set res (CompareAndExchangeP mem_ptr (Binary src1 src2)));
- predicate(((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst);
+ predicate((((CompareAndSwapNode*)n)->order() == MemNode::acquire || ((CompareAndSwapNode*)n)->order() == MemNode::seqcst)
+ && n->as_LoadStore()->barrier_data() == 0);
effect(TEMP_DEF res, TEMP cr0);
format %{ "CMPXCHGD acq $res, $mem_ptr, $src1, $src2; as ptr; ptr" %}
ins_encode %{
@@ -8364,6 +8370,7 @@ instruct getAndSetL(iRegLdst res, iRegPdst mem_ptr, iRegLsrc src, flagsRegCR0 cr
instruct getAndSetP(iRegPdst res, iRegPdst mem_ptr, iRegPsrc src, flagsRegCR0 cr0) %{
match(Set res (GetAndSetP mem_ptr src));
+ predicate(n->as_LoadStore()->barrier_data() == 0);
effect(TEMP_DEF res, TEMP cr0);
format %{ "GetAndSetP $res, $mem_ptr, $src" %}
ins_encode %{
@@ -12786,34 +12793,31 @@ instruct has_negatives(rarg1RegP ary1, iRegIsrc len, iRegIdst result, iRegLdst t
// encode char[] to byte[] in ISO_8859_1
instruct encode_iso_array(rarg1RegP src, rarg2RegP dst, iRegIsrc len, iRegIdst result, iRegLdst tmp1,
iRegLdst tmp2, iRegLdst tmp3, iRegLdst tmp4, iRegLdst tmp5, regCTR ctr, flagsRegCR0 cr0) %{
+ predicate(!((EncodeISOArrayNode*)n)->is_ascii());
match(Set result (EncodeISOArray src (Binary dst len)));
effect(TEMP_DEF result, TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, TEMP tmp5,
USE_KILL src, USE_KILL dst, KILL ctr, KILL cr0);
ins_cost(300);
- format %{ "Encode array $src,$dst,$len -> $result \t// KILL $tmp1, $tmp2, $tmp3, $tmp4, $tmp5" %}
+ format %{ "Encode iso array $src,$dst,$len -> $result \t// KILL $tmp1, $tmp2, $tmp3, $tmp4, $tmp5" %}
ins_encode %{
- Label Lslow, Lfailure1, Lfailure2, Ldone;
- __ string_compress_16($src$$Register, $dst$$Register, $len$$Register, $tmp1$$Register,
- $tmp2$$Register, $tmp3$$Register, $tmp4$$Register, $tmp5$$Register, Lfailure1);
- __ rldicl_($result$$Register, $len$$Register, 0, 64-3); // Remaining characters.
- __ beq(CCR0, Ldone);
- __ bind(Lslow);
- __ string_compress($src$$Register, $dst$$Register, $result$$Register, $tmp2$$Register, Lfailure2);
- __ li($result$$Register, 0);
- __ b(Ldone);
-
- __ bind(Lfailure1);
- __ mr($result$$Register, $len$$Register);
- __ mfctr($tmp1$$Register);
- __ rldimi_($result$$Register, $tmp1$$Register, 3, 0); // Remaining characters.
- __ beq(CCR0, Ldone);
- __ b(Lslow);
-
- __ bind(Lfailure2);
- __ mfctr($result$$Register); // Remaining characters.
+ __ encode_iso_array($src$$Register, $dst$$Register, $len$$Register, $tmp1$$Register, $tmp2$$Register,
+ $tmp3$$Register, $tmp4$$Register, $tmp5$$Register, $result$$Register, false);
+ %}
+ ins_pipe(pipe_class_default);
+%}
- __ bind(Ldone);
- __ subf($result$$Register, $result$$Register, $len$$Register);
+// encode char[] to byte[] in ASCII
+instruct encode_ascii_array(rarg1RegP src, rarg2RegP dst, iRegIsrc len, iRegIdst result, iRegLdst tmp1,
+ iRegLdst tmp2, iRegLdst tmp3, iRegLdst tmp4, iRegLdst tmp5, regCTR ctr, flagsRegCR0 cr0) %{
+ predicate(((EncodeISOArrayNode*)n)->is_ascii());
+ match(Set result (EncodeISOArray src (Binary dst len)));
+ effect(TEMP_DEF result, TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, TEMP tmp5,
+ USE_KILL src, USE_KILL dst, KILL ctr, KILL cr0);
+ ins_cost(300);
+ format %{ "Encode ascii array $src,$dst,$len -> $result \t// KILL $tmp1, $tmp2, $tmp3, $tmp4, $tmp5" %}
+ ins_encode %{
+ __ encode_iso_array($src$$Register, $dst$$Register, $len$$Register, $tmp1$$Register, $tmp2$$Register,
+ $tmp3$$Register, $tmp4$$Register, $tmp5$$Register, $result$$Register, true);
%}
ins_pipe(pipe_class_default);
%}
diff --git a/src/hotspot/cpu/ppc/vmreg_ppc.hpp b/src/hotspot/cpu/ppc/vmreg_ppc.hpp
index 090fe1d72a2a0..16f6799d04643 100644
--- a/src/hotspot/cpu/ppc/vmreg_ppc.hpp
+++ b/src/hotspot/cpu/ppc/vmreg_ppc.hpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 2001, 2019, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012, 2013 SAP SE. All rights reserved.
+ * Copyright (c) 2001, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2021 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -35,6 +35,21 @@ inline bool is_FloatRegister() {
value() < ConcreteRegisterImpl::max_fpr;
}
+inline bool is_VectorRegister() {
+ return value() >= ConcreteRegisterImpl::max_fpr &&
+ value() < ConcreteRegisterImpl::max_vsr;
+}
+
+inline bool is_ConditionRegister() {
+ return value() >= ConcreteRegisterImpl::max_vsr &&
+ value() < ConcreteRegisterImpl::max_cnd;
+}
+
+inline bool is_SpecialRegister() {
+ return value() >= ConcreteRegisterImpl::max_cnd &&
+ value() < ConcreteRegisterImpl::max_spr;
+}
+
inline Register as_Register() {
assert(is_Register() && is_even(value()), "even-aligned GPR name");
return ::as_Register(value()>>1);
diff --git a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
index 89cea9598a4fd..c6faec867ecbe 100644
--- a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
@@ -74,8 +74,8 @@ void C1_MacroAssembler::build_frame(int frame_size_in_bytes, int bang_size_in_by
push_frame(frame_size_in_bytes);
}
-void C1_MacroAssembler::verified_entry() {
- if (C1Breakpoint) z_illtrap(0xC1);
+void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
+ if (breakAtEntry) z_illtrap(0xC1);
}
void C1_MacroAssembler::lock_object(Register hdr, Register obj, Register disp_hdr, Label& slow_case) {
diff --git a/src/hotspot/cpu/s390/frame_s390.cpp b/src/hotspot/cpu/s390/frame_s390.cpp
index 2486c6c636083..4eff78cdbfcdf 100644
--- a/src/hotspot/cpu/s390/frame_s390.cpp
+++ b/src/hotspot/cpu/s390/frame_s390.cpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016, 2019 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2022 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -55,7 +55,6 @@ void RegisterMap::check_location_valid() {
// Profiling/safepoint support
bool frame::safe_for_sender(JavaThread *thread) {
- bool safe = false;
address sp = (address)_sp;
address fp = (address)_fp;
address unextended_sp = (address)_unextended_sp;
@@ -73,28 +72,23 @@ bool frame::safe_for_sender(JavaThread *thread) {
// An fp must be within the stack and above (but not equal) sp.
bool fp_safe = thread->is_in_stack_range_excl(fp, sp);
- // An interpreter fp must be within the stack and above (but not equal) sp.
- // Moreover, it must be at least the size of the z_ijava_state structure.
+ // An interpreter fp must be fp_safe.
+  // Moreover, it must be at a distance of at least the size of the z_ijava_state structure from sp.
bool fp_interp_safe = fp_safe && ((fp - sp) >= z_ijava_state_size);
// We know sp/unextended_sp are safe, only fp is questionable here
// If the current frame is known to the code cache then we can attempt to
- // to construct the sender and do some validation of it. This goes a long way
+ // construct the sender and do some validation of it. This goes a long way
// toward eliminating issues when we get in frame construction code
if (_cb != NULL ) {
- // Entry frame checks
- if (is_entry_frame()) {
- // An entry frame must have a valid fp.
- return fp_safe && is_entry_frame_valid(thread);
- }
- // Now check if the frame is complete and the test is
- // reliable. Unfortunately we can only check frame completeness for
- // runtime stubs. Other generic buffer blobs are more
- // problematic so we just assume they are OK. Adapter blobs never have a
- // complete frame and are never OK. nmethods should be OK on s390.
+ // First check if the frame is complete and the test is reliable.
+ // Unfortunately we can only check frame completeness for runtime stubs.
+ // Other generic buffer blobs are more problematic so we just assume they are OK.
+ // Adapter blobs never have a complete frame and are never OK.
+ // nmethods should be OK on s390.
if (!_cb->is_frame_complete_at(_pc)) {
if (_cb->is_adapter_blob() || _cb->is_runtime_stub()) {
return false;
@@ -106,13 +100,26 @@ bool frame::safe_for_sender(JavaThread *thread) {
return false;
}
+ // Entry frame checks
+ if (is_entry_frame()) {
+ // An entry frame must have a valid fp.
+ return fp_safe && is_entry_frame_valid(thread);
+ }
+
if (is_interpreted_frame() && !fp_interp_safe) {
return false;
}
- z_abi_160* sender_abi = (z_abi_160*) fp;
- intptr_t* sender_sp = (intptr_t*) sender_abi->callers_sp;
- address sender_pc = (address) sender_abi->return_pc;
+ // At this point, there still is a chance that fp_safe is false.
+ // In particular, (fp == NULL) might be true. So let's check and
+ // bail out before we actually dereference from fp.
+ if (!fp_safe) {
+ return false;
+ }
+
+ z_abi_16* sender_abi = (z_abi_16*)fp;
+ intptr_t* sender_sp = (intptr_t*) fp;
+ address sender_pc = (address) sender_abi->return_pc;
// We must always be able to find a recognizable pc.
CodeBlob* sender_blob = CodeCache::find_blob_unsafe(sender_pc);
@@ -135,7 +142,7 @@ bool frame::safe_for_sender(JavaThread *thread) {
// sender_fp must be within the stack and above (but not
// equal) current frame's fp.
if (!thread->is_in_stack_range_excl(sender_fp, fp)) {
- return false;
+ return false;
}
// If the potential sender is the interpreter then we can do some more checking.
@@ -291,9 +298,55 @@ void frame::patch_pc(Thread* thread, address pc) {
}
bool frame::is_interpreted_frame_valid(JavaThread* thread) const {
- // Is there anything to do?
assert(is_interpreted_frame(), "Not an interpreted frame");
- return true;
+ // These are reasonable sanity checks
+ if (fp() == 0 || (intptr_t(fp()) & (wordSize-1)) != 0) {
+ return false;
+ }
+ if (sp() == 0 || (intptr_t(sp()) & (wordSize-1)) != 0) {
+ return false;
+ }
+ int min_frame_slots = (z_abi_16_size + z_ijava_state_size) / sizeof(intptr_t);
+ if (fp() - min_frame_slots < sp()) {
+ return false;
+ }
+ // These are hacks to keep us out of trouble.
+  // The problem with these is that they mask other problems.
+ if (fp() <= sp()) { // this attempts to deal with unsigned comparison above
+ return false;
+ }
+
+ // do some validation of frame elements
+
+ // first the method
+ // Need to use "unchecked" versions to avoid "z_istate_magic_number" assertion.
+ Method* m = (Method*)(ijava_state_unchecked()->method);
+
+ // validate the method we'd find in this potential sender
+ if (!Method::is_valid_method(m)) return false;
+
+ // stack frames shouldn't be much larger than max_stack elements
+ // this test requires the use of unextended_sp which is the sp as seen by
+  // the current frame, and not sp which is the "raw" sp, which could point
+ // further because of local variables of the callee method inserted after
+ // method arguments
+ if (fp() - unextended_sp() > 1024 + m->max_stack()*Interpreter::stackElementSize) {
+ return false;
+ }
+
+ // validate bci/bcx
+ address bcp = (address)(ijava_state_unchecked()->bcp);
+ if (m->validate_bci_from_bcp(bcp) < 0) {
+ return false;
+ }
+
+ // validate constantPoolCache*
+ ConstantPoolCache* cp = (ConstantPoolCache*)(ijava_state_unchecked()->cpoolCache);
+ if (MetaspaceObj::is_valid(cp) == false) return false;
+
+ // validate locals
+ address locals = (address)(ijava_state_unchecked()->locals);
+ return thread->is_in_stack_range_incl(locals, (address)fp());
}
BasicType frame::interpreter_frame_result(oop* oop_result, jvalue* value_result) {
diff --git a/src/hotspot/cpu/s390/matcher_s390.hpp b/src/hotspot/cpu/s390/matcher_s390.hpp
index 2906f584a3176..e9f43f0465a7a 100644
--- a/src/hotspot/cpu/s390/matcher_s390.hpp
+++ b/src/hotspot/cpu/s390/matcher_s390.hpp
@@ -144,4 +144,7 @@
return true;
}
+  // Implements a variant of EncodeISOArrayNode that encodes ASCII only
+ static const bool supports_encode_ascii_array = false;
+
#endif // CPU_S390_MATCHER_S390_HPP
diff --git a/src/hotspot/cpu/s390/s390.ad b/src/hotspot/cpu/s390/s390.ad
index acd601b4929ae..6610e394e9552 100644
--- a/src/hotspot/cpu/s390/s390.ad
+++ b/src/hotspot/cpu/s390/s390.ad
@@ -10277,6 +10277,7 @@ instruct has_negatives(rarg5RegP ary1, iRegI len, iRegI result, roddRegI oddReg,
// encode char[] to byte[] in ISO_8859_1
instruct encode_iso_array(iRegP src, iRegP dst, iRegI result, iRegI len, iRegI tmp, flagsReg cr) %{
+ predicate(!((EncodeISOArrayNode*)n)->is_ascii());
match(Set result (EncodeISOArray src (Binary dst len)));
effect(TEMP_DEF result, TEMP tmp, KILL cr); // R0, R1 are killed, too.
ins_cost(300);
diff --git a/src/hotspot/cpu/x86/assembler_x86.cpp b/src/hotspot/cpu/x86/assembler_x86.cpp
index 7c6bbc37eec11..70b4cd84672e6 100644
--- a/src/hotspot/cpu/x86/assembler_x86.cpp
+++ b/src/hotspot/cpu/x86/assembler_x86.cpp
@@ -187,7 +187,7 @@ Address::Address(address loc, RelocationHolder spec) {
// Address. An index of 4 (rsp) corresponds to having no index, so convert
// that to noreg for the Address constructor.
Address Address::make_raw(int base, int index, int scale, int disp, relocInfo::relocType disp_reloc) {
- RelocationHolder rspec;
+ RelocationHolder rspec = RelocationHolder::none;
if (disp_reloc != relocInfo::none) {
rspec = Relocation::spec_simple(disp_reloc);
}
diff --git a/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp b/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
index ba18ce30cfa91..b99f16fea057a 100644
--- a/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
+++ b/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
@@ -192,8 +192,32 @@ LIR_Address* LIRGenerator::emit_array_address(LIR_Opr array_opr, LIR_Opr index_o
LIR_Address* addr;
if (index_opr->is_constant()) {
int elem_size = type2aelembytes(type);
- addr = new LIR_Address(array_opr,
- offset_in_bytes + (intx)(index_opr->as_jint()) * elem_size, type);
+#ifdef _LP64
+ jint index = index_opr->as_jint();
+ jlong disp = offset_in_bytes + (jlong)(index) * elem_size;
+ if (disp > max_jint) {
+ // Displacement overflow. Cannot directly use instruction with 32-bit displacement for 64-bit addresses.
+ // Convert array index to long to do array offset computation with 64-bit values.
+ index_opr = new_register(T_LONG);
+ __ move(LIR_OprFact::longConst(index), index_opr);
+ addr = new LIR_Address(array_opr, index_opr, LIR_Address::scale(type), offset_in_bytes, type);
+ } else {
+ addr = new LIR_Address(array_opr, (intx)disp, type);
+ }
+#else
+    // A displacement overflow can also occur on 32-bit x86, but it is not a problem due to the 32-bit address range:
+ // Let's assume an array 'a' and an access with displacement 'disp'. When disp overflows, then "a + disp" will
+ // always be negative (i.e. underflows the 32-bit address range):
+ // Let N = 2^32: a + signed_overflow(disp) = a + disp - N.
+ // "a + disp" is always smaller than N. If an index was chosen which would point to an address beyond N, then
+ // range checks would catch that and throw an exception. Thus, a + disp < 0 holds which means that it always
+ // underflows the 32-bit address range:
+ // unsigned_underflow(a + signed_overflow(disp)) = unsigned_underflow(a + disp - N)
+ // = (a + disp - N) + N = a + disp
+ // This shows that we still end up at the correct address with a displacement overflow due to the 32-bit address
+    // range limitation. This overflow only needs to be handled if addresses can be larger, as they can be on 64-bit platforms.
+ addr = new LIR_Address(array_opr, offset_in_bytes + (intx)(index_opr->as_jint()) * elem_size, type);
+#endif // _LP64
} else {
#ifdef _LP64
if (index_opr->type() == T_INT) {
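A concrete instance of the wrap-around argument in the comment above (values illustrative):

#include <cstdint>
#include <cstdio>

int main() {
  const uint32_t a    = 0x10000000u;          // array base, well below 2^32
  const int32_t  disp = (int32_t)0x90000000u; // signed-overflowed displacement, i.e. disp - 2^32
  const uint32_t addr = a + (uint32_t)disp;   // unsigned wrap adds the 2^32 back
  printf("0x%08x\n", addr);                   // 0xa0000000 == a + 0x90000000, the intended address
  return 0;
}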
diff --git a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
index 9301f5d604a45..b022f11990ca7 100644
--- a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
@@ -354,18 +354,18 @@ void C1_MacroAssembler::remove_frame(int frame_size_in_bytes) {
}
-void C1_MacroAssembler::verified_entry() {
- if (C1Breakpoint || VerifyFPU) {
+void C1_MacroAssembler::verified_entry(bool breakAtEntry) {
+ if (breakAtEntry || VerifyFPU) {
// Verified Entry first instruction should be 5 bytes long for correct
// patching by patch_verified_entry().
//
- // C1Breakpoint and VerifyFPU have one byte first instruction.
+    // Breakpoint and VerifyFPU have a one-byte first instruction.
// Also first instruction will be one byte "push(rbp)" if stack banging
// code is not generated (see build_frame() above).
// For all these cases generate long instruction first.
fat_nop();
}
- if (C1Breakpoint)int3();
+ if (breakAtEntry) int3();
// build frame
IA32_ONLY( verify_FPU(0, "method_entry"); )
}
diff --git a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
index e951e808af0ed..19ba67f851935 100644
--- a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
@@ -602,8 +602,13 @@ void C2_MacroAssembler::fast_lock(Register objReg, Register boxReg, Register tmp
// Unconditionally set box->_displaced_header = markWord::unused_mark().
// Without cast to int32_t this style of movptr will destroy r10 which is typically obj.
movptr(Address(boxReg, 0), (int32_t)intptr_t(markWord::unused_mark().value()));
- // Intentional fall-through into DONE_LABEL ...
// Propagate ICC.ZF from CAS above into DONE_LABEL.
+ jcc(Assembler::equal, DONE_LABEL); // CAS above succeeded; propagate ZF = 1 (success)
+
+ cmpptr(r15_thread, rax); // Check if we are already the owner (recursive lock)
+ jcc(Assembler::notEqual, DONE_LABEL); // If not recursive, ZF = 0 at this point (fail)
+ incq(Address(scrReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
+ xorq(rax, rax); // Set ZF = 1 (success) for recursive lock, denoting locking success
#endif // _LP64
#if INCLUDE_RTM_OPT
} // use_rtm()
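The recursive-enter fast path added above behaves roughly like this simplified model (a sketch; Monitor is a stand-in for the real ObjectMonitor):

#include <atomic>

struct Monitor {
  std::atomic<void*> owner{nullptr};
  long recursions{0};
};

static bool fast_lock(Monitor* m, void* self) {
  void* expected = nullptr;
  if (m->owner.compare_exchange_strong(expected, self)) {
    return true;                    // CAS succeeded (ZF = 1 in the assembly)
  }
  if (m->owner.load() != self) {
    return false;                   // contended: fall through to the slow path
  }
  m->recursions++;                  // incq(... recursions) above
  return true;                      // xorq(rax, rax) signals success
}

int main() {
  Monitor m;
  int self;
  return fast_lock(&m, &self) && fast_lock(&m, &self) ? 0 : 1; // second call takes the recursive path
}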
@@ -705,10 +710,6 @@ void C2_MacroAssembler::fast_unlock(Register objReg, Register boxReg, Register t
// Refer to the comments in synchronizer.cpp for how we might encode extra
// state in _succ so we can avoid fetching EntryList|cxq.
//
- // I'd like to add more cases in fast_lock() and fast_unlock() --
- // such as recursive enter and exit -- but we have to be wary of
- // I$ bloat, T$ effects and BP$ effects.
- //
// If there's no contention try a 1-0 exit. That is, exit without
// a costly MEMBAR or CAS. See synchronizer.cpp for details on how
// we detect and recover from the race that the 1-0 exit admits.
@@ -756,9 +757,16 @@ void C2_MacroAssembler::fast_unlock(Register objReg, Register boxReg, Register t
bind (CheckSucc);
#else // _LP64
// It's inflated
- xorptr(boxReg, boxReg);
- orptr(boxReg, Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
- jccb (Assembler::notZero, DONE_LABEL);
+ Label LNotRecursive, LSuccess, LGoSlowPath;
+
+ cmpptr(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)), 0);
+ jccb(Assembler::equal, LNotRecursive);
+
+ // Recursive inflated unlock
+ decq(Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(recursions)));
+ jmpb(LSuccess);
+
+ bind(LNotRecursive);
movptr(boxReg, Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(cxq)));
orptr(boxReg, Address(tmpReg, OM_OFFSET_NO_MONITOR_VALUE_TAG(EntryList)));
jccb (Assembler::notZero, CheckSucc);
@@ -767,7 +775,6 @@ void C2_MacroAssembler::fast_unlock(Register objReg, Register boxReg, Register t
jmpb (DONE_LABEL);
// Try to avoid passing control into the slow_path ...
- Label LSuccess, LGoSlowPath ;
bind (CheckSucc);
// The following optional optimization can be elided if necessary
diff --git a/src/hotspot/cpu/x86/c2_globals_x86.hpp b/src/hotspot/cpu/x86/c2_globals_x86.hpp
index 776caa30cf9a5..e0598408899d1 100644
--- a/src/hotspot/cpu/x86/c2_globals_x86.hpp
+++ b/src/hotspot/cpu/x86/c2_globals_x86.hpp
@@ -44,7 +44,7 @@ define_pd_global(intx, OnStackReplacePercentage, 140);
define_pd_global(intx, ConditionalMoveLimit, 3);
define_pd_global(intx, FreqInlineSize, 325);
define_pd_global(intx, MinJumpTableSize, 10);
-define_pd_global(intx, LoopPercentProfileLimit, 30);
+define_pd_global(intx, LoopPercentProfileLimit, 10);
#ifdef AMD64
define_pd_global(intx, INTPRESSURE, 13);
define_pd_global(intx, FLOATPRESSURE, 14);
diff --git a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
index 32d9050fcc87b..2fa9dbe4ef786 100644
--- a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
@@ -682,13 +682,14 @@ void ShenandoahBarrierSetAssembler::cmpxchg_oop(MacroAssembler* masm,
//
// Try to CAS with given arguments. If successful, then we are done.
- if (os::is_MP()) __ lock();
#ifdef _LP64
if (UseCompressedOops) {
+ __ lock();
__ cmpxchgl(newval, addr);
} else
#endif
{
+ __ lock();
__ cmpxchgptr(newval, addr);
}
__ jcc(Assembler::equal, L_success);
@@ -765,13 +766,14 @@ void ShenandoahBarrierSetAssembler::cmpxchg_oop(MacroAssembler* masm,
}
#endif
- if (os::is_MP()) __ lock();
#ifdef _LP64
if (UseCompressedOops) {
+ __ lock();
__ cmpxchgl(tmp2, addr);
} else
#endif
{
+ __ lock();
__ cmpxchgptr(tmp2, addr);
}
@@ -791,13 +793,14 @@ void ShenandoahBarrierSetAssembler::cmpxchg_oop(MacroAssembler* masm,
__ movptr(oldval, tmp2);
}
- if (os::is_MP()) __ lock();
#ifdef _LP64
if (UseCompressedOops) {
+ __ lock();
__ cmpxchgl(newval, addr);
} else
#endif
{
+ __ lock();
__ cmpxchgptr(newval, addr);
}
if (!exchange) {
diff --git a/src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp b/src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp
index 38f696b50c2f8..7d93ed522ba01 100644
--- a/src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp
+++ b/src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp
@@ -155,14 +155,15 @@ void CodeInstaller::pd_relocate_JavaMethod(CodeBuffer &, JVMCIObject hotspot_met
method = JVMCIENV->asMethod(hotspot_method);
}
#endif
+ NativeCall* call = NULL;
switch (_next_call_type) {
case INLINE_INVOKE:
- break;
+ return;
case INVOKEVIRTUAL:
case INVOKEINTERFACE: {
assert(method == NULL || !method->is_static(), "cannot call static method with invokeinterface");
- NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+ call = nativeCall_at(_instructions->start() + pc_offset);
call->set_destination(SharedRuntime::get_resolve_virtual_call_stub());
_instructions->relocate(call->instruction_address(),
virtual_call_Relocation::spec(_invoke_mark_pc),
@@ -172,7 +173,7 @@ void CodeInstaller::pd_relocate_JavaMethod(CodeBuffer &, JVMCIObject hotspot_met
case INVOKESTATIC: {
assert(method == NULL || method->is_static(), "cannot call non-static method with invokestatic");
- NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+ call = nativeCall_at(_instructions->start() + pc_offset);
call->set_destination(SharedRuntime::get_resolve_static_call_stub());
_instructions->relocate(call->instruction_address(),
relocInfo::static_call_type, Assembler::call32_operand);
@@ -180,15 +181,18 @@ void CodeInstaller::pd_relocate_JavaMethod(CodeBuffer &, JVMCIObject hotspot_met
}
case INVOKESPECIAL: {
assert(method == NULL || !method->is_static(), "cannot call static method with invokespecial");
- NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+ call = nativeCall_at(_instructions->start() + pc_offset);
call->set_destination(SharedRuntime::get_resolve_opt_virtual_call_stub());
_instructions->relocate(call->instruction_address(),
relocInfo::opt_virtual_call_type, Assembler::call32_operand);
break;
}
default:
- JVMCI_ERROR("invalid _next_call_type value");
- break;
+ JVMCI_ERROR("invalid _next_call_type value: %d", _next_call_type);
+ return;
+ }
+ if (!call->is_displacement_aligned()) {
+ JVMCI_ERROR("unaligned displacement for call at offset %d", pc_offset);
}
}
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.cpp b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
index e94a71c44b42d..74178d4152504 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
@@ -5614,7 +5614,7 @@ void MacroAssembler::generate_fill(BasicType t, bool aligned,
BIND(L_exit);
}
-// encode char[] to byte[] in ISO_8859_1
+// encode char[] to byte[] in ISO_8859_1 or ASCII
//@IntrinsicCandidate
//private static int implEncodeISOArray(byte[] sa, int sp,
//byte[] da, int dp, int len) {
@@ -5627,10 +5627,23 @@ void MacroAssembler::generate_fill(BasicType t, bool aligned,
// }
// return i;
//}
+ //
+ //@IntrinsicCandidate
+ //private static int implEncodeAsciiArray(char[] sa, int sp,
+ // byte[] da, int dp, int len) {
+ // int i = 0;
+ // for (; i < len; i++) {
+ // char c = sa[sp++];
+ // if (c >= '\u0080')
+ // break;
+ // da[dp++] = (byte)c;
+ // }
+ // return i;
+ //}
void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
XMMRegister tmp1Reg, XMMRegister tmp2Reg,
XMMRegister tmp3Reg, XMMRegister tmp4Reg,
- Register tmp5, Register result) {
+ Register tmp5, Register result, bool ascii) {
// rsi: src
// rdi: dst
@@ -5641,6 +5654,9 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
assert_different_registers(src, dst, len, tmp5, result);
Label L_done, L_copy_1_char, L_copy_1_char_exit;
+ int mask = ascii ? 0xff80ff80 : 0xff00ff00;
+ int short_mask = ascii ? 0xff80 : 0xff00;
+
// set result
xorl(result, result);
// check for zero length
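The two masks differ only in bit 7 of each char: 0xff00 rejects anything outside Latin-1, while 0xff80 additionally rejects chars at or above 0x80. For example, 'é' (U+00E9) is valid Latin-1 but not ASCII:

#include <cstdio>

int main() {
  const unsigned c = 0x00E9; // 'é'
  printf("ISO check:   %s\n", (c & 0xff00u) ? "reject" : "accept"); // accept
  printf("ASCII check: %s\n", (c & 0xff80u) ? "reject" : "accept"); // reject (bit 7 set)
  return 0;
}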
@@ -5660,7 +5676,7 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
if (UseAVX >= 2) {
Label L_chars_32_check, L_copy_32_chars, L_copy_32_chars_exit;
- movl(tmp5, 0xff00ff00); // create mask to test for Unicode chars in vector
+ movl(tmp5, mask); // create mask to test for Unicode or non-ASCII chars in vector
movdl(tmp1Reg, tmp5);
vpbroadcastd(tmp1Reg, tmp1Reg, Assembler::AVX_256bit);
jmp(L_chars_32_check);
@@ -5669,7 +5685,7 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
vmovdqu(tmp3Reg, Address(src, len, Address::times_2, -64));
vmovdqu(tmp4Reg, Address(src, len, Address::times_2, -32));
vpor(tmp2Reg, tmp3Reg, tmp4Reg, /* vector_len */ 1);
- vptest(tmp2Reg, tmp1Reg); // check for Unicode chars in vector
+ vptest(tmp2Reg, tmp1Reg); // check for Unicode or non-ASCII chars in vector
jccb(Assembler::notZero, L_copy_32_chars_exit);
vpackuswb(tmp3Reg, tmp3Reg, tmp4Reg, /* vector_len */ 1);
vpermq(tmp4Reg, tmp3Reg, 0xD8, /* vector_len */ 1);
@@ -5684,7 +5700,7 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
jccb(Assembler::greater, L_copy_16_chars_exit);
} else if (UseSSE42Intrinsics) {
- movl(tmp5, 0xff00ff00); // create mask to test for Unicode chars in vector
+ movl(tmp5, mask); // create mask to test for Unicode or non-ASCII chars in vector
movdl(tmp1Reg, tmp5);
pshufd(tmp1Reg, tmp1Reg, 0);
jmpb(L_chars_16_check);
@@ -5708,7 +5724,7 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
movdqu(tmp4Reg, Address(src, len, Address::times_2, -16));
por(tmp2Reg, tmp4Reg);
}
- ptest(tmp2Reg, tmp1Reg); // check for Unicode chars in vector
+ ptest(tmp2Reg, tmp1Reg); // check for Unicode or non-ASCII chars in vector
jccb(Assembler::notZero, L_copy_16_chars_exit);
packuswb(tmp3Reg, tmp4Reg);
}
@@ -5746,7 +5762,7 @@ void MacroAssembler::encode_iso_array(Register src, Register dst, Register len,
bind(L_copy_1_char);
load_unsigned_short(tmp5, Address(src, len, Address::times_2, 0));
- testl(tmp5, 0xff00); // check if Unicode char
+ testl(tmp5, short_mask); // check if Unicode or non-ASCII char
jccb(Assembler::notZero, L_copy_1_char_exit);
movb(Address(dst, len, Address::times_1, 0), tmp5);
addptr(len, 1);
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.hpp b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
index 55ed32c969cb7..b7bbe49f75462 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
@@ -1734,7 +1734,7 @@ class MacroAssembler: public Assembler {
void encode_iso_array(Register src, Register dst, Register len,
XMMRegister tmp1, XMMRegister tmp2, XMMRegister tmp3,
- XMMRegister tmp4, Register tmp5, Register result);
+ XMMRegister tmp4, Register tmp5, Register result, bool ascii);
#ifdef _LP64
void add2_with_carry(Register dest_hi, Register dest_lo, Register src1, Register src2);
diff --git a/src/hotspot/cpu/x86/matcher_x86.hpp b/src/hotspot/cpu/x86/matcher_x86.hpp
index f0c7aff73f96a..4a1bfb5a56fc7 100644
--- a/src/hotspot/cpu/x86/matcher_x86.hpp
+++ b/src/hotspot/cpu/x86/matcher_x86.hpp
@@ -191,4 +191,7 @@
return true;
}
+  // Implements a variant of EncodeISOArrayNode that encodes ASCII only
+ static const bool supports_encode_ascii_array = true;
+
#endif // CPU_X86_MATCHER_X86_HPP
diff --git a/src/hotspot/cpu/x86/nativeInst_x86.cpp b/src/hotspot/cpu/x86/nativeInst_x86.cpp
index fb00defc99e61..0374a9cadeaaa 100644
--- a/src/hotspot/cpu/x86/nativeInst_x86.cpp
+++ b/src/hotspot/cpu/x86/nativeInst_x86.cpp
@@ -260,6 +260,9 @@ void NativeCall::replace_mt_safe(address instr_addr, address code_buffer) {
}
+bool NativeCall::is_displacement_aligned() {
+ return (uintptr_t) displacement_address() % 4 == 0;
+}
// Similar to replace_mt_safe, but just changes the destination. The
// important thing is that free-running threads are able to execute this
@@ -282,8 +285,7 @@ void NativeCall::set_destination_mt_safe(address dest) {
CompiledICLocker::is_safe(instruction_address()), "concurrent code patching");
// Both C1 and C2 should now be generating code which aligns the patched address
// to be within a single cache line.
- bool is_aligned = ((uintptr_t)displacement_address() + 0) / cache_line_size ==
- ((uintptr_t)displacement_address() + 3) / cache_line_size;
+ bool is_aligned = is_displacement_aligned();
guarantee(is_aligned, "destination must be aligned");
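The simplification is sound because a 4-byte-aligned 4-byte operand can never straddle a cache-line boundary, given that cache-line sizes are multiples of 4. A small illustrative check (assumed line sizes):

#include <cassert>
#include <cstdint>

static bool straddles(uintptr_t addr, uintptr_t line) {
  return (addr / line) != ((addr + 3) / line); // the old cache_line_size test
}

int main() {
  for (uintptr_t line : {8u, 64u, 128u}) {
    for (uintptr_t addr = 0; addr < 256; addr += 4) { // every 4-byte-aligned address
      assert(!straddles(addr, line));
    }
  }
  return 0;
}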
diff --git a/src/hotspot/cpu/x86/nativeInst_x86.hpp b/src/hotspot/cpu/x86/nativeInst_x86.hpp
index 94f8b5e637c95..a86128e7e4c02 100644
--- a/src/hotspot/cpu/x86/nativeInst_x86.hpp
+++ b/src/hotspot/cpu/x86/nativeInst_x86.hpp
@@ -160,8 +160,6 @@ class NativeCall: public NativeInstruction {
return_address_offset = 5
};
- enum { cache_line_size = BytesPerWord }; // conservative estimate!
-
address instruction_address() const { return addr_at(instruction_offset); }
address next_instruction_address() const { return addr_at(return_address_offset); }
int displacement() const { return (jint) int_at(displacement_offset); }
@@ -175,9 +173,11 @@ class NativeCall: public NativeInstruction {
#endif // AMD64
set_int_at(displacement_offset, dest - return_address());
}
+ // Returns whether the 4-byte displacement operand is 4-byte aligned.
+ bool is_displacement_aligned();
void set_destination_mt_safe(address dest);
- void verify_alignment() { assert((intptr_t)addr_at(displacement_offset) % BytesPerInt == 0, "must be aligned"); }
+ void verify_alignment() { assert(is_displacement_aligned(), "displacement of call is not aligned"); }
void verify();
void print();
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
index dec2cea59c2c0..b020d648d6a81 100644
--- a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
@@ -6999,15 +6999,15 @@ address generate_avx_ghash_processBlocks() {
}
// Get svml stub routine addresses
- void *libsvml = NULL;
+ void *libjsvml = NULL;
char ebuf[1024];
char dll_name[JVM_MAXPATHLEN];
- if (os::dll_locate_lib(dll_name, sizeof(dll_name), Arguments::get_dll_dir(), "svml")) {
- libsvml = os::dll_load(dll_name, ebuf, sizeof ebuf);
+ if (os::dll_locate_lib(dll_name, sizeof(dll_name), Arguments::get_dll_dir(), "jsvml")) {
+ libjsvml = os::dll_load(dll_name, ebuf, sizeof ebuf);
}
- if (libsvml != NULL) {
+ if (libjsvml != NULL) {
// SVML method naming convention
-    // All the methods are named as __svml_op<T><N>_ha_<VV>
+    // All the methods are named as __jsvml_op<T><N>_ha_<VV>
// Where:
// ha stands for high accuracy
// <T> is optional to indicate float/double
@@ -7018,10 +7018,10 @@ address generate_avx_ghash_processBlocks() {
// e.g. 128 bit float vector has 4 float elements
// <VV> indicates the avx/sse level:
// z0 is AVX512, l9 is AVX2, e9 is AVX1 and ex is for SSE2
- // e.g. __svml_expf16_ha_z0 is the method for computing 16 element vector float exp using AVX 512 insns
- // __svml_exp8_ha_z0 is the method for computing 8 element vector double exp using AVX 512 insns
+ // e.g. __jsvml_expf16_ha_z0 is the method for computing 16 element vector float exp using AVX 512 insns
+ // __jsvml_exp8_ha_z0 is the method for computing 8 element vector double exp using AVX 512 insns
- log_info(library)("Loaded library %s, handle " INTPTR_FORMAT, JNI_LIB_PREFIX "svml" JNI_LIB_SUFFIX, p2i(libsvml));
+ log_info(library)("Loaded library %s, handle " INTPTR_FORMAT, JNI_LIB_PREFIX "jsvml" JNI_LIB_SUFFIX, p2i(libjsvml));
if (UseAVX > 2) {
for (int op = 0; op < VectorSupport::NUM_SVML_OP; op++) {
int vop = VectorSupport::VECTOR_OP_SVML_START + op;
@@ -7029,11 +7029,11 @@ address generate_avx_ghash_processBlocks() {
(vop == VectorSupport::VECTOR_OP_LOG || vop == VectorSupport::VECTOR_OP_LOG10 || vop == VectorSupport::VECTOR_OP_POW)) {
continue;
}
- snprintf(ebuf, sizeof(ebuf), "__svml_%sf16_ha_z0", VectorSupport::svmlname[op]);
- StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_512][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%sf16_ha_z0", VectorSupport::svmlname[op]);
+ StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_512][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%s8_ha_z0", VectorSupport::svmlname[op]);
- StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_512][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%s8_ha_z0", VectorSupport::svmlname[op]);
+ StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_512][op] = (address)os::dll_lookup(libjsvml, ebuf);
}
}
const char* avx_sse_str = (UseAVX >= 2) ? "l9" : ((UseAVX == 1) ? "e9" : "ex");
@@ -7042,23 +7042,23 @@ address generate_avx_ghash_processBlocks() {
if (vop == VectorSupport::VECTOR_OP_POW) {
continue;
}
- snprintf(ebuf, sizeof(ebuf), "__svml_%sf4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_64][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%sf4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_64][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%sf4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_128][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%sf4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_128][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%sf8_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_256][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%sf8_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_f_math[VectorSupport::VEC_SIZE_256][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%s1_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_64][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%s1_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_64][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%s2_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_128][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%s2_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_128][op] = (address)os::dll_lookup(libjsvml, ebuf);
- snprintf(ebuf, sizeof(ebuf), "__svml_%s4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
- StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_256][op] = (address)os::dll_lookup(libsvml, ebuf);
+ snprintf(ebuf, sizeof(ebuf), "__jsvml_%s4_ha_%s", VectorSupport::svmlname[op], avx_sse_str);
+ StubRoutines::_vector_d_math[VectorSupport::VEC_SIZE_256][op] = (address)os::dll_lookup(libjsvml, ebuf);
}
}
#endif // COMPILER2
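
The snprintf/dll_lookup pairs above all instantiate the naming convention spelled out in the comment. As a rough standalone illustration (not the actual StubRoutines wiring; the op subset and the `libjsvml.so` name on Linux are assumptions), composing and resolving such a symbol looks like:

```cpp
#include <cstdio>
#include <dlfcn.h>

int main() {
  const char* ops[] = { "sin", "cos", "exp" }; // hypothetical subset of the SVML op names
  const char* avx_sse = "l9";                  // AVX2, per the convention: z0 / l9 / e9 / ex
  void* lib = dlopen("libjsvml.so", RTLD_NOW); // assumed library name on Linux
  if (lib == nullptr) {
    return 1; // library not present; nothing to demonstrate
  }
  char name[64];
  for (const char* op : ops) {
    // float ('f'), 8 elements (a 256-bit vector), high accuracy, AVX2:
    snprintf(name, sizeof(name), "__jsvml_%sf8_ha_%s", op, avx_sse);
    printf("%s -> %p\n", name, dlsym(lib, name));
  }
  dlclose(lib);
  return 0;
}
```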
diff --git a/src/hotspot/cpu/x86/vm_version_x86.cpp b/src/hotspot/cpu/x86/vm_version_x86.cpp
index 8a6bb04f54dfd..2dacfaad5f69b 100644
--- a/src/hotspot/cpu/x86/vm_version_x86.cpp
+++ b/src/hotspot/cpu/x86/vm_version_x86.cpp
@@ -1775,54 +1775,58 @@ bool VM_Version::compute_has_intel_jcc_erratum() {
// https://www.intel.com/content/dam/support/us/en/documents/processors/mitigations-jump-conditional-code-erratum.pdf
switch (_model) {
case 0x8E:
- // 06_8EH | 9 | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Amber Lake Y
- // 06_8EH | 9 | 7th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake U
- // 06_8EH | 9 | 7th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake U 23e
- // 06_8EH | 9 | 7th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake Y
- // 06_8EH | A | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake U43e
- // 06_8EH | B | 8th Generation Intel® Core™ Processors based on microarchitecture code name Whiskey Lake U
- // 06_8EH | C | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Amber Lake Y
- // 06_8EH | C | 10th Generation Intel® Core™ Processor Family based on microarchitecture code name Comet Lake U42
- // 06_8EH | C | 8th Generation Intel® Core™ Processors based on microarchitecture code name Whiskey Lake U
+ // 06_8EH | 9 | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Amber Lake Y
+ // 06_8EH | 9 | 7th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake U
+ // 06_8EH | 9 | 7th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake U 23e
+ // 06_8EH | 9 | 7th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake Y
+ // 06_8EH | A | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake U43e
+ // 06_8EH | B | 8th Generation Intel(R) Core(TM) Processors based on microarchitecture code name Whiskey Lake U
+ // 06_8EH | C | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Amber Lake Y
+ // 06_8EH | C | 10th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Comet Lake U42
+ // 06_8EH | C | 8th Generation Intel(R) Core(TM) Processors based on microarchitecture code name Whiskey Lake U
return _stepping == 0x9 || _stepping == 0xA || _stepping == 0xB || _stepping == 0xC;
case 0x4E:
- // 06_4E | 3 | 6th Generation Intel® Core™ Processors based on microarchitecture code name Skylake U
- // 06_4E | 3 | 6th Generation Intel® Core™ Processor Family based on microarchitecture code name Skylake U23e
- // 06_4E | 3 | 6th Generation Intel® Core™ Processors based on microarchitecture code name Skylake Y
+ // 06_4E | 3 | 6th Generation Intel(R) Core(TM) Processors based on microarchitecture code name Skylake U
+ // 06_4E | 3 | 6th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Skylake U23e
+ // 06_4E | 3 | 6th Generation Intel(R) Core(TM) Processors based on microarchitecture code name Skylake Y
return _stepping == 0x3;
case 0x55:
- // 06_55H | 4 | Intel® Xeon® Processor D Family based on microarchitecture code name Skylake D, Bakerville
- // 06_55H | 4 | Intel® Xeon® Scalable Processors based on microarchitecture code name Skylake Server
- // 06_55H | 4 | Intel® Xeon® Processor W Family based on microarchitecture code name Skylake W
- // 06_55H | 4 | Intel® Core™ X-series Processors based on microarchitecture code name Skylake X
- // 06_55H | 4 | Intel® Xeon® Processor E3 v5 Family based on microarchitecture code name Skylake Xeon E3
- // 06_55 | 7 | 2nd Generation Intel® Xeon® Scalable Processors based on microarchitecture code name Cascade Lake (server)
+ // 06_55H | 4 | Intel(R) Xeon(R) Processor D Family based on microarchitecture code name Skylake D, Bakerville
+ // 06_55H | 4 | Intel(R) Xeon(R) Scalable Processors based on microarchitecture code name Skylake Server
+ // 06_55H | 4 | Intel(R) Xeon(R) Processor W Family based on microarchitecture code name Skylake W
+ // 06_55H | 4 | Intel(R) Core(TM) X-series Processors based on microarchitecture code name Skylake X
+ // 06_55H | 4 | Intel(R) Xeon(R) Processor E3 v5 Family based on microarchitecture code name Skylake Xeon E3
+ // 06_55 | 7 | 2nd Generation Intel(R) Xeon(R) Scalable Processors based on microarchitecture code name Cascade Lake (server)
return _stepping == 0x4 || _stepping == 0x7;
case 0x5E:
- // 06_5E | 3 | 6th Generation Intel® Core™ Processor Family based on microarchitecture code name Skylake H
- // 06_5E | 3 | 6th Generation Intel® Core™ Processor Family based on microarchitecture code name Skylake S
+ // 06_5E | 3 | 6th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Skylake H
+ // 06_5E | 3 | 6th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Skylake S
return _stepping == 0x3;
case 0x9E:
- // 06_9EH | 9 | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake G
- // 06_9EH | 9 | 7th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake H
- // 06_9EH | 9 | 7th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake S
- // 06_9EH | 9 | Intel® Core™ X-series Processors based on microarchitecture code name Kaby Lake X
- // 06_9EH | 9 | Intel® Xeon® Processor E3 v6 Family Kaby Lake Xeon E3
- // 06_9EH | A | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake H
- // 06_9EH | A | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake S
- // 06_9EH | A | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake S (6+2) x/KBP
- // 06_9EH | A | Intel® Xeon® Processor E Family based on microarchitecture code name Coffee Lake S (6+2)
- // 06_9EH | A | Intel® Xeon® Processor E Family based on microarchitecture code name Coffee Lake S (4+2)
- // 06_9EH | B | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake S (4+2)
- // 06_9EH | B | Intel® Celeron® Processor G Series based on microarchitecture code name Coffee Lake S (4+2)
- // 06_9EH | D | 9th Generation Intel® Core™ Processor Family based on microarchitecturecode name Coffee Lake H (8+2)
- // 06_9EH | D | 9th Generation Intel® Core™ Processor Family based on microarchitecture code name Coffee Lake S (8+2)
+ // 06_9EH | 9 | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake G
+ // 06_9EH | 9 | 7th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake H
+ // 06_9EH | 9 | 7th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake S
+ // 06_9EH | 9 | Intel(R) Core(TM) X-series Processors based on microarchitecture code name Kaby Lake X
+ // 06_9EH | 9 | Intel(R) Xeon(R) Processor E3 v6 Family Kaby Lake Xeon E3
+ // 06_9EH | A | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake H
+ // 06_9EH | A | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake S
+ // 06_9EH | A | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake S (6+2) x/KBP
+ // 06_9EH | A | Intel(R) Xeon(R) Processor E Family based on microarchitecture code name Coffee Lake S (6+2)
+ // 06_9EH | A | Intel(R) Xeon(R) Processor E Family based on microarchitecture code name Coffee Lake S (4+2)
+ // 06_9EH | B | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake S (4+2)
+ // 06_9EH | B | Intel(R) Celeron(R) Processor G Series based on microarchitecture code name Coffee Lake S (4+2)
+ // 06_9EH | D | 9th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake H (8+2)
+ // 06_9EH | D | 9th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Coffee Lake S (8+2)
return _stepping == 0x9 || _stepping == 0xA || _stepping == 0xB || _stepping == 0xD;
+ case 0xA5:
+ // Not in Intel documentation.
+ // 06_A5H | | 10th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Comet Lake S/H
+ return true;
case 0xA6:
- // 06_A6H | 0 | 10th Generation Intel® Core™ Processor Family based on microarchitecture code name Comet Lake U62
+ // 06_A6H | 0 | 10th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Comet Lake U62
return _stepping == 0x0;
case 0xAE:
- // 06_AEH | A | 8th Generation Intel® Core™ Processor Family based on microarchitecture code name Kaby Lake Refresh U (4+2)
+ // 06_AEH | A | 8th Generation Intel(R) Core(TM) Processor Family based on microarchitecture code name Kaby Lake Refresh U (4+2)
return _stepping == 0xA;
default:
// If we are running on another intel machine not recognized in the table, we are okay.
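
The switch above is easier to see as a pure function of CPUID model and stepping. A hedged sketch, with the values taken from the comments in this hunk (the function name and driver are invented):

```cpp
#include <cstdint>
#include <cstdio>

static bool has_intel_jcc_erratum(uint32_t model, uint32_t stepping) {
  switch (model) {
    case 0x8E: return stepping == 0x9 || stepping == 0xA || stepping == 0xB || stepping == 0xC;
    case 0x4E: // Skylake U/Y
    case 0x5E: return stepping == 0x3;          // Skylake H/S
    case 0x55: return stepping == 0x4 || stepping == 0x7;
    case 0x9E: return stepping == 0x9 || stepping == 0xA || stepping == 0xB || stepping == 0xD;
    case 0xA5: return true;                     // Comet Lake S/H, not in the Intel document
    case 0xA6: return stepping == 0x0;
    case 0xAE: return stepping == 0xA;
    default:   return false;                    // unrecognized models are assumed unaffected
  }
}

int main() {
  printf("%d\n", has_intel_jcc_erratum(0xA5, 0x2)); // 1: any stepping matches for 06_A5H
}
```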
diff --git a/src/hotspot/cpu/x86/vmreg_x86.hpp b/src/hotspot/cpu/x86/vmreg_x86.hpp
index fc0ba718bafb0..58df28f8491b9 100644
--- a/src/hotspot/cpu/x86/vmreg_x86.hpp
+++ b/src/hotspot/cpu/x86/vmreg_x86.hpp
@@ -90,7 +90,13 @@ inline bool is_concrete() {
#ifndef AMD64
if (is_Register()) return true;
#endif // AMD64
- return is_even(value());
+ // Do not use is_XMMRegister() here as it depends on the UseAVX setting.
+ if (value() >= ConcreteRegisterImpl::max_fpr && value() < ConcreteRegisterImpl::max_xmm) {
+ int base = value() - ConcreteRegisterImpl::max_fpr;
+ return base % XMMRegisterImpl::max_slots_per_register == 0;
+ } else {
+ return is_even(value()); // General, float, and K registers are all two slots wide
+ }
}
#endif // CPU_X86_VMREG_X86_HPP
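
The new is_concrete() logic can be modeled outside HotSpot. In the sketch below the range constants are made up for the demo; the real ones come from ConcreteRegisterImpl and XMMRegisterImpl:

```cpp
#include <cstdio>

// Invented layout: XMM slots start at 64 and each register spans 16 slots.
const int max_fpr = 64;
const int slots_per_xmm = 16;
const int max_xmm = max_fpr + 32 * slots_per_xmm;

bool is_concrete(int value) {
  if (value >= max_fpr && value < max_xmm) {
    // Only the first slot of each XMM register is concrete.
    return (value - max_fpr) % slots_per_xmm == 0;
  }
  return (value & 1) == 0; // other registers are two slots wide
}

int main() {
  printf("%d %d\n", is_concrete(max_fpr), is_concrete(max_fpr + 1)); // 1 0
}
```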
diff --git a/src/hotspot/cpu/x86/x86.ad b/src/hotspot/cpu/x86/x86.ad
index 96d8a6eb2ce45..b6841c1bdd1cb 100644
--- a/src/hotspot/cpu/x86/x86.ad
+++ b/src/hotspot/cpu/x86/x86.ad
@@ -1621,6 +1621,21 @@ const bool Matcher::match_rule_supported(int opcode) {
return false;
}
break;
+ case Op_SqrtF:
+ if (UseSSE < 1) {
+ return false;
+ }
+ break;
+ case Op_SqrtD:
+#ifdef _LP64
+ if (UseSSE < 2) {
+ return false;
+ }
+#else
+ // x86_32.ad has a special match rule for SqrtD.
+ // Together with common x86 rules, this handles all UseSSE cases.
+#endif
+ break;
}
return true; // Match rules are supported by default.
}
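
As a toy model of the gating this hunk adds (the opcode enum and globals are invented for the sketch):

```cpp
#include <cstdio>

enum Opcode { Op_SqrtF, Op_SqrtD, Op_AddI };
int UseSSE = 1; // pretend -XX:UseSSE=1

bool match_rule_supported(Opcode opcode) {
  switch (opcode) {
    case Op_SqrtF: return UseSSE >= 1;
    case Op_SqrtD: return UseSSE >= 2; // the 64-bit case; 32-bit keeps an x87 rule
    default:       return true;        // rules are supported by default
  }
}

int main() {
  printf("%d %d\n", match_rule_supported(Op_SqrtF), match_rule_supported(Op_SqrtD)); // 1 0
}
```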
diff --git a/src/hotspot/cpu/x86/x86_32.ad b/src/hotspot/cpu/x86/x86_32.ad
index c78e90354c859..0ebe4fa09c210 100644
--- a/src/hotspot/cpu/x86/x86_32.ad
+++ b/src/hotspot/cpu/x86/x86_32.ad
@@ -755,6 +755,7 @@ static enum RC rc_class( OptoReg::Name reg ) {
assert(UseSSE < 2, "shouldn't be used in SSE2+ mode");
return rc_float;
}
+ if (r->is_KRegister()) return rc_kreg;
assert(r->is_XMMRegister(), "must be");
return rc_xmm;
}
@@ -1249,26 +1250,6 @@ uint MachSpillCopyNode::implementation( CodeBuffer *cbuf, PhaseRegAlloc *ra_, bo
return size;
}
- assert( size > 0, "missed a case" );
-
- // --------------------------------------------------------------------
- // Check for second bits still needing moving.
- if( src_second == dst_second )
- return size; // Self copy; no move
- assert( src_second_rc != rc_bad && dst_second_rc != rc_bad, "src_second & dst_second cannot be Bad" );
-
- // Check for second word int-int move
- if( src_second_rc == rc_int && dst_second_rc == rc_int )
- return impl_mov_helper(cbuf,do_size,src_second,dst_second,size, st);
-
- // Check for second word integer store
- if( src_second_rc == rc_int && dst_second_rc == rc_stack )
- return impl_helper(cbuf,do_size,false,ra_->reg2offset(dst_second),src_second,0x89,"MOV ",size, st);
-
- // Check for second word integer load
- if( dst_second_rc == rc_int && src_second_rc == rc_stack )
- return impl_helper(cbuf,do_size,true ,ra_->reg2offset(src_second),dst_second,0x8B,"MOV ",size, st);
-
// AVX-512 opmask specific spilling.
if (src_first_rc == rc_stack && dst_first_rc == rc_kreg) {
assert((src_first & 1) == 0 && src_first + 1 == src_second, "invalid register pair");
@@ -1306,6 +1287,26 @@ uint MachSpillCopyNode::implementation( CodeBuffer *cbuf, PhaseRegAlloc *ra_, bo
return 0;
}
+ assert( size > 0, "missed a case" );
+
+ // --------------------------------------------------------------------
+ // Check for second bits still needing moving.
+ if( src_second == dst_second )
+ return size; // Self copy; no move
+ assert( src_second_rc != rc_bad && dst_second_rc != rc_bad, "src_second & dst_second cannot be Bad" );
+
+ // Check for second word int-int move
+ if( src_second_rc == rc_int && dst_second_rc == rc_int )
+ return impl_mov_helper(cbuf,do_size,src_second,dst_second,size, st);
+
+ // Check for second word integer store
+ if( src_second_rc == rc_int && dst_second_rc == rc_stack )
+ return impl_helper(cbuf,do_size,false,ra_->reg2offset(dst_second),src_second,0x89,"MOV ",size, st);
+
+ // Check for second word integer load
+ if( dst_second_rc == rc_int && src_second_rc == rc_stack )
+ return impl_helper(cbuf,do_size,true ,ra_->reg2offset(src_second),dst_second,0x8B,"MOV ",size, st);
+
Unimplemented();
return 0; // Mute compiler
}
@@ -7189,6 +7190,7 @@ instruct castLL( eRegL dst ) %{
%}
instruct castFF( regF dst ) %{
+ predicate(UseSSE >= 1);
match(Set dst (CastFF dst));
format %{ "#castFF of $dst" %}
ins_encode( /*empty encoding*/ );
@@ -7197,6 +7199,25 @@ instruct castFF( regF dst ) %{
%}
instruct castDD( regD dst ) %{
+ predicate(UseSSE >= 2);
+ match(Set dst (CastDD dst));
+ format %{ "#castDD of $dst" %}
+ ins_encode( /*empty encoding*/ );
+ ins_cost(0);
+ ins_pipe( empty );
+%}
+
+instruct castFF_PR( regFPR dst ) %{
+ predicate(UseSSE < 1);
+ match(Set dst (CastFF dst));
+ format %{ "#castFF of $dst" %}
+ ins_encode( /*empty encoding*/ );
+ ins_cost(0);
+ ins_pipe( empty );
+%}
+
+instruct castDD_PR( regDPR dst ) %{
+ predicate(UseSSE < 2);
match(Set dst (CastDD dst));
format %{ "#castDD of $dst" %}
ins_encode( /*empty encoding*/ );
@@ -12185,18 +12206,35 @@ instruct string_inflate_evex(Universe dummy, eSIRegP src, eDIRegP dst, eDXRegI l
instruct encode_iso_array(eSIRegP src, eDIRegP dst, eDXRegI len,
regD tmp1, regD tmp2, regD tmp3, regD tmp4,
eCXRegI tmp5, eAXRegI result, eFlagsReg cr) %{
+ predicate(!((EncodeISOArrayNode*)n)->is_ascii());
match(Set result (EncodeISOArray src (Binary dst len)));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, USE_KILL src, USE_KILL dst, USE_KILL len, KILL tmp5, KILL cr);
- format %{ "Encode array $src,$dst,$len -> $result // KILL ECX, EDX, $tmp1, $tmp2, $tmp3, $tmp4, ESI, EDI " %}
+ format %{ "Encode iso array $src,$dst,$len -> $result // KILL ECX, EDX, $tmp1, $tmp2, $tmp3, $tmp4, ESI, EDI " %}
ins_encode %{
__ encode_iso_array($src$$Register, $dst$$Register, $len$$Register,
$tmp1$$XMMRegister, $tmp2$$XMMRegister, $tmp3$$XMMRegister,
- $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register);
+ $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register, false);
%}
ins_pipe( pipe_slow );
%}
+// encode char[] to byte[] in ASCII
+instruct encode_ascii_array(eSIRegP src, eDIRegP dst, eDXRegI len,
+ regD tmp1, regD tmp2, regD tmp3, regD tmp4,
+ eCXRegI tmp5, eAXRegI result, eFlagsReg cr) %{
+ predicate(((EncodeISOArrayNode*)n)->is_ascii());
+ match(Set result (EncodeISOArray src (Binary dst len)));
+ effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, USE_KILL src, USE_KILL dst, USE_KILL len, KILL tmp5, KILL cr);
+
+ format %{ "Encode ascii array $src,$dst,$len -> $result // KILL ECX, EDX, $tmp1, $tmp2, $tmp3, $tmp4, ESI, EDI " %}
+ ins_encode %{
+ __ encode_iso_array($src$$Register, $dst$$Register, $len$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, $tmp3$$XMMRegister,
+ $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register, true);
+ %}
+ ins_pipe( pipe_slow );
+%}
//----------Control Flow Instructions------------------------------------------
// Signed compare Instructions
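
The new `ascii` flag threaded into encode_iso_array() selects the upper bound of encodable characters. A scalar model of that semantics, assuming the intrinsic's usual contract of returning the count of characters encoded before the first misfit:

```cpp
#include <cstdint>
#include <cstdio>

int encode_array(const uint16_t* src, uint8_t* dst, int len, bool ascii) {
  const uint16_t limit = ascii ? 0x80 : 0x100; // ASCII vs ISO-8859-1
  for (int i = 0; i < len; i++) {
    if (src[i] >= limit) {
      return i; // stop at the first char that does not fit
    }
    dst[i] = (uint8_t)src[i];
  }
  return len;
}

int main() {
  uint16_t s[] = { 'J', 'D', 'K', 0xE9 }; // 0xE9 (e-acute) is Latin-1 but not ASCII
  uint8_t d[4];
  printf("%d %d\n", encode_array(s, d, 4, false), encode_array(s, d, 4, true)); // 4 3
}
```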
diff --git a/src/hotspot/cpu/x86/x86_64.ad b/src/hotspot/cpu/x86/x86_64.ad
index 1bd129892db5e..e012acd69d925 100644
--- a/src/hotspot/cpu/x86/x86_64.ad
+++ b/src/hotspot/cpu/x86/x86_64.ad
@@ -11748,14 +11748,32 @@ instruct string_inflate_evex(Universe dummy, rsi_RegP src, rdi_RegP dst, rdx_Reg
instruct encode_iso_array(rsi_RegP src, rdi_RegP dst, rdx_RegI len,
legRegD tmp1, legRegD tmp2, legRegD tmp3, legRegD tmp4,
rcx_RegI tmp5, rax_RegI result, rFlagsReg cr) %{
+ predicate(!((EncodeISOArrayNode*)n)->is_ascii());
match(Set result (EncodeISOArray src (Binary dst len)));
effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, USE_KILL src, USE_KILL dst, USE_KILL len, KILL tmp5, KILL cr);
- format %{ "Encode array $src,$dst,$len -> $result // KILL RCX, RDX, $tmp1, $tmp2, $tmp3, $tmp4, RSI, RDI " %}
+ format %{ "Encode iso array $src,$dst,$len -> $result // KILL RCX, RDX, $tmp1, $tmp2, $tmp3, $tmp4, RSI, RDI " %}
ins_encode %{
__ encode_iso_array($src$$Register, $dst$$Register, $len$$Register,
$tmp1$$XMMRegister, $tmp2$$XMMRegister, $tmp3$$XMMRegister,
- $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register);
+ $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register, false);
+ %}
+ ins_pipe( pipe_slow );
+%}
+
+// encode char[] to byte[] in ASCII
+instruct encode_ascii_array(rsi_RegP src, rdi_RegP dst, rdx_RegI len,
+ legRegD tmp1, legRegD tmp2, legRegD tmp3, legRegD tmp4,
+ rcx_RegI tmp5, rax_RegI result, rFlagsReg cr) %{
+ predicate(((EncodeISOArrayNode*)n)->is_ascii());
+ match(Set result (EncodeISOArray src (Binary dst len)));
+ effect(TEMP tmp1, TEMP tmp2, TEMP tmp3, TEMP tmp4, USE_KILL src, USE_KILL dst, USE_KILL len, KILL tmp5, KILL cr);
+
+ format %{ "Encode ascii array $src,$dst,$len -> $result // KILL RCX, RDX, $tmp1, $tmp2, $tmp3, $tmp4, RSI, RDI " %}
+ ins_encode %{
+ __ encode_iso_array($src$$Register, $dst$$Register, $len$$Register,
+ $tmp1$$XMMRegister, $tmp2$$XMMRegister, $tmp3$$XMMRegister,
+ $tmp4$$XMMRegister, $tmp5$$Register, $result$$Register, true);
%}
ins_pipe( pipe_slow );
%}
diff --git a/src/hotspot/cpu/zero/frame_zero.cpp b/src/hotspot/cpu/zero/frame_zero.cpp
index 972b835cd4bbf..8a735f3f56385 100644
--- a/src/hotspot/cpu/zero/frame_zero.cpp
+++ b/src/hotspot/cpu/zero/frame_zero.cpp
@@ -34,6 +34,7 @@
#include "runtime/frame.inline.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/signature.hpp"
+#include "runtime/stackWatermarkSet.hpp"
#include "vmreg_zero.inline.hpp"
#ifdef ASSERT
@@ -82,10 +83,15 @@ frame frame::sender(RegisterMap* map) const {
// sender_for_xxx methods update this accordingly.
map->set_include_argument_oops(false);
- if (is_entry_frame())
- return sender_for_entry_frame(map);
- else
- return sender_for_nonentry_frame(map);
+ frame result = zeroframe()->is_entry_frame() ?
+ sender_for_entry_frame(map) :
+ sender_for_nonentry_frame(map);
+
+ if (map->process_frames()) {
+ StackWatermarkSet::on_iteration(map->thread(), result);
+ }
+
+ return result;
}
BasicObjectLock* frame::interpreter_frame_monitor_begin() const {
diff --git a/src/hotspot/cpu/zero/globals_zero.hpp b/src/hotspot/cpu/zero/globals_zero.hpp
index 33f208b28f27a..dcee492daabe7 100644
--- a/src/hotspot/cpu/zero/globals_zero.hpp
+++ b/src/hotspot/cpu/zero/globals_zero.hpp
@@ -70,8 +70,7 @@ define_pd_global(uintx, TypeProfileLevel, 0);
define_pd_global(bool, PreserveFramePointer, false);
-// No performance work done here yet.
-define_pd_global(bool, CompactStrings, false);
+define_pd_global(bool, CompactStrings, true);
#define ARCH_FLAGS(develop, \
product, \
diff --git a/src/hotspot/cpu/zero/vm_version_zero.cpp b/src/hotspot/cpu/zero/vm_version_zero.cpp
index 14368bed5a04d..c610e2f66bf38 100644
--- a/src/hotspot/cpu/zero/vm_version_zero.cpp
+++ b/src/hotspot/cpu/zero/vm_version_zero.cpp
@@ -45,6 +45,15 @@ void VM_Version::initialize() {
}
FLAG_SET_DEFAULT(AllocatePrefetchDistance, 0);
+ // If lock diagnostics is needed, always call to runtime
+ if (DiagnoseSyncOnValueBasedClasses != 0) {
+ FLAG_SET_DEFAULT(UseHeavyMonitors, true);
+ }
+
// Not implemented
UNSUPPORTED_OPTION(CriticalJNINatives);
+ UNSUPPORTED_OPTION(UseCompiler);
+#ifdef ASSERT
+ UNSUPPORTED_OPTION(CountCompiledCalls);
+#endif
}
diff --git a/src/hotspot/cpu/zero/vm_version_zero.hpp b/src/hotspot/cpu/zero/vm_version_zero.hpp
index 84e1abb5894f6..c63a47719e50d 100644
--- a/src/hotspot/cpu/zero/vm_version_zero.hpp
+++ b/src/hotspot/cpu/zero/vm_version_zero.hpp
@@ -32,6 +32,8 @@
class VM_Version : public Abstract_VM_Version {
public:
static void initialize();
+
+ constexpr static bool supports_stack_watermark_barrier() { return true; }
};
#endif // CPU_ZERO_VM_VERSION_ZERO_HPP
diff --git a/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp b/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp
index 14fc8c0b00c03..f2e8dc2fe2322 100644
--- a/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp
+++ b/src/hotspot/cpu/zero/zeroInterpreter_zero.cpp
@@ -68,14 +68,6 @@ void ZeroInterpreter::initialize_code() {
ZeroInterpreterGenerator g(_code);
if (PrintInterpreter) print();
}
-
- // Allow c++ interpreter to do one initialization now that switches are set, etc.
- BytecodeInterpreter start_msg(BytecodeInterpreter::initialize);
- if (JvmtiExport::can_post_interpreter_events()) {
- BytecodeInterpreter::run(&start_msg);
- } else {
- BytecodeInterpreter::run(&start_msg);
- }
}
void ZeroInterpreter::invoke_method(Method* method, address entry_point, TRAPS) {
@@ -200,6 +192,18 @@ void ZeroInterpreter::main_loop(int recurse, TRAPS) {
}
fixup_after_potential_safepoint();
+ // If we are unwinding, notify the stack watermarks machinery.
+ // Should do this before resetting the frame anchor.
+ if (istate->msg() == BytecodeInterpreter::return_from_method ||
+ istate->msg() == BytecodeInterpreter::do_osr) {
+ stack_watermark_unwind_check(thread);
+ } else {
+ assert(istate->msg() == BytecodeInterpreter::call_method ||
+ istate->msg() == BytecodeInterpreter::more_monitors ||
+ istate->msg() == BytecodeInterpreter::throwing_exception,
+ "Should be one of these otherwise");
+ }
+
// Clear the frame anchor
thread->reset_last_Java_frame();
@@ -320,13 +324,13 @@ int ZeroInterpreter::native_entry(Method* method, intptr_t UNUSED, TRAPS) {
monitor = (BasicObjectLock*) istate->stack_base();
oop lockee = monitor->obj();
markWord disp = lockee->mark().set_unlocked();
-
monitor->lock()->set_displaced_header(disp);
- if (lockee->cas_set_mark(markWord::from_pointer(monitor), disp) != disp) {
- if (thread->is_lock_owned((address) disp.clear_lock_bits().to_pointer())) {
+ bool call_vm = UseHeavyMonitors;
+ if (call_vm || lockee->cas_set_mark(markWord::from_pointer(monitor), disp) != disp) {
+ // Is it simple recursive case?
+ if (!call_vm && thread->is_lock_owned((address) disp.clear_lock_bits().to_pointer())) {
monitor->lock()->set_displaced_header(markWord::from_pointer(NULL));
- }
- else {
+ } else {
CALL_VM_NOCHECK(InterpreterRuntime::monitorenter(thread, monitor));
if (HAS_PENDING_EXCEPTION)
goto unwind_and_return;
@@ -436,6 +440,10 @@ int ZeroInterpreter::native_entry(Method* method, intptr_t UNUSED, TRAPS) {
thread->set_thread_state(_thread_in_Java);
fixup_after_potential_safepoint();
+ // Notify the stack watermarks machinery that we are unwinding.
+ // Should do this before resetting the frame anchor.
+ stack_watermark_unwind_check(thread);
+
// Clear the frame anchor
thread->reset_last_Java_frame();
@@ -546,6 +554,12 @@ int ZeroInterpreter::native_entry(Method* method, intptr_t UNUSED, TRAPS) {
}
}
+ // Already did every pending exception check here.
+ // If HAS_PENDING_EXCEPTION is true, the interpreter would handle the rest.
+ if (CheckJNICalls) {
+ THREAD->clear_pending_jni_exception_check();
+ }
+
// No deoptimized frames on the stack
return 0;
}
@@ -869,3 +883,13 @@ address ZeroInterpreter::remove_activation_early_entry(TosState state) {
bool ZeroInterpreter::contains(address pc) {
return false; // make frame::print_value_on work
}
+
+void ZeroInterpreter::stack_watermark_unwind_check(JavaThread* thread) {
+ // If frame pointer is in the danger zone, notify the runtime that
+ // it needs to act before continuing the unwinding.
+ uintptr_t fp = (uintptr_t)thread->last_Java_fp();
+ uintptr_t watermark = thread->poll_data()->get_polling_word();
+ if (fp > watermark) {
+ InterpreterRuntime::at_unwind(thread);
+ }
+}
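
stack_watermark_unwind_check() reduces to a single comparison against the per-thread polling word. A conceptual sketch with fabricated addresses:

```cpp
#include <cstdint>
#include <cstdio>

// Stacks grow downward, so a frame pointer above the watermark belongs to a
// frame the GC has not processed yet.
bool needs_unwind_fixup(uintptr_t fp, uintptr_t watermark) {
  return fp > watermark;
}

int main() {
  uintptr_t watermark = 0x7000F000;
  printf("%d\n", needs_unwind_fixup(0x7000F800, watermark)); // 1: call the runtime first
  printf("%d\n", needs_unwind_fixup(0x7000E800, watermark)); // 0: safe to keep unwinding
}
```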
diff --git a/src/hotspot/cpu/zero/zeroInterpreter_zero.hpp b/src/hotspot/cpu/zero/zeroInterpreter_zero.hpp
index 2d1d28f4da5a8..3761dfc58145a 100644
--- a/src/hotspot/cpu/zero/zeroInterpreter_zero.hpp
+++ b/src/hotspot/cpu/zero/zeroInterpreter_zero.hpp
@@ -39,6 +39,9 @@
static int empty_entry(Method* method, intptr_t UNUSED, TRAPS);
static int Reference_get_entry(Method* method, intptr_t UNUSED, TRAPS);
+ // Stack watermark machinery
+ static void stack_watermark_unwind_check(JavaThread* thread);
+
public:
// Main loop of normal_entry
static void main_loop(int recurse, TRAPS);
diff --git a/src/hotspot/os/aix/os_aix.cpp b/src/hotspot/os/aix/os_aix.cpp
index 14356b76f4c8c..1d2ce086b656f 100644
--- a/src/hotspot/os/aix/os_aix.cpp
+++ b/src/hotspot/os/aix/os_aix.cpp
@@ -2665,9 +2665,7 @@ int os::open(const char *path, int oflag, int mode) {
// create binary file, rewriting existing file if required
int os::create_binary_file(const char* path, bool rewrite_existing) {
int oflags = O_WRONLY | O_CREAT;
- if (!rewrite_existing) {
- oflags |= O_EXCL;
- }
+ oflags |= rewrite_existing ? O_TRUNC : O_EXCL;
return ::open64(path, oflags, S_IREAD | S_IWRITE);
}
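
The create_binary_file() change is subtle: previously the rewrite path opened without truncation, which could leave stale trailing bytes when the new contents are shorter than the old file. A minimal sketch of the fixed flag selection (using POSIX S_IRUSR/S_IWUSR rather than the legacy S_IREAD/S_IWRITE macros):

```cpp
#include <fcntl.h>
#include <sys/stat.h>

int create_binary_file(const char* path, bool rewrite_existing) {
  int oflags = O_WRONLY | O_CREAT;
  // O_TRUNC discards old contents when rewriting; O_EXCL fails if the file exists.
  oflags |= rewrite_existing ? O_TRUNC : O_EXCL;
  return ::open(path, oflags, S_IRUSR | S_IWUSR);
}

int main() {
  return create_binary_file("/tmp/demo.bin", true) < 0;
}
```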
diff --git a/src/hotspot/os/bsd/decoder_machO.cpp b/src/hotspot/os/bsd/decoder_machO.cpp
index f2c60755ca8b5..f2d31969fe6b8 100644
--- a/src/hotspot/os/bsd/decoder_machO.cpp
+++ b/src/hotspot/os/bsd/decoder_machO.cpp
@@ -52,6 +52,12 @@ bool MachODecoder::demangle(const char* symbol, char *buf, int buflen) {
bool MachODecoder::decode(address addr, char *buf,
int buflen, int *offset, const void *mach_base) {
+ if (addr == (address)(intptr_t)-1) {
+ // dladdr() in macOS12/Monterey returns success for -1, but that addr value
+ // won't work in this function. Should have been handled by the caller.
+ ShouldNotReachHere();
+ return false;
+ }
struct symtab_command * symt = (struct symtab_command *)
mach_find_command((struct mach_header_64 *)mach_base, LC_SYMTAB);
if (symt == NULL) {
diff --git a/src/hotspot/os/bsd/os_bsd.cpp b/src/hotspot/os/bsd/os_bsd.cpp
index 4e10c1cb908d0..5a5f50a5c8c5d 100644
--- a/src/hotspot/os/bsd/os_bsd.cpp
+++ b/src/hotspot/os/bsd/os_bsd.cpp
@@ -452,7 +452,9 @@ void os::init_system_properties_values() {
}
}
Arguments::set_java_home(buf);
- set_boot_path('/', ':');
+ if (!set_boot_path('/', ':')) {
+ vm_exit_during_initialization("Failed setting boot class path.", NULL);
+ }
}
// Where to look for native libraries.
@@ -901,6 +903,17 @@ int os::current_process_id() {
const char* os::dll_file_extension() { return JNI_LIB_SUFFIX; }
+static int local_dladdr(const void* addr, Dl_info* info) {
+#ifdef __APPLE__
+ if (addr == (void*)-1) {
+ // dladdr() in macOS12/Monterey returns success for -1, but that addr
+ // value should not be allowed to work to avoid confusion.
+ return 0;
+ }
+#endif
+ return dladdr(addr, info);
+}
+
// This must be hard coded because it's the system's temporary
// directory not the java application's temp directory, ala java.io.tmpdir.
#ifdef __APPLE__
@@ -940,9 +953,6 @@ bool os::address_is_in_vm(address addr) {
return false;
}
-
-#define MACH_MAXSYMLEN 256
-
bool os::dll_address_to_function_name(address addr, char *buf,
int buflen, int *offset,
bool demangle) {
@@ -950,9 +960,8 @@ bool os::dll_address_to_function_name(address addr, char *buf,
assert(buf != NULL, "sanity check");
Dl_info dlinfo;
- char localbuf[MACH_MAXSYMLEN];
- if (dladdr((void*)addr, &dlinfo) != 0) {
+ if (local_dladdr((void*)addr, &dlinfo) != 0) {
// see if we have a matching symbol
if (dlinfo.dli_saddr != NULL && dlinfo.dli_sname != NULL) {
if (!(demangle && Decoder::demangle(dlinfo.dli_sname, buf, buflen))) {
@@ -961,6 +970,14 @@ bool os::dll_address_to_function_name(address addr, char *buf,
if (offset != NULL) *offset = addr - (address)dlinfo.dli_saddr;
return true;
}
+
+#ifndef __APPLE__
+ // The 6-parameter Decoder::decode() function is not implemented on macOS.
+ // The Mach-O binary format does not contain a "list of files" with address
+ // ranges like ELF. That makes sense since Mach-O can contain binaries for
+ // than one instruction set so there can be more than one address range for
+ // each "file".
+
// no matching symbol so try for just file info
if (dlinfo.dli_fname != NULL && dlinfo.dli_fbase != NULL) {
if (Decoder::decode((address)(addr - (address)dlinfo.dli_fbase),
@@ -969,6 +986,10 @@ bool os::dll_address_to_function_name(address addr, char *buf,
}
}
+#else // __APPLE__
+ #define MACH_MAXSYMLEN 256
+
+ char localbuf[MACH_MAXSYMLEN];
// Handle non-dynamic manually:
if (dlinfo.dli_fbase != NULL &&
Decoder::decode(addr, localbuf, MACH_MAXSYMLEN, offset,
@@ -978,6 +999,9 @@ bool os::dll_address_to_function_name(address addr, char *buf,
}
return true;
}
+
+ #undef MACH_MAXSYMLEN
+#endif // __APPLE__
}
buf[0] = '\0';
if (offset != NULL) *offset = -1;
@@ -992,7 +1016,7 @@ bool os::dll_address_to_library_name(address addr, char* buf,
Dl_info dlinfo;
- if (dladdr((void*)addr, &dlinfo) != 0) {
+ if (local_dladdr((void*)addr, &dlinfo) != 0) {
if (dlinfo.dli_fname != NULL) {
jio_snprintf(buf, buflen, "%s", dlinfo.dli_fname);
}
@@ -1390,13 +1414,17 @@ void os::get_summary_cpu_info(char* buf, size_t buflen) {
strncpy(machine, "", sizeof(machine));
}
- const char* emulated = "";
#if defined(__APPLE__) && !defined(ZERO)
if (VM_Version::is_cpu_emulated()) {
- emulated = " (EMULATED)";
+ snprintf(buf, buflen, "\"%s\" %s (EMULATED) %d MHz", model, machine, mhz);
+ } else {
+ NOT_AARCH64(snprintf(buf, buflen, "\"%s\" %s %d MHz", model, machine, mhz));
+ // aarch64 CPU doesn't report its speed
+ AARCH64_ONLY(snprintf(buf, buflen, "\"%s\" %s", model, machine));
}
+#else
+ snprintf(buf, buflen, "\"%s\" %s %d MHz", model, machine, mhz);
#endif
- snprintf(buf, buflen, "\"%s\" %s%s %d MHz", model, machine, emulated, mhz);
}
void os::print_memory_info(outputStream* st) {
@@ -2355,9 +2383,7 @@ int os::open(const char *path, int oflag, int mode) {
// create binary file, rewriting existing file if required
int os::create_binary_file(const char* path, bool rewrite_existing) {
int oflags = O_WRONLY | O_CREAT;
- if (!rewrite_existing) {
- oflags |= O_EXCL;
- }
+ oflags |= rewrite_existing ? O_TRUNC : O_EXCL;
return ::open(path, oflags, S_IREAD | S_IWRITE);
}
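
local_dladdr() exists because dladdr() on macOS 12 started succeeding for the sentinel address -1. A standalone sketch of the guard:

```cpp
#include <dlfcn.h>
#include <cstdio>

static int local_dladdr(const void* addr, Dl_info* info) {
#ifdef __APPLE__
  if (addr == (void*)-1) {
    return 0; // report failure instead of trusting Monterey's bogus success
  }
#endif
  return dladdr(addr, info);
}

int main() {
  Dl_info info;
  printf("%d\n", local_dladdr((void*)-1, &info)); // 0 on macOS; dladdr's result elsewhere
}
```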
diff --git a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
index fb653c762bce5..1593a701e6799 100644
--- a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -34,11 +34,15 @@
#include "runtime/os.hpp"
#include "utilities/globalDefinitions.hpp"
+// controller names have to match the *_IDX indices
+static const char* cg_controller_name[] = { "cpu", "cpuset", "cpuacct", "memory", "pids" };
+
CgroupSubsystem* CgroupSubsystemFactory::create() {
CgroupV1MemoryController* memory = NULL;
CgroupV1Controller* cpuset = NULL;
CgroupV1Controller* cpu = NULL;
CgroupV1Controller* cpuacct = NULL;
+ CgroupV1Controller* pids = NULL;
CgroupInfo cg_infos[CG_INFO_LENGTH];
u1 cg_type_flags = INVALID_CGROUPS_GENERIC;
const char* proc_cgroups = "/proc/cgroups";
@@ -93,22 +97,29 @@ CgroupSubsystem* CgroupSubsystemFactory::create() {
assert(is_cgroup_v1(&cg_type_flags), "Cgroup v1 expected");
for (int i = 0; i < CG_INFO_LENGTH; i++) {
CgroupInfo info = cg_infos[i];
- if (strcmp(info._name, "memory") == 0) {
- memory = new CgroupV1MemoryController(info._root_mount_path, info._mount_path);
- memory->set_subsystem_path(info._cgroup_path);
- } else if (strcmp(info._name, "cpuset") == 0) {
- cpuset = new CgroupV1Controller(info._root_mount_path, info._mount_path);
- cpuset->set_subsystem_path(info._cgroup_path);
- } else if (strcmp(info._name, "cpu") == 0) {
- cpu = new CgroupV1Controller(info._root_mount_path, info._mount_path);
- cpu->set_subsystem_path(info._cgroup_path);
- } else if (strcmp(info._name, "cpuacct") == 0) {
- cpuacct = new CgroupV1Controller(info._root_mount_path, info._mount_path);
- cpuacct->set_subsystem_path(info._cgroup_path);
+ if (info._data_complete) { // pids controller might have incomplete data
+ if (strcmp(info._name, "memory") == 0) {
+ memory = new CgroupV1MemoryController(info._root_mount_path, info._mount_path);
+ memory->set_subsystem_path(info._cgroup_path);
+ } else if (strcmp(info._name, "cpuset") == 0) {
+ cpuset = new CgroupV1Controller(info._root_mount_path, info._mount_path);
+ cpuset->set_subsystem_path(info._cgroup_path);
+ } else if (strcmp(info._name, "cpu") == 0) {
+ cpu = new CgroupV1Controller(info._root_mount_path, info._mount_path);
+ cpu->set_subsystem_path(info._cgroup_path);
+ } else if (strcmp(info._name, "cpuacct") == 0) {
+ cpuacct = new CgroupV1Controller(info._root_mount_path, info._mount_path);
+ cpuacct->set_subsystem_path(info._cgroup_path);
+ } else if (strcmp(info._name, "pids") == 0) {
+ pids = new CgroupV1Controller(info._root_mount_path, info._mount_path);
+ pids->set_subsystem_path(info._cgroup_path);
+ }
+ } else {
+ log_debug(os, container)("CgroupInfo for %s not complete", cg_controller_name[i]);
}
}
cleanup(cg_infos);
- return new CgroupV1Subsystem(cpuset, cpu, cpuacct, memory);
+ return new CgroupV1Subsystem(cpuset, cpu, cpuacct, pids, memory);
}
bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
@@ -122,9 +133,10 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
char buf[MAXPATHLEN+1];
char *p;
bool is_cgroupsV2;
- // true iff all controllers, memory, cpu, cpuset, cpuacct are enabled
+ // true iff all required controllers, memory, cpu, cpuset, cpuacct are enabled
// at the kernel level.
- bool all_controllers_enabled;
+ // pids might not be enabled on older Linux distros (SLES 12.1, RHEL 7.1)
+ bool all_required_controllers_enabled;
/*
* Read /proc/cgroups so as to be able to distinguish cgroups v2 vs cgroups v1.
@@ -136,10 +148,9 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
*/
cgroups = fopen(proc_cgroups, "r");
if (cgroups == NULL) {
- log_debug(os, container)("Can't open %s, %s",
- proc_cgroups, os::strerror(errno));
- *flags = INVALID_CGROUPS_GENERIC;
- return false;
+ log_debug(os, container)("Can't open %s, %s", proc_cgroups, os::strerror(errno));
+ *flags = INVALID_CGROUPS_GENERIC;
+ return false;
}
while ((p = fgets(buf, MAXPATHLEN, cgroups)) != NULL) {
@@ -167,19 +178,30 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
cg_infos[CPUACCT_IDX]._name = os::strdup(name);
cg_infos[CPUACCT_IDX]._hierarchy_id = hierarchy_id;
cg_infos[CPUACCT_IDX]._enabled = (enabled == 1);
+ } else if (strcmp(name, "pids") == 0) {
+ log_debug(os, container)("Detected optional pids controller entry in %s", proc_cgroups);
+ cg_infos[PIDS_IDX]._name = os::strdup(name);
+ cg_infos[PIDS_IDX]._hierarchy_id = hierarchy_id;
+ cg_infos[PIDS_IDX]._enabled = (enabled == 1);
}
}
fclose(cgroups);
is_cgroupsV2 = true;
- all_controllers_enabled = true;
+ all_required_controllers_enabled = true;
for (int i = 0; i < CG_INFO_LENGTH; i++) {
- is_cgroupsV2 = is_cgroupsV2 && cg_infos[i]._hierarchy_id == 0;
- all_controllers_enabled = all_controllers_enabled && cg_infos[i]._enabled;
+ // pids controller is optional. All other controllers are required
+ if (i != PIDS_IDX) {
+ is_cgroupsV2 = is_cgroupsV2 && cg_infos[i]._hierarchy_id == 0;
+ all_required_controllers_enabled = all_required_controllers_enabled && cg_infos[i]._enabled;
+ }
+ if (log_is_enabled(Debug, os, container) && !cg_infos[i]._enabled) {
+ log_debug(os, container)("controller %s is not enabled\n", cg_controller_name[i]);
+ }
}
- if (!all_controllers_enabled) {
- // one or more controllers disabled, disable container support
+ if (!all_required_controllers_enabled) {
+ // one or more required controllers disabled, disable container support
log_debug(os, container)("One or more required controllers disabled at kernel level.");
cleanup(cg_infos);
*flags = INVALID_CGROUPS_GENERIC;
@@ -220,17 +242,21 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
while (!is_cgroupsV2 && (token = strsep(&controllers, ",")) != NULL) {
if (strcmp(token, "memory") == 0) {
- assert(hierarchy_id == cg_infos[MEMORY_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch");
+ assert(hierarchy_id == cg_infos[MEMORY_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch for memory");
cg_infos[MEMORY_IDX]._cgroup_path = os::strdup(cgroup_path);
} else if (strcmp(token, "cpuset") == 0) {
- assert(hierarchy_id == cg_infos[CPUSET_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch");
+ assert(hierarchy_id == cg_infos[CPUSET_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch for cpuset");
cg_infos[CPUSET_IDX]._cgroup_path = os::strdup(cgroup_path);
} else if (strcmp(token, "cpu") == 0) {
- assert(hierarchy_id == cg_infos[CPU_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch");
+ assert(hierarchy_id == cg_infos[CPU_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch for cpu");
cg_infos[CPU_IDX]._cgroup_path = os::strdup(cgroup_path);
} else if (strcmp(token, "cpuacct") == 0) {
- assert(hierarchy_id == cg_infos[CPUACCT_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch");
+ assert(hierarchy_id == cg_infos[CPUACCT_IDX]._hierarchy_id, "/proc/cgroups and /proc/self/cgroup hierarchy mismatch for cpuacct");
cg_infos[CPUACCT_IDX]._cgroup_path = os::strdup(cgroup_path);
+ } else if (strcmp(token, "pids") == 0) {
+ assert(hierarchy_id == cg_infos[PIDS_IDX]._hierarchy_id, "/proc/cgroups (%d) and /proc/self/cgroup (%d) hierarchy mismatch for pids",
+ cg_infos[PIDS_IDX]._hierarchy_id, hierarchy_id);
+ cg_infos[PIDS_IDX]._cgroup_path = os::strdup(cgroup_path);
}
}
if (is_cgroupsV2) {
@@ -281,13 +307,15 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
/* Cgroup v1 relevant info
*
- * Find the cgroup mount point for memory, cpuset, cpu, cpuacct
+ * Find the cgroup mount point for memory, cpuset, cpu, cpuacct, pids
*
* Example for docker:
* 219 214 0:29 /docker/7208cebd00fa5f2e342b1094f7bed87fa25661471a4637118e65f1c995be8a34 /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,memory
*
* Example for host:
* 34 28 0:29 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,memory
+ *
+ * 44 31 0:39 / /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:23 - cgroup cgroup rw,pids
*/
if (sscanf(p, "%*d %*d %*d:%*d %s %s %*[^-]- %s %*s %s", tmproot, tmpmount, tmp_fs_type, tmpcgroups) == 4) {
if (strcmp("cgroup", tmp_fs_type) != 0) {
@@ -333,6 +361,12 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
cg_infos[CPUACCT_IDX]._mount_path = os::strdup(tmpmount);
cg_infos[CPUACCT_IDX]._root_mount_path = os::strdup(tmproot);
cg_infos[CPUACCT_IDX]._data_complete = true;
+ } else if (strcmp(token, "pids") == 0) {
+ any_cgroup_mounts_found = true;
+ assert(cg_infos[PIDS_IDX]._mount_path == NULL, "stomping of _mount_path");
+ cg_infos[PIDS_IDX]._mount_path = os::strdup(tmpmount);
+ cg_infos[PIDS_IDX]._root_mount_path = os::strdup(tmproot);
+ cg_infos[PIDS_IDX]._data_complete = true;
}
}
}
@@ -387,10 +421,13 @@ bool CgroupSubsystemFactory::determine_type(CgroupInfo* cg_infos,
*flags = INVALID_CGROUPS_V1;
return false;
}
+ if (log_is_enabled(Debug, os, container) && !cg_infos[PIDS_IDX]._data_complete) {
+ log_debug(os, container)("Optional cgroup v1 pids subsystem not found");
+ // keep the other controller info, pids is optional
+ }
// Cgroups v1 case, we have all the info we need.
*flags = CGROUPS_V1;
return true;
-
};
void CgroupSubsystemFactory::cleanup(CgroupInfo* cg_infos) {
@@ -514,3 +551,22 @@ jlong CgroupSubsystem::memory_limit_in_bytes() {
memory_limit->set_value(mem_limit, OSCONTAINER_CACHE_TIMEOUT);
return mem_limit;
}
+
+jlong CgroupSubsystem::limit_from_str(char* limit_str) {
+ if (limit_str == NULL) {
+ return OSCONTAINER_ERROR;
+ }
+ // Unlimited memory in cgroups is the literal string 'max' for
+ // some controllers, for example the pids controller.
+ if (strcmp("max", limit_str) == 0) {
+ os::free(limit_str);
+ return (jlong)-1;
+ }
+ julong limit;
+ if (sscanf(limit_str, JULONG_FORMAT, &limit) != 1) {
+ os::free(limit_str);
+ return OSCONTAINER_ERROR;
+ }
+ os::free(limit_str);
+ return (jlong)limit;
+}
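
limit_from_str() is shared by the pids and memory paths; the cgroup interface files hold either the literal `max` or an unsigned number. A standalone model (plain C strings, with -2 standing in for OSCONTAINER_ERROR and no os::free() ownership handling):

```cpp
#include <cstdio>
#include <cstring>

long long limit_from_str(const char* limit_str) {
  if (limit_str == nullptr) {
    return -2;                       // stands in for OSCONTAINER_ERROR
  }
  if (strcmp("max", limit_str) == 0) {
    return -1;                       // unlimited
  }
  unsigned long long limit;
  if (sscanf(limit_str, "%llu", &limit) != 1) {
    return -2;
  }
  return (long long)limit;
}

int main() {
  printf("%lld %lld %lld\n",
         limit_from_str("max"), limit_from_str("4096"), limit_from_str("bogus"));
  // prints: -1 4096 -2
}
```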
diff --git a/src/hotspot/os/linux/cgroupSubsystem_linux.hpp b/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
index 80c147c75764e..1dd3a1f59103e 100644
--- a/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -61,12 +61,13 @@
#define INVALID_CGROUPS_NO_MOUNT 5
#define INVALID_CGROUPS_GENERIC 6
-// Four controllers: cpu, cpuset, cpuacct, memory
-#define CG_INFO_LENGTH 4
+// Five controllers: cpu, cpuset, cpuacct, memory, pids
+#define CG_INFO_LENGTH 5
#define CPUSET_IDX 0
#define CPU_IDX 1
#define CPUACCT_IDX 2
#define MEMORY_IDX 3
+#define PIDS_IDX 4
typedef char * cptr;
@@ -156,8 +157,10 @@ PRAGMA_DIAG_POP
NULL, \
scan_fmt, \
&variable); \
- if (err != 0) \
+ if (err != 0) { \
+ log_trace(os, container)(logstring, (return_type) OSCONTAINER_ERROR); \
return (return_type) OSCONTAINER_ERROR; \
+ } \
\
log_trace(os, container)(logstring, variable); \
}
@@ -238,10 +241,13 @@ class CgroupSubsystem: public CHeapObj {
public:
jlong memory_limit_in_bytes();
int active_processor_count();
+ jlong limit_from_str(char* limit_str);
virtual int cpu_quota() = 0;
virtual int cpu_period() = 0;
virtual int cpu_shares() = 0;
+ virtual jlong pids_max() = 0;
+ virtual jlong pids_current() = 0;
virtual jlong memory_usage_in_bytes() = 0;
virtual jlong memory_and_swap_limit_in_bytes() = 0;
virtual jlong memory_soft_limit_in_bytes() = 0;
diff --git a/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp b/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
index 5638213cd60ed..e259206b41ec0 100644
--- a/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -241,3 +241,43 @@ int CgroupV1Subsystem::cpu_shares() {
return shares;
}
+
+
+char* CgroupV1Subsystem::pids_max_val() {
+ GET_CONTAINER_INFO_CPTR(cptr, _pids, "/pids.max",
+ "Maximum number of tasks is: %s", "%s %*d", pidsmax, 1024);
+ if (pidsmax == NULL) {
+ return NULL;
+ }
+ return os::strdup(pidsmax);
+}
+
+/* pids_max
+ *
+ * Return the maximum number of tasks available to the process
+ *
+ * return:
+ * maximum number of tasks
+ * -1 for unlimited
+ * OSCONTAINER_ERROR for not supported
+ */
+jlong CgroupV1Subsystem::pids_max() {
+ if (_pids == NULL) return OSCONTAINER_ERROR;
+ char * pidsmax_str = pids_max_val();
+ return limit_from_str(pidsmax_str);
+}
+
+/* pids_current
+ *
+ * The number of tasks currently in the cgroup (and its descendants) of the process
+ *
+ * return:
+ * current number of tasks
+ * OSCONTAINER_ERROR for not supported
+ */
+jlong CgroupV1Subsystem::pids_current() {
+ if (_pids == NULL) return OSCONTAINER_ERROR;
+ GET_CONTAINER_INFO(jlong, _pids, "/pids.current",
+ "Current number of tasks is: " JLONG_FORMAT, JLONG_FORMAT, pids_current);
+ return pids_current;
+}
diff --git a/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp b/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
index 79a247a4562da..3811a56b32972 100644
--- a/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -87,6 +87,9 @@ class CgroupV1Subsystem: public CgroupSubsystem {
int cpu_shares();
+ jlong pids_max();
+ jlong pids_current();
+
const char * container_type() {
return "cgroupv1";
}
@@ -101,15 +104,20 @@ class CgroupV1Subsystem: public CgroupSubsystem {
CgroupV1Controller* _cpuset = NULL;
CachingCgroupController* _cpu = NULL;
CgroupV1Controller* _cpuacct = NULL;
+ CgroupV1Controller* _pids = NULL;
+
+ char * pids_max_val();
public:
CgroupV1Subsystem(CgroupV1Controller* cpuset,
CgroupV1Controller* cpu,
CgroupV1Controller* cpuacct,
+ CgroupV1Controller* pids,
CgroupV1MemoryController* memory) {
_cpuset = cpuset;
_cpu = new CachingCgroupController(cpu);
_cpuacct = cpuacct;
+ _pids = pids;
_memory = new CachingCgroupController(memory);
_unlimited_memory = (LONG_MAX / os::vm_page_size()) * os::vm_page_size();
}
diff --git a/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp b/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
index 66192f1d27161..0b1bc9c6cddd8 100644
--- a/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2020, Red Hat Inc.
+ * Copyright (c) 2020, 2022, Red Hat Inc.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -36,7 +36,7 @@
*/
int CgroupV2Subsystem::cpu_shares() {
GET_CONTAINER_INFO(int, _unified, "/cpu.weight",
- "Raw value for CPU shares is: %d", "%d", shares);
+ "Raw value for CPU Shares is: %d", "%d", shares);
// Convert default value of 100 to no shares setup
if (shares == 100) {
log_debug(os, container)("CPU Shares is: %d", -1);
@@ -203,24 +203,6 @@ jlong CgroupV2Subsystem::read_memory_limit_in_bytes() {
return limit;
}
-jlong CgroupV2Subsystem::limit_from_str(char* limit_str) {
- if (limit_str == NULL) {
- return OSCONTAINER_ERROR;
- }
- // Unlimited memory in Cgroups V2 is the literal string 'max'
- if (strcmp("max", limit_str) == 0) {
- os::free(limit_str);
- return (jlong)-1;
- }
- julong limit;
- if (sscanf(limit_str, JULONG_FORMAT, &limit) != 1) {
- os::free(limit_str);
- return OSCONTAINER_ERROR;
- }
- os::free(limit_str);
- return (jlong)limit;
-}
-
char* CgroupV2Subsystem::mem_limit_val() {
GET_CONTAINER_INFO_CPTR(cptr, _unified, "/memory.max",
"Raw value for memory limit is: %s", "%s", mem_limit_str, 1024);
@@ -244,3 +226,39 @@ char* CgroupV2Controller::construct_path(char* mount_path, char *cgroup_path) {
return os::strdup(buf);
}
+char* CgroupV2Subsystem::pids_max_val() {
+ GET_CONTAINER_INFO_CPTR(cptr, _unified, "/pids.max",
+ "Maximum number of tasks is: %s", "%s %*d", pidsmax, 1024);
+ if (pidsmax == NULL) {
+ return NULL;
+ }
+ return os::strdup(pidsmax);
+}
+
+/* pids_max
+ *
+ * Return the maximum number of tasks available to the process
+ *
+ * return:
+ * maximum number of tasks
+ * -1 for unlimited
+ * OSCONTAINER_ERROR for not supported
+ */
+jlong CgroupV2Subsystem::pids_max() {
+ char * pidsmax_str = pids_max_val();
+ return limit_from_str(pidsmax_str);
+}
+
+/* pids_current
+ *
+ * The number of tasks currently in the cgroup (and its descendants) of the process
+ *
+ * return:
+ * current number of tasks
+ * OSCONTAINER_ERROR for not supported
+ */
+jlong CgroupV2Subsystem::pids_current() {
+ GET_CONTAINER_INFO(jlong, _unified, "/pids.current",
+ "Current number of tasks is: " JLONG_FORMAT, JLONG_FORMAT, pids_current);
+ return pids_current;
+}
diff --git a/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp b/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
index bd3380e22e3b4..beb78c2174393 100644
--- a/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
@@ -60,7 +60,7 @@ class CgroupV2Subsystem: public CgroupSubsystem {
char *mem_swp_limit_val();
char *mem_soft_limit_val();
char *cpu_quota_val();
- jlong limit_from_str(char* limit_str);
+ char *pids_max_val();
public:
CgroupV2Subsystem(CgroupController * unified) {
@@ -79,6 +79,9 @@ class CgroupV2Subsystem: public CgroupSubsystem {
jlong memory_max_usage_in_bytes();
char * cpu_cpuset_cpus();
char * cpu_cpuset_memory_nodes();
+ jlong pids_max();
+ jlong pids_current();
+
const char * container_type() {
return "cgroupv2";
}
diff --git a/src/hotspot/os/linux/osContainer_linux.cpp b/src/hotspot/os/linux/osContainer_linux.cpp
index b89cfd676ebe6..eb6f4a77fccbc 100644
--- a/src/hotspot/os/linux/osContainer_linux.cpp
+++ b/src/hotspot/os/linux/osContainer_linux.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2017, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2017, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -129,3 +129,13 @@ int OSContainer::cpu_shares() {
assert(cgroup_subsystem != NULL, "cgroup subsystem not available");
return cgroup_subsystem->cpu_shares();
}
+
+jlong OSContainer::pids_max() {
+ assert(cgroup_subsystem != NULL, "cgroup subsystem not available");
+ return cgroup_subsystem->pids_max();
+}
+
+jlong OSContainer::pids_current() {
+ assert(cgroup_subsystem != NULL, "cgroup subsystem not available");
+ return cgroup_subsystem->pids_current();
+}
diff --git a/src/hotspot/os/linux/osContainer_linux.hpp b/src/hotspot/os/linux/osContainer_linux.hpp
index 21801b7dc4b38..940bc0e3874bf 100644
--- a/src/hotspot/os/linux/osContainer_linux.hpp
+++ b/src/hotspot/os/linux/osContainer_linux.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2017, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2017, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -62,6 +62,8 @@ class OSContainer: AllStatic {
static int cpu_shares();
+ static jlong pids_max();
+ static jlong pids_current();
};
inline bool OSContainer::is_containerized() {
diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp
index 239cd0d7589f0..783814a303be7 100644
--- a/src/hotspot/os/linux/os_linux.cpp
+++ b/src/hotspot/os/linux/os_linux.cpp
@@ -1668,6 +1668,9 @@ void * os::dll_load(const char *filename, char *ebuf, int ebuflen) {
#ifndef EM_RISCV
#define EM_RISCV 243 /* RISC-V */
#endif
+#ifndef EM_LOONGARCH
+ #define EM_LOONGARCH 258 /* LoongArch */
+#endif
static const arch_t arch_array[]={
{EM_386, EM_386, ELFCLASS32, ELFDATA2LSB, (char*)"IA 32"},
@@ -1695,6 +1698,7 @@ void * os::dll_load(const char *filename, char *ebuf, int ebuflen) {
{EM_68K, EM_68K, ELFCLASS32, ELFDATA2MSB, (char*)"M68k"},
{EM_AARCH64, EM_AARCH64, ELFCLASS64, ELFDATA2LSB, (char*)"AARCH64"},
{EM_RISCV, EM_RISCV, ELFCLASS64, ELFDATA2LSB, (char*)"RISC-V"},
+ {EM_LOONGARCH, EM_LOONGARCH, ELFCLASS64, ELFDATA2LSB, (char*)"LoongArch"},
};
#if (defined IA32)
@@ -1731,9 +1735,11 @@ void * os::dll_load(const char *filename, char *ebuf, int ebuflen) {
static Elf32_Half running_arch_code=EM_SH;
#elif (defined RISCV)
static Elf32_Half running_arch_code=EM_RISCV;
+#elif (defined LOONGARCH)
+ static Elf32_Half running_arch_code=EM_LOONGARCH;
#else
#error Method os::dll_load requires that one of following is defined:\
- AARCH64, ALPHA, ARM, AMD64, IA32, IA64, M68K, MIPS, MIPSEL, PARISC, __powerpc__, __powerpc64__, RISCV, S390, SH, __sparc
+ AARCH64, ALPHA, ARM, AMD64, IA32, IA64, LOONGARCH, M68K, MIPS, MIPSEL, PARISC, __powerpc__, __powerpc64__, RISCV, S390, SH, __sparc
#endif
// Identify compatibility class for VM's architecture and library's architecture
@@ -2137,44 +2143,51 @@ void os::Linux::print_system_memory_info(outputStream* st) {
"/sys/kernel/mm/transparent_hugepage/defrag", st);
}
-void os::Linux::print_process_memory_info(outputStream* st) {
-
- st->print_cr("Process Memory:");
-
- // Print virtual and resident set size; peak values; swap; and for
- // rss its components if the kernel is recent enough.
- ssize_t vmsize = -1, vmpeak = -1, vmswap = -1,
- vmrss = -1, vmhwm = -1, rssanon = -1, rssfile = -1, rssshmem = -1;
- const int num_values = 8;
- int num_found = 0;
+bool os::Linux::query_process_memory_info(os::Linux::meminfo_t* info) {
FILE* f = ::fopen("/proc/self/status", "r");
+ const int num_values = sizeof(os::Linux::meminfo_t) / sizeof(size_t);
+ int num_found = 0;
char buf[256];
+ info->vmsize = info->vmpeak = info->vmrss = info->vmhwm = info->vmswap =
+ info->rssanon = info->rssfile = info->rssshmem = -1;
if (f != NULL) {
while (::fgets(buf, sizeof(buf), f) != NULL && num_found < num_values) {
- if ( (vmsize == -1 && sscanf(buf, "VmSize: " SSIZE_FORMAT " kB", &vmsize) == 1) ||
- (vmpeak == -1 && sscanf(buf, "VmPeak: " SSIZE_FORMAT " kB", &vmpeak) == 1) ||
- (vmswap == -1 && sscanf(buf, "VmSwap: " SSIZE_FORMAT " kB", &vmswap) == 1) ||
- (vmhwm == -1 && sscanf(buf, "VmHWM: " SSIZE_FORMAT " kB", &vmhwm) == 1) ||
- (vmrss == -1 && sscanf(buf, "VmRSS: " SSIZE_FORMAT " kB", &vmrss) == 1) ||
- (rssanon == -1 && sscanf(buf, "RssAnon: " SSIZE_FORMAT " kB", &rssanon) == 1) ||
- (rssfile == -1 && sscanf(buf, "RssFile: " SSIZE_FORMAT " kB", &rssfile) == 1) ||
- (rssshmem == -1 && sscanf(buf, "RssShmem: " SSIZE_FORMAT " kB", &rssshmem) == 1)
+ if ( (info->vmsize == -1 && sscanf(buf, "VmSize: " SSIZE_FORMAT " kB", &info->vmsize) == 1) ||
+ (info->vmpeak == -1 && sscanf(buf, "VmPeak: " SSIZE_FORMAT " kB", &info->vmpeak) == 1) ||
+ (info->vmswap == -1 && sscanf(buf, "VmSwap: " SSIZE_FORMAT " kB", &info->vmswap) == 1) ||
+ (info->vmhwm == -1 && sscanf(buf, "VmHWM: " SSIZE_FORMAT " kB", &info->vmhwm) == 1) ||
+ (info->vmrss == -1 && sscanf(buf, "VmRSS: " SSIZE_FORMAT " kB", &info->vmrss) == 1) ||
+ (info->rssanon == -1 && sscanf(buf, "RssAnon: " SSIZE_FORMAT " kB", &info->rssanon) == 1) || // Needs Linux 4.5
+ (info->rssfile == -1 && sscanf(buf, "RssFile: " SSIZE_FORMAT " kB", &info->rssfile) == 1) || // Needs Linux 4.5
+ (info->rssshmem == -1 && sscanf(buf, "RssShmem: " SSIZE_FORMAT " kB", &info->rssshmem) == 1) // Needs Linux 4.5
)
{
num_found ++;
}
}
fclose(f);
+ return true;
+ }
+ return false;
+}
+
+void os::Linux::print_process_memory_info(outputStream* st) {
+
+ st->print_cr("Process Memory:");
- st->print_cr("Virtual Size: " SSIZE_FORMAT "K (peak: " SSIZE_FORMAT "K)", vmsize, vmpeak);
- st->print("Resident Set Size: " SSIZE_FORMAT "K (peak: " SSIZE_FORMAT "K)", vmrss, vmhwm);
- if (rssanon != -1) { // requires kernel >= 4.5
+ // Print virtual and resident set size; peak values; swap; and for
+ // rss its components if the kernel is recent enough.
+ meminfo_t info;
+ if (query_process_memory_info(&info)) {
+ st->print_cr("Virtual Size: " SSIZE_FORMAT "K (peak: " SSIZE_FORMAT "K)", info.vmsize, info.vmpeak);
+ st->print("Resident Set Size: " SSIZE_FORMAT "K (peak: " SSIZE_FORMAT "K)", info.vmrss, info.vmhwm);
+ if (info.rssanon != -1) { // requires kernel >= 4.5
st->print(" (anon: " SSIZE_FORMAT "K, file: " SSIZE_FORMAT "K, shmem: " SSIZE_FORMAT "K)",
- rssanon, rssfile, rssshmem);
+ info.rssanon, info.rssfile, info.rssshmem);
}
st->cr();
- if (vmswap != -1) { // requires kernel >= 2.6.34
- st->print_cr("Swapped out: " SSIZE_FORMAT "K", vmswap);
+ if (info.vmswap != -1) { // requires kernel >= 2.6.34
+ st->print_cr("Swapped out: " SSIZE_FORMAT "K", info.vmswap);
}
} else {
st->print_cr("Could not open /proc/self/status to get process memory related information");
@@ -2195,7 +2208,7 @@ void os::Linux::print_process_memory_info(outputStream* st) {
struct glibc_mallinfo mi = _mallinfo();
total_allocated = (size_t)(unsigned)mi.uordblks;
// Since mallinfo members are int, glibc values may have wrapped. Warn about this.
- might_have_wrapped = (vmrss * K) > UINT_MAX && (vmrss * K) > (total_allocated + UINT_MAX);
+ might_have_wrapped = (info.vmrss * K) > UINT_MAX && (info.vmrss * K) > (total_allocated + UINT_MAX);
}
if (_mallinfo2 != NULL || _mallinfo != NULL) {
st->print_cr("C-Heap outstanding allocations: " SIZE_FORMAT "K%s",
@@ -2220,6 +2233,7 @@ void os::Linux::print_uptime_info(outputStream* st) {
bool os::Linux::print_container_info(outputStream* st) {
if (!OSContainer::is_containerized()) {
+ st->print_cr("container information not found.");
return false;
}
@@ -2308,6 +2322,24 @@ bool os::Linux::print_container_info(outputStream* st) {
st->print_cr("%s", j == OSCONTAINER_ERROR ? "not supported" : "unlimited");
}
+ j = OSContainer::OSContainer::pids_max();
+ st->print("maximum number of tasks: ");
+ if (j > 0) {
+ st->print_cr(JLONG_FORMAT, j);
+ } else {
+ st->print_cr("%s", j == OSCONTAINER_ERROR ? "not supported" : "unlimited");
+ }
+
+ j = OSContainer::OSContainer::pids_current();
+ st->print("current number of tasks: ");
+ if (j > 0) {
+ st->print_cr(JLONG_FORMAT, j);
+ } else {
+ if (j == OSCONTAINER_ERROR) {
+ st->print_cr("not supported");
+ }
+ }
+
return true;
}
@@ -3347,6 +3379,9 @@ bool os::pd_create_stack_guard_pages(char* addr, size_t size) {
if (mincore((address)stack_extent, os::vm_page_size(), vec) == -1) {
// Fallback to slow path on all errors, including EAGAIN
+ assert((uintptr_t)addr >= stack_extent,
+ "Sanity: addr should be larger than extent, " PTR_FORMAT " >= " PTR_FORMAT,
+ p2i(addr), stack_extent);
stack_extent = (uintptr_t) get_stack_commited_bottom(
os::Linux::initial_thread_stack_bottom(),
(size_t)addr - stack_extent);
@@ -4968,9 +5003,7 @@ int os::open(const char *path, int oflag, int mode) {
// create binary file, rewriting existing file if required
int os::create_binary_file(const char* path, bool rewrite_existing) {
int oflags = O_WRONLY | O_CREAT;
- if (!rewrite_existing) {
- oflags |= O_EXCL;
- }
+ oflags |= rewrite_existing ? O_TRUNC : O_EXCL;
return ::open64(path, oflags, S_IREAD | S_IWRITE);
}
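
The change above collapses the two-branch flag setup into one conditional: with rewrite_existing the file is opened with O_TRUNC (truncate an existing file), otherwise with O_EXCL (fail with EEXIST if the file already exists). A minimal standalone sketch of the same flag logic, assuming a POSIX open(2) environment (open_binary_file is an illustrative name, not the JDK function):

    #include <fcntl.h>
    #include <sys/stat.h>

    // Sketch only: mirrors the flag selection in os::create_binary_file.
    static int open_binary_file(const char* path, bool rewrite_existing) {
      int oflags = O_WRONLY | O_CREAT;
      // O_TRUNC silently truncates an existing file; O_EXCL fails instead.
      oflags |= rewrite_existing ? O_TRUNC : O_EXCL;
      return ::open(path, oflags, S_IRUSR | S_IWUSR);
    }
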
diff --git a/src/hotspot/os/linux/os_linux.hpp b/src/hotspot/os/linux/os_linux.hpp
index ada8db6977ea0..692dae042abda 100644
--- a/src/hotspot/os/linux/os_linux.hpp
+++ b/src/hotspot/os/linux/os_linux.hpp
@@ -174,6 +174,23 @@ class Linux {
// Return the namespace pid if so, otherwise -1.
static int get_namespace_pid(int vmid);
+ // Output structure for query_process_memory_info()
+ struct meminfo_t {
+ ssize_t vmsize; // current virtual size
+ ssize_t vmpeak; // peak virtual size
+ ssize_t vmrss; // current resident set size
+ ssize_t vmhwm; // peak resident set size
+ ssize_t vmswap; // swapped out
+ ssize_t rssanon; // resident set size (anonymous mappings, needs 4.5)
+ ssize_t rssfile; // resident set size (file mappings, needs 4.5)
+ ssize_t rssshmem; // resident set size (shared mappings, needs 4.5)
+ };
+
+ // Attempts to query memory information about the current process and return it in the output structure.
+ // May fail (returns false) or succeed (returns true); even on success, not all output fields
+ // may be available. Unavailable fields will contain -1.
+ static bool query_process_memory_info(meminfo_t* info);
+
// Stack repair handling
// none present
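
The new query_process_memory_info() API separates data collection from printing, so other callers (such as the trim command below) can compute deltas. A hedged sketch of the intended calling pattern, assuming HotSpot-internal headers; do_work() is a hypothetical memory-affecting workload:

    os::Linux::meminfo_t before, after;
    if (os::Linux::query_process_memory_info(&before)) {
      do_work();  // hypothetical workload
      if (os::Linux::query_process_memory_info(&after) &&
          before.vmrss != -1 && after.vmrss != -1) {
        tty->print_cr("RSS delta: " SSIZE_FORMAT "K", after.vmrss - before.vmrss);
      }
    }
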
diff --git a/src/hotspot/os/linux/trimCHeapDCmd.cpp b/src/hotspot/os/linux/trimCHeapDCmd.cpp
new file mode 100644
index 0000000000000..ee93ac5e8c8d7
--- /dev/null
+++ b/src/hotspot/os/linux/trimCHeapDCmd.cpp
@@ -0,0 +1,79 @@
+/*
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+#include "logging/log.hpp"
+#include "runtime/os.hpp"
+#include "utilities/debug.hpp"
+#include "utilities/ostream.hpp"
+#include "trimCHeapDCmd.hpp"
+
+#include <malloc.h>
+
+void TrimCLibcHeapDCmd::execute(DCmdSource source, TRAPS) {
+#ifdef __GLIBC__
+ stringStream ss_report(1024); // Note: before calling trim
+
+ os::Linux::meminfo_t info1;
+ os::Linux::meminfo_t info2;
+ // Query memory before...
+ bool have_info1 = os::Linux::query_process_memory_info(&info1);
+
+ _output->print_cr("Attempting trim...");
+ ::malloc_trim(0);
+ _output->print_cr("Done.");
+
+ // ...and after trim.
+ bool have_info2 = os::Linux::query_process_memory_info(&info2);
+
+ // Print report both to output stream as well to UL
+ bool wrote_something = false;
+ if (have_info1 && have_info2) {
+ if (info1.vmsize != -1 && info2.vmsize != -1) {
+ ss_report.print_cr("Virtual size before: " SSIZE_FORMAT "k, after: " SSIZE_FORMAT "k, (" SSIZE_FORMAT "k)",
+ info1.vmsize, info2.vmsize, (info2.vmsize - info1.vmsize));
+ wrote_something = true;
+ }
+ if (info1.vmrss != -1 && info2.vmrss != -1) {
+ ss_report.print_cr("RSS before: " SSIZE_FORMAT "k, after: " SSIZE_FORMAT "k, (" SSIZE_FORMAT "k)",
+ info1.vmrss, info2.vmrss, (info2.vmrss - info1.vmrss));
+ wrote_something = true;
+ }
+ if (info1.vmswap != -1 && info2.vmswap != -1) {
+ ss_report.print_cr("Swap before: " SSIZE_FORMAT "k, after: " SSIZE_FORMAT "k, (" SSIZE_FORMAT "k)",
+ info1.vmswap, info2.vmswap, (info2.vmswap - info1.vmswap));
+ wrote_something = true;
+ }
+ }
+ if (!wrote_something) {
+ ss_report.print_raw("No details available.");
+ }
+
+ _output->print_raw(ss_report.base());
+ log_info(os)("malloc_trim:\n%s", ss_report.base());
+#else
+ _output->print_cr("Not available.");
+#endif
+}
diff --git a/src/hotspot/os/linux/trimCHeapDCmd.hpp b/src/hotspot/os/linux/trimCHeapDCmd.hpp
new file mode 100644
index 0000000000000..4c5b5cc2219ca
--- /dev/null
+++ b/src/hotspot/os/linux/trimCHeapDCmd.hpp
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef OS_LINUX_TRIMCHEAPDCMD_HPP
+#define OS_LINUX_TRIMCHEAPDCMD_HPP
+
+#include "services/diagnosticCommand.hpp"
+
+class outputStream;
+
+class TrimCLibcHeapDCmd : public DCmd {
+public:
+ TrimCLibcHeapDCmd(outputStream* output, bool heap) : DCmd(output, heap) {}
+ static const char* name() {
+ return "System.trim_native_heap";
+ }
+ static const char* description() {
+ return "Attempts to free up memory by trimming the C-heap.";
+ }
+ static const char* impact() {
+ return "Low";
+ }
+ static const JavaPermission permission() {
+ JavaPermission p = { "java.lang.management.ManagementPermission", "control", NULL };
+ return p;
+ }
+ virtual void execute(DCmdSource source, TRAPS);
+};
+
+#endif // OS_LINUX_TRIMCHEAPDCMD_HPP
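
For System.trim_native_heap to be reachable from jcmd, the command class also has to be registered with the DCmd framework. The registration site is not part of this excerpt; the sketch below follows the usual pattern in diagnosticCommand.cpp and is an assumption, not a quote from the patch:

    // Assumed registration, Linux-only, following the standard DCmd pattern.
    #ifdef LINUX
      DCmdFactory::register_DCmdFactory(
          new DCmdFactoryImpl<TrimCLibcHeapDCmd>(full_export, true, false));
    #endif

Once registered, "jcmd <pid> System.trim_native_heap" would trigger the trim and print the before/after report built in execute().
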
diff --git a/src/hotspot/os/posix/os_posix.cpp b/src/hotspot/os/posix/os_posix.cpp
index ae058dd345b85..9eb1fcbcc0b22 100644
--- a/src/hotspot/os/posix/os_posix.cpp
+++ b/src/hotspot/os/posix/os_posix.cpp
@@ -1885,7 +1885,11 @@ int os::fork_and_exec(const char* cmd, bool prefer_vfork) {
// Use always vfork on AIX, since its safe and helps with analyzing OOM situations.
// Otherwise leave it up to the caller.
AIX_ONLY(prefer_vfork = true;)
+ #ifdef __APPLE__
+ pid = ::fork();
+ #else
pid = prefer_vfork ? ::vfork() : ::fork();
+ #endif
if (pid < 0) {
// fork failed
diff --git a/src/hotspot/os/posix/signals_posix.cpp b/src/hotspot/os/posix/signals_posix.cpp
index c2e5d50bb95a8..895c3cc09ae88 100644
--- a/src/hotspot/os/posix/signals_posix.cpp
+++ b/src/hotspot/os/posix/signals_posix.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -562,6 +562,11 @@ int JVM_HANDLE_XXX_SIGNAL(int sig, siginfo_t* info,
{
assert(info != NULL && ucVoid != NULL, "sanity");
+ if (sig == BREAK_SIGNAL) {
+ assert(!ReduceSignalUsage, "Should not happen with -Xrs/-XX:+ReduceSignalUsage");
+ return true; // ignore it
+ }
+
// Note: it's not uncommon that JNI code uses signal/sigset to install,
// then restore certain signal handler (e.g. to temporarily block SIGPIPE,
// or have a SIGILL handler when detecting CPU type). When that happens,
@@ -1145,7 +1150,11 @@ void os::print_siginfo(outputStream* os, const void* si0) {
os->print(", si_addr: " PTR_FORMAT, p2i(si->si_addr));
#ifdef SIGPOLL
} else if (sig == SIGPOLL) {
- os->print(", si_band: %ld", si->si_band);
+ // siginfo_t.si_band is defined as "long", and it is so in most
+ // implementations. But SPARC64 glibc has a bug: si_band is "int".
+ // Cast si_band to "long" to prevent format specifier mismatch.
+ // See: https://sourceware.org/bugzilla/show_bug.cgi?id=23821
+ os->print(", si_band: %ld", (long) si->si_band);
#endif
}
}
@@ -1193,7 +1202,7 @@ int os::get_signal_number(const char* signal_name) {
return -1;
}
-void set_signal_handler(int sig) {
+void set_signal_handler(int sig, bool do_check = true) {
// Check for overwrite.
struct sigaction oldAct;
sigaction(sig, (struct sigaction*)NULL, &oldAct);
@@ -1237,7 +1246,7 @@ void set_signal_handler(int sig) {
// Save handler setup for later checking
vm_handlers.set(sig, &sigAct);
- do_check_signal_periodically[sig] = true;
+ do_check_signal_periodically[sig] = do_check;
int ret = sigaction(sig, &sigAct, &oldAct);
assert(ret == 0, "check");
@@ -1275,7 +1284,12 @@ void install_signal_handlers() {
set_signal_handler(SIGFPE);
PPC64_ONLY(set_signal_handler(SIGTRAP);)
set_signal_handler(SIGXFSZ);
-
+ if (!ReduceSignalUsage) {
+ // This is just for the early initialization phase. Intercepting the signal here reduces the risk
+ // that an attach client accidentally forces HotSpot to quit prematurely. We skip the periodic
+ // check because late initialization will overwrite the handler with UserHandler.
+ set_signal_handler(BREAK_SIGNAL, false);
+ }
#if defined(__APPLE__)
// lldb (gdb) installs both standard BSD signal handlers, and mach exception
// handlers. By replacing the existing task exception handler, we disable lldb's mach
diff --git a/src/hotspot/os/windows/osThread_windows.hpp b/src/hotspot/os/windows/osThread_windows.hpp
index 88dedaffe3ed2..74b5010066339 100644
--- a/src/hotspot/os/windows/osThread_windows.hpp
+++ b/src/hotspot/os/windows/osThread_windows.hpp
@@ -34,7 +34,6 @@
HANDLE _thread_handle; // Win32 thread handle
HANDLE _interrupt_event; // Event signalled on thread interrupt for use by
// Process.waitFor().
- ThreadState _last_state;
public:
// The following will only apply in the Win32 implementation, and should only
@@ -58,12 +57,6 @@
}
#endif // ASSERT
- // This is a temporary fix for the thread states during
- // suspend/resume until we throw away OSThread completely.
- // NEEDS_CLEANUP
- void set_last_state(ThreadState state) { _last_state = state; }
- ThreadState get_last_state() { return _last_state; }
-
private:
void pd_initialize();
void pd_destroy();
diff --git a/src/hotspot/os/windows/os_windows.cpp b/src/hotspot/os/windows/os_windows.cpp
index de796f97a43c2..d6c9c542699b0 100644
--- a/src/hotspot/os/windows/os_windows.cpp
+++ b/src/hotspot/os/windows/os_windows.cpp
@@ -681,6 +681,9 @@ bool os::create_thread(Thread* thread, ThreadType thr_type,
return false;
}
+ // Initial state is ALLOCATED but not INITIALIZED
+ osthread->set_state(ALLOCATED);
+
// Initialize the JDK library's interrupt event.
// This should really be done when OSThread is constructed,
// but there is no way for a constructor to report failure to
@@ -777,7 +780,7 @@ bool os::create_thread(Thread* thread, ThreadType thr_type,
osthread->set_thread_handle(thread_handle);
osthread->set_thread_id(thread_id);
- // Initial thread state is INITIALIZED, not SUSPENDED
+ // Thread state now is INITIALIZED, not SUSPENDED
osthread->set_state(INITIALIZED);
// The thread is returned suspended (in state INITIALIZED), and is started higher up in the call chain
@@ -885,8 +888,62 @@ uint os::processor_id() {
return (uint)GetCurrentProcessorNumber();
}
+// For dynamic lookup of SetThreadDescription API
+typedef HRESULT (WINAPI *SetThreadDescriptionFnPtr)(HANDLE, PCWSTR);
+typedef HRESULT (WINAPI *GetThreadDescriptionFnPtr)(HANDLE, PWSTR*);
+static SetThreadDescriptionFnPtr _SetThreadDescription = NULL;
+DEBUG_ONLY(static GetThreadDescriptionFnPtr _GetThreadDescription = NULL;)
+
+// forward decl.
+static errno_t convert_to_unicode(char const* char_path, LPWSTR* unicode_path);
+
void os::set_native_thread_name(const char *name) {
+ // From Windows 10 and Windows 2016 server, we have a direct API
+ // for setting the thread name/description:
+ // https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-setthreaddescription
+
+ if (_SetThreadDescription != NULL) {
+ // SetThreadDescription takes a PCWSTR but we have conversion routines that produce
+ // LPWSTR. The only difference is that PCWSTR is a pointer to const WCHAR.
+ LPWSTR unicode_name;
+ errno_t err = convert_to_unicode(name, &unicode_name);
+ if (err == ERROR_SUCCESS) {
+ HANDLE current = GetCurrentThread();
+ HRESULT hr = _SetThreadDescription(current, unicode_name);
+ if (FAILED(hr)) {
+ log_debug(os, thread)("set_native_thread_name: SetThreadDescription failed - falling back to debugger method");
+ FREE_C_HEAP_ARRAY(WCHAR, unicode_name);
+ } else {
+ log_trace(os, thread)("set_native_thread_name: SetThreadDescription succeeded - new name: %s", name);
+
+#ifdef ASSERT
+ // For verification purposes in a debug build we read the thread name back and check it.
+ PWSTR thread_name;
+ HRESULT hr2 = _GetThreadDescription(current, &thread_name);
+ if (FAILED(hr2)) {
+ log_debug(os, thread)("set_native_thread_name: GetThreadDescription failed!");
+ } else {
+ int res = CompareStringW(LOCALE_USER_DEFAULT,
+ 0, // no special comparison rules
+ unicode_name,
+ -1, // null-terminated
+ thread_name,
+ -1 // null-terminated
+ );
+ assert(res == CSTR_EQUAL,
+ "Name strings were not the same - set: %ls, but read: %ls", unicode_name, thread_name);
+ LocalFree(thread_name);
+ }
+#endif
+ FREE_C_HEAP_ARRAY(WCHAR, unicode_name);
+ return;
+ }
+ } else {
+ log_debug(os, thread)("set_native_thread_name: convert_to_unicode failed - falling back to debugger method");
+ }
+ }
+
// See: http://msdn.microsoft.com/en-us/library/xcb2z8hs.aspx
//
// Note that unfortunately this only works if the process
@@ -895,6 +952,7 @@ void os::set_native_thread_name(const char *name) {
// If there is no debugger attached skip raising the exception
if (!IsDebuggerPresent()) {
+ log_debug(os, thread)("set_native_thread_name: no debugger present so unable to set thread name");
return;
}
@@ -1808,11 +1866,19 @@ void os::win32::print_windows_version(outputStream* st) {
case 10000:
if (is_workstation) {
- st->print("10");
+ if (build_number >= 22000) {
+ st->print("11");
+ } else {
+ st->print("10");
+ }
} else {
- // distinguish Windows Server 2016 and 2019 by build number
- // Windows server 2019 GA 10/2018 build number is 17763
- if (build_number > 17762) {
+ // distinguish Windows Server by build number
+ // - 2016 GA 10/2016 build: 14393
+ // - 2019 GA 11/2018 build: 17763
+ // - 2022 GA 08/2021 build: 20348
+ if (build_number > 20347) {
+ st->print("Server 2022");
+ } else if (build_number > 17762) {
st->print("Server 2019");
} else {
st->print("Server 2016");
@@ -4193,6 +4259,7 @@ extern "C" {
static jint initSock();
+
// this is called _after_ the global arguments have been parsed
jint os::init_2(void) {
@@ -4311,6 +4378,24 @@ jint os::init_2(void) {
jdk_misc_signal_init();
}
+ // Lookup SetThreadDescription - the docs state we must use runtime-linking of
+ // kernelbase.dll, so that is what we do.
+ HINSTANCE _kernelbase = LoadLibrary(TEXT("kernelbase.dll"));
+ if (_kernelbase != NULL) {
+ _SetThreadDescription =
+ reinterpret_cast<SetThreadDescriptionFnPtr>(
+ GetProcAddress(_kernelbase,
+ "SetThreadDescription"));
+#ifdef ASSERT
+ _GetThreadDescription =
+ reinterpret_cast<GetThreadDescriptionFnPtr>(
+ GetProcAddress(_kernelbase,
+ "GetThreadDescription"));
+#endif
+ }
+ log_info(os, thread)("The SetThreadDescription API is%s available.", _SetThreadDescription == NULL ? " not" : "");
+
+
return JNI_OK;
}
@@ -4732,9 +4817,7 @@ bool os::dir_is_empty(const char* path) {
// create binary file, rewriting existing file if required
int os::create_binary_file(const char* path, bool rewrite_existing) {
int oflags = _O_CREAT | _O_WRONLY | _O_BINARY;
- if (!rewrite_existing) {
- oflags |= _O_EXCL;
- }
+ oflags |= rewrite_existing ? _O_TRUNC : _O_EXCL;
return ::open(path, oflags, _S_IREAD | _S_IWRITE);
}
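
SetThreadDescription only exists on Windows 10 1607 / Server 2016 and later, which is why it is resolved at runtime rather than linked directly. A self-contained sketch of that lookup pattern (lookup_set_thread_description is an illustrative name):

    #include <windows.h>

    typedef HRESULT (WINAPI* SetThreadDescriptionFn)(HANDLE, PCWSTR);

    // Runtime lookup keeps the binary loadable on older Windows versions
    // where kernelbase.dll does not export SetThreadDescription.
    static SetThreadDescriptionFn lookup_set_thread_description() {
      HMODULE m = LoadLibraryW(L"kernelbase.dll");
      return (m != NULL)
          ? reinterpret_cast<SetThreadDescriptionFn>(
                GetProcAddress(m, "SetThreadDescription"))
          : NULL;
    }
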
diff --git a/src/hotspot/os_cpu/bsd_aarch64/atomic_bsd_aarch64.hpp b/src/hotspot/os_cpu/bsd_aarch64/atomic_bsd_aarch64.hpp
index e0c2961e4842c..fba59870d7c50 100644
--- a/src/hotspot/os_cpu/bsd_aarch64/atomic_bsd_aarch64.hpp
+++ b/src/hotspot/os_cpu/bsd_aarch64/atomic_bsd_aarch64.hpp
@@ -27,6 +27,8 @@
#ifndef OS_CPU_BSD_AARCH64_ATOMIC_BSD_AARCH64_HPP
#define OS_CPU_BSD_AARCH64_ATOMIC_BSD_AARCH64_HPP
+#include "utilities/debug.hpp"
+
// Implementation of class atomic
// Note that memory_order_conservative requires a full barrier after atomic stores.
// See https://patchwork.kernel.org/patch/3575821/
@@ -64,17 +66,40 @@ inline T Atomic::PlatformCmpxchg<byte_size>::operator()(T volatile* dest,
T exchange_value,
atomic_memory_order order) const {
STATIC_ASSERT(byte_size == sizeof(T));
- if (order == memory_order_relaxed) {
+ if (order == memory_order_conservative) {
T value = compare_value;
+ FULL_MEM_BARRIER;
__atomic_compare_exchange(dest, &value, &exchange_value, /*weak*/false,
__ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ FULL_MEM_BARRIER;
return value;
} else {
+ STATIC_ASSERT (
+ // The modes that align with C++11 are intended to
+ // follow the same semantics.
+ memory_order_relaxed == __ATOMIC_RELAXED &&
+ memory_order_acquire == __ATOMIC_ACQUIRE &&
+ memory_order_release == __ATOMIC_RELEASE &&
+ memory_order_acq_rel == __ATOMIC_ACQ_REL &&
+ memory_order_seq_cst == __ATOMIC_SEQ_CST);
+
+ // Some sanity checking on the memory order. It makes no
+ // sense to have a release operation for a store that never
+ // happens.
+ int failure_memory_order;
+ switch (order) {
+ case memory_order_release:
+ failure_memory_order = memory_order_relaxed; break;
+ case memory_order_acq_rel:
+ failure_memory_order = memory_order_acquire; break;
+ default:
+ failure_memory_order = order;
+ }
+ assert(failure_memory_order <= order, "must be");
+
T value = compare_value;
- FULL_MEM_BARRIER;
__atomic_compare_exchange(dest, &value, &exchange_value, /*weak*/false,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
- FULL_MEM_BARRIER;
+ order, failure_memory_order);
return value;
}
}
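
The switch above derives a failure order from the success order because a failed compare-and-exchange performs no store: a release (or acq_rel) success order must be downgraded for the failure case. A small standalone illustration using the same GCC/Clang builtin (cas_release is an illustrative name):

    // The failure order must be no stronger than the success order and must
    // not contain a release component, since no store happens on failure.
    static int cas_release(int* dest, int expected, int desired) {
      int e = expected;
      __atomic_compare_exchange_n(dest, &e, desired, /*weak=*/false,
                                  __ATOMIC_RELEASE, __ATOMIC_RELAXED);
      return e;  // old value on failure, 'expected' on success
    }
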
diff --git a/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp b/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
index 21193e181f2b7..a4d416d384e29 100644
--- a/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
+++ b/src/hotspot/os_cpu/bsd_aarch64/pauth_bsd_aarch64.inline.hpp
@@ -22,8 +22,8 @@
*
*/
-#ifndef OS_CPU_LINUX_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
-#define OS_CPU_LINUX_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
+#ifndef OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
+#define OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
#ifdef __APPLE__
#include <ptrauth.h>
@@ -49,5 +49,5 @@ inline address pauth_strip_pointer(address ptr) {
#undef XPACLRI
-#endif // OS_CPU_LINUX_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
+#endif // OS_CPU_BSD_AARCH64_PAUTH_BSD_AARCH64_INLINE_HPP
diff --git a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
index f5d2c2b69c222..3007587d9c22c 100644
--- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
@@ -112,7 +112,55 @@ aarch64_atomic_cmpxchg_8_default_impl:
dmb ish
ret
- .globl aarch64_atomic_cmpxchg_1_relaxed_default_impl
+ .globl aarch64_atomic_cmpxchg_4_release_default_impl
+ .align 5
+aarch64_atomic_cmpxchg_4_release_default_impl:
+ prfm pstl1strm, [x0]
+0: ldxr w3, [x0]
+ cmp w3, w1
+ b.ne 1f
+ stlxr w8, w2, [x0]
+ cbnz w8, 0b
+1: mov w0, w3
+ ret
+
+ .globl aarch64_atomic_cmpxchg_8_release_default_impl
+ .align 5
+aarch64_atomic_cmpxchg_8_release_default_impl:
+ prfm pstl1strm, [x0]
+0: ldxr x3, [x0]
+ cmp x3, x1
+ b.ne 1f
+ stlxr w8, x2, [x0]
+ cbnz w8, 0b
+1: mov x0, x3
+ ret
+
+ .globl aarch64_atomic_cmpxchg_4_seq_cst_default_impl
+ .align 5
+aarch64_atomic_cmpxchg_4_seq_cst_default_impl:
+ prfm pstl1strm, [x0]
+0: ldaxr w3, [x0]
+ cmp w3, w1
+ b.ne 1f
+ stlxr w8, w2, [x0]
+ cbnz w8, 0b
+1: mov w0, w3
+ ret
+
+ .globl aarch64_atomic_cmpxchg_8_seq_cst_default_impl
+ .align 5
+aarch64_atomic_cmpxchg_8_seq_cst_default_impl:
+ prfm pstl1strm, [x0]
+0: ldaxr x3, [x0]
+ cmp x3, x1
+ b.ne 1f
+ stlxr w8, x2, [x0]
+ cbnz w8, 0b
+1: mov x0, x3
+ ret
+
+ .globl aarch64_atomic_cmpxchg_1_relaxed_default_impl
.align 5
aarch64_atomic_cmpxchg_1_relaxed_default_impl:
prfm pstl1strm, [x0]
diff --git a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp
index 77e860ed5ec85..316e877ec1f64 100644
--- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp
+++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.hpp
@@ -151,6 +151,11 @@ inline T Atomic::PlatformCmpxchg<4>::operator()(T volatile* dest,
switch (order) {
case memory_order_relaxed:
stub = aarch64_atomic_cmpxchg_4_relaxed_impl; break;
+ case memory_order_release:
+ stub = aarch64_atomic_cmpxchg_4_release_impl; break;
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ stub = aarch64_atomic_cmpxchg_4_seq_cst_impl; break;
default:
stub = aarch64_atomic_cmpxchg_4_impl; break;
}
@@ -169,6 +174,11 @@ inline T Atomic::PlatformCmpxchg<8>::operator()(T volatile* dest,
switch (order) {
case memory_order_relaxed:
stub = aarch64_atomic_cmpxchg_8_relaxed_impl; break;
+ case memory_order_release:
+ stub = aarch64_atomic_cmpxchg_8_release_impl; break;
+ case memory_order_acq_rel:
+ case memory_order_seq_cst:
+ stub = aarch64_atomic_cmpxchg_8_seq_cst_impl; break;
default:
stub = aarch64_atomic_cmpxchg_8_impl; break;
}
diff --git a/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp b/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp
index b3c60b6e62482..6b09069e09dd7 100644
--- a/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp
+++ b/src/hotspot/os_cpu/linux_aarch64/os_linux_aarch64.cpp
@@ -382,7 +382,20 @@ int os::extra_bang_size_in_bytes() {
extern "C" {
int SpinPause() {
- return 0;
+ using spin_wait_func_ptr_t = void (*)();
+ spin_wait_func_ptr_t func = CAST_TO_FN_PTR(spin_wait_func_ptr_t, StubRoutines::aarch64::spin_wait());
+ assert(func != nullptr, "StubRoutines::aarch64::spin_wait must not be null.");
+ (*func)();
+ // If StubRoutines::aarch64::spin_wait consists of only a RET,
+ // SpinPause can be considered implemented. There will be a sequence
+ // of instructions for:
+ // - call of SpinPause
+ // - load of the StubRoutines::aarch64::spin_wait stub pointer
+ // - indirect call of the stub
+ // - return from the stub
+ // - return from SpinPause
+ // So '1' is always returned.
+ return 1;
}
void _Copy_conjoint_jshorts_atomic(const jshort* from, jshort* to, size_t count) {
diff --git a/src/hotspot/os_cpu/linux_ppc/gc/z/zSyscall_linux_ppc.hpp b/src/hotspot/os_cpu/linux_ppc/gc/z/zSyscall_linux_ppc.hpp
new file mode 100644
index 0000000000000..5950b52136db8
--- /dev/null
+++ b/src/hotspot/os_cpu/linux_ppc/gc/z/zSyscall_linux_ppc.hpp
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021 SAP SE. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+
+#ifndef OS_CPU_LINUX_PPC_GC_Z_ZSYSCALL_LINUX_PPC_HPP
+#define OS_CPU_LINUX_PPC_GC_Z_ZSYSCALL_LINUX_PPC_HPP
+
+#include <sys/syscall.h>
+
+//
+// Support for building on older Linux systems
+//
+
+
+#ifndef SYS_memfd_create
+#define SYS_memfd_create 360
+#endif
+#ifndef SYS_fallocate
+#define SYS_fallocate 309
+#endif
+
+#endif // OS_CPU_LINUX_PPC_GC_Z_ZSYSCALL_LINUX_PPC_HPP
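
These fallback numbers let ZGC issue the system calls directly through syscall(2) on older Linux systems whose headers lack the definitions. A hedged sketch of that invocation pattern (z_memfd_create is an illustrative wrapper name):

    #include <sys/syscall.h>
    #include <unistd.h>

    // Raw syscall invocation, usable even when glibc provides no
    // memfd_create() wrapper.
    static int z_memfd_create(const char* name, unsigned int flags) {
      return (int)syscall(SYS_memfd_create, name, flags);
    }
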
diff --git a/src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp b/src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp
index 9f77945664021..15f6220fc81dc 100644
--- a/src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp
+++ b/src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012, 2019 SAP SE. All rights reserved.
+ * Copyright (c) 1997, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2022 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -35,6 +35,8 @@ frame JavaThread::pd_last_frame() {
address pc = _anchor.last_Java_pc();
  // Last_Java_pc is not set if we come here from compiled code.
+ // Assume spill slot for link register contains a suitable pc.
+ // Should have been filled by method entry code.
if (pc == NULL) {
pc = (address) *(sp + 2);
}
@@ -56,14 +58,26 @@ bool JavaThread::pd_get_top_frame_for_profiling(frame* fr_addr, void* ucontext,
// if we were running Java code when SIGPROF came in.
if (isInJava) {
ucontext_t* uc = (ucontext_t*) ucontext;
- frame ret_frame((intptr_t*)uc->uc_mcontext.regs->gpr[1/*REG_SP*/],
- (address)uc->uc_mcontext.regs->nip);
+ address pc = (address)uc->uc_mcontext.regs->nip;
- if (ret_frame.pc() == NULL) {
+ if (pc == NULL) {
// ucontext wasn't useful
return false;
}
+ frame ret_frame((intptr_t*)uc->uc_mcontext.regs->gpr[1/*REG_SP*/], pc);
+
+ if (ret_frame.fp() == NULL) {
+ // The found frame does not have a valid frame pointer.
+ // Bail out because this will create big trouble later on, either
+ // - when using istate, calculated as (NULL - ijava_state_size) or
+ // - when using fp() directly in safe_for_sender()
+ //
+  // There is no conclusive description (yet) of how this could happen, but it does.
+ // For more details on what was observed, see thread_linux_s390.cpp
+ return false;
+ }
+
if (ret_frame.is_interpreted_frame()) {
frame::ijava_state *istate = ret_frame.get_ijava_state();
const Method *m = (const Method*)(istate->method);
diff --git a/src/hotspot/os_cpu/linux_s390/thread_linux_s390.cpp b/src/hotspot/os_cpu/linux_s390/thread_linux_s390.cpp
index eeaf2f47fc607..3b16096784f22 100644
--- a/src/hotspot/os_cpu/linux_s390/thread_linux_s390.cpp
+++ b/src/hotspot/os_cpu/linux_s390/thread_linux_s390.cpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 2016, 2021, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016, 2019 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2022 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -35,6 +35,8 @@ frame JavaThread::pd_last_frame() {
address pc = _anchor.last_Java_pc();
  // Last_Java_pc is not set if we come here from compiled code.
+ // Assume spill slot for Z_R14 (return register) contains a suitable pc.
+ // Should have been filled by method entry code.
if (pc == NULL) {
pc = (address) *(sp + 14);
}
@@ -51,19 +53,55 @@ bool JavaThread::pd_get_top_frame_for_profiling(frame* fr_addr, void* ucontext,
return true;
}
+ // At this point, we don't have a last_Java_frame, so
+ // we try to glean some information out of the ucontext
+ // if we were running Java code when SIGPROF came in.
if (isInJava) {
ucontext_t* uc = (ucontext_t*) ucontext;
- frame ret_frame((intptr_t*)uc->uc_mcontext.gregs[15/*Z_SP*/],
- (address)uc->uc_mcontext.psw.addr);
+ address pc = (address)uc->uc_mcontext.psw.addr;
- if (ret_frame.pc() == NULL) {
+ if (pc == NULL) {
// ucontext wasn't useful
return false;
}
+ frame ret_frame((intptr_t*)uc->uc_mcontext.gregs[15/*Z_SP*/], pc);
+
+ if (ret_frame.fp() == NULL) {
+ // The found frame does not have a valid frame pointer.
+ // Bail out because this will create big trouble later on, either
+  // - when using istate, calculated as (NULL - z_ijava_state_size (= 0x70 (dbg) or 0x68 (rel))), or
+ // - when using fp() directly in safe_for_sender()
+ //
+  // There is no conclusive description (yet) of how this could happen, but it does:
+ //
+ // We observed a SIGSEGV with the following stack trace (openjdk.jdk11u-dev, 2021-07-07, linuxs390x fastdebug)
+ // V [libjvm.so+0x12c8f12] JavaThread::pd_get_top_frame_for_profiling(frame*, void*, bool)+0x142
+ // V [libjvm.so+0xb1020c] JfrGetCallTrace::get_topframe(void*, frame&)+0x3c
+ // V [libjvm.so+0xba0b08] OSThreadSampler::protected_task(os::SuspendedThreadTaskContext const&)+0x98
+ // V [libjvm.so+0xff33c4] os::SuspendedThreadTask::internal_do_task()+0x14c
+ // V [libjvm.so+0xfe3c9c] os::SuspendedThreadTask::run()+0x24
+ // V [libjvm.so+0xba0c66] JfrThreadSampleClosure::sample_thread_in_java(JavaThread*, JfrStackFrame*, unsigned int)+0x66
+ // V [libjvm.so+0xba1718] JfrThreadSampleClosure::do_sample_thread(JavaThread*, JfrStackFrame*, unsigned int, JfrSampleType)+0x278
+ // V [libjvm.so+0xba4f54] JfrThreadSampler::task_stacktrace(JfrSampleType, JavaThread**) [clone .constprop.62]+0x284
+ // V [libjvm.so+0xba5e54] JfrThreadSampler::run()+0x2ec
+ // V [libjvm.so+0x12adc9c] Thread::call_run()+0x9c
+ // V [libjvm.so+0xff5ab0] thread_native_entry(Thread*)+0x128
+ // siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0xfffffffffffff000
+ // failing instruction: e320 6008 0004 LG r2,8(r0,r6)
+ // contents of r6: 0xffffffffffffff90
+ //
+ // Here is the sequence of what happens:
+ // - ret_frame is constructed with _fp == NULL (for whatever reason)
+  // - ijava_state_unchecked() calculates its result as
+ // istate = fp() - z_ijava_state_size() = NULL - 0x68 DEBUG_ONLY(-8)
+ // - istate->method dereferences memory at offset 8 from istate
+ return false;
+ }
+
if (ret_frame.is_interpreted_frame()) {
frame::z_ijava_state* istate = ret_frame.ijava_state_unchecked();
- if (is_in_full_stack((address)istate)) {
+ if (!is_in_full_stack((address)istate)) {
return false;
}
const Method *m = (const Method*)(istate->method);
diff --git a/src/hotspot/os_cpu/linux_x86/globals_linux_x86.hpp b/src/hotspot/os_cpu/linux_x86/globals_linux_x86.hpp
index f5fdd6399fe7f..8f1f64c2d9e99 100644
--- a/src/hotspot/os_cpu/linux_x86/globals_linux_x86.hpp
+++ b/src/hotspot/os_cpu/linux_x86/globals_linux_x86.hpp
@@ -34,7 +34,13 @@ define_pd_global(intx, CompilerThreadStackSize, 1024);
define_pd_global(intx, ThreadStackSize, 1024); // 0 => use system default
define_pd_global(intx, VMThreadStackSize, 1024);
#else
-define_pd_global(intx, CompilerThreadStackSize, 512);
+// Some tests in debug VM mode run out of compiler thread stack.
+// Observed on some x86_32 VarHandles tests during escape analysis.
+#ifdef ASSERT
+define_pd_global(intx, CompilerThreadStackSize, 768);
+#else
+define_pd_global(intx, CompilerThreadStackSize, 512);
+#endif
// ThreadStackSize 320 allows a couple of test cases to run while
// keeping the number of threads that can be created high. System
// default ThreadStackSize appears to be 512 which is too big.
diff --git a/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp b/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp
index 0360bcb6943a0..51a7fa8b0fc15 100644
--- a/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp
+++ b/src/hotspot/os_cpu/linux_zero/os_linux_zero.cpp
@@ -199,6 +199,20 @@ size_t os::Posix::default_stack_size(os::ThreadType thr_type) {
}
static void current_stack_region(address *bottom, size_t *size) {
+ if (os::is_primordial_thread()) {
+  // The primordial thread needs special handling because pthread_getattr_np()
+  // may return a bogus value.
+ address stack_bottom = os::Linux::initial_thread_stack_bottom();
+ size_t stack_bytes = os::Linux::initial_thread_stack_size();
+
+ assert(os::current_stack_pointer() >= stack_bottom, "should do");
+ assert(os::current_stack_pointer() < stack_bottom + stack_bytes, "should do");
+
+ *bottom = stack_bottom;
+ *size = stack_bytes;
+ return;
+ }
+
pthread_attr_t attr;
int res = pthread_getattr_np(pthread_self(), &attr);
if (res != 0) {
@@ -247,18 +261,6 @@ static void current_stack_region(address *bottom, size_t *size) {
pthread_attr_destroy(&attr);
- // The initial thread has a growable stack, and the size reported
- // by pthread_attr_getstack is the maximum size it could possibly
- // be given what currently mapped. This can be huge, so we cap it.
- if (os::is_primordial_thread()) {
- stack_bytes = stack_top - stack_bottom;
-
- if (stack_bytes > JavaThread::stack_size_at_create())
- stack_bytes = JavaThread::stack_size_at_create();
-
- stack_bottom = stack_top - stack_bytes;
- }
-
assert(os::current_stack_pointer() >= stack_bottom, "should do");
assert(os::current_stack_pointer() < stack_top, "should do");
diff --git a/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp b/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
index bf1d2aa99e1c8..844291ee1e412 100644
--- a/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
+++ b/src/hotspot/os_cpu/windows_aarch64/pauth_windows_aarch64.inline.hpp
@@ -22,13 +22,13 @@
*
*/
-#ifndef OS_CPU_LINUX_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
-#define OS_CPU_LINUX_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
+#ifndef OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
+#define OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
inline address pauth_strip_pointer(address ptr) {
// No PAC support in windows as of yet.
return ptr;
}
-#endif // OS_CPU_LINUX_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
+#endif // OS_CPU_WINDOWS_AARCH64_PAUTH_WINDOWS_AARCH64_INLINE_HPP
diff --git a/src/hotspot/share/c1/c1_GraphBuilder.cpp b/src/hotspot/share/c1/c1_GraphBuilder.cpp
index a7a47781447e5..1c93452d47807 100644
--- a/src/hotspot/share/c1/c1_GraphBuilder.cpp
+++ b/src/hotspot/share/c1/c1_GraphBuilder.cpp
@@ -1989,19 +1989,19 @@ void GraphBuilder::invoke(Bytecodes::Code code) {
cha_monomorphic_target = target->find_monomorphic_target(calling_klass, declared_interface, singleton);
if (cha_monomorphic_target != NULL) {
if (cha_monomorphic_target->holder() != compilation()->env()->Object_klass()) {
- // If CHA is able to bind this invoke then update the class
- // to match that class, otherwise klass will refer to the
- // interface.
- klass = cha_monomorphic_target->holder();
+ ciInstanceKlass* holder = cha_monomorphic_target->holder();
+ ciInstanceKlass* constraint = (holder->is_subtype_of(singleton) ? holder : singleton); // avoid upcasts
actual_recv = declared_interface;
  // insert a check that it's really the expected class.
- CheckCast* c = new CheckCast(klass, receiver, copy_state_for_exception());
+ CheckCast* c = new CheckCast(constraint, receiver, copy_state_for_exception());
c->set_incompatible_class_change_check();
- c->set_direct_compare(klass->is_final());
+ c->set_direct_compare(constraint->is_final());
// pass the result of the checkcast so that the compiler has
// more accurate type info in the inlinee
better_receiver = append_split(c);
+
+ dependency_recorder()->assert_unique_implementor(declared_interface, singleton);
} else {
cha_monomorphic_target = NULL; // subtype check against Object is useless
}
diff --git a/src/hotspot/share/c1/c1_IR.cpp b/src/hotspot/share/c1/c1_IR.cpp
index e4b9d86dcc300..1900ec3610242 100644
--- a/src/hotspot/share/c1/c1_IR.cpp
+++ b/src/hotspot/share/c1/c1_IR.cpp
@@ -191,7 +191,8 @@ CodeEmitInfo::CodeEmitInfo(ValueStack* stack, XHandlers* exception_handlers, boo
, _oop_map(NULL)
, _stack(stack)
, _is_method_handle_invoke(false)
- , _deoptimize_on_exception(deoptimize_on_exception) {
+ , _deoptimize_on_exception(deoptimize_on_exception)
+ , _force_reexecute(false) {
assert(_stack != NULL, "must be non null");
}
@@ -203,7 +204,8 @@ CodeEmitInfo::CodeEmitInfo(CodeEmitInfo* info, ValueStack* stack)
, _oop_map(NULL)
, _stack(stack == NULL ? info->_stack : stack)
, _is_method_handle_invoke(info->_is_method_handle_invoke)
- , _deoptimize_on_exception(info->_deoptimize_on_exception) {
+ , _deoptimize_on_exception(info->_deoptimize_on_exception)
+ , _force_reexecute(info->_force_reexecute) {
// deep copy of exception handlers
if (info->_exception_handlers != NULL) {
@@ -215,7 +217,8 @@ CodeEmitInfo::CodeEmitInfo(CodeEmitInfo* info, ValueStack* stack)
void CodeEmitInfo::record_debug_info(DebugInformationRecorder* recorder, int pc_offset) {
// record the safepoint before recording the debug info for enclosing scopes
recorder->add_safepoint(pc_offset, _oop_map->deep_copy());
- _scope_debug_info->record_debug_info(recorder, pc_offset, true/*topmost*/, _is_method_handle_invoke);
+ bool reexecute = _force_reexecute || _scope_debug_info->should_reexecute();
+ _scope_debug_info->record_debug_info(recorder, pc_offset, reexecute, _is_method_handle_invoke);
recorder->end_safepoint(pc_offset);
}
diff --git a/src/hotspot/share/c1/c1_IR.hpp b/src/hotspot/share/c1/c1_IR.hpp
index f7155c464137d..31f4441128054 100644
--- a/src/hotspot/share/c1/c1_IR.hpp
+++ b/src/hotspot/share/c1/c1_IR.hpp
@@ -232,16 +232,15 @@ class IRScopeDebugInfo: public CompilationResourceObj {
  // Whether we should reexecute this bytecode for deopt
bool should_reexecute();
- void record_debug_info(DebugInformationRecorder* recorder, int pc_offset, bool topmost, bool is_method_handle_invoke = false) {
+ void record_debug_info(DebugInformationRecorder* recorder, int pc_offset, bool reexecute, bool is_method_handle_invoke = false) {
if (caller() != NULL) {
// Order is significant: Must record caller first.
- caller()->record_debug_info(recorder, pc_offset, false/*topmost*/);
+ caller()->record_debug_info(recorder, pc_offset, false/*reexecute*/);
}
DebugToken* locvals = recorder->create_scope_values(locals());
DebugToken* expvals = recorder->create_scope_values(expressions());
DebugToken* monvals = recorder->create_monitor_values(monitors());
// reexecute allowed only for the topmost frame
- bool reexecute = topmost ? should_reexecute() : false;
bool return_oop = false; // This flag will be ignored since it used only for C2 with escape analysis.
bool rethrow_exception = false;
bool is_opt_native = false;
@@ -264,6 +263,7 @@ class CodeEmitInfo: public CompilationResourceObj {
  ValueStack* _stack; // used by deoptimization (also contains monitors)
bool _is_method_handle_invoke; // true if the associated call site is a MethodHandle call site.
bool _deoptimize_on_exception;
+ bool _force_reexecute; // force the reexecute flag on, used for patching stub
FrameMap* frame_map() const { return scope()->compilation()->frame_map(); }
Compilation* compilation() const { return scope()->compilation(); }
@@ -290,7 +290,11 @@ class CodeEmitInfo: public CompilationResourceObj {
bool is_method_handle_invoke() const { return _is_method_handle_invoke; }
void set_is_method_handle_invoke(bool x) { _is_method_handle_invoke = x; }
+ bool force_reexecute() const { return _force_reexecute; }
+ void set_force_reexecute() { _force_reexecute = true; }
+
int interpreter_frame_size() const;
+
};
diff --git a/src/hotspot/share/c1/c1_Instruction.cpp b/src/hotspot/share/c1/c1_Instruction.cpp
index 687d2dae707be..3ba5536df2b20 100644
--- a/src/hotspot/share/c1/c1_Instruction.cpp
+++ b/src/hotspot/share/c1/c1_Instruction.cpp
@@ -837,6 +837,11 @@ bool BlockBegin::try_merge(ValueStack* new_state) {
existing_state->invalidate_local(index);
TRACE_PHI(tty->print_cr("invalidating local %d because of type mismatch", index));
}
+
+ if (existing_value != new_state->local_at(index) && existing_value->as_Phi() == NULL) {
+ TRACE_PHI(tty->print_cr("required phi for local %d is missing, irreducible loop?", index));
+ return false; // BAILOUT in caller
+ }
}
#ifdef ASSERT
diff --git a/src/hotspot/share/c1/c1_LIRAssembler.cpp b/src/hotspot/share/c1/c1_LIRAssembler.cpp
index cc1e201489261..37ce476253d6e 100644
--- a/src/hotspot/share/c1/c1_LIRAssembler.cpp
+++ b/src/hotspot/share/c1/c1_LIRAssembler.cpp
@@ -43,6 +43,7 @@ void LIR_Assembler::patching_epilog(PatchingStub* patch, LIR_PatchCode patch_cod
while ((intx) _masm->pc() - (intx) patch->pc_start() < NativeGeneralJump::instruction_size) {
_masm->nop();
}
+ info->set_force_reexecute();
patch->install(_masm, patch_code, obj, info);
append_code_stub(patch);
@@ -607,7 +608,7 @@ void LIR_Assembler::emit_op0(LIR_Op0* op) {
check_icache();
}
offsets()->set_value(CodeOffsets::Verified_Entry, _masm->offset());
- _masm->verified_entry();
+ _masm->verified_entry(compilation()->directive()->BreakAtExecuteOption);
if (needs_clinit_barrier_on_entry(compilation()->method())) {
clinit_barrier(compilation()->method());
}
diff --git a/src/hotspot/share/c1/c1_LIRGenerator.cpp b/src/hotspot/share/c1/c1_LIRGenerator.cpp
index e3e6dc786b9e3..adfccccacfc13 100644
--- a/src/hotspot/share/c1/c1_LIRGenerator.cpp
+++ b/src/hotspot/share/c1/c1_LIRGenerator.cpp
@@ -663,7 +663,7 @@ void LIRGenerator::new_instance(LIR_Opr dst, ciInstanceKlass* klass, bool is_unr
assert(klass->is_loaded(), "must be loaded");
// allocate space for instance
- assert(klass->size_helper() >= 0, "illegal instance size");
+ assert(klass->size_helper() > 0, "illegal instance size");
const int instance_size = align_object_size(klass->size_helper());
__ allocate_object(dst, scratch1, scratch2, scratch3, scratch4,
oopDesc::header_size(), instance_size, klass_reg, !klass->is_initialized(), slow_path);
@@ -979,6 +979,14 @@ void LIRGenerator::move_to_phi(PhiResolver* resolver, Value cur_val, Value sux_v
Phi* phi = sux_val->as_Phi();
// cur_val can be null without phi being null in conjunction with inlining
if (phi != NULL && cur_val != NULL && cur_val != phi && !phi->is_illegal()) {
+ if (phi->is_local()) {
+ for (int i = 0; i < phi->operand_count(); i++) {
+ Value op = phi->operand_at(i);
+ if (op != NULL && op->type()->is_illegal()) {
+ bailout("illegal phi operand");
+ }
+ }
+ }
Phi* cur_phi = cur_val->as_Phi();
if (cur_phi != NULL && cur_phi->is_illegal()) {
// Phi and local would need to get invalidated
diff --git a/src/hotspot/share/c1/c1_MacroAssembler.hpp b/src/hotspot/share/c1/c1_MacroAssembler.hpp
index 206eda91e2abf..6a8304bd405fa 100644
--- a/src/hotspot/share/c1/c1_MacroAssembler.hpp
+++ b/src/hotspot/share/c1/c1_MacroAssembler.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -42,7 +42,7 @@ class C1_MacroAssembler: public MacroAssembler {
void build_frame(int frame_size_in_bytes, int bang_size_in_bytes);
void remove_frame(int frame_size_in_bytes);
- void verified_entry();
+ void verified_entry(bool breakAtEntry);
void verify_stack_oop(int offset) PRODUCT_RETURN;
void verify_not_null_oop(Register r) PRODUCT_RETURN;
diff --git a/src/hotspot/share/cds/dynamicArchive.cpp b/src/hotspot/share/cds/dynamicArchive.cpp
index 47c3642ff0ec3..8e3929bc2e41d 100644
--- a/src/hotspot/share/cds/dynamicArchive.cpp
+++ b/src/hotspot/share/cds/dynamicArchive.cpp
@@ -111,6 +111,7 @@ class DynamicArchiveBuilder : public ArchiveBuilder {
// Block concurrent class unloading from changing the _dumptime_table
MutexLocker ml(DumpTimeTable_lock, Mutex::_no_safepoint_check_flag);
SystemDictionaryShared::check_excluded_classes();
+ SystemDictionaryShared::cleanup_lambda_proxy_class_dictionary();
init_header();
gather_source_objs();
diff --git a/src/hotspot/share/cds/metaspaceShared.cpp b/src/hotspot/share/cds/metaspaceShared.cpp
index c8ec44aee06c2..87ff93fb41099 100644
--- a/src/hotspot/share/cds/metaspaceShared.cpp
+++ b/src/hotspot/share/cds/metaspaceShared.cpp
@@ -487,6 +487,7 @@ void VM_PopulateDumpSharedSpace::doit() {
// Block concurrent class unloading from changing the _dumptime_table
MutexLocker ml(DumpTimeTable_lock, Mutex::_no_safepoint_check_flag);
SystemDictionaryShared::check_excluded_classes();
+ SystemDictionaryShared::cleanup_lambda_proxy_class_dictionary();
StaticArchiveBuilder builder;
builder.gather_source_objs();
@@ -550,19 +551,19 @@ void VM_PopulateDumpSharedSpace::doit() {
class CollectCLDClosure : public CLDClosure {
  GrowableArray<ClassLoaderData*> _loaded_cld;
+ GrowableArray<OopHandle> _loaded_cld_handles; // keep the CLDs alive
+ Thread* _current_thread;
public:
- CollectCLDClosure() {}
+ CollectCLDClosure(Thread* thread) : _current_thread(thread) {}
~CollectCLDClosure() {
- for (int i = 0; i < _loaded_cld.length(); i++) {
- ClassLoaderData* cld = _loaded_cld.at(i);
- cld->dec_keep_alive();
+ for (int i = 0; i < _loaded_cld_handles.length(); i++) {
+ _loaded_cld_handles.at(i).release(Universe::vm_global());
}
}
void do_cld(ClassLoaderData* cld) {
- if (!cld->is_unloading()) {
- cld->inc_keep_alive();
- _loaded_cld.append(cld);
- }
+ assert(cld->is_alive(), "must be");
+ _loaded_cld.append(cld);
+ _loaded_cld_handles.append(OopHandle(Universe::vm_global(), cld->holder_phantom()));
}
int nof_cld() const { return _loaded_cld.length(); }
@@ -591,11 +592,10 @@ bool MetaspaceShared::link_class_for_cds(InstanceKlass* ik, TRAPS) {
}
void MetaspaceShared::link_and_cleanup_shared_classes(TRAPS) {
- // Collect all loaded ClassLoaderData.
- ResourceMark rm;
-
LambdaFormInvokers::regenerate_holder_classes(CHECK);
- CollectCLDClosure collect_cld;
+
+ // Collect all loaded ClassLoaderData.
+ CollectCLDClosure collect_cld(THREAD);
{
// ClassLoaderDataGraph::loaded_cld_do requires ClassLoaderDataGraph_lock.
// We cannot link the classes while holding this lock (or else we may run into deadlock).
diff --git a/src/hotspot/share/ci/ciInstanceKlass.cpp b/src/hotspot/share/ci/ciInstanceKlass.cpp
index a9fa3855607f9..31d1d92c8c99a 100644
--- a/src/hotspot/share/ci/ciInstanceKlass.cpp
+++ b/src/hotspot/share/ci/ciInstanceKlass.cpp
@@ -205,12 +205,12 @@ ciConstantPoolCache* ciInstanceKlass::field_cache() {
//
ciInstanceKlass* ciInstanceKlass::get_canonical_holder(int offset) {
#ifdef ASSERT
- if (!(offset >= 0 && offset < layout_helper())) {
+ if (!(offset >= 0 && offset < layout_helper_size_in_bytes())) {
tty->print("*** get_canonical_holder(%d) on ", offset);
this->print();
tty->print_cr(" ***");
};
- assert(offset >= 0 && offset < layout_helper(), "offset must be tame");
+ assert(offset >= 0 && offset < layout_helper_size_in_bytes(), "offset must be tame");
#endif
if (offset < instanceOopDesc::base_offset_in_bytes()) {
@@ -227,7 +227,9 @@ ciInstanceKlass* ciInstanceKlass::get_canonical_holder(int offset) {
for (;;) {
assert(self->is_loaded(), "must be loaded to have size");
ciInstanceKlass* super = self->super();
- if (super == NULL || super->nof_nonstatic_fields() == 0) {
+ if (super == NULL ||
+ super->nof_nonstatic_fields() == 0 ||
+ super->layout_helper_size_in_bytes() <= offset) {
return self;
} else {
self = super; // return super->get_canonical_holder(offset)
diff --git a/src/hotspot/share/ci/ciInstanceKlass.hpp b/src/hotspot/share/ci/ciInstanceKlass.hpp
index 1e4a0a9ae6c38..28652a3c78ec3 100644
--- a/src/hotspot/share/ci/ciInstanceKlass.hpp
+++ b/src/hotspot/share/ci/ciInstanceKlass.hpp
@@ -165,6 +165,9 @@ class ciInstanceKlass : public ciKlass {
return compute_shared_has_subklass();
}
+ jint layout_helper_size_in_bytes() {
+ return Klass::layout_helper_size_in_bytes(layout_helper());
+ }
jint size_helper() {
return (Klass::layout_helper_size_in_bytes(layout_helper())
>> LogHeapWordSize);
diff --git a/src/hotspot/share/classfile/altHashing.cpp b/src/hotspot/share/classfile/altHashing.cpp
index d0672138668f9..98c5502fc1fdd 100644
--- a/src/hotspot/share/classfile/altHashing.cpp
+++ b/src/hotspot/share/classfile/altHashing.cpp
@@ -26,18 +26,23 @@
* halfsiphash code adapted from reference implementation
* (https://github.com/veorq/SipHash/blob/master/halfsiphash.c)
* which is distributed with the following copyright:
- *
- * SipHash reference C implementation
- *
- * Copyright (c) 2016 Jean-Philippe Aumasson
- *
- * To the extent possible under law, the author(s) have dedicated all copyright
- * and related and neighboring rights to this software to the public domain
- * worldwide. This software is distributed without any warranty.
- *
- * You should have received a copy of the CC0 Public Domain Dedication along
- * with this software. If not, see
- * <https://creativecommons.org/publicdomain/zero/1.0/>.
+ */
+
+/*
+ SipHash reference C implementation
+
+ Copyright (c) 2012-2021 Jean-Philippe Aumasson
+
+ Copyright (c) 2012-2014 Daniel J. Bernstein
+
+ To the extent possible under law, the author(s) have dedicated all copyright
+ and related and neighboring rights to this software to the public domain
+ worldwide. This software is distributed without any warranty.
+
+ You should have received a copy of the CC0 Public Domain Dedication along
+ with
+ this software. If not, see
+ <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include "precompiled.hpp"
@@ -135,7 +140,9 @@ static uint64_t halfsiphash_finish64(uint32_t v[4], int rounds) {
}
// HalfSipHash-2-4 (32-bit output) for Symbols
-uint32_t AltHashing::halfsiphash_32(uint64_t seed, const uint8_t* data, int len) {
+uint32_t AltHashing::halfsiphash_32(uint64_t seed, const void* in, int len) {
+
+ const unsigned char* data = (const unsigned char*)in;
uint32_t v[4];
uint32_t newdata;
int off = 0;
diff --git a/src/hotspot/share/classfile/altHashing.hpp b/src/hotspot/share/classfile/altHashing.hpp
index e1726ae5152b7..f2fc52410d163 100644
--- a/src/hotspot/share/classfile/altHashing.hpp
+++ b/src/hotspot/share/classfile/altHashing.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -43,7 +43,7 @@ class AltHashing : AllStatic {
static uint64_t compute_seed();
// For Symbols
- static uint32_t halfsiphash_32(uint64_t seed, const uint8_t* data, int len);
+ static uint32_t halfsiphash_32(uint64_t seed, const void* in, int len);
// For Strings
static uint32_t halfsiphash_32(uint64_t seed, const uint16_t* data, int len);
};
diff --git a/src/hotspot/share/classfile/classFileParser.cpp b/src/hotspot/share/classfile/classFileParser.cpp
index 41da1214f2b08..052d013368a5a 100644
--- a/src/hotspot/share/classfile/classFileParser.cpp
+++ b/src/hotspot/share/classfile/classFileParser.cpp
@@ -3044,6 +3044,7 @@ static int inner_classes_jump_to_outer(const Array<u2>* inner_classes, int inner
static bool inner_classes_check_loop_through_outer(const Array<u2>* inner_classes, int idx, const ConstantPool* cp, int length) {
int slow = inner_classes->at(idx + InstanceKlass::inner_class_inner_class_info_offset);
int fast = inner_classes->at(idx + InstanceKlass::inner_class_outer_class_info_offset);
+
while (fast != -1 && fast != 0) {
if (slow != 0 && (cp->klass_name_at(slow) == cp->klass_name_at(fast))) {
return true; // found a circularity
@@ -3073,14 +3074,15 @@ bool ClassFileParser::check_inner_classes_circularity(const ConstantPool* cp, in
for (int y = idx + InstanceKlass::inner_class_next_offset; y < length;
y += InstanceKlass::inner_class_next_offset) {
- // To maintain compatibility, throw an exception if duplicate inner classes
- // entries are found.
- guarantee_property((_inner_classes->at(idx) != _inner_classes->at(y) ||
- _inner_classes->at(idx+1) != _inner_classes->at(y+1) ||
- _inner_classes->at(idx+2) != _inner_classes->at(y+2) ||
- _inner_classes->at(idx+3) != _inner_classes->at(y+3)),
- "Duplicate entry in InnerClasses attribute in class file %s",
- CHECK_(true));
+ // 4347400: make sure there's no duplicate entry in the classes array
+ if (_major_version >= JAVA_1_5_VERSION) {
+ guarantee_property((_inner_classes->at(idx) != _inner_classes->at(y) ||
+ _inner_classes->at(idx+1) != _inner_classes->at(y+1) ||
+ _inner_classes->at(idx+2) != _inner_classes->at(y+2) ||
+ _inner_classes->at(idx+3) != _inner_classes->at(y+3)),
+ "Duplicate entry in InnerClasses attribute in class file %s",
+ CHECK_(true));
+ }
// Return true if there are two entries with the same inner_class_info_index.
if (_inner_classes->at(y) == _inner_classes->at(idx)) {
return true;
@@ -3135,6 +3137,13 @@ u2 ClassFileParser::parse_classfile_inner_classes_attribute(const ClassFileStrea
valid_klass_reference_at(outer_class_info_index),
"outer_class_info_index %u has bad constant type in class file %s",
outer_class_info_index, CHECK_0);
+
+ if (outer_class_info_index != 0) {
+ const Symbol* const outer_class_name = cp->klass_name_at(outer_class_info_index);
+ char* bytes = (char*)outer_class_name->bytes();
+ guarantee_property(bytes[0] != JVM_SIGNATURE_ARRAY,
+ "Outer class is an array class in class file %s", CHECK_0);
+ }
// Inner class name
const u2 inner_name_index = cfs->get_u2_fast();
check_property(
@@ -3166,10 +3175,9 @@ u2 ClassFileParser::parse_classfile_inner_classes_attribute(const ClassFileStrea
inner_classes->at_put(index++, inner_access_flags.as_short());
}
- // 4347400: make sure there's no duplicate entry in the classes array
- // Also, check for circular entries.
+ // Check for circular and duplicate entries.
bool has_circularity = false;
- if (_need_verify && _major_version >= JAVA_1_5_VERSION) {
+ if (_need_verify) {
has_circularity = check_inner_classes_circularity(cp, length * 4, CHECK_0);
if (has_circularity) {
// If circularity check failed then ignore InnerClasses attribute.
diff --git a/src/hotspot/share/classfile/classLoader.cpp b/src/hotspot/share/classfile/classLoader.cpp
index 724ecfcd586d7..0287b73e50373 100644
--- a/src/hotspot/share/classfile/classLoader.cpp
+++ b/src/hotspot/share/classfile/classLoader.cpp
@@ -303,13 +303,19 @@ u1* ClassPathZipEntry::open_entry(JavaThread* current, const char* name, jint* f
}
// read contents into resource array
- int size = (*filesize) + ((nul_terminate) ? 1 : 0);
+ size_t size = (uint32_t)(*filesize);
+ if (nul_terminate) {
+ if (sizeof(size) == sizeof(uint32_t) && size == UINT_MAX) {
+ return NULL; // 32-bit integer overflow will occur.
+ }
+ size++;
+ }
buffer = NEW_RESOURCE_ARRAY(u1, size);
if (!(*ReadEntry)(_zip, entry, buffer, filename)) return NULL;
// return result
if (nul_terminate) {
- buffer[*filesize] = 0;
+ buffer[size - 1] = 0;
}
return buffer;
}
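
The sizing step above is worth spelling out: `*filesize` comes from the zip entry as a 32-bit value, and on a 32-bit VM (where `sizeof(size_t) == sizeof(uint32_t)`) adding one byte for the terminator can wrap `size_t` to zero. A minimal standalone sketch of the same logic, with `compute_buffer_size` as a hypothetical stand-in for the code inside `open_entry`:

```cpp
#include <climits>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the sizing step in ClassPathZipEntry::open_entry.
// On a 32-bit VM, size++ would wrap UINT_MAX to 0, so that case is rejected.
static bool compute_buffer_size(uint32_t filesize, bool nul_terminate, size_t* out) {
  size_t size = filesize;
  if (nul_terminate) {
    if (sizeof(size) == sizeof(uint32_t) && size == UINT_MAX) {
      return false; // 32-bit integer overflow would occur
    }
    size++; // room for the trailing 0 written at buffer[size - 1]
  }
  *out = size;
  return true;
}

int main() {
  size_t size;
  printf("%d\n", compute_buffer_size(10u, true, &size));      // 1, size == 11
  printf("%d\n", compute_buffer_size(UINT_MAX, true, &size)); // 0 on a 32-bit VM, 1 on 64-bit
  return 0;
}
```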
diff --git a/src/hotspot/share/classfile/classLoaderData.cpp b/src/hotspot/share/classfile/classLoaderData.cpp
index 340ffadf83751..8cc69e7391faa 100644
--- a/src/hotspot/share/classfile/classLoaderData.cpp
+++ b/src/hotspot/share/classfile/classLoaderData.cpp
@@ -55,6 +55,7 @@
#include "classfile/packageEntry.hpp"
#include "classfile/symbolTable.hpp"
#include "classfile/systemDictionary.hpp"
+#include "classfile/systemDictionaryShared.hpp"
#include "classfile/vmClasses.hpp"
#include "logging/log.hpp"
#include "logging/logStream.hpp"
@@ -302,9 +303,7 @@ bool ClassLoaderData::try_claim(int claim) {
// it is being defined, therefore _keep_alive is not volatile or atomic.
void ClassLoaderData::inc_keep_alive() {
if (has_class_mirror_holder()) {
- if (!Arguments::is_dumping_archive()) {
- assert(_keep_alive > 0, "Invalid keep alive increment count");
- }
+ assert(_keep_alive > 0, "Invalid keep alive increment count");
_keep_alive++;
}
}
@@ -355,6 +354,9 @@ void ClassLoaderData::methods_do(void f(Method*)) {
}
void ClassLoaderData::loaded_classes_do(KlassClosure* klass_closure) {
+ // To call this, one must have the MultiArray_lock held, but the _klasses list still has lock-free reads.
+ assert_locked_or_safepoint(MultiArray_lock);
+
// Lock-free access requires load_acquire
for (Klass* k = Atomic::load_acquire(&_klasses); k != NULL; k = k->next_link()) {
// Do not filter ArrayKlass oops here...
@@ -885,6 +887,10 @@ void ClassLoaderData::free_deallocate_list_C_heap_structures() {
// Remove the class so unloading events aren't triggered for
// this class (scratch or error class) in do_unloading().
remove_class(ik);
+ // But still have to remove it from the dumptime_table.
+ if (Arguments::is_dumping_archive()) {
+ SystemDictionaryShared::remove_dumptime_info(ik);
+ }
}
}
}
diff --git a/src/hotspot/share/classfile/symbolTable.cpp b/src/hotspot/share/classfile/symbolTable.cpp
index fa966fc7b22af..a321d94bbd2b0 100644
--- a/src/hotspot/share/classfile/symbolTable.cpp
+++ b/src/hotspot/share/classfile/symbolTable.cpp
@@ -91,7 +91,14 @@ static volatile bool _has_items_to_clean = false;
static volatile bool _alt_hash = false;
+
+#ifdef USE_LIBRARY_BASED_TLS_ONLY
static volatile bool _lookup_shared_first = false;
+#else
+// "_lookup_shared_first" can get highly contended with many cores if multiple threads
+// are updating "lookup success history" in a global shared variable. If built-in TLS is available, use it.
+static THREAD_LOCAL bool _lookup_shared_first = false;
+#endif
// Static arena for symbols that are not deallocated
Arena* SymbolTable::_arena = NULL;
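
The hunk above replaces a single global hint flag with a per-thread one wherever built-in TLS exists, because every lookup writes the flag and a shared cache line would bounce between cores. A minimal sketch of the pattern, using standard C++ `thread_local` in place of HotSpot's `THREAD_LOCAL` macro; `probe_shared`/`probe_local` are hypothetical stand-ins for the two symbol tables:

```cpp
#include <cstdio>

// Per-thread "lookup success history": writes stay in thread-local storage,
// so there is no cross-core cache-line contention.
static thread_local bool lookup_shared_first = false;

// Hypothetical stand-ins for probing the shared and dynamic tables.
static bool probe_shared(int key) { return key % 2 == 0; }
static bool probe_local(int key)  { return key % 2 != 0; }

// Probe the table that satisfied this thread's last lookup first, then
// fall back to the other one and update the hint on a fallback hit.
static bool lookup(int key) {
  bool found = lookup_shared_first ? probe_shared(key) : probe_local(key);
  if (!found) {
    found = lookup_shared_first ? probe_local(key) : probe_shared(key);
    if (found) {
      lookup_shared_first = !lookup_shared_first;
    }
  }
  return found;
}

int main() {
  printf("%d %d\n", lookup(2), lookup(3));
  return 0;
}
```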
diff --git a/src/hotspot/share/classfile/systemDictionaryShared.cpp b/src/hotspot/share/classfile/systemDictionaryShared.cpp
index 78fcb1bc8e21f..a055ad7415604 100644
--- a/src/hotspot/share/classfile/systemDictionaryShared.cpp
+++ b/src/hotspot/share/classfile/systemDictionaryShared.cpp
@@ -2405,6 +2405,28 @@ bool SystemDictionaryShared::empty_dumptime_table() {
return false;
}
+class CleanupDumpTimeLambdaProxyClassTable: StackObj {
+ public:
+ bool do_entry(LambdaProxyClassKey& key, DumpTimeLambdaProxyClassInfo& info) {
+ assert_lock_strong(DumpTimeTable_lock);
+ for (int i = 0; i < info._proxy_klasses->length(); i++) {
+ InstanceKlass* ik = info._proxy_klasses->at(i);
+ if (!ik->can_be_verified_at_dumptime()) {
+ info._proxy_klasses->remove_at(i);
+ }
+ }
+ return info._proxy_klasses->length() == 0 ? true /* delete the node*/ : false;
+ }
+};
+
+void SystemDictionaryShared::cleanup_lambda_proxy_class_dictionary() {
+ assert_lock_strong(DumpTimeTable_lock);
+ if (_dumptime_lambda_proxy_class_dictionary != NULL) {
+ CleanupDumpTimeLambdaProxyClassTable cleanup_proxy_classes;
+ _dumptime_lambda_proxy_class_dictionary->unlink(&cleanup_proxy_classes);
+ }
+}
+
#if INCLUDE_CDS_JAVA_HEAP
class ArchivedMirrorPatcher {
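
`CleanupDumpTimeLambdaProxyClassTable` follows the usual HotSpot hashtable-unlink shape: `do_entry` prunes the per-key list and returns `true` when the caller should delete the node. One thing to watch in such pruning loops is that removing at index `i` while iterating forward skips the element that slides into slot `i`; iterating backwards sidesteps that. A minimal sketch under that assumption, with `std::vector` standing in for the `GrowableArray`:

```cpp
#include <vector>

// Stand-in predicate for InstanceKlass::can_be_verified_at_dumptime().
static bool can_be_verified(int klass) { return klass % 2 == 0; }

// Returns true when the caller should unlink the whole table node,
// mirroring do_entry() above. The backward loop visits every element
// exactly once even as removals shift the tail down.
static bool prune_proxy_klasses(std::vector<int>& proxy_klasses) {
  for (int i = (int)proxy_klasses.size() - 1; i >= 0; i--) {
    if (!can_be_verified(proxy_klasses[i])) {
      proxy_klasses.erase(proxy_klasses.begin() + i);
    }
  }
  return proxy_klasses.empty(); // true => delete the node
}
```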
diff --git a/src/hotspot/share/classfile/systemDictionaryShared.hpp b/src/hotspot/share/classfile/systemDictionaryShared.hpp
index 80f3c40c16ec0..5ba59378ecf78 100644
--- a/src/hotspot/share/classfile/systemDictionaryShared.hpp
+++ b/src/hotspot/share/classfile/systemDictionaryShared.hpp
@@ -331,6 +331,7 @@ class SystemDictionaryShared: public SystemDictionary {
static size_t estimate_size_for_archive();
static void write_to_archive(bool is_static_archive = true);
static void adjust_lambda_proxy_class_dictionary();
+ static void cleanup_lambda_proxy_class_dictionary();
static void serialize_dictionary_headers(class SerializeClosure* soc,
bool is_static_archive = true);
static void serialize_vm_classes(class SerializeClosure* soc);
diff --git a/src/hotspot/share/classfile/verifier.cpp b/src/hotspot/share/classfile/verifier.cpp
index 47468a16a2056..80825f0339f53 100644
--- a/src/hotspot/share/classfile/verifier.cpp
+++ b/src/hotspot/share/classfile/verifier.cpp
@@ -1805,7 +1805,7 @@ void ClassVerifier::verify_method(const methodHandle& m, TRAPS) {
no_control_flow = true; break;
default:
// We only need to check the valid bytecodes in class file.
- // And jsr and ret are not in the new class file format in JDK1.5.
+ // And jsr and ret are not in the new class file format in JDK1.6.
verify_error(ErrorContext::bad_code(bci),
"Bad instruction: %02x", opcode);
no_control_flow = false;
@@ -2316,6 +2316,7 @@ void ClassVerifier::verify_field_instructions(RawBytecodeStream* bcs,
// Get field name and signature
Symbol* field_name = cp->name_ref_at(index);
Symbol* field_sig = cp->signature_ref_at(index);
+ bool is_getfield = false;
// Field signature was checked in ClassFileParser.
assert(SignatureVerifier::is_valid_type_signature(field_sig),
@@ -2362,11 +2363,9 @@ void ClassVerifier::verify_field_instructions(RawBytecodeStream* bcs,
break;
}
case Bytecodes::_getfield: {
+ is_getfield = true;
stack_object_type = current_frame->pop_stack(
target_class_type, CHECK_VERIFY(this));
- for (int i = 0; i < n; i++) {
- current_frame->push_stack(field_type[i], CHECK_VERIFY(this));
- }
goto check_protected;
}
case Bytecodes::_putfield: {
@@ -2396,7 +2395,15 @@ void ClassVerifier::verify_field_instructions(RawBytecodeStream* bcs,
check_protected: {
if (_this_type == stack_object_type)
break; // stack_object_type must be assignable to _current_class_type
- if (was_recursively_verified()) return;
+ if (was_recursively_verified()) {
+ if (is_getfield) {
+ // Push field type for getfield.
+ for (int i = 0; i < n; i++) {
+ current_frame->push_stack(field_type[i], CHECK_VERIFY(this));
+ }
+ }
+ return;
+ }
Symbol* ref_class_name =
cp->klass_name_at(cp->klass_ref_index_at(index));
if (!name_in_supers(ref_class_name, current_class()))
@@ -2417,7 +2424,8 @@ void ClassVerifier::verify_field_instructions(RawBytecodeStream* bcs,
verify_error(ErrorContext::bad_type(bci,
current_frame->stack_top_ctx(),
TypeOrigin::implicit(current_type())),
- "Bad access to protected data in getfield");
+ "Bad access to protected data in %s",
+ is_getfield ? "getfield" : "putfield");
return;
}
}
@@ -2425,6 +2433,12 @@ void ClassVerifier::verify_field_instructions(RawBytecodeStream* bcs,
}
default: ShouldNotReachHere();
}
+ if (is_getfield) {
+ // Push field type for getfield after doing protection check.
+ for (int i = 0; i < n; i++) {
+ current_frame->push_stack(field_type[i], CHECK_VERIFY(this));
+ }
+ }
}
// Look at the method's handlers. If the bci is in the handler's try block
diff --git a/src/hotspot/share/classfile/vmIntrinsics.cpp b/src/hotspot/share/classfile/vmIntrinsics.cpp
index a0accc694fa04..0f9a440b3e632 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.cpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.cpp
@@ -505,6 +505,7 @@ bool vmIntrinsics::disabled_by_jvm_flags(vmIntrinsics::ID id) {
if (!SpecialArraysEquals) return true;
break;
case vmIntrinsics::_encodeISOArray:
+ case vmIntrinsics::_encodeAsciiArray:
case vmIntrinsics::_encodeByteISOArray:
if (!SpecialEncodeISOArray) return true;
break;
diff --git a/src/hotspot/share/classfile/vmIntrinsics.hpp b/src/hotspot/share/classfile/vmIntrinsics.hpp
index b072fba3789f6..83a501ccb4bc3 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.hpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.hpp
@@ -357,6 +357,9 @@ class methodHandle;
\
do_intrinsic(_encodeByteISOArray, java_lang_StringCoding, encodeISOArray_name, indexOfI_signature, F_S) \
\
+ do_intrinsic(_encodeAsciiArray, java_lang_StringCoding, encodeAsciiArray_name, encodeISOArray_signature, F_S) \
+ do_name( encodeAsciiArray_name, "implEncodeAsciiArray") \
+ \
do_class(java_math_BigInteger, "java/math/BigInteger") \
do_intrinsic(_multiplyToLen, java_math_BigInteger, multiplyToLen_name, multiplyToLen_signature, F_S) \
do_name( multiplyToLen_name, "implMultiplyToLen") \
diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp
index a2e1e9dc479fb..0eab2bc5a484b 100644
--- a/src/hotspot/share/code/codeCache.cpp
+++ b/src/hotspot/share/code/codeCache.cpp
@@ -678,7 +678,7 @@ void CodeCache::nmethods_do(void f(nmethod* nm)) {
void CodeCache::metadata_do(MetadataClosure* f) {
assert_locked_or_safepoint(CodeCache_lock);
- NMethodIterator iter(NMethodIterator::only_alive_and_not_unloading);
+ NMethodIterator iter(NMethodIterator::only_alive);
while(iter.next()) {
iter.method()->metadata_do(f);
}
@@ -1032,7 +1032,7 @@ CompiledMethod* CodeCache::find_compiled(void* start) {
}
#if INCLUDE_JVMTI
-// RedefineClasses support for unloading nmethods that are dependent on "old" methods.
+// RedefineClasses support for saving nmethods that are dependent on "old" methods.
// We don't really expect this table to grow very large. If it does, it can become a hashtable.
static GrowableArray<CompiledMethod*>* old_compiled_method_table = NULL;
@@ -1091,7 +1091,7 @@ int CodeCache::mark_dependents_for_evol_deoptimization() {
reset_old_method_table();
int number_of_marked_CodeBlobs = 0;
- CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading);
+ CompiledMethodIterator iter(CompiledMethodIterator::only_alive);
while(iter.next()) {
CompiledMethod* nm = iter.method();
// Walk all alive nmethods to check for old Methods.
@@ -1111,7 +1111,7 @@ int CodeCache::mark_dependents_for_evol_deoptimization() {
void CodeCache::mark_all_nmethods_for_evol_deoptimization() {
assert(SafepointSynchronize::is_at_safepoint(), "Can only do this at a safepoint!");
- CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading);
+ CompiledMethodIterator iter(CompiledMethodIterator::only_alive);
while(iter.next()) {
CompiledMethod* nm = iter.method();
if (!nm->method()->is_method_handle_intrinsic()) {
diff --git a/src/hotspot/share/code/compiledMethod.cpp b/src/hotspot/share/code/compiledMethod.cpp
index 3cc2e1c4ffbdc..4e42b555d966e 100644
--- a/src/hotspot/share/code/compiledMethod.cpp
+++ b/src/hotspot/share/code/compiledMethod.cpp
@@ -478,6 +478,10 @@ bool CompiledMethod::clean_ic_if_metadata_is_dead(CompiledIC *ic) {
} else {
ShouldNotReachHere();
}
+ } else {
+ // This inline cache is a megamorphic vtable call. Those ICs never hold
+ // any Metadata and should therefore never be cleaned by this function.
+ return true;
}
}
diff --git a/src/hotspot/share/code/dependencies.cpp b/src/hotspot/share/code/dependencies.cpp
index f3ba73109da5a..306280dfc432d 100644
--- a/src/hotspot/share/code/dependencies.cpp
+++ b/src/hotspot/share/code/dependencies.cpp
@@ -116,6 +116,12 @@ void Dependencies::assert_unique_concrete_method(ciKlass* ctxk, ciMethod* uniqm,
}
}
+void Dependencies::assert_unique_implementor(ciInstanceKlass* ctxk, ciInstanceKlass* uniqk) {
+ check_ctxk(ctxk);
+ check_unique_implementor(ctxk, uniqk);
+ assert_common_2(unique_implementor, ctxk, uniqk);
+}
+
void Dependencies::assert_has_no_finalizable_subclasses(ciKlass* ctxk) {
check_ctxk(ctxk);
assert_common_1(no_finalizable_subclasses, ctxk);
@@ -173,6 +179,13 @@ void Dependencies::assert_abstract_with_unique_concrete_subtype(Klass* ctxk, Kla
assert_common_2(abstract_with_unique_concrete_subtype, ctxk_dv, conck_dv);
}
+void Dependencies::assert_unique_implementor(InstanceKlass* ctxk, InstanceKlass* uniqk) {
+ check_ctxk(ctxk);
+ assert(ctxk->is_interface(), "not an interface");
+ assert(ctxk->implementor() == uniqk, "not a unique implementor");
+ assert_common_2(unique_implementor, DepValue(_oop_recorder, ctxk), DepValue(_oop_recorder, uniqk));
+}
+
void Dependencies::assert_unique_concrete_method(Klass* ctxk, Method* uniqm) {
check_ctxk(ctxk);
check_unique_method(ctxk, uniqm);
@@ -573,6 +586,7 @@ const char* Dependencies::_dep_name[TYPE_LIMIT] = {
"abstract_with_unique_concrete_subtype",
"unique_concrete_method_2",
"unique_concrete_method_4",
+ "unique_implementor",
"no_finalizable_subclasses",
"call_site_target_value"
};
@@ -584,6 +598,7 @@ int Dependencies::_dep_args[TYPE_LIMIT] = {
2, // abstract_with_unique_concrete_subtype ctxk, k
2, // unique_concrete_method_2 ctxk, m
4, // unique_concrete_method_4 ctxk, m, resolved_klass, resolved_method
+ 2, // unique_implementor ctxk, implementor
1, // no_finalizable_subclasses ctxk
2 // call_site_target_value call_site, method_handle
};
@@ -1806,6 +1821,16 @@ Klass* Dependencies::check_unique_concrete_method(InstanceKlass* ctxk,
return NULL;
}
+Klass* Dependencies::check_unique_implementor(InstanceKlass* ctxk, Klass* uniqk, NewKlassDepChange* changes) {
+ assert(ctxk->is_interface(), "sanity");
+ assert(ctxk->nof_implementors() > 0, "no implementors");
+ if (ctxk->nof_implementors() == 1) {
+ assert(ctxk->implementor() == uniqk, "sanity");
+ return NULL;
+ }
+ return ctxk; // no unique implementor
+}
+
// Search for AME.
// There are two versions of checks.
// 1) Spot checking version (class load time). Newly added class is checked for AME.
@@ -1835,6 +1860,26 @@ Klass* Dependencies::find_witness_AME(InstanceKlass* ctxk, Method* m, KlassDepCh
return NULL;
}
+// This function is used by find_unique_concrete_method (non-vtable based)
+// to check whether the subtype method overrides the base method.
+static bool overrides(Method* sub_m, Method* base_m) {
+ assert(base_m != NULL, "base method should be non null");
+ if (sub_m == NULL) {
+ return false;
+ }
+ /**
+ * If base_m is public or protected then sub_m always overrides.
+ * If base_m is !public, !protected and !private (i.e. base_m is package private)
+ * then sub_m should be in the same package as base_m.
+ * For package private base_m this is a conservative approach, as it allows only a
+ * subset of the cases permitted by the JVM specification.
+ **/
+ if (base_m->is_public() || base_m->is_protected() ||
+ base_m->method_holder()->is_same_class_package(sub_m->method_holder())) {
+ return true;
+ }
+ return false;
+}
// Find the set of all non-abstract methods under ctxk that match m.
// (The method m must be defined or inherited in ctxk.)
@@ -1872,6 +1917,9 @@ Method* Dependencies::find_unique_concrete_method(InstanceKlass* ctxk, Method* m
} else if (Dependencies::find_witness_AME(ctxk, fm) != NULL) {
// Found a concrete subtype which does not override abstract root method.
return NULL;
+ } else if (!overrides(fm, m)) {
+ // Found method doesn't override abstract root method.
+ return NULL;
}
assert(Dependencies::is_concrete_root_method(fm, ctxk) == Dependencies::is_concrete_method(m, ctxk), "mismatch");
#ifndef PRODUCT
@@ -2032,6 +2080,9 @@ Klass* Dependencies::DepStream::check_new_klass_dependency(NewKlassDepChange* ch
case unique_concrete_method_4:
witness = check_unique_concrete_method(context_type(), method_argument(1), type_argument(2), method_argument(3), changes);
break;
+ case unique_implementor:
+ witness = check_unique_implementor(context_type(), type_argument(1), changes);
+ break;
case no_finalizable_subclasses:
witness = check_has_no_finalizable_subclasses(context_type(), changes);
break;
diff --git a/src/hotspot/share/code/dependencies.hpp b/src/hotspot/share/code/dependencies.hpp
index 104fc9ee6496b..0d8fa9fa48c71 100644
--- a/src/hotspot/share/code/dependencies.hpp
+++ b/src/hotspot/share/code/dependencies.hpp
@@ -143,6 +143,9 @@ class Dependencies: public ResourceObj {
// of the analysis.
unique_concrete_method_4, // one unique concrete method under CX
+ // This dependency asserts that interface CX has a unique implementor class.
+ unique_implementor, // one unique implementor under CX
+
// This dependency asserts that no instances of the class or its
// subclasses require finalization registration.
no_finalizable_subclasses,
@@ -329,7 +332,10 @@ class Dependencies: public ResourceObj {
assert(!is_concrete_klass(ctxk->as_instance_klass()), "must be abstract");
}
static void check_unique_method(ciKlass* ctxk, ciMethod* m) {
- assert(!m->can_be_statically_bound(ctxk->as_instance_klass()), "redundant");
+ assert(!m->can_be_statically_bound(ctxk->as_instance_klass()) || ctxk->is_interface(), "redundant");
+ }
+ static void check_unique_implementor(ciInstanceKlass* ctxk, ciInstanceKlass* uniqk) {
+ assert(ctxk->implementor() == uniqk, "not a unique implementor");
}
void assert_common_1(DepType dept, ciBaseObject* x);
@@ -343,9 +349,9 @@ class Dependencies: public ResourceObj {
void assert_abstract_with_unique_concrete_subtype(ciKlass* ctxk, ciKlass* conck);
void assert_unique_concrete_method(ciKlass* ctxk, ciMethod* uniqm);
void assert_unique_concrete_method(ciKlass* ctxk, ciMethod* uniqm, ciKlass* resolved_klass, ciMethod* resolved_method);
+ void assert_unique_implementor(ciInstanceKlass* ctxk, ciInstanceKlass* uniqk);
void assert_has_no_finalizable_subclasses(ciKlass* ctxk);
void assert_call_site_target_value(ciCallSite* call_site, ciMethodHandle* method_handle);
-
#if INCLUDE_JVMCI
private:
static void check_ctxk(Klass* ctxk) {
@@ -366,6 +372,7 @@ class Dependencies: public ResourceObj {
void assert_evol_method(Method* m);
void assert_has_no_finalizable_subclasses(Klass* ctxk);
void assert_leaf_type(Klass* ctxk);
+ void assert_unique_implementor(InstanceKlass* ctxk, InstanceKlass* uniqk);
void assert_unique_concrete_method(Klass* ctxk, Method* uniqm);
void assert_abstract_with_unique_concrete_subtype(Klass* ctxk, Klass* conck);
void assert_call_site_target_value(oop callSite, oop methodHandle);
@@ -413,6 +420,7 @@ class Dependencies: public ResourceObj {
static Klass* check_evol_method(Method* m);
static Klass* check_leaf_type(InstanceKlass* ctxk);
static Klass* check_abstract_with_unique_concrete_subtype(InstanceKlass* ctxk, Klass* conck, NewKlassDepChange* changes = NULL);
+ static Klass* check_unique_implementor(InstanceKlass* ctxk, Klass* uniqk, NewKlassDepChange* changes = NULL);
static Klass* check_unique_concrete_method(InstanceKlass* ctxk, Method* uniqm, NewKlassDepChange* changes = NULL);
static Klass* check_unique_concrete_method(InstanceKlass* ctxk, Method* uniqm, Klass* resolved_klass, Method* resolved_method, KlassDepChange* changes = NULL);
static Klass* check_has_no_finalizable_subclasses(InstanceKlass* ctxk, NewKlassDepChange* changes = NULL);
diff --git a/src/hotspot/share/compiler/compileBroker.cpp b/src/hotspot/share/compiler/compileBroker.cpp
index cc1dff089e8d5..4c0a2acde0ed6 100644
--- a/src/hotspot/share/compiler/compileBroker.cpp
+++ b/src/hotspot/share/compiler/compileBroker.cpp
@@ -410,6 +410,7 @@ void CompileQueue::free_all() {
CompileTask::free(current);
}
_first = NULL;
+ _last = NULL;
// Wake up all threads that block on the queue.
MethodCompileQueue_lock->notify_all();
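
The one-line addition above matters because the queue is an intrusive singly linked list: after `free_all()` releases every task, a stale `_last` would make the next enqueue write through a dangling pointer. A minimal sketch of the invariant, with `Task`/`Queue` as simplified stand-ins for `CompileTask`/`CompileQueue`:

```cpp
// Simplified intrusive queue: _first and _last must be cleared together.
struct Task {
  Task* next = nullptr;
};

struct Queue {
  Task* _first = nullptr;
  Task* _last = nullptr;

  void free_all() {
    for (Task* t = _first; t != nullptr; ) {
      Task* n = t->next;
      delete t;
      t = n;
    }
    _first = nullptr;
    _last = nullptr; // without this, a later add() appends via freed memory
  }

  void add(Task* t) {
    if (_last == nullptr) {
      _first = _last = t;
    } else {
      _last->next = t; // dereferences _last: it must never dangle
      _last = t;
    }
  }
};
```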
diff --git a/src/hotspot/share/compiler/compilerDirectives.cpp b/src/hotspot/share/compiler/compilerDirectives.cpp
index 0e38b067246ae..6d864072d1b6d 100644
--- a/src/hotspot/share/compiler/compilerDirectives.cpp
+++ b/src/hotspot/share/compiler/compilerDirectives.cpp
@@ -334,9 +334,21 @@ DirectiveSet* DirectiveSet::compilecommand_compatibility_init(const methodHandle
if (!CompilerDirectivesIgnoreCompileCommandsOption && CompilerOracle::has_any_command_set()) {
DirectiveSetPtr set(this);
+#ifdef COMPILER1
+ if (C1Breakpoint) {
+ // If the directives didn't have 'BreakAtExecute',
+ // the command 'C1Breakpoint' would become effective.
+ if (!_modified[BreakAtExecuteIndex]) {
+ set.cloned()->BreakAtExecuteOption = true;
+ }
+ }
+#endif
+
// Not all CompileCommands are equal, so this gets a bit verbose.
// When CompileCommands have been refactored, less clutter will remain.
if (CompilerOracle::should_break_at(method)) {
+ // If the directives didn't have 'BreakAtCompile' or 'BreakAtExecute',
+ // the sub-command 'Break' of the 'CompileCommand' would become effective.
if (!_modified[BreakAtCompileIndex]) {
set.cloned()->BreakAtCompileOption = true;
}
diff --git a/src/hotspot/share/compiler/methodMatcher.cpp b/src/hotspot/share/compiler/methodMatcher.cpp
index 6d6af709034c5..d01a608a80d5f 100644
--- a/src/hotspot/share/compiler/methodMatcher.cpp
+++ b/src/hotspot/share/compiler/methodMatcher.cpp
@@ -45,16 +45,20 @@
// 0x28 '(' and 0x29 ')' are used for the signature
// 0x2e '.' is always replaced before the matching
// 0x2f '/' is only used in the class name as package separator
+//
+// It seems hard to get non-ASCII characters to work in all circumstances due
+// to limitations in Windows, so only ASCII characters are supported on Windows.
-#define RANGEBASE "\x1\x2\x3\x4\x5\x6\x7\x8\xa\xb\xc\xd\xe\xf" \
+#define RANGEBASE_ASCII "\x1\x2\x3\x4\x5\x6\x7\x8\xa\xb\xc\xd\xe\xf" \
"\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" \
"\x21\x22\x23\x24\x25\x26\x27\x2a\x2b\x2c\x2d" \
"\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f" \
"\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f" \
"\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5c\x5e\x5f" \
"\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f" \
- "\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f" \
- "\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" \
+ "\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+
+#define RANGEBASE_NON_ASCII "\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" \
"\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f" \
"\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf" \
"\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf" \
@@ -62,6 +66,8 @@
"\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf" \
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+#define RANGEBASE RANGEBASE_ASCII NOT_WINDOWS(RANGEBASE_NON_ASCII)
+
#define RANGE0 "[*" RANGEBASE "]"
#define RANGESLASH "[*" RANGEBASE "/]"
@@ -167,6 +173,15 @@ bool MethodMatcher::canonicalize(char * line, const char *& error_msg) {
if (*lp == ':') *lp = ' ';
}
if (*lp == ',' || *lp == '.') *lp = ' ';
+
+#ifdef _WINDOWS
+ // It seems hard to get non-ASCII characters to work in all circumstances due
+ // to limitations in Windows, so only ASCII characters are supported on Windows.
+ if (!isascii(*lp)) {
+ error_msg = "Non-ASCII characters are not supported on Windows.";
+ return false;
+ }
+#endif
}
return true;
}
@@ -240,10 +255,6 @@ void skip_leading_spaces(char*& line, int* total_bytes_read ) {
}
}
-PRAGMA_DIAG_PUSH
-// warning C4189: The file contains a character that cannot be represented
-// in the current code page
-PRAGMA_DISABLE_MSVC_WARNING(4819)
void MethodMatcher::parse_method_pattern(char*& line, const char*& error_msg, MethodMatcher* matcher) {
MethodMatcher::Mode c_match;
MethodMatcher::Mode m_match;
@@ -334,7 +345,6 @@ void MethodMatcher::parse_method_pattern(char*& line, const char*& error_msg, Me
error_msg = "Could not parse method pattern";
}
}
-PRAGMA_DIAG_POP
bool MethodMatcher::matches(const methodHandle& method) const {
Symbol* class_name = method->method_holder()->name();
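
The canonicalization guard added above rejects non-ASCII bytes while it walks the pattern in place. A minimal sketch of the same check in isolation; the explicit `< 0x80` comparison is equivalent to the `isascii()` call the patch uses:

```cpp
// Reject any byte outside the 7-bit ASCII range, as the Windows-only
// branch in MethodMatcher::canonicalize() does.
static bool ascii_only(const char* line, const char** error_msg) {
  for (const char* lp = line; *lp != '\0'; lp++) {
    if ((unsigned char)*lp >= 0x80) { // equivalent to !isascii(*lp)
      *error_msg = "Non-ASCII characters are not supported on Windows.";
      return false;
    }
  }
  return true;
}
```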
diff --git a/src/hotspot/share/gc/g1/g1CommittedRegionMap.inline.hpp b/src/hotspot/share/gc/g1/g1CommittedRegionMap.inline.hpp
index f00cc9a1fbaca..d5e3eb0679f07 100644
--- a/src/hotspot/share/gc/g1/g1CommittedRegionMap.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CommittedRegionMap.inline.hpp
@@ -30,7 +30,7 @@
#include "utilities/bitMap.inline.hpp"
inline bool G1CommittedRegionMap::active(uint index) const {
- return _active.at(index);
+ return _active.par_at(index);
}
inline bool G1CommittedRegionMap::inactive(uint index) const {
diff --git a/src/hotspot/share/gc/g1/g1PeriodicGCTask.cpp b/src/hotspot/share/gc/g1/g1PeriodicGCTask.cpp
index 67ddc6c49a474..1e212824c7ad3 100644
--- a/src/hotspot/share/gc/g1/g1PeriodicGCTask.cpp
+++ b/src/hotspot/share/gc/g1/g1PeriodicGCTask.cpp
@@ -27,12 +27,16 @@
#include "gc/g1/g1ConcurrentMark.inline.hpp"
#include "gc/g1/g1ConcurrentMarkThread.inline.hpp"
#include "gc/g1/g1PeriodicGCTask.hpp"
+#include "gc/shared/suspendibleThreadSet.hpp"
#include "logging/log.hpp"
#include "runtime/globals.hpp"
#include "runtime/os.hpp"
#include "utilities/globalDefinitions.hpp"
bool G1PeriodicGCTask::should_start_periodic_gc() {
+ // Ensure no GC safepoints while we're doing the checks, to avoid data races.
+ SuspendibleThreadSetJoiner sts;
+
G1CollectedHeap* g1h = G1CollectedHeap::heap();
// If we are currently in a concurrent mark we are going to uncommit memory soon.
if (g1h->concurrent_mark()->cm_thread()->in_progress()) {
diff --git a/src/hotspot/share/gc/g1/g1RemSet.cpp b/src/hotspot/share/gc/g1/g1RemSet.cpp
index 3a826ab23571e..7bea4a72d1be8 100644
--- a/src/hotspot/share/gc/g1/g1RemSet.cpp
+++ b/src/hotspot/share/gc/g1/g1RemSet.cpp
@@ -107,7 +107,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
// within a region to claim. Dependent on the region size as proxy for the heap
// size, we limit the total number of chunks to limit memory usage and maintenance
// effort of that table vs. granularity of distributing scanning work.
- // Testing showed that 8 for 1M/2M region, 16 for 4M/8M regions, 32 for 16/32M regions
+ // Testing showed that 64 for 1M/2M region, 128 for 4M/8M regions, 256 for 16/32M regions
// seems to be such a good trade-off.
static uint get_chunks_per_region(uint log_region_size) {
// Limit the expected input values to current known possible values of the
@@ -115,7 +115,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
// values for region size.
assert(log_region_size >= 20 && log_region_size <= 25,
"expected value in [20,25], but got %u", log_region_size);
- return 1u << (log_region_size / 2 - 7);
+ return 1u << (log_region_size / 2 - 4);
}
uint _scan_chunks_per_region; // Number of chunks per region.
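
The new shift constant can be checked by hand: with `log_region_size` in [20, 25] (1M to 32M regions), `log_region_size / 2 - 4` yields 6, 6, 7, 7, 8, 8, i.e. the 64/128/256 chunk counts the updated comment quotes (the old `- 7` gave 8/16/32). A small self-check mirroring the patched function:

```cpp
#include <cassert>

// Mirrors G1RemSetScanState::get_chunks_per_region() after the patch.
static unsigned get_chunks_per_region(unsigned log_region_size) {
  assert(log_region_size >= 20 && log_region_size <= 25);
  return 1u << (log_region_size / 2 - 4);
}

int main() {
  assert(get_chunks_per_region(20) == 64);  // 1M regions
  assert(get_chunks_per_region(21) == 64);  // 2M
  assert(get_chunks_per_region(22) == 128); // 4M
  assert(get_chunks_per_region(23) == 128); // 8M
  assert(get_chunks_per_region(24) == 256); // 16M
  assert(get_chunks_per_region(25) == 256); // 32M
  return 0;
}
```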
diff --git a/src/hotspot/share/gc/shared/pretouchTask.cpp b/src/hotspot/share/gc/shared/pretouchTask.cpp
index 4f900439a3f98..a653ab805871a 100644
--- a/src/hotspot/share/gc/shared/pretouchTask.cpp
+++ b/src/hotspot/share/gc/shared/pretouchTask.cpp
@@ -82,7 +82,7 @@ void PretouchTask::pretouch(const char* task_name, char* start_address, char* en
}
if (pretouch_gang != NULL) {
- size_t num_chunks = (total_bytes + chunk_size - 1) / chunk_size;
+ size_t num_chunks = ((total_bytes - 1) / chunk_size) + 1;
uint num_workers = (uint)MIN2(num_chunks, (size_t)pretouch_gang->total_workers());
log_debug(gc, heap)("Running %s with %u workers for " SIZE_FORMAT " work units pre-touching " SIZE_FORMAT "B.",
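
Both expressions round `total_bytes / chunk_size` up, but they differ at the edges: the removed form can overflow when `total_bytes` is within `chunk_size` of `SIZE_MAX`, while the new form only requires `total_bytes > 0` (which holds here, since the caller returns early on an empty range). A sketch of the difference:

```cpp
#include <cstddef>
#include <cstdio>

static size_t ceil_div_overflowing(size_t total_bytes, size_t chunk_size) {
  return (total_bytes + chunk_size - 1) / chunk_size; // can wrap around
}

static size_t ceil_div_safe(size_t total_bytes, size_t chunk_size) {
  return ((total_bytes - 1) / chunk_size) + 1; // needs total_bytes > 0
}

int main() {
  size_t total = (size_t)-1 - 10; // near SIZE_MAX
  size_t chunk = 4096;
  // The first result wraps and is tiny; the second is correct.
  printf("%zu vs %zu\n", ceil_div_overflowing(total, chunk), ceil_div_safe(total, chunk));
  return 0;
}
```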
diff --git a/src/hotspot/share/gc/shared/stringdedup/stringDedupConfig.cpp b/src/hotspot/share/gc/shared/stringdedup/stringDedupConfig.cpp
index 23d13ce4aa8e6..71ec8d563b77c 100644
--- a/src/hotspot/share/gc/shared/stringdedup/stringDedupConfig.cpp
+++ b/src/hotspot/share/gc/shared/stringdedup/stringDedupConfig.cpp
@@ -161,6 +161,6 @@ void StringDedup::Config::initialize() {
_load_factor_for_shrink = StringDeduplicationShrinkTableLoad;
_load_factor_target = StringDeduplicationTargetTableLoad;
_minimum_dead_for_cleanup = StringDeduplicationCleanupDeadMinimum;
- _dead_factor_for_cleanup = percent_of(StringDeduplicationCleanupDeadPercent, 100);
+ _dead_factor_for_cleanup = StringDeduplicationCleanupDeadPercent / 100.0;
_hash_seed = initial_hash_seed();
}
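
The fix is easy to misread: HotSpot's `percent_of(part, total)` (from globalDefinitions.hpp) returns `part / total * 100.0`, so `percent_of(StringDeduplicationCleanupDeadPercent, 100)` just hands back the percentage itself where a fraction in [0, 1] was wanted. A sketch of the corrected conversion:

```cpp
// _dead_factor_for_cleanup is compared against a dead/total ratio, so the
// flag value (a percentage, e.g. 5) must be scaled down to a fraction.
static double dead_factor_for_cleanup(double cleanup_dead_percent) {
  return cleanup_dead_percent / 100.0; // 5 -> 0.05, not 5.0
}
```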
diff --git a/src/hotspot/share/gc/shared/suspendibleThreadSet.cpp b/src/hotspot/share/gc/shared/suspendibleThreadSet.cpp
index e91b1b4980209..2636cc920618d 100644
--- a/src/hotspot/share/gc/shared/suspendibleThreadSet.cpp
+++ b/src/hotspot/share/gc/shared/suspendibleThreadSet.cpp
@@ -29,10 +29,10 @@
#include "runtime/semaphore.hpp"
#include "runtime/thread.inline.hpp"
-uint SuspendibleThreadSet::_nthreads = 0;
-uint SuspendibleThreadSet::_nthreads_stopped = 0;
-bool SuspendibleThreadSet::_suspend_all = false;
-double SuspendibleThreadSet::_suspend_all_start = 0.0;
+uint SuspendibleThreadSet::_nthreads = 0;
+uint SuspendibleThreadSet::_nthreads_stopped = 0;
+volatile bool SuspendibleThreadSet::_suspend_all = false;
+double SuspendibleThreadSet::_suspend_all_start = 0.0;
static Semaphore* _synchronize_wakeup = NULL;
@@ -50,7 +50,7 @@ bool SuspendibleThreadSet::is_synchronized() {
void SuspendibleThreadSet::join() {
assert(!Thread::current()->is_suspendible_thread(), "Thread already joined");
MonitorLocker ml(STS_lock, Mutex::_no_safepoint_check_flag);
- while (_suspend_all) {
+ while (suspend_all()) {
ml.wait();
}
_nthreads++;
@@ -63,7 +63,7 @@ void SuspendibleThreadSet::leave() {
assert(_nthreads > 0, "Invalid");
DEBUG_ONLY(Thread::current()->clear_suspendible_thread();)
_nthreads--;
- if (_suspend_all && is_synchronized()) {
+ if (suspend_all() && is_synchronized()) {
// This leave completes a request, so inform the requestor.
_synchronize_wakeup->signal();
}
@@ -72,7 +72,7 @@ void SuspendibleThreadSet::leave() {
void SuspendibleThreadSet::yield() {
assert(Thread::current()->is_suspendible_thread(), "Must have joined");
MonitorLocker ml(STS_lock, Mutex::_no_safepoint_check_flag);
- if (_suspend_all) {
+ if (suspend_all()) {
_nthreads_stopped++;
if (is_synchronized()) {
if (ConcGCYieldTimeout > 0) {
@@ -82,7 +82,7 @@ void SuspendibleThreadSet::yield() {
// This yield completes the request, so inform the requestor.
_synchronize_wakeup->signal();
}
- while (_suspend_all) {
+ while (suspend_all()) {
ml.wait();
}
assert(_nthreads_stopped > 0, "Invalid");
@@ -97,8 +97,8 @@ void SuspendibleThreadSet::synchronize() {
}
{
MonitorLocker ml(STS_lock, Mutex::_no_safepoint_check_flag);
- assert(!_suspend_all, "Only one at a time");
- _suspend_all = true;
+ assert(!suspend_all(), "Only one at a time");
+ Atomic::store(&_suspend_all, true);
if (is_synchronized()) {
return;
}
@@ -120,7 +120,7 @@ void SuspendibleThreadSet::synchronize() {
#ifdef ASSERT
MonitorLocker ml(STS_lock, Mutex::_no_safepoint_check_flag);
- assert(_suspend_all, "STS not synchronizing");
+ assert(suspend_all(), "STS not synchronizing");
assert(is_synchronized(), "STS not synchronized");
#endif
}
@@ -128,8 +128,8 @@ void SuspendibleThreadSet::synchronize() {
void SuspendibleThreadSet::desynchronize() {
assert(Thread::current()->is_VM_thread(), "Must be the VM thread");
MonitorLocker ml(STS_lock, Mutex::_no_safepoint_check_flag);
- assert(_suspend_all, "STS not synchronizing");
+ assert(suspend_all(), "STS not synchronizing");
assert(is_synchronized(), "STS not synchronized");
- _suspend_all = false;
+ Atomic::store(&_suspend_all, false);
ml.notify_all();
}
diff --git a/src/hotspot/share/gc/shared/suspendibleThreadSet.hpp b/src/hotspot/share/gc/shared/suspendibleThreadSet.hpp
index 1e47c3b57a87c..37d27f3e9ed94 100644
--- a/src/hotspot/share/gc/shared/suspendibleThreadSet.hpp
+++ b/src/hotspot/share/gc/shared/suspendibleThreadSet.hpp
@@ -26,6 +26,7 @@
#define SHARE_GC_SHARED_SUSPENDIBLETHREADSET_HPP
#include "memory/allocation.hpp"
+#include "runtime/atomic.hpp"
// A SuspendibleThreadSet is a set of threads that can be suspended.
// A thread can join and later leave the set, and periodically yield.
@@ -40,9 +41,10 @@ class SuspendibleThreadSet : public AllStatic {
friend class SuspendibleThreadSetLeaver;
private:
+ static volatile bool _suspend_all;
+
static uint _nthreads;
static uint _nthreads_stopped;
- static bool _suspend_all;
static double _suspend_all_start;
static bool is_synchronized();
@@ -53,9 +55,11 @@ class SuspendibleThreadSet : public AllStatic {
// Removes the current thread from the set.
static void leave();
+ static bool suspend_all() { return Atomic::load(&_suspend_all); }
+
public:
// Returns true if a suspension is in progress.
- static bool should_yield() { return _suspend_all; }
+ static bool should_yield() { return suspend_all(); }
// Suspends the current thread if a suspension is in progress.
static void yield();
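
The reason `_suspend_all` becomes `volatile` and is wrapped in `Atomic::load`/`Atomic::store` is that `should_yield()` reads it without holding `STS_lock`, while `synchronize()`/`desynchronize()` write it under the monitor. A sketch of the protocol using `std::atomic` in place of HotSpot's `Atomic` class; relaxed ordering is an assumption of this sketch, on the grounds that the monitor provides the ordering:

```cpp
#include <atomic>

// The flag is written under a monitor but read lock-free by joined threads,
// so both sides must use atomic accesses to avoid a data race.
static std::atomic<bool> suspend_all_flag{false};

static bool should_yield() {      // called without the lock
  return suspend_all_flag.load(std::memory_order_relaxed);
}

static void begin_synchronize() { // called with the monitor held
  suspend_all_flag.store(true, std::memory_order_relaxed);
}

static void end_synchronize() {   // called with the monitor held
  suspend_all_flag.store(false, std::memory_order_relaxed);
}
```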
diff --git a/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp b/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp
index 45de2647375c2..5ba3828b2d3d7 100644
--- a/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp
+++ b/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.cpp
@@ -2170,48 +2170,7 @@ void MemoryGraphFixer::collect_memory_nodes() {
assert(call->is_Call(), "");
mem = call->in(TypeFunc::Memory);
} else if (in->Opcode() == Op_NeverBranch) {
- Node* head = in->in(0);
- assert(head->is_Region(), "unexpected infinite loop graph shape");
-
- Node* phi_mem = NULL;
- for (DUIterator_Fast jmax, j = head->fast_outs(jmax); j < jmax; j++) {
- Node* u = head->fast_out(j);
- if (u->is_Phi() && u->bottom_type() == Type::MEMORY) {
- if (_phase->C->get_alias_index(u->adr_type()) == _alias) {
- assert(phi_mem == NULL || phi_mem->adr_type() == TypePtr::BOTTOM, "");
- phi_mem = u;
- } else if (u->adr_type() == TypePtr::BOTTOM) {
- assert(phi_mem == NULL || _phase->C->get_alias_index(phi_mem->adr_type()) == _alias, "");
- if (phi_mem == NULL) {
- phi_mem = u;
- }
- }
- }
- }
- if (phi_mem == NULL) {
- for (uint j = 1; j < head->req(); j++) {
- Node* tail = head->in(j);
- if (!_phase->is_dominator(head, tail)) {
- continue;
- }
- Node* c = tail;
- while (c != head) {
- if (c->is_SafePoint() && !c->is_CallLeaf()) {
- Node* m =c->in(TypeFunc::Memory);
- if (m->is_MergeMem()) {
- m = m->as_MergeMem()->memory_at(_alias);
- }
- assert(mem == NULL || mem == m, "several memory states");
- mem = m;
- }
- c = _phase->idom(c);
- }
- assert(mem != NULL, "should have found safepoint");
- }
- assert(mem != NULL, "should have found safepoint");
- } else {
- mem = phi_mem;
- }
+ mem = collect_memory_for_infinite_loop(in);
}
}
} else {
@@ -2435,6 +2394,67 @@ void MemoryGraphFixer::collect_memory_nodes() {
}
}
+Node* MemoryGraphFixer::collect_memory_for_infinite_loop(const Node* in) {
+ Node* mem = NULL;
+ Node* head = in->in(0);
+ assert(head->is_Region(), "unexpected infinite loop graph shape");
+
+ Node* phi_mem = NULL;
+ for (DUIterator_Fast jmax, j = head->fast_outs(jmax); j < jmax; j++) {
+ Node* u = head->fast_out(j);
+ if (u->is_Phi() && u->bottom_type() == Type::MEMORY) {
+ if (_phase->C->get_alias_index(u->adr_type()) == _alias) {
+ assert(phi_mem == NULL || phi_mem->adr_type() == TypePtr::BOTTOM, "");
+ phi_mem = u;
+ } else if (u->adr_type() == TypePtr::BOTTOM) {
+ assert(phi_mem == NULL || _phase->C->get_alias_index(phi_mem->adr_type()) == _alias, "");
+ if (phi_mem == NULL) {
+ phi_mem = u;
+ }
+ }
+ }
+ }
+ if (phi_mem == NULL) {
+ ResourceMark rm;
+ Node_Stack stack(0);
+ stack.push(head, 1);
+ do {
+ Node* n = stack.node();
+ uint i = stack.index();
+ if (i >= n->req()) {
+ stack.pop();
+ } else {
+ stack.set_index(i + 1);
+ Node* c = n->in(i);
+ assert(c != head, "should have found a safepoint on the way");
+ if (stack.size() != 1 || _phase->is_dominator(head, c)) {
+ for (;;) {
+ if (c->is_Region()) {
+ stack.push(c, 1);
+ break;
+ } else if (c->is_SafePoint() && !c->is_CallLeaf()) {
+ Node* m = c->in(TypeFunc::Memory);
+ if (m->is_MergeMem()) {
+ m = m->as_MergeMem()->memory_at(_alias);
+ }
+ assert(mem == NULL || mem == m, "several memory states");
+ mem = m;
+ break;
+ } else {
+ assert(c != c->in(0), "");
+ c = c->in(0);
+ }
+ }
+ }
+ }
+ } while (stack.size() > 0);
+ assert(mem != NULL, "should have found safepoint");
+ } else {
+ mem = phi_mem;
+ }
+ return mem;
+}
+
Node* MemoryGraphFixer::get_ctrl(Node* n) const {
Node* c = _phase->get_ctrl(n);
if (n->is_Proj() && n->in(0) != NULL && n->in(0)->is_Call()) {
diff --git a/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.hpp b/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.hpp
index 1890c40ad5836..6632e42b36f07 100644
--- a/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.hpp
+++ b/src/hotspot/share/gc/shenandoah/c2/shenandoahSupport.hpp
@@ -131,6 +131,8 @@ class MemoryGraphFixer : public ResourceObj {
Node* find_mem(Node* ctrl, Node* n) const;
void fix_mem(Node* ctrl, Node* region, Node* mem, Node* mem_for_ctrl, Node* mem_phi, Unique_Node_List& uses);
int alias() const { return _alias; }
+
+ Node* collect_memory_for_infinite_loop(const Node* in);
};
class ShenandoahCompareAndSwapPNode : public CompareAndSwapPNode {
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp b/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp
index 624f004e3ccd3..c7e0c9b0cd9d7 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahArguments.cpp
@@ -35,7 +35,7 @@
#include "utilities/defaultStream.hpp"
void ShenandoahArguments::initialize() {
-#if !(defined AARCH64 || defined AMD64 || defined IA32)
+#if !(defined AARCH64 || defined AMD64 || defined IA32 || defined PPC64)
vm_exit_during_initialization("Shenandoah GC is not supported on this platform.");
#endif
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahAsserts.cpp b/src/hotspot/share/gc/shenandoah/shenandoahAsserts.cpp
index 898db1a8df982..87cf1c41d3f19 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahAsserts.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahAsserts.cpp
@@ -316,6 +316,28 @@ void ShenandoahAsserts::assert_marked(void *interior_loc, oop obj, const char *f
}
}
+void ShenandoahAsserts::assert_marked_weak(void *interior_loc, oop obj, const char *file, int line) {
+ assert_correct(interior_loc, obj, file, line);
+
+ ShenandoahHeap* heap = ShenandoahHeap::heap();
+ if (!heap->marking_context()->is_marked_weak(obj)) {
+ print_failure(_safe_all, obj, interior_loc, NULL, "Shenandoah assert_marked_weak failed",
+ "Object should be marked weakly",
+ file, line);
+ }
+}
+
+void ShenandoahAsserts::assert_marked_strong(void *interior_loc, oop obj, const char *file, int line) {
+ assert_correct(interior_loc, obj, file, line);
+
+ ShenandoahHeap* heap = ShenandoahHeap::heap();
+ if (!heap->marking_context()->is_marked_strong(obj)) {
+ print_failure(_safe_all, obj, interior_loc, NULL, "Shenandoah assert_marked_strong failed",
+ "Object should be marked strongly",
+ file, line);
+ }
+}
+
void ShenandoahAsserts::assert_in_cset(void* interior_loc, oop obj, const char* file, int line) {
assert_correct(interior_loc, obj, file, line);
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahAsserts.hpp b/src/hotspot/share/gc/shenandoah/shenandoahAsserts.hpp
index 61779b447d6e9..c730eafb89d01 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahAsserts.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahAsserts.hpp
@@ -61,6 +61,8 @@ class ShenandoahAsserts {
static void assert_forwarded(void* interior_loc, oop obj, const char* file, int line);
static void assert_not_forwarded(void* interior_loc, oop obj, const char* file, int line);
static void assert_marked(void* interior_loc, oop obj, const char* file, int line);
+ static void assert_marked_weak(void* interior_loc, oop obj, const char* file, int line);
+ static void assert_marked_strong(void* interior_loc, oop obj, const char* file, int line);
static void assert_in_cset(void* interior_loc, oop obj, const char* file, int line);
static void assert_not_in_cset(void* interior_loc, oop obj, const char* file, int line);
static void assert_not_in_cset_loc(void* interior_loc, const char* file, int line);
@@ -107,6 +109,20 @@ class ShenandoahAsserts {
#define shenandoah_assert_marked(interior_loc, obj) \
ShenandoahAsserts::assert_marked(interior_loc, obj, __FILE__, __LINE__)
+#define shenandoah_assert_marked_weak_if(interior_loc, obj, condition) \
+ if (condition) ShenandoahAsserts::assert_marked_weak(interior_loc, obj, __FILE__, __LINE__)
+#define shenandoah_assert_marked_weak_except(interior_loc, obj, exception) \
+ if (!(exception)) ShenandoahAsserts::assert_marked_weak(interior_loc, obj, __FILE__, __LINE__)
+#define shenandoah_assert_marked_weak(interior_loc, obj) \
+ ShenandoahAsserts::assert_marked_weak(interior_loc, obj, __FILE__, __LINE__)
+
+#define shenandoah_assert_marked_strong_if(interior_loc, obj, condition) \
+ if (condition) ShenandoahAsserts::assert_marked_strong(interior_loc, obj, __FILE__, __LINE__)
+#define shenandoah_assert_marked_strong_except(interior_loc, obj, exception) \
+ if (!(exception)) ShenandoahAsserts::assert_marked_strong(interior_loc, obj, __FILE__, __LINE__)
+#define shenandoah_assert_marked_strong(interior_loc, obj) \
+ ShenandoahAsserts::assert_marked_strong(interior_loc, obj, __FILE__, __LINE__)
+
#define shenandoah_assert_in_cset_if(interior_loc, obj, condition) \
if (condition) ShenandoahAsserts::assert_in_cset(interior_loc, obj, __FILE__, __LINE__)
#define shenandoah_assert_in_cset_except(interior_loc, obj, exception) \
@@ -168,6 +184,14 @@ class ShenandoahAsserts {
#define shenandoah_assert_marked_except(interior_loc, obj, exception)
#define shenandoah_assert_marked(interior_loc, obj)
+#define shenandoah_assert_marked_weak_if(interior_loc, obj, condition)
+#define shenandoah_assert_marked_weak_except(interior_loc, obj, exception)
+#define shenandoah_assert_marked_weak(interior_loc, obj)
+
+#define shenandoah_assert_marked_strong_if(interior_loc, obj, condition)
+#define shenandoah_assert_marked_strong_except(interior_loc, obj, exception)
+#define shenandoah_assert_marked_strong(interior_loc, obj)
+
#define shenandoah_assert_in_cset_if(interior_loc, obj, condition)
#define shenandoah_assert_in_cset_except(interior_loc, obj, exception)
#define shenandoah_assert_in_cset(interior_loc, obj)
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.hpp b/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.hpp
index 38b447a354a18..a4353df545b95 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.hpp
@@ -54,11 +54,11 @@ class ShenandoahBarrierSet: public BarrierSet {
static bool need_keep_alive_barrier(DecoratorSet decorators, BasicType type);
static bool is_strong_access(DecoratorSet decorators) {
- return (decorators & (ON_WEAK_OOP_REF | ON_PHANTOM_OOP_REF | ON_UNKNOWN_OOP_REF)) == 0;
+ return (decorators & (ON_WEAK_OOP_REF | ON_PHANTOM_OOP_REF)) == 0;
}
static bool is_weak_access(DecoratorSet decorators) {
- return (decorators & (ON_WEAK_OOP_REF | ON_UNKNOWN_OOP_REF)) != 0;
+ return (decorators & ON_WEAK_OOP_REF) != 0;
}
static bool is_phantom_access(DecoratorSet decorators) {
@@ -90,8 +90,6 @@ class ShenandoahBarrierSet: public BarrierSet {
inline void satb_enqueue(oop value);
inline void iu_barrier(oop obj);
- template <DecoratorSet decorators>
- inline void keep_alive_if_weak(oop value);
inline void keep_alive_if_weak(DecoratorSet decorators, oop value);
inline void enqueue(oop obj);
@@ -101,8 +99,17 @@ class ShenandoahBarrierSet: public BarrierSet {
template <class T>
inline oop load_reference_barrier_mutator(oop obj, T* load_addr);
- template <DecoratorSet decorators, class T>
- inline oop load_reference_barrier(oop obj, T* load_addr);
+ template <class T>
+ inline oop load_reference_barrier(DecoratorSet decorators, oop obj, T* load_addr);
+
+ template <class T>
+ inline oop oop_load(DecoratorSet decorators, T* addr);
+
+ template <class T>
+ inline oop oop_cmpxchg(DecoratorSet decorators, T* addr, oop compare_value, oop new_value);
+
+ template <class T>
+ inline oop oop_xchg(DecoratorSet decorators, T* addr, oop new_value);
private:
template
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.inline.hpp b/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.inline.hpp
index d0beb4c22cf2a..ed6a9219abb37 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.inline.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahBarrierSet.inline.hpp
@@ -99,21 +99,21 @@ inline oop ShenandoahBarrierSet::load_reference_barrier(oop obj) {
return obj;
}
-template <DecoratorSet decorators, class T>
-inline oop ShenandoahBarrierSet::load_reference_barrier(oop obj, T* load_addr) {
+template <class T>
+inline oop ShenandoahBarrierSet::load_reference_barrier(DecoratorSet decorators, oop obj, T* load_addr) {
if (obj == NULL) {
return NULL;
}
// Prevent resurrection of unreachable phantom (i.e. weak-native) references.
- if (HasDecorator<decorators, ON_PHANTOM_OOP_REF>::value &&
+ if ((decorators & ON_PHANTOM_OOP_REF) != 0 &&
_heap->is_concurrent_weak_root_in_progress() &&
!_heap->marking_context()->is_marked(obj)) {
return NULL;
}
// Prevent resurrection of unreachable weak references.
- if ((HasDecorator<decorators, ON_WEAK_OOP_REF>::value || HasDecorator<decorators, ON_UNKNOWN_OOP_REF>::value) &&
+ if ((decorators & ON_WEAK_OOP_REF) != 0 &&
_heap->is_concurrent_weak_root_in_progress() &&
!_heap->marking_context()->is_marked_strong(obj)) {
return NULL;
@@ -121,7 +121,7 @@ inline oop ShenandoahBarrierSet::load_reference_barrier(oop obj, T* load_addr) {
// Prevent resurrection of unreachable objects that are visited during
// concurrent class-unloading.
- if (HasDecorator<decorators, AS_NO_KEEPALIVE>::value &&
+ if ((decorators & AS_NO_KEEPALIVE) != 0 &&
_heap->is_evacuation_in_progress() &&
!_heap->marking_context()->is_marked(obj)) {
return obj;
@@ -184,45 +184,64 @@ inline void ShenandoahBarrierSet::keep_alive_if_weak(DecoratorSet decorators, oo
}
}
-template <DecoratorSet decorators>
-inline void ShenandoahBarrierSet::keep_alive_if_weak(oop value) {
- assert((decorators & ON_UNKNOWN_OOP_REF) == 0, "Reference strength must be known");
- if (!HasDecorator<decorators, ON_STRONG_OOP_REF>::value &&
- !HasDecorator<decorators, AS_NO_KEEPALIVE>::value) {
- satb_enqueue(value);
- }
+template <class T>
+inline oop ShenandoahBarrierSet::oop_load(DecoratorSet decorators, T* addr) {
+ oop value = RawAccess<>::oop_load(addr);
+ value = load_reference_barrier(decorators, value, addr);
+ keep_alive_if_weak(decorators, value);
+ return value;
+}
+
+template <class T>
+inline oop ShenandoahBarrierSet::oop_cmpxchg(DecoratorSet decorators, T* addr, oop compare_value, oop new_value) {
+ iu_barrier(new_value);
+ oop res;
+ oop expected = compare_value;
+ do {
+ compare_value = expected;
+ res = RawAccess<>::oop_atomic_cmpxchg(addr, compare_value, new_value);
+ expected = res;
+ } while ((compare_value != expected) && (resolve_forwarded(compare_value) == resolve_forwarded(expected)));
+
+ // Note: We don't need a keep-alive-barrier here. We already enqueue any loaded reference for SATB anyway,
+ // because it must be the previous value.
+ res = load_reference_barrier(decorators, res, reinterpret_cast<T*>(NULL));
+ satb_enqueue(res);
+ return res;
+}
+
+template <class T>
+inline oop ShenandoahBarrierSet::oop_xchg(DecoratorSet decorators, T* addr, oop new_value) {
+ iu_barrier(new_value);
+ oop previous = RawAccess<>::oop_atomic_xchg(addr, new_value);
+ // Note: We don't need a keep-alive-barrier here. We already enqueue any loaded reference for SATB anyway,
+ // because it must be the previous value.
+ previous = load_reference_barrier(decorators, previous, reinterpret_cast<T*>(NULL));
+ satb_enqueue(previous);
+ return previous;
}
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_load_not_in_heap(T* addr) {
- oop value = Raw::oop_load_not_in_heap(addr);
- if (value != NULL) {
- ShenandoahBarrierSet *const bs = ShenandoahBarrierSet::barrier_set();
- value = bs->load_reference_barrier<decorators>(value, addr);
- bs->keep_alive_if_weak<decorators>(value);
- }
- return value;
+ assert((decorators & ON_UNKNOWN_OOP_REF) == 0, "must be absent");
+ ShenandoahBarrierSet* const bs = ShenandoahBarrierSet::barrier_set();
+ return bs->oop_load(decorators, addr);
}
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_load_in_heap(T* addr) {
- oop value = Raw::oop_load_in_heap(addr);
- ShenandoahBarrierSet *const bs = ShenandoahBarrierSet::barrier_set();
- value = bs->load_reference_barrier<decorators>(value, addr);
- bs->keep_alive_if_weak<decorators>(value);
- return value;
+ assert((decorators & ON_UNKNOWN_OOP_REF) == 0, "must be absent");
+ ShenandoahBarrierSet* const bs = ShenandoahBarrierSet::barrier_set();
+ return bs->oop_load(decorators, addr);
}
template <DecoratorSet decorators, typename BarrierSetT>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_load_in_heap_at(oop base, ptrdiff_t offset) {
- oop value = Raw::oop_load_in_heap_at(base, offset);
- ShenandoahBarrierSet *const bs = ShenandoahBarrierSet::barrier_set();
+ ShenandoahBarrierSet* const bs = ShenandoahBarrierSet::barrier_set();
DecoratorSet resolved_decorators = AccessBarrierSupport::resolve_possibly_unknown_oop_ref_strength<decorators>(base, offset);
- value = bs->load_reference_barrier<decorators>(value, AccessInternal::oop_field_addr<decorators>(base, offset));
- bs->keep_alive_if_weak(resolved_decorators, value);
- return value;
+ return bs->oop_load(resolved_decorators, AccessInternal::oop_field_addr<decorators>(base, offset));
}
template <DecoratorSet decorators, typename BarrierSetT>
@@ -254,59 +273,49 @@ inline void ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_st
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_cmpxchg_not_in_heap(T* addr, oop compare_value, oop new_value) {
+ assert((decorators & (AS_NO_KEEPALIVE | ON_UNKNOWN_OOP_REF)) == 0, "must be absent");
ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
- bs->iu_barrier(new_value);
-
- oop res;
- oop expected = compare_value;
- do {
- compare_value = expected;
- res = Raw::oop_atomic_cmpxchg(addr, compare_value, new_value);
- expected = res;
- } while ((compare_value != expected) && (resolve_forwarded(compare_value) == resolve_forwarded(expected)));
-
- // Note: We don't need a keep-alive-barrier here. We already enqueue any loaded reference for SATB anyway,
- // because it must be the previous value.
- res = ShenandoahBarrierSet::barrier_set()->load_reference_barrier<decorators, oop>(res, NULL);
- bs->satb_enqueue(res);
- return res;
+ return bs->oop_cmpxchg(decorators, addr, compare_value, new_value);
}
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_cmpxchg_in_heap(T* addr, oop compare_value, oop new_value) {
- return oop_atomic_cmpxchg_not_in_heap(addr, compare_value, new_value);
+ assert((decorators & (AS_NO_KEEPALIVE | ON_UNKNOWN_OOP_REF)) == 0, "must be absent");
+ ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
+ return bs->oop_cmpxchg(decorators, addr, compare_value, new_value);
}
template <DecoratorSet decorators, typename BarrierSetT>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_cmpxchg_in_heap_at(oop base, ptrdiff_t offset, oop compare_value, oop new_value) {
- return oop_atomic_cmpxchg_in_heap(AccessInternal::oop_field_addr<decorators>(base, offset), compare_value, new_value);
+ assert((decorators & AS_NO_KEEPALIVE) == 0, "must be absent");
+ ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
+ DecoratorSet resolved_decorators = AccessBarrierSupport::resolve_possibly_unknown_oop_ref_strength<decorators>(base, offset);
+ return bs->oop_cmpxchg(resolved_decorators, AccessInternal::oop_field_addr<decorators>(base, offset), compare_value, new_value);
}
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_xchg_not_in_heap(T* addr, oop new_value) {
+ assert((decorators & (AS_NO_KEEPALIVE | ON_UNKNOWN_OOP_REF)) == 0, "must be absent");
ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
- bs->iu_barrier(new_value);
-
- oop previous = Raw::oop_atomic_xchg(addr, new_value);
-
- // Note: We don't need a keep-alive-barrier here. We already enqueue any loaded reference for SATB anyway,
- // because it must be the previous value.
- previous = ShenandoahBarrierSet::barrier_set()->load_reference_barrier<decorators, oop>(previous, NULL);
- bs->satb_enqueue(previous);
- return previous;
+ return bs->oop_xchg(decorators, addr, new_value);
}
template <DecoratorSet decorators, typename BarrierSetT>
template <typename T>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_xchg_in_heap(T* addr, oop new_value) {
- return oop_atomic_xchg_not_in_heap(addr, new_value);
+ assert((decorators & (AS_NO_KEEPALIVE | ON_UNKNOWN_OOP_REF)) == 0, "must be absent");
+ ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
+ return bs->oop_xchg(decorators, addr, new_value);
}
template <DecoratorSet decorators, typename BarrierSetT>
inline oop ShenandoahBarrierSet::AccessBarrier<decorators, BarrierSetT>::oop_atomic_xchg_in_heap_at(oop base, ptrdiff_t offset, oop new_value) {
- return oop_atomic_xchg_in_heap(AccessInternal::oop_field_addr<decorators>(base, offset), new_value);
+ assert((decorators & AS_NO_KEEPALIVE) == 0, "must be absent");
+ ShenandoahBarrierSet* bs = ShenandoahBarrierSet::barrier_set();
+ DecoratorSet resolved_decorators = AccessBarrierSupport::resolve_possibly_unknown_oop_ref_strength<decorators>(base, offset);
+ return bs->oop_xchg(resolved_decorators, AccessInternal::oop_field_addr<decorators>(base, offset), new_value);
}
// Clone barrier support
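
The retry loop in `oop_cmpxchg` above deals with a Shenandoah-specific false failure: the word in memory may hold a forwarded copy of `compare_value`, so a raw CAS fails even though the caller's expectation is logically met. The loop retries while the observed and expected values resolve to the same object. A standalone sketch, with `resolve()` as a hypothetical stand-in for `resolve_forwarded()` (the identity function here, where the real code chases the forward pointer):

```cpp
#include <atomic>

using oop_t = const void*;

// Stand-in for resolve_forwarded().
static oop_t resolve(oop_t o) { return o; }

static oop_t cas_oop(std::atomic<oop_t>& addr, oop_t compare_value, oop_t new_value) {
  oop_t expected = compare_value;
  oop_t res;
  do {
    compare_value = expected;
    res = compare_value;
    // On failure, res is updated to the value actually observed in memory.
    addr.compare_exchange_strong(res, new_value);
    expected = res;
  } while (compare_value != expected &&                  // CAS failed...
           resolve(compare_value) == resolve(expected)); // ...but same object: retry
  return res; // the observed value, as HotSpot's Atomic::cmpxchg returns
}

int main() {
  static const int a = 1, b = 2;
  std::atomic<oop_t> slot{&a};
  cas_oop(slot, &a, &b);
  return slot.load() == &b ? 0 : 1;
}
```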
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahClosures.inline.hpp b/src/hotspot/share/gc/shenandoah/shenandoahClosures.inline.hpp
index 44a16da8ee7ae..de59519d605ce 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahClosures.inline.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahClosures.inline.hpp
@@ -227,7 +227,7 @@ ShenandoahCodeBlobAndDisarmClosure::ShenandoahCodeBlobAndDisarmClosure(OopClosur
void ShenandoahCodeBlobAndDisarmClosure::do_code_blob(CodeBlob* cb) {
nmethod* const nm = cb->as_nmethod_or_null();
- if (nm != NULL && nm->oops_do_try_claim()) {
+ if (nm != NULL) {
assert(!ShenandoahNMethod::gc_data(nm)->is_unregistered(), "Should not be here");
CodeBlobToOopClosure::do_code_blob(cb);
_bs->disarm(nm);
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahCodeRoots.cpp b/src/hotspot/share/gc/shenandoah/shenandoahCodeRoots.cpp
index ca9afd7ad4f8f..1230030da64ad 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahCodeRoots.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahCodeRoots.cpp
@@ -87,7 +87,7 @@ void ShenandoahParallelCodeHeapIterator::parallel_blobs_do(CodeBlobClosure* f) {
int current = count++;
if ((current & stride_mask) == 0) {
process_block = (current >= _claimed_idx) &&
- (Atomic::cmpxchg(&_claimed_idx, current, current + stride) == current);
+ (Atomic::cmpxchg(&_claimed_idx, current, current + stride, memory_order_relaxed) == current);
}
if (process_block) {
if (cb->is_alive()) {
@@ -111,7 +111,7 @@ void ShenandoahCodeRoots::initialize() {
}
void ShenandoahCodeRoots::register_nmethod(nmethod* nm) {
- assert_locked_or_safepoint(CodeCache_lock);
+ assert(CodeCache_lock->owned_by_self(), "Must have CodeCache_lock held");
_nmethod_table->register_nmethod(nm);
}
@@ -121,7 +121,7 @@ void ShenandoahCodeRoots::unregister_nmethod(nmethod* nm) {
}
void ShenandoahCodeRoots::flush_nmethod(nmethod* nm) {
- assert_locked_or_safepoint(CodeCache_lock);
+ assert(CodeCache_lock->owned_by_self(), "Must have CodeCache_lock held");
_nmethod_table->flush_nmethod(nm);
}
@@ -355,15 +355,15 @@ ShenandoahCodeRootsIterator::ShenandoahCodeRootsIterator() :
_par_iterator(CodeCache::heaps()),
_table_snapshot(NULL) {
assert(SafepointSynchronize::is_at_safepoint(), "Must be at safepoint");
- assert(!Thread::current()->is_Worker_thread(), "Should not be acquired by workers");
- CodeCache_lock->lock_without_safepoint_check();
+ MutexLocker locker(CodeCache_lock, Mutex::_no_safepoint_check_flag);
_table_snapshot = ShenandoahCodeRoots::table()->snapshot_for_iteration();
}
ShenandoahCodeRootsIterator::~ShenandoahCodeRootsIterator() {
+ MonitorLocker locker(CodeCache_lock, Mutex::_no_safepoint_check_flag);
ShenandoahCodeRoots::table()->finish_iteration(_table_snapshot);
_table_snapshot = NULL;
- CodeCache_lock->unlock();
+ locker.notify_all();
}
void ShenandoahCodeRootsIterator::possibly_parallel_blobs_do(CodeBlobClosure *f) {
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp b/src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp
index 12d6f27365388..4dae246302aa1 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp
@@ -50,24 +50,37 @@
// Breakpoint support
class ShenandoahBreakpointGCScope : public StackObj {
+private:
+ const GCCause::Cause _cause;
public:
- ShenandoahBreakpointGCScope() {
- ShenandoahBreakpoint::at_before_gc();
+ ShenandoahBreakpointGCScope(GCCause::Cause cause) : _cause(cause) {
+ if (cause == GCCause::_wb_breakpoint) {
+ ShenandoahBreakpoint::start_gc();
+ ShenandoahBreakpoint::at_before_gc();
+ }
}
~ShenandoahBreakpointGCScope() {
- ShenandoahBreakpoint::at_after_gc();
+ if (_cause == GCCause::_wb_breakpoint) {
+ ShenandoahBreakpoint::at_after_gc();
+ }
}
};
class ShenandoahBreakpointMarkScope : public StackObj {
+private:
+ const GCCause::Cause _cause;
public:
- ShenandoahBreakpointMarkScope() {
- ShenandoahBreakpoint::at_after_marking_started();
+ ShenandoahBreakpointMarkScope(GCCause::Cause cause) : _cause(cause) {
+ if (_cause == GCCause::_wb_breakpoint) {
+ ShenandoahBreakpoint::at_after_marking_started();
+ }
}
~ShenandoahBreakpointMarkScope() {
- ShenandoahBreakpoint::at_before_marking_completed();
+ if (_cause == GCCause::_wb_breakpoint) {
+ ShenandoahBreakpoint::at_before_marking_completed();
+ }
}
};
@@ -86,10 +99,7 @@ void ShenandoahConcurrentGC::cancel() {
bool ShenandoahConcurrentGC::collect(GCCause::Cause cause) {
ShenandoahHeap* const heap = ShenandoahHeap::heap();
- if (cause == GCCause::_wb_breakpoint) {
- ShenandoahBreakpoint::start_gc();
- }
- ShenandoahBreakpointGCScope breakpoint_gc_scope;
+ ShenandoahBreakpointGCScope breakpoint_gc_scope(cause);
// Reset for upcoming marking
entry_reset();
@@ -98,7 +108,7 @@ bool ShenandoahConcurrentGC::collect(GCCause::Cause cause) {
vmop_entry_init_mark();
{
- ShenandoahBreakpointMarkScope breakpoint_mark_scope;
+ ShenandoahBreakpointMarkScope breakpoint_mark_scope(cause);
// Concurrent mark roots
entry_mark_roots();
if (check_cancellation_and_abort(ShenandoahDegenPoint::_degenerated_outside_cycle)) return false;
@@ -657,7 +667,9 @@ void ShenandoahConcurrentGC::op_weak_refs() {
assert(heap->is_concurrent_weak_root_in_progress(), "Only during this phase");
// Concurrent weak refs processing
ShenandoahGCWorkerPhase worker_phase(ShenandoahPhaseTimings::conc_weak_refs);
- ShenandoahBreakpoint::at_after_reference_processing_started();
+ if (heap->gc_cause() == GCCause::_wb_breakpoint) {
+ ShenandoahBreakpoint::at_after_reference_processing_started();
+ }
heap->ref_processor()->process_references(ShenandoahPhaseTimings::conc_weak_refs, heap->workers(), true /* concurrent */);
}
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahControlThread.cpp b/src/hotspot/share/gc/shenandoah/shenandoahControlThread.cpp
index a7e4fed5d5edb..81371c398d725 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahControlThread.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahControlThread.cpp
@@ -97,8 +97,10 @@ void ShenandoahControlThread::run_service() {
while (!in_graceful_shutdown() && !should_terminate()) {
// Figure out if we have pending requests.
bool alloc_failure_pending = _alloc_failure_gc.is_set();
- bool explicit_gc_requested = _gc_requested.is_set() && is_explicit_gc(_requested_gc_cause);
- bool implicit_gc_requested = _gc_requested.is_set() && !is_explicit_gc(_requested_gc_cause);
+ bool is_gc_requested = _gc_requested.is_set();
+ GCCause::Cause requested_gc_cause = _requested_gc_cause;
+ bool explicit_gc_requested = is_gc_requested && is_explicit_gc(requested_gc_cause);
+ bool implicit_gc_requested = is_gc_requested && !is_explicit_gc(requested_gc_cause);
// This control loop iteration has seen this many allocations.
size_t allocs_seen = Atomic::xchg(&_allocs_seen, (size_t)0, memory_order_relaxed);
@@ -132,7 +134,7 @@ void ShenandoahControlThread::run_service() {
}
} else if (explicit_gc_requested) {
- cause = _requested_gc_cause;
+ cause = requested_gc_cause;
log_info(gc)("Trigger: Explicit GC request (%s)", GCCause::to_string(cause));
heuristics->record_requested_gc();
@@ -147,7 +149,7 @@ void ShenandoahControlThread::run_service() {
mode = stw_full;
}
} else if (implicit_gc_requested) {
- cause = _requested_gc_cause;
+ cause = requested_gc_cause;
log_info(gc)("Trigger: Implicit GC request (%s)", GCCause::to_string(cause));
heuristics->record_requested_gc();
@@ -505,8 +507,11 @@ void ShenandoahControlThread::handle_requested_gc(GCCause::Cause cause) {
size_t current_gc_id = get_gc_id();
size_t required_gc_id = current_gc_id + 1;
while (current_gc_id < required_gc_id) {
- _gc_requested.set();
+ // Although setting the GC request happens under _gc_waiters_lock, the read side
+ // (run_service()) does not take the lock. We must order these two stores so that
+ // the read side sees the latest requested GC cause whenever it sees the flag set.
_requested_gc_cause = cause;
+ _gc_requested.set();
if (cause != GCCause::_wb_breakpoint) {
ml.wait();
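
The reordering above is the heart of this fix: the cause must be published before the flag is raised, because the reader checks the flag without taking `_gc_waiters_lock`. A minimal sketch of the publish/consume pattern, with `std::atomic` standing in for `ShenandoahSharedFlag` and making the ordering explicit (the VM relies on the fencing inside the shared-flag primitives; names here are illustrative):

```cpp
#include <atomic>

// Writer publishes a payload (the GC cause), then raises a flag with release
// semantics; the reader checks the flag with acquire semantics and only then
// reads the payload. Storing the payload *after* the flag would let the
// reader observe a stale cause.
struct GcRequest {
    int cause = 0;                       // stands in for _requested_gc_cause
    std::atomic<bool> requested{false};  // stands in for _gc_requested

    void request(int c) {
        cause = c;                                         // 1) payload first
        requested.store(true, std::memory_order_release);  // 2) then the flag
    }

    bool poll(int& out) {
        if (requested.load(std::memory_order_acquire)) {   // pairs with the release store
            out = cause;                                   // safe: payload is visible
            return true;
        }
        return false;
    }
};
```
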
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahDegeneratedGC.cpp b/src/hotspot/share/gc/shenandoah/shenandoahDegeneratedGC.cpp
index 6e209d993d2e2..0fbffc5feec1e 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahDegeneratedGC.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahDegeneratedGC.cpp
@@ -113,8 +113,10 @@ void ShenandoahDegenGC::op_degenerated() {
op_mark();
case _degenerated_mark:
- // No fallthrough. Continue mark, handed over from concurrent mark
- if (_degen_point == ShenandoahDegenPoint::_degenerated_mark) {
+ // No fallthrough. Continue mark, handed over from concurrent mark, if
+ // concurrent mark has not yet completed
+ if (_degen_point == ShenandoahDegenPoint::_degenerated_mark &&
+ heap->is_concurrent_mark_in_progress()) {
op_finish_mark();
}
assert(!heap->cancelled_gc(), "STW mark can not OOM");
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp b/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
index f0f40495ade61..495ddef596550 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp
@@ -1926,7 +1926,10 @@ oop ShenandoahHeap::pin_object(JavaThread* thr, oop o) {
}
void ShenandoahHeap::unpin_object(JavaThread* thr, oop o) {
- heap_region_containing(o)->record_unpin();
+ ShenandoahHeapRegion* r = heap_region_containing(o);
+ assert(r != NULL, "Sanity");
+ assert(r->pin_count() > 0, "Region " SIZE_FORMAT " should have non-zero pins", r->index());
+ r->record_unpin();
}
void ShenandoahHeap::sync_pinned_region_status() {
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahMark.inline.hpp b/src/hotspot/share/gc/shenandoah/shenandoahMark.inline.hpp
index 7c9ee7d2222df..929965ce8eceb 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahMark.inline.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahMark.inline.hpp
@@ -261,7 +261,9 @@ inline void ShenandoahMark::mark_through_ref(T* p, ShenandoahObjToScanQueue* q,
if ((STRING_DEDUP == ENQUEUE_DEDUP) && ShenandoahStringDedup::is_candidate(obj)) {
assert(ShenandoahStringDedup::is_enabled(), "Must be enabled");
req->add(obj);
- } else if ((STRING_DEDUP == ALWAYS_DEDUP) && ShenandoahStringDedup::is_string_candidate(obj)) {
+ } else if ((STRING_DEDUP == ALWAYS_DEDUP) &&
+ ShenandoahStringDedup::is_string_candidate(obj) &&
+ !ShenandoahStringDedup::dedup_requested(obj)) {
assert(ShenandoahStringDedup::is_enabled(), "Must be enabled");
req->add(obj);
}
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahNMethod.cpp b/src/hotspot/share/gc/shenandoah/shenandoahNMethod.cpp
index a35d54259dd4c..e0f5b9ee71ba4 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahNMethod.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahNMethod.cpp
@@ -271,13 +271,17 @@ void ShenandoahNMethodTable::register_nmethod(nmethod* nm) {
assert(_index >= 0 && _index <= _list->size(), "Sanity");
ShenandoahNMethod* data = ShenandoahNMethod::gc_data(nm);
- ShenandoahReentrantLocker data_locker(data != NULL ? data->lock() : NULL);
if (data != NULL) {
assert(contain(nm), "Must have been registered");
assert(nm == data->nm(), "Must be same nmethod");
+ // Prevent updating an nmethod while concurrent iteration is in progress.
+ wait_until_concurrent_iteration_done();
+ ShenandoahReentrantLocker data_locker(data->lock());
data->update();
} else {
+ // For a new nmethod, we can safely append it to the list, because
+ // concurrent iteration will not touch it.
data = ShenandoahNMethod::for_nmethod(nm);
assert(data != NULL, "Sanity");
ShenandoahNMethod::attach_gc_data(nm, data);
@@ -382,11 +386,13 @@ void ShenandoahNMethodTable::rebuild(int size) {
}
ShenandoahNMethodTableSnapshot* ShenandoahNMethodTable::snapshot_for_iteration() {
+ assert(CodeCache_lock->owned_by_self(), "Must have CodeCache_lock held");
_itr_cnt++;
return new ShenandoahNMethodTableSnapshot(this);
}
void ShenandoahNMethodTable::finish_iteration(ShenandoahNMethodTableSnapshot* snapshot) {
+ assert(CodeCache_lock->owned_by_self(), "Must have CodeCache_lock held");
assert(iteration_in_progress(), "Why we here?");
assert(snapshot != NULL, "No snapshot");
_itr_cnt--;
@@ -493,7 +499,7 @@ void ShenandoahNMethodTableSnapshot::parallel_blobs_do(CodeBlobClosure *f) {
size_t max = (size_t)_limit;
while (_claimed < max) {
- size_t cur = Atomic::fetch_and_add(&_claimed, stride);
+ size_t cur = Atomic::fetch_and_add(&_claimed, stride, memory_order_relaxed);
size_t start = cur;
size_t end = MIN2(cur + stride, max);
if (start >= max) break;
@@ -520,7 +526,7 @@ void ShenandoahNMethodTableSnapshot::concurrent_nmethods_do(NMethodClosure* cl)
ShenandoahNMethod** list = _list->list();
size_t max = (size_t)_limit;
while (_claimed < max) {
- size_t cur = Atomic::fetch_and_add(&_claimed, stride);
+ size_t cur = Atomic::fetch_and_add(&_claimed, stride, memory_order_relaxed);
size_t start = cur;
size_t end = MIN2(cur + stride, max);
if (start >= max) break;
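
The body of `wait_until_concurrent_iteration_done()` is not shown in these hunks, but given the `_itr_cnt` bookkeeping and the `notify_all()` calls added to the iterator destructors, it plausibly amounts to waiting on the code cache lock until no snapshot iteration is in flight. A hypothetical model with std primitives, not the VM's code:

```cpp
#include <condition_variable>
#include <mutex>

// Hypothetical reconstruction: the registering thread holds the lock and
// blocks until the iteration count drops to zero; finish_iteration()
// decrements the count and the iterator destructors notify_all(), as the
// hunks above add.
struct NMethodTableModel {
    std::mutex lock;              // stands in for CodeCache_lock
    std::condition_variable cv;
    int itr_cnt = 0;              // stands in for _itr_cnt

    void wait_until_concurrent_iteration_done(std::unique_lock<std::mutex>& held) {
        // Precondition: `held` owns `lock`.
        cv.wait(held, [this] { return itr_cnt == 0; });
    }
};
```
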
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahReferenceProcessor.cpp b/src/hotspot/share/gc/shenandoah/shenandoahReferenceProcessor.cpp
index 92c3f6f63ff36..6ac55d60c8094 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahReferenceProcessor.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahReferenceProcessor.cpp
@@ -464,14 +464,14 @@ void ShenandoahReferenceProcessor::process_references(ShenandoahRefProcThreadLoc
void ShenandoahReferenceProcessor::work() {
// Process discovered references
uint max_workers = ShenandoahHeap::heap()->max_workers();
- uint worker_id = Atomic::add(&_iterate_discovered_list_id, 1U) - 1;
+ uint worker_id = Atomic::add(&_iterate_discovered_list_id, 1U, memory_order_relaxed) - 1;
while (worker_id < max_workers) {
if (UseCompressedOops) {
process_references<narrowOop>(_ref_proc_thread_locals[worker_id], worker_id);
} else {
process_references<oop>(_ref_proc_thread_locals[worker_id], worker_id);
}
- worker_id = Atomic::add(&_iterate_discovered_list_id, 1U) - 1;
+ worker_id = Atomic::add(&_iterate_discovered_list_id, 1U, memory_order_relaxed) - 1;
}
}
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp b/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
index 7a9b13f02639c..e52991f1db466 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.cpp
@@ -79,7 +79,6 @@ ShenandoahThreadRoots::~ShenandoahThreadRoots() {
}
ShenandoahCodeCacheRoots::ShenandoahCodeCacheRoots(ShenandoahPhaseTimings::Phase phase) : _phase(phase) {
- nmethod::oops_do_marking_prologue();
}
void ShenandoahCodeCacheRoots::code_blobs_do(CodeBlobClosure* blob_cl, uint worker_id) {
@@ -87,10 +86,6 @@ void ShenandoahCodeCacheRoots::code_blobs_do(CodeBlobClosure* blob_cl, uint work
_coderoots_iterator.possibly_parallel_blobs_do(blob_cl);
}
-ShenandoahCodeCacheRoots::~ShenandoahCodeCacheRoots() {
- nmethod::oops_do_marking_epilogue();
-}
-
ShenandoahRootProcessor::ShenandoahRootProcessor(ShenandoahPhaseTimings::Phase phase) :
_heap(ShenandoahHeap::heap()),
_phase(phase),
@@ -158,7 +153,7 @@ ShenandoahConcurrentRootScanner::ShenandoahConcurrentRootScanner(uint n_workers,
_codecache_snapshot(NULL),
_phase(phase) {
if (!ShenandoahHeap::heap()->unload_classes()) {
- CodeCache_lock->lock_without_safepoint_check();
+ MutexLocker locker(CodeCache_lock, Mutex::_no_safepoint_check_flag);
_codecache_snapshot = ShenandoahCodeRoots::table()->snapshot_for_iteration();
}
update_tlab_stats();
@@ -167,8 +162,9 @@ ShenandoahConcurrentRootScanner::ShenandoahConcurrentRootScanner(uint n_workers,
ShenandoahConcurrentRootScanner::~ShenandoahConcurrentRootScanner() {
if (!ShenandoahHeap::heap()->unload_classes()) {
+ MonitorLocker locker(CodeCache_lock, Mutex::_no_safepoint_check_flag);
ShenandoahCodeRoots::table()->finish_iteration(_codecache_snapshot);
- CodeCache_lock->unlock();
+ locker.notify_all();
}
}
@@ -257,22 +253,44 @@ ShenandoahHeapIterationRootScanner::ShenandoahHeapIterationRootScanner() :
_code_roots(ShenandoahPhaseTimings::heap_iteration_roots) {
}
- void ShenandoahHeapIterationRootScanner::roots_do(OopClosure* oops) {
- assert(Thread::current()->is_VM_thread(), "Only by VM thread");
- // Must use _claim_none to avoid interfering with concurrent CLDG iteration
- CLDToOopClosure clds(oops, ClassLoaderData::_claim_none);
- MarkingCodeBlobClosure code(oops, !CodeBlobToOopClosure::FixRelocations);
- ShenandoahParallelOopsDoThreadClosure tc_cl(oops, &code, NULL);
- AlwaysTrueClosure always_true;
+class ShenandoahMarkCodeBlobClosure : public CodeBlobClosure {
+private:
+ OopClosure* const _oops;
+ BarrierSetNMethod* const _bs_nm;
+
+public:
+ ShenandoahMarkCodeBlobClosure(OopClosure* oops) :
+ _oops(oops),
+ _bs_nm(BarrierSet::barrier_set()->barrier_set_nmethod()) {}
+
+ virtual void do_code_blob(CodeBlob* cb) {
+ nmethod* const nm = cb->as_nmethod_or_null();
+ if (nm != nullptr) {
+ if (_bs_nm != nullptr) {
+ // Make sure it only sees to-space objects
+ _bs_nm->nmethod_entry_barrier(nm);
+ }
+ ShenandoahNMethod* const snm = ShenandoahNMethod::gc_data(nm);
+ assert(snm != nullptr, "Sanity");
+ snm->oops_do(_oops, false /*fix_relocations*/);
+ }
+ }
+};
+
+void ShenandoahHeapIterationRootScanner::roots_do(OopClosure* oops) {
+ // Must use _claim_none to avoid interfering with concurrent CLDG iteration
+ CLDToOopClosure clds(oops, ClassLoaderData::_claim_none);
+ ShenandoahMarkCodeBlobClosure code(oops);
+ ShenandoahParallelOopsDoThreadClosure tc_cl(oops, &code, NULL);
- ResourceMark rm;
+ ResourceMark rm;
- // Process light-weight/limited parallel roots then
- _vm_roots.oops_do(oops, 0);
- _weak_roots.oops_do(oops, 0);
- _cld_roots.cld_do(&clds, 0);
+ // Process light-weight/limited parallel roots then
+ _vm_roots.oops_do(oops, 0);
+ _weak_roots.oops_do(oops, 0);
+ _cld_roots.cld_do(&clds, 0);
- // Process heavy-weight/fully parallel roots the last
- _code_roots.code_blobs_do(&code, 0);
- _thread_roots.threads_do(&tc_cl, 0);
- }
+ // Process heavy-weight/fully parallel roots the last
+ _code_roots.code_blobs_do(&code, 0);
+ _thread_roots.threads_do(&tc_cl, 0);
+}
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp b/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
index 95e022ec50f9d..fa95d195ab5e3 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahRootProcessor.hpp
@@ -102,7 +102,6 @@ class ShenandoahCodeCacheRoots {
ShenandoahCodeRootsIterator _coderoots_iterator;
public:
ShenandoahCodeCacheRoots(ShenandoahPhaseTimings::Phase phase);
- ~ShenandoahCodeCacheRoots();
void code_blobs_do(CodeBlobClosure* blob_cl, uint worker_id);
};
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahRuntime.cpp b/src/hotspot/share/gc/shenandoah/shenandoahRuntime.cpp
index 2a8de1fa5752f..223ceb05a30f4 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahRuntime.cpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahRuntime.cpp
@@ -68,17 +68,17 @@ JRT_LEAF(void, ShenandoahRuntime::shenandoah_clone_barrier(oopDesc* src))
JRT_END
JRT_LEAF(oopDesc*, ShenandoahRuntime::load_reference_barrier_weak(oopDesc * src, oop* load_addr))
- return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(oop(src), load_addr);
+ return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(ON_WEAK_OOP_REF, oop(src), load_addr);
JRT_END
JRT_LEAF(oopDesc*, ShenandoahRuntime::load_reference_barrier_weak_narrow(oopDesc * src, narrowOop* load_addr))
- return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(oop(src), load_addr);
+ return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(ON_WEAK_OOP_REF, oop(src), load_addr);
JRT_END
JRT_LEAF(oopDesc*, ShenandoahRuntime::load_reference_barrier_phantom(oopDesc * src, oop* load_addr))
- return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(oop(src), load_addr);
+ return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(ON_PHANTOM_OOP_REF, oop(src), load_addr);
JRT_END
JRT_LEAF(oopDesc*, ShenandoahRuntime::load_reference_barrier_phantom_narrow(oopDesc * src, narrowOop* load_addr))
- return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(oop(src), load_addr);
+ return (oopDesc*) ShenandoahBarrierSet::barrier_set()->load_reference_barrier(ON_PHANTOM_OOP_REF, oop(src), load_addr);
JRT_END
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.hpp b/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.hpp
index a8bd674a6b565..da4b1a6f470ea 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.hpp
@@ -31,6 +31,7 @@ class ShenandoahStringDedup : public StringDedup {
public:
static inline bool is_string_candidate(oop obj);
static inline bool is_candidate(oop obj);
+ static inline bool dedup_requested(oop obj);
};
#endif // SHARE_GC_SHENANDOAH_SHENANDOAHSTRINGDEDUP_HPP
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.inline.hpp b/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.inline.hpp
index e2d977989692d..57b30358e94b2 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.inline.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahStringDedup.inline.hpp
@@ -36,6 +36,10 @@ bool ShenandoahStringDedup::is_string_candidate(oop obj) {
java_lang_String::value(obj) != nullptr;
}
+bool ShenandoahStringDedup::dedup_requested(oop obj) {
+ return java_lang_String::test_and_set_deduplication_requested(obj);
+}
+
bool ShenandoahStringDedup::is_candidate(oop obj) {
if (!is_string_candidate(obj)) {
return false;
@@ -51,7 +55,8 @@ bool ShenandoahStringDedup::is_candidate(oop obj) {
// Increase string age and enqueue it when it reaches the age threshold
markWord new_mark = mark.incr_age();
if (mark == obj->cas_set_mark(new_mark, mark)) {
- return StringDedup::is_threshold_age(new_mark.age());
+ return StringDedup::is_threshold_age(new_mark.age()) &&
+ !dedup_requested(obj);
}
}
return false;
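
Both the marking path (`ALWAYS_DEDUP`) and the aging path above now consult `dedup_requested()`, which test-and-sets a bit in the String, so a string is enqueued for deduplication at most once even when several GC threads reach it concurrently. A standalone sketch of that test-and-set guard (a `std::atomic<bool>` modeling the bit that `java_lang_String::test_and_set_deduplication_requested()` manipulates):

```cpp
#include <atomic>

// exchange() returns the previous value: true means another thread
// already requested deduplication for this string.
bool test_and_set_dedup_requested(std::atomic<bool>& requested_bit) {
    return requested_bit.exchange(true, std::memory_order_relaxed);
}

bool should_enqueue_for_dedup(std::atomic<bool>& requested_bit) {
    // Enqueue only if the bit was previously clear, so each string is
    // submitted to the dedup queue exactly once.
    return !test_and_set_dedup_requested(requested_bit);
}
```
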
diff --git a/src/hotspot/share/gc/shenandoah/shenandoahTaskqueue.hpp b/src/hotspot/share/gc/shenandoah/shenandoahTaskqueue.hpp
index bc38a0936eb48..5ce731e6fccba 100644
--- a/src/hotspot/share/gc/shenandoah/shenandoahTaskqueue.hpp
+++ b/src/hotspot/share/gc/shenandoah/shenandoahTaskqueue.hpp
@@ -337,7 +337,7 @@ T* ParallelClaimableQueueSet::claim_next() {
return NULL;
}
- jint index = Atomic::add(&_claimed_index, 1);
+ jint index = Atomic::add(&_claimed_index, 1, memory_order_relaxed);
if (index <= size) {
return GenericTaskQueueSet<T, F>::queue((uint)index - 1);
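
This hunk, together with the earlier ones in shenandoahCodeRoots.cpp, shenandoahNMethod.cpp, and shenandoahReferenceProcessor.cpp, downgrades work-claiming counters to `memory_order_relaxed`: the counter only partitions indices among workers and carries no payload, so no ordering is required. A standalone sketch of relaxed claiming (illustrative, not the VM's task queue code):

```cpp
#include <atomic>
#include <cstddef>

// Each worker claims a disjoint [start, end) slice of the work.
void claim_and_process(std::atomic<size_t>& claimed, size_t limit, size_t stride,
                       void (*process)(size_t)) {
    while (claimed.load(std::memory_order_relaxed) < limit) {
        const size_t start = claimed.fetch_add(stride, std::memory_order_relaxed);
        if (start >= limit) {
            break; // another worker claimed the remainder first
        }
        const size_t end = start + stride < limit ? start + stride : limit;
        for (size_t i = start; i < end; i++) {
            process(i);
        }
    }
}
```
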
diff --git a/src/hotspot/share/gc/z/c2/zBarrierSetC2.cpp b/src/hotspot/share/gc/z/c2/zBarrierSetC2.cpp
index 59360a40b11f6..30df287bdc067 100644
--- a/src/hotspot/share/gc/z/c2/zBarrierSetC2.cpp
+++ b/src/hotspot/share/gc/z/c2/zBarrierSetC2.cpp
@@ -268,24 +268,16 @@ static const TypeFunc* clone_type() {
void ZBarrierSetC2::clone_at_expansion(PhaseMacroExpand* phase, ArrayCopyNode* ac) const {
Node* const src = ac->in(ArrayCopyNode::Src);
+ const TypeAryPtr* ary_ptr = src->get_ptr_type()->isa_aryptr();
- if (ac->is_clone_array()) {
- const TypeAryPtr* ary_ptr = src->get_ptr_type()->isa_aryptr();
- BasicType bt;
- if (ary_ptr == NULL) {
- // ary_ptr can be null iff we are running with StressReflectiveCode
- // This code will be unreachable
- assert(StressReflectiveCode, "Guard against surprises");
- bt = T_LONG;
+ if (ac->is_clone_array() && ary_ptr != NULL) {
+ BasicType bt = ary_ptr->elem()->array_element_basic_type();
+ if (is_reference_type(bt)) {
+ // Clone object array
+ bt = T_OBJECT;
} else {
- bt = ary_ptr->elem()->array_element_basic_type();
- if (is_reference_type(bt)) {
- // Clone object array
- bt = T_OBJECT;
- } else {
- // Clone primitive array
- bt = T_LONG;
- }
+ // Clone primitive array
+ bt = T_LONG;
}
Node* ctrl = ac->in(TypeFunc::Control);
@@ -300,13 +292,16 @@ void ZBarrierSetC2::clone_at_expansion(PhaseMacroExpand* phase, ArrayCopyNode* a
// BarrierSetC2::clone sets the offsets via BarrierSetC2::arraycopy_payload_base_offset
// which 8-byte aligns them to allow for word size copies. Make sure the offsets point
// to the first element in the array when cloning object arrays. Otherwise, load
- // barriers are applied to parts of the header.
+ // barriers are applied to parts of the header. Also adjust the length accordingly.
assert(src_offset == dest_offset, "should be equal");
- assert((src_offset->get_long() == arrayOopDesc::base_offset_in_bytes(T_OBJECT) && UseCompressedClassPointers) ||
- (src_offset->get_long() == arrayOopDesc::length_offset_in_bytes() && !UseCompressedClassPointers),
- "unexpected offset for object array clone");
- src_offset = phase->longcon(arrayOopDesc::base_offset_in_bytes(T_OBJECT));
- dest_offset = src_offset;
+ jlong offset = src_offset->get_long();
+ if (offset != arrayOopDesc::base_offset_in_bytes(T_OBJECT)) {
+ assert(!UseCompressedClassPointers, "should only happen without compressed class pointers");
+ assert((arrayOopDesc::base_offset_in_bytes(T_OBJECT) - offset) == BytesPerLong, "unexpected offset");
+ length = phase->transform_later(new SubLNode(length, phase->longcon(1))); // Size is in longs
+ src_offset = phase->longcon(arrayOopDesc::base_offset_in_bytes(T_OBJECT));
+ dest_offset = src_offset;
+ }
}
Node* payload_src = phase->basic_plus_adr(src, src_offset);
Node* payload_dst = phase->basic_plus_adr(dest, dest_offset);
@@ -330,12 +325,11 @@ void ZBarrierSetC2::clone_at_expansion(PhaseMacroExpand* phase, ArrayCopyNode* a
Node* const dst = ac->in(ArrayCopyNode::Dest);
Node* const size = ac->in(ArrayCopyNode::Length);
- assert(ac->is_clone_inst(), "Sanity check");
assert(size->bottom_type()->is_long(), "Should be long");
// The native clone we are calling here expects the instance size in words
// Add header/offset size to payload size to get instance size.
- Node* const base_offset = phase->longcon(arraycopy_payload_base_offset(false) >> LogBytesPerLong);
+ Node* const base_offset = phase->longcon(arraycopy_payload_base_offset(ac->is_clone_array()) >> LogBytesPerLong);
Node* const full_size = phase->transform_later(new AddLNode(size, base_offset));
Node* const call = phase->make_leaf_call(ctrl,
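
The object-array offset fix above is easiest to see with concrete numbers. The layout below is inferred from the new asserts (64-bit, -UseCompressedClassPointers, uncompressed oops, the only shape those asserts allow), not quoted from arrayOop.hpp:

```cpp
// Worked example of the offset adjustment, illustrative numbers only:
//
//   bytes  0..7   mark word
//   bytes  8..15  klass pointer
//   bytes 16..19  array length
//   bytes 20..23  padding
//   byte  24      first element == arrayOopDesc::base_offset_in_bytes(T_OBJECT)
//
// BarrierSetC2::arraycopy_payload_base_offset() 8-byte aligns the payload
// start down to 16, i.e. exactly BytesPerLong before the first element.
// Moving src_offset/dest_offset up from 16 to 24 skips one long, so the
// copy length (counted in longs) must shrink by one, hence the new SubLNode.
```
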
diff --git a/src/hotspot/share/gc/z/zDirector.cpp b/src/hotspot/share/gc/z/zDirector.cpp
index 3028bb616fbc9..a9668a6a77f04 100644
--- a/src/hotspot/share/gc/z/zDirector.cpp
+++ b/src/hotspot/share/gc/z/zDirector.cpp
@@ -212,8 +212,7 @@ ZDriverRequest rule_allocation_rate_dynamic() {
// Calculate time until GC given the time until OOM and GC duration.
// We also subtract the sample interval, so that we don't overshoot the
// target time and end up starting the GC too late in the next interval.
- const double more_safety_for_fewer_workers = (ConcGCThreads - actual_gc_workers) * sample_interval;
- const double time_until_gc = time_until_oom - actual_gc_duration - sample_interval - more_safety_for_fewer_workers;
+ const double time_until_gc = time_until_oom - actual_gc_duration - sample_interval;
log_debug(gc, director)("Rule: Allocation Rate (Dynamic GC Workers), "
"MaxAllocRate: %.1fMB/s (+/-%.1f%%), Free: " SIZE_FORMAT "MB, GCCPUTime: %.3f, "
diff --git a/src/hotspot/share/gc/z/zNMethod.cpp b/src/hotspot/share/gc/z/zNMethod.cpp
index d492a1b66091f..71f510c2e81b2 100644
--- a/src/hotspot/share/gc/z/zNMethod.cpp
+++ b/src/hotspot/share/gc/z/zNMethod.cpp
@@ -126,8 +126,10 @@ void ZNMethod::log_register(const nmethod* nm) {
oop* const begin = nm->oops_begin();
oop* const end = nm->oops_end();
for (oop* p = begin; p < end; p++) {
+ const oop o = Atomic::load(p); // C1 PatchingStub may replace it concurrently.
+ const char* external_name = (o == nullptr) ? "N/A" : o->klass()->external_name();
log_oops.print(" Oop[" SIZE_FORMAT "] " PTR_FORMAT " (%s)",
- (p - begin), p2i(*p), (*p)->klass()->external_name());
+ (p - begin), p2i(o), external_name);
}
}
diff --git a/src/hotspot/share/gc/z/zStat.cpp b/src/hotspot/share/gc/z/zStat.cpp
index 1d1067b9bb627..0df9d061087c7 100644
--- a/src/hotspot/share/gc/z/zStat.cpp
+++ b/src/hotspot/share/gc/z/zStat.cpp
@@ -760,8 +760,10 @@ ZStatCriticalPhase::ZStatCriticalPhase(const char* name, bool verbose) :
_verbose(verbose) {}
void ZStatCriticalPhase::register_start(const Ticks& start) const {
- LogTarget(Debug, gc, start) log;
- log_start(log, true /* thread */);
+ // This is called from sensitive contexts, for example before an allocation stall
+ // has been resolved. This means we must not access any oops in here since that
+ // could lead to infinite recursion. Without access to the thread name we can't
+ // really log anything useful here.
}
void ZStatCriticalPhase::register_end(const Ticks& start, const Ticks& end) const {
diff --git a/src/hotspot/share/include/jmm.h b/src/hotspot/share/include/jmm.h
index d7788e7a4e841..ee1c77e504a42 100644
--- a/src/hotspot/share/include/jmm.h
+++ b/src/hotspot/share/include/jmm.h
@@ -333,7 +333,8 @@ typedef struct jmmInterface_1_ {
void (JNICALL *GetDiagnosticCommandArgumentsInfo)
(JNIEnv *env,
jstring commandName,
- dcmdArgInfo *infoArray);
+ dcmdArgInfo *infoArray,
+ jint count);
jstring (JNICALL *ExecuteDiagnosticCommand)
(JNIEnv *env,
jstring command);
diff --git a/src/hotspot/share/include/jvm.h b/src/hotspot/share/include/jvm.h
index 7b80d72b562d3..f12a4bec3d024 100644
--- a/src/hotspot/share/include/jvm.h
+++ b/src/hotspot/share/include/jvm.h
@@ -151,7 +151,7 @@ JNIEXPORT jboolean JNICALL
JVM_IsUseContainerSupport(void);
JNIEXPORT void * JNICALL
-JVM_LoadLibrary(const char *name);
+JVM_LoadLibrary(const char *name, jboolean throwException);
JNIEXPORT void JNICALL
JVM_UnloadLibrary(void * handle);
diff --git a/src/hotspot/share/interpreter/bytecodeUtils.cpp b/src/hotspot/share/interpreter/bytecodeUtils.cpp
index 9473e87dec14b..df5fbbe0e6b11 100644
--- a/src/hotspot/share/interpreter/bytecodeUtils.cpp
+++ b/src/hotspot/share/interpreter/bytecodeUtils.cpp
@@ -1029,7 +1029,6 @@ int ExceptionMessageBuilder::do_instruction(int bci) {
break;
case Bytecodes::_arraylength:
- // The return type of arraylength is wrong in the bytecodes table (T_VOID).
stack->pop(1);
stack->push(bci, T_INT);
break;
diff --git a/src/hotspot/share/interpreter/bytecodes.cpp b/src/hotspot/share/interpreter/bytecodes.cpp
index 6711ba735db76..770934f31d8c7 100644
--- a/src/hotspot/share/interpreter/bytecodes.cpp
+++ b/src/hotspot/share/interpreter/bytecodes.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -471,7 +471,7 @@ void Bytecodes::initialize() {
def(_new , "new" , "bkk" , NULL , T_OBJECT , 1, true );
def(_newarray , "newarray" , "bc" , NULL , T_OBJECT , 0, true );
def(_anewarray , "anewarray" , "bkk" , NULL , T_OBJECT , 0, true );
- def(_arraylength , "arraylength" , "b" , NULL , T_VOID , 0, true );
+ def(_arraylength , "arraylength" , "b" , NULL , T_INT , 0, true );
def(_athrow , "athrow" , "b" , NULL , T_VOID , -1, true );
def(_checkcast , "checkcast" , "bkk" , NULL , T_OBJECT , 0, true );
def(_instanceof , "instanceof" , "bkk" , NULL , T_INT , 0, true );
diff --git a/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp b/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp
index e10a86361b97e..4e8bd4f9beefd 100644
--- a/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp
+++ b/src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp
@@ -107,7 +107,7 @@
There really shouldn't be any handles remaining to trash but this is cheap
in relation to a safepoint.
*/
-#define SAFEPOINT \
+#define RETURN_SAFEPOINT \
if (SafepointMechanism::should_process(THREAD)) { \
HandleMarkCleaner __hmc(THREAD); \
CALL_VM(SafepointMechanism::process_if_requested_with_exit_check(THREAD, true /* check asyncs */), \
@@ -403,25 +403,25 @@ template void BytecodeInterpreter::run(interpreterState istate);
template<bool JVMTI_ENABLED>
void BytecodeInterpreter::run(interpreterState istate) {
-
- // In order to simplify some tests based on switches set at runtime
- // we invoke the interpreter a single time after switches are enabled
- // and set simpler to to test variables rather than method calls or complex
- // boolean expressions.
-
- static int initialized = 0;
- static int checkit = 0;
- static intptr_t* c_addr = NULL;
- static intptr_t c_value;
-
- if (checkit && *c_addr != c_value) {
- os::breakpoint();
- }
+ intptr_t* topOfStack = (intptr_t *)istate->stack(); /* access with STACK macros */
+ address pc = istate->bcp();
+ jubyte opcode;
+ intptr_t* locals = istate->locals();
+ ConstantPoolCache* cp = istate->constants(); // method()->constants()->cache()
+#ifdef LOTS_OF_REGS
+ JavaThread* THREAD = istate->thread();
+#else
+#undef THREAD
+#define THREAD istate->thread()
+#endif
#ifdef ASSERT
- if (istate->_msg != initialize) {
- assert(labs(istate->_stack_base - istate->_stack_limit) == (istate->_method->max_stack() + 1), "bad stack limit");
- }
+ assert(labs(istate->stack_base() - istate->stack_limit()) == (istate->method()->max_stack() + 1),
+ "Bad stack limit");
+ /* QQQ this should be a stack method so we don't know actual direction */
+ assert(topOfStack >= istate->stack_limit() && topOfStack < istate->stack_base(),
+ "Stack top out of range");
+
// Verify linkages.
interpreterState l = istate;
do {
@@ -433,136 +433,99 @@ void BytecodeInterpreter::run(interpreterState istate) {
interpreterState orig = istate;
#endif
- intptr_t* topOfStack = (intptr_t *)istate->stack(); /* access with STACK macros */
- address pc = istate->bcp();
- jubyte opcode;
- intptr_t* locals = istate->locals();
- ConstantPoolCache* cp = istate->constants(); // method()->constants()->cache()
-#ifdef LOTS_OF_REGS
- JavaThread* THREAD = istate->thread();
-#else
-#undef THREAD
-#define THREAD istate->thread()
-#endif
-
#ifdef USELABELS
const static void* const opclabels_data[256] = {
-/* 0x00 */ &&opc_nop, &&opc_aconst_null,&&opc_iconst_m1,&&opc_iconst_0,
-/* 0x04 */ &&opc_iconst_1,&&opc_iconst_2, &&opc_iconst_3, &&opc_iconst_4,
-/* 0x08 */ &&opc_iconst_5,&&opc_lconst_0, &&opc_lconst_1, &&opc_fconst_0,
-/* 0x0C */ &&opc_fconst_1,&&opc_fconst_2, &&opc_dconst_0, &&opc_dconst_1,
-
-/* 0x10 */ &&opc_bipush, &&opc_sipush, &&opc_ldc, &&opc_ldc_w,
-/* 0x14 */ &&opc_ldc2_w, &&opc_iload, &&opc_lload, &&opc_fload,
-/* 0x18 */ &&opc_dload, &&opc_aload, &&opc_iload_0,&&opc_iload_1,
-/* 0x1C */ &&opc_iload_2,&&opc_iload_3,&&opc_lload_0,&&opc_lload_1,
-
-/* 0x20 */ &&opc_lload_2,&&opc_lload_3,&&opc_fload_0,&&opc_fload_1,
-/* 0x24 */ &&opc_fload_2,&&opc_fload_3,&&opc_dload_0,&&opc_dload_1,
-/* 0x28 */ &&opc_dload_2,&&opc_dload_3,&&opc_aload_0,&&opc_aload_1,
-/* 0x2C */ &&opc_aload_2,&&opc_aload_3,&&opc_iaload, &&opc_laload,
-
-/* 0x30 */ &&opc_faload, &&opc_daload, &&opc_aaload, &&opc_baload,
-/* 0x34 */ &&opc_caload, &&opc_saload, &&opc_istore, &&opc_lstore,
-/* 0x38 */ &&opc_fstore, &&opc_dstore, &&opc_astore, &&opc_istore_0,
-/* 0x3C */ &&opc_istore_1,&&opc_istore_2,&&opc_istore_3,&&opc_lstore_0,
-
-/* 0x40 */ &&opc_lstore_1,&&opc_lstore_2,&&opc_lstore_3,&&opc_fstore_0,
-/* 0x44 */ &&opc_fstore_1,&&opc_fstore_2,&&opc_fstore_3,&&opc_dstore_0,
-/* 0x48 */ &&opc_dstore_1,&&opc_dstore_2,&&opc_dstore_3,&&opc_astore_0,
-/* 0x4C */ &&opc_astore_1,&&opc_astore_2,&&opc_astore_3,&&opc_iastore,
-
-/* 0x50 */ &&opc_lastore,&&opc_fastore,&&opc_dastore,&&opc_aastore,
-/* 0x54 */ &&opc_bastore,&&opc_castore,&&opc_sastore,&&opc_pop,
-/* 0x58 */ &&opc_pop2, &&opc_dup, &&opc_dup_x1, &&opc_dup_x2,
-/* 0x5C */ &&opc_dup2, &&opc_dup2_x1,&&opc_dup2_x2,&&opc_swap,
-
-/* 0x60 */ &&opc_iadd,&&opc_ladd,&&opc_fadd,&&opc_dadd,
-/* 0x64 */ &&opc_isub,&&opc_lsub,&&opc_fsub,&&opc_dsub,
-/* 0x68 */ &&opc_imul,&&opc_lmul,&&opc_fmul,&&opc_dmul,
-/* 0x6C */ &&opc_idiv,&&opc_ldiv,&&opc_fdiv,&&opc_ddiv,
-
-/* 0x70 */ &&opc_irem, &&opc_lrem, &&opc_frem,&&opc_drem,
-/* 0x74 */ &&opc_ineg, &&opc_lneg, &&opc_fneg,&&opc_dneg,
-/* 0x78 */ &&opc_ishl, &&opc_lshl, &&opc_ishr,&&opc_lshr,
-/* 0x7C */ &&opc_iushr,&&opc_lushr,&&opc_iand,&&opc_land,
-
-/* 0x80 */ &&opc_ior, &&opc_lor,&&opc_ixor,&&opc_lxor,
-/* 0x84 */ &&opc_iinc,&&opc_i2l,&&opc_i2f, &&opc_i2d,
-/* 0x88 */ &&opc_l2i, &&opc_l2f,&&opc_l2d, &&opc_f2i,
-/* 0x8C */ &&opc_f2l, &&opc_f2d,&&opc_d2i, &&opc_d2l,
-
-/* 0x90 */ &&opc_d2f, &&opc_i2b, &&opc_i2c, &&opc_i2s,
-/* 0x94 */ &&opc_lcmp, &&opc_fcmpl,&&opc_fcmpg,&&opc_dcmpl,
-/* 0x98 */ &&opc_dcmpg,&&opc_ifeq, &&opc_ifne, &&opc_iflt,
-/* 0x9C */ &&opc_ifge, &&opc_ifgt, &&opc_ifle, &&opc_if_icmpeq,
-
-/* 0xA0 */ &&opc_if_icmpne,&&opc_if_icmplt,&&opc_if_icmpge, &&opc_if_icmpgt,
-/* 0xA4 */ &&opc_if_icmple,&&opc_if_acmpeq,&&opc_if_acmpne, &&opc_goto,
-/* 0xA8 */ &&opc_jsr, &&opc_ret, &&opc_tableswitch,&&opc_lookupswitch,
-/* 0xAC */ &&opc_ireturn, &&opc_lreturn, &&opc_freturn, &&opc_dreturn,
-
-/* 0xB0 */ &&opc_areturn, &&opc_return, &&opc_getstatic, &&opc_putstatic,
-/* 0xB4 */ &&opc_getfield, &&opc_putfield, &&opc_invokevirtual,&&opc_invokespecial,
-/* 0xB8 */ &&opc_invokestatic,&&opc_invokeinterface,&&opc_invokedynamic,&&opc_new,
-/* 0xBC */ &&opc_newarray, &&opc_anewarray, &&opc_arraylength, &&opc_athrow,
-
-/* 0xC0 */ &&opc_checkcast, &&opc_instanceof, &&opc_monitorenter, &&opc_monitorexit,
-/* 0xC4 */ &&opc_wide, &&opc_multianewarray, &&opc_ifnull, &&opc_ifnonnull,
-/* 0xC8 */ &&opc_goto_w, &&opc_jsr_w, &&opc_breakpoint, &&opc_default,
-/* 0xCC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-
-/* 0xD0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xD4 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xD8 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xDC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-
-/* 0xE0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xE4 */ &&opc_default, &&opc_default, &&opc_fast_aldc, &&opc_fast_aldc_w,
+/* 0x00 */ &&opc_nop, &&opc_aconst_null, &&opc_iconst_m1, &&opc_iconst_0,
+/* 0x04 */ &&opc_iconst_1, &&opc_iconst_2, &&opc_iconst_3, &&opc_iconst_4,
+/* 0x08 */ &&opc_iconst_5, &&opc_lconst_0, &&opc_lconst_1, &&opc_fconst_0,
+/* 0x0C */ &&opc_fconst_1, &&opc_fconst_2, &&opc_dconst_0, &&opc_dconst_1,
+
+/* 0x10 */ &&opc_bipush, &&opc_sipush, &&opc_ldc, &&opc_ldc_w,
+/* 0x14 */ &&opc_ldc2_w, &&opc_iload, &&opc_lload, &&opc_fload,
+/* 0x18 */ &&opc_dload, &&opc_aload, &&opc_iload_0, &&opc_iload_1,
+/* 0x1C */ &&opc_iload_2, &&opc_iload_3, &&opc_lload_0, &&opc_lload_1,
+
+/* 0x20 */ &&opc_lload_2, &&opc_lload_3, &&opc_fload_0, &&opc_fload_1,
+/* 0x24 */ &&opc_fload_2, &&opc_fload_3, &&opc_dload_0, &&opc_dload_1,
+/* 0x28 */ &&opc_dload_2, &&opc_dload_3, &&opc_aload_0, &&opc_aload_1,
+/* 0x2C */ &&opc_aload_2, &&opc_aload_3, &&opc_iaload, &&opc_laload,
+
+/* 0x30 */ &&opc_faload, &&opc_daload, &&opc_aaload, &&opc_baload,
+/* 0x34 */ &&opc_caload, &&opc_saload, &&opc_istore, &&opc_lstore,
+/* 0x38 */ &&opc_fstore, &&opc_dstore, &&opc_astore, &&opc_istore_0,
+/* 0x3C */ &&opc_istore_1, &&opc_istore_2, &&opc_istore_3, &&opc_lstore_0,
+
+/* 0x40 */ &&opc_lstore_1, &&opc_lstore_2, &&opc_lstore_3, &&opc_fstore_0,
+/* 0x44 */ &&opc_fstore_1, &&opc_fstore_2, &&opc_fstore_3, &&opc_dstore_0,
+/* 0x48 */ &&opc_dstore_1, &&opc_dstore_2, &&opc_dstore_3, &&opc_astore_0,
+/* 0x4C */ &&opc_astore_1, &&opc_astore_2, &&opc_astore_3, &&opc_iastore,
+
+/* 0x50 */ &&opc_lastore, &&opc_fastore, &&opc_dastore, &&opc_aastore,
+/* 0x54 */ &&opc_bastore, &&opc_castore, &&opc_sastore, &&opc_pop,
+/* 0x58 */ &&opc_pop2, &&opc_dup, &&opc_dup_x1, &&opc_dup_x2,
+/* 0x5C */ &&opc_dup2, &&opc_dup2_x1, &&opc_dup2_x2, &&opc_swap,
+
+/* 0x60 */ &&opc_iadd, &&opc_ladd, &&opc_fadd, &&opc_dadd,
+/* 0x64 */ &&opc_isub, &&opc_lsub, &&opc_fsub, &&opc_dsub,
+/* 0x68 */ &&opc_imul, &&opc_lmul, &&opc_fmul, &&opc_dmul,
+/* 0x6C */ &&opc_idiv, &&opc_ldiv, &&opc_fdiv, &&opc_ddiv,
+
+/* 0x70 */ &&opc_irem, &&opc_lrem, &&opc_frem, &&opc_drem,
+/* 0x74 */ &&opc_ineg, &&opc_lneg, &&opc_fneg, &&opc_dneg,
+/* 0x78 */ &&opc_ishl, &&opc_lshl, &&opc_ishr, &&opc_lshr,
+/* 0x7C */ &&opc_iushr, &&opc_lushr, &&opc_iand, &&opc_land,
+
+/* 0x80 */ &&opc_ior, &&opc_lor, &&opc_ixor, &&opc_lxor,
+/* 0x84 */ &&opc_iinc, &&opc_i2l, &&opc_i2f, &&opc_i2d,
+/* 0x88 */ &&opc_l2i, &&opc_l2f, &&opc_l2d, &&opc_f2i,
+/* 0x8C */ &&opc_f2l, &&opc_f2d, &&opc_d2i, &&opc_d2l,
+
+/* 0x90 */ &&opc_d2f, &&opc_i2b, &&opc_i2c, &&opc_i2s,
+/* 0x94 */ &&opc_lcmp, &&opc_fcmpl, &&opc_fcmpg, &&opc_dcmpl,
+/* 0x98 */ &&opc_dcmpg, &&opc_ifeq, &&opc_ifne, &&opc_iflt,
+/* 0x9C */ &&opc_ifge, &&opc_ifgt, &&opc_ifle, &&opc_if_icmpeq,
+
+/* 0xA0 */ &&opc_if_icmpne, &&opc_if_icmplt, &&opc_if_icmpge, &&opc_if_icmpgt,
+/* 0xA4 */ &&opc_if_icmple, &&opc_if_acmpeq, &&opc_if_acmpne, &&opc_goto,
+/* 0xA8 */ &&opc_jsr, &&opc_ret, &&opc_tableswitch, &&opc_lookupswitch,
+/* 0xAC */ &&opc_ireturn, &&opc_lreturn, &&opc_freturn, &&opc_dreturn,
+
+/* 0xB0 */ &&opc_areturn, &&opc_return, &&opc_getstatic, &&opc_putstatic,
+/* 0xB4 */ &&opc_getfield, &&opc_putfield, &&opc_invokevirtual, &&opc_invokespecial,
+/* 0xB8 */ &&opc_invokestatic, &&opc_invokeinterface, &&opc_invokedynamic, &&opc_new,
+/* 0xBC */ &&opc_newarray, &&opc_anewarray, &&opc_arraylength, &&opc_athrow,
+
+/* 0xC0 */ &&opc_checkcast, &&opc_instanceof, &&opc_monitorenter, &&opc_monitorexit,
+/* 0xC4 */ &&opc_wide, &&opc_multianewarray, &&opc_ifnull, &&opc_ifnonnull,
+/* 0xC8 */ &&opc_goto_w, &&opc_jsr_w, &&opc_breakpoint, &&opc_default,
+/* 0xCC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+
+/* 0xD0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xD4 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xD8 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xDC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+
+/* 0xE0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xE4 */ &&opc_default, &&opc_default, &&opc_fast_aldc, &&opc_fast_aldc_w,
/* 0xE8 */ &&opc_return_register_finalizer,
- &&opc_invokehandle, &&opc_default, &&opc_default,
-/* 0xEC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+ &&opc_invokehandle, &&opc_default, &&opc_default,
+/* 0xEC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xF0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xF4 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xF8 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
-/* 0xFC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default
+/* 0xF0 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xF4 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xF8 */ &&opc_default, &&opc_default, &&opc_default, &&opc_default,
+/* 0xFC */ &&opc_default, &&opc_default, &&opc_default, &&opc_default
};
uintptr_t *dispatch_table = (uintptr_t*)&opclabels_data[0];
#endif /* USELABELS */
-#ifdef ASSERT
- // this will trigger a VERIFY_OOP on entry
- if (istate->msg() != initialize && ! METHOD->is_static()) {
- oop rcvr = LOCALS_OBJECT(0);
- VERIFY_OOP(rcvr);
- }
-#endif
-
- /* QQQ this should be a stack method so we don't know actual direction */
- guarantee(istate->msg() == initialize ||
- topOfStack >= istate->stack_limit() &&
- topOfStack < istate->stack_base(),
- "Stack top out of range");
-
- assert(!UseCompiler, "Zero does not support compilers");
- assert(!CountCompiledCalls, "Zero does not support counting compiled calls");
-
switch (istate->msg()) {
case initialize: {
- if (initialized++) ShouldNotReachHere(); // Only one initialize call.
+ ShouldNotCallThis();
return;
}
- break;
case method_entry: {
THREAD->set_do_not_unlock();
- // count invocations
- assert(initialized, "Interpreter not initialized");
-
- if ((istate->_stack_base - istate->_stack_limit) != istate->method()->max_stack() + 1) {
- // initialize
- os::breakpoint();
- }
// Lock method if synchronized.
if (METHOD->is_synchronized()) {
@@ -1336,23 +1299,15 @@ void BytecodeInterpreter::run(interpreterState istate) {
CASE(_areturn):
CASE(_ireturn):
CASE(_freturn):
- {
- // Allow a safepoint before returning to frame manager.
- SAFEPOINT;
-
- goto handle_return;
- }
-
CASE(_lreturn):
CASE(_dreturn):
- {
+ CASE(_return): {
// Allow a safepoint before returning to frame manager.
- SAFEPOINT;
+ RETURN_SAFEPOINT;
goto handle_return;
}
CASE(_return_register_finalizer): {
-
oop rcvr = LOCALS_OBJECT(0);
VERIFY_OOP(rcvr);
if (rcvr->klass()->has_finalizer()) {
@@ -1360,12 +1315,6 @@ void BytecodeInterpreter::run(interpreterState istate) {
}
goto handle_return;
}
- CASE(_return): {
-
- // Allow a safepoint before returning to frame manager.
- SAFEPOINT;
- goto handle_return;
- }
/* Array access byte-codes */
diff --git a/src/hotspot/share/jfr/leakprofiler/chains/bitset.cpp b/src/hotspot/share/jfr/leakprofiler/chains/bitset.cpp
index 359de35d04af5..7616d615a45d2 100644
--- a/src/hotspot/share/jfr/leakprofiler/chains/bitset.cpp
+++ b/src/hotspot/share/jfr/leakprofiler/chains/bitset.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -33,7 +33,7 @@ BitSet::BitSet() :
_bitmap_fragments(32),
_fragment_list(NULL),
_last_fragment_bits(NULL),
- _last_fragment_granule(0) {
+ _last_fragment_granule(UINTPTR_MAX) {
}
BitSet::~BitSet() {
diff --git a/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp b/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp
index 76f6658cc9979..3fb3df0d7d751 100644
--- a/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp
+++ b/src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp
@@ -455,7 +455,7 @@ size_t JfrCheckpointManager::flush_type_set() {
elements = ::flush_type_set(thread);
}
}
- if (_new_checkpoint.is_signaled()) {
+ if (_new_checkpoint.is_signaled_with_reset()) {
WriteOperation wo(_chunkwriter);
MutexedWriteOperation mwo(wo);
_thread_local_mspace->iterate(mwo); // current epoch list
diff --git a/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSet.cpp b/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSet.cpp
index b93651a04903b..78d6ab48f97f2 100644
--- a/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSet.cpp
+++ b/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSet.cpp
@@ -243,6 +243,7 @@ int write__klass(JfrCheckpointWriter* writer, const void* k) {
int write__klass__leakp(JfrCheckpointWriter* writer, const void* k) {
assert(k != NULL, "invariant");
KlassPtr klass = (KlassPtr)k;
+ CLEAR_LEAKP(klass);
return write_klass(writer, klass, true);
}
@@ -848,7 +849,7 @@ class MethodIteratorHost {
private:
MethodCallback _method_cb;
KlassCallback _klass_cb;
- MethodUsedPredicate<leakp> _method_used_predicate;
+ MethodUsedPredicate _method_used_predicate;
MethodFlagPredicate _method_flag_predicate;
public:
MethodIteratorHost(JfrCheckpointWriter* writer,
@@ -1103,6 +1104,10 @@ void JfrTypeSet::clear() {
}
size_t JfrTypeSet::on_unloading_classes(JfrCheckpointWriter* writer) {
+ // JfrTraceIdEpoch::has_changed_tag_state_no_reset() is a load-acquire we issue to see side-effects (i.e. tags).
+ // The JfrRecorderThread does this as part of normal processing, but with concurrent class unloading, which can
+ // happen in arbitrary threads, we invoke it explicitly.
+ JfrTraceIdEpoch::has_changed_tag_state_no_reset();
if (JfrRecorder::is_recording()) {
return serialize(writer, NULL, true, false);
}
diff --git a/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp b/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp
index b93d1673b726b..905a16d28c57c 100644
--- a/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp
+++ b/src/hotspot/share/jfr/recorder/checkpoint/types/jfrTypeSetUtils.hpp
@@ -146,16 +146,12 @@ class SymbolPredicate {
}
};
-template <bool leakp>
class MethodUsedPredicate {
bool _current_epoch;
public:
MethodUsedPredicate(bool current_epoch) : _current_epoch(current_epoch) {}
bool operator()(const Klass* klass) {
- if (_current_epoch) {
- return leakp ? IS_LEAKP(klass) : METHOD_USED_THIS_EPOCH(klass);
- }
- return leakp ? IS_LEAKP(klass) : METHOD_USED_PREVIOUS_EPOCH(klass);
+ return _current_epoch ? METHOD_USED_THIS_EPOCH(klass) : METHOD_USED_PREVIOUS_EPOCH(klass);
}
};
diff --git a/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdEpoch.hpp b/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdEpoch.hpp
index 653c90fdba865..ba14919c9353b 100644
--- a/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdEpoch.hpp
+++ b/src/hotspot/share/jfr/recorder/checkpoint/types/traceid/jfrTraceIdEpoch.hpp
@@ -108,6 +108,10 @@ class JfrTraceIdEpoch : AllStatic {
}
static bool has_changed_tag_state() {
+ return _tag_state.is_signaled_with_reset();
+ }
+
+ static bool has_changed_tag_state_no_reset() {
return _tag_state.is_signaled();
}
diff --git a/src/hotspot/share/jfr/recorder/repository/jfrChunkWriter.cpp b/src/hotspot/share/jfr/recorder/repository/jfrChunkWriter.cpp
index 1fca9a43aa462..2f0781cdff2d2 100644
--- a/src/hotspot/share/jfr/recorder/repository/jfrChunkWriter.cpp
+++ b/src/hotspot/share/jfr/recorder/repository/jfrChunkWriter.cpp
@@ -207,7 +207,7 @@ int64_t JfrChunkWriter::write_chunk_header_checkpoint(bool flushpoint) {
const u4 checkpoint_size = current_offset() - event_size_offset;
write_padded_at_offset(checkpoint_size, event_size_offset);
set_last_checkpoint_offset(event_size_offset);
- const size_t sz_written = size_written();
+ const int64_t sz_written = size_written();
write_be_at_offset(sz_written, chunk_size_offset);
return sz_written;
}
diff --git a/src/hotspot/share/jfr/recorder/stringpool/jfrStringPool.cpp b/src/hotspot/share/jfr/recorder/stringpool/jfrStringPool.cpp
index f9d84d3dd26c5..7001f0f0a0272 100644
--- a/src/hotspot/share/jfr/recorder/stringpool/jfrStringPool.cpp
+++ b/src/hotspot/share/jfr/recorder/stringpool/jfrStringPool.cpp
@@ -44,7 +44,7 @@ typedef JfrStringPool::BufferPtr BufferPtr;
static JfrSignal _new_string;
bool JfrStringPool::is_modified() {
- return _new_string.is_signaled();
+ return _new_string.is_signaled_with_reset();
}
static JfrStringPool* _instance = NULL;
diff --git a/src/hotspot/share/jfr/utilities/jfrSignal.hpp b/src/hotspot/share/jfr/utilities/jfrSignal.hpp
index 410ec1e850453..74d1b9e21ca52 100644
--- a/src/hotspot/share/jfr/utilities/jfrSignal.hpp
+++ b/src/hotspot/share/jfr/utilities/jfrSignal.hpp
@@ -34,14 +34,16 @@ class JfrSignal {
JfrSignal() : _signaled(false) {}
void signal() const {
- if (!Atomic::load_acquire(&_signaled)) {
- Atomic::release_store(&_signaled, true);
- }
+ Atomic::release_store(&_signaled, true);
}
bool is_signaled() const {
- if (Atomic::load_acquire(&_signaled)) {
- Atomic::release_store(&_signaled, false); // auto-reset
+ return Atomic::load_acquire(&_signaled);
+ }
+
+ bool is_signaled_with_reset() const {
+ if (is_signaled()) {
+ Atomic::release_store(&_signaled, false);
return true;
}
return false;
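
The split above separates observing a signal from consuming it, so `JfrStringPool::is_modified()` and `flush_type_set()` consume the signal while `JfrTypeSet::on_unloading_classes()` can merely observe it. A standalone sketch of the two operations with `std::atomic`; note the reset is a plain store, as in the patch, which presumes one consumer draining the signal at a time:

```cpp
#include <atomic>

class Signal {
    mutable std::atomic<bool> _signaled{false};
public:
    void signal() const {
        _signaled.store(true, std::memory_order_release);
    }
    // Pure observation; leaves the signal set (cf. is_signaled()).
    bool is_signaled() const {
        return _signaled.load(std::memory_order_acquire);
    }
    // Consuming check; clears the signal so the associated work runs once
    // (cf. is_signaled_with_reset()).
    bool is_signaled_with_reset() const {
        if (is_signaled()) {
            _signaled.store(false, std::memory_order_release); // auto-reset
            return true;
        }
        return false;
    }
};
```
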
diff --git a/src/hotspot/share/jvmci/jvmciCompilerToVM.cpp b/src/hotspot/share/jvmci/jvmciCompilerToVM.cpp
index 783f48578a1fc..42318fe5bfa38 100644
--- a/src/hotspot/share/jvmci/jvmciCompilerToVM.cpp
+++ b/src/hotspot/share/jvmci/jvmciCompilerToVM.cpp
@@ -1832,7 +1832,7 @@ C2V_VMENTRY_NULL(jobjectArray, getDeclaredMethods, (JNIEnv* env, jobject, jobjec
return JVMCIENV->get_jobjectArray(methods);
C2V_END
-C2V_VMENTRY_NULL(jobject, readFieldValue, (JNIEnv* env, jobject, jobject object, jobject expected_type, long displacement, jboolean is_volatile, jobject kind_object))
+C2V_VMENTRY_NULL(jobject, readFieldValue, (JNIEnv* env, jobject, jobject object, jobject expected_type, long displacement, jobject kind_object))
if (object == NULL || kind_object == NULL) {
JVMCI_THROW_0(NullPointerException);
}
@@ -1872,13 +1872,18 @@ C2V_VMENTRY_NULL(jobject, readFieldValue, (JNIEnv* env, jobject, jobject object,
ShouldNotReachHere();
}
- if (displacement < 0 || ((long) displacement + type2aelembytes(basic_type) > HeapWordSize * obj->size())) {
+ int basic_type_elemsize = type2aelembytes(basic_type);
+ if (displacement < 0 || ((long) displacement + basic_type_elemsize > HeapWordSize * obj->size())) {
// Reading outside of the object bounds
JVMCI_THROW_MSG_NULL(IllegalArgumentException, "reading outside object bounds");
}
// Perform basic sanity checks on the read. Primitive reads are permitted to read outside the
// bounds of their fields but object reads must map exactly onto the underlying oop slot.
+ bool aligned = (displacement % basic_type_elemsize) == 0;
+ if (!aligned) {
+ JVMCI_THROW_MSG_NULL(IllegalArgumentException, "read is unaligned");
+ }
if (basic_type == T_OBJECT) {
if (obj->is_objArray()) {
if (displacement < arrayOopDesc::base_offset_in_bytes(T_OBJECT)) {
@@ -1916,15 +1921,20 @@ C2V_VMENTRY_NULL(jobject, readFieldValue, (JNIEnv* env, jobject, jobject object,
}
jlong value = 0;
+
+ // Treat all reads as volatile for simplicity as this function can be used
+ // both for reading Java fields declared as volatile as well as for constant
+ // folding Unsafe.get* methods with volatile semantics.
+
switch (basic_type) {
- case T_BOOLEAN: value = is_volatile ? obj->bool_field_acquire(displacement) : obj->bool_field(displacement); break;
- case T_BYTE: value = is_volatile ? obj->byte_field_acquire(displacement) : obj->byte_field(displacement); break;
- case T_SHORT: value = is_volatile ? obj->short_field_acquire(displacement) : obj->short_field(displacement); break;
- case T_CHAR: value = is_volatile ? obj->char_field_acquire(displacement) : obj->char_field(displacement); break;
+ case T_BOOLEAN: value = obj->bool_field_acquire(displacement); break;
+ case T_BYTE: value = obj->byte_field_acquire(displacement); break;
+ case T_SHORT: value = obj->short_field_acquire(displacement); break;
+ case T_CHAR: value = obj->char_field_acquire(displacement); break;
case T_FLOAT:
- case T_INT: value = is_volatile ? obj->int_field_acquire(displacement) : obj->int_field(displacement); break;
+ case T_INT: value = obj->int_field_acquire(displacement); break;
case T_DOUBLE:
- case T_LONG: value = is_volatile ? obj->long_field_acquire(displacement) : obj->long_field(displacement); break;
+ case T_LONG: value = obj->long_field_acquire(displacement); break;
case T_OBJECT: {
if (displacement == java_lang_Class::component_mirror_offset() && java_lang_Class::is_instance(obj()) &&
@@ -1934,7 +1944,8 @@ C2V_VMENTRY_NULL(jobject, readFieldValue, (JNIEnv* env, jobject, jobject object,
return JVMCIENV->get_jobject(JVMCIENV->get_JavaConstant_NULL_POINTER());
}
- oop value = is_volatile ? obj->obj_field_acquire(displacement) : obj->obj_field(displacement);
+ oop value = obj->obj_field_acquire(displacement);
+
if (value == NULL) {
return JVMCIENV->get_jobject(JVMCIENV->get_JavaConstant_NULL_POINTER());
} else {
@@ -2680,8 +2691,8 @@ JNINativeMethod CompilerToVM::methods[] = {
{CC "boxPrimitive", CC "(" OBJECT ")" OBJECTCONSTANT, FN_PTR(boxPrimitive)},
{CC "getDeclaredConstructors", CC "(" HS_RESOLVED_KLASS ")[" RESOLVED_METHOD, FN_PTR(getDeclaredConstructors)},
{CC "getDeclaredMethods", CC "(" HS_RESOLVED_KLASS ")[" RESOLVED_METHOD, FN_PTR(getDeclaredMethods)},
- {CC "readFieldValue", CC "(" HS_RESOLVED_KLASS HS_RESOLVED_KLASS "JZLjdk/vm/ci/meta/JavaKind;)" JAVACONSTANT, FN_PTR(readFieldValue)},
- {CC "readFieldValue", CC "(" OBJECTCONSTANT HS_RESOLVED_KLASS "JZLjdk/vm/ci/meta/JavaKind;)" JAVACONSTANT, FN_PTR(readFieldValue)},
+ {CC "readFieldValue", CC "(" HS_RESOLVED_KLASS HS_RESOLVED_KLASS "JLjdk/vm/ci/meta/JavaKind;)" JAVACONSTANT, FN_PTR(readFieldValue)},
+ {CC "readFieldValue", CC "(" OBJECTCONSTANT HS_RESOLVED_KLASS "JLjdk/vm/ci/meta/JavaKind;)" JAVACONSTANT, FN_PTR(readFieldValue)},
{CC "isInstance", CC "(" HS_RESOLVED_KLASS OBJECTCONSTANT ")Z", FN_PTR(isInstance)},
{CC "isAssignableFrom", CC "(" HS_RESOLVED_KLASS HS_RESOLVED_KLASS ")Z", FN_PTR(isAssignableFrom)},
{CC "isTrustedForIntrinsics", CC "(" HS_RESOLVED_KLASS ")Z", FN_PTR(isTrustedForIntrinsics)},
diff --git a/src/hotspot/share/jvmci/vmStructs_jvmci.cpp b/src/hotspot/share/jvmci/vmStructs_jvmci.cpp
index c0e0883f14d55..7ec398398a4de 100644
--- a/src/hotspot/share/jvmci/vmStructs_jvmci.cpp
+++ b/src/hotspot/share/jvmci/vmStructs_jvmci.cpp
@@ -565,8 +565,8 @@
declare_constant(Deoptimization::Reason_not_compiled_exception_handler) \
declare_constant(Deoptimization::Reason_unresolved) \
declare_constant(Deoptimization::Reason_jsr_mismatch) \
- declare_constant(Deoptimization::Reason_LIMIT) \
- declare_constant(Deoptimization::_support_large_access_byte_array_virtualization) \
+ declare_constant(Deoptimization::Reason_TRAP_HISTORY_LENGTH) \
+ declare_constant(Deoptimization::_support_large_access_byte_array_virtualization) \
\
declare_constant(FieldInfo::access_flags_offset) \
declare_constant(FieldInfo::name_index_offset) \
diff --git a/src/hotspot/share/memory/allocation.hpp b/src/hotspot/share/memory/allocation.hpp
index 7170cfa6732d9..e881b577b3580 100644
--- a/src/hotspot/share/memory/allocation.hpp
+++ b/src/hotspot/share/memory/allocation.hpp
@@ -145,6 +145,7 @@ class AllocatedObj {
f(mtServiceability, "Serviceability") \
f(mtMetaspace, "Metaspace") \
f(mtStringDedup, "String Deduplication") \
+ f(mtObjectMonitor, "Object Monitors") \
f(mtNone, "Unknown") \
//end
diff --git a/src/hotspot/share/memory/metaspace/metaspaceDCmd.cpp b/src/hotspot/share/memory/metaspace/metaspaceDCmd.cpp
index d54c9d236b904..1d45731e21662 100644
--- a/src/hotspot/share/memory/metaspace/metaspaceDCmd.cpp
+++ b/src/hotspot/share/memory/metaspace/metaspaceDCmd.cpp
@@ -42,6 +42,7 @@ MetaspaceDCmd::MetaspaceDCmd(outputStream* output, bool heap) :
_by_spacetype("by-spacetype", "Break down numbers by loader type.", "BOOLEAN", false, "false"),
_by_chunktype("by-chunktype", "Break down numbers by chunk type.", "BOOLEAN", false, "false"),
_show_vslist("vslist", "Shows details about the underlying virtual space.", "BOOLEAN", false, "false"),
+ _show_chunkfreelist("chunkfreelist", "Shows details about global chunk free lists (ChunkManager).", "BOOLEAN", false, "false"),
_scale("scale", "Memory usage in which to scale. Valid values are: 1, KB, MB or GB (fixed scale) "
"or \"dynamic\" for a dynamically choosen scale.",
"STRING", false, "dynamic"),
@@ -53,6 +54,7 @@ MetaspaceDCmd::MetaspaceDCmd(outputStream* output, bool heap) :
_dcmdparser.add_dcmd_option(&_by_chunktype);
_dcmdparser.add_dcmd_option(&_by_spacetype);
_dcmdparser.add_dcmd_option(&_show_vslist);
+ _dcmdparser.add_dcmd_option(&_show_chunkfreelist);
_dcmdparser.add_dcmd_option(&_scale);
}
@@ -96,6 +98,7 @@ void MetaspaceDCmd::execute(DCmdSource source, TRAPS) {
if (_by_chunktype.value()) flags |= (int)MetaspaceReporter::Option::BreakDownByChunkType;
if (_by_spacetype.value()) flags |= (int)MetaspaceReporter::Option::BreakDownBySpaceType;
if (_show_vslist.value()) flags |= (int)MetaspaceReporter::Option::ShowVSList;
+ if (_show_chunkfreelist.value()) flags |= (int)MetaspaceReporter::Option::ShowChunkFreeList;
VM_PrintMetadata op(output(), scale, flags);
VMThread::execute(&op);
}
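
With this wiring in place, the new section should be reachable through the existing `VM.metaspace` diagnostic command; assuming the usual jcmd syntax for boolean options, an invocation would look something like `jcmd <pid> VM.metaspace chunkfreelist` (hypothetical command line, not taken from this patch).
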
diff --git a/src/hotspot/share/memory/metaspace/metaspaceDCmd.hpp b/src/hotspot/share/memory/metaspace/metaspaceDCmd.hpp
index 0c0f49795d5ac..329dd6025d7ed 100644
--- a/src/hotspot/share/memory/metaspace/metaspaceDCmd.hpp
+++ b/src/hotspot/share/memory/metaspace/metaspaceDCmd.hpp
@@ -38,6 +38,7 @@ class MetaspaceDCmd : public DCmdWithParser {
DCmdArgument<bool> _by_spacetype;
DCmdArgument<bool> _by_chunktype;
DCmdArgument<bool> _show_vslist;
+ DCmdArgument<bool> _show_chunkfreelist;
DCmdArgument<char*> _scale;
DCmdArgument<bool> _show_classes;
public:
diff --git a/src/hotspot/share/memory/metaspace/metaspaceReporter.cpp b/src/hotspot/share/memory/metaspace/metaspaceReporter.cpp
index d856883aaec64..2bdc9019acf00 100644
--- a/src/hotspot/share/memory/metaspace/metaspaceReporter.cpp
+++ b/src/hotspot/share/memory/metaspace/metaspaceReporter.cpp
@@ -300,6 +300,24 @@ void MetaspaceReporter::print_report(outputStream* out, size_t scale, int flags)
out->cr();
}
+ // -- Print Chunkmanager details.
+ if ((flags & (int)Option::ShowChunkFreeList) > 0) {
+ out->cr();
+ out->print_cr("Chunk freelist details:");
+ if (Metaspace::using_class_space()) {
+ out->print_cr(" Non-Class:");
+ }
+ ChunkManager::chunkmanager_nonclass()->print_on(out);
+ out->cr();
+ if (Metaspace::using_class_space()) {
+ out->print_cr(" Class:");
+ ChunkManager::chunkmanager_class()->print_on(out);
+ out->cr();
+ }
+ }
+ out->cr();
+
+
//////////// Waste section ///////////////////////////
// As a convenience, print a summary of common waste.
out->cr();
diff --git a/src/hotspot/share/memory/metaspace/metaspaceReporter.hpp b/src/hotspot/share/memory/metaspace/metaspaceReporter.hpp
index 45c76b69e9dc4..2aab20633793d 100644
--- a/src/hotspot/share/memory/metaspace/metaspaceReporter.hpp
+++ b/src/hotspot/share/memory/metaspace/metaspaceReporter.hpp
@@ -44,7 +44,9 @@ class MetaspaceReporter : public AllStatic {
// Print details about the underlying virtual spaces.
ShowVSList = (1 << 3),
// If show_loaders: show loaded classes for each loader.
- ShowClasses = (1 << 4)
+ ShowClasses = (1 << 4),
+ // Print details about the global chunk free lists (ChunkManager).
+ ShowChunkFreeList = (1 << 5)
};
// This will print out a basic metaspace usage report but
diff --git a/src/hotspot/share/oops/instanceKlass.cpp b/src/hotspot/share/oops/instanceKlass.cpp
index 208d8a04f7332..d562cdef21a26 100644
--- a/src/hotspot/share/oops/instanceKlass.cpp
+++ b/src/hotspot/share/oops/instanceKlass.cpp
@@ -589,7 +589,10 @@ void InstanceKlass::deallocate_contents(ClassLoaderData* loader_data) {
// Release C heap allocated data that this points to, which includes
// reference counting symbol names.
- release_C_heap_structures_internal();
+ // Can't release the constant pool here because the constant pool can be
+ // deallocated separately from the InstanceKlass for default methods and
+ // redefine classes.
+ release_C_heap_structures(/* release_constant_pool */ false);
deallocate_methods(loader_data, methods());
set_methods(NULL);
@@ -1593,7 +1596,8 @@ bool InstanceKlass::find_field_from_offset(int offset, bool is_static, fieldDesc
void InstanceKlass::methods_do(void f(Method* method)) {
// Methods aren't stable until they are loaded. This can be read outside
// a lock through the ClassLoaderData for profiling
- if (!is_loaded()) {
+ // Redefined scratch classes are on the list and need to be cleaned
+ if (!is_loaded() && !is_scratch_class()) {
return;
}
@@ -2009,29 +2013,18 @@ Method* InstanceKlass::lookup_method_in_all_interfaces(Symbol* name,
return NULL;
}
-/* jni_id_for_impl for jfieldIds only */
-JNIid* InstanceKlass::jni_id_for_impl(int offset) {
+/* jni_id_for for jfieldIds only */
+JNIid* InstanceKlass::jni_id_for(int offset) {
MutexLocker ml(JfieldIdCreation_lock);
- // Retry lookup after we got the lock
JNIid* probe = jni_ids() == NULL ? NULL : jni_ids()->find(offset);
if (probe == NULL) {
- // Slow case, allocate new static field identifier
+ // Allocate new static field identifier
probe = new JNIid(this, offset, jni_ids());
set_jni_ids(probe);
}
return probe;
}
-
-/* jni_id_for for jfieldIds only */
-JNIid* InstanceKlass::jni_id_for(int offset) {
- JNIid* probe = jni_ids() == NULL ? NULL : jni_ids()->find(offset);
- if (probe == NULL) {
- probe = jni_id_for_impl(offset);
- }
- return probe;
-}
-
u2 InstanceKlass::enclosing_method_data(int offset) const {
const Array<jushort>* const inner_class_list = inner_classes();
if (inner_class_list == NULL) {
@@ -2375,7 +2368,9 @@ void InstanceKlass::metaspace_pointers_do(MetaspaceClosure* it) {
} else {
it->push(&_default_vtable_indices);
}
- it->push(&_fields);
+
+ // _fields might be written into by Rewriter::scan_method() -> fd.set_has_initialized_final_update()
+ it->push(&_fields, MetaspaceClosure::_writable);
if (itable_length() > 0) {
itableOffsetEntry* ioe = (itableOffsetEntry*)start_of_itable();
@@ -2525,6 +2520,9 @@ void InstanceKlass::restore_unshareable_info(ClassLoaderData* loader_data, Handl
constants()->restore_unshareable_info(CHECK);
if (array_klasses() != NULL) {
+ // To get a consistent list of classes we need MultiArray_lock to ensure
+ // array classes aren't observed while they are being restored.
+ MutexLocker ml(MultiArray_lock);
// Array classes have null protection domain.
// --> see ArrayKlass::complete_create_array_klass()
array_klasses()->restore_unshareable_info(ClassLoaderData::the_null_class_loader_data(), Handle(), CHECK);
@@ -2636,22 +2634,13 @@ static void method_release_C_heap_structures(Method* m) {
m->release_C_heap_structures();
}
-void InstanceKlass::release_C_heap_structures() {
-
+// Called also by InstanceKlass::deallocate_contents, with false for release_constant_pool.
+void InstanceKlass::release_C_heap_structures(bool release_constant_pool) {
// Clean up C heap
- release_C_heap_structures_internal();
- constants()->release_C_heap_structures();
+ Klass::release_C_heap_structures();
// Deallocate and call destructors for MDO mutexes
methods_do(method_release_C_heap_structures);
-}
-
-void InstanceKlass::release_C_heap_structures_internal() {
- Klass::release_C_heap_structures();
-
- // Can't release the constant pool here because the constant pool can be
- // deallocated separately from the InstanceKlass for default methods and
- // redefine classes.
// Deallocate oop map cache
if (_oop_map_cache != NULL) {
@@ -2687,6 +2676,10 @@ void InstanceKlass::release_C_heap_structures_internal() {
#endif
FREE_C_HEAP_ARRAY(char, _source_debug_extension);
+
+ if (release_constant_pool) {
+ constants()->release_C_heap_structures();
+ }
}
void InstanceKlass::set_source_debug_extension(const char* array, int length) {
@@ -3011,6 +3004,18 @@ InstanceKlass* InstanceKlass::compute_enclosing_class(bool* inner_is_member, TRA
constantPoolHandle i_cp(THREAD, constants());
if (ooff != 0) {
Klass* ok = i_cp->klass_at(ooff, CHECK_NULL);
+ if (!ok->is_instance_klass()) {
+ // If the outer class is not an instance klass then it cannot have
+ // declared any inner classes.
+ ResourceMark rm(THREAD);
+ Exceptions::fthrow(
+ THREAD_AND_LOCATION,
+ vmSymbols::java_lang_IncompatibleClassChangeError(),
+ "%s and %s disagree on InnerClasses attribute",
+ ok->external_name(),
+ external_name());
+ return NULL;
+ }
outer_klass = InstanceKlass::cast(ok);
*inner_is_member = true;
}
@@ -3939,8 +3944,6 @@ void InstanceKlass::purge_previous_version_list() {
// so will be deallocated during the next phase of class unloading.
log_trace(redefine, class, iklass, purge)
("previous version " INTPTR_FORMAT " is dead.", p2i(pv_node));
- // For debugging purposes.
- pv_node->set_is_scratch_class();
// Unlink from previous version list.
assert(pv_node->class_loader_data() == loader_data, "wrong loader_data");
InstanceKlass* next = pv_node->previous_versions();
@@ -4055,8 +4058,6 @@ void InstanceKlass::add_previous_version(InstanceKlass* scratch_class,
ConstantPool* cp_ref = scratch_class->constants();
if (!cp_ref->on_stack()) {
log_trace(redefine, class, iklass, add)("scratch class not added; no methods are running");
- // For debugging purposes.
- scratch_class->set_is_scratch_class();
scratch_class->class_loader_data()->add_to_deallocate_list(scratch_class);
return;
}
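The new guard in `compute_enclosing_class` turns a malformed `InnerClasses` attribute, one that names a non-instance class (such as an array class) as the outer class, into an `IncompatibleClassChangeError` instead of letting the later `InstanceKlass::cast` fail. Class files produced by javac never take this path; it matters for bytecode that was generated or rewritten by tools. A hedged sketch of how the error would surface through reflection, assuming `suspect` was loaded from such a corrupted class file:

```java
public class EnclosingProbe {
    static void probe(Class<?> suspect) {
        try {
            // Resolving the enclosing class walks the InnerClasses attribute,
            // which ends up in InstanceKlass::compute_enclosing_class.
            System.out.println(suspect.getEnclosingClass());
        } catch (IncompatibleClassChangeError icce) {
            // Thrown when the attribute's named outer class cannot be an
            // enclosing class, e.g. because it is not an instance class.
            System.out.println("malformed InnerClasses: " + icce.getMessage());
        }
    }
}
```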
diff --git a/src/hotspot/share/oops/instanceKlass.hpp b/src/hotspot/share/oops/instanceKlass.hpp
index 738f1855a982b..6658ef90b669a 100644
--- a/src/hotspot/share/oops/instanceKlass.hpp
+++ b/src/hotspot/share/oops/instanceKlass.hpp
@@ -254,8 +254,7 @@ class InstanceKlass: public Klass {
_misc_is_shared_platform_class = 1 << 11, // defining class loader is platform class loader
_misc_is_shared_app_class = 1 << 12, // defining class loader is app class loader
_misc_has_resolved_methods = 1 << 13, // resolved methods table entries added for this class
- _misc_is_being_redefined = 1 << 14, // used for locking redefinition
- _misc_has_contended_annotations = 1 << 15 // has @Contended annotation
+ _misc_has_contended_annotations = 1 << 14 // has @Contended annotation
};
u2 shared_loader_type_bits() const {
return _misc_is_shared_boot_class|_misc_is_shared_platform_class|_misc_is_shared_app_class;
@@ -738,14 +737,16 @@ class InstanceKlass: public Klass {
#if INCLUDE_JVMTI
// Redefinition locking. Class can only be redefined by one thread at a time.
+ // The flag is in access_flags so that it can be set and reset using atomic
+ // operations, and not be reset by other misc_flag settings.
bool is_being_redefined() const {
- return ((_misc_flags & _misc_is_being_redefined) != 0);
+ return _access_flags.is_being_redefined();
}
void set_is_being_redefined(bool value) {
if (value) {
- _misc_flags |= _misc_is_being_redefined;
+ _access_flags.set_is_being_redefined();
} else {
- _misc_flags &= ~_misc_is_being_redefined;
+ _access_flags.clear_is_being_redefined();
}
}
@@ -1107,7 +1108,7 @@ class InstanceKlass: public Klass {
// callbacks for actions during class unloading
static void unload_class(InstanceKlass* ik);
- virtual void release_C_heap_structures();
+ virtual void release_C_heap_structures(bool release_constant_pool = true);
// Naming
const char* signature_name() const;
@@ -1203,8 +1204,6 @@ class InstanceKlass: public Klass {
void initialize_impl (TRAPS);
void initialize_super_interfaces (TRAPS);
void eager_initialize_impl ();
- /* jni_id_for_impl for jfieldID only */
- JNIid* jni_id_for_impl (int offset);
// find a local method (returns NULL if not found)
Method* find_method_impl(const Symbol* name,
@@ -1220,9 +1219,6 @@ class InstanceKlass: public Klass {
StaticLookupMode static_mode,
PrivateLookupMode private_mode);
- // Free CHeap allocated fields.
- void release_C_heap_structures_internal();
-
#if INCLUDE_JVMTI
// RedefineClasses support
void link_previous_versions(InstanceKlass* pv) { _previous_versions = pv; }
diff --git a/src/hotspot/share/oops/klass.cpp b/src/hotspot/share/oops/klass.cpp
index 0714313aca2b7..59038cdd80f88 100644
--- a/src/hotspot/share/oops/klass.cpp
+++ b/src/hotspot/share/oops/klass.cpp
@@ -105,7 +105,7 @@ bool Klass::is_subclass_of(const Klass* k) const {
return false;
}
-void Klass::release_C_heap_structures() {
+void Klass::release_C_heap_structures(bool release_constant_pool) {
if (_name != NULL) _name->decrement_refcount();
}
diff --git a/src/hotspot/share/oops/klass.hpp b/src/hotspot/share/oops/klass.hpp
index ec0989aebd43b..1d58cc3ec2bde 100644
--- a/src/hotspot/share/oops/klass.hpp
+++ b/src/hotspot/share/oops/klass.hpp
@@ -117,8 +117,8 @@ class Klass : public Metadata {
// Klass identifier used to implement devirtualized oop closure dispatching.
const KlassID _id;
- // vtable length
- int _vtable_len;
+ // Processed access flags, for use by Class.getModifiers.
+ jint _modifier_flags;
// The fields _super_check_offset, _secondary_super_cache, _secondary_supers
// and _primary_supers all help make fast subtype checks. See big discussion
@@ -154,7 +154,10 @@ class Klass : public Metadata {
// Provide access the corresponding instance java.lang.ClassLoader.
ClassLoaderData* _class_loader_data;
- jint _modifier_flags; // Processed access flags, for use by Class.getModifiers.
+ int _vtable_len; // vtable length. This field may be read very often when we
+ // have lots of itable dispatches (e.g., lambdas and streams).
+ // Keep it away from the beginning of a Klass to avoid cacheline
+ // contention that may happen when a nearby object is modified.
AccessFlags _access_flags; // Access flags. The class/interface distinction is stored here.
JFR_ONLY(DEFINE_TRACE_ID_FIELD;)
@@ -693,7 +696,7 @@ class Klass : public Metadata {
Symbol* name() const { return _name; }
void set_name(Symbol* n);
- virtual void release_C_heap_structures();
+ virtual void release_C_heap_structures(bool release_constant_pool = true);
public:
virtual jint compute_modifier_flags() const = 0;
diff --git a/src/hotspot/share/oops/method.cpp b/src/hotspot/share/oops/method.cpp
index a71ea7ee1d2df..ae75527f19b7b 100644
--- a/src/hotspot/share/oops/method.cpp
+++ b/src/hotspot/share/oops/method.cpp
@@ -324,14 +324,11 @@ int Method::bci_from(address bcp) const {
if (is_native() && bcp == 0) {
return 0;
}
-#ifdef ASSERT
- {
- ResourceMark rm;
- assert(is_native() && bcp == code_base() || contains(bcp) || VMError::is_error_reported(),
- "bcp doesn't belong to this method: bcp: " INTPTR_FORMAT ", method: %s",
- p2i(bcp), name_and_sig_as_C_string());
- }
-#endif
+ // Do not have a ResourceMark here because AsyncGetCallTrace stack walking code
+ // may call this after interrupting a nested ResourceMark.
+ assert(is_native() && bcp == code_base() || contains(bcp) || VMError::is_error_reported(),
+ "bcp doesn't belong to this method. bcp: " INTPTR_FORMAT, p2i(bcp));
+
return bcp - code_base();
}
diff --git a/src/hotspot/share/oops/methodData.hpp b/src/hotspot/share/oops/methodData.hpp
index a33b118334e4a..55967e9ee19b3 100644
--- a/src/hotspot/share/oops/methodData.hpp
+++ b/src/hotspot/share/oops/methodData.hpp
@@ -30,6 +30,7 @@
#include "oops/method.hpp"
#include "oops/oop.hpp"
#include "runtime/atomic.hpp"
+#include "runtime/deoptimization.hpp"
#include "runtime/mutex.hpp"
#include "utilities/align.hpp"
#include "utilities/copy.hpp"
@@ -1965,7 +1966,7 @@ class MethodData : public Metadata {
// Whole-method sticky bits and flags
enum {
- _trap_hist_limit = 25 JVMCI_ONLY(+5), // decoupled from Deoptimization::Reason_LIMIT
+ _trap_hist_limit = Deoptimization::Reason_TRAP_HISTORY_LENGTH,
_trap_hist_mask = max_jubyte,
_extra_data_count = 4 // extra DataLayout headers, for trap history
}; // Public flag values
@@ -1980,6 +1981,7 @@ class MethodData : public Metadata {
uint _nof_overflow_traps; // trap count, excluding _trap_hist
union {
intptr_t _align;
+ // JVMCI separates trap history for OSR compilations from normal compilations
u1 _array[JVMCI_ONLY(2 *) MethodData::_trap_hist_limit];
} _trap_hist;
@@ -1996,14 +1998,14 @@ class MethodData : public Metadata {
// Return (uint)-1 for overflow.
uint trap_count(int reason) const {
- assert((uint)reason < JVMCI_ONLY(2*) _trap_hist_limit, "oob");
+ assert((uint)reason < ARRAY_SIZE(_trap_hist._array), "oob");
return (int)((_trap_hist._array[reason]+1) & _trap_hist_mask) - 1;
}
uint inc_trap_count(int reason) {
// Count another trap, anywhere in this method.
assert(reason >= 0, "must be single trap");
- assert((uint)reason < JVMCI_ONLY(2*) _trap_hist_limit, "oob");
+ assert((uint)reason < ARRAY_SIZE(_trap_hist._array), "oob");
uint cnt1 = 1 + _trap_hist._array[reason];
if ((cnt1 & _trap_hist_mask) != 0) { // if no counter overflow...
_trap_hist._array[reason] = cnt1;
diff --git a/src/hotspot/share/oops/oopHandle.inline.hpp b/src/hotspot/share/oops/oopHandle.inline.hpp
index 44362499a2cc5..20de5146ec32c 100644
--- a/src/hotspot/share/oops/oopHandle.inline.hpp
+++ b/src/hotspot/share/oops/oopHandle.inline.hpp
@@ -48,7 +48,7 @@ inline OopHandle::OopHandle(OopStorage* storage, oop obj) :
}
inline void OopHandle::release(OopStorage* storage) {
- if (peek() != NULL) {
+ if (_obj != NULL) {
// Clear the OopHandle first
NativeAccess<>::oop_store(_obj, (oop)NULL);
storage->release(_obj);
diff --git a/src/hotspot/share/opto/addnode.cpp b/src/hotspot/share/opto/addnode.cpp
index 8d758b2b9c53f..6f7ca9df7b80d 100644
--- a/src/hotspot/share/opto/addnode.cpp
+++ b/src/hotspot/share/opto/addnode.cpp
@@ -363,6 +363,13 @@ Node *AddINode::Ideal(PhaseGVN *phase, bool can_reshape) {
}
}
+ // Convert (~x+1) into -x. Note there isn't a bitwise not bytecode;
+ // "~x" would typically be represented as "x^(-1)", so (~x+1) will
+ // be (x^(-1))+1.
+ if (op1 == Op_XorI && phase->type(in2) == TypeInt::ONE &&
+ phase->type(in1->in(2)) == TypeInt::MINUS_1) {
+ return new SubINode(phase->makecon(TypeInt::ZERO), in1->in(1));
+ }
return AddNode::Ideal(phase, can_reshape);
}
@@ -486,7 +493,13 @@ Node *AddLNode::Ideal(PhaseGVN *phase, bool can_reshape) {
}
}
-
+ // Convert (~x+1) into -x. Note there isn't a bitwise not bytecode;
+ // "~x" would typically be represented as "x^(-1)", so (~x+1) will
+ // be (x^(-1))+1.
+ if (op1 == Op_XorL && phase->type(in2) == TypeLong::ONE &&
+ phase->type(in1->in(2)) == TypeLong::MINUS_1) {
+ return new SubLNode(phase->makecon(TypeLong::ZERO), in1->in(1));
+ }
return AddNode::Ideal(phase, can_reshape);
}
@@ -899,6 +912,21 @@ const Type *OrLNode::add_ring( const Type *t0, const Type *t1 ) const {
}
//=============================================================================
+//------------------------------Idealize---------------------------------------
+Node* XorINode::Ideal(PhaseGVN* phase, bool can_reshape) {
+ Node* in1 = in(1);
+ Node* in2 = in(2);
+ int op1 = in1->Opcode();
+ // Convert ~(x-1) into -x. Note there isn't a bitwise not bytecode;
+ // "~x" would typically be represented as "x^(-1)", and "x-c0" would
+ // be converted into "x+ -c0" in SubXNode::Ideal. So ~(x-1) will eventually
+ // be (x+(-1))^(-1).
+ if (op1 == Op_AddI && phase->type(in2) == TypeInt::MINUS_1 &&
+ phase->type(in1->in(2)) == TypeInt::MINUS_1) {
+ return new SubINode(phase->makecon(TypeInt::ZERO), in1->in(1));
+ }
+ return AddNode::Ideal(phase, can_reshape);
+}
const Type* XorINode::Value(PhaseGVN* phase) const {
Node* in1 = in(1);
@@ -964,6 +992,21 @@ const Type *XorLNode::add_ring( const Type *t0, const Type *t1 ) const {
return TypeLong::make( r0->get_con() ^ r1->get_con() );
}
+Node* XorLNode::Ideal(PhaseGVN* phase, bool can_reshape) {
+ Node* in1 = in(1);
+ Node* in2 = in(2);
+ int op1 = in1->Opcode();
+ // Convert ~(x-1) into -x. Note there isn't a bitwise not bytecode;
+ // "~x" would typically be represented as "x^(-1)", and "x-c0" would
+ // be converted into "x+ -c0" in SubXNode::Ideal. So ~(x-1) will eventually
+ // be (x+(-1))^(-1).
+ if (op1 == Op_AddL && phase->type(in2) == TypeLong::MINUS_1 &&
+ phase->type(in1->in(2)) == TypeLong::MINUS_1) {
+ return new SubLNode(phase->makecon(TypeLong::ZERO), in1->in(1));
+ }
+ return AddNode::Ideal(phase, can_reshape);
+}
+
const Type* XorLNode::Value(PhaseGVN* phase) const {
Node* in1 = in(1);
Node* in2 = in(2);
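Both new Ideal transformations rest on two's-complement identities: since -x == ~x + 1, it also follows that ~(x - 1) == -(x - 1) - 1 == -x. Because javac has no bitwise-not bytecode, ~x reaches C2 as x^(-1), which is exactly the shape the pattern matches above look for. A minimal Java sketch of the source patterns these rules canonicalize to a single negation (class and method names are illustrative only):

```java
public class NegationShapes {
    // javac emits ~x as x ^ (-1); AddINode::Ideal rewrites (x ^ (-1)) + 1
    // into 0 - x.
    static int negViaNotPlusOne(int x) {
        return ~x + 1;
    }

    // x - 1 is canonicalized to x + (-1) first, so ~(x - 1) arrives as
    // (x + (-1)) ^ (-1), which XorLNode::Ideal rewrites into 0 - x.
    static long negViaNotOfDecrement(long x) {
        return ~(x - 1);
    }
}
```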
diff --git a/src/hotspot/share/opto/addnode.hpp b/src/hotspot/share/opto/addnode.hpp
index 0c6c34d392e01..132d796aea407 100644
--- a/src/hotspot/share/opto/addnode.hpp
+++ b/src/hotspot/share/opto/addnode.hpp
@@ -227,6 +227,7 @@ class XorINode : public AddNode {
public:
XorINode( Node *in1, Node *in2 ) : AddNode(in1,in2) {}
virtual int Opcode() const;
+ virtual Node *Ideal(PhaseGVN *phase, bool can_reshape);
virtual const Type *add_ring( const Type *, const Type * ) const;
virtual const Type *add_id() const { return TypeInt::ZERO; }
virtual const Type *bottom_type() const { return TypeInt::INT; }
@@ -242,6 +243,7 @@ class XorLNode : public AddNode {
public:
XorLNode( Node *in1, Node *in2 ) : AddNode(in1,in2) {}
virtual int Opcode() const;
+ virtual Node *Ideal(PhaseGVN *phase, bool can_reshape);
virtual const Type *add_ring( const Type *, const Type * ) const;
virtual const Type *add_id() const { return TypeLong::ZERO; }
virtual const Type *bottom_type() const { return TypeLong::LONG; }
diff --git a/src/hotspot/share/opto/buildOopMap.cpp b/src/hotspot/share/opto/buildOopMap.cpp
index b35d5152f295a..9a14b17513e87 100644
--- a/src/hotspot/share/opto/buildOopMap.cpp
+++ b/src/hotspot/share/opto/buildOopMap.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2002, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2002, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -231,10 +231,6 @@ OopMap *OopFlow::build_oop_map( Node *n, int max_reg, PhaseRegAlloc *regalloc, i
VMReg r = OptoReg::as_VMReg(OptoReg::Name(reg), framesize, max_inarg_slot);
- if (false && r->is_reg() && !r->is_concrete()) {
- continue;
- }
-
// See if dead (no reaching def).
Node *def = _defs[reg]; // Get reaching def
assert( def, "since live better have reaching def" );
@@ -312,14 +308,10 @@ OopMap *OopFlow::build_oop_map( Node *n, int max_reg, PhaseRegAlloc *regalloc, i
set_live_bit(live,breg);
// Already missed our turn?
if( breg < reg ) {
- if (b->is_stack() || b->is_concrete() || true ) {
- omap->set_oop( b);
- }
+ omap->set_oop(b);
}
}
- if (b->is_stack() || b->is_concrete() || true ) {
- omap->set_derived_oop( r, b);
- }
+ omap->set_derived_oop(r, b);
}
} else if( t->isa_narrowoop() ) {
@@ -347,9 +339,7 @@ OopMap *OopFlow::build_oop_map( Node *n, int max_reg, PhaseRegAlloc *regalloc, i
assert( dup_check[_callees[reg]]==0, "trying to callee save same reg twice" );
debug_only( dup_check[_callees[reg]]=1; )
VMReg callee = OptoReg::as_VMReg(OptoReg::Name(_callees[reg]));
- if ( callee->is_concrete() || true ) {
- omap->set_callee_saved( r, callee);
- }
+ omap->set_callee_saved(r, callee);
} else {
// Other - some reaching non-oop value
diff --git a/src/hotspot/share/opto/c2compiler.cpp b/src/hotspot/share/opto/c2compiler.cpp
index 5a1d7888ae138..f939145e19858 100644
--- a/src/hotspot/share/opto/c2compiler.cpp
+++ b/src/hotspot/share/opto/c2compiler.cpp
@@ -216,6 +216,9 @@ bool C2Compiler::is_intrinsic_supported(const methodHandle& method, bool is_virt
case vmIntrinsics::_copyMemory:
if (StubRoutines::unsafe_arraycopy() == NULL) return false;
break;
+ case vmIntrinsics::_encodeAsciiArray:
+ if (!Matcher::match_rule_supported(Op_EncodeISOArray) || !Matcher::supports_encode_ascii_array) return false;
+ break;
case vmIntrinsics::_encodeISOArray:
case vmIntrinsics::_encodeByteISOArray:
if (!Matcher::match_rule_supported(Op_EncodeISOArray)) return false;
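The new `_encodeAsciiArray` case reuses the `EncodeISOArray` match rule but additionally gates on `Matcher::supports_encode_ascii_array`, so each backend opts into the ASCII variant separately. The intrinsic targets the JDK's internal char[]-to-byte[] ASCII encoding loop; one plausible way to reach that loop from user code is shown below (which internal method is actually intrinsified is not visible in this hunk, so treat the mapping as an assumption):

```java
import java.nio.charset.StandardCharsets;

public class AsciiEncode {
    public static void main(String[] args) {
        String s = "only ascii characters here";
        // Encoding to US-ASCII funnels through the array encoding fast path
        // that the EncodeISOArray/ASCII intrinsic accelerates.
        byte[] b = s.getBytes(StandardCharsets.US_ASCII);
        System.out.println(b.length);
    }
}
```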
diff --git a/src/hotspot/share/opto/callGenerator.cpp b/src/hotspot/share/opto/callGenerator.cpp
index b74cf29cbfb18..e9c376e42891e 100644
--- a/src/hotspot/share/opto/callGenerator.cpp
+++ b/src/hotspot/share/opto/callGenerator.cpp
@@ -459,7 +459,9 @@ class LateInlineVirtualCallGenerator : public VirtualCallGenerator {
public:
LateInlineVirtualCallGenerator(ciMethod* method, int vtable_index, float prof_factor)
: VirtualCallGenerator(method, vtable_index, true /*separate_io_projs*/),
- _unique_id(0), _inline_cg(NULL), _callee(NULL), _is_pure_call(false), _prof_factor(prof_factor) {}
+ _unique_id(0), _inline_cg(NULL), _callee(NULL), _is_pure_call(false), _prof_factor(prof_factor) {
+ assert(IncrementalInlineVirtual, "required");
+ }
virtual bool is_late_inline() const { return true; }
@@ -513,6 +515,12 @@ bool LateInlineVirtualCallGenerator::do_late_inline_check(Compile* C, JVMState*
// Method handle linker case is handled in CallDynamicJavaNode::Ideal().
// Unless inlining is performed, _override_symbolic_info bit will be set in DirectCallGenerator::generate().
+ // Implicit receiver null checks introduce problems when exception states are combined.
+ Node* receiver = jvms->map()->argument(jvms, 0);
+ const Type* recv_type = C->initial_gvn()->type(receiver);
+ if (recv_type->maybe_null()) {
+ return false;
+ }
// Even if inlining is not allowed, a virtual call can be strength-reduced to a direct call.
bool allow_inline = C->inlining_incrementally();
if (!allow_inline && _callee->holder()->is_interface()) {
@@ -667,12 +675,13 @@ void CallGenerator::do_late_inline_helper() {
bool result_not_used = false;
if (is_pure_call()) {
- if (is_boxing_late_inline() && callprojs.resproj != nullptr) {
- // replace box node to scalar node only in case it is directly referenced by debug info
- assert(call->as_CallStaticJava()->is_boxing_method(), "sanity");
- if (!has_non_debug_usages(callprojs.resproj) && is_box_cache_valid(call)) {
- scalarize_debug_usages(call, callprojs.resproj);
- }
+ // Disabled due to JDK-8276112
+ if (false && is_boxing_late_inline() && callprojs.resproj != nullptr) {
+ // replace box node to scalar node only in case it is directly referenced by debug info
+ assert(call->as_CallStaticJava()->is_boxing_method(), "sanity");
+ if (!has_non_debug_usages(callprojs.resproj) && is_box_cache_valid(call)) {
+ scalarize_debug_usages(call, callprojs.resproj);
+ }
}
// The call is marked as pure (no important side effects), but result isn't used.
@@ -736,13 +745,6 @@ void CallGenerator::do_late_inline_helper() {
C->set_default_node_notes(entry_nn);
}
- // Virtual call involves a receiver null check which can be made implicit.
- if (is_virtual_late_inline()) {
- GraphKit kit(jvms);
- kit.null_check_receiver();
- jvms = kit.transfer_exceptions_into_jvms();
- }
-
// Now perform the inlining using the synthesized JVMState
JVMState* new_jvms = inline_cg()->generate(jvms);
if (new_jvms == NULL) return; // no change
diff --git a/src/hotspot/share/opto/castnode.hpp b/src/hotspot/share/opto/castnode.hpp
index 95574e60457ac..2aa318c0e24c0 100644
--- a/src/hotspot/share/opto/castnode.hpp
+++ b/src/hotspot/share/opto/castnode.hpp
@@ -140,7 +140,7 @@ class CastFFNode: public ConstraintCastNode {
init_class_id(Class_CastFF);
}
virtual int Opcode() const;
- virtual uint ideal_reg() const { return Op_RegF; }
+ virtual uint ideal_reg() const { return in(1)->ideal_reg(); }
};
class CastDDNode: public ConstraintCastNode {
@@ -150,7 +150,7 @@ class CastDDNode: public ConstraintCastNode {
init_class_id(Class_CastDD);
}
virtual int Opcode() const;
- virtual uint ideal_reg() const { return Op_RegD; }
+ virtual uint ideal_reg() const { return in(1)->ideal_reg(); }
};
class CastVVNode: public ConstraintCastNode {
diff --git a/src/hotspot/share/opto/cfgnode.cpp b/src/hotspot/share/opto/cfgnode.cpp
index e44842e3fb6d0..71cc378790b90 100644
--- a/src/hotspot/share/opto/cfgnode.cpp
+++ b/src/hotspot/share/opto/cfgnode.cpp
@@ -1366,9 +1366,12 @@ Node* PhiNode::Identity(PhaseGVN* phase) {
}
int true_path = is_diamond_phi();
- if (true_path != 0) {
+ // Delay CMove'ing identity if Ideal has not had the chance to handle unsafe cases, yet.
+ if (true_path != 0 && !(phase->is_IterGVN() && wait_for_region_igvn(phase))) {
Node* id = is_cmove_id(phase, true_path);
- if (id != NULL) return id;
+ if (id != NULL) {
+ return id;
+ }
}
// Looking for phis with identical inputs. If we find one that has
@@ -2269,13 +2272,13 @@ Node *PhiNode::Ideal(PhaseGVN *phase, bool can_reshape) {
// Phi(...MergeMem(m0, m1:AT1, m2:AT2)...) into
// MergeMem(Phi(...m0...), Phi:AT1(...m1...), Phi:AT2(...m2...))
PhaseIterGVN* igvn = phase->is_IterGVN();
+ assert(igvn != NULL, "sanity check");
Node* hook = new Node(1);
PhiNode* new_base = (PhiNode*) clone();
// Must eagerly register phis, since they participate in loops.
- if (igvn) {
- igvn->register_new_node_with_optimizer(new_base);
- hook->add_req(new_base);
- }
+ igvn->register_new_node_with_optimizer(new_base);
+ hook->add_req(new_base);
+
MergeMemNode* result = MergeMemNode::make(new_base);
for (uint i = 1; i < req(); ++i) {
Node *ii = in(i);
@@ -2287,10 +2290,8 @@ Node *PhiNode::Ideal(PhaseGVN *phase, bool can_reshape) {
if (mms.is_empty()) {
Node* new_phi = new_base->slice_memory(mms.adr_type(phase->C));
made_new_phi = true;
- if (igvn) {
- igvn->register_new_node_with_optimizer(new_phi);
- hook->add_req(new_phi);
- }
+ igvn->register_new_node_with_optimizer(new_phi);
+ hook->add_req(new_phi);
mms.set_memory(new_phi);
}
Node* phi = mms.memory();
@@ -2308,6 +2309,13 @@ Node *PhiNode::Ideal(PhaseGVN *phase, bool can_reshape) {
}
}
}
+ // Already replace this phi node to cut it off from the graph to not interfere in dead loop checks during the
+ // transformations of the new phi nodes below. Otherwise, we could wrongly conclude that there is no dead loop
+ // because we are finding this phi node again. Also set the type of the new MergeMem node in case we are also
+ // visiting it in the transformations below.
+ igvn->replace_node(this, result);
+ igvn->set_type(result, result->bottom_type());
+
// now transform the new nodes, and return the mergemem
for (MergeMemStream mms(result); mms.next_non_empty(); ) {
Node* phi = mms.memory();
@@ -2717,7 +2725,7 @@ const Type* NeverBranchNode::Value(PhaseGVN* phase) const {
//------------------------------Ideal------------------------------------------
// Check for no longer being part of a loop
Node *NeverBranchNode::Ideal(PhaseGVN *phase, bool can_reshape) {
- if (can_reshape && !in(0)->is_Loop()) {
+ if (can_reshape && !in(0)->is_Region()) {
// Dead code elimination can sometimes delete this projection so
// if it's not there, there's nothing to do.
Node* fallthru = proj_out_or_null(0);
diff --git a/src/hotspot/share/opto/compile.cpp b/src/hotspot/share/opto/compile.cpp
index 09191f4563ae5..91b6427df254b 100644
--- a/src/hotspot/share/opto/compile.cpp
+++ b/src/hotspot/share/opto/compile.cpp
@@ -770,8 +770,12 @@ Compile::Compile( ciEnv* ci_env, ciMethod* target, int osr_bci,
// If any phase is randomized for stress testing, seed random number
// generation and log the seed for repeatability.
if (StressLCM || StressGCM || StressIGVN || StressCCP) {
- _stress_seed = FLAG_IS_DEFAULT(StressSeed) ?
static_cast<uint>(Ticks::now().nanoseconds()) : StressSeed;
+ if (FLAG_IS_DEFAULT(StressSeed) || (FLAG_IS_ERGO(StressSeed) && RepeatCompilation)) {
+ _stress_seed = static_cast<uint>(Ticks::now().nanoseconds());
+ FLAG_SET_ERGO(StressSeed, _stress_seed);
+ } else {
+ _stress_seed = StressSeed;
+ }
if (_log != NULL) {
_log->elem("stress_test seed='%u'", _stress_seed);
}
@@ -1352,7 +1356,7 @@ const TypePtr *Compile::flatten_alias_type( const TypePtr *tj ) const {
ciInstanceKlass *k = to->klass()->as_instance_klass();
if( ptr == TypePtr::Constant ) {
if (to->klass() != ciEnv::current()->Class_klass() ||
- offset < k->size_helper() * wordSize) {
+ offset < k->layout_helper_size_in_bytes()) {
// No constant oop pointers (such as Strings); they alias with
// unknown strings.
assert(!is_known_inst, "not scalarizable allocation");
@@ -1376,7 +1380,7 @@ const TypePtr *Compile::flatten_alias_type( const TypePtr *tj ) const {
if (!is_known_inst) { // Do it only for non-instance types
tj = to = TypeInstPtr::make(TypePtr::BotPTR, env()->Object_klass(), false, NULL, offset);
}
- } else if (offset < 0 || offset >= k->size_helper() * wordSize) {
+ } else if (offset < 0 || offset >= k->layout_helper_size_in_bytes()) {
// Static fields are in the space above the normal instance
// fields in the java.lang.Class instance.
if (to->klass() != ciEnv::current()->Class_klass()) {
@@ -1386,6 +1390,7 @@ const TypePtr *Compile::flatten_alias_type( const TypePtr *tj ) const {
}
} else {
ciInstanceKlass *canonical_holder = k->get_canonical_holder(offset);
+ assert(offset < canonical_holder->layout_helper_size_in_bytes(), "");
if (!k->equals(canonical_holder) || tj->offset() != offset) {
if( is_known_inst ) {
tj = to = TypeInstPtr::make(to->ptr(), canonical_holder, true, NULL, offset, to->instance_id());
@@ -1658,7 +1663,7 @@ Compile::AliasType* Compile::find_alias_type(const TypePtr* adr_type, bool no_cr
ciField* field;
if (tinst->const_oop() != NULL &&
tinst->klass() == ciEnv::current()->Class_klass() &&
- tinst->offset() >= (tinst->klass()->as_instance_klass()->size_helper() * wordSize)) {
+ tinst->offset() >= (tinst->klass()->as_instance_klass()->layout_helper_size_in_bytes())) {
// static field
ciInstanceKlass* k = tinst->const_oop()->as_instance()->java_lang_Class_klass()->as_instance_klass();
field = k->get_field_by_offset(tinst->offset(), true);
@@ -2108,10 +2113,6 @@ void Compile::Optimize() {
if (failing()) return;
}
- // Now that all inlining is over, cut edge from root to loop
- // safepoints
- remove_root_to_sfpts_edges(igvn);
-
// Remove the speculative part of types and clean up the graph from
// the extra CastPP nodes whose only purpose is to carry them. Do
// that early so that optimizations are not disrupted by the extra
@@ -2148,6 +2149,10 @@ void Compile::Optimize() {
set_for_igvn(save_for_igvn);
}
+ // Now that all inlining is over and no PhaseRemoveUseless will run, cut edge from root to loop
+ // safepoints
+ remove_root_to_sfpts_edges(igvn);
+
// Perform escape analysis
if (_do_escape_analysis && ConnectionGraph::has_candidates(this)) {
if (has_loops()) {
@@ -2221,7 +2226,7 @@ void Compile::Optimize() {
TracePhase tp("ccp", &timers[_t_ccp]);
ccp.do_transform();
}
- print_method(PHASE_CPP1, 2);
+ print_method(PHASE_CCP1, 2);
assert( true, "Break here to ccp.dump_old2new_map()");
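The reworked seeding makes stress runs reproducible: a freshly generated seed is published back with `FLAG_SET_ERGO(StressSeed, ...)`, so later compilations in the same VM reuse it, except when `RepeatCompilation` is in effect, in which case every repetition deliberately draws a new seed. A failing run whose log contains `stress_test seed='...'` can then be replayed by handing the value back, e.g. `-XX:+UnlockDiagnosticVMOptions -XX:+StressIGVN -XX:StressSeed=<seed>` (the exact set of stress flags available depends on the build; treat flag names beyond those in this hunk as assumptions).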
diff --git a/src/hotspot/share/opto/convertnode.cpp b/src/hotspot/share/opto/convertnode.cpp
index 259c58aeaf5eb..501cb08c45209 100644
--- a/src/hotspot/share/opto/convertnode.cpp
+++ b/src/hotspot/share/opto/convertnode.cpp
@@ -108,8 +108,10 @@ const Type* ConvD2INode::Value(PhaseGVN* phase) const {
//------------------------------Ideal------------------------------------------
// If converting to an int type, skip any rounding nodes
Node *ConvD2INode::Ideal(PhaseGVN *phase, bool can_reshape) {
- if( in(1)->Opcode() == Op_RoundDouble )
- set_req(1,in(1)->in(1));
+ if (in(1)->Opcode() == Op_RoundDouble) {
+ set_req(1, in(1)->in(1));
+ return this;
+ }
return NULL;
}
@@ -142,8 +144,10 @@ Node* ConvD2LNode::Identity(PhaseGVN* phase) {
//------------------------------Ideal------------------------------------------
// If converting to an int type, skip any rounding nodes
Node *ConvD2LNode::Ideal(PhaseGVN *phase, bool can_reshape) {
- if( in(1)->Opcode() == Op_RoundDouble )
- set_req(1,in(1)->in(1));
+ if (in(1)->Opcode() == Op_RoundDouble) {
+ set_req(1, in(1)->in(1));
+ return this;
+ }
return NULL;
}
@@ -179,8 +183,10 @@ Node* ConvF2INode::Identity(PhaseGVN* phase) {
//------------------------------Ideal------------------------------------------
// If converting to an int type, skip any rounding nodes
Node *ConvF2INode::Ideal(PhaseGVN *phase, bool can_reshape) {
- if( in(1)->Opcode() == Op_RoundFloat )
- set_req(1,in(1)->in(1));
+ if (in(1)->Opcode() == Op_RoundFloat) {
+ set_req(1, in(1)->in(1));
+ return this;
+ }
return NULL;
}
@@ -206,8 +212,10 @@ Node* ConvF2LNode::Identity(PhaseGVN* phase) {
//------------------------------Ideal------------------------------------------
// If converting to an int type, skip any rounding nodes
Node *ConvF2LNode::Ideal(PhaseGVN *phase, bool can_reshape) {
- if( in(1)->Opcode() == Op_RoundFloat )
- set_req(1,in(1)->in(1));
+ if (in(1)->Opcode() == Op_RoundFloat) {
+ set_req(1, in(1)->in(1));
+ return this;
+ }
return NULL;
}
diff --git a/src/hotspot/share/opto/doCall.cpp b/src/hotspot/share/opto/doCall.cpp
index 4a371eb7064ca..a0a042ca01067 100644
--- a/src/hotspot/share/opto/doCall.cpp
+++ b/src/hotspot/share/opto/doCall.cpp
@@ -136,7 +136,7 @@ CallGenerator* Compile::call_generator(ciMethod* callee, int vtable_index, bool
if (cg->does_virtual_dispatch()) {
cg_intrinsic = cg;
cg = NULL;
- } else if (should_delay_vector_inlining(callee, jvms)) {
+ } else if (IncrementalInline && should_delay_vector_inlining(callee, jvms)) {
return CallGenerator::for_late_inline(callee, cg);
} else {
return cg;
@@ -328,8 +328,10 @@ CallGenerator* Compile::call_generator(ciMethod* callee, int vtable_index, bool
CallGenerator* miss_cg = CallGenerator::for_uncommon_trap(callee,
Deoptimization::Reason_class_check, Deoptimization::Action_none);
- CallGenerator* cg = CallGenerator::for_guarded_call(holder, miss_cg, hit_cg);
+ ciKlass* constraint = (holder->is_subclass_of(singleton) ? holder : singleton); // avoid upcasts
+ CallGenerator* cg = CallGenerator::for_guarded_call(constraint, miss_cg, hit_cg);
if (hit_cg != NULL && cg != NULL) {
+ dependencies()->assert_unique_implementor(declared_interface, singleton);
dependencies()->assert_unique_concrete_method(declared_interface, cha_monomorphic_target, declared_interface, callee);
return cg;
}
diff --git a/src/hotspot/share/opto/graphKit.cpp b/src/hotspot/share/opto/graphKit.cpp
index 21bf49fbcf87a..ccafbf6fbb203 100644
--- a/src/hotspot/share/opto/graphKit.cpp
+++ b/src/hotspot/share/opto/graphKit.cpp
@@ -529,13 +529,6 @@ void GraphKit::uncommon_trap_if_should_post_on_exceptions(Deoptimization::DeoptR
void GraphKit::builtin_throw(Deoptimization::DeoptReason reason, Node* arg) {
bool must_throw = true;
- if (env()->jvmti_can_post_on_exceptions()) {
- // check if we must post exception events, take uncommon trap if so
- uncommon_trap_if_should_post_on_exceptions(reason, must_throw);
- // here if should_post_on_exceptions is false
- // continue on with the normal codegen
- }
-
// If this particular condition has not yet happened at this
// bytecode, then use the uncommon trap mechanism, and allow for
// a future recompilation if several traps occur here.
@@ -598,6 +591,13 @@ void GraphKit::builtin_throw(Deoptimization::DeoptReason reason, Node* arg) {
}
if (failing()) { stop(); return; } // exception allocation might fail
if (ex_obj != NULL) {
+ if (env()->jvmti_can_post_on_exceptions()) {
+ // check if we must post exception events, take uncommon trap if so
+ uncommon_trap_if_should_post_on_exceptions(reason, must_throw);
+ // here if should_post_on_exceptions is false
+ // continue on with the normal codegen
+ }
+
// Cheat with a preallocated exception object.
if (C->log() != NULL)
C->log()->elem("hot_throw preallocated='1' reason='%s'",
@@ -1748,14 +1748,15 @@ Node* GraphKit::array_element_address(Node* ary, Node* idx, BasicType elembt,
}
//-------------------------load_array_element-------------------------
-Node* GraphKit::load_array_element(Node* ctl, Node* ary, Node* idx, const TypeAryPtr* arytype) {
+Node* GraphKit::load_array_element(Node* ary, Node* idx, const TypeAryPtr* arytype, bool set_ctrl) {
const Type* elemtype = arytype->elem();
BasicType elembt = elemtype->array_element_basic_type();
Node* adr = array_element_address(ary, idx, elembt, arytype->size());
if (elembt == T_NARROWOOP) {
elembt = T_OBJECT; // To satisfy switch in LoadNode::make()
}
- Node* ld = make_load(ctl, adr, elemtype, elembt, arytype, MemNode::unordered);
+ Node* ld = access_load_at(ary, adr, arytype, elemtype, elembt,
+ IN_HEAP | IS_ARRAY | (set_ctrl ? C2_CONTROL_DEPENDENT_LOAD : 0));
return ld;
}
@@ -4260,7 +4261,7 @@ void GraphKit::inflate_string_slow(Node* src, Node* dst, Node* start, Node* coun
record_for_igvn(mem);
set_control(head);
set_memory(mem, TypeAryPtr::BYTES);
- Node* ch = load_array_element(control(), src, i_byte, TypeAryPtr::BYTES);
+ Node* ch = load_array_element(src, i_byte, TypeAryPtr::BYTES, /* set_ctrl */ true);
Node* st = store_to_memory(control(), array_element_address(dst, i_char, T_BYTE),
AndI(ch, intcon(0xff)), T_CHAR, TypeAryPtr::BYTES, MemNode::unordered,
false, false, true /* mismatched */);
diff --git a/src/hotspot/share/opto/graphKit.hpp b/src/hotspot/share/opto/graphKit.hpp
index d2d1043c1eaac..d815e21956fc1 100644
--- a/src/hotspot/share/opto/graphKit.hpp
+++ b/src/hotspot/share/opto/graphKit.hpp
@@ -660,7 +660,7 @@ class GraphKit : public Phase {
Node* ctrl = NULL);
// Return a load of array element at idx.
- Node* load_array_element(Node* ctl, Node* ary, Node* idx, const TypeAryPtr* arytype);
+ Node* load_array_element(Node* ary, Node* idx, const TypeAryPtr* arytype, bool set_ctrl);
//---------------- Dtrace support --------------------
void make_dtrace_method_entry_exit(ciMethod* method, bool is_entry);
diff --git a/src/hotspot/share/opto/ifnode.cpp b/src/hotspot/share/opto/ifnode.cpp
index 003240aaca36f..38b40a68b1f80 100644
--- a/src/hotspot/share/opto/ifnode.cpp
+++ b/src/hotspot/share/opto/ifnode.cpp
@@ -114,10 +114,11 @@ static Node* split_if(IfNode *iff, PhaseIterGVN *igvn) {
if( !t->singleton() ) return NULL;
// No intervening control, like a simple Call
- Node *r = iff->in(0);
- if( !r->is_Region() ) return NULL;
- if (r->is_Loop()) return NULL;
- if( phi->region() != r ) return NULL;
+ Node* r = iff->in(0);
+ if (!r->is_Region() || r->is_Loop() || phi->region() != r || r->as_Region()->is_copy()) {
+ return NULL;
+ }
+
// No other users of the cmp/bool
if (b->outcnt() != 1 || cmp->outcnt() != 1) {
//tty->print_cr("many users of cmp/bool");
@@ -243,13 +244,23 @@ static Node* split_if(IfNode *iff, PhaseIterGVN *igvn) {
}
Node* proj = PhaseIdealLoop::find_predicate(r->in(ii));
if (proj != NULL) {
+ // Bail out if splitting through a region with a predicate input (could
+ // also be a loop header before loop opts creates a LoopNode for it).
return NULL;
}
}
// If all the defs of the phi are the same constant, we already have the desired end state.
// Skip the split that would create empty phi and region nodes.
- if((r->req() - req_c) == 1) {
+ if ((r->req() - req_c) == 1) {
+ return NULL;
+ }
+
+ // At this point we know that we can apply the split if optimization. If the region is still on the worklist,
+ // we should wait until it is processed. The region might be removed which makes this optimization redundant.
+ // This also avoids the creation of dead data loops when rewiring data nodes below when a region is dying.
+ if (igvn->_worklist.member(r)) {
+ igvn->_worklist.push(iff); // retry split if later again
return NULL;
}
diff --git a/src/hotspot/share/opto/intrinsicnode.hpp b/src/hotspot/share/opto/intrinsicnode.hpp
index 3d6e9a38d1225..ab8a834bb28ad 100644
--- a/src/hotspot/share/opto/intrinsicnode.hpp
+++ b/src/hotspot/share/opto/intrinsicnode.hpp
@@ -168,10 +168,14 @@ class HasNegativesNode: public StrIntrinsicNode {
//------------------------------EncodeISOArray--------------------------------
-// encode char[] to byte[] in ISO_8859_1
+// encode char[] to byte[] in ISO_8859_1 or ASCII
class EncodeISOArrayNode: public Node {
+ bool ascii;
public:
- EncodeISOArrayNode(Node* control, Node* arymem, Node* s1, Node* s2, Node* c): Node(control, arymem, s1, s2, c) {};
+ EncodeISOArrayNode(Node* control, Node* arymem, Node* s1, Node* s2, Node* c, bool ascii)
+ : Node(control, arymem, s1, s2, c), ascii(ascii) {}
+
+ bool is_ascii() { return ascii; }
virtual int Opcode() const;
virtual bool depends_only_on_test() const { return false; }
virtual const Type* bottom_type() const { return TypeInt::INT; }
diff --git a/src/hotspot/share/opto/lcm.cpp b/src/hotspot/share/opto/lcm.cpp
index 014c6e596bf10..0548e6493dc1f 100644
--- a/src/hotspot/share/opto/lcm.cpp
+++ b/src/hotspot/share/opto/lcm.cpp
@@ -518,7 +518,7 @@ Node* PhaseCFG::select(
uint score = 0; // Bigger is better
int idx = -1; // Index in worklist
int cand_cnt = 0; // Candidate count
- bool block_size_threshold_ok = (block->number_of_nodes() > 10) ? true : false;
+ bool block_size_threshold_ok = (recalc_pressure_nodes != NULL) && (block->number_of_nodes() > 10);
for( uint i=0; i<cnt; i++ ) { // Inspect entire worklist
@@ ... @@ bool PhaseCFG::schedule_local(Block* block, GrowableArray<int>& ready_cnt, Vecto
return true;
}
- bool block_size_threshold_ok = (block->number_of_nodes() > 10) ? true : false;
+ bool block_size_threshold_ok = (recalc_pressure_nodes != NULL) && (block->number_of_nodes() > 10);
// We track the uses of local definitions as input dependences so that
// we know when a given instruction is available to be scheduled.
diff --git a/src/hotspot/share/opto/library_call.cpp b/src/hotspot/share/opto/library_call.cpp
index f175e15b31d8f..f65e3bf07a2bb 100644
--- a/src/hotspot/share/opto/library_call.cpp
+++ b/src/hotspot/share/opto/library_call.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1999, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -588,7 +588,9 @@ bool LibraryCallKit::try_to_inline(int predicate) {
case vmIntrinsics::_encodeISOArray:
case vmIntrinsics::_encodeByteISOArray:
- return inline_encodeISOArray();
+ return inline_encodeISOArray(false);
+ case vmIntrinsics::_encodeAsciiArray:
+ return inline_encodeISOArray(true);
case vmIntrinsics::_updateCRC32:
return inline_updateCRC32();
@@ -1559,7 +1561,7 @@ bool LibraryCallKit::inline_string_char_access(bool is_store) {
if (is_store) {
access_store_at(value, adr, TypeAryPtr::BYTES, ch, TypeInt::CHAR, T_CHAR, IN_HEAP | MO_UNORDERED | C2_MISMATCHED);
} else {
- ch = access_load_at(value, adr, TypeAryPtr::BYTES, TypeInt::CHAR, T_CHAR, IN_HEAP | MO_UNORDERED | C2_MISMATCHED | C2_CONTROL_DEPENDENT_LOAD);
+ ch = access_load_at(value, adr, TypeAryPtr::BYTES, TypeInt::CHAR, T_CHAR, IN_HEAP | MO_UNORDERED | C2_MISMATCHED | C2_CONTROL_DEPENDENT_LOAD | C2_UNKNOWN_CONTROL_LOAD);
set_result(ch);
}
return true;
@@ -4846,8 +4848,8 @@ LibraryCallKit::tightly_coupled_allocation(Node* ptr) {
}
//-------------inline_encodeISOArray-----------------------------------
-// encode char[] to byte[] in ISO_8859_1
-bool LibraryCallKit::inline_encodeISOArray() {
+// encode char[] to byte[] in ISO_8859_1 or ASCII
+bool LibraryCallKit::inline_encodeISOArray(bool ascii) {
assert(callee()->signature()->size() == 5, "encodeISOArray has 5 parameters");
// no receiver since it is static method
Node *src = argument(0);
@@ -4882,7 +4884,7 @@ bool LibraryCallKit::inline_encodeISOArray() {
// 'dst_start' points to dst array + scaled offset
const TypeAryPtr* mtype = TypeAryPtr::BYTES;
- Node* enc = new EncodeISOArrayNode(control(), memory(mtype), src_start, dst_start, length);
+ Node* enc = new EncodeISOArrayNode(control(), memory(mtype), src_start, dst_start, length, ascii);
enc = _gvn.transform(enc);
Node* res_mem = _gvn.transform(new SCMemProjNode(enc));
set_memory(res_mem, mtype);
@@ -6167,7 +6169,7 @@ Node * LibraryCallKit::get_key_start_from_aescrypt_object(Node *aescrypt_object)
if (objSessionK == NULL) {
return (Node *) NULL;
}
- Node* objAESCryptKey = load_array_element(control(), objSessionK, intcon(0), TypeAryPtr::OOPS);
+ Node* objAESCryptKey = load_array_element(objSessionK, intcon(0), TypeAryPtr::OOPS, /* set_ctrl */ true);
#else
Node* objAESCryptKey = load_field_from_object(aescrypt_object, "K", "[I");
#endif // PPC64
diff --git a/src/hotspot/share/opto/library_call.hpp b/src/hotspot/share/opto/library_call.hpp
index 1d33d6f4c9b0b..15e2de48becb1 100644
--- a/src/hotspot/share/opto/library_call.hpp
+++ b/src/hotspot/share/opto/library_call.hpp
@@ -285,7 +285,7 @@ class LibraryCallKit : public GraphKit {
Node* get_state_from_digest_object(Node *digestBase_object, const char* state_type);
Node* get_digest_length_from_digest_object(Node *digestBase_object);
Node* inline_digestBase_implCompressMB_predicate(int predicate);
- bool inline_encodeISOArray();
+ bool inline_encodeISOArray(bool ascii);
bool inline_updateCRC32();
bool inline_updateBytesCRC32();
bool inline_updateByteBufferCRC32();
diff --git a/src/hotspot/share/opto/loopPredicate.cpp b/src/hotspot/share/opto/loopPredicate.cpp
index fdb2bdbf7adf6..6fd40b6447d72 100644
--- a/src/hotspot/share/opto/loopPredicate.cpp
+++ b/src/hotspot/share/opto/loopPredicate.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011, 2018, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -107,8 +107,9 @@ void PhaseIdealLoop::register_control(Node* n, IdealLoopTree *loop, Node* pred,
// Otherwise, the continuation projection is set up to be the false
// projection. This code is also used to clone predicates to cloned loops.
ProjNode* PhaseIdealLoop::create_new_if_for_predicate(ProjNode* cont_proj, Node* new_entry,
- Deoptimization::DeoptReason reason,
- int opcode, bool if_cont_is_true_proj) {
+ Deoptimization::DeoptReason reason, int opcode,
+ bool if_cont_is_true_proj, Node_List* old_new,
+ UnswitchingAction unswitching_action) {
assert(cont_proj->is_uncommon_trap_if_pattern(reason), "must be a uct if pattern!");
IfNode* iff = cont_proj->in(0)->as_If();
@@ -193,11 +194,40 @@ ProjNode* PhaseIdealLoop::create_new_if_for_predicate(ProjNode* cont_proj, Node*
if (use->is_Phi() && use->outcnt() > 0) {
assert(use->in(0) == rgn, "");
_igvn.rehash_node_delayed(use);
- use->add_req(use->in(proj_index));
+ Node* phi_input = use->in(proj_index);
+ if (unswitching_action == UnswitchingAction::FastLoopCloning
+ && !phi_input->is_CFG() && !phi_input->is_Phi() && get_ctrl(phi_input) == uncommon_proj) {
+ // There are some control dependent nodes on the uncommon projection and we are currently copying predicates
+ // to the fast loop in loop unswitching (first step, slow loop is processed afterwards). For the fast loop,
+ // we need to clone all the data nodes in the chain from the phi ('use') up until the node whose control input
+ // is the uncommon_proj. The slow loop can reuse the old data nodes and thus only needs to update the control
+ // input to the uncommon_proj (done on the next invocation of this method with UnswitchingAction::SlowLoopRewiring).
+ assert(LoopUnswitching, "sanity check");
+ phi_input = clone_data_nodes_for_fast_loop(phi_input, uncommon_proj, if_uct, old_new);
+ } else if (unswitching_action == UnswitchingAction::SlowLoopRewiring) {
+ // Replace phi input for the old predicate path with TOP as the predicate is dying anyways. This avoids the need
+ // to clone the data nodes again for the slow loop.
+ assert(LoopUnswitching, "sanity check");
+ _igvn.replace_input_of(use, proj_index, C->top());
+ }
+ use->add_req(phi_input);
has_phi = true;
}
}
assert(!has_phi || rgn->req() > 3, "no phis when region is created");
+ if (unswitching_action == UnswitchingAction::SlowLoopRewiring) {
+ // Rewire the control dependent data nodes for the slow loop from the old to the new uncommon projection.
+ assert(uncommon_proj->outcnt() > 1 && old_new == NULL, "sanity");
+ for (DUIterator_Fast jmax, j = uncommon_proj->fast_outs(jmax); j < jmax; j++) {
+ Node* data = uncommon_proj->fast_out(j);
+ if (!data->is_CFG()) {
+ _igvn.replace_input_of(data, 0, if_uct);
+ set_ctrl(data, if_uct);
+ --j;
+ --jmax;
+ }
+ }
+ }
if (new_entry == NULL) {
// Attach if_cont to iff
@@ -209,9 +239,70 @@ ProjNode* PhaseIdealLoop::create_new_if_for_predicate(ProjNode* cont_proj, Node*
return if_cont->as_Proj();
}
+// Clone data nodes for the fast loop while creating a new If with create_new_if_for_predicate. Returns the node which is
+// used for the uncommon trap phi input.
+Node* PhaseIdealLoop::clone_data_nodes_for_fast_loop(Node* phi_input, ProjNode* uncommon_proj, Node* if_uct, Node_List* old_new) {
+ // Step 1: Clone all nodes on the data chain but do not rewire anything, yet. Keep track of the cloned nodes
+ // by using the old_new mapping. This mapping is then used in step 2 to rewire the cloned nodes accordingly.
+ DEBUG_ONLY(uint last_idx = C->unique();)
+ Unique_Node_List list;
+ list.push(phi_input);
+ for (uint j = 0; j < list.size(); j++) {
+ Node* next = list.at(j);
+ Node* clone = next->clone();
+ _igvn.register_new_node_with_optimizer(clone);
+ old_new->map(next->_idx, clone);
+ for (uint k = 1; k < next->req(); k++) {
+ Node* in = next->in(k);
+ if (!in->is_Phi() && get_ctrl(in) == uncommon_proj) {
+ list.push(in);
+ }
+ }
+ }
+
+ // Step 2: All nodes are cloned. Rewire them by using the old_new mapping.
+ for (uint j = 0; j < list.size(); j++) {
+ Node* next = list.at(j);
+ Node* clone = old_new->at(next->_idx);
+ assert(clone != NULL && clone->_idx >= last_idx, "must exist and be a proper clone");
+ if (next->in(0) == uncommon_proj) {
+ // All data nodes with a control input to the uncommon projection in the chain need to be rewired to the new uncommon
+ // projection (could not only be the last data node in the chain but also, for example, a DivNode within the chain).
+ _igvn.replace_input_of(clone, 0, if_uct);
+ set_ctrl(clone, if_uct);
+ }
+
+ // Rewire the inputs of the cloned nodes to the old nodes to the new clones.
+ for (uint k = 1; k < next->req(); k++) {
+ Node* in = next->in(k);
+ if (!in->is_Phi()) {
+ assert(!in->is_CFG(), "must be data node");
+ Node* in_clone = old_new->at(in->_idx);
+ if (in_clone != NULL) {
+ assert(in_clone->_idx >= last_idx, "must be a valid clone");
+ _igvn.replace_input_of(clone, k, in_clone);
+ set_ctrl(clone, if_uct);
+ }
+ }
+ }
+ }
+ Node* clone_phi_input = old_new->at(phi_input->_idx);
+ assert(clone_phi_input != NULL && clone_phi_input->_idx >= last_idx, "must exist and be a proper clone");
+ return clone_phi_input;
+}
//--------------------------clone_predicate-----------------------
-ProjNode* PhaseIdealLoop::clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry, Deoptimization::DeoptReason reason) {
- ProjNode* new_predicate_proj = create_new_if_for_predicate(predicate_proj, new_entry, reason, Op_If);
+ProjNode* PhaseIdealLoop::clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry,
+ Deoptimization::DeoptReason reason, Node_List* old_new) {
+ UnswitchingAction unswitching_action;
+ if (predicate_proj->other_if_proj()->outcnt() > 1) {
+ // There are some data dependencies that need to be taken care of when cloning a predicate.
+ unswitching_action = old_new == NULL ? UnswitchingAction::SlowLoopRewiring : UnswitchingAction::FastLoopCloning;
+ } else {
+ unswitching_action = UnswitchingAction::None;
+ }
+
+ ProjNode* new_predicate_proj = create_new_if_for_predicate(predicate_proj, new_entry, reason, Op_If,
+ true, old_new, unswitching_action);
IfNode* iff = new_predicate_proj->in(0)->as_If();
Node* ctrl = iff->in(0);
@@ -250,9 +341,9 @@ void PhaseIdealLoop::clone_skeleton_predicates_to_unswitched_loop(IdealLoopTree*
assert(predicate->is_Proj() && predicate->as_Proj()->is_IfProj(), "predicate must be a projection of an if node");
IfProjNode* predicate_proj = predicate->as_IfProj();
- ProjNode* fast_proj = clone_skeleton_predicate_for_unswitched_loops(iff, predicate_proj, uncommon_proj, reason, iffast_pred, loop);
+ ProjNode* fast_proj = clone_skeleton_predicate_for_unswitched_loops(iff, predicate_proj, reason, iffast_pred);
assert(skeleton_predicate_has_opaque(fast_proj->in(0)->as_If()), "must find skeleton predicate for fast loop");
- ProjNode* slow_proj = clone_skeleton_predicate_for_unswitched_loops(iff, predicate_proj, uncommon_proj, reason, ifslow_pred, loop);
+ ProjNode* slow_proj = clone_skeleton_predicate_for_unswitched_loops(iff, predicate_proj, reason, ifslow_pred);
assert(skeleton_predicate_has_opaque(slow_proj->in(0)->as_If()), "must find skeleton predicate for slow loop");
// Update control dependent data nodes.
@@ -306,10 +397,10 @@ void PhaseIdealLoop::get_skeleton_predicates(Node* predicate, Unique_Node_List&
// Clone a skeleton predicate for an unswitched loop. OpaqueLoopInit and OpaqueLoopStride nodes are cloned and uncommon
// traps are kept for the predicate (a Halt node is used later when creating pre/main/post loops and copying this cloned
// predicate again).
-ProjNode* PhaseIdealLoop::clone_skeleton_predicate_for_unswitched_loops(Node* iff, ProjNode* predicate, Node* uncommon_proj,
- Deoptimization::DeoptReason reason, ProjNode* output_proj,
- IdealLoopTree* loop) {
- Node* bol = clone_skeleton_predicate_bool(iff, NULL, NULL, predicate, uncommon_proj, output_proj, loop);
+ProjNode* PhaseIdealLoop::clone_skeleton_predicate_for_unswitched_loops(Node* iff, ProjNode* predicate,
+ Deoptimization::DeoptReason reason,
+ ProjNode* output_proj) {
+ Node* bol = clone_skeleton_predicate_bool(iff, NULL, NULL, output_proj);
ProjNode* proj = create_new_if_for_predicate(output_proj, NULL, reason, iff->Opcode(), predicate->is_IfTrue());
_igvn.replace_input_of(proj->in(0), 1, bol);
_igvn.replace_input_of(output_proj->in(0), 0, proj);
@@ -319,7 +410,7 @@ ProjNode* PhaseIdealLoop::clone_skeleton_predicate_for_unswitched_loops(Node* if
//--------------------------clone_loop_predicates-----------------------
// Clone loop predicates to cloned loops when unswitching a loop.
-void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, const Node_List& old_new, ProjNode*& iffast_pred, ProjNode*& ifslow_pred) {
+void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, Node_List& old_new, ProjNode*& iffast_pred, ProjNode*& ifslow_pred) {
LoopNode* head = loop->_head->as_Loop();
bool clone_limit_check = !head->is_CountedLoop();
Node* entry = head->skip_strip_mined()->in(LoopNode::EntryControl);
@@ -343,7 +434,7 @@ void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, co
}
if (predicate_proj != NULL) { // right pattern that can be used by loop predication
// clone predicate
- iffast_pred = clone_predicate_to_unswitched_loop(predicate_proj, iffast_pred, Deoptimization::Reason_predicate);
+ iffast_pred = clone_predicate_to_unswitched_loop(predicate_proj, iffast_pred, Deoptimization::Reason_predicate, &old_new);
ifslow_pred = clone_predicate_to_unswitched_loop(predicate_proj, ifslow_pred, Deoptimization::Reason_predicate);
clone_skeleton_predicates_to_unswitched_loop(loop, old_new, Deoptimization::Reason_predicate, predicate_proj, iffast_pred, ifslow_pred);
@@ -352,7 +443,7 @@ void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, co
}
if (profile_predicate_proj != NULL) { // right pattern that can be used by loop predication
// clone predicate
- iffast_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, iffast_pred, Deoptimization::Reason_profile_predicate);
+ iffast_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, iffast_pred, Deoptimization::Reason_profile_predicate, &old_new);
ifslow_pred = clone_predicate_to_unswitched_loop(profile_predicate_proj, ifslow_pred, Deoptimization::Reason_profile_predicate);
clone_skeleton_predicates_to_unswitched_loop(loop, old_new, Deoptimization::Reason_profile_predicate, profile_predicate_proj, iffast_pred, ifslow_pred);
@@ -363,7 +454,7 @@ void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, co
// Clone loop limit check last to insert it before loop.
// Don't clone a limit check which was already finalized
// for this counted loop (only one limit check is needed).
- iffast_pred = clone_predicate_to_unswitched_loop(limit_check_proj, iffast_pred, Deoptimization::Reason_loop_limit_check);
+ iffast_pred = clone_predicate_to_unswitched_loop(limit_check_proj, iffast_pred, Deoptimization::Reason_loop_limit_check, &old_new);
ifslow_pred = clone_predicate_to_unswitched_loop(limit_check_proj, ifslow_pred, Deoptimization::Reason_loop_limit_check);
check_created_predicate_for_unswitching(iffast_pred);
@@ -372,7 +463,7 @@ void PhaseIdealLoop::clone_predicates_to_unswitched_loop(IdealLoopTree* loop, co
}
#ifndef PRODUCT
-void PhaseIdealLoop::check_created_predicate_for_unswitching(const Node* new_entry) const {
+void PhaseIdealLoop::check_created_predicate_for_unswitching(const Node* new_entry) {
assert(new_entry != NULL, "IfTrue or IfFalse after clone predicate");
if (TraceLoopPredicate) {
tty->print("Loop Predicate cloned: ");
@@ -464,6 +555,7 @@ class Invariance : public StackObj {
Node_List _old_new; // map of old to new (clone)
IdealLoopTree* _lpt;
PhaseIdealLoop* _phase;
+ Node* _data_dependency_on; // The projection into the loop on which data nodes are dependent, or NULL otherwise
// Helper function to set up the invariance for invariance computation
// If n is a known invariant, set up directly. Otherwise, look up the
@@ -565,7 +657,8 @@ class Invariance : public StackObj {
_visited(area), _invariant(area),
_stack(area, 10 /* guess */),
_clone_visited(area), _old_new(area),
- _lpt(lpt), _phase(lpt->_phase)
+ _lpt(lpt), _phase(lpt->_phase),
+ _data_dependency_on(NULL)
{
LoopNode* head = _lpt->_head->as_Loop();
Node* entry = head->skip_strip_mined()->in(LoopNode::EntryControl);
@@ -573,7 +666,12 @@ class Invariance : public StackObj {
// If a node is pinned between the predicates and the loop
// entry, we won't be able to move any node in the loop that
// depends on it above it in a predicate. Mark all those nodes
- // as non loop invariatnt.
+ // as non-loop-invariant.
+ // Loop predication could create new nodes for which this invariance
+ // information is missing. Remember the 'entry' node so that we can
+ // later check again whether a node needs to be treated as non-loop-
+ // invariant as well.
+ _data_dependency_on = entry;
Unique_Node_List wq;
wq.push(entry);
for (uint next = 0; next < wq.size(); ++next) {
@@ -592,6 +690,12 @@ class Invariance : public StackObj {
}
}
+ // Did we explicitly mark some nodes non-loop-invariant? If so, return the entry node on which some data nodes
+ // depend, preventing loop predication. Otherwise, return NULL.
+ Node* data_dependency_on() {
+ return _data_dependency_on;
+ }
+
// Map old to n for invariance computation and clone
void map_ctrl(Node* old, Node* n) {
assert(old->is_CFG() && n->is_CFG(), "must be");
@@ -621,7 +725,7 @@ class Invariance : public StackObj {
// Returns true if the predicate of iff is in "scale*iv + offset u< load_range(ptr)" format
// Note: this function is particularly designed for loop predication. We require load_range
// and offset to be loop invariant computed on the fly by "invar"
-bool IdealLoopTree::is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invariance& invar) const {
+bool IdealLoopTree::is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invariance& invar DEBUG_ONLY(COMMA ProjNode *predicate_proj)) const {
if (!is_loop_exit(iff)) {
return false;
}
@@ -653,15 +757,44 @@ bool IdealLoopTree::is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invari
if (!invar.is_invariant(range)) {
return false;
}
+
+ Compile* C = Compile::current();
+ uint old_unique_idx = C->unique();
Node *iv = _head->as_CountedLoop()->phi();
int scale = 0;
Node *offset = NULL;
if (!phase->is_scaled_iv_plus_offset(cmp->in(1), iv, &scale, &offset)) {
return false;
}
- if (offset && !invar.is_invariant(offset)) { // offset must be invariant
- return false;
+ if (offset != NULL) {
+ if (!invar.is_invariant(offset)) { // offset must be invariant
+ return false;
+ }
+ Node* data_dependency_on = invar.data_dependency_on();
+ if (data_dependency_on != NULL && old_unique_idx < C->unique()) {
+ // 'offset' node was newly created by is_scaled_iv_plus_offset(). Check that it does not depend on the entry projection
+ // into the loop. If it does, we cannot perform loop predication (see Invariance::Invariance()).
+ assert(!offset->is_CFG(), "offset must be a data node");
+ if (_phase->get_ctrl(offset) == data_dependency_on) {
+ return false;
+ }
+ }
}
+#ifdef ASSERT
+ if (offset && phase->has_ctrl(offset)) {
+ Node* offset_ctrl = phase->get_ctrl(offset);
+ if (phase->get_loop(predicate_proj) == phase->get_loop(offset_ctrl) &&
+ phase->is_dominator(predicate_proj, offset_ctrl)) {
+ // If the control of offset was promoted by a previous loop predication pass,
+ // then it will lead to a cyclic dependency: the previously promoted loop
+ // predicate is in the same loop as the predication point.
+ // This situation can occur when pinning nodes too conservatively - can we do better?
+ assert(false, "cyclic dependency prevents range check elimination, idx: offset %d, offset_ctrl %d, predicate_proj %d",
+ offset->_idx, offset_ctrl->_idx, predicate_proj->_idx);
+ }
+ }
+#endif
return true;
}
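For context, the "scale*iv + offset u< range" shape matched by is_range_check_if() relies on the classic unsigned-compare trick: a single unsigned comparison subsumes both the lower (>= 0) and upper (< range) bound checks, because a negative index wraps around to a large unsigned value. A minimal standalone C++ sketch of the invariant being matched (not HotSpot code):

#include <cassert>
#include <cstdint>

// One unsigned compare covers both bounds of "0 <= scale*iv + offset < range".
static bool in_bounds(int32_t scale, int32_t iv, int32_t offset, int32_t range) {
  return (uint32_t)(scale * iv + offset) < (uint32_t)range;
}

int main() {
  assert(in_bounds(1, 5, 0, 10));    // 0 <= 5 < 10: passes
  assert(!in_bounds(1, -1, 0, 10));  // -1 wraps to 0xFFFFFFFF: fails u<
  assert(!in_bounds(1, 10, 0, 10));  // upper bound: 10 u< 10 is false
  return 0;
}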
@@ -683,7 +816,7 @@ bool IdealLoopTree::is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invari
BoolNode* PhaseIdealLoop::rc_predicate(IdealLoopTree *loop, Node* ctrl,
int scale, Node* offset,
Node* init, Node* limit, jint stride,
- Node* range, bool upper, bool &overflow) {
+ Node* range, bool upper, bool &overflow, bool negate) {
jint con_limit = (limit != NULL && limit->is_Con()) ? limit->get_int() : 0;
jint con_init = init->is_Con() ? init->get_int() : 0;
jint con_offset = offset->is_Con() ? offset->get_int() : 0;
@@ -809,7 +942,7 @@ BoolNode* PhaseIdealLoop::rc_predicate(IdealLoopTree *loop, Node* ctrl,
cmp = new CmpUNode(max_idx_expr, range);
}
register_new_node(cmp, ctrl);
- BoolNode* bol = new BoolNode(cmp, BoolTest::lt);
+ BoolNode* bol = new BoolNode(cmp, negate ? BoolTest::ge : BoolTest::lt);
register_new_node(bol, ctrl);
if (TraceLoopPredicate) {
@@ -1141,7 +1274,7 @@ bool PhaseIdealLoop::loop_predication_impl_helper(IdealLoopTree *loop, ProjNode*
loop->dump_head();
}
#endif
- } else if (cl != NULL && loop->is_range_check_if(iff, this, invar)) {
+ } else if (cl != NULL && loop->is_range_check_if(iff, this, invar DEBUG_ONLY(COMMA predicate_proj))) {
// Range check for counted loops
const Node* cmp = bol->in(1)->as_Cmp();
Node* idx = cmp->in(1);
@@ -1176,36 +1309,26 @@ bool PhaseIdealLoop::loop_predication_impl_helper(IdealLoopTree *loop, ProjNode*
}
// If predicate expressions may overflow in the integer range, longs are used.
bool overflow = false;
+ bool negate = (proj->_con != predicate_proj->_con);
// Test the lower bound
- BoolNode* lower_bound_bol = rc_predicate(loop, ctrl, scale, offset, init, limit, stride, rng, false, overflow);
- // Negate test if necessary
- bool negated = false;
- if (proj->_con != predicate_proj->_con) {
- lower_bound_bol = new BoolNode(lower_bound_bol->in(1), lower_bound_bol->_test.negate());
- register_new_node(lower_bound_bol, ctrl);
- negated = true;
- }
+ BoolNode* lower_bound_bol = rc_predicate(loop, ctrl, scale, offset, init, limit, stride, rng, false, overflow, negate);
+
ProjNode* lower_bound_proj = create_new_if_for_predicate(predicate_proj, NULL, reason, overflow ? Op_If : iff->Opcode());
IfNode* lower_bound_iff = lower_bound_proj->in(0)->as_If();
_igvn.hash_delete(lower_bound_iff);
lower_bound_iff->set_req(1, lower_bound_bol);
- if (TraceLoopPredicate) tty->print_cr("lower bound check if: %s %d ", negated ? " negated" : "", lower_bound_iff->_idx);
+ if (TraceLoopPredicate) tty->print_cr("lower bound check if: %s %d ", negate ? " negated" : "", lower_bound_iff->_idx);
// Test the upper bound
- BoolNode* upper_bound_bol = rc_predicate(loop, lower_bound_proj, scale, offset, init, limit, stride, rng, true, overflow);
- negated = false;
- if (proj->_con != predicate_proj->_con) {
- upper_bound_bol = new BoolNode(upper_bound_bol->in(1), upper_bound_bol->_test.negate());
- register_new_node(upper_bound_bol, ctrl);
- negated = true;
- }
+ BoolNode* upper_bound_bol = rc_predicate(loop, lower_bound_proj, scale, offset, init, limit, stride, rng, true, overflow, negate);
+
ProjNode* upper_bound_proj = create_new_if_for_predicate(predicate_proj, NULL, reason, overflow ? Op_If : iff->Opcode());
assert(upper_bound_proj->in(0)->as_If()->in(0) == lower_bound_proj, "should dominate");
IfNode* upper_bound_iff = upper_bound_proj->in(0)->as_If();
_igvn.hash_delete(upper_bound_iff);
upper_bound_iff->set_req(1, upper_bound_bol);
- if (TraceLoopPredicate) tty->print_cr("upper bound check if: %s %d ", negated ? " negated" : "", lower_bound_iff->_idx);
+ if (TraceLoopPredicate) tty->print_cr("upper bound check if: %s %d ", negate ? " negated" : "", lower_bound_iff->_idx);
// Fall through into rest of the clean up code which will move
// any dependent nodes onto the upper bound test.
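The new negate parameter folds the former two-step construction (emit BoolTest::lt, then wrap it in a negated BoolNode when the projection constants differ) into a single node built with BoolTest::ge. A minimal standalone C++ sketch (not HotSpot's Node API) checking that the two constructions agree for unsigned compares:

#include <cassert>
#include <cstdint>

static bool old_scheme(uint32_t idx, uint32_t range, bool negate) {
  bool lt = idx < range;     // CmpU + BoolTest::lt
  return negate ? !lt : lt;  // extra negated BoolNode afterwards
}

static bool new_scheme(uint32_t idx, uint32_t range, bool negate) {
  return negate ? idx >= range : idx < range;  // BoolTest::ge or BoolTest::lt
}

int main() {
  for (uint32_t idx = 0; idx < 64; idx++) {
    for (uint32_t range = 0; range < 64; range++) {
      assert(old_scheme(idx, range, false) == new_scheme(idx, range, false));
      assert(old_scheme(idx, range, true)  == new_scheme(idx, range, true));
    }
  }
  return 0;
}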
@@ -1251,10 +1374,10 @@ ProjNode* PhaseIdealLoop::insert_initial_skeleton_predicate(IfNode* iff, IdealLo
Node* rng, bool &overflow,
Deoptimization::DeoptReason reason) {
// First predicate for the initial value on first loop iteration
- assert(proj->_con && predicate_proj->_con, "not a range check?");
Node* opaque_init = new OpaqueLoopInitNode(C, init);
register_new_node(opaque_init, upper_bound_proj);
- BoolNode* bol = rc_predicate(loop, upper_bound_proj, scale, offset, opaque_init, limit, stride, rng, (stride > 0) != (scale > 0), overflow);
+ bool negate = (proj->_con != predicate_proj->_con);
+ BoolNode* bol = rc_predicate(loop, upper_bound_proj, scale, offset, opaque_init, limit, stride, rng, (stride > 0) != (scale > 0), overflow, negate);
Node* opaque_bol = new Opaque4Node(C, bol, _igvn.intcon(1)); // This will go away once loop opts are over
C->add_skeleton_predicate_opaq(opaque_bol);
register_new_node(opaque_bol, upper_bound_proj);
@@ -1272,7 +1395,7 @@ ProjNode* PhaseIdealLoop::insert_initial_skeleton_predicate(IfNode* iff, IdealLo
register_new_node(max_value, new_proj);
max_value = new AddINode(opaque_init, max_value);
register_new_node(max_value, new_proj);
- bol = rc_predicate(loop, new_proj, scale, offset, max_value, limit, stride, rng, (stride > 0) != (scale > 0), overflow);
+ bol = rc_predicate(loop, new_proj, scale, offset, max_value, limit, stride, rng, (stride > 0) != (scale > 0), overflow, negate);
opaque_bol = new Opaque4Node(C, bol, _igvn.intcon(1));
C->add_skeleton_predicate_opaq(opaque_bol);
register_new_node(opaque_bol, new_proj);
diff --git a/src/hotspot/share/opto/loopTransform.cpp b/src/hotspot/share/opto/loopTransform.cpp
index 1e6feb33bec1a..a9d28dfbbf535 100644
--- a/src/hotspot/share/opto/loopTransform.cpp
+++ b/src/hotspot/share/opto/loopTransform.cpp
@@ -899,7 +899,7 @@ bool IdealLoopTree::policy_unroll(PhaseIdealLoop *phase) {
// Progress defined as current size less than 20% larger than previous size.
if (UseSuperWord && cl->node_count_before_unroll() > 0 &&
future_unroll_cnt > LoopUnrollMin &&
- (future_unroll_cnt - 1) * (100 / LoopPercentProfileLimit) > cl->profile_trip_cnt() &&
+ (future_unroll_cnt - 1) * (100.0 / LoopPercentProfileLimit) > cl->profile_trip_cnt() &&
1.2 * cl->node_count_before_unroll() < (double)_body.size()) {
return false;
}
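The change from 100 to 100.0 fixes a silent integer-division truncation: with integral operands, 100 / LoopPercentProfileLimit rounds toward zero before the multiply, so the guard compares against a much smaller bound than intended. A tiny standalone C++ illustration (60 is only an illustrative value for the flag, not its default):

#include <cstdio>

int main() {
  int limit = 60;
  // Integer division truncates first: 100 / 60 == 1.
  printf("int:    (8 - 1) * (100 / %d)   = %d\n", limit, (8 - 1) * (100 / limit));      // 7
  // Floating-point division keeps the intended ratio.
  printf("double: (8 - 1) * (100.0 / %d) = %.2f\n", limit, (8 - 1) * (100.0 / limit));  // 11.67
  return 0;
}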
@@ -1264,10 +1264,12 @@ void PhaseIdealLoop::copy_skeleton_predicates_to_main_loop_helper(Node* predicat
// Clone the skeleton predicate twice and initialize one with the initial
// value of the loop induction variable. Leave the other predicate
// to be initialized when increasing the stride during loop unrolling.
- prev_proj = clone_skeleton_predicate_for_main_loop(iff, opaque_init, NULL, predicate, uncommon_proj, current_proj, outer_loop, prev_proj);
+ prev_proj = clone_skeleton_predicate_for_main_or_post_loop(iff, opaque_init, NULL, predicate, uncommon_proj,
+ current_proj, outer_loop, prev_proj);
assert(skeleton_predicate_has_opaque(prev_proj->in(0)->as_If()), "");
- prev_proj = clone_skeleton_predicate_for_main_loop(iff, init, stride, predicate, uncommon_proj, current_proj, outer_loop, prev_proj);
+ prev_proj = clone_skeleton_predicate_for_main_or_post_loop(iff, init, stride, predicate, uncommon_proj,
+ current_proj, outer_loop, prev_proj);
assert(!skeleton_predicate_has_opaque(prev_proj->in(0)->as_If()), "");
// Rewire any control inputs from the cloned skeleton predicates down to the main and post loop for data nodes that are part of the
@@ -1344,8 +1346,7 @@ bool PhaseIdealLoop::skeleton_predicate_has_opaque(IfNode* iff) {
// Clone the skeleton predicate bool for a main or unswitched loop:
// Main loop: Set new_init and new_stride nodes as new inputs.
// Unswitched loop: new_init and new_stride are both NULL. Clone OpaqueLoopInit and OpaqueLoopStride instead.
-Node* PhaseIdealLoop::clone_skeleton_predicate_bool(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj,
- Node* control, IdealLoopTree* outer_loop) {
+Node* PhaseIdealLoop::clone_skeleton_predicate_bool(Node* iff, Node* new_init, Node* new_stride, Node* control) {
Node_Stack to_clone(2);
to_clone.push(iff->in(1), 1);
uint current = C->unique();
@@ -1421,9 +1422,9 @@ Node* PhaseIdealLoop::clone_skeleton_predicate_bool(Node* iff, Node* new_init, N
// Clone a skeleton predicate for the main loop. new_init and new_stride are set as new inputs. Since the predicates cannot fail at runtime,
// Halt nodes are inserted instead of uncommon traps.
-Node* PhaseIdealLoop::clone_skeleton_predicate_for_main_loop(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj,
- Node* control, IdealLoopTree* outer_loop, Node* input_proj) {
- Node* result = clone_skeleton_predicate_bool(iff, new_init, new_stride, predicate, uncommon_proj, control, outer_loop);
+Node* PhaseIdealLoop::clone_skeleton_predicate_for_main_or_post_loop(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj,
+ Node* control, IdealLoopTree* outer_loop, Node* input_proj) {
+ Node* result = clone_skeleton_predicate_bool(iff, new_init, new_stride, control);
Node* proj = predicate->clone();
Node* other_proj = uncommon_proj->clone();
Node* new_iff = iff->clone();
@@ -1437,8 +1438,8 @@ Node* PhaseIdealLoop::clone_skeleton_predicate_for_main_loop(Node* iff, Node* ne
C->root()->add_req(halt);
new_iff->set_req(0, input_proj);
- register_control(new_iff, outer_loop->_parent, input_proj);
- register_control(proj, outer_loop->_parent, new_iff);
+ register_control(new_iff, outer_loop == _ltree_root ? _ltree_root : outer_loop->_parent, input_proj);
+ register_control(proj, outer_loop == _ltree_root ? _ltree_root : outer_loop->_parent, new_iff);
register_control(other_proj, _ltree_root, new_iff);
register_control(halt, _ltree_root, other_proj);
return proj;
@@ -1521,7 +1522,8 @@ void PhaseIdealLoop::insert_pre_post_loops(IdealLoopTree *loop, Node_List &old_n
// Add the post loop
const uint idx_before_pre_post = Compile::current()->unique();
CountedLoopNode *post_head = NULL;
- Node *main_exit = insert_post_loop(loop, old_new, main_head, main_end, incr, limit, post_head);
+ Node* post_incr = incr;
+ Node* main_exit = insert_post_loop(loop, old_new, main_head, main_end, post_incr, limit, post_head);
const uint idx_after_post_before_pre = Compile::current()->unique();
//------------------------------
@@ -1620,6 +1622,7 @@ void PhaseIdealLoop::insert_pre_post_loops(IdealLoopTree *loop, Node_List &old_n
assert(post_head->in(1)->is_IfProj(), "must be zero-trip guard If node projection of the post loop");
copy_skeleton_predicates_to_main_loop(pre_head, castii, stride, outer_loop, outer_main_head, dd_main_head,
idx_before_pre_post, idx_after_post_before_pre, min_taken, post_head->in(1), old_new);
+ copy_skeleton_predicates_to_post_loop(outer_main_head, post_head, post_incr, stride);
// Step B4: Shorten the pre-loop to run only 1 iteration (for now).
// RCE and alignment may change this later.
@@ -1742,6 +1745,7 @@ void PhaseIdealLoop::insert_vector_post_loop(IdealLoopTree *loop, Node_List &old
// In this case we throw away the result as we are not using it to connect anything else.
CountedLoopNode *post_head = NULL;
insert_post_loop(loop, old_new, main_head, main_end, incr, limit, post_head);
+ copy_skeleton_predicates_to_post_loop(main_head->skip_strip_mined(), post_head, incr, main_head->stride());
// It's difficult to be precise about the trip-counts
// for post loops. They are usually very short,
@@ -1788,6 +1792,7 @@ void PhaseIdealLoop::insert_scalar_rced_post_loop(IdealLoopTree *loop, Node_List
// In this case we throw away the result as we are not using it to connect anything else.
CountedLoopNode *post_head = NULL;
insert_post_loop(loop, old_new, main_head, main_end, incr, limit, post_head);
+ copy_skeleton_predicates_to_post_loop(main_head->skip_strip_mined(), post_head, incr, main_head->stride());
// It's difficult to be precise about the trip-counts
// for post loops. They are usually very short,
@@ -1804,9 +1809,9 @@ void PhaseIdealLoop::insert_scalar_rced_post_loop(IdealLoopTree *loop, Node_List
//------------------------------insert_post_loop-------------------------------
// Insert post loops. Add a post loop to the given loop passed.
-Node *PhaseIdealLoop::insert_post_loop(IdealLoopTree *loop, Node_List &old_new,
- CountedLoopNode *main_head, CountedLoopEndNode *main_end,
- Node *incr, Node *limit, CountedLoopNode *&post_head) {
+Node *PhaseIdealLoop::insert_post_loop(IdealLoopTree* loop, Node_List& old_new,
+ CountedLoopNode* main_head, CountedLoopEndNode* main_end,
+ Node*& incr, Node* limit, CountedLoopNode*& post_head) {
IfNode* outer_main_end = main_end;
IdealLoopTree* outer_loop = loop;
if (main_head->is_strip_mined()) {
@@ -1890,8 +1895,8 @@ Node *PhaseIdealLoop::insert_post_loop(IdealLoopTree *loop, Node_List &old_new,
}
// CastII for the new post loop:
- Node* castii = cast_incr_before_loop(zer_opaq->in(1), zer_taken, post_head);
- assert(castii != NULL, "no castII inserted");
+ incr = cast_incr_before_loop(zer_opaq->in(1), zer_taken, post_head);
+ assert(incr != NULL, "no castII inserted");
return new_main_exit;
}
@@ -1933,7 +1938,8 @@ void PhaseIdealLoop::update_main_loop_skeleton_predicates(Node* ctrl, CountedLoo
_igvn.replace_input_of(iff, 1, iff->in(1)->in(2));
} else {
// Add back predicates updated for the new stride.
- prev_proj = clone_skeleton_predicate_for_main_loop(iff, init, max_value, entry, proj, ctrl, outer_loop, prev_proj);
+ prev_proj = clone_skeleton_predicate_for_main_or_post_loop(iff, init, max_value, entry, proj, ctrl, outer_loop,
+ prev_proj);
assert(!skeleton_predicate_has_opaque(prev_proj->in(0)->as_If()), "unexpected");
}
}
@@ -1945,6 +1951,34 @@ void PhaseIdealLoop::update_main_loop_skeleton_predicates(Node* ctrl, CountedLoo
}
}
+void PhaseIdealLoop::copy_skeleton_predicates_to_post_loop(LoopNode* main_loop_head, CountedLoopNode* post_loop_head, Node* init, Node* stride) {
+ // Go over the skeleton predicates of the main loop and make a copy for the post loop with its initial iv value and
+ // stride as inputs.
+ Node* post_loop_entry = post_loop_head->in(LoopNode::EntryControl);
+ Node* main_loop_entry = main_loop_head->in(LoopNode::EntryControl);
+ IdealLoopTree* post_loop = get_loop(post_loop_head);
+
+ Node* ctrl = main_loop_entry;
+ Node* prev_proj = post_loop_entry;
+ while (ctrl != NULL && ctrl->is_Proj() && ctrl->in(0)->is_If()) {
+ IfNode* iff = ctrl->in(0)->as_If();
+ ProjNode* proj = iff->proj_out(1 - ctrl->as_Proj()->_con);
+ if (proj->unique_ctrl_out()->Opcode() != Op_Halt) {
+ break;
+ }
+ if (iff->in(1)->Opcode() == Op_Opaque4 && skeleton_predicate_has_opaque(iff)) {
+ prev_proj = clone_skeleton_predicate_for_main_or_post_loop(iff, init, stride, ctrl, proj, post_loop_entry,
+ post_loop, prev_proj);
+ assert(!skeleton_predicate_has_opaque(prev_proj->in(0)->as_If()), "unexpected");
+ }
+ ctrl = ctrl->in(0)->in(0);
+ }
+ if (prev_proj != post_loop_entry) {
+ _igvn.replace_input_of(post_loop_head, LoopNode::EntryControl, prev_proj);
+ set_idom(post_loop_head, prev_proj, dom_depth(post_loop_head));
+ }
+}
+
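The walk in copy_skeleton_predicates_to_post_loop() follows the usual predicate-chain shape above a loop head: each skeleton predicate is an If whose other projection ends in a Halt node and whose condition is an Opaque4. A minimal standalone model of that chain walk, using hypothetical types rather than HotSpot's Node API:

#include <cstddef>

struct MiniIf {
  bool other_proj_halts;  // other projection leads to a Halt node
  bool has_opaque4;       // condition is an Opaque4 (skeleton predicate)
  MiniIf* up;             // control input: the next test above
};

static int count_skeleton_predicates(MiniIf* ctrl) {
  int found = 0;
  while (ctrl != nullptr) {
    if (!ctrl->other_proj_halts) break;  // left the predicate region
    if (ctrl->has_opaque4) found++;      // would be cloned to the post loop
    ctrl = ctrl->up;                     // walk up to the next test
  }
  return found;
}

int main() {
  MiniIf top{false, false, nullptr};  // regular control flow above the chain
  MiniIf p2{true, true, &top};        // skeleton predicate
  MiniIf p1{true, true, &p2};         // skeleton predicate
  return count_skeleton_predicates(&p1) == 2 ? 0 : 1;
}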
//------------------------------do_unroll--------------------------------------
// Unroll the loop body one step - make each trip do 2 iterations.
void PhaseIdealLoop::do_unroll(IdealLoopTree *loop, Node_List &old_new, bool adjust_min_trip) {
@@ -2555,7 +2589,7 @@ Node* PhaseIdealLoop::add_range_check_predicate(IdealLoopTree* loop, CountedLoop
Node* predicate_proj, int scale_con, Node* offset,
Node* limit, jint stride_con, Node* value) {
bool overflow = false;
- BoolNode* bol = rc_predicate(loop, predicate_proj, scale_con, offset, value, NULL, stride_con, limit, (stride_con > 0) != (scale_con > 0), overflow);
+ BoolNode* bol = rc_predicate(loop, predicate_proj, scale_con, offset, value, NULL, stride_con, limit, (stride_con > 0) != (scale_con > 0), overflow, false);
Node* opaque_bol = new Opaque4Node(C, bol, _igvn.intcon(1));
register_new_node(opaque_bol, predicate_proj);
IfNode* new_iff = NULL;
@@ -3362,6 +3396,7 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
phase->do_peeling(this, old_new);
} else if (policy_unswitching(phase)) {
phase->do_unswitching(this, old_new);
+ return false; // need to recalculate idom data
}
return true;
}
@@ -3380,7 +3415,7 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
if (cl->is_normal_loop()) {
if (policy_unswitching(phase)) {
phase->do_unswitching(this, old_new);
- return true;
+ return false; // need to recalculate idom data
}
if (policy_maximally_unroll(phase)) {
// Here we did some unrolling and peeling. Eventually we will
@@ -3493,6 +3528,7 @@ bool IdealLoopTree::iteration_split(PhaseIdealLoop* phase, Node_List &old_new) {
AutoNodeBudget node_budget(phase);
if (policy_unswitching(phase)) {
phase->do_unswitching(this, old_new);
+ return false; // need to recalculate idom data
}
}
}
diff --git a/src/hotspot/share/opto/loopnode.cpp b/src/hotspot/share/opto/loopnode.cpp
index 5d9e682f39158..54f92b238410e 100644
--- a/src/hotspot/share/opto/loopnode.cpp
+++ b/src/hotspot/share/opto/loopnode.cpp
@@ -2045,7 +2045,7 @@ Node* CountedLoopNode::match_incr_with_optional_truncation(Node* expr, Node** tr
}
LoopNode* CountedLoopNode::skip_strip_mined(int expect_skeleton) {
- if (is_strip_mined() && is_valid_counted_loop(T_INT)) {
+ if (is_strip_mined() && in(EntryControl) != NULL && in(EntryControl)->is_OuterStripMinedLoop()) {
verify_strip_mined(expect_skeleton);
return in(EntryControl)->as_Loop();
}
@@ -2150,12 +2150,14 @@ Node* CountedLoopNode::skip_predicates_from_entry(Node* ctrl) {
}
Node* CountedLoopNode::skip_predicates() {
+ Node* ctrl = in(LoopNode::EntryControl);
if (is_main_loop()) {
- Node* ctrl = skip_strip_mined()->in(LoopNode::EntryControl);
-
+ ctrl = skip_strip_mined()->in(LoopNode::EntryControl);
+ }
+ if (is_main_loop() || is_post_loop()) {
return skip_predicates_from_entry(ctrl);
}
- return in(LoopNode::EntryControl);
+ return ctrl;
}
diff --git a/src/hotspot/share/opto/loopnode.hpp b/src/hotspot/share/opto/loopnode.hpp
index d2a9c4d6aa4af..863be3b67ba5c 100644
--- a/src/hotspot/share/opto/loopnode.hpp
+++ b/src/hotspot/share/opto/loopnode.hpp
@@ -737,7 +737,7 @@ class IdealLoopTree : public ResourceObj {
bool policy_range_check( PhaseIdealLoop *phase ) const;
// Return TRUE if "iff" is a range check.
- bool is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invariance& invar) const;
+ bool is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invariance& invar DEBUG_ONLY(COMMA ProjNode *predicate_proj)) const;
// Estimate the number of nodes required when cloning a loop (body).
uint est_loop_clone_sz(uint factor) const;
@@ -915,13 +915,13 @@ class PhaseIdealLoop : public PhaseTransform {
void copy_skeleton_predicates_to_main_loop(CountedLoopNode* pre_head, Node* init, Node* stride, IdealLoopTree* outer_loop, LoopNode* outer_main_head,
uint dd_main_head, const uint idx_before_pre_post, const uint idx_after_post_before_pre,
Node* zero_trip_guard_proj_main, Node* zero_trip_guard_proj_post, const Node_List &old_new);
- Node* clone_skeleton_predicate_for_main_loop(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj, Node* control,
- IdealLoopTree* outer_loop, Node* input_proj);
- Node* clone_skeleton_predicate_bool(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj, Node* control,
- IdealLoopTree* outer_loop);
+ Node* clone_skeleton_predicate_for_main_or_post_loop(Node* iff, Node* new_init, Node* new_stride, Node* predicate, Node* uncommon_proj, Node* control,
+ IdealLoopTree* outer_loop, Node* input_proj);
+ Node* clone_skeleton_predicate_bool(Node* iff, Node* new_init, Node* new_stride, Node* control);
static bool skeleton_predicate_has_opaque(IfNode* iff);
static void get_skeleton_predicates(Node* predicate, Unique_Node_List& list, bool get_opaque = false);
void update_main_loop_skeleton_predicates(Node* ctrl, CountedLoopNode* loop_head, Node* init, int stride_con);
+ void copy_skeleton_predicates_to_post_loop(LoopNode* main_loop_head, CountedLoopNode* post_loop_head, Node* init, Node* stride);
void insert_loop_limit_check(ProjNode* limit_check_proj, Node* cmp_limit, Node* bol);
#ifdef ASSERT
bool only_has_infinite_loops();
@@ -1244,9 +1244,9 @@ class PhaseIdealLoop : public PhaseTransform {
void insert_pre_post_loops( IdealLoopTree *loop, Node_List &old_new, bool peel_only );
// Add post loop after the given loop.
- Node *insert_post_loop(IdealLoopTree *loop, Node_List &old_new,
- CountedLoopNode *main_head, CountedLoopEndNode *main_end,
- Node *incr, Node *limit, CountedLoopNode *&post_head);
+ Node *insert_post_loop(IdealLoopTree* loop, Node_List& old_new,
+ CountedLoopNode* main_head, CountedLoopEndNode* main_end,
+ Node*& incr, Node* limit, CountedLoopNode*& post_head);
// Add an RCE'd post loop which we will multi-version adapt for run time test path usage
void insert_scalar_rced_post_loop( IdealLoopTree *loop, Node_List &old_new );
@@ -1275,9 +1275,20 @@ class PhaseIdealLoop : public PhaseTransform {
// Return true if exp is a scaled induction var plus (or minus) constant
bool is_scaled_iv_plus_offset(Node* exp, Node* iv, int* p_scale, Node** p_offset, int depth = 0);
+ // Enum to determine the action to be performed in create_new_if_for_predicate() when processing phis of UCT regions.
+ enum class UnswitchingAction {
+ None, // No special action.
+ FastLoopCloning, // Need to clone nodes for the fast loop.
+ SlowLoopRewiring // Need to rewire nodes for the slow loop.
+ };
+
// Create a new if above the uncommon_trap_if_pattern for the predicate to be promoted
ProjNode* create_new_if_for_predicate(ProjNode* cont_proj, Node* new_entry, Deoptimization::DeoptReason reason,
- int opcode, bool if_cont_is_true_proj = true);
+ int opcode, bool if_cont_is_true_proj = true, Node_List* old_new = NULL,
+ UnswitchingAction unswitching_action = UnswitchingAction::None);
+
+ // Clone data nodes for the fast loop while creating a new If with create_new_if_for_predicate.
+ Node* clone_data_nodes_for_fast_loop(Node* phi_input, ProjNode* uncommon_proj, Node* if_uct, Node_List* old_new);
void register_control(Node* n, IdealLoopTree *loop, Node* pred, bool update_body = true);
@@ -1292,7 +1303,8 @@ class PhaseIdealLoop : public PhaseTransform {
BoolNode* rc_predicate(IdealLoopTree *loop, Node* ctrl,
int scale, Node* offset,
Node* init, Node* limit, jint stride,
- Node* range, bool upper, bool &overflow);
+ Node* range, bool upper, bool &overflow,
+ bool negate);
// Implementation of the loop predication to promote checks outside the loop
bool loop_predication_impl(IdealLoopTree *loop);
@@ -1563,13 +1575,15 @@ class PhaseIdealLoop : public PhaseTransform {
}
// Clone loop predicates to slow and fast loop when unswitching a loop
- void clone_predicates_to_unswitched_loop(IdealLoopTree* loop, const Node_List& old_new, ProjNode*& iffast_pred, ProjNode*& ifslow_pred);
- ProjNode* clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry, Deoptimization::DeoptReason reason);
+ void clone_predicates_to_unswitched_loop(IdealLoopTree* loop, Node_List& old_new, ProjNode*& iffast_pred, ProjNode*& ifslow_pred);
+ ProjNode* clone_predicate_to_unswitched_loop(ProjNode* predicate_proj, Node* new_entry, Deoptimization::DeoptReason reason,
+ Node_List* old_new = NULL);
void clone_skeleton_predicates_to_unswitched_loop(IdealLoopTree* loop, const Node_List& old_new, Deoptimization::DeoptReason reason,
ProjNode* old_predicate_proj, ProjNode* iffast_pred, ProjNode* ifslow_pred);
- ProjNode* clone_skeleton_predicate_for_unswitched_loops(Node* iff, ProjNode* predicate, Node* uncommon_proj, Deoptimization::DeoptReason reason,
- ProjNode* output_proj, IdealLoopTree* loop);
- void check_created_predicate_for_unswitching(const Node* new_entry) const PRODUCT_RETURN;
+ ProjNode* clone_skeleton_predicate_for_unswitched_loops(Node* iff, ProjNode* predicate,
+ Deoptimization::DeoptReason reason,
+ ProjNode* output_proj);
+ static void check_created_predicate_for_unswitching(const Node* new_entry) PRODUCT_RETURN;
bool _created_loop_node;
#ifdef ASSERT
@@ -1620,6 +1634,8 @@ class PhaseIdealLoop : public PhaseTransform {
Node* compute_early_ctrl(Node* n, Node* n_ctrl);
void try_sink_out_of_loop(Node* n);
+
+ bool safe_for_if_replacement(const Node* dom) const;
};
diff --git a/src/hotspot/share/opto/loopopts.cpp b/src/hotspot/share/opto/loopopts.cpp
index 371079c3b0696..f523c1f824800 100644
--- a/src/hotspot/share/opto/loopopts.cpp
+++ b/src/hotspot/share/opto/loopopts.cpp
@@ -233,8 +233,13 @@ void PhaseIdealLoop::dominated_by( Node *prevdom, Node *iff, bool flip, bool exc
if (VerifyLoopOptimizations && PrintOpto) { tty->print_cr("dominating test"); }
// prevdom is the dominating projection of the dominating test.
- assert( iff->is_If(), "" );
- assert(iff->Opcode() == Op_If || iff->Opcode() == Op_CountedLoopEnd || iff->Opcode() == Op_RangeCheck, "Check this code when new subtype is added");
+ assert(iff->is_If(), "must be");
+ assert(iff->Opcode() == Op_If ||
+ iff->Opcode() == Op_CountedLoopEnd ||
+ iff->Opcode() == Op_LongCountedLoopEnd ||
+ iff->Opcode() == Op_RangeCheck,
+ "Check this code when new subtype is added");
+
int pop = prevdom->Opcode();
assert( pop == Op_IfFalse || pop == Op_IfTrue, "" );
if (flip) {
@@ -1169,6 +1174,16 @@ bool PhaseIdealLoop::identical_backtoback_ifs(Node *n) {
if (!n->in(0)->is_Region()) {
return false;
}
+
+ IfNode* n_if = n->as_If();
+ if (n_if->proj_out(0)->outcnt() > 1 || n_if->proj_out(1)->outcnt() > 1) {
+ // Removing the dominated If node by using the split-if optimization does not work if there are data dependencies.
+ // Some data nodes depend on the projections of this If node and should not be separated from their check (e.g. null
+ // checks, division-by-zero checks). Bail out for now until data dependencies are correctly handled when
+ // optimizing back-to-back ifs.
+ return false;
+ }
+
Node* region = n->in(0);
Node* dom = idom(region);
if (!dom->is_If() || dom->in(1) != n->in(1)) {
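A source-level analog of why the new bail-out is needed: when a projection of one of the back-to-back Ifs has data uses, those nodes are pinned to their check and must not be detached from it by the split-if transform. A hypothetical C++ example (plain source, not IR):

#include <cassert>

struct P { int x; int y; };

static int sum(P* p) {
  int a = 0, b = 0;
  if (p != nullptr) a = p->x;  // load pinned below this null check
  if (p != nullptr) b = p->y;  // back-to-back if with the identical condition
  return a + b;
}

int main() {
  P p{1, 2};
  assert(sum(&p) == 3);
  assert(sum(nullptr) == 0);
  return 0;
}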
@@ -1387,7 +1402,8 @@ void PhaseIdealLoop::split_if_with_blocks_post(Node *n) {
Node *prevdom = n;
Node *dom = idom(prevdom);
while (dom != cutoff) {
- if (dom->req() > 1 && dom->in(1) == bol && prevdom->in(0) == dom) {
+ if (dom->req() > 1 && dom->in(1) == bol && prevdom->in(0) == dom &&
+ safe_for_if_replacement(dom)) {
// It's invalid to move control dependent data nodes in the inner
// strip-mined loop, because:
// 1) break validation of LoopNode::verify_strip_mined()
@@ -1425,20 +1441,54 @@ void PhaseIdealLoop::split_if_with_blocks_post(Node *n) {
}
}
+bool PhaseIdealLoop::safe_for_if_replacement(const Node* dom) const {
+ if (!dom->is_CountedLoopEnd()) {
+ return true;
+ }
+ CountedLoopEndNode* le = dom->as_CountedLoopEnd();
+ CountedLoopNode* cl = le->loopnode();
+ if (cl == NULL) {
+ return true;
+ }
+ if (!cl->is_main_loop()) {
+ return true;
+ }
+ if (cl->is_canonical_loop_entry() == NULL) {
+ return true;
+ }
+ // Further unrolling is possible, so the loop exit condition might change.
+ return false;
+}
+
// See if a shared loop-varying computation has no loop-varying uses.
// Happens if something is only used for JVM state in uncommon trap exits,
// like various versions of induction variable+offset. Clone the
// computation per usage to allow it to sink out of the loop.
void PhaseIdealLoop::try_sink_out_of_loop(Node* n) {
+ bool is_raw_to_oop_cast = n->is_ConstraintCast() &&
+ n->in(1)->bottom_type()->isa_rawptr() &&
+ !n->bottom_type()->isa_rawptr();
if (has_ctrl(n) &&
!n->is_Phi() &&
!n->is_Bool() &&
!n->is_Proj() &&
!n->is_MergeMem() &&
!n->is_CMove() &&
- n->Opcode() != Op_Opaque4) {
+ !is_raw_to_oop_cast && // don't extend live ranges of raw oops
+ n->Opcode() != Op_Opaque4 &&
+ !n->is_Type()) {
Node *n_ctrl = get_ctrl(n);
IdealLoopTree *n_loop = get_loop(n_ctrl);
+
+ if (n->in(0) != NULL) {
+ IdealLoopTree* loop_ctrl = get_loop(n->in(0));
+ if (n_loop != loop_ctrl && n_loop->is_member(loop_ctrl)) {
+ // n has a control input inside a loop but get_ctrl() is member of an outer loop. This could happen, for example,
+ // for Div nodes inside a loop (control input inside loop) without a use except for an UCT (outside the loop).
+ // Rewire the control of n to just outside the loop, regardless of whether its input(s) are later sunk or not.
+ _igvn.replace_input_of(n, 0, place_outside_loop(n_ctrl, loop_ctrl));
+ }
+ }
if (n_loop != _ltree_root && n->outcnt() > 1) {
// Compute early control: needed for anti-dependence analysis. It's also possible that as a result of
// previous transformations in this loop opts round, the node can be hoisted now: early control will tell us.
diff --git a/src/hotspot/share/opto/macro.cpp b/src/hotspot/share/opto/macro.cpp
index b97564c936e52..d502d9a96d2b0 100644
--- a/src/hotspot/share/opto/macro.cpp
+++ b/src/hotspot/share/opto/macro.cpp
@@ -2562,6 +2562,7 @@ void PhaseMacroExpand::eliminate_macro_nodes() {
assert(n->Opcode() == Op_LoopLimit ||
n->Opcode() == Op_Opaque2 ||
n->Opcode() == Op_Opaque3 ||
+ n->Opcode() == Op_Opaque4 ||
BarrierSet::barrier_set()->barrier_set_c2()->is_gc_barrier_node(n),
"unknown node type in macro list");
}
@@ -2623,6 +2624,19 @@ bool PhaseMacroExpand::expand_macro_nodes() {
_igvn.replace_node(n, repl);
success = true;
#endif
+ } else if (n->Opcode() == Op_Opaque4) {
+ // With Opaque4 nodes, the expectation is that the test of input 1
+ // is always equal to the constant value of input 2. So we can
+ // remove the Opaque4 and replace it by input 2. In debug builds,
+ // leave the non-constant test in instead, to sanity check that it
+ // never fails (if it does, the subgraph was constructed such that, at
+ // runtime, a Halt node is executed).
+#ifdef ASSERT
+ _igvn.replace_node(n, n->in(1));
+#else
+ _igvn.replace_node(n, n->in(2));
+#endif
+ success = true;
} else if (n->Opcode() == Op_OuterStripMinedLoop) {
n->as_OuterStripMinedLoop()->adjust_strip_mined_loop(&_igvn);
C->remove_macro_node(n);
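Moving the Opaque4 removal from Identity() into macro expansion keeps the node alive until all loop opts are over. The replacement rule itself is unchanged and can be modeled by a small standalone sketch (plain C++, not the Node API; ASSERT stands in for a debug build):

#include <cassert>

// in(1) is the real test, in(2) the constant it is expected to always equal.
struct MiniOpaque4 {
  bool real_test;       // in(1)
  bool expected_const;  // in(2)
};

static bool expand(const MiniOpaque4& n) {
#ifdef ASSERT
  return n.real_test;       // debug: keep the test so a failure reaches Halt
#else
  return n.expected_const;  // product: constant-fold the test away
#endif
}

int main() {
  MiniOpaque4 n{true, true};
  assert(expand(n));
  return 0;
}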
diff --git a/src/hotspot/share/opto/macro.hpp b/src/hotspot/share/opto/macro.hpp
index 163952e69c36a..b65b97fec8775 100644
--- a/src/hotspot/share/opto/macro.hpp
+++ b/src/hotspot/share/opto/macro.hpp
@@ -186,7 +186,6 @@ class PhaseMacroExpand : public Phase {
int replace_input(Node *use, Node *oldref, Node *newref);
void migrate_outs(Node *old, Node *target);
- void copy_call_debug_info(CallNode *oldcall, CallNode * newcall);
Node* opt_bits_test(Node* ctrl, Node* region, int edge, Node* word, int mask, int bits, bool return_fast_path = false);
void copy_predefined_input_for_runtime_call(Node * ctrl, CallNode* oldcall, CallNode* call);
CallNode* make_slow_call(CallNode *oldcall, const TypeFunc* slow_call_type, address slow_call,
diff --git a/src/hotspot/share/opto/macroArrayCopy.cpp b/src/hotspot/share/opto/macroArrayCopy.cpp
index 4915230597006..68f98070e769f 100644
--- a/src/hotspot/share/opto/macroArrayCopy.cpp
+++ b/src/hotspot/share/opto/macroArrayCopy.cpp
@@ -831,7 +831,9 @@ Node* PhaseMacroExpand::generate_arraycopy(ArrayCopyNode *ac, AllocateArrayNode*
}
_igvn.replace_node(_callprojs.fallthrough_memproj, out_mem);
- _igvn.replace_node(_callprojs.fallthrough_ioproj, *io);
+ if (_callprojs.fallthrough_ioproj != NULL) {
+ _igvn.replace_node(_callprojs.fallthrough_ioproj, *io);
+ }
_igvn.replace_node(_callprojs.fallthrough_catchproj, *ctrl);
#ifdef ASSERT
@@ -1095,8 +1097,14 @@ MergeMemNode* PhaseMacroExpand::generate_slow_arraycopy(ArrayCopyNode *ac,
}
transform_later(out_mem);
- *io = _callprojs.fallthrough_ioproj->clone();
- transform_later(*io);
+ // When src is negative and the arraycopy is before an infinite loop, _callprojs.fallthrough_ioproj
+ // could be NULL. Skip the clone and propagate the NULL fallthrough_ioproj instead.
+ if (_callprojs.fallthrough_ioproj != NULL) {
+ *io = _callprojs.fallthrough_ioproj->clone();
+ transform_later(*io);
+ } else {
+ *io = NULL;
+ }
return out_mem;
}
@@ -1333,7 +1341,9 @@ void PhaseMacroExpand::expand_arraycopy_node(ArrayCopyNode *ac) {
}
_igvn.replace_node(_callprojs.fallthrough_memproj, merge_mem);
- _igvn.replace_node(_callprojs.fallthrough_ioproj, io);
+ if (_callprojs.fallthrough_ioproj != NULL) {
+ _igvn.replace_node(_callprojs.fallthrough_ioproj, io);
+ }
_igvn.replace_node(_callprojs.fallthrough_catchproj, ctrl);
return;
}
diff --git a/src/hotspot/share/opto/memnode.cpp b/src/hotspot/share/opto/memnode.cpp
index c348492bd0816..772a1842508ac 100644
--- a/src/hotspot/share/opto/memnode.cpp
+++ b/src/hotspot/share/opto/memnode.cpp
@@ -596,7 +596,7 @@ Node* LoadNode::find_previous_arraycopy(PhaseTransform* phase, Node* ld_alloc, N
ArrayCopyNode* MemNode::find_array_copy_clone(PhaseTransform* phase, Node* ld_alloc, Node* mem) const {
if (mem->is_Proj() && mem->in(0) != NULL && (mem->in(0)->Opcode() == Op_MemBarStoreStore ||
- mem->in(0)->Opcode() == Op_MemBarCPUOrder)) {
+ mem->in(0)->Opcode() == Op_MemBarCPUOrder)) {
if (ld_alloc != NULL) {
// Check if there is an array copy for a clone
Node* mb = mem->in(0);
@@ -1061,7 +1061,6 @@ Node* MemNode::can_see_stored_value(Node* st, PhaseTransform* phase) const {
// This is more general than load from boxing objects.
if (skip_through_membars(atp, tp, phase->C->eliminate_boxing())) {
uint alias_idx = atp->index();
- bool final = !atp->is_rewritable();
Node* result = NULL;
Node* current = st;
// Skip through chains of MemBarNodes checking the MergeMems for
@@ -1069,17 +1068,19 @@ Node* MemNode::can_see_stored_value(Node* st, PhaseTransform* phase) const {
// kind of node is encountered. Loads from final memory can skip
// through any kind of MemBar but normal loads shouldn't skip
// through MemBarAcquire since that could allow them to move out of
- // a synchronized region.
+ // a synchronized region. It is not safe to step over MemBarCPUOrder,
+ // because alias info above them may be inaccurate (e.g., due to
+ // mixed/mismatched unsafe accesses).
+ bool is_final_mem = !atp->is_rewritable();
while (current->is_Proj()) {
int opc = current->in(0)->Opcode();
- if ((final && (opc == Op_MemBarAcquire ||
- opc == Op_MemBarAcquireLock ||
- opc == Op_LoadFence)) ||
+ if ((is_final_mem && (opc == Op_MemBarAcquire ||
+ opc == Op_MemBarAcquireLock ||
+ opc == Op_LoadFence)) ||
opc == Op_MemBarRelease ||
opc == Op_StoreFence ||
opc == Op_MemBarReleaseLock ||
- opc == Op_MemBarStoreStore ||
- opc == Op_MemBarCPUOrder) {
+ opc == Op_MemBarStoreStore) {
Node* mem = current->in(0)->in(TypeFunc::Memory);
if (mem->is_MergeMem()) {
MergeMemNode* merge = mem->as_MergeMem();
diff --git a/src/hotspot/share/opto/mulnode.cpp b/src/hotspot/share/opto/mulnode.cpp
index 5d2b91a704a14..6e6026213062e 100644
--- a/src/hotspot/share/opto/mulnode.cpp
+++ b/src/hotspot/share/opto/mulnode.cpp
@@ -59,13 +59,11 @@ Node* MulNode::Identity(PhaseGVN* phase) {
// We also canonicalize the Node, moving constants to the right input,
// and flatten expressions (so that 1+x+2 becomes x+3).
Node *MulNode::Ideal(PhaseGVN *phase, bool can_reshape) {
- const Type *t1 = phase->type( in(1) );
- const Type *t2 = phase->type( in(2) );
- Node *progress = NULL; // Progress flag
+ Node* in1 = in(1);
+ Node* in2 = in(2);
+ Node* progress = NULL; // Progress flag
// convert "max(a,b) * min(a,b)" into "a*b".
- Node *in1 = in(1);
- Node *in2 = in(2);
if ((in(1)->Opcode() == max_opcode() && in(2)->Opcode() == min_opcode())
|| (in(1)->Opcode() == min_opcode() && in(2)->Opcode() == max_opcode())) {
Node *in11 = in(1)->in(1);
@@ -83,10 +81,15 @@ Node *MulNode::Ideal(PhaseGVN *phase, bool can_reshape) {
igvn->_worklist.push(in1);
igvn->_worklist.push(in2);
}
+ in1 = in(1);
+ in2 = in(2);
progress = this;
}
}
+ const Type* t1 = phase->type(in1);
+ const Type* t2 = phase->type(in2);
+
// We are OK if right is a constant, or right is a load and
// left is a non-constant.
if( !(t2->singleton() ||
diff --git a/src/hotspot/share/opto/opaquenode.cpp b/src/hotspot/share/opto/opaquenode.cpp
index c1b769e237056..c66df16b2d039 100644
--- a/src/hotspot/share/opto/opaquenode.cpp
+++ b/src/hotspot/share/opto/opaquenode.cpp
@@ -60,25 +60,6 @@ bool Opaque2Node::cmp( const Node &n ) const {
return (&n == this); // Always fail except on self
}
-Node* Opaque4Node::Identity(PhaseGVN* phase) {
- if (phase->C->post_loop_opts_phase()) {
- // With Opaque4 nodes, the expectation is that the test of input 1
- // is always equal to the constant value of input 2. So we can
- // remove the Opaque4 and replace it by input 2. In debug builds,
- // leave the non constant test in instead to sanity check that it
- // never fails (if it does, that subgraph was constructed so, at
- // runtime, a Halt node is executed).
-#ifdef ASSERT
- return this->in(1);
-#else
- return this->in(2);
-#endif
- } else {
- phase->C->record_for_post_loop_opts_igvn(this);
- }
- return this;
-}
-
const Type* Opaque4Node::Value(PhaseGVN* phase) const {
return phase->type(in(1));
}
diff --git a/src/hotspot/share/opto/opaquenode.hpp b/src/hotspot/share/opto/opaquenode.hpp
index 160f814fd2d6d..cb7ff23764bfd 100644
--- a/src/hotspot/share/opto/opaquenode.hpp
+++ b/src/hotspot/share/opto/opaquenode.hpp
@@ -114,11 +114,13 @@ class Opaque3Node : public Opaque2Node {
// GraphKit::must_be_not_null().
class Opaque4Node : public Node {
public:
- Opaque4Node(Compile* C, Node *tst, Node* final_tst) : Node(NULL, tst, final_tst) {}
+ Opaque4Node(Compile* C, Node *tst, Node* final_tst) : Node(NULL, tst, final_tst) {
+ init_flags(Flag_is_macro);
+ C->add_macro_node(this);
+ }
virtual int Opcode() const;
virtual const Type *bottom_type() const { return TypeInt::BOOL; }
- virtual Node* Identity(PhaseGVN* phase);
virtual const Type* Value(PhaseGVN* phase) const;
};
diff --git a/src/hotspot/share/opto/output.cpp b/src/hotspot/share/opto/output.cpp
index 0e91d90ef36c5..57d2fe05481e4 100644
--- a/src/hotspot/share/opto/output.cpp
+++ b/src/hotspot/share/opto/output.cpp
@@ -2911,6 +2911,16 @@ void Scheduling::anti_do_def( Block *b, Node *def, OptoReg::Name def_reg, int is
if( !OptoReg::is_valid(def_reg) ) // Ignore stores & control flow
return;
+ if (OptoReg::is_reg(def_reg)) {
+ VMReg vmreg = OptoReg::as_VMReg(def_reg);
+ if (vmreg->is_reg() && !vmreg->is_concrete() && !vmreg->prev()->is_concrete()) {
+ // This is one of the high slots of a vector register.
+ // ScheduleAndBundle already checked there are no live wide
+ // vectors in this method so it can be safely ignored.
+ return;
+ }
+ }
+
Node *pinch = _reg_node[def_reg]; // Get pinch point
if ((pinch == NULL) || _cfg->get_block_for_node(pinch) != b || // No pinch-point yet?
is_def ) { // Check for a true def (not a kill)
diff --git a/src/hotspot/share/opto/phasetype.hpp b/src/hotspot/share/opto/phasetype.hpp
index c2615abd66613..2aa55a655537f 100644
--- a/src/hotspot/share/opto/phasetype.hpp
+++ b/src/hotspot/share/opto/phasetype.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -44,7 +44,7 @@ enum CompilerPhaseType {
PHASE_PHASEIDEALLOOP1,
PHASE_PHASEIDEALLOOP2,
PHASE_PHASEIDEALLOOP3,
- PHASE_CPP1,
+ PHASE_CCP1,
PHASE_ITER_GVN2,
PHASE_PHASEIDEALLOOP_ITERATIONS,
PHASE_OPTIMIZE_FINISHED,
@@ -95,7 +95,7 @@ class CompilerPhaseTypeHelper {
case PHASE_PHASEIDEALLOOP1: return "PhaseIdealLoop 1";
case PHASE_PHASEIDEALLOOP2: return "PhaseIdealLoop 2";
case PHASE_PHASEIDEALLOOP3: return "PhaseIdealLoop 3";
- case PHASE_CPP1: return "PhaseCPP 1";
+ case PHASE_CCP1: return "PhaseCCP 1";
case PHASE_ITER_GVN2: return "Iter GVN 2";
case PHASE_PHASEIDEALLOOP_ITERATIONS: return "PhaseIdealLoop iterations";
case PHASE_OPTIMIZE_FINISHED: return "Optimize finished";
diff --git a/src/hotspot/share/opto/postaloc.cpp b/src/hotspot/share/opto/postaloc.cpp
index ad1f0cf8aa50e..aa9ae37a78a12 100644
--- a/src/hotspot/share/opto/postaloc.cpp
+++ b/src/hotspot/share/opto/postaloc.cpp
@@ -611,7 +611,7 @@ void PhaseChaitin::post_allocate_copy_removal() {
if( phi != x && u != x ) // Found a different input
u = u ? NodeSentinel : x; // Capture unique input, or NodeSentinel for 2nd input
}
- if (u != NodeSentinel) { // Junk Phi. Remove
+ if (u != NodeSentinel || phi->outcnt() == 0) { // Junk Phi. Remove
phi->replace_by(u);
j -= yank_if_dead(phi, block, &value, &regnd);
phi_dex--;
diff --git a/src/hotspot/share/opto/stringopts.cpp b/src/hotspot/share/opto/stringopts.cpp
index 6392cd8ca025c..8c0a060d86258 100644
--- a/src/hotspot/share/opto/stringopts.cpp
+++ b/src/hotspot/share/opto/stringopts.cpp
@@ -65,7 +65,8 @@ class StringConcat : public ResourceObj {
StringMode,
IntMode,
CharMode,
- StringNullCheckMode
+ StringNullCheckMode,
+ NegativeIntCheckMode
};
StringConcat(PhaseStringOpts* stringopts, CallStaticJavaNode* end):
@@ -122,12 +123,19 @@ class StringConcat : public ResourceObj {
void push_string(Node* value) {
push(value, StringMode);
}
+
void push_string_null_check(Node* value) {
push(value, StringNullCheckMode);
}
+
+ void push_negative_int_check(Node* value) {
+ push(value, NegativeIntCheckMode);
+ }
+
void push_int(Node* value) {
push(value, IntMode);
}
+
void push_char(Node* value) {
push(value, CharMode);
}
@@ -277,6 +285,20 @@ void StringConcat::eliminate_unneeded_control() {
C->gvn_replace_by(n, n->in(0)->in(0));
// get rid of the other projection
C->gvn_replace_by(n->in(0)->as_If()->proj_out(false), C->top());
+ } else if (n->is_Region()) {
+ Node* iff = n->in(1)->in(0);
+ assert(n->req() == 3 && n->in(2)->in(0) == iff, "not a diamond");
+ assert(iff->is_If(), "no if for the diamond");
+ Node* bol = iff->in(1);
+ assert(bol->is_Bool(), "unexpected if shape");
+ Node* cmp = bol->in(1);
+ assert(cmp->is_Cmp(), "unexpected if shape");
+ if (cmp->in(1)->is_top() || cmp->in(2)->is_top()) {
+ // This region should lose its Phis and be optimized out by igvn, but there's a chance the if folds to top first,
+ // which would then cause a reachable part of the graph to become dead.
+ Compile* C = _stringopts->C;
+ C->gvn_replace_by(n, iff->in(0));
+ }
}
}
}
@@ -488,13 +510,35 @@ StringConcat* PhaseStringOpts::build_candidate(CallStaticJavaNode* call) {
#ifndef PRODUCT
if (PrintOptimizeStringConcat) {
tty->print("giving up because StringBuilder(null) throws exception");
- alloc->jvms()->dump_spec(tty); tty->cr();
+ alloc->jvms()->dump_spec(tty);
+ tty->cr();
}
#endif
return NULL;
}
// StringBuilder(str) argument needs null check.
sc->push_string_null_check(use->in(TypeFunc::Parms + 1));
+ } else if (sig == ciSymbols::int_void_signature()) {
+ // StringBuilder(int) case.
+ Node* parm = use->in(TypeFunc::Parms + 1);
+ assert(parm != NULL, "must exist");
+ const TypeInt* type = _gvn->type(parm)->is_int();
+ if (type->_hi < 0) {
+ // The initial capacity argument is always negative, in which case StringBuilder(int) throws
+ // a NegativeArraySizeException. Bail out from string opts.
+#ifndef PRODUCT
+ if (PrintOptimizeStringConcat) {
+ tty->print("giving up because a negative argument is passed to StringBuilder(int) which "
+ "throws a NegativeArraySizeException");
+ alloc->jvms()->dump_spec(tty);
+ tty->cr();
+ }
+#endif
+ return NULL;
+ } else if (type->_lo < 0) {
+ // The argument could be negative: we need a runtime check that throws a NegativeArraySizeException in that case.
+ sc->push_negative_int_check(parm);
+ }
}
// The int variant takes an initial size for the backing
// array so just treat it like the void version.
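The two added branches implement a three-way decision over the int type's [lo, hi] interval: always negative, maybe negative, or provably non-negative. A standalone sketch of that classification (hypothetical helper, not PhaseStringOpts code):

#include <cstdio>

enum class CapacityAction { BailOut, RuntimeCheck, NoCheck };

static CapacityAction classify(int lo, int hi) {
  if (hi < 0) return CapacityAction::BailOut;       // always throws
  if (lo < 0) return CapacityAction::RuntimeCheck;  // may throw: add a trap
  return CapacityAction::NoCheck;                   // provably non-negative
}

int main() {
  printf("%d\n", (int)classify(-8, -1));  // 0: BailOut
  printf("%d\n", (int)classify(-8, 16));  // 1: RuntimeCheck
  printf("%d\n", (int)classify( 0, 16));  // 2: NoCheck
  return 0;
}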
@@ -1003,6 +1047,7 @@ bool StringConcat::validate_control_flow() {
// The IGVN will make this simple diamond go away when it
// transforms the Region. Make sure it sees it.
Compile::current()->record_for_igvn(ptr);
+ _control.push(ptr);
ptr = ptr->in(1)->in(0)->in(0);
continue;
}
@@ -1229,7 +1274,7 @@ Node* PhaseStringOpts::int_stringSize(GraphKit& kit, Node* arg) {
kit.set_control(loop);
Node* sizeTable = fetch_static_field(kit, size_table_field);
- Node* value = kit.load_array_element(NULL, sizeTable, index, TypeAryPtr::INTS);
+ Node* value = kit.load_array_element(sizeTable, index, TypeAryPtr::INTS, /* set_ctrl */ false);
C->record_for_igvn(value);
Node* limit = __ CmpI(phi, value);
Node* limitb = __ Bool(limit, BoolTest::le);
@@ -1779,6 +1824,23 @@ void PhaseStringOpts::replace_string_concat(StringConcat* sc) {
for (int argi = 0; argi < sc->num_arguments(); argi++) {
Node* arg = sc->argument(argi);
switch (sc->mode(argi)) {
+ case StringConcat::NegativeIntCheckMode: {
+ // The initial capacity argument might be negative, in which case StringBuilder(int) throws
+ // a NegativeArraySizeException. Insert a runtime check with an uncommon trap.
+ const TypeInt* type = kit.gvn().type(arg)->is_int();
+ assert(type->_hi >= 0 && type->_lo < 0, "no runtime int check needed");
+ Node* p = __ Bool(__ CmpI(arg, kit.intcon(0)), BoolTest::ge);
+ IfNode* iff = kit.create_and_map_if(kit.control(), p, PROB_MIN, COUNT_UNKNOWN);
+ {
+ // Negative int -> uncommon trap.
+ PreserveJVMState pjvms(&kit);
+ kit.set_control(__ IfFalse(iff));
+ kit.uncommon_trap(Deoptimization::Reason_intrinsic,
+ Deoptimization::Action_maybe_recompile);
+ }
+ kit.set_control(__ IfTrue(iff));
+ break;
+ }
case StringConcat::IntMode: {
Node* string_size = int_stringSize(kit, arg);
@@ -1948,6 +2010,8 @@ void PhaseStringOpts::replace_string_concat(StringConcat* sc) {
for (int argi = 0; argi < sc->num_arguments(); argi++) {
Node* arg = sc->argument(argi);
switch (sc->mode(argi)) {
+ case StringConcat::NegativeIntCheckMode:
+ break; // Nothing to do, was only needed to add a runtime check earlier.
case StringConcat::IntMode: {
start = int_getChars(kit, arg, dst_array, coder, start, string_sizes->in(argi));
break;
diff --git a/src/hotspot/share/opto/superword.cpp b/src/hotspot/share/opto/superword.cpp
index 82f10893c4de7..e9199dd44e65e 100644
--- a/src/hotspot/share/opto/superword.cpp
+++ b/src/hotspot/share/opto/superword.cpp
@@ -2488,7 +2488,6 @@ void SuperWord::output() {
} else if (VectorNode::is_scalar_rotate(n)) {
Node* in1 = low_adr->in(1);
Node* in2 = p->at(0)->in(2);
- assert(in2->bottom_type()->isa_int(), "Shift must always be an int value");
// If rotation count is non-constant or greater than 8bit value create a vector.
if (!in2->is_Con() || -0x80 > in2->get_int() || in2->get_int() >= 0x80) {
in2 = vector_opd(p, 2);
diff --git a/src/hotspot/share/opto/vectorIntrinsics.cpp b/src/hotspot/share/opto/vectorIntrinsics.cpp
index d1a8ede4f5f76..06f4914199d17 100644
--- a/src/hotspot/share/opto/vectorIntrinsics.cpp
+++ b/src/hotspot/share/opto/vectorIntrinsics.cpp
@@ -1672,8 +1672,7 @@ bool LibraryCallKit::inline_vector_convert() {
if (num_elem_from < num_elem_to) {
// Since input and output number of elements are not consistent, we need to make sure we
// properly size. Thus, first make a cast that retains the number of elements from source.
- // In case the size exceeds the arch size, we do the minimum.
- int num_elem_for_cast = MIN2(num_elem_from, Matcher::max_vector_size(elem_bt_to));
+ int num_elem_for_cast = num_elem_from;
// It is possible that arch does not support this intermediate vector size
// TODO More complex logic required here to handle this corner case for the sizes.
@@ -1692,7 +1691,7 @@ bool LibraryCallKit::inline_vector_convert() {
} else if (num_elem_from > num_elem_to) {
// Since number elements from input is larger than output, simply reduce size of input (we are supposed to
// drop top elements anyway).
- int num_elem_for_resize = MAX2(num_elem_to, Matcher::min_vector_size(elem_bt_from));
+ int num_elem_for_resize = num_elem_to;
// It is possible that arch does not support this intermediate vector size
// TODO More complex logic required here to handle this corner case for the sizes.
diff --git a/src/hotspot/share/opto/vectornode.cpp b/src/hotspot/share/opto/vectornode.cpp
index b51b0a51680fa..39ddc25a80647 100644
--- a/src/hotspot/share/opto/vectornode.cpp
+++ b/src/hotspot/share/opto/vectornode.cpp
@@ -27,6 +27,7 @@
#include "opto/mulnode.hpp"
#include "opto/subnode.hpp"
#include "opto/vectornode.hpp"
+#include "opto/convertnode.hpp"
#include "utilities/powerOfTwo.hpp"
#include "utilities/globalDefinitions.hpp"
@@ -311,6 +312,14 @@ bool VectorNode::is_vector_rotate_supported(int vopc, uint vlen, BasicType bt) {
return true;
}
+ // If the target does not support variable shift operations then there is no point
+ // in creating a rotate vector node since it cannot be degenerated into shifts later.
+ // Add a pessimistic check here to avoid complex pattern matching which
+ // may not be foolproof.
+ if (!Matcher::supports_vector_variable_shifts()) {
+ return false;
+ }
+
// Validate existence of nodes created in case of rotate degeneration.
switch (bt) {
case T_INT:
@@ -1142,22 +1151,50 @@ Node* VectorNode::degenerate_vector_rotate(Node* src, Node* cnt, bool is_rotate_
// later swap them in case of left rotation.
Node* shiftRCnt = NULL;
Node* shiftLCnt = NULL;
- if (cnt->is_Con() && cnt->bottom_type()->isa_int()) {
- // Constant shift case.
- int shift = cnt->get_int() & shift_mask;
+ const TypeInt* cnt_type = cnt->bottom_type()->isa_int();
+ bool is_binary_vector_op = false;
+ if (cnt_type && cnt_type->is_con()) {
+ // Constant shift.
+ int shift = cnt_type->get_con() & shift_mask;
shiftRCnt = phase->intcon(shift);
shiftLCnt = phase->intcon(shift_mask + 1 - shift);
- } else {
- // Variable shift case.
- assert(VectorNode::is_invariant_vector(cnt), "Broadcast expected");
+ } else if (VectorNode::is_invariant_vector(cnt)) {
+ // Scalar variable shift, handle replicates generated by auto vectorizer.
cnt = cnt->in(1);
if (bt == T_LONG) {
// Shift count vector for Rotate vector has long elements too.
- assert(cnt->Opcode() == Op_ConvI2L, "ConvI2L expected");
- cnt = cnt->in(1);
+ if (cnt->Opcode() == Op_ConvI2L) {
+ cnt = cnt->in(1);
+ } else {
+ assert(cnt->bottom_type()->isa_long() &&
+ cnt->bottom_type()->is_long()->is_con(), "Long constant expected");
+ cnt = phase->transform(new ConvL2INode(cnt));
+ }
}
shiftRCnt = phase->transform(new AndINode(cnt, phase->intcon(shift_mask)));
shiftLCnt = phase->transform(new SubINode(phase->intcon(shift_mask + 1), shiftRCnt));
+ } else {
+ // Vector variable shift.
+ assert(Matcher::supports_vector_variable_shifts(), "");
+ assert(bt == T_INT, "Variable vector case supported for integer type rotation");
+
+ assert(cnt->bottom_type()->isa_vect(), "Unexpected shift");
+ const Type* elem_ty = Type::get_const_basic_type(bt);
+
+ Node* shift_mask_node = phase->intcon(shift_mask);
+ Node* const_one_node = phase->intcon(1);
+
+ int subVopc = VectorNode::opcode(Op_SubI, bt);
+ int addVopc = VectorNode::opcode(Op_AddI, bt);
+
+ Node* vector_mask = phase->transform(VectorNode::scalar2vector(shift_mask_node, vlen, elem_ty));
+ Node* vector_one = phase->transform(VectorNode::scalar2vector(const_one_node, vlen, elem_ty));
+
+ shiftRCnt = cnt;
+ shiftRCnt = phase->transform(VectorNode::make(Op_AndV, shiftRCnt, vector_mask, vt));
+ vector_mask = phase->transform(VectorNode::make(addVopc, vector_one, vector_mask, vt));
+ shiftLCnt = phase->transform(VectorNode::make(subVopc, vector_mask, shiftRCnt, vt));
+ is_binary_vector_op = true;
}
// Swap the computed left and right shift counts.
@@ -1165,8 +1202,10 @@ Node* VectorNode::degenerate_vector_rotate(Node* src, Node* cnt, bool is_rotate_
swap(shiftRCnt,shiftLCnt);
}
- shiftLCnt = phase->transform(new LShiftCntVNode(shiftLCnt, vt));
- shiftRCnt = phase->transform(new RShiftCntVNode(shiftRCnt, vt));
+ if (!is_binary_vector_op) {
+ shiftLCnt = phase->transform(new LShiftCntVNode(shiftLCnt, vt));
+ shiftRCnt = phase->transform(new RShiftCntVNode(shiftRCnt, vt));
+ }
return new OrVNode(phase->transform(VectorNode::make(shiftLOpc, src, shiftLCnt, vlen, bt)),
phase->transform(VectorNode::make(shiftROpc, src, shiftRCnt, vlen, bt)),
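degenerate_vector_rotate() lowers a rotate into two shifts and an OR; the new vector-variable branch computes the left/right shift-count pair per lane with AndV/SubV/AddV. A scalar standalone model of the same decomposition (counts masked so the C++ shifts stay well defined; not HotSpot code):

#include <cassert>
#include <cstdint>

// rotl(x, s) == (x << l) | (x >> r) with l = s & mask and
// r = (mask + 1 - l) & mask for 32-bit lanes (mask == 31).
static uint32_t rotl32(uint32_t x, int cnt) {
  const int mask = 31;
  int l = cnt & mask;             // left shift count
  int r = (mask + 1 - l) & mask;  // right shift count
  return (x << l) | (x >> r);
}

int main() {
  assert(rotl32(0x80000001u, 1) == 0x00000003u);
  assert(rotl32(0x12345678u, 0) == 0x12345678u);
  assert(rotl32(0x12345678u, 32) == 0x12345678u);
  return 0;
}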
diff --git a/src/hotspot/share/prims/jni.cpp b/src/hotspot/share/prims/jni.cpp
index dec1e22491db1..cd01152484132 100644
--- a/src/hotspot/share/prims/jni.cpp
+++ b/src/hotspot/share/prims/jni.cpp
@@ -977,7 +977,7 @@ JNI_ENTRY(jobject, jni_NewObjectA(JNIEnv *env, jclass clazz, jmethodID methodID,
HOTSPOT_JNI_NEWOBJECTA_ENTRY(env, clazz, (uintptr_t) methodID);
jobject obj = NULL;
- DT_RETURN_MARK(NewObjectA, jobject, (const jobject)obj);
+ DT_RETURN_MARK(NewObjectA, jobject, (const jobject&)obj);
instanceOop i = InstanceKlass::allocate_instance(JNIHandles::resolve_non_null(clazz), CHECK_NULL);
obj = JNIHandles::make_local(THREAD, i);
diff --git a/src/hotspot/share/prims/jvm.cpp b/src/hotspot/share/prims/jvm.cpp
index 91da785714b4e..28f6361157ce4 100644
--- a/src/hotspot/share/prims/jvm.cpp
+++ b/src/hotspot/share/prims/jvm.cpp
@@ -3357,7 +3357,7 @@ JVM_END
// Library support ///////////////////////////////////////////////////////////////////////////
-JVM_ENTRY_NO_ENV(void*, JVM_LoadLibrary(const char* name))
+JVM_ENTRY_NO_ENV(void*, JVM_LoadLibrary(const char* name, jboolean throwException))
//%note jvm_ct
char ebuf[1024];
void *load_result;
@@ -3366,18 +3366,23 @@ JVM_ENTRY_NO_ENV(void*, JVM_LoadLibrary(const char* name))
load_result = os::dll_load(name, ebuf, sizeof ebuf);
}
if (load_result == NULL) {
- char msg[1024];
- jio_snprintf(msg, sizeof msg, "%s: %s", name, ebuf);
- // Since 'ebuf' may contain a string encoded using
- // platform encoding scheme, we need to pass
- // Exceptions::unsafe_to_utf8 to the new_exception method
- // as the last argument. See bug 6367357.
- Handle h_exception =
- Exceptions::new_exception(thread,
- vmSymbols::java_lang_UnsatisfiedLinkError(),
- msg, Exceptions::unsafe_to_utf8);
-
- THROW_HANDLE_0(h_exception);
+ if (throwException) {
+ char msg[1024];
+ jio_snprintf(msg, sizeof msg, "%s: %s", name, ebuf);
+ // Since 'ebuf' may contain a string encoded using
+ // platform encoding scheme, we need to pass
+ // Exceptions::unsafe_to_utf8 to the new_exception method
+ // as the last argument. See bug 6367357.
+ Handle h_exception =
+ Exceptions::new_exception(thread,
+ vmSymbols::java_lang_UnsatisfiedLinkError(),
+ msg, Exceptions::unsafe_to_utf8);
+
+ THROW_HANDLE_0(h_exception);
+ } else {
+ log_info(library)("Failed to load library %s", name);
+ return load_result;
+ }
}
log_info(library)("Loaded library %s, handle " INTPTR_FORMAT, name, p2i(load_result));
return load_result;
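With `throwException == false`, a lookup failure now surfaces as a NULL return plus a unified-logging line instead of an `UnsatisfiedLinkError`. A self-contained POSIX sketch of that non-throwing pattern, using `dlopen` (which `os::dll_load` wraps on Linux); the helper name is illustrative:

```cpp
#include <dlfcn.h>
#include <cstdio>

// Non-throwing load: failure is reported through the return value and a log
// line, mirroring JVM_LoadLibrary(name, /*throwException=*/JNI_FALSE).
static void* load_library_no_throw(const char* name) {
  void* handle = dlopen(name, RTLD_LAZY);
  if (handle == nullptr) {
    std::fprintf(stderr, "Failed to load library %s: %s\n", name, dlerror());
  }
  return handle;
}

int main() {
  void* h = load_library_no_throw("libdoesnotexist.so");
  return h == nullptr ? 0 : 1;
}
```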
diff --git a/src/hotspot/share/prims/jvmtiEnvBase.cpp b/src/hotspot/share/prims/jvmtiEnvBase.cpp
index 10f3a764131ea..5517b64bc2dbb 100644
--- a/src/hotspot/share/prims/jvmtiEnvBase.cpp
+++ b/src/hotspot/share/prims/jvmtiEnvBase.cpp
@@ -830,7 +830,7 @@ JvmtiEnvBase::get_stack_trace(JavaThread *java_thread,
"call by myself / at safepoint / at handshake");
int count = 0;
if (java_thread->has_last_Java_frame()) {
- RegisterMap reg_map(java_thread);
+ RegisterMap reg_map(java_thread, false /* update_map */, false /* process_frames */);
ResourceMark rm(current_thread);
javaVFrame *jvf = java_thread->last_java_vframe(&reg_map);
HandleMark hm(current_thread);
diff --git a/src/hotspot/share/prims/jvmtiImpl.cpp b/src/hotspot/share/prims/jvmtiImpl.cpp
index ea5ba609f9576..ba79e19392c42 100644
--- a/src/hotspot/share/prims/jvmtiImpl.cpp
+++ b/src/hotspot/share/prims/jvmtiImpl.cpp
@@ -206,9 +206,7 @@ JvmtiBreakpoint::JvmtiBreakpoint(Method* m_method, jlocation location)
}
JvmtiBreakpoint::~JvmtiBreakpoint() {
- if (_class_holder.peek() != NULL) {
- _class_holder.release(JvmtiExport::jvmti_oop_storage());
- }
+ _class_holder.release(JvmtiExport::jvmti_oop_storage());
}
void JvmtiBreakpoint::copy(JvmtiBreakpoint& bp) {
diff --git a/src/hotspot/share/prims/jvmtiRedefineClasses.cpp b/src/hotspot/share/prims/jvmtiRedefineClasses.cpp
index 99479bb8b7abb..6e7d1fd49c2b0 100644
--- a/src/hotspot/share/prims/jvmtiRedefineClasses.cpp
+++ b/src/hotspot/share/prims/jvmtiRedefineClasses.cpp
@@ -4419,6 +4419,9 @@ void VM_RedefineClasses::redefine_single_class(Thread* current, jclass the_jclas
the_class->set_has_been_redefined();
+ // The scratch class is unloaded but still needs cleaning, and must be skipped for CDS.
+ scratch_class->set_is_scratch_class();
+
// keep track of previous versions of this class
the_class->add_previous_version(scratch_class, emcp_method_count);
diff --git a/src/hotspot/share/prims/nativeLookup.cpp b/src/hotspot/share/prims/nativeLookup.cpp
index 3ae101ec60236..16a1041e8d727 100644
--- a/src/hotspot/share/prims/nativeLookup.cpp
+++ b/src/hotspot/share/prims/nativeLookup.cpp
@@ -244,6 +244,7 @@ static JNINativeMethod lookup_special_native_methods[] = {
{ CC"Java_jdk_internal_invoke_NativeEntryPoint_registerNatives", NULL, FN_PTR(JVM_RegisterNativeEntryPointMethods) },
{ CC"Java_jdk_internal_perf_Perf_registerNatives", NULL, FN_PTR(JVM_RegisterPerfMethods) },
{ CC"Java_sun_hotspot_WhiteBox_registerNatives", NULL, FN_PTR(JVM_RegisterWhiteBoxMethods) },
+ { CC"Java_jdk_test_whitebox_WhiteBox_registerNatives", NULL, FN_PTR(JVM_RegisterWhiteBoxMethods) },
{ CC"Java_jdk_internal_vm_vector_VectorSupport_registerNatives", NULL, FN_PTR(JVM_RegisterVectorSupportMethods)},
#if INCLUDE_JVMCI
{ CC"Java_jdk_vm_ci_runtime_JVMCI_initializeRuntime", NULL, FN_PTR(JVM_GetJVMCIRuntime) },
diff --git a/src/hotspot/share/prims/vectorSupport.cpp b/src/hotspot/share/prims/vectorSupport.cpp
index 5a9b3b1b6a344..4797857b9f63d 100644
--- a/src/hotspot/share/prims/vectorSupport.cpp
+++ b/src/hotspot/share/prims/vectorSupport.cpp
@@ -291,6 +291,7 @@ int VectorSupport::vop2ideal(jint id, BasicType bt) {
case T_BYTE: // fall-through
case T_SHORT: // fall-through
case T_INT: return Op_NegI;
+ case T_LONG: return Op_NegL;
case T_FLOAT: return Op_NegF;
case T_DOUBLE: return Op_NegD;
default: fatal("NEG: %s", type2name(bt));
diff --git a/src/hotspot/share/prims/wbtestmethods/parserTests.cpp b/src/hotspot/share/prims/wbtestmethods/parserTests.cpp
index 32ef1653f0464..c6ef6cf82963f 100644
--- a/src/hotspot/share/prims/wbtestmethods/parserTests.cpp
+++ b/src/hotspot/share/prims/wbtestmethods/parserTests.cpp
@@ -50,7 +50,7 @@
* This method Returns a char* representation of that enum value.
*/
static const char* lookup_diagnosticArgumentEnum(const char* field_name, oop object) {
- const char* enum_sig = "Lsun/hotspot/parser/DiagnosticCommand$DiagnosticArgumentType;";
+ const char* enum_sig = "Ljdk/test/whitebox/parser/DiagnosticCommand$DiagnosticArgumentType;";
TempNewSymbol enumSigSymbol = SymbolTable::new_symbol(enum_sig);
int offset = WhiteBox::offset_for_field(field_name, object, enumSigSymbol);
oop enumOop = object->obj_field(offset);
diff --git a/src/hotspot/share/prims/whitebox.cpp b/src/hotspot/share/prims/whitebox.cpp
index 6ef0251211cb7..145fa8e1bac33 100644
--- a/src/hotspot/share/prims/whitebox.cpp
+++ b/src/hotspot/share/prims/whitebox.cpp
@@ -995,7 +995,7 @@ bool WhiteBox::validate_cgroup(const char* proc_cgroups,
const char* proc_self_cgroup,
const char* proc_self_mountinfo,
u1* cg_flags) {
- CgroupInfo cg_infos[4];
+ CgroupInfo cg_infos[CG_INFO_LENGTH];
return CgroupSubsystemFactory::determine_type(cg_infos, proc_cgroups,
proc_self_cgroup,
proc_self_mountinfo, cg_flags);
@@ -2151,6 +2151,9 @@ bool WhiteBox::lookup_bool(const char* field_name, oop object) {
void WhiteBox::register_methods(JNIEnv* env, jclass wbclass, JavaThread* thread, JNINativeMethod* method_array, int method_count) {
ResourceMark rm;
+ Klass* klass = java_lang_Class::as_Klass(JNIHandles::resolve_non_null(wbclass));
+ const char* klass_name = klass->external_name();
+
ThreadToNativeFromVM ttnfv(thread); // can't be in VM when we call JNI
// one by one registration natives for exception catching
@@ -2166,13 +2169,13 @@ void WhiteBox::register_methods(JNIEnv* env, jclass wbclass, JavaThread* thread,
if (env->IsInstanceOf(throwable_obj, no_such_method_error_klass)) {
// NoSuchMethodError is thrown when a method can't be found or a method is not native.
// Ignoring the exception since it is not preventing use of other WhiteBox methods.
- tty->print_cr("Warning: 'NoSuchMethodError' on register of sun.hotspot.WhiteBox::%s%s",
- method_array[i].name, method_array[i].signature);
+ tty->print_cr("Warning: 'NoSuchMethodError' on register of %s::%s%s",
+ klass_name, method_array[i].name, method_array[i].signature);
}
} else {
// Registration failed unexpectedly.
- tty->print_cr("Warning: unexpected error on register of sun.hotspot.WhiteBox::%s%s. All methods will be unregistered",
- method_array[i].name, method_array[i].signature);
+ tty->print_cr("Warning: unexpected error on register of %s::%s%s. All methods will be unregistered",
+ klass_name, method_array[i].name, method_array[i].signature);
env->UnregisterNatives(wbclass);
break;
}
@@ -2387,7 +2390,7 @@ static JNINativeMethod methods[] = {
{CC"countAliveClasses0", CC"(Ljava/lang/String;)I", (void*)&WB_CountAliveClasses },
{CC"getSymbolRefcount", CC"(Ljava/lang/String;)I", (void*)&WB_GetSymbolRefcount },
{CC"parseCommandLine0",
- CC"(Ljava/lang/String;C[Lsun/hotspot/parser/DiagnosticCommand;)[Ljava/lang/Object;",
+ CC"(Ljava/lang/String;C[Ljdk/test/whitebox/parser/DiagnosticCommand;)[Ljava/lang/Object;",
(void*) &WB_ParseCommandLine
},
{CC"addToBootstrapClassLoaderSearch0", CC"(Ljava/lang/String;)V",
diff --git a/src/hotspot/share/runtime/abstract_vm_version.cpp b/src/hotspot/share/runtime/abstract_vm_version.cpp
index 67aa1f8559fa5..494336e59c586 100644
--- a/src/hotspot/share/runtime/abstract_vm_version.cpp
+++ b/src/hotspot/share/runtime/abstract_vm_version.cpp
@@ -235,7 +235,9 @@ const char* Abstract_VM_Version::internal_vm_info_string() {
#elif _MSC_VER == 1927
#define HOTSPOT_BUILD_COMPILER "MS VC++ 16.7 (VS2019)"
#elif _MSC_VER == 1928
- #define HOTSPOT_BUILD_COMPILER "MS VC++ 16.8 (VS2019)"
+ #define HOTSPOT_BUILD_COMPILER "MS VC++ 16.8 / 16.9 (VS2019)"
+ #elif _MSC_VER == 1929
+ #define HOTSPOT_BUILD_COMPILER "MS VC++ 16.10 / 16.11 (VS2019)"
#else
#define HOTSPOT_BUILD_COMPILER "unknown MS VC++:" XSTR(_MSC_VER)
#endif
diff --git a/src/hotspot/share/runtime/arguments.cpp b/src/hotspot/share/runtime/arguments.cpp
index 2504527c2336d..3587b08f30973 100644
--- a/src/hotspot/share/runtime/arguments.cpp
+++ b/src/hotspot/share/runtime/arguments.cpp
@@ -4048,6 +4048,11 @@ jint Arguments::apply_ergo() {
// Clear flags not supported on zero.
FLAG_SET_DEFAULT(ProfileInterpreter, false);
FLAG_SET_DEFAULT(UseBiasedLocking, false);
+
+ if (LogTouchedMethods) {
+ warning("LogTouchedMethods is not supported for Zero");
+ FLAG_SET_DEFAULT(LogTouchedMethods, false);
+ }
#endif // ZERO
if (PrintAssembly && FLAG_IS_DEFAULT(DebugNonSafepoints)) {
diff --git a/src/hotspot/share/runtime/atomic.hpp b/src/hotspot/share/runtime/atomic.hpp
index 82e8222e327a4..7a71b6ce4f20f 100644
--- a/src/hotspot/share/runtime/atomic.hpp
+++ b/src/hotspot/share/runtime/atomic.hpp
@@ -47,6 +47,7 @@ enum atomic_memory_order {
memory_order_acquire = 2,
memory_order_release = 3,
memory_order_acq_rel = 4,
+ memory_order_seq_cst = 5,
// Strong two-way memory barrier.
memory_order_conservative = 8
};
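`memory_order_seq_cst` adds a single total order over all such operations, which acquire/release pairing alone does not provide. The classic store-buffering litmus test in standard C++ (`std::atomic` rather than HotSpot's `Atomic` class) shows the distinction:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = -1, r2 = -1;

int main() {
  // Store-buffering: under seq_cst there is one total order over the four
  // accesses, so at least one thread must observe the other's store.
  std::thread t1([] { x.store(1, std::memory_order_seq_cst);
                      r1 = y.load(std::memory_order_seq_cst); });
  std::thread t2([] { y.store(1, std::memory_order_seq_cst);
                      r2 = x.load(std::memory_order_seq_cst); });
  t1.join();
  t2.join();
  assert(!(r1 == 0 && r2 == 0)); // this outcome is allowed with acquire/release only
  return 0;
}
```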
diff --git a/src/hotspot/share/runtime/deoptimization.cpp b/src/hotspot/share/runtime/deoptimization.cpp
index 12244ab2662cb..680becce19540 100644
--- a/src/hotspot/share/runtime/deoptimization.cpp
+++ b/src/hotspot/share/runtime/deoptimization.cpp
@@ -775,6 +775,7 @@ JRT_LEAF(BasicType, Deoptimization::unpack_frames(JavaThread* thread, int exec_m
// at an uncommon trap for an invoke (where the compiler
// generates debug info before the invoke has executed)
Bytecodes::Code cur_code = str.next();
+ Bytecodes::Code next_code = Bytecodes::_shouldnotreachhere;
if (Bytecodes::is_invoke(cur_code)) {
Bytecode_invoke invoke(mh, iframe->interpreter_frame_bci());
cur_invoke_parameter_size = invoke.size_of_parameters();
@@ -783,7 +784,7 @@ JRT_LEAF(BasicType, Deoptimization::unpack_frames(JavaThread* thread, int exec_m
}
}
if (str.bci() < max_bci) {
- Bytecodes::Code next_code = str.next();
+ next_code = str.next();
if (next_code >= 0) {
// The interpreter oop map generator reports results before
// the current bytecode has executed except in the case of
@@ -833,6 +834,10 @@ JRT_LEAF(BasicType, Deoptimization::unpack_frames(JavaThread* thread, int exec_m
// Print out some information that will help us debug the problem
tty->print_cr("Wrong number of expression stack elements during deoptimization");
tty->print_cr(" Error occurred while verifying frame %d (0..%d, 0 is topmost)", i, cur_array->frames() - 1);
+ tty->print_cr(" Current code %s", Bytecodes::name(cur_code));
+ if (try_next_mask) {
+ tty->print_cr(" Next code %s", Bytecodes::name(next_code));
+ }
tty->print_cr(" Fabricated interpreter frame had %d expression stack elements",
iframe->interpreter_frame_expression_stack_size());
tty->print_cr(" Interpreter oop map had %d expression stack elements", mask.expression_stack_size());
@@ -2384,7 +2389,8 @@ Deoptimization::query_update_method_data(MethodData* trap_mdo,
uint idx = reason;
#if INCLUDE_JVMCI
if (is_osr) {
- idx += Reason_LIMIT;
+ // Upper half of history array used for traps in OSR compilations
+ idx += Reason_TRAP_HISTORY_LENGTH;
}
#endif
uint prior_trap_count = trap_mdo->trap_count(idx);
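The renamed constant documents the layout the indexing relies on: one block of counters for normal compilations and a second block of the same length for OSR compilations. A small sketch of that two-half layout (the length and types are illustrative, not the real `MethodData` layout):

```cpp
#include <cstdio>

// Illustrative layout: normal traps are counted in [0, N), traps taken in
// OSR compilations in [N, 2N), so one array serves both compilation kinds.
constexpr int Reason_TRAP_HISTORY_LENGTH = 16; // assumed length for the sketch
unsigned trap_hist[2 * Reason_TRAP_HISTORY_LENGTH] = {};

void record_trap(int reason, bool is_osr) {
  int idx = reason;
  if (is_osr) {
    idx += Reason_TRAP_HISTORY_LENGTH; // upper half holds the OSR history
  }
  trap_hist[idx]++;
}

int main() {
  record_trap(3, /*is_osr=*/false);
  record_trap(3, /*is_osr=*/true);
  std::printf("normal=%u osr=%u\n",
              trap_hist[3], trap_hist[3 + Reason_TRAP_HISTORY_LENGTH]);
  return 0;
}
```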
diff --git a/src/hotspot/share/runtime/deoptimization.hpp b/src/hotspot/share/runtime/deoptimization.hpp
index 20c9e6b36417c..4452e7a15572e 100644
--- a/src/hotspot/share/runtime/deoptimization.hpp
+++ b/src/hotspot/share/runtime/deoptimization.hpp
@@ -46,6 +46,7 @@ class Deoptimization : AllStatic {
public:
// What condition caused the deoptimization?
+ // Note: Keep this enum in sync. with Deoptimization::_trap_reason_name.
enum DeoptReason {
Reason_many = -1, // indicates presence of several reasons
Reason_none = 0, // indicates absence of a relevant deopt.
@@ -98,20 +99,22 @@ class Deoptimization : AllStatic {
Reason_jsr_mismatch,
#endif
+ // Used to define MethodData::_trap_hist_limit where Reason_tenured isn't included
+ Reason_TRAP_HISTORY_LENGTH,
+
// Reason_tenured is counted separately, add normal counted Reasons above.
- // Related to MethodData::_trap_hist_limit where Reason_tenured isn't included
- Reason_tenured, // age of the code has reached the limit
+ Reason_tenured = Reason_TRAP_HISTORY_LENGTH, // age of the code has reached the limit
Reason_LIMIT,
- // Note: Keep this enum in sync. with _trap_reason_name.
- Reason_RECORDED_LIMIT = Reason_profile_predicate // some are not recorded per bc
// Note: Reason_RECORDED_LIMIT should fit into 31 bits of
// DataLayout::trap_bits. This dependency is enforced indirectly
// via asserts, to avoid excessive direct header-to-header dependencies.
// See Deoptimization::trap_state_reason and class DataLayout.
+ Reason_RECORDED_LIMIT = Reason_profile_predicate, // some are not recorded per bc
};
// What action must be taken by the runtime?
+ // Note: Keep this enum in sync. with Deoptimization::_trap_action_name.
enum DeoptAction {
Action_none, // just interpret, do not invalidate nmethod
Action_maybe_recompile, // recompile the nmethod; need not invalidate
@@ -119,7 +122,6 @@ class Deoptimization : AllStatic {
Action_make_not_entrant, // invalidate the nmethod, recompile (probably)
Action_make_not_compilable, // invalidate the nmethod and do not compile
Action_LIMIT
- // Note: Keep this enum in sync. with _trap_action_name.
};
enum {
diff --git a/src/hotspot/share/runtime/fieldDescriptor.cpp b/src/hotspot/share/runtime/fieldDescriptor.cpp
index d10cb79e6aa10..b73f1c27befcf 100644
--- a/src/hotspot/share/runtime/fieldDescriptor.cpp
+++ b/src/hotspot/share/runtime/fieldDescriptor.cpp
@@ -55,7 +55,7 @@ Symbol* fieldDescriptor::generic_signature() const {
}
}
assert(false, "should never happen");
- return NULL;
+ return vmSymbols::void_signature(); // return a default value (for code analyzers)
}
bool fieldDescriptor::is_trusted_final() const {
diff --git a/src/hotspot/share/runtime/globals.hpp b/src/hotspot/share/runtime/globals.hpp
index 8779036b44024..8c61e6a05aea7 100644
--- a/src/hotspot/share/runtime/globals.hpp
+++ b/src/hotspot/share/runtime/globals.hpp
@@ -456,22 +456,22 @@ const intx ObjectAlignmentInBytes = 8;
"Use only malloc/free for allocation (no resource area/arena)") \
\
develop(bool, ZapResourceArea, trueInDebug, \
- "Zap freed resource/arena space with 0xABABABAB") \
+ "Zap freed resource/arena space") \
\
notproduct(bool, ZapVMHandleArea, trueInDebug, \
- "Zap freed VM handle space with 0xBCBCBCBC") \
+ "Zap freed VM handle space") \
\
notproduct(bool, ZapStackSegments, trueInDebug, \
- "Zap allocated/freed stack segments with 0xFADFADED") \
+ "Zap allocated/freed stack segments") \
\
develop(bool, ZapUnusedHeapArea, trueInDebug, \
- "Zap unused heap space with 0xBAADBABE") \
+ "Zap unused heap space") \
\
develop(bool, CheckZapUnusedHeapArea, false, \
"Check zapping of unused heap space") \
\
develop(bool, ZapFillerObjects, trueInDebug, \
- "Zap filler objects with 0xDEAFBABE") \
+ "Zap filler objects") \
\
product(bool, ExecutingUnitTests, false, \
"Whether the JVM is running unit tests or not") \
diff --git a/src/hotspot/share/runtime/handshake.cpp b/src/hotspot/share/runtime/handshake.cpp
index 8b76e9e495f03..6801e45e94e77 100644
--- a/src/hotspot/share/runtime/handshake.cpp
+++ b/src/hotspot/share/runtime/handshake.cpp
@@ -612,14 +612,12 @@ void HandshakeState::do_self_suspend() {
assert(Thread::current() == _handshakee, "should call from _handshakee");
assert(_lock.owned_by_self(), "Lock must be held");
assert(!_handshakee->has_last_Java_frame() || _handshakee->frame_anchor()->walkable(), "should have walkable stack");
- JavaThreadState jts = _handshakee->thread_state();
+ assert(_handshakee->thread_state() == _thread_blocked, "Caller should have transitioned to _thread_blocked");
+
while (is_suspended()) {
- _handshakee->set_thread_state(_thread_blocked);
log_trace(thread, suspend)("JavaThread:" INTPTR_FORMAT " suspended", p2i(_handshakee));
_lock.wait_without_safepoint_check();
}
- _handshakee->set_thread_state(jts);
- set_async_suspend_handshake(false);
log_trace(thread, suspend)("JavaThread:" INTPTR_FORMAT " resumed", p2i(_handshakee));
}
@@ -631,7 +629,12 @@ class ThreadSelfSuspensionHandshake : public AsyncHandshakeClosure {
void do_thread(Thread* thr) {
JavaThread* current = thr->as_Java_thread();
assert(current == Thread::current(), "Must be self executed.");
+ JavaThreadState jts = current->thread_state();
+
+ current->set_thread_state(_thread_blocked);
current->handshake_state()->do_self_suspend();
+ current->set_thread_state(jts);
+ current->handshake_state()->set_async_suspend_handshake(false);
}
virtual bool is_suspend() { return true; }
};
@@ -681,15 +684,19 @@ class SuspendThreadHandshake : public HandshakeClosure {
bool HandshakeState::suspend() {
JavaThread* self = JavaThread::current();
- SuspendThreadHandshake st;
- Handshake::execute(&st, _handshakee);
if (_handshakee == self) {
- // If target is the current thread we need to call this to do the
- // actual suspend since Handshake::execute() above only installed
- // the asynchronous handshake.
- SafepointMechanism::process_if_requested(self);
+ // If target is the current thread we can bypass the handshake machinery
+ // and just suspend directly
+ ThreadBlockInVM tbivm(self);
+ MutexLocker ml(&_lock, Mutex::_no_safepoint_check_flag);
+ set_suspended(true);
+ do_self_suspend();
+ return true;
+ } else {
+ SuspendThreadHandshake st;
+ Handshake::execute(&st, _handshakee);
+ return st.did_suspend();
}
- return st.did_suspend();
}
bool HandshakeState::resume() {
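The self-suspend path is a standard condition-wait loop: park while the suspended flag is set and re-check it after every wakeup, with the thread-state transition now done by the caller. A portable sketch of the same pattern using `std::condition_variable` in place of HotSpot's monitor (details such as safepoint checks and thread states are omitted):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <chrono>

// Minimal self-suspend/resume pair: the suspending thread parks itself in a
// wait loop; a resumer clears the flag and notifies, mirroring do_self_suspend.
std::mutex lock;
std::condition_variable cv;
bool suspended = false;

void do_self_suspend() {
  std::unique_lock<std::mutex> ml(lock);
  while (suspended) {   // loop guards against spurious wakeups
    cv.wait(ml);
  }
}

void resume() {
  { std::lock_guard<std::mutex> ml(lock); suspended = false; }
  cv.notify_all();
}

int main() {
  { std::lock_guard<std::mutex> ml(lock); suspended = true; }
  std::thread t(do_self_suspend);
  std::this_thread::sleep_for(std::chrono::milliseconds(10));
  resume();
  t.join();
  return 0;
}
```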
diff --git a/src/hotspot/share/runtime/mutexLocker.hpp b/src/hotspot/share/runtime/mutexLocker.hpp
index 16b7401f5ad2b..d45c62dadc810 100644
--- a/src/hotspot/share/runtime/mutexLocker.hpp
+++ b/src/hotspot/share/runtime/mutexLocker.hpp
@@ -260,12 +260,8 @@ class MonitorLocker: public MutexLocker {
}
bool wait(int64_t timeout = 0) {
- if (_flag == Mutex::_safepoint_check_flag) {
- return as_monitor()->wait(timeout);
- } else {
- return as_monitor()->wait_without_safepoint_check(timeout);
- }
- return false;
+ return _flag == Mutex::_safepoint_check_flag ?
+ as_monitor()->wait(timeout) : as_monitor()->wait_without_safepoint_check(timeout);
}
void notify_all() {
diff --git a/src/hotspot/share/runtime/objectMonitor.hpp b/src/hotspot/share/runtime/objectMonitor.hpp
index 7a866c123d20e..b36a724e43bc9 100644
--- a/src/hotspot/share/runtime/objectMonitor.hpp
+++ b/src/hotspot/share/runtime/objectMonitor.hpp
@@ -127,7 +127,7 @@ class ObjectWaiter : public StackObj {
#define OM_CACHE_LINE_SIZE DEFAULT_CACHE_LINE_SIZE
#endif
-class ObjectMonitor : public CHeapObj<mtInternal> {
+class ObjectMonitor : public CHeapObj<mtObjectMonitor> {
friend class ObjectSynchronizer;
friend class ObjectWaiter;
friend class VMStructs;
diff --git a/src/hotspot/share/runtime/os.cpp b/src/hotspot/share/runtime/os.cpp
index 9b8e667f9ec38..928f9bc83279b 100644
--- a/src/hotspot/share/runtime/os.cpp
+++ b/src/hotspot/share/runtime/os.cpp
@@ -916,7 +916,7 @@ bool os::print_function_and_library_name(outputStream* st,
addr = addr2;
}
}
-#endif // HANDLE_FUNCTION_DESCRIPTORS
+#endif // HAVE_FUNCTION_DESCRIPTORS
if (have_function_name) {
// Print function name, optionally demangled
diff --git a/src/hotspot/share/runtime/safepointMechanism.hpp b/src/hotspot/share/runtime/safepointMechanism.hpp
index 69e1b84889371..46cac5e4a610f 100644
--- a/src/hotspot/share/runtime/safepointMechanism.hpp
+++ b/src/hotspot/share/runtime/safepointMechanism.hpp
@@ -45,15 +45,12 @@ class SafepointMechanism : public AllStatic {
static address _polling_page;
-
static inline void disarm_local_poll(JavaThread* thread);
static inline bool global_poll();
static void process(JavaThread *thread, bool allow_suspend);
- static inline bool should_process_no_suspend(JavaThread* thread);
-
static void default_initialize();
static void pd_initialize() NOT_AIX({ default_initialize(); });
diff --git a/src/hotspot/share/runtime/safepointMechanism.inline.hpp b/src/hotspot/share/runtime/safepointMechanism.inline.hpp
index 965eea20518d0..fbfcf2827b126 100644
--- a/src/hotspot/share/runtime/safepointMechanism.inline.hpp
+++ b/src/hotspot/share/runtime/safepointMechanism.inline.hpp
@@ -30,6 +30,7 @@
#include "runtime/atomic.hpp"
#include "runtime/handshake.hpp"
#include "runtime/safepoint.hpp"
+#include "runtime/stackWatermarkSet.hpp"
#include "runtime/thread.inline.hpp"
// Caller is responsible for using a memory barrier if needed.
@@ -62,26 +63,29 @@ bool SafepointMechanism::global_poll() {
return (SafepointSynchronize::_state != SafepointSynchronize::_not_synchronized);
}
-bool SafepointMechanism::should_process_no_suspend(JavaThread* thread) {
- if (global_poll() || thread->handshake_state()->has_a_non_suspend_operation()) {
- return true;
- } else {
- // We ignore suspend requests if any and just check before returning if we need
- // to fix the thread's oops and first few frames due to a possible safepoint.
- StackWatermarkSet::on_safepoint(thread);
- update_poll_values(thread);
- OrderAccess::cross_modify_fence();
- return false;
- }
-}
-
bool SafepointMechanism::should_process(JavaThread* thread, bool allow_suspend) {
if (!local_poll_armed(thread)) {
return false;
} else if (allow_suspend) {
return true;
}
- return should_process_no_suspend(thread);
+ // We are armed but we should ignore suspend operations.
+ if (global_poll() || // Safepoint
+ thread->handshake_state()->has_a_non_suspend_operation() || // Non-suspend handshake
+ !StackWatermarkSet::processing_started(thread)) { // StackWatermark processing is not started
+ return true;
+ }
+
+ // It has boiled down to two possibilities:
+ // 1: We have nothing to process; this is just a disarm poll.
+ // 2: We have a suspend handshake, which cannot be processed.
+ // We update the poll value in case of a disarm, to reduce false positives.
+ update_poll_values(thread);
+
+ // We are now about to skip processing, so the cross-modify fence it would have
+ // executed will not run. In case a safepoint happened while we were blocked,
+ // we execute the fence here instead.
+ OrderAccess::cross_modify_fence();
+ return false;
}
void SafepointMechanism::process_if_requested(JavaThread* thread, bool allow_suspend) {
diff --git a/src/hotspot/share/runtime/stackWatermarkSet.cpp b/src/hotspot/share/runtime/stackWatermarkSet.cpp
index e283679712015..8029981baf76a 100644
--- a/src/hotspot/share/runtime/stackWatermarkSet.cpp
+++ b/src/hotspot/share/runtime/stackWatermarkSet.cpp
@@ -128,6 +128,15 @@ void StackWatermarkSet::start_processing(JavaThread* jt, StackWatermarkKind kind
// will always update the poll values when waking up from a safepoint.
}
+bool StackWatermarkSet::processing_started(JavaThread* jt) {
+ for (StackWatermark* current = head(jt); current != NULL; current = current->next()) {
+ if (!current->processing_started()) {
+ return false;
+ }
+ }
+ return true;
+}
+
void StackWatermarkSet::finish_processing(JavaThread* jt, void* context, StackWatermarkKind kind) {
StackWatermark* watermark = get(jt, kind);
if (watermark != NULL) {
diff --git a/src/hotspot/share/runtime/stackWatermarkSet.hpp b/src/hotspot/share/runtime/stackWatermarkSet.hpp
index 78cfc7f28276f..11c27abb85ef7 100644
--- a/src/hotspot/share/runtime/stackWatermarkSet.hpp
+++ b/src/hotspot/share/runtime/stackWatermarkSet.hpp
@@ -79,6 +79,9 @@ class StackWatermarkSet : public AllStatic {
// Called to ensure that processing of the thread is started
static void start_processing(JavaThread* jt, StackWatermarkKind kind);
+ // Returns true if all StackWatermarks have been started.
+ static bool processing_started(JavaThread* jt);
+
// Called to finish the processing of a thread
static void finish_processing(JavaThread* jt, void* context, StackWatermarkKind kind);
diff --git a/src/hotspot/share/services/attachListener.cpp b/src/hotspot/share/services/attachListener.cpp
index 6deb1afddb9f1..7f147c65d2742 100644
--- a/src/hotspot/share/services/attachListener.cpp
+++ b/src/hotspot/share/services/attachListener.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2005, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2005, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -292,6 +292,7 @@ static jint heap_inspection(AttachOperation* op, outputStream* out) {
uintx num;
if (!Arguments::parse_uintx(num_str, &num, 0)) {
out->print_cr("Invalid parallel thread number: [%s]", num_str);
+ delete fs;
return JNI_ERR;
}
parallel_thread_num = num == 0 ? parallel_thread_num : (uint)num;
diff --git a/src/hotspot/share/services/diagnosticCommand.cpp b/src/hotspot/share/services/diagnosticCommand.cpp
index 7fad4198d17a1..51f47421c5089 100644
--- a/src/hotspot/share/services/diagnosticCommand.cpp
+++ b/src/hotspot/share/services/diagnosticCommand.cpp
@@ -58,7 +58,9 @@
#include "utilities/events.hpp"
#include "utilities/formatBuffer.hpp"
#include "utilities/macros.hpp"
-
+#ifdef LINUX
+#include "trimCHeapDCmd.hpp"
+#endif
static void loadAgentModule(TRAPS) {
ResourceMark rm(THREAD);
@@ -118,6 +120,7 @@ void DCmdRegistrant::register_dcmds(){
DCmdFactory::register_DCmdFactory(new DCmdFactoryImpl(full_export, true, false));
#ifdef LINUX
DCmdFactory::register_DCmdFactory(new DCmdFactoryImpl(full_export, true, false));
+ DCmdFactory::register_DCmdFactory(new DCmdFactoryImpl<TrimCLibcHeapDCmd>(full_export, true, false));
#endif // LINUX
DCmdFactory::register_DCmdFactory(new DCmdFactoryImpl(full_export, true, false));
DCmdFactory::register_DCmdFactory(new DCmdFactoryImpl(full_export, true, false));
@@ -468,10 +471,13 @@ HeapDumpDCmd::HeapDumpDCmd(outputStream* output, bool heap) :
"BOOLEAN", false, "false"),
_gzip("-gz", "If specified, the heap dump is written in gzipped format "
"using the given compression level. 1 (recommended) is the fastest, "
- "9 the strongest compression.", "INT", false, "1") {
+ "9 the strongest compression.", "INT", false, "1"),
+ _overwrite("-overwrite", "If specified, the dump file will be overwritten if it exists",
+ "BOOLEAN", false, "false") {
_dcmdparser.add_dcmd_option(&_all);
_dcmdparser.add_dcmd_argument(&_filename);
_dcmdparser.add_dcmd_option(&_gzip);
+ _dcmdparser.add_dcmd_option(&_overwrite);
}
void HeapDumpDCmd::execute(DCmdSource source, TRAPS) {
@@ -490,7 +496,7 @@ void HeapDumpDCmd::execute(DCmdSource source, TRAPS) {
// This helps reduces the amount of unreachable objects in the dump
// and makes it easier to browse.
HeapDumper dumper(!_all.value() /* request GC if _all is false*/);
- dumper.dump(_filename.value(), output(), (int) level);
+ dumper.dump(_filename.value(), output(), (int) level, _overwrite.value());
}
ClassHistogramDCmd::ClassHistogramDCmd(outputStream* output, bool heap) :
diff --git a/src/hotspot/share/services/diagnosticCommand.hpp b/src/hotspot/share/services/diagnosticCommand.hpp
index 026387f4cecf1..12b7b6f3a8a36 100644
--- a/src/hotspot/share/services/diagnosticCommand.hpp
+++ b/src/hotspot/share/services/diagnosticCommand.hpp
@@ -314,6 +314,7 @@ class HeapDumpDCmd : public DCmdWithParser {
DCmdArgument<char*> _filename;
DCmdArgument<bool> _all;
DCmdArgument<jlong> _gzip;
+ DCmdArgument<bool> _overwrite;
public:
HeapDumpDCmd(outputStream* output, bool heap);
static const char* name() {
diff --git a/src/hotspot/share/services/heapDumper.cpp b/src/hotspot/share/services/heapDumper.cpp
index 4e86935dcc3a0..57b8100b0858c 100644
--- a/src/hotspot/share/services/heapDumper.cpp
+++ b/src/hotspot/share/services/heapDumper.cpp
@@ -1905,7 +1905,7 @@ void VM_HeapDumper::dump_stack_traces() {
}
// dump the heap to given path.
-int HeapDumper::dump(const char* path, outputStream* out, int compression) {
+int HeapDumper::dump(const char* path, outputStream* out, int compression, bool overwrite) {
assert(path != NULL && strlen(path) > 0, "path missing");
// print message in interactive case
@@ -1928,7 +1928,7 @@ int HeapDumper::dump(const char* path, outputStream* out, int compression) {
}
}
- DumpWriter writer(new (std::nothrow) FileWriter(path), compressor);
+ DumpWriter writer(new (std::nothrow) FileWriter(path, overwrite), compressor);
if (writer.error() != NULL) {
set_error(writer.error());
diff --git a/src/hotspot/share/services/heapDumper.hpp b/src/hotspot/share/services/heapDumper.hpp
index fdd899a0997cc..57e00fb14b52f 100644
--- a/src/hotspot/share/services/heapDumper.hpp
+++ b/src/hotspot/share/services/heapDumper.hpp
@@ -71,7 +71,7 @@ class HeapDumper : public StackObj {
// dumps the heap to the specified file, returns 0 if success.
// additional info is written to out if not NULL.
// compression >= 0 creates a gzipped file with the given compression level.
- int dump(const char* path, outputStream* out = NULL, int compression = -1);
+ int dump(const char* path, outputStream* out = NULL, int compression = -1, bool overwrite = false);
// returns error message (resource allocated), or NULL if no error
char* error_as_C_string() const;
diff --git a/src/hotspot/share/services/heapDumperCompression.cpp b/src/hotspot/share/services/heapDumperCompression.cpp
index 49568273d8999..41c9726109494 100644
--- a/src/hotspot/share/services/heapDumperCompression.cpp
+++ b/src/hotspot/share/services/heapDumperCompression.cpp
@@ -34,7 +34,7 @@
char const* FileWriter::open_writer() {
assert(_fd < 0, "Must not already be open");
- _fd = os::create_binary_file(_path, false); // don't replace existing file
+ _fd = os::create_binary_file(_path, _overwrite);
if (_fd < 0) {
return os::strerror(errno);
diff --git a/src/hotspot/share/services/heapDumperCompression.hpp b/src/hotspot/share/services/heapDumperCompression.hpp
index cfd04a9a7b7bc..91bc308fa851b 100644
--- a/src/hotspot/share/services/heapDumperCompression.hpp
+++ b/src/hotspot/share/services/heapDumperCompression.hpp
@@ -61,10 +61,11 @@ class AbstractWriter : public CHeapObj<mtInternal> {
class FileWriter : public AbstractWriter {
private:
char const* _path;
+ bool _overwrite;
int _fd;
public:
- FileWriter(char const* path) : _path(path), _fd(-1) { }
+ FileWriter(char const* path, bool overwrite) : _path(path), _overwrite(overwrite), _fd(-1) { }
~FileWriter();
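The new `_overwrite` flag, driven by the `-overwrite` option of `GC.heap_dump`, presumably toggles between failing when the file already exists and truncating it. A POSIX sketch of that assumed distinction (`os::create_binary_file` is platform code, so this is an approximation):

```cpp
#include <fcntl.h>
#include <unistd.h>

// overwrite == false: O_EXCL makes open() fail if the path already exists.
// overwrite == true:  O_TRUNC silently replaces any previous dump file.
static int create_binary_file(const char* path, bool overwrite) {
  int flags = O_WRONLY | O_CREAT | (overwrite ? O_TRUNC : O_EXCL);
  return open(path, flags, 0666); // returns -1 with errno set on failure
}

int main() {
  int fd = create_binary_file("/tmp/dump.hprof", /*overwrite=*/true);
  if (fd >= 0) {
    close(fd);
  }
  return 0;
}
```

From the command line this would be invoked as, e.g., `jcmd <pid> GC.heap_dump -overwrite <path>`.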
diff --git a/src/hotspot/share/services/management.cpp b/src/hotspot/share/services/management.cpp
index df4c20cebf157..a6c1efebac54b 100644
--- a/src/hotspot/share/services/management.cpp
+++ b/src/hotspot/share/services/management.cpp
@@ -1996,7 +1996,7 @@ JVM_ENTRY(void, jmm_GetDiagnosticCommandInfo(JNIEnv *env, jobjectArray cmds,
JVM_END
JVM_ENTRY(void, jmm_GetDiagnosticCommandArgumentsInfo(JNIEnv *env,
- jstring command, dcmdArgInfo* infoArray))
+ jstring command, dcmdArgInfo* infoArray, jint count))
ResourceMark rm(THREAD);
oop cmd = JNIHandles::resolve_external_guard(command);
if (cmd == NULL) {
@@ -2020,10 +2020,12 @@ JVM_ENTRY(void, jmm_GetDiagnosticCommandArgumentsInfo(JNIEnv *env,
}
DCmdMark mark(dcmd);
GrowableArray<DCmdArgumentInfo*>* array = dcmd->argument_info_array();
- if (array->length() == 0) {
- return;
+ const int num_args = array->length();
+ if (num_args != count) {
+ assert(false, "jmm_GetDiagnosticCommandArgumentsInfo count mismatch (%d vs %d)", count, num_args);
+ THROW_MSG(vmSymbols::java_lang_InternalError(), "jmm_GetDiagnosticCommandArgumentsInfo count mismatch");
}
- for (int i = 0; i < array->length(); i++) {
+ for (int i = 0; i < num_args; i++) {
infoArray[i].name = array->at(i)->name();
infoArray[i].description = array->at(i)->description();
infoArray[i].type = array->at(i)->type();
diff --git a/src/hotspot/share/services/threadService.cpp b/src/hotspot/share/services/threadService.cpp
index cf81d53552227..3540abeba83f0 100644
--- a/src/hotspot/share/services/threadService.cpp
+++ b/src/hotspot/share/services/threadService.cpp
@@ -878,7 +878,10 @@ void ThreadSnapshot::initialize(ThreadsList * t_list, JavaThread* thread) {
_sleep_ticks = stat->sleep_ticks();
_sleep_count = stat->sleep_count();
- _thread_status = java_lang_Thread::get_thread_status(threadObj);
+ // If thread is still attaching then threadObj will be NULL.
+ _thread_status = threadObj == NULL ? JavaThreadStatus::NEW
+ : java_lang_Thread::get_thread_status(threadObj);
+
_is_suspended = thread->is_suspended();
_is_in_native = (thread->thread_state() == _thread_in_native);
diff --git a/src/hotspot/share/utilities/accessFlags.hpp b/src/hotspot/share/utilities/accessFlags.hpp
index cb3663349a820..83d1b6579062d 100644
--- a/src/hotspot/share/utilities/accessFlags.hpp
+++ b/src/hotspot/share/utilities/accessFlags.hpp
@@ -69,6 +69,7 @@ enum {
JVM_ACC_IS_SHARED_CLASS = 0x02000000, // True if klass is shared
JVM_ACC_IS_HIDDEN_CLASS = 0x04000000, // True if klass is hidden
JVM_ACC_IS_VALUE_BASED_CLASS = 0x08000000, // True if klass is marked as a ValueBased class
+ JVM_ACC_IS_BEING_REDEFINED = 0x00100000, // True if the klass is being redefined.
// Klass* and Method* flags
JVM_ACC_HAS_LOCAL_VARIABLE_TABLE= 0x00200000,
@@ -159,6 +160,10 @@ class AccessFlags {
void set_has_localvariable_table() { atomic_set_bits(JVM_ACC_HAS_LOCAL_VARIABLE_TABLE); }
void clear_has_localvariable_table() { atomic_clear_bits(JVM_ACC_HAS_LOCAL_VARIABLE_TABLE); }
+ bool is_being_redefined() const { return (_flags & JVM_ACC_IS_BEING_REDEFINED) != 0; }
+ void set_is_being_redefined() { atomic_set_bits(JVM_ACC_IS_BEING_REDEFINED); }
+ void clear_is_being_redefined() { atomic_clear_bits(JVM_ACC_IS_BEING_REDEFINED); }
+
// field flags
bool is_field_access_watched() const { return (_flags & JVM_ACC_FIELD_ACCESS_WATCHED) != 0; }
bool is_field_modification_watched() const
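The new accessors follow the usual lock-free read-modify-write bit pattern on the shared flags word. A self-contained analog with `std::atomic` (the HotSpot versions go through `atomic_set_bits`/`atomic_clear_bits` instead):

```cpp
#include <atomic>
#include <cassert>

enum { JVM_ACC_IS_BEING_REDEFINED = 0x00100000 };

// fetch_or / fetch_and atomically set or clear one flag without disturbing
// concurrent updates to the other bits in the same word.
std::atomic<unsigned> flags{0};

void set_is_being_redefined()   { flags.fetch_or(JVM_ACC_IS_BEING_REDEFINED); }
void clear_is_being_redefined() { flags.fetch_and(~unsigned(JVM_ACC_IS_BEING_REDEFINED)); }
bool is_being_redefined()       { return (flags.load() & JVM_ACC_IS_BEING_REDEFINED) != 0; }

int main() {
  set_is_being_redefined();
  assert(is_being_redefined());
  clear_is_being_redefined();
  assert(!is_being_redefined());
  return 0;
}
```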
diff --git a/src/hotspot/share/utilities/resourceHash.hpp b/src/hotspot/share/utilities/resourceHash.hpp
index f56763f562d96..cfd6a848e27af 100644
--- a/src/hotspot/share/utilities/resourceHash.hpp
+++ b/src/hotspot/share/utilities/resourceHash.hpp
@@ -195,6 +195,32 @@ class ResourceHashtable : public ResourceObj {
++bucket;
}
}
+
+ // ITER contains bool do_entry(K const&, V const&), which will be
+ // called for each entry in the table. If do_entry() returns true,
+ // the entry is deleted.
+ template<class ITER>
+ void unlink(ITER* iter) {
+ Node** bucket = const_cast<Node**>(_table);
+ while (bucket < &_table[SIZE]) {
+ Node** ptr = bucket;
+ while (*ptr != NULL) {
+ Node* node = *ptr;
+ // do_entry must clean up the key and value in Node.
+ bool clean = iter->do_entry(node->_key, node->_value);
+ if (clean) {
+ *ptr = node->_next;
+ if (ALLOC_TYPE == ResourceObj::C_HEAP) {
+ delete node;
+ }
+ } else {
+ ptr = &(node->_next);
+ }
+ }
+ ++bucket;
+ }
+ }
+
};
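The pointer-to-pointer walk in `unlink` lets a matching node be spliced out with a single store and no back-links. A hedged usage sketch against a simplified chain (plain `int` keys and an unconditional `delete`, not the real `ResourceHashtable` types):

```cpp
#include <cstddef>

// *ptr always names the link that points at the current node, so removal is
// one store and no "previous node" bookkeeping is needed.
struct Node { int key; Node* next; };

template <class ITER>
void unlink(Node** bucket, ITER* iter) {
  Node** ptr = bucket;
  while (*ptr != nullptr) {
    Node* node = *ptr;
    if (iter->do_entry(node->key)) { // true means: remove this entry
      *ptr = node->next;
      delete node;
    } else {
      ptr = &node->next;
    }
  }
}

struct RemoveOdd { bool do_entry(int k) { return (k & 1) != 0; } };

int main() {
  Node* head = new Node{1, new Node{2, new Node{3, nullptr}}};
  RemoveOdd pred;
  unlink(&head, &pred);              // leaves only the node with key 2
  bool ok = head && head->key == 2 && head->next == nullptr;
  delete head;
  return ok ? 0 : 1;
}
```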
diff --git a/src/java.base/linux/classes/jdk/internal/platform/CgroupMetrics.java b/src/java.base/linux/classes/jdk/internal/platform/CgroupMetrics.java
index 12cb4b04444a4..b60d2152c7df7 100644
--- a/src/java.base/linux/classes/jdk/internal/platform/CgroupMetrics.java
+++ b/src/java.base/linux/classes/jdk/internal/platform/CgroupMetrics.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2020, Red Hat Inc.
+ * Copyright (c) 2020, 2021, Red Hat Inc.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -149,6 +149,16 @@ public long getMemorySoftLimit() {
return subsystem.getMemorySoftLimit();
}
+ @Override
+ public long getPidsMax() {
+ return subsystem.getPidsMax();
+ }
+
+ @Override
+ public long getPidsCurrent() {
+ return subsystem.getPidsCurrent();
+ }
+
@Override
public long getBlkIOServiceCount() {
return subsystem.getBlkIOServiceCount();
diff --git a/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystem.java b/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystem.java
index 5013e9c378c36..952de13e9f25b 100644
--- a/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystem.java
+++ b/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystem.java
@@ -36,5 +36,13 @@ public interface CgroupSubsystem extends Metrics {
* has determined that no limit is being imposed.
*/
public static final long LONG_RETVAL_UNLIMITED = -1;
+ public static final String MAX_VAL = "max";
+
+ public static long limitFromString(String strVal) {
+ if (strVal == null || MAX_VAL.equals(strVal)) {
+ return CgroupSubsystem.LONG_RETVAL_UNLIMITED;
+ }
+ return Long.parseLong(strVal);
+ }
}
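`limitFromString` centralizes the cgroup convention that a missing interface file or the literal token `max` both mean "unlimited". A C++ analog of the same parsing rule (names are illustrative; the Java original is above):

```cpp
#include <cstring>
#include <cstdlib>
#include <cassert>

// Cgroup v1/v2 interface files report either a number or the token "max";
// both an absent value and "max" collapse to the sentinel -1 (unlimited).
static long limit_from_string(const char* str_val) {
  if (str_val == nullptr || std::strcmp(str_val, "max") == 0) {
    return -1; // LONG_RETVAL_UNLIMITED
  }
  return std::strtol(str_val, nullptr, 10);
}

int main() {
  assert(limit_from_string(nullptr) == -1);
  assert(limit_from_string("max") == -1);
  assert(limit_from_string("4096") == 4096);
  return 0;
}
```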
diff --git a/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java b/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java
index 931d0896079cc..94700af94f8f6 100644
--- a/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java
+++ b/src/java.base/linux/classes/jdk/internal/platform/CgroupSubsystemFactory.java
@@ -51,6 +51,7 @@ public class CgroupSubsystemFactory {
private static final String CPUSET_CTRL = "cpuset";
private static final String BLKIO_CTRL = "blkio";
private static final String MEMORY_CTRL = "memory";
+ private static final String PIDS_CTRL = "pids";
/*
* From https://www.kernel.org/doc/Documentation/filesystems/proc.txt
@@ -149,6 +150,7 @@ public static Optional<CgroupTypeResult> determineType(String mountInfo,
case CPUSET_CTRL: infos.put(CPUSET_CTRL, info); break;
case MEMORY_CTRL: infos.put(MEMORY_CTRL, info); break;
case BLKIO_CTRL: infos.put(BLKIO_CTRL, info); break;
+ case PIDS_CTRL: infos.put(PIDS_CTRL, info); break;
}
}
@@ -194,9 +196,10 @@ public static Optional<CgroupTypeResult> determineType(String mountInfo,
if (isCgroupsV2) {
action = (tokens -> setCgroupV2Path(infos, tokens));
}
- selfCgroupLines.map(line -> line.split(":"))
- .filter(tokens -> (tokens.length >= 3))
- .forEach(action);
+ // The limit value of 3 is because /proc/self/cgroup contains three
+ // colon-separated tokens per line. The last token, cgroup path, might
+ // contain a ':'.
+ selfCgroupLines.map(line -> line.split(":", 3)).forEach(action);
}
CgroupTypeResult result = new CgroupTypeResult(isCgroupsV2,
@@ -251,6 +254,7 @@ private static void setCgroupV1Path(Map<String, CgroupInfo> infos,
case CPUACCT_CTRL:
case CPU_CTRL:
case BLKIO_CTRL:
+ case PIDS_CTRL:
CgroupInfo info = infos.get(cName);
info.setCgroupPath(cgroupPath);
break;
@@ -302,6 +306,7 @@ private static boolean amendCgroupInfos(String mntInfoLine,
case MEMORY_CTRL: // fall-through
case CPU_CTRL:
case CPUACCT_CTRL:
+ case PIDS_CTRL:
case BLKIO_CTRL: {
CgroupInfo info = infos.get(controllerName);
assert info.getMountPoint() == null;
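The `split(":", 3)` change above matters because the third field of a `/proc/self/cgroup` line is a path that may itself contain `':'`. A small sketch of a three-field split with the same rest-of-line semantics, assuming a well-formed line with at least two colons:

```cpp
#include <cstdio>
#include <string>

// Split "hierarchy:controllers:path" into at most three tokens; everything
// after the second ':' stays in the path token, colons included.
// Assumes the line contains at least two ':' separators.
static void split3(const std::string& line, std::string out[3]) {
  size_t first = line.find(':');
  size_t second = line.find(':', first + 1);
  out[0] = line.substr(0, first);
  out[1] = line.substr(first + 1, second - first - 1);
  out[2] = line.substr(second + 1);  // may contain further ':'
}

int main() {
  std::string tok[3];
  split3("0::/system.slice/docker:abc.scope", tok);
  std::printf("path=[%s]\n", tok[2].c_str()); // /system.slice/docker:abc.scope
  return 0;
}
```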
diff --git a/src/java.base/linux/classes/jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java b/src/java.base/linux/classes/jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java
index d10cfe091cfce..b928e83093ee9 100644
--- a/src/java.base/linux/classes/jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java
+++ b/src/java.base/linux/classes/jdk/internal/platform/cgroupv1/CgroupV1Subsystem.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2018, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2018, 2021, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -38,6 +38,7 @@ public class CgroupV1Subsystem implements CgroupSubsystem, CgroupV1Metrics {
private CgroupV1SubsystemController cpuacct;
private CgroupV1SubsystemController cpuset;
private CgroupV1SubsystemController blkio;
+ private CgroupV1SubsystemController pids;
private static volatile CgroupV1Subsystem INSTANCE;
@@ -126,6 +127,15 @@ private static CgroupV1Subsystem initSubSystem(Map<String, CgroupInfo> infos) {
}
break;
}
+ case "pids": {
+ if (info.getMountRoot() != null && info.getMountPoint() != null) {
+ CgroupV1SubsystemController controller = new CgroupV1SubsystemController(info.getMountRoot(), info.getMountPoint());
+ controller.setPath(info.getCgroupPath());
+ subsystem.setPidsController(controller);
+ anyActiveControllers = true;
+ }
+ break;
+ }
default:
throw new AssertionError("Unrecognized controller in infos: " + info.getName());
}
@@ -170,6 +180,10 @@ private void setBlkIOController(CgroupV1SubsystemController blkio) {
this.blkio = blkio;
}
+ private void setPidsController(CgroupV1SubsystemController pids) {
+ this.pids = pids;
+ }
+
private static long getLongValue(CgroupSubsystemController controller,
String parm) {
return CgroupSubsystemController.getLongValue(controller,
@@ -394,6 +408,17 @@ public long getMemorySoftLimit() {
return CgroupV1SubsystemController.longValOrUnlimited(getLongValue(memory, "memory.soft_limit_in_bytes"));
}
+ /*****************************************************************
+ * pids subsystem
+ ****************************************************************/
+ public long getPidsMax() {
+ String pidsMaxStr = CgroupSubsystemController.getStringValue(pids, "pids.max");
+ return CgroupSubsystem.limitFromString(pidsMaxStr);
+ }
+
+ public long getPidsCurrent() {
+ return getLongValue(pids, "pids.current");
+ }
/*****************************************************************
* BlKIO Subsystem
diff --git a/src/java.base/linux/classes/jdk/internal/platform/cgroupv2/CgroupV2Subsystem.java b/src/java.base/linux/classes/jdk/internal/platform/cgroupv2/CgroupV2Subsystem.java
index 2d4e2bc78e457..8f2489dbb72de 100644
--- a/src/java.base/linux/classes/jdk/internal/platform/cgroupv2/CgroupV2Subsystem.java
+++ b/src/java.base/linux/classes/jdk/internal/platform/cgroupv2/CgroupV2Subsystem.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2020, Red Hat Inc.
+ * Copyright (c) 2020, 2021, Red Hat Inc.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -45,7 +45,6 @@ public class CgroupV2Subsystem implements CgroupSubsystem {
private final CgroupSubsystemController unified;
private static final String PROVIDER_NAME = "cgroupv2";
private static final int PER_CPU_SHARES = 1024;
- private static final String MAX_VAL = "max";
private static final Object EMPTY_STR = "";
private static final long NO_SWAP = 0;
@@ -149,14 +148,7 @@ private long getFromCpuMax(int tokenIdx) {
return CgroupSubsystem.LONG_RETVAL_UNLIMITED;
}
String quota = tokens[tokenIdx];
- return limitFromString(quota);
- }
-
- private long limitFromString(String strVal) {
- if (strVal == null || MAX_VAL.equals(strVal)) {
- return CgroupSubsystem.LONG_RETVAL_UNLIMITED;
- }
- return Long.parseLong(strVal);
+ return CgroupSubsystem.limitFromString(quota);
}
@Override
@@ -251,7 +243,7 @@ public long getMemoryFailCount() {
@Override
public long getMemoryLimit() {
String strVal = CgroupSubsystemController.getStringValue(unified, "memory.max");
- return limitFromString(strVal);
+ return CgroupSubsystem.limitFromString(strVal);
}
@Override
@@ -279,7 +271,7 @@ public long getMemoryAndSwapLimit() {
if (strVal == null) {
return getMemoryLimit();
}
- long swapLimit = limitFromString(strVal);
+ long swapLimit = CgroupSubsystem.limitFromString(strVal);
if (swapLimit >= 0) {
long memoryLimit = getMemoryLimit();
assert memoryLimit >= 0;
@@ -310,7 +302,18 @@ public long getMemoryAndSwapUsage() {
@Override
public long getMemorySoftLimit() {
String softLimitStr = CgroupSubsystemController.getStringValue(unified, "memory.low");
- return limitFromString(softLimitStr);
+ return CgroupSubsystem.limitFromString(softLimitStr);
+ }
+
+ @Override
+ public long getPidsMax() {
+ String pidsMaxStr = CgroupSubsystemController.getStringValue(unified, "pids.max");
+ return CgroupSubsystem.limitFromString(pidsMaxStr);
+ }
+
+ @Override
+ public long getPidsCurrent() {
+ return getLongVal("pids.current");
}
@Override
diff --git a/src/java.base/macosx/classes/apple/security/KeychainStore.java b/src/java.base/macosx/classes/apple/security/KeychainStore.java
index cf97d4e04c0e9..e4a77e61cca5b 100644
--- a/src/java.base/macosx/classes/apple/security/KeychainStore.java
+++ b/src/java.base/macosx/classes/apple/security/KeychainStore.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2011, 2022, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -68,6 +68,25 @@ static class TrustedCertEntry {
Certificate cert;
long certRef; // SecCertificateRef for this key
+
+ // Each KeyStore.TrustedCertificateEntry has 2 attributes:
+ // 1. "trustSettings" -> trustSettings.toString()
+ // 2. "2.16.840.1.113894.746875.1.1" -> trustedKeyUsageValue
+ // The 1st one is mainly for debugging use. The 2nd one is similar
+ // to the attribute with the same key in a PKCS12KeyStore.
+
+ // The SecTrustSettingsCopyTrustSettings() output for this certificate
+ // inside the KeyChain, kept in its original array-of-CFDictionaryRef
+ // structure with values dumped as strings. For each trust, an extra
+ // entry "SecPolicyOid" is added whose value is the OID for this trust.
+ // The extra entries are used to construct trustedKeyUsageValue.
+ List