# HG changeset patch
# User stefank
# Date 1527310789 -7200
# Node ID 9d62da00bf15ee782335d177799ff5ea3f95188a
# Parent  d9132bdf6c30e8025aafa517261ad3ed4ff3a466
8204540: Automatic oop closure devirtualization
Reviewed-by: kbarrett, eosterlund

diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/cmsOopClosures.cpp
--- a/src/hotspot/share/gc/cms/cmsOopClosures.cpp	Mon Jun 25 12:44:52 2018 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,31 +0,0 @@
-/*
- * Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#include "precompiled.hpp"
-#include "gc/cms/cmsOopClosures.inline.hpp"
-#include "gc/cms/cms_specialized_oop_closures.hpp"
-#include "memory/iterator.inline.hpp"
-
-// Generate CMS specialized oop_oop_iterate functions.
-SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_CMS(ALL_KLASS_OOP_OOP_ITERATE_DEFN) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/cmsOopClosures.hpp --- a/src/hotspot/share/gc/cms/cmsOopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/cmsOopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -44,22 +44,17 @@ void do_oop(oop obj); \ template inline void do_oop_work(T* p); -// TODO: This duplication of the MetadataAwareOopClosure class is only needed +// TODO: This duplication of the MetadataVisitingOopIterateClosure class is only needed // because some CMS OopClosures derive from OopsInGenClosure. It would be // good to get rid of them completely. -class MetadataAwareOopsInGenClosure: public OopsInGenClosure { +class MetadataVisitingOopsInGenClosure: public OopsInGenClosure { public: - virtual bool do_metadata() { return do_metadata_nv(); } - inline bool do_metadata_nv() { return true; } - + virtual bool do_metadata() { return true; } virtual void do_klass(Klass* k); - void do_klass_nv(Klass* k); - - virtual void do_cld(ClassLoaderData* cld) { do_cld_nv(cld); } - void do_cld_nv(ClassLoaderData* cld); + virtual void do_cld(ClassLoaderData* cld); }; -class MarkRefsIntoClosure: public MetadataAwareOopsInGenClosure { +class MarkRefsIntoClosure: public MetadataVisitingOopsInGenClosure { private: const MemRegion _span; CMSBitMap* _bitMap; @@ -71,7 +66,7 @@ virtual void do_oop(narrowOop* p); }; -class ParMarkRefsIntoClosure: public MetadataAwareOopsInGenClosure { +class ParMarkRefsIntoClosure: public MetadataVisitingOopsInGenClosure { private: const MemRegion _span; CMSBitMap* _bitMap; @@ -85,7 +80,7 @@ // A variant of the above used in certain kinds of CMS // marking verification. 
-class MarkRefsIntoVerifyClosure: public MetadataAwareOopsInGenClosure { +class MarkRefsIntoVerifyClosure: public MetadataVisitingOopsInGenClosure { private: const MemRegion _span; CMSBitMap* _verification_bm; @@ -100,7 +95,7 @@ }; // The non-parallel version (the parallel version appears further below). -class PushAndMarkClosure: public MetadataAwareOopClosure { +class PushAndMarkClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; MemRegion _span; @@ -120,8 +115,6 @@ bool concurrent_precleaning); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; // In the parallel case, the bit map and the @@ -130,7 +123,7 @@ // synchronization (for instance, via CAS). The marking stack // used in the non-parallel case above is here replaced with // an OopTaskQueue structure to allow efficient work stealing. -class ParPushAndMarkClosure: public MetadataAwareOopClosure { +class ParPushAndMarkClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; MemRegion _span; @@ -146,12 +139,10 @@ OopTaskQueue* work_queue); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; // The non-parallel version (the parallel version appears further below). -class MarkRefsIntoAndScanClosure: public MetadataAwareOopsInGenClosure { +class MarkRefsIntoAndScanClosure: public MetadataVisitingOopsInGenClosure { private: MemRegion _span; CMSBitMap* _bit_map; @@ -175,8 +166,6 @@ bool concurrent_precleaning); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); void set_freelistLock(Mutex* m) { _freelistLock = m; @@ -192,7 +181,7 @@ // stack and the bitMap are shared, so access needs to be suitably // synchronized. 
An OopTaskQueue structure, supporting efficient // work stealing, replaces a CMSMarkStack for storing grey objects. -class ParMarkRefsIntoAndScanClosure: public MetadataAwareOopsInGenClosure { +class ParMarkRefsIntoAndScanClosure: public MetadataVisitingOopsInGenClosure { private: MemRegion _span; CMSBitMap* _bit_map; @@ -209,8 +198,6 @@ OopTaskQueue* work_queue); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); void trim_queue(uint size); }; @@ -218,7 +205,7 @@ // This closure is used during the concurrent marking phase // following the first checkpoint. Its use is buried in // the closure MarkFromRootsClosure. -class PushOrMarkClosure: public MetadataAwareOopClosure { +class PushOrMarkClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; MemRegion _span; @@ -238,8 +225,6 @@ MarkFromRootsClosure* parent); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); // Deal with a stack overflow condition void handle_stack_overflow(HeapWord* lost); @@ -251,7 +236,7 @@ // This closure is used during the concurrent marking phase // following the first checkpoint. Its use is buried in // the closure ParMarkFromRootsClosure. -class ParPushOrMarkClosure: public MetadataAwareOopClosure { +class ParPushOrMarkClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; MemRegion _whole_span; @@ -275,8 +260,6 @@ ParMarkFromRootsClosure* parent); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); // Deal with a stack overflow condition void handle_stack_overflow(HeapWord* lost); @@ -290,7 +273,7 @@ // processing phase of the CMS final checkpoint step, as // well as during the concurrent precleaning of the discovered // reference lists. 
-class CMSKeepAliveClosure: public MetadataAwareOopClosure { +class CMSKeepAliveClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; const MemRegion _span; @@ -306,11 +289,9 @@ bool concurrent_precleaning() const { return _concurrent_precleaning; } virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; -class CMSInnerParMarkAndPushClosure: public MetadataAwareOopClosure { +class CMSInnerParMarkAndPushClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; MemRegion _span; @@ -324,14 +305,12 @@ OopTaskQueue* work_queue); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; // A parallel (MT) version of the above, used when // reference processing is parallel; the only difference // is in the do_oop method. -class CMSParKeepAliveClosure: public MetadataAwareOopClosure { +class CMSParKeepAliveClosure: public MetadataVisitingOopIterateClosure { private: MemRegion _span; OopTaskQueue* _work_queue; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp --- a/src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/cmsOopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -32,42 +32,38 @@ #include "oops/compressedOops.inline.hpp" #include "oops/oop.inline.hpp" -// MetadataAwareOopClosure and MetadataAwareOopsInGenClosure are duplicated, +// MetadataVisitingOopIterateClosure and MetadataVisitingOopsInGenClosure are duplicated, // until we get rid of OopsInGenClosure. 
-inline void MetadataAwareOopsInGenClosure::do_klass_nv(Klass* k) { +inline void MetadataVisitingOopsInGenClosure::do_klass(Klass* k) { ClassLoaderData* cld = k->class_loader_data(); - do_cld_nv(cld); + MetadataVisitingOopsInGenClosure::do_cld(cld); } -inline void MetadataAwareOopsInGenClosure::do_klass(Klass* k) { do_klass_nv(k); } -inline void MetadataAwareOopsInGenClosure::do_cld_nv(ClassLoaderData* cld) { +inline void MetadataVisitingOopsInGenClosure::do_cld(ClassLoaderData* cld) { bool claim = true; // Must claim the class loader data before processing. cld->oops_do(this, claim); } // Decode the oop and call do_oop on it. -#define DO_OOP_WORK_IMPL(cls) \ - template void cls::do_oop_work(T* p) { \ - T heap_oop = RawAccess<>::oop_load(p); \ - if (!CompressedOops::is_null(heap_oop)) { \ - oop obj = CompressedOops::decode_not_null(heap_oop); \ - do_oop(obj); \ - } \ - } - -#define DO_OOP_WORK_NV_IMPL(cls) \ - DO_OOP_WORK_IMPL(cls) \ - void cls::do_oop_nv(oop* p) { cls::do_oop_work(p); } \ - void cls::do_oop_nv(narrowOop* p) { cls::do_oop_work(p); } +#define DO_OOP_WORK_IMPL(cls) \ + template void cls::do_oop_work(T* p) { \ + T heap_oop = RawAccess<>::oop_load(p); \ + if (!CompressedOops::is_null(heap_oop)) { \ + oop obj = CompressedOops::decode_not_null(heap_oop); \ + do_oop(obj); \ + } \ + } \ + inline void cls::do_oop(oop* p) { do_oop_work(p); } \ + inline void cls::do_oop(narrowOop* p) { do_oop_work(p); } DO_OOP_WORK_IMPL(MarkRefsIntoClosure) DO_OOP_WORK_IMPL(ParMarkRefsIntoClosure) DO_OOP_WORK_IMPL(MarkRefsIntoVerifyClosure) -DO_OOP_WORK_NV_IMPL(PushAndMarkClosure) -DO_OOP_WORK_NV_IMPL(ParPushAndMarkClosure) -DO_OOP_WORK_NV_IMPL(MarkRefsIntoAndScanClosure) -DO_OOP_WORK_NV_IMPL(ParMarkRefsIntoAndScanClosure) +DO_OOP_WORK_IMPL(PushAndMarkClosure) +DO_OOP_WORK_IMPL(ParPushAndMarkClosure) +DO_OOP_WORK_IMPL(MarkRefsIntoAndScanClosure) +DO_OOP_WORK_IMPL(ParMarkRefsIntoAndScanClosure) // Trim our work_queue so its length is below max at return inline void 
ParMarkRefsIntoAndScanClosure::trim_queue(uint max) { @@ -84,10 +80,10 @@ } } -DO_OOP_WORK_NV_IMPL(PushOrMarkClosure) -DO_OOP_WORK_NV_IMPL(ParPushOrMarkClosure) -DO_OOP_WORK_NV_IMPL(CMSKeepAliveClosure) -DO_OOP_WORK_NV_IMPL(CMSInnerParMarkAndPushClosure) +DO_OOP_WORK_IMPL(PushOrMarkClosure) +DO_OOP_WORK_IMPL(ParPushOrMarkClosure) +DO_OOP_WORK_IMPL(CMSKeepAliveClosure) +DO_OOP_WORK_IMPL(CMSInnerParMarkAndPushClosure) DO_OOP_WORK_IMPL(CMSParKeepAliveClosure) #endif // SHARE_VM_GC_CMS_CMSOOPCLOSURES_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/cms_specialized_oop_closures.hpp --- a/src/hotspot/share/gc/cms/cms_specialized_oop_closures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,63 +0,0 @@ -/* - * Copyright (c) 2001, 2017, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. 
- *
- */
-
-#ifndef SHARE_GC_CMS_CMS_SPECIALIZED_OOP_CLOSURES_HPP
-#define SHARE_GC_CMS_CMS_SPECIALIZED_OOP_CLOSURES_HPP
-
-// The following OopClosure types get specialized versions of
-// "oop_oop_iterate" that invoke the closures' do_oop methods
-// non-virtually, using a mechanism defined in this file. Extend these
-// macros in the obvious way to add specializations for new closures.
-
-// Forward declarations.
-
-// ParNew
-class ParScanWithBarrierClosure;
-class ParScanWithoutBarrierClosure;
-
-// CMS
-class MarkRefsIntoAndScanClosure;
-class ParMarkRefsIntoAndScanClosure;
-class PushAndMarkClosure;
-class ParPushAndMarkClosure;
-class PushOrMarkClosure;
-class ParPushOrMarkClosure;
-class CMSKeepAliveClosure;
-class CMSInnerParMarkAndPushClosure;
-
-#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_P(f) \
-  f(ParScanWithBarrierClosure,_nv) \
-  f(ParScanWithoutBarrierClosure,_nv)
-
-#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_CMS(f) \
-  f(MarkRefsIntoAndScanClosure,_nv) \
-  f(ParMarkRefsIntoAndScanClosure,_nv) \
-  f(PushAndMarkClosure,_nv) \
-  f(ParPushAndMarkClosure,_nv) \
-  f(PushOrMarkClosure,_nv) \
-  f(ParPushOrMarkClosure,_nv) \
-  f(CMSKeepAliveClosure,_nv) \
-  f(CMSInnerParMarkAndPushClosure,_nv)
-
-#endif // SHARE_GC_CMS_CMS_SPECIALIZED_OOP_CLOSURES_HPP
diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/compactibleFreeListSpace.cpp
--- a/src/hotspot/share/gc/cms/compactibleFreeListSpace.cpp	Mon Jun 25 12:44:52 2018 +0200
+++ b/src/hotspot/share/gc/cms/compactibleFreeListSpace.cpp	Sat May 26 06:59:49 2018 +0200
@@ -30,12 +30,14 @@
 #include "gc/cms/concurrentMarkSweepThread.hpp"
 #include "gc/shared/blockOffsetTable.inline.hpp"
 #include "gc/shared/collectedHeap.inline.hpp"
+#include "gc/shared/genOopClosures.inline.hpp"
 #include "gc/shared/space.inline.hpp"
 #include "gc/shared/spaceDecorator.hpp"
 #include "logging/log.hpp"
 #include "logging/logStream.hpp"
 #include "memory/allocation.inline.hpp"
 #include "memory/binaryTreeDictionary.inline.hpp"
+#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "memory/universe.hpp" #include "oops/access.inline.hpp" @@ -843,13 +845,13 @@ void walk_mem_region_with_cl_nopar(MemRegion mr, \ HeapWord* bottom, HeapWord* top, \ ClosureType* cl) - walk_mem_region_with_cl_DECL(ExtendedOopClosure); + walk_mem_region_with_cl_DECL(OopIterateClosure); walk_mem_region_with_cl_DECL(FilteringClosure); public: FreeListSpaceDCTOC(CompactibleFreeListSpace* sp, CMSCollector* collector, - ExtendedOopClosure* cl, + OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel) : @@ -929,11 +931,11 @@ // (There are only two of these, rather than N, because the split is due // only to the introduction of the FilteringClosure, a local part of the // impl of this abstraction.) -FreeListSpaceDCTOC__walk_mem_region_with_cl_DEFN(ExtendedOopClosure) +FreeListSpaceDCTOC__walk_mem_region_with_cl_DEFN(OopIterateClosure) FreeListSpaceDCTOC__walk_mem_region_with_cl_DEFN(FilteringClosure) DirtyCardToOopClosure* -CompactibleFreeListSpace::new_dcto_cl(ExtendedOopClosure* cl, +CompactibleFreeListSpace::new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel) { @@ -965,7 +967,7 @@ } // Apply the given closure to each oop in the space. 
-void CompactibleFreeListSpace::oop_iterate(ExtendedOopClosure* cl) { +void CompactibleFreeListSpace::oop_iterate(OopIterateClosure* cl) { assert_lock_strong(freelistLock()); HeapWord *cur, *limit; size_t curSize; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/compactibleFreeListSpace.hpp --- a/src/hotspot/share/gc/cms/compactibleFreeListSpace.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/compactibleFreeListSpace.hpp Sat May 26 06:59:49 2018 +0200 @@ -433,7 +433,7 @@ Mutex* freelistLock() const { return &_freelistLock; } // Iteration support - void oop_iterate(ExtendedOopClosure* cl); + void oop_iterate(OopIterateClosure* cl); void object_iterate(ObjectClosure* blk); // Apply the closure to each object in the space whose references @@ -463,7 +463,7 @@ ObjectClosureCareful* cl); // Override: provides a DCTO_CL specific to this kind of space. - DirtyCardToOopClosure* new_dcto_cl(ExtendedOopClosure* cl, + DirtyCardToOopClosure* new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp --- a/src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.cpp Sat May 26 06:59:49 2018 +0200 @@ -2467,7 +2467,7 @@ } void -ConcurrentMarkSweepGeneration::oop_iterate(ExtendedOopClosure* cl) { +ConcurrentMarkSweepGeneration::oop_iterate(OopIterateClosure* cl) { if (freelistLock()->owned_by_self()) { Generation::oop_iterate(cl); } else { @@ -3305,7 +3305,7 @@ pst->all_tasks_completed(); } -class ParConcMarkingClosure: public MetadataAwareOopClosure { +class ParConcMarkingClosure: public MetadataVisitingOopIterateClosure { private: CMSCollector* _collector; CMSConcMarkingTask* _task; @@ -3318,7 +3318,7 @@ public: ParConcMarkingClosure(CMSCollector* collector, CMSConcMarkingTask* task, OopTaskQueue* work_queue, 
CMSBitMap* bit_map, CMSMarkStack* overflow_stack): - MetadataAwareOopClosure(collector->ref_processor()), + MetadataVisitingOopIterateClosure(collector->ref_processor()), _collector(collector), _task(task), _span(collector->_span), @@ -3382,9 +3382,6 @@ } } -void ParConcMarkingClosure::do_oop(oop* p) { ParConcMarkingClosure::do_oop_work(p); } -void ParConcMarkingClosure::do_oop(narrowOop* p) { ParConcMarkingClosure::do_oop_work(p); } - void ParConcMarkingClosure::trim_queue(size_t max) { while (_work_queue->size() > max) { oop new_oop; @@ -4065,9 +4062,9 @@ } class PrecleanCLDClosure : public CLDClosure { - MetadataAwareOopsInGenClosure* _cm_closure; + MetadataVisitingOopsInGenClosure* _cm_closure; public: - PrecleanCLDClosure(MetadataAwareOopsInGenClosure* oop_closure) : _cm_closure(oop_closure) {} + PrecleanCLDClosure(MetadataVisitingOopsInGenClosure* oop_closure) : _cm_closure(oop_closure) {} void do_cld(ClassLoaderData* cld) { if (cld->has_accumulated_modified_oops()) { cld->clear_accumulated_modified_oops(); @@ -4429,7 +4426,7 @@ ResourceMark rm; GrowableArray* array = ClassLoaderDataGraph::new_clds(); for (int i = 0; i < array->length(); i++) { - par_mrias_cl.do_cld_nv(array->at(i)); + Devirtualizer::do_cld(&par_mrias_cl, array->at(i)); } // We don't need to keep track of new CLDs anymore. @@ -4970,7 +4967,7 @@ ResourceMark rm; GrowableArray* array = ClassLoaderDataGraph::new_clds(); for (int i = 0; i < array->length(); i++) { - mrias_cl.do_cld_nv(array->at(i)); + Devirtualizer::do_cld(&mrias_cl, array->at(i)); } // We don't need to keep track of new CLDs anymore. 
@@ -5803,9 +5800,6 @@ } } -void MarkRefsIntoClosure::do_oop(oop* p) { MarkRefsIntoClosure::do_oop_work(p); } -void MarkRefsIntoClosure::do_oop(narrowOop* p) { MarkRefsIntoClosure::do_oop_work(p); } - ParMarkRefsIntoClosure::ParMarkRefsIntoClosure( MemRegion span, CMSBitMap* bitMap): _span(span), @@ -5825,9 +5819,6 @@ } } -void ParMarkRefsIntoClosure::do_oop(oop* p) { ParMarkRefsIntoClosure::do_oop_work(p); } -void ParMarkRefsIntoClosure::do_oop(narrowOop* p) { ParMarkRefsIntoClosure::do_oop_work(p); } - // A variant of the above, used for CMS marking verification. MarkRefsIntoVerifyClosure::MarkRefsIntoVerifyClosure( MemRegion span, CMSBitMap* verification_bm, CMSBitMap* cms_bm): @@ -5856,9 +5847,6 @@ } } -void MarkRefsIntoVerifyClosure::do_oop(oop* p) { MarkRefsIntoVerifyClosure::do_oop_work(p); } -void MarkRefsIntoVerifyClosure::do_oop(narrowOop* p) { MarkRefsIntoVerifyClosure::do_oop_work(p); } - ////////////////////////////////////////////////// // MarkRefsIntoAndScanClosure ////////////////////////////////////////////////// @@ -5933,9 +5921,6 @@ } } -void MarkRefsIntoAndScanClosure::do_oop(oop* p) { MarkRefsIntoAndScanClosure::do_oop_work(p); } -void MarkRefsIntoAndScanClosure::do_oop(narrowOop* p) { MarkRefsIntoAndScanClosure::do_oop_work(p); } - void MarkRefsIntoAndScanClosure::do_yield_work() { assert(ConcurrentMarkSweepThread::cms_thread_has_cms_token(), "CMS thread should hold CMS token"); @@ -6016,9 +6001,6 @@ } } -void ParMarkRefsIntoAndScanClosure::do_oop(oop* p) { ParMarkRefsIntoAndScanClosure::do_oop_work(p); } -void ParMarkRefsIntoAndScanClosure::do_oop(narrowOop* p) { ParMarkRefsIntoAndScanClosure::do_oop_work(p); } - // This closure is used to rescan the marked objects on the dirty cards // in the mod union table and the card table proper. 
size_t ScanMarkedObjectsAgainCarefullyClosure::do_object_careful_m( @@ -6597,7 +6579,7 @@ CMSCollector* collector, MemRegion span, CMSBitMap* verification_bm, CMSBitMap* cms_bm, CMSMarkStack* mark_stack): - MetadataAwareOopClosure(collector->ref_processor()), + MetadataVisitingOopIterateClosure(collector->ref_processor()), _collector(collector), _span(span), _verification_bm(verification_bm), @@ -6654,7 +6636,7 @@ MemRegion span, CMSBitMap* bitMap, CMSMarkStack* markStack, HeapWord* finger, MarkFromRootsClosure* parent) : - MetadataAwareOopClosure(collector->ref_processor()), + MetadataVisitingOopIterateClosure(collector->ref_processor()), _collector(collector), _span(span), _bitMap(bitMap), @@ -6671,7 +6653,7 @@ HeapWord* finger, HeapWord* volatile* global_finger_addr, ParMarkFromRootsClosure* parent) : - MetadataAwareOopClosure(collector->ref_processor()), + MetadataVisitingOopIterateClosure(collector->ref_processor()), _collector(collector), _whole_span(collector->_span), _span(span), @@ -6752,9 +6734,6 @@ } } -void PushOrMarkClosure::do_oop(oop* p) { PushOrMarkClosure::do_oop_work(p); } -void PushOrMarkClosure::do_oop(narrowOop* p) { PushOrMarkClosure::do_oop_work(p); } - void ParPushOrMarkClosure::do_oop(oop obj) { // Ignore mark word because we are running concurrent with mutators. 
assert(oopDesc::is_oop_or_null(obj, true), "Expected an oop or NULL at " PTR_FORMAT, p2i(obj)); @@ -6801,9 +6780,6 @@ } } -void ParPushOrMarkClosure::do_oop(oop* p) { ParPushOrMarkClosure::do_oop_work(p); } -void ParPushOrMarkClosure::do_oop(narrowOop* p) { ParPushOrMarkClosure::do_oop_work(p); } - PushAndMarkClosure::PushAndMarkClosure(CMSCollector* collector, MemRegion span, ReferenceDiscoverer* rd, @@ -6811,7 +6787,7 @@ CMSBitMap* mod_union_table, CMSMarkStack* mark_stack, bool concurrent_precleaning): - MetadataAwareOopClosure(rd), + MetadataVisitingOopIterateClosure(rd), _collector(collector), _span(span), _bit_map(bit_map), @@ -6883,7 +6859,7 @@ ReferenceDiscoverer* rd, CMSBitMap* bit_map, OopTaskQueue* work_queue): - MetadataAwareOopClosure(rd), + MetadataVisitingOopIterateClosure(rd), _collector(collector), _span(span), _bit_map(bit_map), @@ -6892,9 +6868,6 @@ assert(ref_discoverer() != NULL, "ref_discoverer shouldn't be NULL"); } -void PushAndMarkClosure::do_oop(oop* p) { PushAndMarkClosure::do_oop_work(p); } -void PushAndMarkClosure::do_oop(narrowOop* p) { PushAndMarkClosure::do_oop_work(p); } - // Grey object rescan during second checkpoint phase -- // the parallel version. void ParPushAndMarkClosure::do_oop(oop obj) { @@ -6937,9 +6910,6 @@ } } -void ParPushAndMarkClosure::do_oop(oop* p) { ParPushAndMarkClosure::do_oop_work(p); } -void ParPushAndMarkClosure::do_oop(narrowOop* p) { ParPushAndMarkClosure::do_oop_work(p); } - void CMSPrecleanRefsYieldClosure::do_yield_work() { Mutex* bml = _collector->bitMapLock(); assert_lock_strong(bml); @@ -7606,9 +7576,6 @@ } } -void CMSKeepAliveClosure::do_oop(oop* p) { CMSKeepAliveClosure::do_oop_work(p); } -void CMSKeepAliveClosure::do_oop(narrowOop* p) { CMSKeepAliveClosure::do_oop_work(p); } - // CMSParKeepAliveClosure: a parallel version of the above. // The work queues are private to each closure (thread), // but (may be) available for stealing by other threads. 
@@ -7629,9 +7596,6 @@ } } -void CMSParKeepAliveClosure::do_oop(oop* p) { CMSParKeepAliveClosure::do_oop_work(p); } -void CMSParKeepAliveClosure::do_oop(narrowOop* p) { CMSParKeepAliveClosure::do_oop_work(p); } - void CMSParKeepAliveClosure::trim_queue(uint max) { while (_work_queue->size() > max) { oop new_oop; @@ -7677,9 +7641,6 @@ } } -void CMSInnerParMarkAndPushClosure::do_oop(oop* p) { CMSInnerParMarkAndPushClosure::do_oop_work(p); } -void CMSInnerParMarkAndPushClosure::do_oop(narrowOop* p) { CMSInnerParMarkAndPushClosure::do_oop_work(p); } - ////////////////////////////////////////////////////////////////// // CMSExpansionCause ///////////////////////////// ////////////////////////////////////////////////////////////////// diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.hpp --- a/src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/concurrentMarkSweepGeneration.hpp Sat May 26 06:59:49 2018 +0200 @@ -1190,7 +1190,7 @@ void save_sweep_limit(); // More iteration support - virtual void oop_iterate(ExtendedOopClosure* cl); + virtual void oop_iterate(OopIterateClosure* cl); virtual void safe_object_iterate(ObjectClosure* cl); virtual void object_iterate(ObjectClosure* cl); @@ -1307,7 +1307,7 @@ // The following closures are used to do certain kinds of verification of // CMS marking. 
-class PushAndMarkVerifyClosure: public MetadataAwareOopClosure { +class PushAndMarkVerifyClosure: public MetadataVisitingOopIterateClosure { CMSCollector* _collector; MemRegion _span; CMSBitMap* _verification_bm; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/parNewGeneration.cpp --- a/src/hotspot/share/gc/cms/parNewGeneration.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/parNewGeneration.cpp Sat May 26 06:59:49 2018 +0200 @@ -51,6 +51,7 @@ #include "gc/shared/workgroup.hpp" #include "logging/log.hpp" #include "logging/logStream.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/compressedOops.inline.hpp" @@ -502,12 +503,6 @@ _boundary = _g->reserved().end(); } -void ParScanWithBarrierClosure::do_oop(oop* p) { ParScanClosure::do_oop_work(p, true, false); } -void ParScanWithBarrierClosure::do_oop(narrowOop* p) { ParScanClosure::do_oop_work(p, true, false); } - -void ParScanWithoutBarrierClosure::do_oop(oop* p) { ParScanClosure::do_oop_work(p, false, false); } -void ParScanWithoutBarrierClosure::do_oop(narrowOop* p) { ParScanClosure::do_oop_work(p, false, false); } - void ParRootScanWithBarrierTwoGensClosure::do_oop(oop* p) { ParScanClosure::do_oop_work(p, true, true); } void ParRootScanWithBarrierTwoGensClosure::do_oop(narrowOop* p) { ParScanClosure::do_oop_work(p, true, true); } @@ -519,9 +514,6 @@ : ScanWeakRefClosure(g), _par_scan_state(par_scan_state) {} -void ParScanWeakRefClosure::do_oop(oop* p) { ParScanWeakRefClosure::do_oop_work(p); } -void ParScanWeakRefClosure::do_oop(narrowOop* p) { ParScanWeakRefClosure::do_oop_work(p); } - #ifdef WIN32 #pragma warning(disable: 4786) /* identifier was truncated to '255' characters in the browser information */ #endif @@ -691,7 +683,7 @@ } #endif // ASSERT - _par_cl->do_oop_nv(p); + Devirtualizer::do_oop_no_verify(_par_cl, p); if (CMSHeap::heap()->is_in_reserved(p)) { oop obj = RawAccess::oop_load(p);; @@ 
-717,7 +709,7 @@ } #endif // ASSERT - _cl->do_oop_nv(p); + Devirtualizer::do_oop_no_verify(_cl, p); if (CMSHeap::heap()->is_in_reserved(p)) { oop obj = RawAccess::oop_load(p); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/parOopClosures.cpp --- a/src/hotspot/share/gc/cms/parOopClosures.cpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,31 +0,0 @@ -/* - * Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. - * - */ - -#include "precompiled.hpp" -#include "gc/cms/parOopClosures.inline.hpp" -#include "gc/cms/cms_specialized_oop_closures.hpp" -#include "memory/iterator.inline.hpp" - -// Generate ParNew specialized oop_oop_iterate functions. 
-SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_P(ALL_KLASS_OOP_OOP_ITERATE_DEFN); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/parOopClosures.hpp --- a/src/hotspot/share/gc/cms/parOopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/parOopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -57,8 +57,6 @@ ParScanClosure(g, par_scan_state) {} virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; class ParScanWithoutBarrierClosure: public ParScanClosure { @@ -68,8 +66,6 @@ ParScanClosure(g, par_scan_state) {} virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; class ParRootScanWithBarrierTwoGensClosure: public ParScanClosure { @@ -99,8 +95,6 @@ ParScanThreadState* par_scan_state); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; class ParEvacuateFollowersClosure: public VoidClosure { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/cms/parOopClosures.inline.hpp --- a/src/hotspot/share/gc/cms/parOopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/cms/parOopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -57,8 +57,8 @@ } } -inline void ParScanWeakRefClosure::do_oop_nv(oop* p) { ParScanWeakRefClosure::do_oop_work(p); } -inline void ParScanWeakRefClosure::do_oop_nv(narrowOop* p) { ParScanWeakRefClosure::do_oop_work(p); } +inline void ParScanWeakRefClosure::do_oop(oop* p) { ParScanWeakRefClosure::do_oop_work(p); } +inline void ParScanWeakRefClosure::do_oop(narrowOop* p) { ParScanWeakRefClosure::do_oop_work(p); } template inline void ParScanClosure::par_do_barrier(T* p) { assert(generation()->is_in_reserved(p), "expected ref in generation"); @@ -137,10 +137,10 @@ } } -inline void ParScanWithBarrierClosure::do_oop_nv(oop* p) { 
ParScanClosure::do_oop_work(p, true, false); } -inline void ParScanWithBarrierClosure::do_oop_nv(narrowOop* p) { ParScanClosure::do_oop_work(p, true, false); } +inline void ParScanWithBarrierClosure::do_oop(oop* p) { ParScanClosure::do_oop_work(p, true, false); } +inline void ParScanWithBarrierClosure::do_oop(narrowOop* p) { ParScanClosure::do_oop_work(p, true, false); } -inline void ParScanWithoutBarrierClosure::do_oop_nv(oop* p) { ParScanClosure::do_oop_work(p, false, false); } -inline void ParScanWithoutBarrierClosure::do_oop_nv(narrowOop* p) { ParScanClosure::do_oop_work(p, false, false); } +inline void ParScanWithoutBarrierClosure::do_oop(oop* p) { ParScanClosure::do_oop_work(p, false, false); } +inline void ParScanWithoutBarrierClosure::do_oop(narrowOop* p) { ParScanClosure::do_oop_work(p, false, false); } #endif // SHARE_VM_GC_CMS_PAROOPCLOSURES_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1ConcurrentMark.cpp --- a/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp Sat May 26 06:59:49 2018 +0200 @@ -2116,7 +2116,7 @@ G1CMOopClosure::G1CMOopClosure(G1CollectedHeap* g1h, G1CMTask* task) - : MetadataAwareOopClosure(get_cm_oop_closure_ref_processor(g1h)), + : MetadataVisitingOopIterateClosure(get_cm_oop_closure_ref_processor(g1h)), _g1h(g1h), _task(task) { } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp --- a/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -29,6 +29,7 @@ #include "gc/g1/g1ConcurrentMark.hpp" #include "gc/g1/g1ConcurrentMarkBitMap.inline.hpp" #include "gc/g1/g1ConcurrentMarkObjArrayProcessor.inline.hpp" +#include "gc/g1/g1OopClosures.inline.hpp" #include "gc/g1/g1Policy.hpp" #include "gc/g1/g1RegionMarkStatsCache.inline.hpp" #include "gc/g1/g1RemSetTrackingPolicy.hpp" diff -r 
d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1EvacFailure.cpp --- a/src/hotspot/share/gc/g1/g1EvacFailure.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1EvacFailure.cpp Sat May 26 06:59:49 2018 +0200 @@ -38,7 +38,7 @@ #include "oops/compressedOops.inline.hpp" #include "oops/oop.inline.hpp" -class UpdateRSetDeferred : public ExtendedOopClosure { +class UpdateRSetDeferred : public BasicOopIterateClosure { private: G1CollectedHeap* _g1h; DirtyCardQueue* _dcq; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp --- a/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp Sat May 26 06:59:49 2018 +0200 @@ -34,6 +34,7 @@ #include "gc/shared/gcTraceTime.inline.hpp" #include "gc/shared/referenceProcessor.hpp" #include "logging/log.hpp" +#include "memory/iterator.inline.hpp" class G1AdjustLiveClosure : public StackObj { G1AdjustClosure* _adjust_closure; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCMarkTask.cpp --- a/src/hotspot/share/gc/g1/g1FullGCMarkTask.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCMarkTask.cpp Sat May 26 06:59:49 2018 +0200 @@ -31,6 +31,7 @@ #include "gc/g1/g1FullGCReferenceProcessorExecutor.hpp" #include "gc/shared/gcTraceTime.inline.hpp" #include "gc/shared/referenceProcessor.hpp" +#include "memory/iterator.inline.hpp" G1FullGCMarkTask::G1FullGCMarkTask(G1FullCollector* collector) : G1FullGCTask("G1 Parallel Marking Task", collector), diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCMarker.cpp --- a/src/hotspot/share/gc/g1/g1FullGCMarker.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCMarker.cpp Sat May 26 06:59:49 2018 +0200 @@ -25,6 +25,7 @@ #include "precompiled.hpp" #include "gc/g1/g1FullGCMarker.inline.hpp" #include "gc/shared/referenceProcessor.hpp" +#include "memory/iterator.inline.hpp" 
G1FullGCMarker::G1FullGCMarker(uint worker_id, PreservedMarks* preserved_stack, G1CMBitMap* bitmap) : _worker_id(worker_id), diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp --- a/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCMarker.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -28,6 +28,7 @@ #include "gc/g1/g1Allocator.inline.hpp" #include "gc/g1/g1ConcurrentMarkBitMap.inline.hpp" #include "gc/g1/g1FullGCMarker.hpp" +#include "gc/g1/g1FullGCOopClosures.inline.hpp" #include "gc/g1/g1StringDedup.hpp" #include "gc/g1/g1StringDedupQueue.hpp" #include "gc/shared/preservedMarks.inline.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp --- a/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCOopClosures.cpp Sat May 26 06:59:49 2018 +0200 @@ -26,32 +26,12 @@ #include "gc/g1/g1CollectedHeap.hpp" #include "gc/g1/g1FullGCMarker.inline.hpp" #include "gc/g1/g1FullGCOopClosures.inline.hpp" -#include "gc/g1/g1_specialized_oop_closures.hpp" #include "logging/logStream.hpp" +#include "memory/iterator.inline.hpp" #include "oops/access.inline.hpp" #include "oops/compressedOops.inline.hpp" #include "oops/oop.inline.hpp" -void G1MarkAndPushClosure::do_oop(oop* p) { - do_oop_nv(p); -} - -void G1MarkAndPushClosure::do_oop(narrowOop* p) { - do_oop_nv(p); -} - -bool G1MarkAndPushClosure::do_metadata() { - return do_metadata_nv(); -} - -void G1MarkAndPushClosure::do_klass(Klass* k) { - do_klass_nv(k); -} - -void G1MarkAndPushClosure::do_cld(ClassLoaderData* cld) { - do_cld_nv(cld); -} - void G1FollowStackClosure::do_void() { _marker->drain_stack(); } void G1FullKeepAliveClosure::do_oop(oop* p) { do_oop_work(p); } @@ -75,7 +55,7 @@ #endif // PRODUCT } -template <class T> void G1VerifyOopClosure::do_oop_nv(T* p) { +template <class T> void G1VerifyOopClosure::do_oop_work(T* p) { T heap_oop =
RawAccess<>::oop_load(p); if (!CompressedOops::is_null(heap_oop)) { _cc++; @@ -121,8 +101,5 @@ } } -template void G1VerifyOopClosure::do_oop_nv(oop*); -template void G1VerifyOopClosure::do_oop_nv(narrowOop*); - -// Generate G1 full GC specialized oop_oop_iterate functions. -SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1FULL(ALL_KLASS_OOP_OOP_ITERATE_DEFN) +template void G1VerifyOopClosure::do_oop_work(oop*); +template void G1VerifyOopClosure::do_oop_work(narrowOop*); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCOopClosures.hpp --- a/src/hotspot/share/gc/g1/g1FullGCOopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCOopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -55,7 +55,7 @@ virtual void do_oop(narrowOop* p); }; -class G1MarkAndPushClosure : public ExtendedOopClosure { +class G1MarkAndPushClosure : public OopIterateClosure { G1FullGCMarker* _marker; uint _worker_id; @@ -63,26 +63,21 @@ G1MarkAndPushClosure(uint worker, G1FullGCMarker* marker, ReferenceDiscoverer* ref) : _marker(marker), _worker_id(worker), - ExtendedOopClosure(ref) { } + OopIterateClosure(ref) { } - template <class T> inline void do_oop_nv(T* p); + template <class T> inline void do_oop_work(T* p); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); virtual bool do_metadata(); - bool do_metadata_nv(); - virtual void do_klass(Klass* k); - void do_klass_nv(Klass* k); - virtual void do_cld(ClassLoaderData* cld); - void do_cld_nv(ClassLoaderData* cld); }; -class G1AdjustClosure : public ExtendedOopClosure { +class G1AdjustClosure : public BasicOopIterateClosure { template <class T> static inline void adjust_pointer(T* p); public: - template <class T> void do_oop_nv(T* p) { adjust_pointer(p); } + template <class T> void do_oop_work(T* p) { adjust_pointer(p); } virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); @@ -107,10 +102,10 @@ bool failures() { return _failures; } void print_object(outputStream* out, oop obj); - template <class T> void do_oop_nv(T* p); + template <class T> void
do_oop_work(T* p); - void do_oop(oop* p) { do_oop_nv(p); } - void do_oop(narrowOop* p) { do_oop_nv(p); } + void do_oop(oop* p) { do_oop_work(p); } + void do_oop(narrowOop* p) { do_oop_work(p); } }; class G1FollowStackClosure: public VoidClosure { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCOopClosures.inline.hpp --- a/src/hotspot/share/gc/g1/g1FullGCOopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCOopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -36,19 +36,27 @@ #include "oops/oop.inline.hpp" template <class T> -inline void G1MarkAndPushClosure::do_oop_nv(T* p) { +inline void G1MarkAndPushClosure::do_oop_work(T* p) { _marker->mark_and_push(p); } -inline bool G1MarkAndPushClosure::do_metadata_nv() { +inline void G1MarkAndPushClosure::do_oop(oop* p) { + do_oop_work(p); +} + +inline void G1MarkAndPushClosure::do_oop(narrowOop* p) { + do_oop_work(p); +} + +inline bool G1MarkAndPushClosure::do_metadata() { return true; } -inline void G1MarkAndPushClosure::do_klass_nv(Klass* k) { +inline void G1MarkAndPushClosure::do_klass(Klass* k) { _marker->follow_klass(k); } -inline void G1MarkAndPushClosure::do_cld_nv(ClassLoaderData* cld) { +inline void G1MarkAndPushClosure::do_cld(ClassLoaderData* cld) { _marker->follow_cld(cld); } @@ -81,8 +89,8 @@ RawAccess::oop_store(p, forwardee); } -inline void G1AdjustClosure::do_oop(oop* p) { do_oop_nv(p); } -inline void G1AdjustClosure::do_oop(narrowOop* p) { do_oop_nv(p); } +inline void G1AdjustClosure::do_oop(oop* p) { do_oop_work(p); } +inline void G1AdjustClosure::do_oop(narrowOop* p) { do_oop_work(p); } inline bool G1IsAliveClosure::do_object_b(oop p) { return _bitmap->is_marked(p) || G1ArchiveAllocator::is_closed_archive_object(p); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp --- a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp Sat May 26
06:59:49 2018 +0200 @@ -35,6 +35,7 @@ #include "gc/shared/gcTraceTime.inline.hpp" #include "gc/shared/referenceProcessor.hpp" #include "logging/log.hpp" +#include "memory/iterator.inline.hpp" #include "oops/oop.inline.hpp" #include "utilities/ticks.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1FullGCReferenceProcessorExecutor.cpp --- a/src/hotspot/share/gc/g1/g1FullGCReferenceProcessorExecutor.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1FullGCReferenceProcessorExecutor.cpp Sat May 26 06:59:49 2018 +0200 @@ -30,6 +30,7 @@ #include "gc/g1/g1FullGCReferenceProcessorExecutor.hpp" #include "gc/shared/gcTraceTime.inline.hpp" #include "gc/shared/referenceProcessor.hpp" +#include "memory/iterator.inline.hpp" G1FullGCReferenceProcessingExecutor::G1FullGCReferenceProcessingExecutor(G1FullCollector* collector) : _collector(collector), diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1HeapVerifier.cpp --- a/src/hotspot/share/gc/g1/g1HeapVerifier.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1HeapVerifier.cpp Sat May 26 06:59:49 2018 +0200 @@ -37,6 +37,7 @@ #include "gc/g1/g1StringDedup.hpp" #include "logging/log.hpp" #include "logging/logStream.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/compressedOops.inline.hpp" @@ -61,7 +62,7 @@ bool failures() { return _failures; } - template <class T> void do_oop_nv(T* p) { + template <class T> void do_oop_work(T* p) { T heap_oop = RawAccess<>::oop_load(p); if (!CompressedOops::is_null(heap_oop)) { oop obj = CompressedOops::decode_not_null(heap_oop); @@ -76,8 +77,8 @@ } } - void do_oop(oop* p) { do_oop_nv(p); } - void do_oop(narrowOop* p) { do_oop_nv(p); } + void do_oop(oop* p) { do_oop_work(p); } + void do_oop(narrowOop* p) { do_oop_work(p); } }; class G1VerifyCodeRootOopClosure: public OopClosure { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1OopClosures.cpp ---
a/src/hotspot/share/gc/g1/g1OopClosures.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1OopClosures.cpp Sat May 26 06:59:49 2018 +0200 @@ -26,7 +26,6 @@ #include "gc/g1/g1CollectedHeap.inline.hpp" #include "gc/g1/g1OopClosures.inline.hpp" #include "gc/g1/g1ParScanThreadState.hpp" -#include "gc/g1/g1_specialized_oop_closures.hpp" #include "memory/iterator.inline.hpp" #include "utilities/stack.inline.hpp" @@ -61,6 +60,3 @@ } _count++; } - -// Generate G1 specialized oop_oop_iterate functions. -SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1(ALL_KLASS_OOP_OOP_ITERATE_DEFN) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1OopClosures.hpp --- a/src/hotspot/share/gc/g1/g1OopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1OopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -39,7 +39,7 @@ class G1CMTask; class ReferenceProcessor; -class G1ScanClosureBase : public ExtendedOopClosure { +class G1ScanClosureBase : public BasicOopIterateClosure { protected: G1CollectedHeap* _g1h; G1ParScanThreadState* _par_scan_state; @@ -71,9 +71,9 @@ uint worker_i) : G1ScanClosureBase(g1h, pss), _worker_i(worker_i) { } - template <class T> void do_oop_nv(T* p); - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } - virtual void do_oop(oop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop(narrowOop* p) { do_oop_work(p); } + virtual void do_oop(oop* p) { do_oop_work(p); } }; // Used during the Scan RS phase to scan cards from the remembered set during garbage collection.
@@ -83,9 +83,9 @@ G1ParScanThreadState* par_scan_state): G1ScanClosureBase(g1h, par_scan_state) { } - template <class T> void do_oop_nv(T* p); - virtual void do_oop(oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop(oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } }; // This closure is applied to the fields of the objects that have just been copied during evacuation. @@ -94,9 +94,9 @@ G1ScanEvacuatedObjClosure(G1CollectedHeap* g1h, G1ParScanThreadState* par_scan_state) : G1ScanClosureBase(g1h, par_scan_state) { } - template <class T> void do_oop_nv(T* p); - virtual void do_oop(oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop(oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } void set_ref_discoverer(ReferenceDiscoverer* rd) { set_ref_discoverer_internal(rd); @@ -167,18 +167,18 @@ }; // Closure for iterating over object fields during concurrent marking -class G1CMOopClosure : public MetadataAwareOopClosure { +class G1CMOopClosure : public MetadataVisitingOopIterateClosure { G1CollectedHeap* _g1h; G1CMTask* _task; public: G1CMOopClosure(G1CollectedHeap* g1h,G1CMTask* task); - template <class T> void do_oop_nv(T* p); - virtual void do_oop( oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop( oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } }; // Closure to scan the root regions during concurrent marking -class G1RootRegionScanClosure : public MetadataAwareOopClosure { +class G1RootRegionScanClosure : public MetadataVisitingOopIterateClosure { private: G1CollectedHeap* _g1h; G1ConcurrentMark* _cm; @@ -186,12 +186,12 @@ public: G1RootRegionScanClosure(G1CollectedHeap* g1h, G1ConcurrentMark* cm, uint worker_id) : _g1h(g1h), _cm(cm),
_worker_id(worker_id) { } - template <class T> void do_oop_nv(T* p); - virtual void do_oop( oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop( oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } }; -class G1ConcurrentRefineOopClosure: public ExtendedOopClosure { +class G1ConcurrentRefineOopClosure: public BasicOopIterateClosure { G1CollectedHeap* _g1h; uint _worker_i; @@ -204,21 +204,21 @@ // This closure needs special handling for InstanceRefKlass. virtual ReferenceIterationMode reference_iteration_mode() { return DO_DISCOVERED_AND_DISCOVERY; } - template <class T> void do_oop_nv(T* p); - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } - virtual void do_oop(oop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop(narrowOop* p) { do_oop_work(p); } + virtual void do_oop(oop* p) { do_oop_work(p); } }; -class G1RebuildRemSetClosure : public ExtendedOopClosure { +class G1RebuildRemSetClosure : public BasicOopIterateClosure { G1CollectedHeap* _g1h; uint _worker_id; public: G1RebuildRemSetClosure(G1CollectedHeap* g1h, uint worker_id) : _g1h(g1h), _worker_id(worker_id) { } - template <class T> void do_oop_nv(T* p); - virtual void do_oop(oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + template <class T> void do_oop_work(T* p); + virtual void do_oop(oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } // This closure needs special handling for InstanceRefKlass.
virtual ReferenceIterationMode reference_iteration_mode() { return DO_DISCOVERED_AND_DISCOVERY; } }; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1OopClosures.inline.hpp --- a/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -72,7 +72,7 @@ } template <class T> -inline void G1ScanEvacuatedObjClosure::do_oop_nv(T* p) { +inline void G1ScanEvacuatedObjClosure::do_oop_work(T* p) { T heap_oop = RawAccess<>::oop_load(p); if (CompressedOops::is_null(heap_oop)) { @@ -92,12 +92,12 @@ } template <class T> -inline void G1CMOopClosure::do_oop_nv(T* p) { +inline void G1CMOopClosure::do_oop_work(T* p) { _task->deal_with_reference(p); } template <class T> -inline void G1RootRegionScanClosure::do_oop_nv(T* p) { +inline void G1RootRegionScanClosure::do_oop_work(T* p) { T heap_oop = RawAccess<MO_VOLATILE>::oop_load(p); if (CompressedOops::is_null(heap_oop)) { return; @@ -128,7 +128,7 @@ } template <class T> -inline void G1ConcurrentRefineOopClosure::do_oop_nv(T* p) { +inline void G1ConcurrentRefineOopClosure::do_oop_work(T* p) { T o = RawAccess<MO_VOLATILE>::oop_load(p); if (CompressedOops::is_null(o)) { return; @@ -157,7 +157,7 @@ } template <class T> -inline void G1ScanObjsDuringUpdateRSClosure::do_oop_nv(T* p) { +inline void G1ScanObjsDuringUpdateRSClosure::do_oop_work(T* p) { T o = RawAccess<>::oop_load(p); if (CompressedOops::is_null(o)) { return; @@ -183,7 +183,7 @@ } template <class T> -inline void G1ScanObjsDuringScanRSClosure::do_oop_nv(T* p) { +inline void G1ScanObjsDuringScanRSClosure::do_oop_work(T* p) { T heap_oop = RawAccess<>::oop_load(p); if (CompressedOops::is_null(heap_oop)) { return; @@ -280,7 +280,7 @@ trim_queue_partially(); } -template <class T> void G1RebuildRemSetClosure::do_oop_nv(T* p) { +template <class T> void G1RebuildRemSetClosure::do_oop_work(T* p) { oop const obj = RawAccess<MO_VOLATILE>::oop_load(p); if (obj == NULL) { return; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/g1_specialized_oop_closures.hpp ---
a/src/hotspot/share/gc/g1/g1_specialized_oop_closures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,62 +0,0 @@ -/* - * Copyright (c) 2001, 2018, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. - * - */ - -#ifndef SHARE_VM_GC_G1_G1_SPECIALIZED_OOP_CLOSURES_HPP -#define SHARE_VM_GC_G1_G1_SPECIALIZED_OOP_CLOSURES_HPP - -// The following OopClosure types get specialized versions of -// "oop_oop_iterate" that invoke the closures' do_oop methods -// non-virtually, using a mechanism defined in this file. Extend these -// macros in the obvious way to add specializations for new closures. - -// Forward declarations. 
- -class G1ScanEvacuatedObjClosure; - -class G1ScanObjsDuringUpdateRSClosure; -class G1ScanObjsDuringScanRSClosure; -class G1ConcurrentRefineOopClosure; - -class G1CMOopClosure; -class G1RootRegionScanClosure; - -class G1MarkAndPushClosure; -class G1AdjustClosure; - -class G1RebuildRemSetClosure; - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1(f) \ - f(G1ScanEvacuatedObjClosure,_nv) \ - f(G1ScanObjsDuringUpdateRSClosure,_nv) \ - f(G1ScanObjsDuringScanRSClosure,_nv) \ - f(G1ConcurrentRefineOopClosure,_nv) \ - f(G1CMOopClosure,_nv) \ - f(G1RootRegionScanClosure,_nv) \ - f(G1RebuildRemSetClosure,_nv) - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1FULL(f) \ - f(G1MarkAndPushClosure,_nv) \ - f(G1AdjustClosure,_nv) - -#endif // SHARE_VM_GC_G1_G1_SPECIALIZED_OOP_CLOSURES_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/g1/heapRegion.cpp --- a/src/hotspot/share/gc/g1/heapRegion.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/g1/heapRegion.cpp Sat May 26 06:59:49 2018 +0200 @@ -37,7 +37,7 @@ #include "gc/shared/space.inline.hpp" #include "logging/log.hpp" #include "logging/logStream.hpp" -#include "memory/iterator.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/compressedOops.inline.hpp" @@ -450,7 +450,7 @@ p2i(prev_top_at_mark_start()), p2i(next_top_at_mark_start()), rem_set()->get_state_str()); } -class G1VerificationClosure : public ExtendedOopClosure { +class G1VerificationClosure : public BasicOopIterateClosure { protected: G1CollectedHeap* _g1h; G1CardTable *_ct; @@ -608,7 +608,7 @@ }; // Closure that applies the given two closures in sequence. 
-class G1Mux2Closure : public ExtendedOopClosure { +class G1Mux2Closure : public BasicOopIterateClosure { OopClosure* _c1; OopClosure* _c2; public: diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/immutableSpace.cpp --- a/src/hotspot/share/gc/parallel/immutableSpace.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/immutableSpace.cpp Sat May 26 06:59:49 2018 +0200 @@ -24,6 +24,7 @@ #include "precompiled.hpp" #include "gc/parallel/immutableSpace.hpp" +#include "memory/iterator.inline.hpp" #include "memory/universe.hpp" #include "oops/oop.inline.hpp" #include "utilities/macros.hpp" @@ -39,7 +40,7 @@ _end = end; } -void ImmutableSpace::oop_iterate(ExtendedOopClosure* cl) { +void ImmutableSpace::oop_iterate(OopIterateClosure* cl) { HeapWord* obj_addr = bottom(); HeapWord* t = end(); // Could call objects iterate, but this is easier. diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/immutableSpace.hpp --- a/src/hotspot/share/gc/parallel/immutableSpace.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/immutableSpace.hpp Sat May 26 06:59:49 2018 +0200 @@ -59,7 +59,7 @@ virtual size_t capacity_in_words(Thread*) const { return capacity_in_words(); } // Iteration. 
- virtual void oop_iterate(ExtendedOopClosure* cl); + virtual void oop_iterate(OopIterateClosure* cl); virtual void object_iterate(ObjectClosure* cl); // Debugging diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/mutableSpace.cpp --- a/src/hotspot/share/gc/parallel/mutableSpace.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/mutableSpace.cpp Sat May 26 06:59:49 2018 +0200 @@ -25,6 +25,7 @@ #include "precompiled.hpp" #include "gc/parallel/mutableSpace.hpp" #include "gc/shared/spaceDecorator.hpp" +#include "memory/iterator.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/atomic.hpp" #include "runtime/safepoint.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psCardTable.cpp --- a/src/hotspot/share/gc/parallel/psCardTable.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psCardTable.cpp Sat May 26 06:59:49 2018 +0200 @@ -31,6 +31,7 @@ #include "gc/parallel/psScavenge.hpp" #include "gc/parallel/psTasks.hpp" #include "gc/parallel/psYoungGen.hpp" +#include "memory/iterator.inline.hpp" #include "oops/access.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/prefetch.inline.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psCompactionManager.cpp --- a/src/hotspot/share/gc/parallel/psCompactionManager.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psCompactionManager.cpp Sat May 26 06:59:49 2018 +0200 @@ -140,7 +140,11 @@ // everything else. 
ParCompactionManager::MarkAndPushClosure cl(cm); - InstanceKlass::oop_oop_iterate_oop_maps(obj, &cl); + if (UseCompressedOops) { + InstanceKlass::oop_oop_iterate_oop_maps<narrowOop>(obj, &cl); + } else { + InstanceKlass::oop_oop_iterate_oop_maps<oop>(obj, &cl); + } } void InstanceMirrorKlass::oop_pc_follow_contents(oop obj, ParCompactionManager* cm) { @@ -169,7 +173,11 @@ } ParCompactionManager::MarkAndPushClosure cl(cm); - oop_oop_iterate_statics(obj, &cl); + if (UseCompressedOops) { + oop_oop_iterate_statics<narrowOop>(obj, &cl); + } else { + oop_oop_iterate_statics<oop>(obj, &cl); + } } void InstanceClassLoaderKlass::oop_pc_follow_contents(oop obj, ParCompactionManager* cm) { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psCompactionManager.hpp --- a/src/hotspot/share/gc/parallel/psCompactionManager.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psCompactionManager.hpp Sat May 26 06:59:49 2018 +0200 @@ -175,13 +175,13 @@ void update_contents(oop obj); - class MarkAndPushClosure: public ExtendedOopClosure { + class MarkAndPushClosure: public BasicOopIterateClosure { private: ParCompactionManager* _compaction_manager; public: MarkAndPushClosure(ParCompactionManager* cm) : _compaction_manager(cm) { } - template <class T> void do_oop_nv(T* p); + template <class T> void do_oop_work(T* p); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psCompactionManager.inline.hpp --- a/src/hotspot/share/gc/parallel/psCompactionManager.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psCompactionManager.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -85,12 +85,12 @@ } template <class T> -inline void ParCompactionManager::MarkAndPushClosure::do_oop_nv(T* p) { +inline void ParCompactionManager::MarkAndPushClosure::do_oop_work(T* p) { _compaction_manager->mark_and_push(p); } -inline void ParCompactionManager::MarkAndPushClosure::do_oop(oop* p) { do_oop_nv(p); } -inline void
ParCompactionManager::MarkAndPushClosure::do_oop(narrowOop* p) { do_oop_nv(p); } +inline void ParCompactionManager::MarkAndPushClosure::do_oop(oop* p) { do_oop_work(p); } +inline void ParCompactionManager::MarkAndPushClosure::do_oop(narrowOop* p) { do_oop_work(p); } inline void ParCompactionManager::follow_klass(Klass* klass) { oop holder = klass->klass_holder(); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psMarkSweepDecorator.cpp --- a/src/hotspot/share/gc/parallel/psMarkSweepDecorator.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psMarkSweepDecorator.cpp Sat May 26 06:59:49 2018 +0200 @@ -32,6 +32,7 @@ #include "gc/parallel/psParallelCompact.inline.hpp" #include "gc/serial/markSweep.inline.hpp" #include "gc/shared/spaceDecorator.hpp" +#include "memory/iterator.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/prefetch.inline.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psParallelCompact.cpp --- a/src/hotspot/share/gc/parallel/psParallelCompact.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psParallelCompact.cpp Sat May 26 06:59:49 2018 +0200 @@ -24,6 +24,7 @@ #include "precompiled.hpp" #include "aot/aotLoader.hpp" +#include "classfile/javaClasses.inline.hpp" #include "classfile/stringTable.hpp" #include "classfile/symbolTable.hpp" #include "classfile/systemDictionary.hpp" @@ -53,8 +54,10 @@ #include "gc/shared/spaceDecorator.hpp" #include "gc/shared/weakProcessor.hpp" #include "logging/log.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" +#include "oops/instanceClassLoaderKlass.inline.hpp" #include "oops/instanceKlass.inline.hpp" #include "oops/instanceMirrorKlass.inline.hpp" #include "oops/methodData.hpp" @@ -3069,14 +3072,22 @@ void InstanceKlass::oop_pc_update_pointers(oop obj, ParCompactionManager* cm) { PSParallelCompact::AdjustPointerClosure closure(cm); - oop_oop_iterate_oop_maps(obj, 
&closure); + if (UseCompressedOops) { + oop_oop_iterate_oop_maps<narrowOop>(obj, &closure); + } else { + oop_oop_iterate_oop_maps<oop>(obj, &closure); + } } void InstanceMirrorKlass::oop_pc_update_pointers(oop obj, ParCompactionManager* cm) { InstanceKlass::oop_pc_update_pointers(obj, cm); PSParallelCompact::AdjustPointerClosure closure(cm); - oop_oop_iterate_statics(obj, &closure); + if (UseCompressedOops) { + oop_oop_iterate_statics<narrowOop>(obj, &closure); + } else { + oop_oop_iterate_statics<oop>(obj, &closure); + } } void InstanceClassLoaderKlass::oop_pc_update_pointers(oop obj, ParCompactionManager* cm) { @@ -3118,7 +3129,11 @@ void ObjArrayKlass::oop_pc_update_pointers(oop obj, ParCompactionManager* cm) { assert(obj->is_objArray(), "obj must be obj array"); PSParallelCompact::AdjustPointerClosure closure(cm); - oop_oop_iterate_elements(objArrayOop(obj), &closure); + if (UseCompressedOops) { + oop_oop_iterate_elements<narrowOop>(objArrayOop(obj), &closure); + } else { + oop_oop_iterate_elements<oop>(objArrayOop(obj), &closure); + } } void TypeArrayKlass::oop_pc_update_pointers(oop obj, ParCompactionManager* cm) { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psParallelCompact.hpp --- a/src/hotspot/share/gc/parallel/psParallelCompact.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psParallelCompact.hpp Sat May 26 06:59:49 2018 +0200 @@ -934,13 +934,13 @@ virtual bool do_object_b(oop p); }; - class AdjustPointerClosure: public ExtendedOopClosure { + class AdjustPointerClosure: public BasicOopIterateClosure { public: AdjustPointerClosure(ParCompactionManager* cm) { assert(cm != NULL, "associate ParCompactionManage should not be NULL"); _cm = cm; } - template <class T> void do_oop_nv(T* p); + template <class T> void do_oop_work(T* p); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psParallelCompact.inline.hpp --- a/src/hotspot/share/gc/parallel/psParallelCompact.inline.hpp Mon Jun 25 12:44:52 2018
+0200 +++ b/src/hotspot/share/gc/parallel/psParallelCompact.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -125,11 +125,11 @@ } template <class T> -void PSParallelCompact::AdjustPointerClosure::do_oop_nv(T* p) { +void PSParallelCompact::AdjustPointerClosure::do_oop_work(T* p) { adjust_pointer(p, _cm); } -inline void PSParallelCompact::AdjustPointerClosure::do_oop(oop* p) { do_oop_nv(p); } -inline void PSParallelCompact::AdjustPointerClosure::do_oop(narrowOop* p) { do_oop_nv(p); } +inline void PSParallelCompact::AdjustPointerClosure::do_oop(oop* p) { do_oop_work(p); } +inline void PSParallelCompact::AdjustPointerClosure::do_oop(narrowOop* p) { do_oop_work(p); } #endif // SHARE_VM_GC_PARALLEL_PSPARALLELCOMPACT_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psPromotionManager.cpp --- a/src/hotspot/share/gc/parallel/psPromotionManager.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psPromotionManager.cpp Sat May 26 06:59:49 2018 +0200 @@ -23,6 +23,7 @@ */ #include "precompiled.hpp" +#include "classfile/javaClasses.inline.hpp" #include "gc/parallel/gcTaskManager.hpp" #include "gc/parallel/mutableSpace.hpp" #include "gc/parallel/parallelScavengeHeap.hpp" @@ -35,12 +36,14 @@ #include "logging/log.hpp" #include "logging/logStream.hpp" #include "memory/allocation.inline.hpp" +#include "memory/iterator.inline.hpp" #include "memory/memRegion.hpp" #include "memory/padded.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/arrayOop.inline.hpp" #include "oops/compressedOops.inline.hpp" +#include "oops/instanceClassLoaderKlass.inline.hpp" #include "oops/instanceKlass.inline.hpp" #include "oops/instanceMirrorKlass.inline.hpp" #include "oops/objArrayKlass.inline.hpp" @@ -394,19 +397,19 @@ } } -class PushContentsClosure : public ExtendedOopClosure { +class PushContentsClosure : public BasicOopIterateClosure { PSPromotionManager* _pm; public: PushContentsClosure(PSPromotionManager* pm) : _pm(pm) {}
- template <class T> void do_oop_nv(T* p) { + template <class T> void do_oop_work(T* p) { if (PSScavenge::should_scavenge(p)) { _pm->claim_or_forward_depth(p); } } - virtual void do_oop(oop* p) { do_oop_nv(p); } - virtual void do_oop(narrowOop* p) { do_oop_nv(p); } + virtual void do_oop(oop* p) { do_oop_work(p); } + virtual void do_oop(narrowOop* p) { do_oop_work(p); } // Don't use the oop verification code in the oop_oop_iterate framework. debug_only(virtual bool should_verify_oops() { return false; }) @@ -414,7 +417,11 @@ void InstanceKlass::oop_ps_push_contents(oop obj, PSPromotionManager* pm) { PushContentsClosure cl(pm); - oop_oop_iterate_oop_maps_reverse(obj, &cl); + if (UseCompressedOops) { + oop_oop_iterate_oop_maps_reverse<narrowOop>(obj, &cl); + } else { + oop_oop_iterate_oop_maps_reverse<oop>(obj, &cl); + } } void InstanceMirrorKlass::oop_ps_push_contents(oop obj, PSPromotionManager* pm) { @@ -425,7 +432,11 @@ InstanceKlass::oop_ps_push_contents(obj, pm); PushContentsClosure cl(pm); - oop_oop_iterate_statics(obj, &cl); + if (UseCompressedOops) { + oop_oop_iterate_statics<narrowOop>(obj, &cl); + } else { + oop_oop_iterate_statics<oop>(obj, &cl); + } } void InstanceClassLoaderKlass::oop_ps_push_contents(oop obj, PSPromotionManager* pm) { @@ -469,7 +480,11 @@ void ObjArrayKlass::oop_ps_push_contents(oop obj, PSPromotionManager* pm) { assert(obj->is_objArray(), "obj must be obj array"); PushContentsClosure cl(pm); - oop_oop_iterate_elements(objArrayOop(obj), &cl); + if (UseCompressedOops) { + oop_oop_iterate_elements<narrowOop>(objArrayOop(obj), &cl); + } else { + oop_oop_iterate_elements<oop>(objArrayOop(obj), &cl); + } } void TypeArrayKlass::oop_ps_push_contents(oop obj, PSPromotionManager* pm) { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/parallel/psYoungGen.hpp --- a/src/hotspot/share/gc/parallel/psYoungGen.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/parallel/psYoungGen.hpp Sat May 26 06:59:49 2018 +0200 @@ -168,7 +168,7 @@ HeapWord** end_addr() const { return eden_space()->end_addr(); }
// Iteration. - void oop_iterate(ExtendedOopClosure* cl); + void oop_iterate(OopIterateClosure* cl); void object_iterate(ObjectClosure* cl); virtual void reset_after_change(); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/defNewGeneration.cpp --- a/src/hotspot/share/gc/serial/defNewGeneration.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/defNewGeneration.cpp Sat May 26 06:59:49 2018 +0200 @@ -46,7 +46,7 @@ #include "gc/shared/strongRootsScope.hpp" #include "gc/shared/weakProcessor.hpp" #include "logging/log.hpp" -#include "memory/iterator.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/instanceRefKlass.hpp" #include "oops/oop.inline.hpp" @@ -112,18 +112,12 @@ _boundary = _g->reserved().end(); } -void ScanClosure::do_oop(oop* p) { ScanClosure::do_oop_work(p); } -void ScanClosure::do_oop(narrowOop* p) { ScanClosure::do_oop_work(p); } - FastScanClosure::FastScanClosure(DefNewGeneration* g, bool gc_barrier) : OopsInClassLoaderDataOrGenClosure(g), _g(g), _gc_barrier(gc_barrier) { _boundary = _g->reserved().end(); } -void FastScanClosure::do_oop(oop* p) { FastScanClosure::do_oop_work(p); } -void FastScanClosure::do_oop(narrowOop* p) { FastScanClosure::do_oop_work(p); } - void CLDScanClosure::do_cld(ClassLoaderData* cld) { NOT_PRODUCT(ResourceMark rm); log_develop_trace(gc, scavenge)("CLDScanClosure::do_cld " PTR_FORMAT ", %s, dirty: %s", @@ -155,9 +149,6 @@ _boundary = _g->reserved().end(); } -void ScanWeakRefClosure::do_oop(oop* p) { ScanWeakRefClosure::do_oop_work(p); } -void ScanWeakRefClosure::do_oop(narrowOop* p) { ScanWeakRefClosure::do_oop_work(p); } - DefNewGeneration::DefNewGeneration(ReservedSpace rs, size_t initial_size, const char* policy) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/defNewGeneration.hpp --- a/src/hotspot/share/gc/serial/defNewGeneration.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/defNewGeneration.hpp Sat May 26 
06:59:49 2018 +0200 @@ -96,8 +96,8 @@ PreservedMarksSet _preserved_marks_set; // Promotion failure handling - ExtendedOopClosure *_promo_failure_scan_stack_closure; - void set_promo_failure_scan_stack_closure(ExtendedOopClosure *scan_stack_closure) { + OopIterateClosure *_promo_failure_scan_stack_closure; + void set_promo_failure_scan_stack_closure(OopIterateClosure *scan_stack_closure) { _promo_failure_scan_stack_closure = scan_stack_closure; } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/defNewGeneration.inline.hpp --- a/src/hotspot/share/gc/serial/defNewGeneration.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/defNewGeneration.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -45,7 +45,7 @@ } #endif // ASSERT - _cl->do_oop_nv(p); + Devirtualizer::do_oop_no_verify(_cl, p); // Card marking is trickier for weak refs. // This oop is a 'next' field which was filled in while we @@ -77,7 +77,7 @@ } #endif // ASSERT - _cl->do_oop_nv(p); + Devirtualizer::do_oop_no_verify(_cl, p); // Optimized for Defnew generation if it's the youngest generation: // we set a younger_gen card if we have an older->youngest diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/markSweep.cpp --- a/src/hotspot/share/gc/serial/markSweep.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/markSweep.cpp Sat May 26 06:59:49 2018 +0200 @@ -25,7 +25,6 @@ #include "precompiled.hpp" #include "compiler/compileBroker.hpp" #include "gc/serial/markSweep.inline.hpp" -#include "gc/serial/serial_specialized_oop_closures.hpp" #include "gc/shared/collectedHeap.inline.hpp" #include "gc/shared/gcTimer.hpp" #include "gc/shared/gcTrace.hpp" @@ -63,48 +62,6 @@ CLDToOopClosure MarkSweep::follow_cld_closure(&mark_and_push_closure); CLDToOopClosure MarkSweep::adjust_cld_closure(&adjust_pointer_closure); -inline void MarkSweep::mark_object(oop obj) { - // some marks may contain information we need to preserve so we store them away - // and 
overwrite the mark. We'll restore it at the end of markSweep. - markOop mark = obj->mark_raw(); - obj->set_mark_raw(markOopDesc::prototype()->set_marked()); - - if (mark->must_be_preserved(obj)) { - preserve_mark(obj, mark); - } -} - -template inline void MarkSweep::mark_and_push(T* p) { - T heap_oop = RawAccess<>::oop_load(p); - if (!CompressedOops::is_null(heap_oop)) { - oop obj = CompressedOops::decode_not_null(heap_oop); - if (!obj->mark_raw()->is_marked()) { - mark_object(obj); - _marking_stack.push(obj); - } - } -} - -inline void MarkSweep::follow_klass(Klass* klass) { - oop op = klass->klass_holder(); - MarkSweep::mark_and_push(&op); -} - -inline void MarkSweep::follow_cld(ClassLoaderData* cld) { - MarkSweep::follow_cld_closure.do_cld(cld); -} - -template -inline void MarkAndPushClosure::do_oop_nv(T* p) { MarkSweep::mark_and_push(p); } -void MarkAndPushClosure::do_oop(oop* p) { do_oop_nv(p); } -void MarkAndPushClosure::do_oop(narrowOop* p) { do_oop_nv(p); } -inline bool MarkAndPushClosure::do_metadata_nv() { return true; } -bool MarkAndPushClosure::do_metadata() { return do_metadata_nv(); } -inline void MarkAndPushClosure::do_klass_nv(Klass* k) { MarkSweep::follow_klass(k); } -void MarkAndPushClosure::do_klass(Klass* k) { do_klass_nv(k); } -inline void MarkAndPushClosure::do_cld_nv(ClassLoaderData* cld) { MarkSweep::follow_cld(cld); } -void MarkAndPushClosure::do_cld(ClassLoaderData* cld) { do_cld_nv(cld); } - template inline void MarkSweep::KeepAliveClosure::do_oop_work(T* p) { mark_and_push(p); } @@ -216,11 +173,6 @@ AdjustPointerClosure MarkSweep::adjust_pointer_closure; -template -void AdjustPointerClosure::do_oop_nv(T* p) { MarkSweep::adjust_pointer(p); } -void AdjustPointerClosure::do_oop(oop* p) { do_oop_nv(p); } -void AdjustPointerClosure::do_oop(narrowOop* p) { do_oop_nv(p); } - void MarkSweep::adjust_marks() { assert( _preserved_oop_stack.size() == _preserved_mark_stack.size(), "inconsistent preserved oop stacks"); @@ -269,6 +221,3 @@ 
MarkSweep::_gc_timer = new (ResourceObj::C_HEAP, mtGC) STWGCTimer(); MarkSweep::_gc_tracer = new (ResourceObj::C_HEAP, mtGC) SerialOldTracer(); } - -// Generate MS specialized oop_oop_iterate functions. -SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_MS(ALL_KLASS_OOP_OOP_ITERATE_DEFN) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/markSweep.hpp --- a/src/hotspot/share/gc/serial/markSweep.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/markSweep.hpp Sat May 26 06:59:49 2018 +0200 @@ -56,7 +56,7 @@ // // Inline closure decls // - class FollowRootClosure: public OopsInGenClosure { + class FollowRootClosure: public BasicOopsInGenClosure { public: virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); @@ -170,29 +170,24 @@ static void follow_array_chunk(objArrayOop array, int index); }; -class MarkAndPushClosure: public ExtendedOopClosure { +class MarkAndPushClosure: public OopIterateClosure { public: - template void do_oop_nv(T* p); + template void do_oop_work(T* p); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - virtual bool do_metadata(); - bool do_metadata_nv(); - + virtual bool do_metadata() { return true; } virtual void do_klass(Klass* k); - void do_klass_nv(Klass* k); - virtual void do_cld(ClassLoaderData* cld); - void do_cld_nv(ClassLoaderData* cld); void set_ref_discoverer(ReferenceDiscoverer* rd) { set_ref_discoverer_internal(rd); } }; -class AdjustPointerClosure: public OopsInGenClosure { +class AdjustPointerClosure: public BasicOopsInGenClosure { public: - template void do_oop_nv(T* p); + template void do_oop_work(T* p); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); virtual ReferenceIterationMode reference_iteration_mode() { return DO_FIELDS; } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/markSweep.inline.hpp --- a/src/hotspot/share/gc/serial/markSweep.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/serial/markSweep.inline.hpp Sat May 26 
06:59:49 2018 +0200 @@ -25,6 +25,7 @@ #ifndef SHARE_VM_GC_SERIAL_MARKSWEEP_INLINE_HPP #define SHARE_VM_GC_SERIAL_MARKSWEEP_INLINE_HPP +#include "classfile/classLoaderData.inline.hpp" #include "gc/serial/markSweep.hpp" #include "memory/metaspaceShared.hpp" #include "memory/universe.hpp" @@ -33,10 +34,44 @@ #include "oops/compressedOops.inline.hpp" #include "oops/oop.inline.hpp" -inline int MarkSweep::adjust_pointers(oop obj) { - return obj->oop_iterate_size(&MarkSweep::adjust_pointer_closure); +inline void MarkSweep::mark_object(oop obj) { + // some marks may contain information we need to preserve so we store them away + // and overwrite the mark. We'll restore it at the end of markSweep. + markOop mark = obj->mark_raw(); + obj->set_mark_raw(markOopDesc::prototype()->set_marked()); + + if (mark->must_be_preserved(obj)) { + preserve_mark(obj, mark); + } } +template inline void MarkSweep::mark_and_push(T* p) { + T heap_oop = RawAccess<>::oop_load(p); + if (!CompressedOops::is_null(heap_oop)) { + oop obj = CompressedOops::decode_not_null(heap_oop); + if (!obj->mark_raw()->is_marked()) { + mark_object(obj); + _marking_stack.push(obj); + } + } +} + +inline void MarkSweep::follow_klass(Klass* klass) { + oop op = klass->klass_holder(); + MarkSweep::mark_and_push(&op); +} + +inline void MarkSweep::follow_cld(ClassLoaderData* cld) { + MarkSweep::follow_cld_closure.do_cld(cld); +} + +template +inline void MarkAndPushClosure::do_oop_work(T* p) { MarkSweep::mark_and_push(p); } +inline void MarkAndPushClosure::do_oop(oop* p) { do_oop_work(p); } +inline void MarkAndPushClosure::do_oop(narrowOop* p) { do_oop_work(p); } +inline void MarkAndPushClosure::do_klass(Klass* k) { MarkSweep::follow_klass(k); } +inline void MarkAndPushClosure::do_cld(ClassLoaderData* cld) { MarkSweep::follow_cld(cld); } + template inline void MarkSweep::adjust_pointer(T* p) { T heap_oop = RawAccess<>::oop_load(p); if (!CompressedOops::is_null(heap_oop)) { @@ -59,4 +94,14 @@ } } +template +void 
AdjustPointerClosure::do_oop_work(T* p) { MarkSweep::adjust_pointer(p); } +inline void AdjustPointerClosure::do_oop(oop* p) { do_oop_work(p); } +inline void AdjustPointerClosure::do_oop(narrowOop* p) { do_oop_work(p); } + + +inline int MarkSweep::adjust_pointers(oop obj) { + return obj->oop_iterate_size(&MarkSweep::adjust_pointer_closure); +} + #endif // SHARE_VM_GC_SERIAL_MARKSWEEP_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/serial/serial_specialized_oop_closures.hpp --- a/src/hotspot/share/gc/serial/serial_specialized_oop_closures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,53 +0,0 @@ -/* - * Copyright (c) 2001, 2017, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. 
- * - */ - -#ifndef SHARE_GC_SERIAL_SERIAL_SPECIALIZED_OOP_CLOSURES_HPP -#define SHARE_GC_SERIAL_SERIAL_SPECIALIZED_OOP_CLOSURES_HPP - -// The following OopClosure types get specialized versions of -// "oop_oop_iterate" that invoke the closures' do_oop methods -// non-virtually, using a mechanism defined in this file. Extend these -// macros in the obvious way to add specializations for new closures. - -// Forward declarations. - -// DefNew -class ScanClosure; -class FastScanClosure; -class FilteringClosure; - -// MarkSweep -class MarkAndPushClosure; -class AdjustPointerClosure; - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_S(f) \ - f(ScanClosure,_nv) \ - f(FastScanClosure,_nv) \ - f(FilteringClosure,_nv) - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_MS(f) \ - f(MarkAndPushClosure,_nv) \ - f(AdjustPointerClosure,_nv) - -#endif // SHARE_GC_SERIAL_SERIAL_SPECIALIZED_OOP_CLOSURES_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/cardTableRS.cpp --- a/src/hotspot/share/gc/shared/cardTableRS.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/cardTableRS.cpp Sat May 26 06:59:49 2018 +0200 @@ -29,6 +29,7 @@ #include "gc/shared/generation.hpp" #include "gc/shared/space.inline.hpp" #include "memory/allocation.inline.hpp" +#include "memory/iterator.inline.hpp" #include "oops/access.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/atomic.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/genCollectedHeap.cpp --- a/src/hotspot/share/gc/shared/genCollectedHeap.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/genCollectedHeap.cpp Sat May 26 06:59:49 2018 +0200 @@ -1047,7 +1047,7 @@ oop_iterate(&no_header_cl); } -void GenCollectedHeap::oop_iterate(ExtendedOopClosure* cl) { +void GenCollectedHeap::oop_iterate(OopIterateClosure* cl) { _young_gen->oop_iterate(cl); _old_gen->oop_iterate(cl); } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/genCollectedHeap.hpp --- 
a/src/hotspot/share/gc/shared/genCollectedHeap.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/genCollectedHeap.hpp Sat May 26 06:59:49 2018 +0200 @@ -259,7 +259,7 @@ // Iteration functions. void oop_iterate_no_header(OopClosure* cl); - void oop_iterate(ExtendedOopClosure* cl); + void oop_iterate(OopIterateClosure* cl); void object_iterate(ObjectClosure* cl); void safe_object_iterate(ObjectClosure* cl); Space* space_containing(const void* addr) const; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/genOopClosures.cpp --- a/src/hotspot/share/gc/shared/genOopClosures.cpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,37 +0,0 @@ -/* Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). - * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. 
- * - */ - -#include "precompiled.hpp" -#include "gc/shared/genOopClosures.inline.hpp" -#include "memory/iterator.inline.hpp" -#if INCLUDE_SERIALGC -#include "gc/serial/serial_specialized_oop_closures.hpp" -#endif - -void FilteringClosure::do_oop(oop* p) { do_oop_nv(p); } -void FilteringClosure::do_oop(narrowOop* p) { do_oop_nv(p); } - -#if INCLUDE_SERIALGC -// Generate Serial GC specialized oop_oop_iterate functions. -SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_S(ALL_KLASS_OOP_OOP_ITERATE_DEFN) -#endif diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/genOopClosures.hpp --- a/src/hotspot/share/gc/shared/genOopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/genOopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -40,7 +40,7 @@ // method at the end of their own do_oop method! // Note: no do_oop defined, this is an abstract class. -class OopsInGenClosure : public ExtendedOopClosure { +class OopsInGenClosure : public OopIterateClosure { private: Generation* _orig_gen; // generation originally set in ctor Generation* _gen; // generation being scanned @@ -62,7 +62,7 @@ template void par_do_barrier(T* p); public: - OopsInGenClosure() : ExtendedOopClosure(NULL), + OopsInGenClosure() : OopIterateClosure(NULL), _orig_gen(NULL), _gen(NULL), _gen_boundary(NULL), _rs(NULL) {}; OopsInGenClosure(Generation* gen); @@ -81,11 +81,21 @@ }; +class BasicOopsInGenClosure: public OopsInGenClosure { + public: + BasicOopsInGenClosure() : OopsInGenClosure() {} + BasicOopsInGenClosure(Generation* gen); + + virtual bool do_metadata() { return false; } + virtual void do_klass(Klass* k) { ShouldNotReachHere(); } + virtual void do_cld(ClassLoaderData* cld) { ShouldNotReachHere(); } +}; + // Super class for scan closures. It contains code to dirty scanned class loader data. 
-class OopsInClassLoaderDataOrGenClosure: public OopsInGenClosure { +class OopsInClassLoaderDataOrGenClosure: public BasicOopsInGenClosure { ClassLoaderData* _scanned_cld; public: - OopsInClassLoaderDataOrGenClosure(Generation* g) : OopsInGenClosure(g), _scanned_cld(NULL) {} + OopsInClassLoaderDataOrGenClosure(Generation* g) : BasicOopsInGenClosure(g), _scanned_cld(NULL) {} void set_scanned_cld(ClassLoaderData* cld) { assert(cld == NULL || _scanned_cld == NULL, "Must be"); _scanned_cld = cld; @@ -110,8 +120,6 @@ ScanClosure(DefNewGeneration* g, bool gc_barrier); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; // Closure for scanning DefNewGeneration. @@ -129,8 +137,6 @@ FastScanClosure(DefNewGeneration* g, bool gc_barrier); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; #endif // INCLUDE_SERIALGC @@ -146,22 +152,21 @@ void do_cld(ClassLoaderData* cld); }; -class FilteringClosure: public ExtendedOopClosure { +class FilteringClosure: public OopIterateClosure { private: HeapWord* _boundary; - ExtendedOopClosure* _cl; + OopIterateClosure* _cl; protected: template inline void do_oop_work(T* p); public: - FilteringClosure(HeapWord* boundary, ExtendedOopClosure* cl) : - ExtendedOopClosure(cl->ref_discoverer()), _boundary(boundary), + FilteringClosure(HeapWord* boundary, OopIterateClosure* cl) : + OopIterateClosure(cl->ref_discoverer()), _boundary(boundary), _cl(cl) {} virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); - virtual bool do_metadata() { return do_metadata_nv(); } - inline bool do_metadata_nv() { assert(!_cl->do_metadata(), "assumption broken, must change to 'return _cl->do_metadata()'"); return false; } + virtual bool do_metadata() { assert(!_cl->do_metadata(), "assumption broken, must change to 
'return _cl->do_metadata()'"); return false; } + virtual void do_klass(Klass*) { ShouldNotReachHere(); } + virtual void do_cld(ClassLoaderData*) { ShouldNotReachHere(); } }; #if INCLUDE_SERIALGC @@ -179,8 +184,6 @@ ScanWeakRefClosure(DefNewGeneration* g); virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); - inline void do_oop_nv(oop* p); - inline void do_oop_nv(narrowOop* p); }; #endif // INCLUDE_SERIALGC diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/genOopClosures.inline.hpp --- a/src/hotspot/share/gc/shared/genOopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/genOopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -38,7 +38,7 @@ #endif inline OopsInGenClosure::OopsInGenClosure(Generation* gen) : - ExtendedOopClosure(gen->ref_processor()), _orig_gen(gen), _rs(NULL) { + OopIterateClosure(gen->ref_processor()), _orig_gen(gen), _rs(NULL) { set_generation(gen); } @@ -73,6 +73,9 @@ } } +inline BasicOopsInGenClosure::BasicOopsInGenClosure(Generation* gen) : OopsInGenClosure(gen) { +} + inline void OopsInClassLoaderDataOrGenClosure::do_cld_barrier() { assert(_scanned_cld != NULL, "Must be"); if (!_scanned_cld->has_modified_oops()) { @@ -105,8 +108,8 @@ } } -inline void ScanClosure::do_oop_nv(oop* p) { ScanClosure::do_oop_work(p); } -inline void ScanClosure::do_oop_nv(narrowOop* p) { ScanClosure::do_oop_work(p); } +inline void ScanClosure::do_oop(oop* p) { ScanClosure::do_oop_work(p); } +inline void ScanClosure::do_oop(narrowOop* p) { ScanClosure::do_oop_work(p); } // NOTE! 
Any changes made here should also be made // in ScanClosure::do_oop_work() @@ -130,8 +133,8 @@ } } -inline void FastScanClosure::do_oop_nv(oop* p) { FastScanClosure::do_oop_work(p); } -inline void FastScanClosure::do_oop_nv(narrowOop* p) { FastScanClosure::do_oop_work(p); } +inline void FastScanClosure::do_oop(oop* p) { FastScanClosure::do_oop_work(p); } +inline void FastScanClosure::do_oop(narrowOop* p) { FastScanClosure::do_oop_work(p); } #endif // INCLUDE_SERIALGC @@ -145,8 +148,8 @@ } } -void FilteringClosure::do_oop_nv(oop* p) { FilteringClosure::do_oop_work(p); } -void FilteringClosure::do_oop_nv(narrowOop* p) { FilteringClosure::do_oop_work(p); } +inline void FilteringClosure::do_oop(oop* p) { FilteringClosure::do_oop_work(p); } +inline void FilteringClosure::do_oop(narrowOop* p) { FilteringClosure::do_oop_work(p); } #if INCLUDE_SERIALGC @@ -163,8 +166,8 @@ } } -inline void ScanWeakRefClosure::do_oop_nv(oop* p) { ScanWeakRefClosure::do_oop_work(p); } -inline void ScanWeakRefClosure::do_oop_nv(narrowOop* p) { ScanWeakRefClosure::do_oop_work(p); } +inline void ScanWeakRefClosure::do_oop(oop* p) { ScanWeakRefClosure::do_oop_work(p); } +inline void ScanWeakRefClosure::do_oop(narrowOop* p) { ScanWeakRefClosure::do_oop_work(p); } #endif // INCLUDE_SERIALGC diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/generation.cpp --- a/src/hotspot/share/gc/shared/generation.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/generation.cpp Sat May 26 06:59:49 2018 +0200 @@ -253,15 +253,15 @@ class GenerationOopIterateClosure : public SpaceClosure { public: - ExtendedOopClosure* _cl; + OopIterateClosure* _cl; virtual void do_space(Space* s) { s->oop_iterate(_cl); } - GenerationOopIterateClosure(ExtendedOopClosure* cl) : + GenerationOopIterateClosure(OopIterateClosure* cl) : _cl(cl) {} }; -void Generation::oop_iterate(ExtendedOopClosure* cl) { +void Generation::oop_iterate(OopIterateClosure* cl) { GenerationOopIterateClosure blk(cl); 
space_iterate(&blk); } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/generation.hpp --- a/src/hotspot/share/gc/shared/generation.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/generation.hpp Sat May 26 06:59:49 2018 +0200 @@ -474,7 +474,7 @@ // Iterate over all the ref-containing fields of all objects in the // generation, calling "cl.do_oop" on each. - virtual void oop_iterate(ExtendedOopClosure* cl); + virtual void oop_iterate(OopIterateClosure* cl); // Iterate over all objects in the generation, calling "cl.do_object" on // each. diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/space.cpp --- a/src/hotspot/share/gc/shared/space.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/space.cpp Sat May 26 06:59:49 2018 +0200 @@ -32,6 +32,7 @@ #include "gc/shared/space.hpp" #include "gc/shared/space.inline.hpp" #include "gc/shared/spaceDecorator.hpp" +#include "memory/iterator.inline.hpp" #include "memory/universe.hpp" #include "oops/oop.inline.hpp" #include "runtime/atomic.hpp" @@ -181,7 +182,7 @@ } } -DirtyCardToOopClosure* Space::new_dcto_cl(ExtendedOopClosure* cl, +DirtyCardToOopClosure* Space::new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel) { @@ -257,11 +258,11 @@ // (There are only two of these, rather than N, because the split is due // only to the introduction of the FilteringClosure, a local part of the // impl of this abstraction.) 
-ContiguousSpaceDCTOC__walk_mem_region_with_cl_DEFN(ExtendedOopClosure) +ContiguousSpaceDCTOC__walk_mem_region_with_cl_DEFN(OopIterateClosure) ContiguousSpaceDCTOC__walk_mem_region_with_cl_DEFN(FilteringClosure) DirtyCardToOopClosure* -ContiguousSpace::new_dcto_cl(ExtendedOopClosure* cl, +ContiguousSpace::new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel) { @@ -480,7 +481,7 @@ } } -void Space::oop_iterate(ExtendedOopClosure* blk) { +void Space::oop_iterate(OopIterateClosure* blk) { ObjectToOopClosure blk2(blk); object_iterate(&blk2); } @@ -490,7 +491,7 @@ return true; } -void ContiguousSpace::oop_iterate(ExtendedOopClosure* blk) { +void ContiguousSpace::oop_iterate(OopIterateClosure* blk) { if (is_empty()) return; HeapWord* obj_addr = bottom(); HeapWord* t = top(); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/space.hpp --- a/src/hotspot/share/gc/shared/space.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/shared/space.hpp Sat May 26 06:59:49 2018 +0200 @@ -169,7 +169,7 @@ // Iterate over all the ref-containing fields of all objects in the // space, calling "cl.do_oop" on each. Fields in objects allocated by // applications of the closure are not included in the iteration. - virtual void oop_iterate(ExtendedOopClosure* cl); + virtual void oop_iterate(OopIterateClosure* cl); // Iterate over all objects in the space, calling "cl.do_object" on // each. Objects allocated by applications of the closure are not @@ -183,7 +183,7 @@ // overridden to return the appropriate type of closure // depending on the type of space in which the closure will // operate. ResourceArea allocated. 
- virtual DirtyCardToOopClosure* new_dcto_cl(ExtendedOopClosure* cl, + virtual DirtyCardToOopClosure* new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel); @@ -256,7 +256,7 @@ class DirtyCardToOopClosure: public MemRegionClosureRO { protected: - ExtendedOopClosure* _cl; + OopIterateClosure* _cl; Space* _sp; CardTable::PrecisionStyle _precision; HeapWord* _boundary; // If non-NULL, process only non-NULL oops @@ -286,7 +286,7 @@ virtual void walk_mem_region(MemRegion mr, HeapWord* bottom, HeapWord* top); public: - DirtyCardToOopClosure(Space* sp, ExtendedOopClosure* cl, + DirtyCardToOopClosure(Space* sp, OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary) : _sp(sp), _cl(cl), _precision(precision), _boundary(boundary), @@ -582,7 +582,7 @@ HeapWord* allocate_aligned(size_t word_size); // Iteration - void oop_iterate(ExtendedOopClosure* cl); + void oop_iterate(OopIterateClosure* cl); void object_iterate(ObjectClosure* blk); // For contiguous spaces this method will iterate safely over objects // in the space (i.e., between bottom and top) when at a safepoint. @@ -621,7 +621,7 @@ } // Override. - DirtyCardToOopClosure* new_dcto_cl(ExtendedOopClosure* cl, + DirtyCardToOopClosure* new_dcto_cl(OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary, bool parallel); @@ -689,13 +689,13 @@ // apparent. 
virtual void walk_mem_region_with_cl(MemRegion mr, HeapWord* bottom, HeapWord* top, - ExtendedOopClosure* cl) = 0; + OopIterateClosure* cl) = 0; virtual void walk_mem_region_with_cl(MemRegion mr, HeapWord* bottom, HeapWord* top, FilteringClosure* cl) = 0; public: - FilteringDCTOC(Space* sp, ExtendedOopClosure* cl, + FilteringDCTOC(Space* sp, OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary) : DirtyCardToOopClosure(sp, cl, precision, boundary) {} @@ -718,13 +718,13 @@ virtual void walk_mem_region_with_cl(MemRegion mr, HeapWord* bottom, HeapWord* top, - ExtendedOopClosure* cl); + OopIterateClosure* cl); virtual void walk_mem_region_with_cl(MemRegion mr, HeapWord* bottom, HeapWord* top, FilteringClosure* cl); public: - ContiguousSpaceDCTOC(ContiguousSpace* sp, ExtendedOopClosure* cl, + ContiguousSpaceDCTOC(ContiguousSpace* sp, OopIterateClosure* cl, CardTable::PrecisionStyle precision, HeapWord* boundary) : FilteringDCTOC(sp, cl, precision, boundary) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/shared/specialized_oop_closures.hpp --- a/src/hotspot/share/gc/shared/specialized_oop_closures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,87 +0,0 @@ -/* - * Copyright (c) 2001, 2017, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). 
- * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. - * - */ - -#ifndef SHARE_VM_GC_SHARED_SPECIALIZED_OOP_CLOSURES_HPP -#define SHARE_VM_GC_SHARED_SPECIALIZED_OOP_CLOSURES_HPP - -#include "utilities/macros.hpp" -#if INCLUDE_CMSGC -#include "gc/cms/cms_specialized_oop_closures.hpp" -#endif -#if INCLUDE_G1GC -#include "gc/g1/g1_specialized_oop_closures.hpp" -#endif -#if INCLUDE_SERIALGC -#include "gc/serial/serial_specialized_oop_closures.hpp" -#endif -#if INCLUDE_ZGC -#include "gc/z/z_specialized_oop_closures.hpp" -#endif - -// The following OopClosure types get specialized versions of -// "oop_oop_iterate" that invoke the closures' do_oop methods -// non-virtually, using a mechanism defined in this file. Extend these -// macros in the obvious way to add specializations for new closures. - -// Forward declarations. -class ExtendedOopClosure; -class NoHeaderExtendedOopClosure; -class OopsInGenClosure; - -// This macro applies an argument macro to all OopClosures for which we -// want specialized bodies of "oop_oop_iterate". The arguments to "f" are: -// "f(closureType, non_virtual)" -// where "closureType" is the name of the particular subclass of ExtendedOopClosure, -// and "non_virtual" will be the string "_nv" if the closure type should -// have its "do_oop" method invoked non-virtually, or else the -// string "_v". ("ExtendedOopClosure" itself will be the only class in the latter -// category.) 
- -// This is split into several because of a Visual C++ 6.0 compiler bug -// where very long macros cause the compiler to crash - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_1(f) \ - f(NoHeaderExtendedOopClosure,_nv) \ - SERIALGC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_S(f)) \ - CMSGC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_P(f)) - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_2(f) \ - SERIALGC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_MS(f)) \ - CMSGC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_CMS(f)) \ - G1GC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1(f)) \ - G1GC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_G1FULL(f)) \ - ZGC_ONLY(SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_Z(f)) - -// We separate these out, because sometime the general one has -// a different definition from the specialized ones, and sometimes it -// doesn't. - -#define ALL_OOP_OOP_ITERATE_CLOSURES_1(f) \ - f(ExtendedOopClosure,_v) \ - SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_1(f) - -#define ALL_OOP_OOP_ITERATE_CLOSURES_2(f) \ - SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_2(f) - -#endif // SHARE_VM_GC_SHARED_SPECIALIZED_OOP_CLOSURES_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zBarrier.cpp --- a/src/hotspot/share/gc/z/zBarrier.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zBarrier.cpp Sat May 26 06:59:49 2018 +0200 @@ -26,6 +26,7 @@ #include "gc/z/zHeap.inline.hpp" #include "gc/z/zOop.inline.hpp" #include "gc/z/zOopClosures.inline.hpp" +#include "memory/iterator.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/safepoint.hpp" #include "utilities/debug.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zHeapIterator.cpp --- a/src/hotspot/share/gc/z/zHeapIterator.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zHeapIterator.cpp Sat May 26 06:59:49 2018 +0200 @@ -28,6 +28,7 @@ #include "gc/z/zHeapIterator.hpp" #include "gc/z/zOop.inline.hpp" #include "gc/z/zRootsIterator.hpp" +#include "memory/iterator.inline.hpp" #include 
"oops/oop.inline.hpp" #include "utilities/bitMap.inline.hpp" #include "utilities/stack.inline.hpp" @@ -73,7 +74,7 @@ } }; -class ZHeapIteratorPushOopClosure : public ExtendedOopClosure { +class ZHeapIteratorPushOopClosure : public BasicOopIterateClosure { private: ZHeapIterator* const _iter; const oop _base; @@ -83,23 +84,15 @@ _iter(iter), _base(base) {} - void do_oop_nv(oop* p) { + virtual void do_oop(oop* p) { const oop obj = HeapAccess::oop_load_at(_base, _base->field_offset(p)); _iter->push(obj); } - void do_oop_nv(narrowOop* p) { + virtual void do_oop(narrowOop* p) { ShouldNotReachHere(); } - virtual void do_oop(oop* p) { - do_oop_nv(p); - } - - virtual void do_oop(narrowOop* p) { - do_oop_nv(p); - } - #ifdef ASSERT virtual bool should_verify_oops() { return false; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zMark.cpp --- a/src/hotspot/share/gc/z/zMark.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zMark.cpp Sat May 26 06:59:49 2018 +0200 @@ -37,6 +37,7 @@ #include "gc/z/zUtils.inline.hpp" #include "gc/z/zWorkers.inline.hpp" #include "logging/log.hpp" +#include "memory/iterator.inline.hpp" #include "oops/objArrayOop.inline.hpp" #include "oops/oop.inline.hpp" #include "runtime/atomic.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zOopClosures.cpp --- a/src/hotspot/share/gc/z/zOopClosures.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zOopClosures.cpp Sat May 26 06:59:49 2018 +0200 @@ -69,6 +69,3 @@ ZVerifyHeapOopClosure cl(o); o->oop_iterate(&cl); } - -// Generate Z specialized oop_oop_iterate functions. 
-SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_Z(ALL_KLASS_OOP_OOP_ITERATE_DEFN) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zOopClosures.hpp --- a/src/hotspot/share/gc/z/zOopClosures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zOopClosures.hpp Sat May 26 06:59:49 2018 +0200 @@ -26,11 +26,8 @@ #include "memory/iterator.hpp" -class ZLoadBarrierOopClosure : public ExtendedOopClosure { +class ZLoadBarrierOopClosure : public BasicOopIterateClosure { public: - void do_oop_nv(oop* p); - void do_oop_nv(narrowOop* p); - virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); @@ -54,13 +51,10 @@ }; template -class ZMarkBarrierOopClosure : public ExtendedOopClosure { +class ZMarkBarrierOopClosure : public BasicOopIterateClosure { public: ZMarkBarrierOopClosure(); - void do_oop_nv(oop* p); - void do_oop_nv(narrowOop* p); - virtual void do_oop(oop* p); virtual void do_oop(narrowOop* p); @@ -88,7 +82,7 @@ virtual void do_oop(narrowOop* p); }; -class ZVerifyHeapOopClosure : public ExtendedOopClosure { +class ZVerifyHeapOopClosure : public BasicOopIterateClosure { private: const oop _base; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/zOopClosures.inline.hpp --- a/src/hotspot/share/gc/z/zOopClosures.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/gc/z/zOopClosures.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -32,22 +32,14 @@ #include "runtime/atomic.hpp" #include "utilities/debug.hpp" -inline void ZLoadBarrierOopClosure::do_oop_nv(oop* p) { +inline void ZLoadBarrierOopClosure::do_oop(oop* p) { ZBarrier::load_barrier_on_oop_field(p); } -inline void ZLoadBarrierOopClosure::do_oop_nv(narrowOop* p) { +inline void ZLoadBarrierOopClosure::do_oop(narrowOop* p) { ShouldNotReachHere(); } -inline void ZLoadBarrierOopClosure::do_oop(oop* p) { - do_oop_nv(p); -} - -inline void ZLoadBarrierOopClosure::do_oop(narrowOop* p) { - do_oop_nv(p); -} - inline void ZMarkRootOopClosure::do_oop(oop* p) { 
ZBarrier::mark_barrier_on_root_oop_field(p); } @@ -66,28 +58,18 @@ template inline ZMarkBarrierOopClosure::ZMarkBarrierOopClosure() : - ExtendedOopClosure(finalizable ? NULL : ZHeap::heap()->reference_discoverer()) {} + BasicOopIterateClosure(finalizable ? NULL : ZHeap::heap()->reference_discoverer()) {} template -inline void ZMarkBarrierOopClosure::do_oop_nv(oop* p) { +inline void ZMarkBarrierOopClosure::do_oop(oop* p) { ZBarrier::mark_barrier_on_oop_field(p, finalizable); } template -inline void ZMarkBarrierOopClosure::do_oop_nv(narrowOop* p) { +inline void ZMarkBarrierOopClosure::do_oop(narrowOop* p) { ShouldNotReachHere(); } -template -inline void ZMarkBarrierOopClosure::do_oop(oop* p) { - do_oop_nv(p); -} - -template -inline void ZMarkBarrierOopClosure::do_oop(narrowOop* p) { - do_oop_nv(p); -} - inline bool ZPhantomIsAliveObjectClosure::do_object_b(oop o) { return ZBarrier::is_alive_barrier_on_phantom_oop(o); } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/gc/z/z_specialized_oop_closures.hpp --- a/src/hotspot/share/gc/z/z_specialized_oop_closures.hpp Mon Jun 25 12:44:52 2018 +0200 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000 @@ -1,35 +0,0 @@ -/* - * Copyright (c) 2015, 2017, Oracle and/or its affiliates. All rights reserved. - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - * - * This code is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License version 2 only, as - * published by the Free Software Foundation. - * - * This code is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License - * version 2 for more details (a copy is included in the LICENSE file that - * accompanied this code). 
- * - * You should have received a copy of the GNU General Public License version - * 2 along with this work; if not, write to the Free Software Foundation, - * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. - * - * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA - * or visit www.oracle.com if you need additional information or have any - * questions. - */ - -#ifndef SHARE_GC_Z_Z_SPECIALIZED_OOP_CLOSURES_HPP -#define SHARE_GC_Z_Z_SPECIALIZED_OOP_CLOSURES_HPP - -class ZLoadBarrierOopClosure; -template class ZMarkBarrierOopClosure; - -#define SPECIALIZED_OOP_OOP_ITERATE_CLOSURES_Z(f) \ - f(ZLoadBarrierOopClosure,_nv) \ - f(ZMarkBarrierOopClosure,_nv) \ - f(ZMarkBarrierOopClosure,_nv) - -#endif // SHARE_GC_Z_Z_SPECIALIZED_OOP_CLOSURES_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.cpp --- a/src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.cpp Sat May 26 06:59:49 2018 +0200 @@ -31,6 +31,7 @@ #include "jfr/leakprofiler/utilities/granularTimer.hpp" #include "jfr/leakprofiler/utilities/unifiedOop.hpp" #include "logging/log.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/oop.inline.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.hpp --- a/src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/jfr/leakprofiler/chains/bfsClosure.hpp Sat May 26 06:59:49 2018 +0200 @@ -34,7 +34,7 @@ class EdgeQueue; // Class responsible for iterating the heap breadth-first -class BFSClosure : public ExtendedOopClosure { +class BFSClosure : public BasicOopIterateClosure { private: EdgeQueue* _edge_queue; EdgeStore* _edge_store; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.cpp --- 
a/src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.cpp Sat May 26 06:59:49 2018 +0200 @@ -31,6 +31,7 @@ #include "jfr/leakprofiler/utilities/unifiedOop.hpp" #include "jfr/leakprofiler/utilities/rootType.hpp" #include "jfr/leakprofiler/chains/rootSetClosure.hpp" +#include "memory/iterator.inline.hpp" #include "memory/resourceArea.hpp" #include "oops/access.inline.hpp" #include "oops/oop.inline.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.hpp --- a/src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/jfr/leakprofiler/chains/dfsClosure.hpp Sat May 26 06:59:49 2018 +0200 @@ -34,7 +34,7 @@ class EdgeQueue; // Class responsible for iterating the heap depth-first -class DFSClosure: public ExtendedOopClosure { +class DFSClosure: public BasicOopIterateClosure { private: static EdgeStore* _edge_store; static BitSet* _mark_bits; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/jfr/leakprofiler/chains/rootSetClosure.hpp --- a/src/hotspot/share/jfr/leakprofiler/chains/rootSetClosure.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/jfr/leakprofiler/chains/rootSetClosure.hpp Sat May 26 06:59:49 2018 +0200 @@ -30,7 +30,7 @@ class EdgeQueue; -class RootSetClosure: public ExtendedOopClosure { +class RootSetClosure: public BasicOopIterateClosure { private: RootSetClosure(EdgeQueue* edge_queue); EdgeQueue* _edge_queue; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/memory/iterator.cpp --- a/src/hotspot/share/memory/iterator.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/memory/iterator.cpp Sat May 26 06:59:49 2018 +0200 @@ -63,12 +63,3 @@ do_nmethod(nm); } } - -// Generate the *Klass::oop_oop_iterate functions for the base class -// of the oop closures. 
These versions use the virtual do_oop calls, -// instead of the devirtualized do_oop_nv version. -ALL_KLASS_OOP_OOP_ITERATE_DEFN(ExtendedOopClosure, _v) - -// Generate the *Klass::oop_oop_iterate functions -// for the NoHeaderExtendedOopClosure helper class. -ALL_KLASS_OOP_OOP_ITERATE_DEFN(NoHeaderExtendedOopClosure, _nv) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/memory/iterator.hpp --- a/src/hotspot/share/memory/iterator.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/memory/iterator.hpp Sat May 26 06:59:49 2018 +0200 @@ -55,17 +55,17 @@ }; extern DoNothingClosure do_nothing_cl; -// ExtendedOopClosure adds extra code to be run during oop iterations. +// OopIterateClosure adds extra code to be run during oop iterations. // This is needed by the GC and is extracted to a separate type to not // pollute the OopClosure interface. -class ExtendedOopClosure : public OopClosure { +class OopIterateClosure : public OopClosure { private: ReferenceDiscoverer* _ref_discoverer; protected: - ExtendedOopClosure(ReferenceDiscoverer* rd) : _ref_discoverer(rd) { } - ExtendedOopClosure() : _ref_discoverer(NULL) { } - ~ExtendedOopClosure() { } + OopIterateClosure(ReferenceDiscoverer* rd) : _ref_discoverer(rd) { } + OopIterateClosure() : _ref_discoverer(NULL) { } + ~OopIterateClosure() { } void set_ref_discoverer_internal(ReferenceDiscoverer* rd) { _ref_discoverer = rd; } @@ -89,23 +89,10 @@ // 1) do_klass on the header klass pointer. // 2) do_klass on the klass pointer in the mirrors. // 3) do_cld on the class loader data in class loaders. - // - // The virtual (without suffix) and the non-virtual (with _nv suffix) need - // to be updated together, or else the devirtualization will break. - // - // Providing default implementations of the _nv functions unfortunately - // removes the compile-time safeness, but reduces the clutter for the - // ExtendedOopClosures that don't need to walk the metadata. - // Currently, only CMS and G1 need these. 
- bool do_metadata_nv() { return false; } - virtual bool do_metadata() { return do_metadata_nv(); } - - void do_klass_nv(Klass* k) { ShouldNotReachHere(); } - virtual void do_klass(Klass* k) { do_klass_nv(k); } - - void do_cld_nv(ClassLoaderData* cld) { ShouldNotReachHere(); } - virtual void do_cld(ClassLoaderData* cld) { do_cld_nv(cld); } + virtual bool do_metadata() = 0; + virtual void do_klass(Klass* k) = 0; + virtual void do_cld(ClassLoaderData* cld) = 0; // True iff this closure may be safely applied more than once to an oop // location without an intervening "major reset" (like the end of a GC). @@ -120,19 +107,24 @@ #endif }; +// An OopIterateClosure that can be used when there's no need to visit the Metadata. +class BasicOopIterateClosure : public OopIterateClosure { +public: + BasicOopIterateClosure(ReferenceDiscoverer* rd = NULL) : OopIterateClosure(rd) {} + + virtual bool do_metadata() { return false; } + virtual void do_klass(Klass* k) { ShouldNotReachHere(); } + virtual void do_cld(ClassLoaderData* cld) { ShouldNotReachHere(); } +}; + // Wrapper closure only used to implement oop_iterate_no_header(). -class NoHeaderExtendedOopClosure : public ExtendedOopClosure { +class NoHeaderExtendedOopClosure : public BasicOopIterateClosure { OopClosure* _wrapped_closure; public: NoHeaderExtendedOopClosure(OopClosure* cl) : _wrapped_closure(cl) {} // Warning: this calls the virtual version do_oop in the the wrapped closure. 
- void do_oop_nv(oop* p) { _wrapped_closure->do_oop(p); } - void do_oop_nv(narrowOop* p) { _wrapped_closure->do_oop(p); } - - void do_oop(oop* p) { assert(false, "Only the _nv versions should be used"); - _wrapped_closure->do_oop(p); } - void do_oop(narrowOop* p) { assert(false, "Only the _nv versions should be used"); - _wrapped_closure->do_oop(p);} + virtual void do_oop(oop* p) { _wrapped_closure->do_oop(p); } + virtual void do_oop(narrowOop* p) { _wrapped_closure->do_oop(p); } }; class KlassClosure : public Closure { @@ -161,20 +153,13 @@ // The base class for all concurrent marking closures, // that participates in class unloading. // It's used to proxy through the metadata to the oops defined in them. -class MetadataAwareOopClosure: public ExtendedOopClosure { - +class MetadataVisitingOopIterateClosure: public OopIterateClosure { public: - MetadataAwareOopClosure() : ExtendedOopClosure() { } - MetadataAwareOopClosure(ReferenceDiscoverer* rd) : ExtendedOopClosure(rd) { } + MetadataVisitingOopIterateClosure(ReferenceDiscoverer* rd = NULL) : OopIterateClosure(rd) { } - bool do_metadata_nv() { return true; } - virtual bool do_metadata() { return do_metadata_nv(); } - - void do_klass_nv(Klass* k); - virtual void do_klass(Klass* k) { do_klass_nv(k); } - - void do_cld_nv(ClassLoaderData* cld); - virtual void do_cld(ClassLoaderData* cld) { do_cld_nv(cld); } + virtual bool do_metadata() { return true; } + virtual void do_klass(Klass* k); + virtual void do_cld(ClassLoaderData* cld); }; // ObjectClosure is used for iterating through an object space @@ -204,10 +189,10 @@ // Applies an oop closure to all ref fields in objects iterated over in an // object iteration. 
class ObjectToOopClosure: public ObjectClosure { - ExtendedOopClosure* _cl; + OopIterateClosure* _cl; public: void do_object(oop obj); - ObjectToOopClosure(ExtendedOopClosure* cl) : _cl(cl) {} + ObjectToOopClosure(OopIterateClosure* cl) : _cl(cl) {} }; // A version of ObjectClosure that is expected to be robust @@ -371,30 +356,22 @@ } }; -// The two class template specializations are used to dispatch calls -// to the ExtendedOopClosure functions. If use_non_virtual_call is true, -// the non-virtual versions are called (E.g. do_oop_nv), otherwise the -// virtual versions are called (E.g. do_oop). - -template -class Devirtualizer {}; - -// Dispatches to the non-virtual functions. -template <> class Devirtualizer { +// Dispatches to the non-virtual functions if OopClosureType has +// a concrete implementation, otherwise a virtual call is taken. +class Devirtualizer { public: - template static void do_oop(OopClosureType* closure, T* p); - template static void do_klass(OopClosureType* closure, Klass* k); - template static void do_cld(OopClosureType* closure, ClassLoaderData* cld); - template static bool do_metadata(OopClosureType* closure); + template static void do_oop_no_verify(OopClosureType* closure, T* p); + template static void do_oop(OopClosureType* closure, T* p); + template static void do_klass(OopClosureType* closure, Klass* k); + template static void do_cld(OopClosureType* closure, ClassLoaderData* cld); + template static bool do_metadata(OopClosureType* closure); }; -// Dispatches to the virtual functions. 
-template <> class Devirtualizer { +class OopIteratorClosureDispatch { public: - template static void do_oop(OopClosureType* closure, T* p); - template static void do_klass(OopClosureType* closure, Klass* k); - template static void do_cld(OopClosureType* closure, ClassLoaderData* cld); - template static bool do_metadata(OopClosureType* closure); + template static void oop_oop_iterate(OopClosureType* cl, oop obj, Klass* klass); + template static void oop_oop_iterate(OopClosureType* cl, oop obj, Klass* klass, MemRegion mr); + template static void oop_oop_iterate_backwards(OopClosureType* cl, oop obj, Klass* klass); }; #endif // SHARE_VM_MEMORY_ITERATOR_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/memory/iterator.inline.hpp --- a/src/hotspot/share/memory/iterator.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/memory/iterator.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -38,21 +38,21 @@ #include "oops/typeArrayKlass.inline.hpp" #include "utilities/debug.hpp" -inline void MetadataAwareOopClosure::do_cld_nv(ClassLoaderData* cld) { +inline void MetadataVisitingOopIterateClosure::do_cld(ClassLoaderData* cld) { bool claim = true; // Must claim the class loader data before processing. cld->oops_do(this, claim); } -inline void MetadataAwareOopClosure::do_klass_nv(Klass* k) { +inline void MetadataVisitingOopIterateClosure::do_klass(Klass* k) { ClassLoaderData* cld = k->class_loader_data(); - do_cld_nv(cld); + MetadataVisitingOopIterateClosure::do_cld(cld); } #ifdef ASSERT // This verification is applied to all visited oops. // The closures can turn is off by overriding should_verify_oops(). template -void ExtendedOopClosure::verify(T* p) { +void OopIterateClosure::verify(T* p) { if (should_verify_oops()) { T heap_oop = RawAccess<>::oop_load(p); if (!CompressedOops::is_null(heap_oop)) { @@ -65,54 +65,360 @@ #endif // Implementation of the non-virtual do_oop dispatch. +// +// The same implementation is used for do_metadata, do_klass, and do_cld. 
+// +// Preconditions: +// - Base has a pure virtual do_oop +// - Only one of the classes in the inheritance chain from OopClosureType to +// Base implements do_oop. +// +// Given the preconditions: +// - If &OopClosureType::do_oop is resolved to &Base::do_oop, then there is no +// implementation of do_oop between Base and OopClosureType. However, there +// must be one implementation in one of the subclasses of OopClosureType. +// In this case we take the virtual call. +// +// - Conversely, if &OopClosureType::do_oop is not resolved to &Base::do_oop, +// then we've found the one and only concrete implementation. In this case we +// take a non-virtual call. +// +// Because of this it's clear when we should call the virtual call and +// when the non-virtual call should be made. +// +// The way we find if &OopClosureType::do_oop is resolved to &Base::do_oop is to +// check if the resulting type of the class of a member-function pointer to +// &OopClosureType::do_oop is equal to the type of the class of a +// &Base::do_oop member-function pointer. Template parameter deduction is used +// to find these types, and then the IsSame trait is used to check if they are +// equal. Finally, SFINAE is used to select the appropriate implementation. +// +// Template parameters: +// T - narrowOop or oop +// Receiver - the resolved type of the class of the +// &OopClosureType::do_oop member-function pointer. That is, +// the klass with the do_oop member function. +// Base - klass with the pure virtual do_oop member function. 
+// OopClosureType - The dynamic closure type +// +// Parameters: +// closure - The closure to call +// p - The oop (or narrowOop) field to pass to the closure -template -inline void Devirtualizer::do_oop(OopClosureType* closure, T* p) { - debug_only(closure->verify(p)); - closure->do_oop_nv(p); -} -template -inline void Devirtualizer::do_klass(OopClosureType* closure, Klass* k) { - closure->do_klass_nv(k); -} -template -void Devirtualizer::do_cld(OopClosureType* closure, ClassLoaderData* cld) { - closure->do_cld_nv(cld); -} -template -inline bool Devirtualizer::do_metadata(OopClosureType* closure) { - // Make sure the non-virtual and the virtual versions match. - assert(closure->do_metadata_nv() == closure->do_metadata(), "Inconsistency in do_metadata"); - return closure->do_metadata_nv(); +template +static typename EnableIf::value, void>::type +call_do_oop(void (Receiver::*)(T*), void (Base::*)(T*), OopClosureType* closure, T* p) { + closure->do_oop(p); } -// Implementation of the virtual do_oop dispatch. 
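The member-function-pointer trick described in the comment above can be reproduced standalone. In this sketch, `std::is_same`/`std::enable_if` stand in for HotSpot's `IsSame`/`EnableIf` traits, `do_value` plays the role of `do_oop`, and the class names are illustrative. The key point: the deduced class of `&Closure::do_value` is `Base` exactly when no class between `Base` and `Closure` overrides it, which tells us at compile time whether a virtual or a qualified (devirtualized) call is appropriate.

```cpp
#include <cassert>
#include <type_traits>

struct Base {
  virtual ~Base() {}
  virtual int do_value() { return 0; }   // plays the role of the virtual do_oop
};

// Receiver is deduced from &Closure::do_value: it is Base when Closure does
// not override do_value, and the overriding class when it does.
template <typename Receiver, typename Closure>
typename std::enable_if<std::is_same<Receiver, Base>::value, int>::type
call_do_value(int (Receiver::*)(), Closure* c) {
  return c->do_value();                  // no override found: take the virtual call
}

template <typename Receiver, typename Closure>
typename std::enable_if<!std::is_same<Receiver, Base>::value, int>::type
call_do_value(int (Receiver::*)(), Closure* c) {
  return c->Closure::do_value();         // qualified call: no vtable lookup, inlinable
}

template <typename Closure>
int dispatch(Closure* c) {
  return call_do_value(&Closure::do_value, c);
}

struct Overriding : public Base {
  virtual int do_value() { return 42; }  // concrete implementation: devirtualized
};
struct Plain : public Base {};           // inherits Base::do_value: virtual call
```

Here `&Plain::do_value` has type `int (Base::*)()` because name lookup finds the member in `Base`, so `dispatch(&p)` takes the virtual path, while `dispatch(&o)` resolves to the qualified, inlinable call.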
+template +static typename EnableIf::value, void>::type +call_do_oop(void (Receiver::*)(T*), void (Base::*)(T*), OopClosureType* closure, T* p) { + // Sanity check + STATIC_ASSERT((!IsSame::value)); + closure->OopClosureType::do_oop(p); +} -template -void Devirtualizer::do_oop(OopClosureType* closure, T* p) { - debug_only(closure->verify(p)); - closure->do_oop(p); +template +inline void Devirtualizer::do_oop_no_verify(OopClosureType* closure, T* p) { + call_do_oop(&OopClosureType::do_oop, &OopClosure::do_oop, closure, p); } -template -void Devirtualizer::do_klass(OopClosureType* closure, Klass* k) { - closure->do_klass(k); + +template +inline void Devirtualizer::do_oop(OopClosureType* closure, T* p) { + debug_only(closure->verify(p)); + + do_oop_no_verify(closure, p); } -template -void Devirtualizer::do_cld(OopClosureType* closure, ClassLoaderData* cld) { - closure->do_cld(cld); -} -template -bool Devirtualizer::do_metadata(OopClosureType* closure) { + +// Implementation of the non-virtual do_metadata dispatch. + +template +static typename EnableIf::value, bool>::type +call_do_metadata(bool (Receiver::*)(), bool (Base::*)(), OopClosureType* closure) { return closure->do_metadata(); } -// The list of all "specializable" oop_oop_iterate function definitions. 
-#define ALL_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - ALL_INSTANCE_KLASS_OOP_OOP_ITERATE_DEFN( OopClosureType, nv_suffix) \ - ALL_INSTANCE_REF_KLASS_OOP_OOP_ITERATE_DEFN( OopClosureType, nv_suffix) \ - ALL_INSTANCE_MIRROR_KLASS_OOP_OOP_ITERATE_DEFN( OopClosureType, nv_suffix) \ - ALL_INSTANCE_CLASS_LOADER_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - ALL_OBJ_ARRAY_KLASS_OOP_OOP_ITERATE_DEFN( OopClosureType, nv_suffix) \ - ALL_TYPE_ARRAY_KLASS_OOP_OOP_ITERATE_DEFN( OopClosureType, nv_suffix) +template +static typename EnableIf::value, bool>::type +call_do_metadata(bool (Receiver::*)(), bool (Base::*)(), OopClosureType* closure) { + return closure->OopClosureType::do_metadata(); +} + +template +inline bool Devirtualizer::do_metadata(OopClosureType* closure) { + return call_do_metadata(&OopClosureType::do_metadata, &OopIterateClosure::do_metadata, closure); +} + +// Implementation of the non-virtual do_klass dispatch. + +template +static typename EnableIf::value, void>::type +call_do_klass(void (Receiver::*)(Klass*), void (Base::*)(Klass*), OopClosureType* closure, Klass* k) { + closure->do_klass(k); +} + +template +static typename EnableIf::value, void>::type +call_do_klass(void (Receiver::*)(Klass*), void (Base::*)(Klass*), OopClosureType* closure, Klass* k) { + closure->OopClosureType::do_klass(k); +} + +template +inline void Devirtualizer::do_klass(OopClosureType* closure, Klass* k) { + call_do_klass(&OopClosureType::do_klass, &OopIterateClosure::do_klass, closure, k); +} + +// Implementation of the non-virtual do_cld dispatch. 
+ +template +static typename EnableIf::value, void>::type +call_do_cld(void (Receiver::*)(ClassLoaderData*), void (Base::*)(ClassLoaderData*), OopClosureType* closure, ClassLoaderData* cld) { + closure->do_cld(cld); +} + +template +static typename EnableIf::value, void>::type +call_do_cld(void (Receiver::*)(ClassLoaderData*), void (Base::*)(ClassLoaderData*), OopClosureType* closure, ClassLoaderData* cld) { + closure->OopClosureType::do_cld(cld); +} + +template +void Devirtualizer::do_cld(OopClosureType* closure, ClassLoaderData* cld) { + call_do_cld(&OopClosureType::do_cld, &OopIterateClosure::do_cld, closure, cld); +} + +// Dispatch table implementation for *Klass::oop_oop_iterate +// +// It allows for a single call to do a multi-dispatch to an optimized version +// of oop_oop_iterate that statically know all these types: +// - OopClosureType : static type give at call site +// - Klass* : dynamic to static type through Klass::id() -> table index +// - UseCompressedOops : dynamic to static value determined once +// +// when users call obj->oop_iterate(&cl). +// +// oopDesc::oop_iterate() calls OopOopIterateDispatch::function(klass)(cl, obj, klass), +// which dispatches to an optimized version of +// [Instance, ObjArry, etc]Klass::oop_oop_iterate(oop, OopClosureType) +// +// OopClosureType : +// If OopClosureType has an implementation of do_oop (and do_metadata et.al.), +// then the static type of OopClosureType will be used to allow inlining of +// do_oop (even though do_oop is virtual). Otherwise, a virtual call will be +// used when calling do_oop. +// +// Klass* : +// A table mapping from *Klass::ID to function is setup. This happens once +// when the program starts, when the static _table instance is initialized for +// the OopOopIterateDispatch specialized with the OopClosureType. +// +// UseCompressedOops : +// Initially the table is populated with an init function, and not the actual +// oop_oop_iterate function. 
This is done, so that the first time we dispatch +// through the init function we check what the value of UseCompressedOops +// became, and use that to determine if we should install an optimized +// narrowOop version or optimized oop version of oop_oop_iterate. The appropriate +// oop_oop_iterate function replaces the init function in the table, and +// succeeding calls will jump directly to oop_oop_iterate. + + +template +class OopOopIterateDispatch : public AllStatic { +private: + class Table { + private: + template + static void oop_oop_iterate(OopClosureType* cl, oop obj, Klass* k) { + ((KlassType*)k)->KlassType::template oop_oop_iterate(obj, cl); + } + + template + static void init(OopClosureType* cl, oop obj, Klass* k) { + OopOopIterateDispatch::_table.set_resolve_function_and_execute(cl, obj, k); + } + + template + void set_init_function() { + _function[KlassType::ID] = &init; + } + + template + void set_resolve_function() { + // Size requirement to prevent word tearing + // when functions pointers are updated. 
+ STATIC_ASSERT(sizeof(_function[0]) == sizeof(void*)); + if (UseCompressedOops) { + _function[KlassType::ID] = &oop_oop_iterate; + } else { + _function[KlassType::ID] = &oop_oop_iterate; + } + } + + template + void set_resolve_function_and_execute(OopClosureType* cl, oop obj, Klass* k) { + set_resolve_function(); + _function[KlassType::ID](cl, obj, k); + } + + public: + void (*_function[KLASS_ID_COUNT])(OopClosureType*, oop, Klass*); + + Table(){ + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + } + }; + + static Table _table; +public: + + static void (*function(Klass* klass))(OopClosureType*, oop, Klass*) { + return _table._function[klass->id()]; + } +}; + +template +typename OopOopIterateDispatch::Table OopOopIterateDispatch::_table; + + +template +class OopOopIterateBoundedDispatch { +private: + class Table { + private: + template + static void oop_oop_iterate_bounded(OopClosureType* cl, oop obj, Klass* k, MemRegion mr) { + ((KlassType*)k)->KlassType::template oop_oop_iterate_bounded(obj, cl, mr); + } + + template + static void init(OopClosureType* cl, oop obj, Klass* k, MemRegion mr) { + OopOopIterateBoundedDispatch::_table.set_resolve_function_and_execute(cl, obj, k, mr); + } + + template + void set_init_function() { + _function[KlassType::ID] = &init; + } + + template + void set_resolve_function() { + if (UseCompressedOops) { + _function[KlassType::ID] = &oop_oop_iterate_bounded; + } else { + _function[KlassType::ID] = &oop_oop_iterate_bounded; + } + } + + template + void set_resolve_function_and_execute(OopClosureType* cl, oop obj, Klass* k, MemRegion mr) { + set_resolve_function(); + _function[KlassType::ID](cl, obj, k, mr); + } + + public: + void (*_function[KLASS_ID_COUNT])(OopClosureType*, oop, Klass*, MemRegion); + + Table(){ + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + 
} + }; + + static Table _table; +public: + + static void (*function(Klass* klass))(OopClosureType*, oop, Klass*, MemRegion) { + return _table._function[klass->id()]; + } +}; + +template +typename OopOopIterateBoundedDispatch::Table OopOopIterateBoundedDispatch::_table; + + +template +class OopOopIterateBackwardsDispatch { +private: + class Table { + private: + template + static void oop_oop_iterate_backwards(OopClosureType* cl, oop obj, Klass* k) { + ((KlassType*)k)->KlassType::template oop_oop_iterate_reverse(obj, cl); + } + + template + static void init(OopClosureType* cl, oop obj, Klass* k) { + OopOopIterateBackwardsDispatch::_table.set_resolve_function_and_execute(cl, obj, k); + } + + template + void set_init_function() { + _function[KlassType::ID] = &init; + } + + template + void set_resolve_function() { + if (UseCompressedOops) { + _function[KlassType::ID] = &oop_oop_iterate_backwards; + } else { + _function[KlassType::ID] = &oop_oop_iterate_backwards; + } + } + + template + void set_resolve_function_and_execute(OopClosureType* cl, oop obj, Klass* k) { + set_resolve_function(); + _function[KlassType::ID](cl, obj, k); + } + + public: + void (*_function[KLASS_ID_COUNT])(OopClosureType*, oop, Klass*); + + Table(){ + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + set_init_function(); + } + }; + + static Table _table; +public: + + static void (*function(Klass* klass))(OopClosureType*, oop, Klass*) { + return _table._function[klass->id()]; + } +}; + +template +typename OopOopIterateBackwardsDispatch::Table OopOopIterateBackwardsDispatch::_table; + + +template +void OopIteratorClosureDispatch::oop_oop_iterate(OopClosureType* cl, oop obj, Klass* klass) { + OopOopIterateDispatch::function(klass)(cl, obj, klass); +} + +template +void OopIteratorClosureDispatch::oop_oop_iterate(OopClosureType* cl, oop obj, Klass* klass, MemRegion mr) { + OopOopIterateBoundedDispatch::function(klass)(cl, obj, klass, mr); 
+} + +template +void OopIteratorClosureDispatch::oop_oop_iterate_backwards(OopClosureType* cl, oop obj, Klass* klass) { + OopOopIterateBackwardsDispatch::function(klass)(cl, obj, klass); +} #endif // SHARE_VM_MEMORY_ITERATOR_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/arrayKlass.cpp --- a/src/hotspot/share/oops/arrayKlass.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/arrayKlass.cpp Sat May 26 06:59:49 2018 +0200 @@ -81,7 +81,8 @@ return super()->uncached_lookup_method(name, signature, Klass::skip_overpass, private_mode); } -ArrayKlass::ArrayKlass(Symbol* name) : +ArrayKlass::ArrayKlass(Symbol* name, KlassID id) : + Klass(id), _dimension(1), _higher_dimension(NULL), _lower_dimension(NULL) { diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/arrayKlass.hpp --- a/src/hotspot/share/oops/arrayKlass.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/arrayKlass.hpp Sat May 26 06:59:49 2018 +0200 @@ -46,7 +46,7 @@ // Constructors // The constructor with the Symbol argument does the real array // initialization, the other is a dummy - ArrayKlass(Symbol* name); + ArrayKlass(Symbol* name, KlassID id); ArrayKlass() { assert(DumpSharedSpaces || UseSharedSpaces, "only for cds"); } public: @@ -147,36 +147,4 @@ void oop_verify_on(oop obj, outputStream* st); }; -// Array oop iteration macros for declarations. -// Used to generate the declarations in the *ArrayKlass header files. - -#define OOP_OOP_ITERATE_DECL_RANGE(OopClosureType, nv_suffix) \ - void oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* closure, int start, int end); - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -// Named NO_BACKWARDS because the definition used by *ArrayKlass isn't reversed, see below. -#define OOP_OOP_ITERATE_DECL_NO_BACKWARDS(OopClosureType, nv_suffix) \ - void oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure); -#endif - - -// Array oop iteration macros for definitions. 
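The lazily-initialized dispatch tables described in the iterator.inline.hpp comments above can be sketched in miniature. This is an illustrative model, not HotSpot's code: `use_narrow` stands in for `UseCompressedOops`, `Obj::kind` for `Klass::ID`, and `iterate` for the specialized `KlassType::oop_oop_iterate<T>`. The table starts out holding `init` functions; the first dispatch reads the runtime flag, installs the specialized function, and re-dispatches, so later calls jump straight to it.

```cpp
#include <cassert>

static bool use_narrow = true;             // stand-in for UseCompressedOops

enum KindID { KindA_ID, KindB_ID, KIND_COUNT };
struct Obj { KindID kind; int payload; };  // kind plays the role of Klass::ID

template <typename Closure>
class Dispatch {
  struct Table {
    int (*fn[KIND_COUNT])(Closure*, Obj*);

    // The specialized function: Kind and Narrow are compile-time constants,
    // so a real version can inline the whole iteration per combination.
    template <KindID Kind, bool Narrow>
    static int iterate(Closure* cl, Obj* obj) {
      return cl->apply(obj->payload);
    }

    // Installed initially; on first use it resolves the flag, swaps in the
    // specialized function, and re-dispatches through the table.
    template <KindID Kind>
    static int init(Closure* cl, Obj* obj) {
      Dispatch::_table.fn[Kind] =
          use_narrow ? &iterate<Kind, true> : &iterate<Kind, false>;
      return Dispatch::_table.fn[Kind](cl, obj);
    }

    Table() {
      fn[KindA_ID] = &init<KindA_ID>;
      fn[KindB_ID] = &init<KindB_ID>;
    }
  };
  static Table _table;

public:
  static int call(Closure* cl, Obj* obj) {
    return _table.fn[obj->kind](cl, obj);
  }
};

template <typename Closure>
typename Dispatch<Closure>::Table Dispatch<Closure>::_table;

// Example closure for exercising the table.
struct Summer {
  int sum = 0;
  int apply(int v) { sum += v; return sum; }
};
```

One table is instantiated per static closure type (the `_table` member of the class template), so each closure gets its own set of resolved function pointers; the single-word function-pointer slots are what the word-tearing `STATIC_ASSERT` in the real code is guarding.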
-// Used to generate the definitions in the *ArrayKlass.inline.hpp files.
-
-#define OOP_OOP_ITERATE_DEFN_RANGE(KlassType, OopClosureType, nv_suffix)                                  \
-                                                                                                          \
-void KlassType::oop_oop_iterate_range##nv_suffix(oop obj, OopClosureType* closure, int start, int end) {  \
-  oop_oop_iterate_range<nvs_to_bool(nv_suffix)>(obj, closure, start, end);                                \
-}
-
-#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS
-#define OOP_OOP_ITERATE_DEFN_NO_BACKWARDS(KlassType, OopClosureType, nv_suffix)           \
-void KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure) {  \
-  /* No reverse implementation ATM. */                                                    \
-  oop_oop_iterate<nvs_to_bool(nv_suffix)>(obj, closure);                                  \
-}
-#else
-#define OOP_OOP_ITERATE_DEFN_NO_BACKWARDS(KlassType, OopClosureType, nv_suffix)
-#endif
-
 #endif // SHARE_VM_OOPS_ARRAYKLASS_HPP
diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceClassLoaderKlass.hpp
--- a/src/hotspot/share/oops/instanceClassLoaderKlass.hpp	Mon Jun 25 12:44:52 2018 +0200
+++ b/src/hotspot/share/oops/instanceClassLoaderKlass.hpp	Sat May 26 06:59:49 2018 +0200
@@ -25,7 +25,6 @@
 #ifndef SHARE_VM_OOPS_INSTANCECLASSLOADERKLASS_HPP
 #define SHARE_VM_OOPS_INSTANCECLASSLOADERKLASS_HPP
 
-#include "gc/shared/specialized_oop_closures.hpp"
 #include "oops/instanceKlass.hpp"
 #include "utilities/macros.hpp"
 
@@ -40,8 +39,11 @@
 class InstanceClassLoaderKlass: public InstanceKlass {
   friend class VMStructs;
   friend class InstanceKlass;
- private:
-  InstanceClassLoaderKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_class_loader) {}
+public:
+  static const KlassID ID = InstanceClassLoaderKlassID;
+
+private:
+  InstanceClassLoaderKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_class_loader, ID) {}
 
 public:
   InstanceClassLoaderKlass() { assert(DumpSharedSpaces || UseSharedSpaces, "only for CDS"); }
@@ -57,39 +59,24 @@
 #endif
 
   // Oop fields (and metadata) iterators
-  //  [nv = true]  Use non-virtual calls to do_oop_nv.
-  //  [nv = false] Use virtual calls to do_oop.
   //
   // The InstanceClassLoaderKlass iterators also visit the CLD pointer (or mirror of anonymous klasses.)
 
- private:
+ public:
   // Forward iteration
   // Iterate over the oop fields and metadata.
-  template <bool nv, class OopClosureType>
+  template <typename T, class OopClosureType>
   inline void oop_oop_iterate(oop obj, OopClosureType* closure);
 
-#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS
   // Reverse iteration
   // Iterate over the oop fields and metadata.
-  template <bool nv, class OopClosureType>
+  template <typename T, class OopClosureType>
   inline void oop_oop_iterate_reverse(oop obj, OopClosureType* closure);
-#endif
 
   // Bounded range iteration
   // Iterate over the oop fields and metadata.
-  template <bool nv, class OopClosureType>
+  template <typename T, class OopClosureType>
   inline void oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr);
-
- public:
-
-  ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL)
-  ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL)
-
-#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS
-  ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_BACKWARDS)
-  ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_BACKWARDS)
-#endif
-
 };
 
 #endif // SHARE_VM_OOPS_INSTANCECLASSLOADERKLASS_HPP
diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceClassLoaderKlass.inline.hpp
--- a/src/hotspot/share/oops/instanceClassLoaderKlass.inline.hpp	Mon Jun 25 12:44:52 2018 +0200
+++ b/src/hotspot/share/oops/instanceClassLoaderKlass.inline.hpp	Sat May 26 06:59:49 2018 +0200
@@ -26,7 +26,7 @@
 #define SHARE_VM_OOPS_INSTANCECLASSLOADERKLASS_INLINE_HPP
 
 #include "classfile/javaClasses.hpp"
-#include "memory/iterator.inline.hpp"
+#include "memory/iterator.hpp"
 #include "oops/instanceClassLoaderKlass.hpp"
 #include "oops/instanceKlass.inline.hpp"
 #include "oops/oop.inline.hpp"
@@ -34,48 +34,40 @@
 #include "utilities/globalDefinitions.hpp"
 #include "utilities/macros.hpp"
 
-template <bool nv, class OopClosureType>
+template <typename T, class OopClosureType>
 inline void InstanceClassLoaderKlass::oop_oop_iterate(oop obj, OopClosureType* closure) {
-  InstanceKlass::oop_oop_iterate<nv>(obj, closure);
+  InstanceKlass::oop_oop_iterate<T>(obj, closure);
 
-  if (Devirtualizer<nv>::do_metadata(closure)) {
+  if (Devirtualizer::do_metadata(closure)) {
     ClassLoaderData* cld = java_lang_ClassLoader::loader_data(obj);
     // cld can be null if we have a non-registered class loader.
     if (cld != NULL) {
-      Devirtualizer<nv>::do_cld(closure, cld);
+      Devirtualizer::do_cld(closure, cld);
     }
   }
 }
 
-#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS
-template <bool nv, class OopClosureType>
+template <typename T, class OopClosureType>
 inline void InstanceClassLoaderKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) {
-  InstanceKlass::oop_oop_iterate_reverse<nv>(obj, closure);
+  InstanceKlass::oop_oop_iterate_reverse<T>(obj, closure);
 
-  assert(!Devirtualizer<nv>::do_metadata(closure),
+  assert(!Devirtualizer::do_metadata(closure),
       "Code to handle metadata is not implemented");
 }
-#endif // INCLUDE_OOP_OOP_ITERATE_BACKWARDS
 
-template <bool nv, class OopClosureType>
+template <typename T, class OopClosureType>
 inline void InstanceClassLoaderKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) {
-  InstanceKlass::oop_oop_iterate_bounded<nv>(obj, closure, mr);
+  InstanceKlass::oop_oop_iterate_bounded<T>(obj, closure, mr);
 
-  if (Devirtualizer<nv>::do_metadata(closure)) {
+  if (Devirtualizer::do_metadata(closure)) {
     if (mr.contains(obj)) {
       ClassLoaderData* cld = java_lang_ClassLoader::loader_data(obj);
       // cld can be null if we have a non-registered class loader.
       if (cld != NULL) {
-        Devirtualizer<nv>::do_cld(closure, cld);
+        Devirtualizer::do_cld(closure, cld);
       }
     }
   }
 }
 
-#define ALL_INSTANCE_CLASS_LOADER_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \
-  OOP_OOP_ITERATE_DEFN(          InstanceClassLoaderKlass, OopClosureType, nv_suffix)   \
-  OOP_OOP_ITERATE_DEFN_BOUNDED(  InstanceClassLoaderKlass, OopClosureType, nv_suffix)   \
-  OOP_OOP_ITERATE_DEFN_BACKWARDS(InstanceClassLoaderKlass, OopClosureType, nv_suffix)
-
 #endif // SHARE_VM_OOPS_INSTANCECLASSLOADERKLASS_INLINE_HPP
diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceKlass.cpp
--- a/src/hotspot/share/oops/instanceKlass.cpp	Mon Jun 25 12:44:52 2018 +0200
+++ b/src/hotspot/share/oops/instanceKlass.cpp	Sat May 26 06:59:49 2018 +0200
@@ -38,7 +38,6 @@
 #include "code/dependencyContext.hpp"
 #include "compiler/compileBroker.hpp"
 #include "gc/shared/collectedHeap.inline.hpp"
-#include "gc/shared/specialized_oop_closures.hpp"
 #include "interpreter/oopMapCache.hpp"
 #include "interpreter/rewriter.hpp"
 #include "jvmtifiles/jvmti.h"
@@ -401,7 +400,8 @@
   return vtable_indices;
 }
 
-InstanceKlass::InstanceKlass(const ClassFileParser& parser, unsigned kind) :
+InstanceKlass::InstanceKlass(const ClassFileParser& parser, unsigned kind, KlassID id) :
+  Klass(id),
   _static_field_size(parser.static_field_size()),
   _nonstatic_oop_map_size(nonstatic_oop_map_size(parser.total_oop_map_count())),
   _itable_len(parser.itable_size()),
diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceKlass.hpp
--- a/src/hotspot/share/oops/instanceKlass.hpp	Mon Jun 25 12:44:52 2018 +0200
+++ b/src/hotspot/share/oops/instanceKlass.hpp	Sat May 26 06:59:49 2018 +0200
@@ -29,7 +29,6 @@
 #include "classfile/classLoaderData.hpp"
 #include "classfile/moduleEntry.hpp"
 #include "classfile/packageEntry.hpp"
-#include "gc/shared/specialized_oop_closures.hpp"
 #include "memory/referenceType.hpp"
 #include "oops/annotations.hpp"
 #include "oops/constMethod.hpp"
@@ -120,8 +119,11 @@
   friend class ClassFileParser;
friend class CompileReplay; + public: + static const KlassID ID = InstanceKlassID; + protected: - InstanceKlass(const ClassFileParser& parser, unsigned kind); + InstanceKlass(const ClassFileParser& parser, unsigned kind, KlassID id = ID); public: InstanceKlass() { assert(DumpSharedSpaces || UseSharedSpaces, "only for CDS"); } @@ -1225,89 +1227,56 @@ #endif // Oop fields (and metadata) iterators - // [nv = true] Use non-virtual calls to do_oop_nv. - // [nv = false] Use virtual calls to do_oop. // // The InstanceKlass iterators also visits the Object's klass. // Forward iteration public: // Iterate over all oop fields in the oop maps. - template + template inline void oop_oop_iterate_oop_maps(oop obj, OopClosureType* closure); - protected: // Iterate over all oop fields and metadata. - template + template inline int oop_oop_iterate(oop obj, OopClosureType* closure); - private: - // Iterate over all oop fields in the oop maps. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_oop_maps_specialized(oop obj, OopClosureType* closure); - // Iterate over all oop fields in one oop map. - template + template inline void oop_oop_iterate_oop_map(OopMapBlock* map, oop obj, OopClosureType* closure); // Reverse iteration -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - public: - // Iterate over all oop fields in the oop maps. - template - inline void oop_oop_iterate_oop_maps_reverse(oop obj, OopClosureType* closure); - - protected: // Iterate over all oop fields and metadata. - template + template inline int oop_oop_iterate_reverse(oop obj, OopClosureType* closure); private: // Iterate over all oop fields in the oop maps. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_oop_maps_specialized_reverse(oop obj, OopClosureType* closure); + template + inline void oop_oop_iterate_oop_maps_reverse(oop obj, OopClosureType* closure); // Iterate over all oop fields in one oop map. 
- template + template inline void oop_oop_iterate_oop_map_reverse(OopMapBlock* map, oop obj, OopClosureType* closure); -#endif // INCLUDE_OOP_OOP_ITERATE_BACKWARDS // Bounded range iteration public: // Iterate over all oop fields in the oop maps. - template + template inline void oop_oop_iterate_oop_maps_bounded(oop obj, OopClosureType* closure, MemRegion mr); - protected: // Iterate over all oop fields and metadata. - template + template inline int oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr); private: - // Iterate over all oop fields in the oop maps. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_oop_maps_specialized_bounded(oop obj, OopClosureType* closure, MemRegion mr); - // Iterate over all oop fields in one oop map. - template + template inline void oop_oop_iterate_oop_map_bounded(OopMapBlock* map, oop obj, OopClosureType* closure, MemRegion mr); public: - - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_BACKWARDS) -#endif - u2 idnum_allocated_count() const { return _idnum_allocated_count; } public: diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceKlass.inline.hpp --- a/src/hotspot/share/oops/instanceKlass.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceKlass.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -54,30 +54,28 @@ // By force inlining the following functions, we get similar GC performance // as the previous macro based implementation. 
-template +template ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_map(OopMapBlock* map, oop obj, OopClosureType* closure) { T* p = (T*)obj->obj_field_addr_raw(map->offset()); T* const end = p + map->count(); for (; p < end; ++p) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template +template ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_map_reverse(OopMapBlock* map, oop obj, OopClosureType* closure) { T* const start = (T*)obj->obj_field_addr_raw(map->offset()); T* p = start + map->count(); while (start < p) { --p; - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -#endif -template +template ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_map_bounded(OopMapBlock* map, oop obj, OopClosureType* closure, MemRegion mr) { T* p = (T*)obj->obj_field_addr_raw(map->offset()); T* end = p + map->count(); @@ -96,111 +94,73 @@ } for (;p < end; ++p) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_specialized(oop obj, OopClosureType* closure) { +template +ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps(oop obj, OopClosureType* closure) { OopMapBlock* map = start_of_nonstatic_oop_maps(); OopMapBlock* const end_map = map + nonstatic_oop_map_count(); for (; map < end_map; ++map) { - oop_oop_iterate_oop_map(map, obj, closure); + oop_oop_iterate_oop_map(map, obj, closure); } } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_specialized_reverse(oop obj, OopClosureType* closure) { +template +ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_reverse(oop obj, OopClosureType* closure) { OopMapBlock* const start_map = start_of_nonstatic_oop_maps(); OopMapBlock* map = start_map + nonstatic_oop_map_count(); while (start_map < map) { --map; - oop_oop_iterate_oop_map_reverse(map, obj, closure); + 
oop_oop_iterate_oop_map_reverse(map, obj, closure); } } -#endif -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_specialized_bounded(oop obj, OopClosureType* closure, MemRegion mr) { +template +ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_bounded(oop obj, OopClosureType* closure, MemRegion mr) { OopMapBlock* map = start_of_nonstatic_oop_maps(); OopMapBlock* const end_map = map + nonstatic_oop_map_count(); for (;map < end_map; ++map) { - oop_oop_iterate_oop_map_bounded(map, obj, closure, mr); - } -} - -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps(oop obj, OopClosureType* closure) { - if (UseCompressedOops) { - oop_oop_iterate_oop_maps_specialized(obj, closure); - } else { - oop_oop_iterate_oop_maps_specialized(obj, closure); + oop_oop_iterate_oop_map_bounded(map, obj, closure, mr); } } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_reverse(oop obj, OopClosureType* closure) { - if (UseCompressedOops) { - oop_oop_iterate_oop_maps_specialized_reverse(obj, closure); - } else { - oop_oop_iterate_oop_maps_specialized_reverse(obj, closure); - } -} -#endif - -template -ALWAYSINLINE void InstanceKlass::oop_oop_iterate_oop_maps_bounded(oop obj, OopClosureType* closure, MemRegion mr) { - if (UseCompressedOops) { - oop_oop_iterate_oop_maps_specialized_bounded(obj, closure, mr); - } else { - oop_oop_iterate_oop_maps_specialized_bounded(obj, closure, mr); - } -} - -template +template ALWAYSINLINE int InstanceKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { - if (Devirtualizer::do_metadata(closure)) { - Devirtualizer::do_klass(closure, this); + if (Devirtualizer::do_metadata(closure)) { + Devirtualizer::do_klass(closure, this); } - oop_oop_iterate_oop_maps(obj, closure); + oop_oop_iterate_oop_maps(obj, closure); return size_helper(); } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template +template ALWAYSINLINE int InstanceKlass::oop_oop_iterate_reverse(oop 
obj, OopClosureType* closure) { - assert(!Devirtualizer::do_metadata(closure), + assert(!Devirtualizer::do_metadata(closure), "Code to handle metadata is not implemented"); - oop_oop_iterate_oop_maps_reverse(obj, closure); - - return size_helper(); -} -#endif - -template -ALWAYSINLINE int InstanceKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { - if (Devirtualizer::do_metadata(closure)) { - if (mr.contains(obj)) { - Devirtualizer::do_klass(closure, this); - } - } - - oop_oop_iterate_oop_maps_bounded(obj, closure, mr); + oop_oop_iterate_oop_maps_reverse(obj, closure); return size_helper(); } -#define ALL_INSTANCE_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN( InstanceKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BOUNDED( InstanceKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BACKWARDS(InstanceKlass, OopClosureType, nv_suffix) +template +ALWAYSINLINE int InstanceKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { + if (Devirtualizer::do_metadata(closure)) { + if (mr.contains(obj)) { + Devirtualizer::do_klass(closure, this); + } + } + + oop_oop_iterate_oop_maps_bounded(obj, closure, mr); + + return size_helper(); +} #endif // SHARE_VM_OOPS_INSTANCEKLASS_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceMirrorKlass.cpp --- a/src/hotspot/share/oops/instanceMirrorKlass.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceMirrorKlass.cpp Sat May 26 06:59:49 2018 +0200 @@ -26,7 +26,6 @@ #include "classfile/javaClasses.hpp" #include "classfile/systemDictionary.hpp" #include "gc/shared/collectedHeap.inline.hpp" -#include "gc/shared/specialized_oop_closures.hpp" #include "memory/iterator.inline.hpp" #include "memory/oopFactory.hpp" #include "oops/instanceKlass.hpp" diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceMirrorKlass.hpp --- a/src/hotspot/share/oops/instanceMirrorKlass.hpp Mon Jun 25 
12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceMirrorKlass.hpp Sat May 26 06:59:49 2018 +0200 @@ -26,7 +26,6 @@ #define SHARE_VM_OOPS_INSTANCEMIRRORKLASS_HPP #include "classfile/systemDictionary.hpp" -#include "gc/shared/specialized_oop_closures.hpp" #include "oops/instanceKlass.hpp" #include "runtime/handles.hpp" #include "utilities/macros.hpp" @@ -45,10 +44,13 @@ friend class VMStructs; friend class InstanceKlass; + public: + static const KlassID ID = InstanceMirrorKlassID; + private: static int _offset_of_static_fields; - InstanceMirrorKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_mirror) {} + InstanceMirrorKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_mirror, ID) {} public: InstanceMirrorKlass() { assert(DumpSharedSpaces || UseSharedSpaces, "only for CDS"); } @@ -98,60 +100,33 @@ #endif // Oop fields (and metadata) iterators - // [nv = true] Use non-virtual calls to do_oop_nv. - // [nv = false] Use virtual calls to do_oop. // // The InstanceMirrorKlass iterators also visit the hidden Klass pointer. - public: // Iterate over the static fields. - template + template inline void oop_oop_iterate_statics(oop obj, OopClosureType* closure); - private: - // Iterate over the static fields. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_statics_specialized(oop obj, OopClosureType* closure); - // Forward iteration // Iterate over the oop fields and metadata. - template + template inline void oop_oop_iterate(oop obj, OopClosureType* closure); - // Reverse iteration -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS // Iterate over the oop fields and metadata. - template + template inline void oop_oop_iterate_reverse(oop obj, OopClosureType* closure); -#endif - // Bounded range iteration // Iterate over the oop fields and metadata. 
- template + template inline void oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr); - // Iterate over the static fields. - template - inline void oop_oop_iterate_statics_bounded(oop obj, OopClosureType* closure, MemRegion mr); + private: // Iterate over the static fields. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_statics_specialized_bounded(oop obj, OopClosureType* closure, MemRegion mr); - - - public: - - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_BACKWARDS) -#endif + template + inline void oop_oop_iterate_statics_bounded(oop obj, OopClosureType* closure, MemRegion mr); }; #endif // SHARE_VM_OOPS_INSTANCEMIRRORKLASS_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceMirrorKlass.inline.hpp --- a/src/hotspot/share/oops/instanceMirrorKlass.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceMirrorKlass.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -33,30 +33,21 @@ #include "utilities/globalDefinitions.hpp" #include "utilities/macros.hpp" -template -void InstanceMirrorKlass::oop_oop_iterate_statics_specialized(oop obj, OopClosureType* closure) { +template +void InstanceMirrorKlass::oop_oop_iterate_statics(oop obj, OopClosureType* closure) { T* p = (T*)start_of_static_fields(obj); T* const end = p + java_lang_Class::static_oop_field_count(obj); for (; p < end; ++p) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -template -void InstanceMirrorKlass::oop_oop_iterate_statics(oop obj, OopClosureType* closure) { - if (UseCompressedOops) { - oop_oop_iterate_statics_specialized(obj, closure); - } else { - oop_oop_iterate_statics_specialized(obj, closure); - } -} +template +void 
InstanceMirrorKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { + InstanceKlass::oop_oop_iterate(obj, closure); -template -void InstanceMirrorKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { - InstanceKlass::oop_oop_iterate(obj, closure); - - if (Devirtualizer::do_metadata(closure)) { + if (Devirtualizer::do_metadata(closure)) { Klass* klass = java_lang_Class::as_Klass(obj); // We'll get NULL for primitive mirrors. if (klass != NULL) { @@ -66,9 +57,9 @@ // loader data is claimed, this is done by calling do_cld explicitly. // For non-anonymous classes the call to do_cld is made when the class // loader itself is handled. - Devirtualizer::do_cld(closure, klass->class_loader_data()); + Devirtualizer::do_cld(closure, klass->class_loader_data()); } else { - Devirtualizer::do_klass(closure, klass); + Devirtualizer::do_klass(closure, klass); } } else { // We would like to assert here (as below) that if klass has been NULL, then @@ -83,22 +74,20 @@ } } - oop_oop_iterate_statics(obj, closure); + oop_oop_iterate_statics(obj, closure); } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template +template void InstanceMirrorKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) { - InstanceKlass::oop_oop_iterate_reverse(obj, closure); + InstanceKlass::oop_oop_iterate_reverse(obj, closure); - InstanceMirrorKlass::oop_oop_iterate_statics(obj, closure); + InstanceMirrorKlass::oop_oop_iterate_statics(obj, closure); } -#endif // INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template -void InstanceMirrorKlass::oop_oop_iterate_statics_specialized_bounded(oop obj, - OopClosureType* closure, - MemRegion mr) { +template +void InstanceMirrorKlass::oop_oop_iterate_statics_bounded(oop obj, + OopClosureType* closure, + MemRegion mr) { T* p = (T*)start_of_static_fields(obj); T* end = p + java_lang_Class::static_oop_field_count(obj); @@ -116,39 +105,25 @@ } for (;p < end; ++p) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -template -void 
InstanceMirrorKlass::oop_oop_iterate_statics_bounded(oop obj, OopClosureType* closure, MemRegion mr) { - if (UseCompressedOops) { - oop_oop_iterate_statics_specialized_bounded(obj, closure, mr); - } else { - oop_oop_iterate_statics_specialized_bounded(obj, closure, mr); - } -} +template +void InstanceMirrorKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { + InstanceKlass::oop_oop_iterate_bounded(obj, closure, mr); -template -void InstanceMirrorKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { - InstanceKlass::oop_oop_iterate_bounded(obj, closure, mr); - - if (Devirtualizer::do_metadata(closure)) { + if (Devirtualizer::do_metadata(closure)) { if (mr.contains(obj)) { Klass* klass = java_lang_Class::as_Klass(obj); // We'll get NULL for primitive mirrors. if (klass != NULL) { - Devirtualizer::do_klass(closure, klass); + Devirtualizer::do_klass(closure, klass); } } } - oop_oop_iterate_statics_bounded(obj, closure, mr); + oop_oop_iterate_statics_bounded(obj, closure, mr); } -#define ALL_INSTANCE_MIRROR_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN( InstanceMirrorKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BOUNDED( InstanceMirrorKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BACKWARDS(InstanceMirrorKlass, OopClosureType, nv_suffix) - #endif // SHARE_VM_OOPS_INSTANCEMIRRORKLASS_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceRefKlass.hpp --- a/src/hotspot/share/oops/instanceRefKlass.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceRefKlass.hpp Sat May 26 06:59:49 2018 +0200 @@ -25,7 +25,6 @@ #ifndef SHARE_VM_OOPS_INSTANCEREFKLASS_HPP #define SHARE_VM_OOPS_INSTANCEREFKLASS_HPP -#include "gc/shared/specialized_oop_closures.hpp" #include "oops/instanceKlass.hpp" #include "utilities/macros.hpp" @@ -50,8 +49,11 @@ class InstanceRefKlass: public InstanceKlass { friend class InstanceKlass; + public: + 
static const KlassID ID = InstanceRefKlassID; + private: - InstanceRefKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_reference) {} + InstanceRefKlass(const ClassFileParser& parser) : InstanceKlass(parser, InstanceKlass::_misc_kind_reference, ID) {} public: InstanceRefKlass() { assert(DumpSharedSpaces || UseSharedSpaces, "only for CDS"); } @@ -67,52 +69,48 @@ #endif // Oop fields (and metadata) iterators - // [nv = true] Use non-virtual calls to do_oop_nv. - // [nv = false] Use virtual calls to do_oop. // // The InstanceRefKlass iterators also support reference processing. // Forward iteration -private: // Iterate over all oop fields and metadata. - template + template inline void oop_oop_iterate(oop obj, OopClosureType* closure); // Reverse iteration -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS // Iterate over all oop fields and metadata. - template + template inline void oop_oop_iterate_reverse(oop obj, OopClosureType* closure); -#endif // Bounded range iteration // Iterate over all oop fields and metadata. - template + template inline void oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr); + private: + // Reference processing part of the iterators. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_ref_processing_specialized(oop obj, OopClosureType* closure, Contains& contains); + template + inline void oop_oop_iterate_ref_processing(oop obj, OopClosureType* closure, Contains& contains); // Only perform reference processing if the referent object is within mr. - template + template inline void oop_oop_iterate_ref_processing_bounded(oop obj, OopClosureType* closure, MemRegion mr); // Reference processing - template + template inline void oop_oop_iterate_ref_processing(oop obj, OopClosureType* closure); // Building blocks for specialized handling. 
- template + template static void do_referent(oop obj, OopClosureType* closure, Contains& contains); - template + template static void do_next(oop obj, OopClosureType* closure, Contains& contains); - template + template static void do_discovered(oop obj, OopClosureType* closure, Contains& contains); template @@ -120,32 +118,23 @@ // Do discovery while handling InstanceRefKlasses. Reference discovery // is only done if the closure provides a ReferenceProcessor. - template + template static void oop_oop_iterate_discovery(oop obj, ReferenceType type, OopClosureType* closure, Contains& contains); // Used for a special case in G1 where the closure needs to be applied // to the discovered field. Reference discovery is also done if the // closure provides a ReferenceProcessor. - template + template static void oop_oop_iterate_discovered_and_discovery(oop obj, ReferenceType type, OopClosureType* closure, Contains& contains); // Apply the closure to all fields. No reference discovery is done. - template + template static void oop_oop_iterate_fields(oop obj, OopClosureType* closure, Contains& contains); template static void trace_reference_gc(const char *s, oop obj) NOT_DEBUG_RETURN; public: - - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_BACKWARDS) -#endif - // Update non-static oop maps so 'referent', 'nextPending' and // 'discovered' will look like non-oops static void update_nonstatic_oop_maps(Klass* k); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/instanceRefKlass.inline.hpp --- a/src/hotspot/share/oops/instanceRefKlass.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/instanceRefKlass.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -37,19 +37,19 @@ #include "utilities/globalDefinitions.hpp" #include 
"utilities/macros.hpp" -template +template void InstanceRefKlass::do_referent(oop obj, OopClosureType* closure, Contains& contains) { T* referent_addr = (T*)java_lang_ref_Reference::referent_addr_raw(obj); if (contains(referent_addr)) { - Devirtualizer::do_oop(closure, referent_addr); + Devirtualizer::do_oop(closure, referent_addr); } } -template +template void InstanceRefKlass::do_discovered(oop obj, OopClosureType* closure, Contains& contains) { T* discovered_addr = (T*)java_lang_ref_Reference::discovered_addr_raw(obj); if (contains(discovered_addr)) { - Devirtualizer::do_oop(closure, discovered_addr); + Devirtualizer::do_oop(closure, discovered_addr); } } @@ -76,7 +76,7 @@ return false; } -template +template void InstanceRefKlass::oop_oop_iterate_discovery(oop obj, ReferenceType type, OopClosureType* closure, Contains& contains) { // Try to discover reference and return if it succeeds. if (try_discover(obj, type, closure)) { @@ -84,38 +84,38 @@ } // Treat referent and discovered as normal oops. - do_referent(obj, closure, contains); - do_discovered(obj, closure, contains); + do_referent(obj, closure, contains); + do_discovered(obj, closure, contains); } -template +template void InstanceRefKlass::oop_oop_iterate_fields(oop obj, OopClosureType* closure, Contains& contains) { - do_referent(obj, closure, contains); - do_discovered(obj, closure, contains); + do_referent(obj, closure, contains); + do_discovered(obj, closure, contains); } -template +template void InstanceRefKlass::oop_oop_iterate_discovered_and_discovery(oop obj, ReferenceType type, OopClosureType* closure, Contains& contains) { // Explicitly apply closure to the discovered field. - do_discovered(obj, closure, contains); + do_discovered(obj, closure, contains); // Then do normal reference processing with discovery. 
- oop_oop_iterate_discovery(obj, type, closure, contains); + oop_oop_iterate_discovery(obj, type, closure, contains); } -template -void InstanceRefKlass::oop_oop_iterate_ref_processing_specialized(oop obj, OopClosureType* closure, Contains& contains) { +template +void InstanceRefKlass::oop_oop_iterate_ref_processing(oop obj, OopClosureType* closure, Contains& contains) { switch (closure->reference_iteration_mode()) { - case ExtendedOopClosure::DO_DISCOVERY: + case OopIterateClosure::DO_DISCOVERY: trace_reference_gc("do_discovery", obj); - oop_oop_iterate_discovery(obj, reference_type(), closure, contains); + oop_oop_iterate_discovery(obj, reference_type(), closure, contains); break; - case ExtendedOopClosure::DO_DISCOVERED_AND_DISCOVERY: + case OopIterateClosure::DO_DISCOVERED_AND_DISCOVERY: trace_reference_gc("do_discovered_and_discovery", obj); - oop_oop_iterate_discovered_and_discovery(obj, reference_type(), closure, contains); + oop_oop_iterate_discovered_and_discovery(obj, reference_type(), closure, contains); break; - case ExtendedOopClosure::DO_FIELDS: + case OopIterateClosure::DO_FIELDS: trace_reference_gc("do_fields", obj); - oop_oop_iterate_fields(obj, closure, contains); + oop_oop_iterate_fields(obj, closure, contains); break; default: ShouldNotReachHere(); @@ -127,14 +127,10 @@ template bool operator()(T* p) const { return true; } }; -template +template void InstanceRefKlass::oop_oop_iterate_ref_processing(oop obj, OopClosureType* closure) { AlwaysContains always_contains; - if (UseCompressedOops) { - oop_oop_iterate_ref_processing_specialized(obj, closure, always_contains); - } else { - oop_oop_iterate_ref_processing_specialized(obj, closure, always_contains); - } + oop_oop_iterate_ref_processing(obj, closure, always_contains); } class MrContains { @@ -144,38 +140,31 @@ template bool operator()(T* p) const { return _mr.contains(p); } }; -template +template void InstanceRefKlass::oop_oop_iterate_ref_processing_bounded(oop obj, OopClosureType* closure, 
MemRegion mr) { const MrContains contains(mr); - if (UseCompressedOops) { - oop_oop_iterate_ref_processing_specialized(obj, closure, contains); - } else { - oop_oop_iterate_ref_processing_specialized(obj, closure, contains); - } + oop_oop_iterate_ref_processing(obj, closure, contains); +} + +template +void InstanceRefKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { + InstanceKlass::oop_oop_iterate(obj, closure); + + oop_oop_iterate_ref_processing(obj, closure); } -template -void InstanceRefKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { - InstanceKlass::oop_oop_iterate(obj, closure); +template +void InstanceRefKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) { + InstanceKlass::oop_oop_iterate_reverse(obj, closure); - oop_oop_iterate_ref_processing(obj, closure); + oop_oop_iterate_ref_processing(obj, closure); } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -template -void InstanceRefKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) { - InstanceKlass::oop_oop_iterate_reverse(obj, closure); +template +void InstanceRefKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { + InstanceKlass::oop_oop_iterate_bounded(obj, closure, mr); - oop_oop_iterate_ref_processing(obj, closure); -} -#endif // INCLUDE_OOP_OOP_ITERATE_BACKWARDS - - -template -void InstanceRefKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { - InstanceKlass::oop_oop_iterate_bounded(obj, closure, mr); - - oop_oop_iterate_ref_processing_bounded(obj, closure, mr); + oop_oop_iterate_ref_processing_bounded(obj, closure, mr); } #ifdef ASSERT @@ -192,11 +181,4 @@ } #endif -// Macro to define InstanceRefKlass::oop_oop_iterate for virtual/nonvirtual for -// all closures. Macros calling macros above for each oop size. 
-#define ALL_INSTANCE_REF_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN( InstanceRefKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BOUNDED( InstanceRefKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BACKWARDS(InstanceRefKlass, OopClosureType, nv_suffix) - #endif // SHARE_VM_OOPS_INSTANCEREFKLASS_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/klass.cpp --- a/src/hotspot/share/oops/klass.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/klass.cpp Sat May 26 06:59:49 2018 +0200 @@ -190,9 +190,10 @@ // which doesn't zero out the memory before calling the constructor. // Need to set the _java_mirror field explicitly to not hit an assert that the field // should be NULL before setting it. -Klass::Klass() : _prototype_header(markOopDesc::prototype()), - _shared_class_path_index(-1), - _java_mirror(NULL) { +Klass::Klass(KlassID id) : _id(id), + _prototype_header(markOopDesc::prototype()), + _shared_class_path_index(-1), + _java_mirror(NULL) { CDS_ONLY(_shared_class_flags = 0;) CDS_JAVA_HEAP_ONLY(_archived_mirror = 0;) _primary_supers[0] = this; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/klass.hpp --- a/src/hotspot/share/oops/klass.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/klass.hpp Sat May 26 06:59:49 2018 +0200 @@ -26,7 +26,6 @@ #define SHARE_VM_OOPS_KLASS_HPP #include "classfile/classLoaderData.hpp" -#include "gc/shared/specialized_oop_closures.hpp" #include "memory/iterator.hpp" #include "memory/memRegion.hpp" #include "oops/metadata.hpp" @@ -38,6 +37,18 @@ #include "jfr/support/jfrTraceIdExtension.hpp" #endif +// Klass IDs for all subclasses of Klass +enum KlassID { + InstanceKlassID, + InstanceRefKlassID, + InstanceMirrorKlassID, + InstanceClassLoaderKlassID, + TypeArrayKlassID, + ObjArrayKlassID +}; + +const uint KLASS_ID_COUNT = 6; + // // A Klass provides: // 1: language level class object (method dictionary etc.) 
@@ -103,6 +114,9 @@ // because it is frequently queried. jint _layout_helper; + // Klass identifier used to implement devirtualized oop closure dispatching. + const KlassID _id; + // The fields _super_check_offset, _secondary_super_cache, _secondary_supers // and _primary_supers all help make fast subtype checks. See big discussion // in doc/server_compiler/checktype.txt @@ -173,11 +187,14 @@ protected: // Constructor - Klass(); + Klass(KlassID id); + Klass() : _id(KlassID(-1)) { assert(DumpSharedSpaces || UseSharedSpaces, "only for cds"); } void* operator new(size_t size, ClassLoaderData* loader_data, size_t word_size, TRAPS) throw(); public: + int id() { return _id; } + enum DefaultsLookupMode { find_defaults, skip_defaults }; enum OverpassLookupMode { find_overpass, skip_overpass }; enum StaticLookupMode { find_static, skip_static }; @@ -660,24 +677,6 @@ virtual void oop_pc_update_pointers(oop obj, ParCompactionManager* cm) = 0; #endif - // Iterators specialized to particular subtypes - // of ExtendedOopClosure, to avoid closure virtual calls. -#define Klass_OOP_OOP_ITERATE_DECL(OopClosureType, nv_suffix) \ - virtual void oop_oop_iterate##nv_suffix(oop obj, OopClosureType* closure) = 0; \ - /* Iterates "closure" over all the oops in "obj" (of type "this") within "mr". 
*/ \ - virtual void oop_oop_iterate_bounded##nv_suffix(oop obj, OopClosureType* closure, MemRegion mr) = 0; - - ALL_OOP_OOP_ITERATE_CLOSURES_1(Klass_OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(Klass_OOP_OOP_ITERATE_DECL) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -#define Klass_OOP_OOP_ITERATE_DECL_BACKWARDS(OopClosureType, nv_suffix) \ - virtual void oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure) = 0; - - ALL_OOP_OOP_ITERATE_CLOSURES_1(Klass_OOP_OOP_ITERATE_DECL_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(Klass_OOP_OOP_ITERATE_DECL_BACKWARDS) -#endif - virtual void array_klasses_do(void f(Klass* k)) {} // Return self, except for abstract classes with exactly 1 @@ -725,44 +724,4 @@ static Klass* decode_klass(narrowKlass v); }; -// Helper to convert the oop iterate macro suffixes into bool values that can be used by template functions. -#define nvs_nv_to_bool true -#define nvs_v_to_bool false -#define nvs_to_bool(nv_suffix) nvs##nv_suffix##_to_bool - -// Oop iteration macros for declarations. -// Used to generate declarations in the *Klass header files. - -#define OOP_OOP_ITERATE_DECL(OopClosureType, nv_suffix) \ - void oop_oop_iterate##nv_suffix(oop obj, OopClosureType* closure); \ - void oop_oop_iterate_bounded##nv_suffix(oop obj, OopClosureType* closure, MemRegion mr); - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -#define OOP_OOP_ITERATE_DECL_BACKWARDS(OopClosureType, nv_suffix) \ - void oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure); -#endif - - -// Oop iteration macros for definitions. -// Used to generate definitions in the *Klass.inline.hpp files. 
- -#define OOP_OOP_ITERATE_DEFN(KlassType, OopClosureType, nv_suffix) \ -void KlassType::oop_oop_iterate##nv_suffix(oop obj, OopClosureType* closure) { \ - oop_oop_iterate(obj, closure); \ -} - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -#define OOP_OOP_ITERATE_DEFN_BACKWARDS(KlassType, OopClosureType, nv_suffix) \ -void KlassType::oop_oop_iterate_backwards##nv_suffix(oop obj, OopClosureType* closure) { \ - oop_oop_iterate_reverse(obj, closure); \ -} -#else -#define OOP_OOP_ITERATE_DEFN_BACKWARDS(KlassType, OopClosureType, nv_suffix) -#endif - -#define OOP_OOP_ITERATE_DEFN_BOUNDED(KlassType, OopClosureType, nv_suffix) \ -void KlassType::oop_oop_iterate_bounded##nv_suffix(oop obj, OopClosureType* closure, MemRegion mr) { \ - oop_oop_iterate_bounded(obj, closure, mr); \ -} - #endif // SHARE_VM_OOPS_KLASS_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/objArrayKlass.cpp --- a/src/hotspot/share/oops/objArrayKlass.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/objArrayKlass.cpp Sat May 26 06:59:49 2018 +0200 @@ -29,7 +29,6 @@ #include "classfile/systemDictionary.hpp" #include "classfile/vmSymbols.hpp" #include "gc/shared/collectedHeap.inline.hpp" -#include "gc/shared/specialized_oop_closures.hpp" #include "memory/iterator.inline.hpp" #include "memory/metadataFactory.hpp" #include "memory/metaspaceClosure.hpp" @@ -142,7 +141,7 @@ return oak; } -ObjArrayKlass::ObjArrayKlass(int n, Klass* element_klass, Symbol* name) : ArrayKlass(name) { +ObjArrayKlass::ObjArrayKlass(int n, Klass* element_klass, Symbol* name) : ArrayKlass(name, ID) { this->set_dimension(n); this->set_element_klass(element_klass); // decrement refcount because object arrays are not explicitly freed. 
The diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/objArrayKlass.hpp --- a/src/hotspot/share/oops/objArrayKlass.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/objArrayKlass.hpp Sat May 26 06:59:49 2018 +0200 @@ -34,6 +34,10 @@ class ObjArrayKlass : public ArrayKlass { friend class VMStructs; friend class JVMCIVMStructs; + + public: + static const KlassID ID = ObjArrayKlassID; + private: // If you add a new field that points to any metaspace object, you // must add this field to ObjArrayKlass::metaspace_pointers_do(). @@ -127,63 +131,39 @@ #endif // Oop fields (and metadata) iterators - // [nv = true] Use non-virtual calls to do_oop_nv. - // [nv = false] Use virtual calls to do_oop. // // The ObjArrayKlass iterators also visits the Object's klass. - private: + // Iterate over oop elements and metadata. + template + inline void oop_oop_iterate(oop obj, OopClosureType* closure); // Iterate over oop elements and metadata. - template - inline void oop_oop_iterate(oop obj, OopClosureType* closure); + template + inline void oop_oop_iterate_reverse(oop obj, OopClosureType* closure); // Iterate over oop elements within mr, and metadata. - template + template inline void oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr); - // Iterate over oop elements with indices within [start, end), and metadata. - template - inline void oop_oop_iterate_range(oop obj, OopClosureType* closure, int start, int end); - // Iterate over oop elements within [start, end), and metadata. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_range_specialized(objArrayOop a, OopClosureType* closure, int start, int end); + template + inline void oop_oop_iterate_range(objArrayOop a, OopClosureType* closure, int start, int end); public: // Iterate over all oop elements. - template + template inline void oop_oop_iterate_elements(objArrayOop a, OopClosureType* closure); private: - // Iterate over all oop elements. 
- // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_elements_specialized(objArrayOop a, OopClosureType* closure); + // Iterate over all oop elements with indices within mr. + template + inline void oop_oop_iterate_elements_bounded(objArrayOop a, OopClosureType* closure, void* low, void* high); - // Iterate over all oop elements with indices within mr. - template + template inline void oop_oop_iterate_elements_bounded(objArrayOop a, OopClosureType* closure, MemRegion mr); - // Iterate over oop elements within [low, high).. - // Specialized for [T = oop] or [T = narrowOop]. - template - inline void oop_oop_iterate_elements_specialized_bounded(objArrayOop a, OopClosureType* closure, void* low, void* high); - - public: - - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_RANGE) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_RANGE) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_NO_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_NO_BACKWARDS) -#endif - // JVM support jint compute_modifier_flags(TRAPS) const; diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/objArrayKlass.inline.hpp --- a/src/hotspot/share/oops/objArrayKlass.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/objArrayKlass.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -26,7 +26,7 @@ #define SHARE_VM_OOPS_OBJARRAYKLASS_INLINE_HPP #include "memory/memRegion.hpp" -#include "memory/iterator.inline.hpp" +#include "memory/iterator.hpp" #include "oops/arrayOop.inline.hpp" #include "oops/arrayKlass.hpp" #include "oops/klass.hpp" @@ -35,18 +35,18 @@ #include "oops/oop.inline.hpp" #include "utilities/macros.hpp" -template -void ObjArrayKlass::oop_oop_iterate_elements_specialized(objArrayOop a, OopClosureType* closure) { +template +void 
ObjArrayKlass::oop_oop_iterate_elements(objArrayOop a, OopClosureType* closure) { T* p = (T*)a->base_raw(); T* const end = p + a->length(); for (;p < end; p++) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -template -void ObjArrayKlass::oop_oop_iterate_elements_specialized_bounded( +template +void ObjArrayKlass::oop_oop_iterate_elements_bounded( objArrayOop a, OopClosureType* closure, void* low, void* high) { T* const l = (T*)low; @@ -63,78 +63,58 @@ } for (;p < end; ++p) { - Devirtualizer::do_oop(closure, p); + Devirtualizer::do_oop(closure, p); } } -template -void ObjArrayKlass::oop_oop_iterate_elements(objArrayOop a, OopClosureType* closure) { - if (UseCompressedOops) { - oop_oop_iterate_elements_specialized(a, closure); - } else { - oop_oop_iterate_elements_specialized(a, closure); - } -} - -template -void ObjArrayKlass::oop_oop_iterate_elements_bounded(objArrayOop a, OopClosureType* closure, MemRegion mr) { - if (UseCompressedOops) { - oop_oop_iterate_elements_specialized_bounded(a, closure, mr.start(), mr.end()); - } else { - oop_oop_iterate_elements_specialized_bounded(a, closure, mr.start(), mr.end()); - } -} - -template +template void ObjArrayKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { assert (obj->is_array(), "obj must be array"); objArrayOop a = objArrayOop(obj); - if (Devirtualizer::do_metadata(closure)) { - Devirtualizer::do_klass(closure, obj->klass()); + if (Devirtualizer::do_metadata(closure)) { + Devirtualizer::do_klass(closure, obj->klass()); } - oop_oop_iterate_elements(a, closure); + oop_oop_iterate_elements(a, closure); } -template +template +void ObjArrayKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) { + // No reverse implementation ATM. 
+ oop_oop_iterate(obj, closure); +} + +template void ObjArrayKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { assert(obj->is_array(), "obj must be array"); objArrayOop a = objArrayOop(obj); - if (Devirtualizer::do_metadata(closure)) { - Devirtualizer::do_klass(closure, a->klass()); + if (Devirtualizer::do_metadata(closure)) { + Devirtualizer::do_klass(closure, a->klass()); } - oop_oop_iterate_elements_bounded(a, closure, mr); -} - -template -void ObjArrayKlass::oop_oop_iterate_range_specialized(objArrayOop a, OopClosureType* closure, int start, int end) { - T* low = start == 0 ? cast_from_oop(a) : a->obj_at_addr_raw(start); - T* high = (T*)a->base_raw() + end; - - oop_oop_iterate_elements_specialized_bounded(a, closure, low, high); + oop_oop_iterate_elements_bounded(a, closure, mr.start(), mr.end()); } // Like oop_oop_iterate but only iterates over a specified range and only used // for objArrayOops. -template -void ObjArrayKlass::oop_oop_iterate_range(oop obj, OopClosureType* closure, int start, int end) { - assert(obj->is_array(), "obj must be array"); - objArrayOop a = objArrayOop(obj); +template +void ObjArrayKlass::oop_oop_iterate_range(objArrayOop a, OopClosureType* closure, int start, int end) { + T* low = start == 0 ? 
cast_from_oop(a) : a->obj_at_addr_raw(start); + T* high = (T*)a->base_raw() + end; + + oop_oop_iterate_elements_bounded(a, closure, low, high); +} +// Placed here to resolve include cycle between objArrayKlass.inline.hpp and objArrayOop.inline.hpp +template +void objArrayOopDesc::oop_iterate_range(OopClosureType* blk, int start, int end) { if (UseCompressedOops) { - oop_oop_iterate_range_specialized(a, closure, start, end); + ((ObjArrayKlass*)klass())->oop_oop_iterate_range(this, blk, start, end); } else { - oop_oop_iterate_range_specialized(a, closure, start, end); + ((ObjArrayKlass*)klass())->oop_oop_iterate_range(this, blk, start, end); } } -#define ALL_OBJ_ARRAY_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN( ObjArrayKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BOUNDED( ObjArrayKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_RANGE( ObjArrayKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_NO_BACKWARDS(ObjArrayKlass, OopClosureType, nv_suffix) - #endif // SHARE_VM_OOPS_OBJARRAYKLASS_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/objArrayOop.cpp --- a/src/hotspot/share/oops/objArrayOop.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/objArrayOop.cpp Sat May 26 06:59:49 2018 +0200 @@ -23,7 +23,6 @@ */ #include "precompiled.hpp" -#include "gc/shared/specialized_oop_closures.hpp" #include "oops/access.inline.hpp" #include "oops/objArrayKlass.hpp" #include "oops/objArrayOop.inline.hpp" @@ -43,12 +42,3 @@ Klass* objArrayOopDesc::element_klass() { return ObjArrayKlass::cast(klass())->element_klass(); } - -#define ObjArrayOop_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - \ -void objArrayOopDesc::oop_iterate_range(OopClosureType* blk, int start, int end) { \ - ((ObjArrayKlass*)klass())->oop_oop_iterate_range##nv_suffix(this, blk, start, end); \ -} - -ALL_OOP_OOP_ITERATE_CLOSURES_1(ObjArrayOop_OOP_ITERATE_DEFN) 
-ALL_OOP_OOP_ITERATE_CLOSURES_2(ObjArrayOop_OOP_ITERATE_DEFN) diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/objArrayOop.hpp --- a/src/hotspot/share/oops/objArrayOop.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/objArrayOop.hpp Sat May 26 06:59:49 2018 +0200 @@ -25,7 +25,6 @@ #ifndef SHARE_VM_OOPS_OBJARRAYOOP_HPP #define SHARE_VM_OOPS_OBJARRAYOOP_HPP -#include "gc/shared/specialized_oop_closures.hpp" #include "oops/arrayOop.hpp" #include "utilities/align.hpp" @@ -107,12 +106,10 @@ Klass* element_klass(); +public: // special iterators for index ranges, returns size of object -#define ObjArrayOop_OOP_ITERATE_DECL(OopClosureType, nv_suffix) \ + template <typename OopClosureType> void oop_iterate_range(OopClosureType* blk, int start, int end); - - ALL_OOP_OOP_ITERATE_CLOSURES_1(ObjArrayOop_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(ObjArrayOop_OOP_ITERATE_DECL) }; #endif // SHARE_VM_OOPS_OBJARRAYOOP_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/oop.hpp --- a/src/hotspot/share/oops/oop.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/oop.hpp Sat May 26 06:59:49 2018 +0200 @@ -25,7 +25,6 @@ #ifndef SHARE_VM_OOPS_OOP_HPP #define SHARE_VM_OOPS_OOP_HPP -#include "gc/shared/specialized_oop_closures.hpp" #include "memory/iterator.hpp" #include "memory/memRegion.hpp" #include "oops/access.hpp" @@ -288,32 +287,20 @@ inline void ps_push_contents(PSPromotionManager* pm); #endif - - // iterators, returns size of object -#define OOP_ITERATE_DECL(OopClosureType, nv_suffix) \ - inline void oop_iterate(OopClosureType* blk); \ - inline void oop_iterate(OopClosureType* blk, MemRegion mr); // Only in mr. 
+ template <typename OopClosureType> + inline void oop_iterate(OopClosureType* cl); - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_ITERATE_DECL) - -#define OOP_ITERATE_SIZE_DECL(OopClosureType, nv_suffix) \ - inline int oop_iterate_size(OopClosureType* blk); \ - inline int oop_iterate_size(OopClosureType* blk, MemRegion mr); // Only in mr. + template <typename OopClosureType> + inline void oop_iterate(OopClosureType* cl, MemRegion mr); - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_ITERATE_SIZE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_ITERATE_SIZE_DECL) - - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS + template <typename OopClosureType> + inline int oop_iterate_size(OopClosureType* cl); -#define OOP_ITERATE_BACKWARDS_DECL(OopClosureType, nv_suffix) \ - inline void oop_iterate_backwards(OopClosureType* blk); + template <typename OopClosureType> + inline int oop_iterate_size(OopClosureType* cl, MemRegion mr); - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_ITERATE_BACKWARDS_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_ITERATE_BACKWARDS_DECL) - -#endif // INCLUDE_OOP_OOP_ITERATE_BACKWARDS + template <typename OopClosureType> + inline void oop_iterate_backwards(OopClosureType* cl); inline int oop_iterate_no_header(OopClosure* bk); inline int oop_iterate_no_header(OopClosure* bk, MemRegion mr); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/oop.inline.hpp --- a/src/hotspot/share/oops/oop.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/oop.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -432,35 +432,40 @@ } #endif // INCLUDE_PARALLELGC -#define OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - \ -void oopDesc::oop_iterate(OopClosureType* blk) { \ - klass()->oop_oop_iterate##nv_suffix(this, blk); \ -} \ - \ -void oopDesc::oop_iterate(OopClosureType* blk, MemRegion mr) { \ - klass()->oop_oop_iterate_bounded##nv_suffix(this, blk, mr); \ +template <typename OopClosureType> +void oopDesc::oop_iterate(OopClosureType* cl) { + OopIteratorClosureDispatch::oop_oop_iterate(cl, this, klass()); +} + +template <typename OopClosureType> +void oopDesc::oop_iterate(OopClosureType* cl, MemRegion mr) { + 
OopIteratorClosureDispatch::oop_oop_iterate(cl, this, klass(), mr); } -#define OOP_ITERATE_SIZE_DEFN(OopClosureType, nv_suffix) \ - \ -int oopDesc::oop_iterate_size(OopClosureType* blk) { \ - Klass* k = klass(); \ - int size = size_given_klass(k); \ - k->oop_oop_iterate##nv_suffix(this, blk); \ - return size; \ -} \ - \ -int oopDesc::oop_iterate_size(OopClosureType* blk, MemRegion mr) { \ - Klass* k = klass(); \ - int size = size_given_klass(k); \ - k->oop_oop_iterate_bounded##nv_suffix(this, blk, mr); \ - return size; \ +template <typename OopClosureType> +int oopDesc::oop_iterate_size(OopClosureType* cl) { + Klass* k = klass(); + int size = size_given_klass(k); + OopIteratorClosureDispatch::oop_oop_iterate(cl, this, k); + return size; +} + +template <typename OopClosureType> +int oopDesc::oop_iterate_size(OopClosureType* cl, MemRegion mr) { + Klass* k = klass(); + int size = size_given_klass(k); + OopIteratorClosureDispatch::oop_oop_iterate(cl, this, k, mr); + return size; +} + +template <typename OopClosureType> +void oopDesc::oop_iterate_backwards(OopClosureType* cl) { + OopIteratorClosureDispatch::oop_oop_iterate_backwards(cl, this, klass()); } int oopDesc::oop_iterate_no_header(OopClosure* blk) { // The NoHeaderExtendedOopClosure wraps the OopClosure and proxies all - // the do_oop calls, but turns off all other features in OopIterateClosure. 
NoHeaderExtendedOopClosure cl(blk); return oop_iterate_size(&cl); } @@ -470,24 +475,6 @@ return oop_iterate_size(&cl, mr); } -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS -#define OOP_ITERATE_BACKWARDS_DEFN(OopClosureType, nv_suffix) \ - \ -inline void oopDesc::oop_iterate_backwards(OopClosureType* blk) { \ - klass()->oop_oop_iterate_backwards##nv_suffix(this, blk); \ -} -#else -#define OOP_ITERATE_BACKWARDS_DEFN(OopClosureType, nv_suffix) -#endif - -#define ALL_OOPDESC_OOP_ITERATE(OopClosureType, nv_suffix) \ - OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_ITERATE_SIZE_DEFN(OopClosureType, nv_suffix) \ - OOP_ITERATE_BACKWARDS_DEFN(OopClosureType, nv_suffix) - -ALL_OOP_OOP_ITERATE_CLOSURES_1(ALL_OOPDESC_OOP_ITERATE) -ALL_OOP_OOP_ITERATE_CLOSURES_2(ALL_OOPDESC_OOP_ITERATE) - bool oopDesc::is_instanceof_or_null(oop obj, Klass* klass) { return obj == NULL || obj->klass()->is_subtype_of(klass); } diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/typeArrayKlass.cpp --- a/src/hotspot/share/oops/typeArrayKlass.cpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/typeArrayKlass.cpp Sat May 26 06:59:49 2018 +0200 @@ -86,7 +86,7 @@ return new (loader_data, size, THREAD) TypeArrayKlass(type, name); } -TypeArrayKlass::TypeArrayKlass(BasicType type, Symbol* name) : ArrayKlass(name) { +TypeArrayKlass::TypeArrayKlass(BasicType type, Symbol* name) : ArrayKlass(name, ID) { set_layout_helper(array_layout_helper(type)); assert(is_array_klass(), "sanity"); assert(is_typeArray_klass(), "sanity"); diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/typeArrayKlass.hpp --- a/src/hotspot/share/oops/typeArrayKlass.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/typeArrayKlass.hpp Sat May 26 06:59:49 2018 +0200 @@ -33,6 +33,10 @@ class TypeArrayKlass : public ArrayKlass { friend class VMStructs; + + public: + static const KlassID ID = TypeArrayKlassID; + private: jint _max_length; // maximum number of elements allowed in an array @@ -87,28 +91,20 
@@ private: // The implementation used by all oop_oop_iterate functions in TypeArrayKlasses. - inline void oop_oop_iterate_impl(oop obj, ExtendedOopClosure* closure); + inline void oop_oop_iterate_impl(oop obj, OopIterateClosure* closure); + public: // Wraps oop_oop_iterate_impl to conform to macros. - template <bool nv, typename OopClosureType> + template <typename T, typename OopClosureType> inline void oop_oop_iterate(oop obj, OopClosureType* closure); // Wraps oop_oop_iterate_impl to conform to macros. - template <bool nv, typename OopClosureType> + template <typename T, typename OopClosureType> inline void oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr); - public: - - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL) - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_RANGE) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_RANGE) - -#if INCLUDE_OOP_OOP_ITERATE_BACKWARDS - ALL_OOP_OOP_ITERATE_CLOSURES_1(OOP_OOP_ITERATE_DECL_NO_BACKWARDS) - ALL_OOP_OOP_ITERATE_CLOSURES_2(OOP_OOP_ITERATE_DECL_NO_BACKWARDS) -#endif - + // Wraps oop_oop_iterate_impl to conform to macros. + template <typename T, typename OopClosureType> + inline void oop_oop_iterate_reverse(oop obj, OopClosureType* closure); protected: // Find n'th dimensional array diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/oops/typeArrayKlass.inline.hpp --- a/src/hotspot/share/oops/typeArrayKlass.inline.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/oops/typeArrayKlass.inline.hpp Sat May 26 06:59:49 2018 +0200 @@ -31,27 +31,27 @@ #include "oops/typeArrayKlass.hpp" #include "oops/typeArrayOop.hpp" -class ExtendedOopClosure; +class OopIterateClosure; -inline void TypeArrayKlass::oop_oop_iterate_impl(oop obj, ExtendedOopClosure* closure) { +inline void TypeArrayKlass::oop_oop_iterate_impl(oop obj, OopIterateClosure* closure) { assert(obj->is_typeArray(),"must be a type array"); - // Performance tweak: We skip iterating over the klass pointer since we - // know that Universe::TypeArrayKlass never moves. 
+ // Performance tweak: We skip processing the klass pointer since all + // TypeArrayKlasses are guaranteed processed via the null class loader. } -template <bool nv, typename OopClosureType> +template <typename T, typename OopClosureType> void TypeArrayKlass::oop_oop_iterate(oop obj, OopClosureType* closure) { oop_oop_iterate_impl(obj, closure); } -template <bool nv, typename OopClosureType> +template <typename T, typename OopClosureType> void TypeArrayKlass::oop_oop_iterate_bounded(oop obj, OopClosureType* closure, MemRegion mr) { oop_oop_iterate_impl(obj, closure); } -#define ALL_TYPE_ARRAY_KLASS_OOP_OOP_ITERATE_DEFN(OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN( TypeArrayKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_BOUNDED( TypeArrayKlass, OopClosureType, nv_suffix) \ - OOP_OOP_ITERATE_DEFN_NO_BACKWARDS(TypeArrayKlass, OopClosureType, nv_suffix) +template <typename T, typename OopClosureType> +void TypeArrayKlass::oop_oop_iterate_reverse(oop obj, OopClosureType* closure) { + oop_oop_iterate_impl(obj, closure); +} #endif // SHARE_VM_OOPS_TYPEARRAYKLASS_INLINE_HPP diff -r d9132bdf6c30 -r 9d62da00bf15 src/hotspot/share/utilities/macros.hpp --- a/src/hotspot/share/utilities/macros.hpp Mon Jun 25 12:44:52 2018 +0200 +++ b/src/hotspot/share/utilities/macros.hpp Sat May 26 06:59:49 2018 +0200 @@ -239,14 +239,6 @@ #define NOT_ZGC_RETURN_(code) { return code; } #endif // INCLUDE_ZGC -#if INCLUDE_CMSGC || INCLUDE_EPSILONGC || INCLUDE_G1GC || INCLUDE_PARALLELGC || INCLUDE_ZGC -#define INCLUDE_NOT_ONLY_SERIALGC 1 -#else -#define INCLUDE_NOT_ONLY_SERIALGC 0 -#endif - -#define INCLUDE_OOP_OOP_ITERATE_BACKWARDS INCLUDE_NOT_ONLY_SERIALGC - #ifndef INCLUDE_NMT #define INCLUDE_NMT 1 #endif // INCLUDE_NMT