// Sleep(0) will _not yield to lower priority threads, while SwitchThreadTo() will.
//
// See the comments in synchronizer.cpp for additional remarks on spinning.
//
// In the future we might:
// -- Modify the safepoint scheme to avoid potentially unbounded spinning.
//    This is tricky as the path used by a thread exiting the JVM (say on
//    a JNI call-out) simply stores into its state field. The burden
//    is placed on the VM thread, which must poll (spin).
// -- Find something useful to do while spinning. If the safepoint is GC-related
//    we might aggressively scan the stacks of threads that are already safe.
// -- YieldTo() any still-running mutators that are ready but OFFPROC.
// -- Check system saturation. If the system is not fully saturated then
//    simply spin and avoid sleep/yield.
// -- As still-running mutators rendezvous they could unpark the sleeping
//    VMthread. This works well for still-running mutators that become
//    safe. The VMthread must still poll for mutators that call-out.
// -- Drive the policy on time-since-begin instead of iterations.
// -- Consider making the spin duration a function of the # of CPUs:
//    Spin = (((ncpus-1) * M) + K) + F(still_running)
//    Alternately, instead of counting iterations of the outer loop
//    we could count the # of threads visited in the inner loop, above.
// -- On windows consider using the return value from SwitchThreadTo()
//    to drive subsequent spin/SwitchThreadTo()/Sleep(N) decisions.
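As an aside, the `Spin = (((ncpus-1) * M) + K) + F(still_running)` idea above can be sketched as a standalone helper. This is an illustration only, not HotSpot code: the constants `kSpinPerCpu`, `kSpinBase`, and `kSpinPerMutator` are hypothetical placeholders for the M, K, and F terms.

```cpp
#include <algorithm>
#include <thread>

// Hypothetical tuning constants standing in for M, K, and F(still_running).
static const int kSpinPerCpu     = 1000;  // M: extra iterations per additional CPU
static const int kSpinBase       = 4000;  // K: baseline iteration budget
static const int kSpinPerMutator = 500;   // linear F: weight per still-running mutator

// Sketch of the proposed policy: the spin budget grows with the number of
// CPUs and with the number of mutators that have not yet reached the safepoint.
static int spin_budget(int still_running) {
  int ncpus = static_cast<int>(std::thread::hardware_concurrency());
  ncpus = std::max(ncpus, 1);  // hardware_concurrency() may return 0
  return ((ncpus - 1) * kSpinPerCpu) + kSpinBase + (still_running * kSpinPerMutator);
}
```

A time-based variant (the "time-since-begin" bullet) would compare `os::elapsed_counter()` deltas against a deadline instead of counting iterations.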

  if (int(iterations) == -1) { // overflow - something is wrong.
    // We can only overflow here when we are using global
    // polling pages. We keep this guarantee in its original
  log_info(safepoint)("Leaving safepoint region");

  // Start suspended threads
  jtiwh.rewind();
  for (; JavaThread *current = jtiwh.next(); ) {
    ThreadSafepointState* cur_state = current->safepoint_state();
    assert(cur_state->type() != ThreadSafepointState::_running, "Thread not suspended at safepoint");
    cur_state->restart();
    assert(cur_state->is_running(), "safepoint state has not been reset");
  }