Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Inbum JUNG, Jongwoong HYUN, Joonwon LEE, "A Scheduling Policy for Blocked Programs in Multiprogrammed Shared-Memory Multiprocessors" in IEICE TRANSACTIONS on Information,
vol. E83-D, no. 9, pp. 1762-1771, September 2000.
Abstract: Shared memory multiprocessors are frequently used as compute servers with multiple parallel programs executing at the same time. In such environments, an operating system switches the contexts of multiple processes. When the operating system switches contexts, in addition to the cost of saving the context of the process being swapped out and that of bringing in the context of the new process to be run, the cache performance of processors also can be affected. The blocked algorithm improves cache performance by increasing the locality of memory references. In a blocked program using this algorithm, program performance can be significantly affected by the reuse of a block loaded into a cache memory. If frequent context switching replaces the block before it is completely reused, the cache locality in a blocked program cannot be successfully exploited. To address this problem, we propose a preemption-safe policy to utilize the cache locality of blocked programs in a multiprogrammed system. The proposed policy delays context switching until a block is fully reused within a program, but also compensates for the monopolized processor time on processor scheduling mechanisms. Our simulation results show that in a situation where blocked programs are run on multiprogrammed shared-memory multiprocessors, the proposed policy improves the performance of these programs due to a decrease in cache misses. In such situations, it also has a beneficial impact on the overall system performance due to the enhanced processor utilization.
URL: https://global.ieice.org/en_transactions/information/10.1587/e83-d_9_1762/_p
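The abstract above describes a policy that defers preemption while a cache block loaded by a blocked algorithm is still being reused, and then charges the monopolized time back to the process when it is next scheduled. The C sketch below illustrates that idea under assumed names (struct task, quantum_expired, grant_next_quantum are hypothetical); it is a minimal illustration of the mechanism as described in the abstract, not the authors' implementation.

```c
/*
 * Minimal sketch of a preemption-safe quantum check, assuming a
 * hypothetical scheduler in which each process advertises (via a flag)
 * that it is still reusing a cache block loaded by a blocked algorithm.
 * All names and fields are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

struct task {
    const char *name;
    bool  in_block_reuse;   /* set while a loaded block is still being reused */
    long  quantum_ticks;    /* ticks granted for the current time slice       */
    long  overrun_ticks;    /* extra ticks consumed past the normal quantum   */
};

/* Called on each timer tick once the current quantum has expired.
 * Returns true if the task may be preempted now.                   */
static bool quantum_expired(struct task *t)
{
    if (t->in_block_reuse) {
        /* Delay the context switch until the block is fully reused,
         * but record the monopolized time for later compensation.   */
        t->overrun_ticks++;
        return false;
    }
    return true;
}

/* When the task is rescheduled, its next quantum is shortened by the
 * time it monopolized earlier, so other processes are not penalized. */
static void grant_next_quantum(struct task *t, long base_quantum)
{
    long q = base_quantum - t->overrun_ticks;
    t->quantum_ticks = (q > 0) ? q : 1;
    t->overrun_ticks = 0;
}

int main(void)
{
    struct task t = { "blocked-program", true, 0, 0 };

    /* Three ticks arrive while the block is still being reused:
     * preemption is deferred and the overrun is accumulated.        */
    for (int i = 0; i < 3; i++)
        (void)quantum_expired(&t);

    t.in_block_reuse = false;            /* block fully reused        */
    bool preempt = quantum_expired(&t);  /* preemption now allowed    */

    grant_next_quantum(&t, 10);
    printf("preempt=%d next_quantum=%ld\n", preempt, t.quantum_ticks);
    return 0;
}
```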
@ARTICLE{e83-d_9_1762,
author={Inbum JUNG and Jongwoong HYUN and Joonwon LEE},
journal={IEICE TRANSACTIONS on Information},
title={A Scheduling Policy for Blocked Programs in Multiprogrammed Shared-Memory Multiprocessors},
year={2000},
volume={E83-D},
number={9},
pages={1762-1771},
abstract={Shared memory multiprocessors are frequently used as compute servers with multiple parallel programs executing at the same time. In such environments, an operating system switches the contexts of multiple processes. When the operating system switches contexts, in addition to the cost of saving the context of the process being swapped out and that of bringing in the context of the new process to be run, the cache performance of processors also can be affected. The blocked algorithm improves cache performance by increasing the locality of memory references. In a blocked program using this algorithm, program performance can be significantly affected by the reuse of a block loaded into a cache memory. If frequent context switching replaces the block before it is completely reused, the cache locality in a blocked program cannot be successfully exploited. To address this problem, we propose a preemption-safe policy to utilize the cache locality of blocked programs in a multiprogrammed system. The proposed policy delays context switching until a block is fully reused within a program, but also compensates for the monopolized processor time on processor scheduling mechanisms. Our simulation results show that in a situation where blocked programs are run on multiprogrammed shared-memory multiprocessors, the proposed policy improves the performance of these programs due to a decrease in cache misses. In such situations, it also has a beneficial impact on the overall system performance due to the enhanced processor utilization.},
keywords={},
doi={},
ISSN={},
month={September},}
TY - JOUR
TI - A Scheduling Policy for Blocked Programs in Multiprogrammed Shared-Memory Multiprocessors
T2 - IEICE TRANSACTIONS on Information
SP - 1762
EP - 1771
AU - Inbum JUNG
AU - Jongwoong HYUN
AU - Joonwon LEE
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E83-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2000
AB - Shared memory multiprocessors are frequently used as compute servers with multiple parallel programs executing at the same time. In such environments, an operating system switches the contexts of multiple processes. When the operating system switches contexts, in addition to the cost of saving the context of the process being swapped out and that of bringing in the context of the new process to be run, the cache performance of processors also can be affected. The blocked algorithm improves cache performance by increasing the locality of memory references. In a blocked program using this algorithm, program performance can be significantly affected by the reuse of a block loaded into a cache memory. If frequent context switching replaces the block before it is completely reused, the cache locality in a blocked program cannot be successfully exploited. To address this problem, we propose a preemption-safe policy to utilize the cache locality of blocked programs in a multiprogrammed system. The proposed policy delays context switching until a block is fully reused within a program, but also compensates for the monopolized processor time on processor scheduling mechanisms. Our simulation results show that in a situation where blocked programs are run on multiprogrammed shared-memory multiprocessors, the proposed policy improves the performance of these programs due to a decrease in cache misses. In such situations, it also has a beneficial impact on the overall system performance due to the enhanced processor utilization.
ER -