02/28/85  total_time_meters, ttm

Syntax as a command:  ttm {-control_arg}

Function:  prints the percentage of CPU time and the average CPU time
spent performing various system tasks.

Control arguments:
-report_reset, -rr
   generates a full report and then performs the reset operation.
-reset, -rs
   resets the metering interval for the invoking process so that the
   interval begins at the last call with -reset specified.  If -reset
   has never been given in a process, the interval begins at system
   initialization time.

Access required:
This command requires access to phcs_ or metering_gate_.

Notes:
If the total_time_meters command is given with no control argument, it
prints a full report.

The following are brief descriptions of each of the variables printed
by total_time_meters.  Average CPU times are given in microseconds.

In the descriptions below, system CPU time is the total amount of CPU
time generated by all configured CPUs.  Idle time is CPU time consumed
by an idle process; an idle process is given a CPU only if no other
(nonidle) process can be given that CPU.  System nonidle time is the
difference between system CPU time and the aggregate idle time.  In
this computation, MP idle time, work class idle time, and loading idle
time are considered overhead time and are included in system nonidle
time.  That is, system idle time is defined to include only the idle
time caused by light load; it does not include the idle time caused by
system bottlenecks, which is counted as overhead.

The three columns in the display contain, respectively, the percent of
system CPU time, the percent of system nonidle time, and the average
time per instance (for the overhead tasks).  The percents of nonidle
time are included to assist the user in comparing values measured
under light load with those measured under heavy load.  It cannot be
emphasized too often that measurements made under light load should
not be used to make tuning or configuration decisions.
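As a minimal sketch, the relationship among the three columns can be
expressed as follows.  The function name, variable names, and figures
are illustrative only; they are not taken from the Multics metering
code.

```python
# Illustrative sketch of the three total_time_meters columns; names
# and figures here are hypothetical, not from Multics itself.

def ttm_columns(task_cpu_us, system_cpu_us, light_load_idle_us, n_instances):
    """Return (% of system CPU, % of nonidle CPU, average us/instance)."""
    # Only light-load idle (NMP idle and zero idle) is excluded from
    # nonidle time; bottleneck idle (MP idle, work class idle, and
    # loading idle) counts as overhead and stays in the nonidle total.
    nonidle_us = system_cpu_us - light_load_idle_us
    pct_system = 100.0 * task_cpu_us / system_cpu_us
    pct_nonidle = 100.0 * task_cpu_us / nonidle_us
    avg_per_instance = task_cpu_us / n_instances
    return pct_system, pct_nonidle, avg_per_instance

# Hypothetical metering interval of one hour: a task consuming 1.49%
# of system CPU time shows 2.98% of nonidle time on a half-idle system.
print(ttm_columns(53_640_000, 3_600_000_000, 1_800_000_000, 60_000))
```

This also illustrates why the second column matters: the same absolute
task time yields a much larger percent of nonidle time on a lightly
loaded system, which is why light-load measurements are poor guides
for tuning.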
Several of the overhead task names are indented, to indicate that they
are part of the preceding, nonindented task.  The percents for these
indented tasks are also included in the percent for the preceding
task.  That is, in the example at the end of this description, page
faults used 1.49% of system CPU time; 0.14% was used by PC Loop Locks,
and the remaining 1.35% was used by other page fault overhead.

Page Faults
   is the percentage of CPU time spent handling page faults, and the
   average time spent per page fault.

PC Loop Locks
   is the percentage of CPU time spent looping on the page table lock,
   and the average time spent per loop lock.  This number is nonzero
   only on a multiprocessor system.  This number is also included in
   page fault time.

PC Queue
   is the percentage of CPU time spent processing the core queue, and
   the average time spent per core queue processing.  The core queue
   is used to prevent loop locks in page control on the interrupt
   side.  If an interrupt for a page I/O is received when the page
   table is locked, an entry is made in the core queue.  When the page
   table is next unlocked, the core queue is processed.

Seg Faults
   is the percentage of CPU time spent handling segment faults, and
   the average time spent per segment fault.  These values do not
   include the time spent handling page faults that occurred during
   segment fault handling.

Bound Faults
   is the percentage of CPU time spent handling bound faults, and the
   average time spent per bound fault.  These values do not include
   time spent handling page faults that occurred during bound fault
   processing.

Interrupts
   is the percentage of CPU time spent handling interrupts, and the
   average time spent per interrupt.

Other Fault
   is the percentage of CPU time spent handling certain other faults.
The fault processing time included is fault handling time that is not
charged to the user process as virtual CPU time and that does not
appear elsewhere in the total_time_meters output (i.e., it is not page
fault, segment fault, or bound fault processing).  The vast majority
of the time included as Other Fault processing is related to the
processing of connect faults and timer_runout faults.

Getwork
   is the percentage of CPU time spent in the getwork function of
   traffic control, and the average time spent per pass through
   getwork.  The getwork routine is used to select a process to run on
   a CPU and to switch address spaces to that process.  This number is
   also included in other fault time.

TC Loop Locks
   is the percentage of CPU time spent looping on a traffic control
   lock, and the average time spent per loop lock.  The locks included
   in this category are the global traffic control lock and the
   individual Active Process Table Entry (APTE) locks.  This time is
   nonzero only on a multiprocessor system.  This number is also
   included in other fault time.

Post Purging
   is the percentage of CPU time spent post purging processes that
   have lost eligibility, and the average time spent per post purge.
   Post purging a process involves moving all of its per-process pages
   that are in main memory into the "most recently used" position in
   the core map and computing the working set of the process.  This
   time is nonzero only if the "post_purge" tuning parameter is set to
   "on."  This number is also included in other fault time.

MP Idle
   is the multiprogramming idle.  This is the percentage of CPU time
   spent idling when processes are contending for eligibility, but not
   all contending processes are eligible.
This occurs because some site-defined or system limit on eligibility
has been reached, e.g., the maximum number of eligible processes
(tuning parameter "max_eligible"), the maximum number of ring 0 stacks
(tuning parameter "max_max_eligible"), the per-work-class maximum
number of eligible processes, the working set limit, etc.  MP idle is
CPU time wasted in idling because the eligibility limits are set too
low for the configuration, or because there is not enough memory in
the configuration to hold the working sets of a larger number of
eligible processes.

Work Class Idle
   is the percent of CPU time spent idling because the only processes
   that could have been run belonged to work classes that had used
   their maximum percentage of CPU time.  Setting upper limits on work
   classes causes the system to go idle rather than run processes in
   work classes that have reached their maximum percent.  This meter
   indicates the percent of CPU time wasted in idling because of the
   setting of these limits.

Loading Idle
   is the percentage of CPU time spent idling when processes are
   contending for eligibility, not all contending processes can be
   made eligible, and some eligible processes are being loaded.  Being
   loaded means wiring the two per-process pages that must be in main
   memory in order for a process to run: the first page of the
   descriptor segment (DSEG) and the first page of the process data
   segment (PDS).

NMP Idle
   is the nonmultiprogramming idle: the percentage of system CPU time
   spent idling when all processes contending for eligibility are
   eligible.  Time is charged to NMP idle under two circumstances:
   1) there are fewer processes contending for eligibility than there
   are processors in the configuration; 2) there are fewer nonwaiting
   processes than there are processors in the configuration (that is,
   most of the eligible processes are waiting for system events such
   as page faults), and no additional processes are contending for
   eligibility.
Both of these circumstances are caused by light load; therefore NMP
idle time, along with zero idle time, is subtracted from system CPU
time to get system nonidle time.

Zero Idle
   is the percentage of system CPU time spent idling when no processes
   are ready and contending for eligibility.

Other Overhead
   is the percentage of system CPU time that is overhead but cannot be
   attributed to any of the above categories of overhead.  This is
   almost entirely instrumentation artifact, due to a small but
   indeterminable amount of time between the occurrence of a fault or
   interrupt and the reading of the system clock (which begins the
   charging of time to some overhead function).  Because of hardware
   features such as cache memory and associative memory, this time is
   not constant per fault, even though the same instruction sequence
   is executed each time.  Other Overhead represents the effect of
   this nondeterminism.

Virtual CPU Time
   is the percent of CPU time delivered to user processes as virtual
   CPU time.  Virtual CPU time is time spent running user ring code
   (commands, application programs, etc.) or inner ring code in direct
   response to user ring requests (via gate calls).  System virtual
   CPU time is total system CPU time less all system overhead and idle
   time.  It is the sum of the virtual CPU time charged to all
   processes.  One objective of tuning is to maximize virtual CPU
   time.

-----------------------------------------------------------

Historical Background

This edition of the Multics software materials and documentation is
provided and donated to Massachusetts Institute of Technology by Group
BULL including BULL HN Information Systems Inc. as a contribution to
computer science knowledge.  This donation is made also to give
evidence of the common contributions of Massachusetts Institute of
Technology, Bell Laboratories, General Electric, Honeywell Information
Systems Inc., Honeywell BULL Inc., Groupe BULL and BULL HN Information
Systems Inc.
to the development of this operating system.

Multics development was initiated by Massachusetts Institute of
Technology Project MAC (1963-1970), renamed the MIT Laboratory for
Computer Science and Artificial Intelligence in the mid 1970s, under
the leadership of Professor Fernando Jose Corbato.  Users consider
that Multics provided the best software architecture for managing
computer hardware properly and for executing programs.  Many
subsequent operating systems incorporated Multics principles.

Multics was distributed from 1975 to 2000 by Group Bull in Europe, and
in the U.S. by Bull HN Information Systems Inc., as successor in
interest by change in name only to Honeywell Bull Inc. and Honeywell
Information Systems Inc.

-----------------------------------------------------------

Permission to use, copy, modify, and distribute these programs and
their documentation for any purpose and without fee is hereby granted,
provided that the below copyright notice and historical background
appear in all copies and that both the copyright notice and historical
background and this permission notice appear in supporting
documentation, and that the names of MIT, HIS, BULL or BULL HN not be
used in advertising or publicity pertaining to distribution of the
programs without specific prior written permission.

Copyright 1972 by Massachusetts Institute of Technology and
Honeywell Information Systems Inc.
Copyright 2006 by BULL HN Information Systems Inc.
Copyright 2006 by Bull SAS

All Rights Reserved