For servers whose primary role is that of an application or database server, the CPU is a critical resource and often a source of performance bottlenecks. Note that high CPU utilization does not always mean the CPU is busy doing useful work; it might be waiting on another subsystem. For proper analysis, look at the system as a whole and at all subsystems, because effects in one subsystem can cascade into others.
Linux has a variety of tools to help determine whether the CPU is the bottleneck. The question is which tools to use. One tool is uptime. Its output, in particular the load averages, gives a rough idea of how busy the system has been over the past 1, 5, and 15 minutes.
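For example, a quick check might look like this (the output shown is illustrative only):

$ uptime
 10:32:01 up 14 days,  3:12,  2 users,  load average: 4.12, 3.80, 3.55

The three load-average values cover the last 1, 5, and 15 minutes; values consistently higher than the number of CPUs suggest that processes are queuing for the processor.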
Using top, you can see CPU utilization and which processes are the biggest contributors to the problem. If you have set up sar, you are collecting a lot of information over a period of time, including CPU utilization. Analyzing this information can be difficult, so use isag, which can plot a graph from sar output; otherwise, you can parse the data with a script and plot it in a spreadsheet to spot trends in CPU utilization. You can also run sar from the command line by issuing sar -u or sar -U processornumber. To gain a broader perspective of the system and of more than just the CPU subsystem, a good tool is vmstat.
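As a rough sketch, commands along the following lines could be used to collect and inspect this data (the interval and count values are arbitrary examples; current sysstat releases use -P for per-processor statistics):

$ sar -u 5 10        # overall CPU utilization: 10 samples, 5 seconds apart
$ sar -P 0 5 10      # statistics for processor 0 only
$ vmstat 5           # broader view: processes, memory, swap, I/O, and CPU every 5 seconds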
SMP-based systems can present their own set of interesting problems that can be difficult to detect. In an SMP environment there is the concept of CPU affinity, which means binding a process to a particular CPU.
The main reason this is useful is CPU cache optimization, which you achieve by keeping a process on one CPU rather than letting it move between processors. When a process moves to a different CPU, it loses the benefit of the data already held in the old CPU's cache and must repopulate the cache on the new one. A process that keeps moving between processors therefore incurs many cache misses and takes longer to finish. This scenario is very hard to detect because, when you monitor it, the CPU load appears well balanced and not necessarily peaking on any one CPU.
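One common way to bind a process to a CPU on Linux, not mentioned in the text above, is the taskset command from util-linux; the PID and program name below are only placeholders:

$ taskset -c 0 -p 12345      # restrict the already-running process with PID 12345 to CPU 0
$ taskset -c 2 ./myapp       # start ./myapp bound to CPU 2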
It is not possible to set the priority of a process directly. You can only influence it indirectly through the nice level of the process, and even this is not always possible. If a process is running too slowly, you can assign more CPU time to it by giving it a lower nice level. Of course, this means that all other programs will have fewer processor cycles and will run more slowly.
Linux supports nice levels from 19 (lowest priority) to -20 (highest priority). The default value is 0. To change the nice level of a program to a negative number (which gives it a higher priority), you must log on as or su to root.
To start the program xyz with a nice level of -5, issue the command:
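# nice -n -5 xyz

(The leading # indicates a root prompt; a negative nice level requires root privileges, as noted above.)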
1. Bind processes that cause a significant number of interrupts to a CPU.
CPU affinity enables the system administrator to bind interrupts to a group of processors or to a single physical processor (of course, this does not apply on a single-CPU system). To change the affinity of any given IRQ, go into /proc/irq/<number of the respective IRQ>/ and change the CPU mask stored in the file smp_affinity. For example, to set the affinity of IRQ 19 to the third CPU in a system (without SMT), use the command shown below.
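The value written to smp_affinity is a hexadecimal CPU bit mask; the third CPU corresponds to bit 2, that is, mask 4:

# echo 4 > /proc/irq/19/smp_affinity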
2. Let physical processors handle interrupts.
In symmetric multi-threading (SMT) systems, such as systems with IBM POWER 5+ processors that support multi-threading, it is suggested that you bind interrupt handling to the physical processor rather than to the SMT instance. The physical processors usually have the lower CPU numbers, so in a two-way system with multi-threading enabled, CPU IDs 0 and 2 would refer to the physical CPUs and 1 and 3 to the multi-threading instances. If you do not use the smp_affinity flag, you will not have to worry about this.
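Continuing that hypothetical two-way SMT example, restricting IRQ 19 to the physical CPU IDs 0 and 2 would mean writing a mask with bits 0 and 2 set, which is 5 in hexadecimal:

# echo 5 > /proc/irq/19/smp_affinity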
stress is a workload generator tool designed to subject your system to a configurable measure of CPU, memory, I/O, and disk stress.
Install the stress tool:
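The exact package source depends on your distribution; as a sketch, these are the typical commands on Debian/Ubuntu and on RHEL/CentOS with the EPEL repository enabled:

# apt-get install stress
# yum install stress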
Let's try it:
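As an illustrative run (the worker counts, memory size, and timeout below are arbitrary), the following command spawns 4 CPU workers, 2 I/O workers, and 1 memory worker for 60 seconds; watch the effect from a second terminal with top or vmstat:

$ stress --cpu 4 --io 2 --vm 1 --vm-bytes 128M --timeout 60s
$ vmstat 5

While it runs, the load averages reported by uptime should climb accordingly.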