This chapter describes two types of use case to illustrate the analysis and configuration methods described elsewhere in this guide. The first example considers typical servers and the second is a typical laptop.
A typical standard server today comes with all of the necessary hardware features that Red Hat Enterprise Linux 6 supports. The first thing to consider is the kind of workload for which the server will mainly be used. Based on this information, you can decide which components can be optimized for power savings.
Regardless of the type of server, graphics performance is generally not required, so GPU power saving can remain enabled at all times.
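As one illustration, on a system with an AMD GPU handled by the in-kernel radeon driver, dynamic GPU power management can be enabled through sysfs. The card number, driver, and accepted values here are assumptions and vary with the hardware and kernel version, so treat this as a sketch rather than a universal recipe.

```bash
# Sketch: enable dynamic GPU power management, assuming card0 is an AMD GPU
# driven by the radeon driver; the path and values differ for other drivers.
if [ -w /sys/class/drm/card0/device/power_method ]; then
    echo dynpm > /sys/class/drm/card0/device/power_method
fi
```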
A webserver needs network and disk I/O. Depending on the external connection speed, 100 Mbit/s might be enough. If the machine serves mostly static pages, CPU performance might not be very important. Power-management choices might therefore include (see the sketch after this list):
- no disk or network plugins for tuned.
- ALPM turned on.
- ondemand governor turned on.
- network card limited to 100 Mbit/s.
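A minimal sketch of how these choices could be applied on a running system. The interface name eth0 is an assumption, the sysfs paths are those used by Red Hat Enterprise Linux 6, and the settings do not persist across reboots. Disabling the disk and network plugins for tuned is done in its configuration file (for example, /etc/tuned.conf) and is not shown here.

```bash
# ALPM: allow idle SATA links to drop into a low-power state.
for policy in /sys/class/scsi_host/host*/link_power_management_policy; do
    echo min_power > "$policy"
done

# Turn on the ondemand governor on every CPU.
for governor in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo ondemand > "$governor"
done

# Limit the network card (assumed to be eth0) to 100 Mbit/s.
ethtool -s eth0 speed 100 duplex full autoneg off
```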
A compute server mainly needs CPU. Power-management choices might include (see the sketch after this list):
- depending on the jobs and where data storage happens, disk or network plugins for tuned; or for batch-mode systems, fully active tuned.
- depending on utilization, perhaps the performance governor.
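A sketch for the compute-server case, assuming the cpufreq sysfs interface is available. The tuned profile named below ships with Red Hat Enterprise Linux 6, but whether a fully active tuned or the performance governor suits you better depends on the job mix.

```bash
# Option 1: prefer raw CPU speed -- performance governor on every CPU.
for governor in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$governor"
done

# Option 2 (batch-mode systems): let a fully active tuned manage the system
# with a throughput-oriented profile instead.
# tuned-adm profile throughput-performance
```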
A mailserver needs mostly disk I/O and CPU. Power-management choices might include (see the sketch after this list):
- ondemand governor turned on, because the last few percent of CPU performance are not important.
- no disk or network plugins for tuned.
- no limit on network speed, because mail is often internal and can therefore benefit from a 1 Gbit/s or 10 Gbit/s link.
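A sketch for the mail-server case. Rather than capping the link, you could confirm that autonegotiation is on and that the interface (eth0 is an assumption) has negotiated its full speed, while still using the ondemand governor.

```bash
# Keep the link at full speed: make sure autonegotiation is enabled,
# then check the negotiated speed (expect 1000Mb/s or better).
ethtool -s eth0 autoneg on
ethtool eth0 | grep -i speed

# The last few percent of CPU performance are not needed, so use ondemand.
for governor in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo ondemand > "$governor"
done
```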
Fileserver requirements are similar to those of a mailserver, but depending on the protocol used, might require more CPU performance. Typically, Samba-based servers require more CPU than NFS servers, and NFS typically requires more than iSCSI. Even so, you should be able to use the ondemand governor.
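Before settling on ondemand for a busy Samba or NFS fileserver, it may be worth confirming that the governor is available and active on every CPU, using the same cpufreq sysfs interface as in the sketches above.

```bash
# Check which governors the driver offers and which one is currently active.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```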
A directory server typically has lower requirements for disk I/O, especially if it is equipped with enough RAM. Network latency is important, although network throughput is less so. You might consider latency tuning of the network together with a lower link speed, but you should test this carefully for your particular network.
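One way to carry out that test is to measure round-trip latency at the current link speed, drop the link to a lower speed, and measure again before deciding. The interface name eth0 and the client host below are placeholders for your own environment.

```bash
# Baseline latency at the current link speed (substitute a real client host).
ping -c 100 -q client.example.com

# Drop the link to 100 Mbit/s and measure again.
ethtool -s eth0 speed 100 duplex full autoneg off
ping -c 100 -q client.example.com

# Revert if latency (or throughput) suffers.
ethtool -s eth0 autoneg on
```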