Copyright © 2007 Red Hat, Inc. and others
The following topics are covered in this document:
Installation-Related Notes
Feature Updates
Driver Updates
Kernel-Related Updates
Other Updates
Technology Previews
Resolved Issues
Known Issues
Some updates on Red Hat Enterprise Linux 5.1 may not appear in this version of the Release Notes. An updated version may also be available at the following URL:
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/index.html
This section includes information specific to Anaconda and the installation of Red Hat Enterprise Linux 5.1.
In order to upgrade an already-installed Red Hat Enterprise Linux 5, you must use Red Hat Network to update those packages that have changed.
You may use Anaconda to perform a fresh installation of Red Hat Enterprise Linux 5.1 or to perform an upgrade from the latest updated version of Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 5.1.
If you are copying the contents of the Red Hat Enterprise Linux 5 CD-ROMs (in preparation for a network-based installation, for example) be sure to copy the CD-ROMs for the operating system only. Do not copy the Supplementary CD-ROM, or any of the layered product CD-ROMs, as this will overwrite files necessary for Anaconda's proper operation.
The contents of the Supplementary CD-ROM and other layered product CD-ROMs must be installed after Red Hat Enterprise Linux 5.1 has been installed.
When installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, do not use the kernel-xen kernel. Using this kernel on fully virtualized guests can cause your system to hang.
If you are using an Installation Number when installing Red Hat Enterprise Linux 5.1 on a fully virtualized guest, be sure to deselect the Virtualization package group during the installation. The Virtualization package group option installs the kernel-xen kernel.
Note that paravirtualized guests are not affected by this issue. Paravirtualized guests always use the kernel-xen kernel.
If you are using the Virtualized kernel when upgrading from Red Hat Enterprise Linux 5 to 5.1, you must reboot after completing the upgrade.
The hypervisors of Red Hat Enterprise Linux 5 and 5.1 are not ABI-compatible. If you do not reboot after the upgrade, the upgraded Virtualization RPMs will not match the running kernel.
iSCSI installation and boot was originally introduced in Red Hat Enterprise Linux 5 as a Technology Preview. This feature is now fully supported, with the restrictions described below.
This capability has three configurations depending on whether you are:
using a hardware iSCSI initiator (such as the QLogic qla4xxx)
using the open-iscsi initiator on a system with firmware boot support for iSCSI (such as iSCSI Boot Firmware, or a version of Open Firmware that features the iSCSI boot capability)
using the open-iscsi initiator on a system with no firmware boot support for iSCSI
If you are using a hardware iSCSI initiator, you can use the card's BIOS set-up utility to enter the IP address and other parameters required to obtain access to the remote storage. The logical units of the remote storage will be available in Anaconda as standard sd devices, with no additional set-up required.
If you need to determine the iSCSI qualified name (IQN) of the initiator in order to configure the remote storage server, follow these steps during installation:
Go to the installer page where you select which disk drives to use for the installation.
Click the option for advanced storage configuration, then the option to add an iSCSI target.
The iSCSI IQN will be displayed on that screen.
If you are using the open-iscsi software initiator on a system with firmware boot support for iSCSI, use the firmware's setup utility to enter the IP address and other parameters needed to access the remote storage. Doing this configures the system to boot from the remote iSCSI storage.
Currently, Anaconda does not access the iSCSI information held by the firmware. Instead, you must manually enter the target IP address during installation. To do so, determine the IQN of the initiator using the procedure described above. Afterwards, on the same installer page where the initiator IQN is displayed, specify the IP address of the iSCSI target you wish to install to.
After manually specifying the IP address of the iSCSI target, the logical units on the iSCSI targets will be available for installation. The initrd created by Anaconda will now obtain the IQN and IP address of the iSCSI target.
If the IQN or IP address of the iSCSI target are changed in the future, enter the iBFT or Open Firmware set-up utility on each initiator and change the corresponding parameters. Afterwards, modify the initrd (stored in the iSCSI storage) for each initiator as follows:
Expand the initrd using gunzip.
Unpack it using cpio -i.
In the init file, search for the line containing the string iscsistartup. This line also contains the IQN and IP address of the iSCSI target; update this line with the new IQN and IP address.
Re-pack the initrd using cpio -o.
Re-compress the initrd using gzip.
The ability of the operating system to obtain iSCSI information held by the Open Firmware / iBFT firmware is planned for a future release. Such an enhancement will remove the need to modify the initrd (stored in the iSCSI storage) for each initiator whenever the IP address or IQN of the iSCSI target is changed.
If you are using the open-iscsi software initiator on a system with no firmware boot support for iSCSI, use a network boot capability (such as PXE/tftp). In this case, follow the same procedure described earlier to determine the initiator IQN and specify the IP address of the iSCSI target. Once completed, copy the initrd to the network boot server and set up the system for network boot.
Similarly, if the IP address or IQN of the iSCSI target is changed, the initrd should be modified accordingly as well. To do so, use the same procedure described earlier to modify the initrd for each initiator.
The maximum capacity of an ext3 file system is now 16TB (increased from 8TB). This enhancement was originally included in Red Hat Enterprise Linux 5 as a Technology Preview, and is now fully supported in this update.
It is now possible to limit yum to install security updates only. To do so, simply install the yum-security plugin and run the following command:
yum update --security
It is now possible to restart a resource in a cluster without interrupting its parent service. This can be configured in /etc/cluster/cluster.conf on a running node using the __independent_subtree="1" attribute to tag a resource as independent.
For example:
<service name="example">
    <fs name="One" __independent_subtree="1" ...>
        <nfsexport ...>
            <nfsclient .../>
        </nfsexport>
    </fs>
    <fs name="Two" ...>
        <nfsexport ...>
            <nfsclient .../>
        </nfsexport>
        <script name="Database" .../>
    </fs>
    <ip/>
</service>
Here, two file system resources are used: One and Two. If One fails, it is restarted without interrupting Two. If Two fails, all components (One, children of One and children of Two) are restarted. At no given time are Two and its children dependent on any resource provided by One.
Note that Samba requires a specific service structure, and as such it cannot be used in a service with independent subtrees. This is also true for several other resources, so use the __independent_subtree="1" attribute with caution.
The following Virtualization updates are also included in this release:
The virtualized kernel can now use the kdump function.
AMD-V is now supported in this release. This enables live domain migration for fully virtualized guests.
The virtualized kernel can now support up to 256GB of RAM.
The in-kernel socket API has been expanded. This was done to fix a bug that occurred when running SCTP between guests.
Virtual networking is now part of libvirt, the virtualization library. libvirt has a set of commands that sets up a virtual NAT/router and private network for all local guests on a machine. This is especially useful for guests that do not need to be routable from the outside. It is also useful for developers who use Virtualization on laptops.
Note that the virtual networking capability adds a dependency on dnsmasq, which handles dhcp for the virtual network.
For more information about libvirt, refer to http://libvirt.org.
libvirt can now manage inactive virtual machines. libvirt does this by defining and undefining domains without stopping or starting them. This functionality is similar to the virsh define and virsh undefine commands.
This enhancement allows the Red Hat Virtual Machine Manager to display all available guests. This allows you to start these guests directly from the GUI.
Installing the kernel-xen package no longer leads to the creation of incorrect / incomplete elilo.conf entries.
Fully virtualized guests now support hot-migration.
The xm create command now has a graphical equivalent in virt-manager.
Nested Paging (NP) is now supported. This feature reduces the complexity of memory management in virtualized environments. In addition, NP also reduces CPU utilization in memory-intensive guests.
At present, NP is not enabled by default. If your system supports NP, it is recommended that you enable NP by booting the hypervisor with the parameter hap=1.
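As a sketch, enabling NP means passing hap=1 on the hypervisor line of a grub.conf entry; the kernel versions and root device below are illustrative:

```
# /boot/grub/grub.conf (example entry; versions and paths will differ)
title Red Hat Enterprise Linux Server (2.6.18-53.el5xen)
    root (hd0,0)
    kernel /xen.gz-2.6.18-53.el5 hap=1
    module /vmlinuz-2.6.18-53.el5xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6.18-53.el5xen.img
```

Note that hap=1 is an argument to the hypervisor (the kernel line), not to the dom0 kernel (the module lines).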
This update to the Virtualization feature also includes the capability to install and run paravirtualized 32-bit guests on 64-bit hosts. However, this capability is provided as a Technology Preview; as such, it is not supported for production use.
Shared page tables are now supported for hugetlb memory. This enables page table entries to be shared among multiple processes.
Sharing page table entries among multiple processes consumes less cache space. This improves application cache hit ratio, resulting in better application performance.
The tick_divider=<value> option is a kernel boot parameter that allows you to adjust the system clock rate while maintaining the same visible HZ timing value to user space applications.
Using the tick_divider= option allows you to reduce CPU overhead and increase efficiency at the cost of lowering the accuracy of timing operations and profiling.
Useful <values> for the standard 1000Hz clock are:
2 = 500Hz
4 = 250Hz
5 = 200Hz
8 = 125Hz
10 = 100Hz (value used by previous releases of Red Hat Enterprise Linux)
Note that the virtualized kernel does not support multiple timer rates on guests. dom0 uses a fixed timing rate set across all guests; this reduces the load that multiple tick rates could cause.
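For example, reducing the tick rate to 100Hz means appending the parameter to the kernel line in grub.conf; the kernel version and root device shown are illustrative:

```
# Example grub.conf kernel line; tick_divider=10 yields a 100Hz tick
kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 tick_divider=10
```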
Anaconda now has the capability to detect, create, and install to dm-multipath devices. To enable this feature, add the parameter mpath to the kernel boot line.
This feature was originally introduced in Red Hat Enterprise Linux 5 as a Technology Preview, and is now fully supported in this release.
Note that dm-multipath also features inbox support for the Dell MD3000. However, multiple nodes that use dm-multipath to access the MD3000 cannot perform immediate failback.
Further, exercise caution when using the Anaconda interface if your system has both multipath and non-multipath devices. Installing in such a configuration may create both types of devices in the same logical volume groups.
At present, the following restrictions apply to this feature:
If there is only one path to the boot Logical Unit Number (LUN), Anaconda installs to the SCSI device even if mpath is specified. Even after you enable multiple paths to the boot LUN and recreate the initrd, the operating system will boot from the SCSI device instead of the dm-multipath device.
However, if there are multiple paths to the boot LUN to begin with, Anaconda will correctly install to the corresponding dm-multipath device after mpath is specified in the kernel boot line.
By default, user_friendly_names is set to yes in multipath.conf. This is a required setting in the supported implementation of the dm-multipath root device. As such, setting user_friendly_names to no and recreating the initrd will result in a boot failure with the following error:
Checking filesystems
fsck.ext3: No such file or directory while trying to open /dev/mapper/mpath0p1
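For reference, the required setting corresponds to the following minimal /etc/multipath.conf fragment; on a dm-multipath root device, leave it at yes:

```
defaults {
        user_friendly_names yes
}
```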
The ability to boot from a SAN disk device is now supported. In this case, SAN refers to a Fibre Channel or iSCSI interface. This capability also features support for system-to-storage connection through multiple paths using dm-multipath.
In configurations that use multiple host bus adapters (HBA), you may need to set the system BIOS to boot from another adapter if all paths through the current adapter fail.
nfsroot is fully supported in this update. This allows users to run Red Hat Enterprise Linux 5.1 with its root file system (/) mounted via NFS.
nfsroot was originally introduced in Red Hat Enterprise Linux 5 as a subset of the Technology Preview feature Stateless Linux. The full implementation of Stateless Linux remains a Technology Preview.
At present, nfsroot has the following restrictions:
Each client must have its own separate root file system over the NFS server. This restriction applies even when read-only root is in use.
SWAP is not supported over NFS.
SELinux cannot be enabled on nfsroot clients. In general, Red Hat does not recommend disabling SELinux. As such, customers must carefully consider the security implications of this action.
Refer to the following procedure on how to set up nfsroot. This procedure assumes that your network device is eth0 and the associated network driver is tg3. You may need to adjust according to your system configuration:
Create the initrd in your home directory using the following command:
mkinitrd --with=tg3 --rootfs=nfs --net-dev=eth0 --rootdev=<nfs server ip>:/<path to nfsroot> ~/initrd-<kernel-version>.img <kernel-version>
This initrd must be created using the Red Hat Enterprise Linux 5.1 kernel.
Next, create a zImage.initrd image from the initrd generated earlier. zImage.initrd is a compressed kernel and initrd in one image. Use the following command:
mkzimage /boot/System.map-<kernel-version> ~/initrd-<kernel-version>.img /usr/share/ppc64-utils/zImage.stub ~/zImage.initrd-<kernel-version>
Copy the created zImage.initrd-<kernel-version> to an exportable location on your tftp server.
Ensure that the exported nfsroot file system on the nfs server contains the necessary binaries and modules. These binaries and modules must correspond to the version of the kernel used to create the initrd in the first step.
Configure the DHCP server to point the client to the target zImage.initrd-<kernel-version>.
To do this, add the following entries to the /etc/dhcpd.conf file of the DHCP server:
next-server <tftp hostname/IP address>; filename "<tftp-path>/zImage.initrd";
Note that <tftp-path> should specify the path to the zImage.initrd from within the tftp-exported directory. For example, if the absolute path to the zImage.initrd is /tftpboot/mykernels/zImage.initrd and /tftpboot/ is the tftp-exported directory, then <tftp-path> should be mykernels/zImage.initrd.
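Putting the pieces together, a hypothetical per-host stanza in /etc/dhcpd.conf might look like the following; the MAC address, IP addresses, and path are examples only:

```
host nfsroot-client {
        hardware ethernet 00:16:3e:00:00:01;    # example client MAC
        fixed-address 192.168.1.50;             # example client IP
        next-server 192.168.1.10;               # tftp server
        filename "mykernels/zImage.initrd";     # path within the tftp-exported directory
}
```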
Finally, set your system's boot configuration parameters to make it boot first from the network device (in this example, the network device is eth0).
GFS2 is an incremental advancement of GFS. This update applies several significant improvements that require a change to the on-disk file system format. GFS file systems can be converted to GFS2 using the utility gfs2_convert, which updates the metadata of a GFS file system accordingly.
GFS2 was originally released in Red Hat Enterprise Linux 5 as a Technology Preview, and is now fully supported in this update. Benchmark tests indicate faster performance on the following:
heavy usage in a single directory and faster directory scans (Postmark benchmark)
synchronous I/O operations (fstest benchmark test indicates improved performance for messaging applications like TIBCO)
cached reads, as there is no longer any locking overhead
direct I/O to preallocated files
NFS file handle lookups
df, as allocation information is now cached
In addition, GFS2 also features the following changes:
journals are now plain (though hidden) files instead of metadata. Journals can now be dynamically added as additional servers mount a file system.
quotas are now enabled and disabled by the mount option quota=<on|off|account>
quiesce is no longer needed on a cluster to replay journals for failure recovery
nanosecond timestamps are now supported
similar to ext3, GFS2 now supports the data=ordered mode
attribute settings lsattr() and chattr() are now supported via standard ioctl()
file system sizes above 16TB are now supported
GFS2 is a standard file system, and can be used in non-clustered configurations
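As a sketch, enabling quota enforcement on a GFS2 mount uses the quota= mount option described above; the device and mount point in this /etc/fstab entry are illustrative:

```
# /etc/fstab entry; /dev/vg0/gfs2vol and /mnt/gfs2 are examples
/dev/vg0/gfs2vol  /mnt/gfs2  gfs2  quota=on  0 0
```

Use quota=account to track usage without enforcing limits, or quota=off to disable quotas entirely.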
The Driver Update Program (DUP) was designed to allow third-party vendors (such as OEMs) to add their own device drivers and other Linux Kernel Modules to Red Hat Enterprise Linux 5 systems using regular RPM packages as the distribution containers.
Red Hat Enterprise Linux 5.1 applies several updates to the DUP, most notably:
install-time Driver Update RPMs through Driver Update Disks is now supported
bootpath Driver Updates affecting the system bootpath are now supported
support for third-party packaging of Advanced Linux Sound Architecture (ALSA) is now deprecated
Further, various updates were applied to the approved kernel ABI symbol whitelists. These whitelists are used by packaging drivers to determine which symbols and data structures provided by the kernel can be used in a third-party driver.
For more information, refer to http://www.kerneldrivers.org/RedHatKernelModulePackages.
acpi: updated ibm_acpi module to address several ACPI and docking station issues with Lenovo laptops.
ipmi: Polling kthread no longer runs when hardware interrupt is assigned to a Baseboard Management Controller.
sata: SATA/SAS upgraded to version 2.6.22-rc3.
openib and openmpi: upgraded to OFED (OpenFabrics Enterprise Distribution) version 1.2.
powernow-k8: upgraded to version 2.0.0 to fully support Greyhound.
xinput: added to enable full RSA support.
aic94xx: upgraded to version 1.0.2-1, in line with an upgrade of the embedded sequencer firmware to v17. These updates apply the following changes:
fixed ascb race condition on platforms with expanders
added REQ_TASK_ABORT and DEVICE_RESET handlers
physical ports are now cleaned up properly after a discovery error
phys can now be enabled and disabled through sysfs
extended use of DDB lock to prevent race condition of DDB
ALSA updated to version 1.0.14. This update applies the following fixes:
fixed noise problem on the IBM Taroko (M50)
Realtek ALC861 is now supported
fixed a muting problem on xw8600 and xw6600
ADI 1884 Audio is now supported
fixed an audio configuration problem on xw4600
added function calls to set maximum read request size for PCIX and PCI-Express
IBM System P machines now support PCI-Express hotplugging
added necessary drivers and PCI ID to support SB600 SMBus
e1000 driver: updated to version 7.3.20-k2 to support I/OAT-enabled chipsets.
bnx2 driver: updated to version 1.5.11 to support 5709 hardware.
B44 ethernet driver: backported from upstream version 2.6.22-rc4 to apply the following changes:
several endianness fixes were made
DMA_30BIT_MASK constant is now used
skb_copy_from_linear_data_offset() is now used
spin_lock_irqsave() now features safer interrupt disabling
simple error checking is performed during resume
several fixes to multicast were applied
chip reset now takes longer than previously anticipated
Marvell sky2 driver: updated to version 1.14 to fix a bug that causes a kernel panic if the ifup/ifdown commands are executed repeatedly.
forcedeth-0.60 driver: now included in this release. This applies several critical bug fixes for customers using NVIDIA's MCP55 motherboard chipsets and corresponding onboard NIC.
ixgb driver: updated to the latest upstream version (1.0.126).
netxen_nic driver: version 3.4.2-2 added to enable support for NetXen 10GbE network cards.
Chelsio 10G Ethernet Network Controller is now supported.
added support for PCI error recovery to the s2io device.
Broadcom wireless ethernet driver now supports the PCI ID for the nx6325 card.
fixed a bug that caused an ASSERTION FAILED error when attempting to start a BCM4306 via ifup.
ixgb driver: updated to add EEH PCI error recovery support for the Intel 10-gigabit ethernet card. For more information, refer to /usr/share/doc/kernel-doc-<kernel version>/Documentation/pci-error-recovery.txt.
qla3xxx driver: re-enabled and updated to version 2.03.00-k3 to provide networking support for QLogic iSCSI adapters without using iSCSI.
Intel PRO/Wireless 3945ABG network driver: updated to version 1.2.0. This update resolves several issues, including a soft lockup bug that could occur under certain circumstances on some laptops.
qla2xxx: driver upgraded to version 8.01.07-k6. This applies several changes, most notably:
iIDMA is now supported
the following Fibre Channel attributes are now supported:
symbolic nodename
system hostname
fabric name
host port state
trace-control async events are no longer logged
reset handling logic has been corrected
MSI-X is now supported
IRQ-0 assignments are now handled per system
NVRAM updates immediately go into effect
This release includes an update of the IPMI driver set to include the upstream changes as of version 2.6.21.3, with some patches included from 2.6.22-rc-4. This update features the following changes (among others):
fixed uninitialized data bug in ipmi_si_intf
kipmid is no longer started if another driver supports interrupts
users are now allowed to override the kernel daemon enable through force_kipmid
per-channel command registration is now supported
MAX_IPMI_INTERFACES is no longer used
hot system interface removal is now supported
added a Maintenance Mode to support firmware updates
added poweroff support for the pigeonpoint IPMC
BT subdriver can now survive long timeouts
added pci_remove handling for proper cleanup on a hot remove
For information about new module parameters, refer to /usr/share/doc/kernel-doc-<kernel version>/Documentation/IPMI.txt.
ported SCSI blacklist from Red Hat Enterprise Linux 4 to this release.
added PCI IDs for aic79xx driver.
aacraid driver: updated to version 1.1.5-2437 to support PRIMERGY RX800S2 and RX800S3.
megaraid_sas driver: updated to version 3.10. This update defines the entry point for bios_param, adds an IOCTL memory pool, and applies several minor bug fixes.
Emulex lpfc driver: updated to version 8.1.10.9. This update applies several changes, most notably:
fixed host_lock management in the ioctl paths
the AMD chipset is now automatically detected, with the DMA length reduced to 1024 bytes for it
nodes are no longer removed during dev_loss_tmo if discovery is active
8GB link speeds are now enabled
qla4xxx driver updated to apply the following changes:
added support for IPV6, QLE406x and ioctl module
fixed a mutex_lock bug that could cause lockups
resolved lockup issues of qla4xxx and qla3xxx when attempting to load/unload either interface
mpt fusion drivers: updated to version 3.04.04. This update applies several changes, most notably:
fixed several error handling bugs
mptsas now serializes target resets
mptsas and mptfc now support LUNs and targets greater than 255
fixed an LSI mptspi driver regression that resulted in extremely slow DVD driver performance
when an LSI SCSI device returns a BUSY status, I/O attempts no longer fail after several retries
RAID arrays are no longer unavailable after auto-rebuild
arcmsr driver: included to provide support for Areca RAID controllers.
3w-9xxx module: updated to correctly support 3ware 9650SE.
The CIFS client has been updated to version 1.48aRH. This is based upon the 1.48a release, with patches that apply the following changes:
the mount option sec=none results in an anonymous mount
CIFS now honors the umask when POSIX extensions are enabled
fixed sec= mount options that request packet signing
Note that for users of the EMC Celerra product (NAS Code 5.5.26.x and below), the CIFS client hangs when accessing shares on EMC NAS. This issue is characterized by the following kernel messages:
kernel: CIFS VFS: server not responding
kernel: CIFS VFS: No response for cmd 162 mid 380
kernel: CIFS VFS: RFC1001 size 135 bigger than SMB for Mid=384
After a CIFS mount, it becomes impossible to read/write any file on it and any application that attempts an I/O on the mountpoint will hang. To resolve this issue, upgrade to NAS Code 5.5.27.5 or later (use EMC Primus case number emc165978).
MODULE_FIRMWARE tags are now supported.
ICH9 controllers are now supported.
Greyhound processors are now supported in CPUID calls.
Oprofile now supports new Greyhound performance counter events.
Directed DIAG is now supported to improve z/VM utilization.
The Intel graphics chipset is now supported through the DRM kernel module. Further, the DRM API has been upgraded to version 1.3 to support direct rendering.
Updates to ACPI power management have improved S3 suspend-to-RAM and S4 hibernate.
gaim is now called pidgin.
Intel microcode updated to version 1.17. This adds support for new Intel processors.
Implicit active-active failover using dm-multipath on EMC Clariion storage is now supported.
The Chinese font Zysong is no longer installed as part of the fonts-chinese package. Zysong is now packaged separately as fonts-chinese-zysong. The fonts-chinese-zysong package is located in the Supplementary CD.
Note that the fonts-chinese-zysong package is needed to support the Chinese National Standard GB18030.
The Challenge Handshake Authentication Protocol (CHAP) username and password have a character limit of 256 each.
pump is deprecated in this update. As such, configuring your network interface through netconfig may result in broken ifcfg scripts.
To properly configure your network interface, use system-config-network instead. Installing the updated system-config-network package removes netconfig.
rpm --aid is no longer supported. It is recommended that you use yum when updating and installing packages.
Technology Preview features are currently not supported under Red Hat Enterprise Linux 5.1 subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure.
Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues.
During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release.
Stateless Linux is a new way of thinking about how a system should be run and managed, designed to simplify provisioning and management of large numbers of systems by making them easily replaceable. This is accomplished primarily by establishing prepared system images which get replicated and managed across a large number of stateless systems, running the operating system in a read-only manner (refer to /etc/sysconfig/readonly-root for more details).
In its current state of development, the Stateless features are subsets of the intended goals. As such, the capability remains as Technology Preview.
The following is a list of the initial capabilities included in Red Hat Enterprise Linux 5:
running a stateless image over NFS
running a stateless image via loopback over NFS
running on iSCSI
It is highly recommended that those interested in testing stateless code read the HOWTO at http://fedoraproject.org/wiki/StatelessLinuxHOWTO and join stateless-list@redhat.com.
The enabling infrastructure pieces for Stateless Linux were originally introduced in Red Hat Enterprise Linux 5.
AIGLX is a Technology Preview feature of the otherwise fully supported X server. It aims to enable GL-accelerated effects on a standard desktop. The project consists of the following:
a lightly modified X server
an updated Mesa package that adds new protocol support
By installing these components, you can have GL-accelerated effects on your desktop with very few changes, as well as the ability to enable and disable them at will without replacing your X server. AIGLX also enables remote GLX applications to take advantage of hardware GLX acceleration.
The devicescape stack enables the iwlwifi 4965GN wireless driver. This stack allows certain wireless devices to connect to any Wi-Fi network.
This stack has a code base that is yet to be accepted upstream. In addition, the stability of this stack is yet to be conclusively verified through testing. As such, this stack is included in this release as a Technology Preview.
FS-Cache is a local caching facility for remote file systems that allows users to cache NFS data on a locally mounted disk. To set up the FS-Cache facility, install the cachefilesd RPM and refer to the instructions in /usr/share/doc/cachefilesd-<version>/README.
Replace <version> with the corresponding version of the cachefilesd package installed.
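Once cachefilesd is configured and running, an NFS mount can opt into the cache via the fsc mount option; a hypothetical /etc/fstab entry (server and paths are examples) would look like:

```
server.example.com:/export  /mnt/nfs  nfs  fsc,defaults  0 0
```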
Systemtap provides free software (GPL) infrastructure to simplify the gathering of information about the running Linux system. This assists the diagnosis of a performance or functional problem. With the help of systemtap, developers no longer need to go through the tedious and disruptive instrument, recompile, install, and reboot sequence that may be otherwise required to collect data.
The Linux target (tgt) framework allows a system to serve block-level SCSI storage to other systems that have a SCSI initiator. This capability is being initially deployed as a Linux iSCSI target, serving storage over a network to any iSCSI initiator.
To set up the iSCSI target, install the scsi-target-utils RPM and refer to the instructions in:
/usr/share/doc/scsi-target-utils-<version>/README
/usr/share/doc/scsi-target-utils-<version>/README.iscsi
Replace <version> with the corresponding version of the package installed.
For more information, refer to man tgtadm.
The firewire-sbp2 module is included in this update as a Technology Preview. This module enables connectivity with FireWire storage devices and scanners.
At present, FireWire does not support the following:
IPv4
pcilynx host controllers
multi-LUN storage devices
non-exclusive access to storage devices
In addition, the following issues still exist in this version of FireWire:
a memory leak in the SBP2 driver may cause the machine to become unresponsive.
some code in this version does not work properly on big-endian machines. This could lead to unexpected behavior on PowerPC systems.
A SATA bug that caused SATA-equipped systems to pause during the boot process and display an error before resuming is now fixed.
In multi-boot systems, parted now preserves the starting sector of the first primary partition where Windows Vista™ is installed. As such, when setting up a multi-boot system with both Red Hat Enterprise Linux 5.1 and Windows Vista™, the latter is no longer rendered unbootable.
rmmod xennet no longer causes domU to crash.
4-socket AMD Sun Blade X8400 Server Module systems that do not have memory configured in node 0 no longer panic during boot.
conga and luci can now be used to create and configure failover domains.
When installing the Cluster Storage group through yum, the transaction no longer fails.
During installation, incorrect SELinux contexts are no longer assigned to /var/log/faillog and /var/log/tallylog.
Installing Red Hat Enterprise Linux 5.1 using split installation media (for example, CD or NFSISO) no longer causes an error in the installation of amanda-server.
EDAC now reports the correct amount of memory on the latest k8 processors.
Logging in remotely to a Gnome desktop via gdm no longer causes the login screen to hang.
A bug in autofs that prevented multi-mounts from working properly is now fixed.
Running tvtime and xawtv with the bttv kernel module no longer causes the system to freeze.
Several patches to utrace apply the following fixes:
fixed a bug that caused a crash due to a race condition when using ptrace
fixed a regression that resulted in erroneous EIO returns from some PTRACE_PEEKUSR calls
fixed a regression that prevented some wait4 calls from waking up when a child exited under certain circumstances
fixed a regression that sometimes prevented SIGKILL from terminating a process; this occurred when ptrace was performed on a process under certain circumstances
A RealTime Clock (RTC) bug that prevented alarms and periodic RTC interrupts from working properly is now fixed.
The first time the Release Notes button is clicked in Anaconda, a delay occurs while the window renders the Release Notes. During this delay, a seemingly empty list appears in the window. The rendering normally completes quickly, so most users may not notice this. This delay occurs mostly because the package installation phase is the most CPU-intensive phase of installation.
Some machines that use NVIDIA graphics cards may display corrupted graphics or fonts when using the graphical installer or during a graphical login. To work around this, switch to a virtual console and back to the original X host.
Host bus adapters that use the MegaRAID driver must be set to operate in "Mass Storage" emulation mode, not in "I2O" emulation mode. To do this, perform the following steps:
Enter the MegaRAID BIOS Set Up Utility.
Enter the Adapter settings menu.
Under Other Adapter Options, select Emulation and set it to Mass Storage.
If the adapter is incorrectly set to "I2O" emulation, the system will attempt to load the i2o driver. This will fail, and prevent the proper driver from being loaded.
Previous Red Hat Enterprise Linux releases generally do not attempt to load the I2O driver before the MegaRAID driver. Regardless of this, the hardware should always be set to "Mass Storage" emulation mode when used with Linux.
Laptops equipped with the Cisco Aironet MPI-350 wireless card may hang while trying to obtain a DHCP address during any network-based installation using the wired ethernet port.
To work around this, use local media for your installation. Alternatively, you can disable the wireless card in the laptop BIOS prior to installation (you can re-enable the wireless card after completing the installation).
Currently, system-config-kickstart does not support package selection and deselection. When using system-config-kickstart, the Package Selection option indicates that it is disabled. This is because system-config-kickstart uses yum to gather group information, but is unable to configure yum to connect to Red Hat Network.
At present, you need to update package sections in your kickstart files manually. When using system-config-kickstart to open a kickstart file, it will preserve all package information in it and write it back out when you save.
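A manually maintained package section looks like the following; the group and package names are illustrative, not a recommended selection:

```
%packages
@ base
@ core
@ text-internet
vim-enhanced
-system-config-kickstart
```

Lines beginning with @ select package groups, bare names select individual packages, and a leading - deselects a package.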
Boot-time logging to /var/log/boot.log is not available in this update of Red Hat Enterprise Linux 5. An equivalent functionality will be added in a future update.
When upgrading from Red Hat Enterprise Linux 4 to Red Hat Enterprise Linux 5, the Deployment Guide is not automatically installed. You need to use pirut to manually install it after completing the upgrade.
The system may not successfully reboot into a kexec/kdump kernel if X is running and using a driver other than vesa. This problem only exists with ATI Rage XL graphics chipsets.
If X is running on a system equipped with ATI Rage XL, ensure that it is using the vesa driver in order to successfully reboot into a kexec/kdump kernel.
Installing the Virtualization feature may cause a "time went backwards" warning on HP systems with model numbers xw9300 and xw9400.
To work around this issue for xw9400 machines, configure the BIOS settings to enable the HPET timer. Note that this option is not available on xw9300 machines.
This will be resolved in an upcoming BIOS update by HP.
When using Red Hat Enterprise Linux 5 on a machine with an nVidia CK804 chipset installed, the following kernel messages may appear:
kernel: assign_interrupt_mode Found MSI capability
kernel: pcie_portdrv_probe->Dev[005d:10de] has invalid IRQ. Check vendor BIOS
These messages indicate that certain PCI-E ports are not requesting IRQs. Further, these messages do not, in any way, affect the operation of the machine.
Removable storage devices (such as CDs and DVDs) do not automatically mount when you are logged in as root. As such, you will need to manually mount the device through the graphical file manager.
Alternatively, you can run the following command to mount a device to /media:
mount /dev/<device name> /media
The Calgary IOMMU chip is not supported by default in this update. To enable support for this chip, use the kernel command line option iommu=calgary.
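For example, the option can be appended to the kernel line in /boot/grub/grub.conf; the kernel version and root device shown below are illustrative:

```
title Red Hat Enterprise Linux Server (2.6.18-53.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 iommu=calgary
        initrd /initrd-2.6.18-53.el5.img
```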
The IBM System z does not provide a traditional Unix-style physical console. As such, Red Hat Enterprise Linux 5 for the IBM System z does not support the firstboot functionality during initial program load.
To properly initialize setup for Red Hat Enterprise Linux 5 on the IBM System z, run the following commands after installation:
/usr/bin/setup — provided by the setuptool package.
/usr/bin/rhn_register — provided by the rhn-setup package.
When upgrading from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 5.1 via Red Hat Network, yum may not prompt you to import the redhat-beta key. As such, it is advised that you import the redhat-beta key manually prior to upgrading. To do this, run the following command:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta
When a LUN is deleted on a configured filer, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the LUN has now become stale.
To work around this, delete all device and mpath link entries in /etc/lvm/.cache specific to the stale LUN.
To find out what these entries are, run the following command:
ls -l /dev/mpath | grep <stale LUN>
For example, if <stale LUN> is 3600d0230003414f30000203a7bc41a00, the following results may appear:
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5
This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and dm-5.
As such, the following lines should be deleted from /etc/lvm/.cache:
/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
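The deletion can be performed with sed. The sketch below demonstrates the commands against sample cache contents created in a temporary file; in practice, run the same sed commands against /etc/lvm/.cache itself, substituting your own WWID and the dm-N names reported by the ls command above:

```shell
# Sample cache contents mirroring the stale LUN example above.
CACHE=$(mktemp)
cat > "$CACHE" <<'EOF'
/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
/dev/sda
EOF
STALE_WWID=3600d0230003414f30000203a7bc41a00
# Delete every line referencing the stale WWID, then its dm-N link entries
# (dm-4 and dm-5 in this example).
sed -i "\|$STALE_WWID|d" "$CACHE"
sed -i '\|^/dev/dm-4$|d; \|^/dev/dm-5$|d' "$CACHE"
cat "$CACHE"
```

Only entries unrelated to the stale LUN (here, /dev/sda) remain afterwards.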
When attempting to create a fully virtualized Windows™ guest from a CD or DVD, the second stage of the guest install might not continue upon reboot.
To work around this, edit /etc/xen/<name of guest machine> by properly appending an entry for the CD / DVD device.
If an installation to a simple file is used as a virtual device, the disk line of /etc/xen/<name of guest machine> will read like the following:
disk = [ 'file:/PATH-OF-SIMPLE-FILE,hda,w']
A DVD-ROM device located on the host as /dev/dvd can be made available to stage 2 of the installation as hdc by appending an entry like 'phy:/dev/dvd,hdc:cdrom,r'. As such, the disk line should now read as follows:
disk = [ 'file:/opt/win2003-sp1-20061107,hda,w', 'phy:/dev/dvd,hdc:cdrom,r']
The precise device path to use may vary depending on your hardware.
If the sctp module is not added to the kernel, running netstat with the -A inet or -A inet6 option abnormally terminates with the following message:
netstat: no support for `AF INET (sctp)' on this system.
To avoid this, install the sctp kernel module.
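For example, load the module manually (as root) before running netstat:

```
modprobe sctp
netstat -A inet
```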
Installing Red Hat Enterprise Linux 3.9 on a fully virtualized guest may be extremely slow. In addition, booting up the guest after installation may result in hda: lost interrupt errors.
To avoid this bootup error, configure the guest to use the SMP kernel.
Current kernels do not assert Data Terminal Ready (DTR) signals before printing to serial ports during boot time. DTR assertion is required by some devices; as a result, kernel boot messages are not printed to serial consoles on such devices.
Upgrading a host (dom0) system to Red Hat Enterprise Linux 5.1 may render existing Red Hat Enterprise Linux 4.5 SMP paravirtualized guests unbootable. This is more likely to occur when the host system has more than 4GB of RAM.
To work around this, boot each Red Hat Enterprise Linux 4.5 guest in single CPU mode and upgrade its kernel to the latest version (for Red Hat Enterprise Linux 4.5.z).
The AMD 8132 and HP Broadcom HT1000 chipsets used on some platforms (such as the HP dc7700) do not support MMCONFIG cycles. If your system uses either chipset, your PCI configuration should use the legacy PortIO CF8/CFC mechanism. To configure this, boot the system with the kernel parameter pci=nommconf during installation and add pci=nommconf to GRUB after rebooting.
Further, the AMD 8132 chipset does not support Message Signaled Interrupts (MSI). If your system uses this chipset, you should also disable MSI. To do this, boot with the kernel parameter pci=nomsi during installation and add pci=nomsi to GRUB after rebooting.
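After rebooting, a grub.conf kernel line carrying both workarounds would look like the following (kernel version and root device are illustrative):

```
title Red Hat Enterprise Linux Server (2.6.18-53.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 pci=nommconf pci=nomsi
        initrd /initrd-2.6.18-53.el5.img
```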
However, if your specific platform is already blacklisted by the kernel, your system does not require the aforementioned pci kernel parameters. The following HP platforms are already blacklisted by the kernel:
DL585g2
dc7500
xw9300
xw9400
The Virtual Machine Manager (virt-manager) included in this release does not allow users to specify additional boot arguments to the paravirtualized guest installer. This is true even when such arguments are required to install certain types of paravirtualized guests on specific types of hardware.
This issue will be addressed in a future release of virt-manager. To specify arbitrary kernel arguments in installing paravirtualized guests from the command line, use virt-install.
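For example, extra kernel arguments can be passed to the installer with virt-install's -x option; the guest name, image path, and install tree URL below are placeholders:

```
virt-install --paravirt --name demo-guest --ram 512 \
    --file /var/lib/xen/images/demo-guest.img --file-size 6 \
    --location http://installserver.example.com/rhel5/ \
    -x "console=xvc0"
```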
With the default dm-multipath configuration, Netapp devices may take several minutes to complete failback after a previously failed path is restored. To resolve this problem, add the following Netapp device configuration to the devices section of the multipath.conf file:
devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
                features                "1 queue_if_no_path"
                hardware_handler        "0"
                path_grouping_policy    group_by_prio
                failback                immediate
                rr_weight               uniform
                rr_min_io               128
                path_checker            directio
        }
}
[1] This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0, available at http://www.opencontent.org/openpub/.