These release notes cover the following areas:
This release of SUSE Linux Enterprise Server ships with Novell AppArmor. The AppArmor intrusion prevention framework builds a firewall around your applications by limiting the access to files, directories, and POSIX capabilities to the minimum required for normal operation. AppArmor protection can be enabled via the AppArmor control panel, located in YaST under Novell AppArmor. For detailed information about using Novell AppArmor, see the documentation in /usr/share/doc/packages/apparmor-docs.
The AppArmor profiles included with SUSE Linux have been developed with our best efforts to reproduce how most users use their software. The profiles provided work unmodified for many users, but some users find our profiles too restrictive for their environments.
If you discover that some of your applications do not function as expected, you may need to use the AppArmor Update Profile Wizard in YaST (or the aa-logprof(8) command line utility) to update your AppArmor profiles. Place all your profiles into learning mode with the following command:

aa-complain /etc/apparmor.d/*
When a program generates many complaints, the system's performance is degraded. To mitigate this, we recommend periodically running the Update Profile Wizard (or aa-logprof(8)) to update your profiles even if you choose to leave them in learning mode. This reduces the number of learning events logged to disk, which improves the performance of the system.
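Once the updated profiles match your applications' behavior, you can switch them back to enforcing mode. A minimal sketch, assuming the aa-enforce companion utility to aa-complain:

aa-enforce /etc/apparmor.d/*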
SuSEfirewall2 is enabled by default. That means that by default you cannot log in from remote systems. It also interferes with network browsing and multicast applications, such as SLP, Samba ("Network Neighborhood"), and some games. You can fine-tune the firewall settings using YaST.
Starting with SUSE Linux Enterprise 10, vsftpd can run either stand-alone or via xinetd. The default is stand-alone; in previous versions, the default was xinetd.
To run it via xinetd, make sure that the service is enabled in the xinetd configuration (/etc/xinetd.d/vsftpd) and set the following line in /etc/vsftpd.conf:
listen=NO
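Enabling the service in the xinetd configuration means making sure it is not disabled there. A sketch of the relevant stanza in /etc/xinetd.d/vsftpd (the exact contents of the shipped file may differ):

service ftp
{
        # further settings as shipped in /etc/xinetd.d/vsftpd
        server  = /usr/sbin/vsftpd
        disable = no
}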
One of the exciting possibilities that Xen offers is relocating virtual machines (domUs) from one physical machine to another with minimal downtime. The virtual machine keeps running (at slightly degraded performance) while it is relocated and only incurs a downtime on the order of 100 ms. This is called live migration or live relocation.
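A minimal sketch of such a relocation, assuming a domain named vm1 and a target host on which xend is configured to accept relocations:

root@linux# xm migrate --live vm1 targethost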
During our testing, we found that virtual machines under load on the x86-64 (AMD64 or Intel EM64T) architecture can suffer memory corruption. The problem is still under investigation and will be addressed in a later update. For now, we advise customers not to do live migrations on x86-64.
With the special hardware support from Intel (Vanderpool technology) and AMD (Pacifica), you can run fully virtualized domains under Linux. This allows you to run unmodified operating systems--not only Linux, but also some versions of Microsoft Windows (R).
Support for this feature is enabled in the Xen packages shipped with SLES 10. The feature works, and the YaST module that facilitates the setup and control of virtual machines supports it.
However, we have not been able to validate a broad enough range of operating system and workload scenarios to guarantee correct operation, so we cannot yet promise to address all customer scenarios. Novell will have certified offerings in this area later.
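One way to check whether the hypervisor sees this hardware support is to inspect its capabilities; hvm entries only appear on Vanderpool- or Pacifica-capable hardware (a sketch using the standard xm tool):

root@linux# xm info | grep xen_caps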
Journaling file systems, like reiserfs, ext3, or XFS, write records of file system activities to the hard disk to allow them to recover in case of a power outage or an operating system crash. For this to work, the order in which write operations are performed needs to be preserved. The file system uses barriers to prevent reordering of the operations. Most storage devices are capable of honoring these barriers.
When using Xen and running multiple domains, all I/O is typically handled in domain0. The write ordering within the virtual machines is preserved, so if they crash, journaling provides file system consistency. However, the I/O may be reordered in domain0, so if a power outage or a crash in domain0 occurs, the file systems of the virtual machines may not be in a consistent state.
The reason for this is that barriers are not passed down to domain0, so it does not know about the ordering requirements. This will be worked on and fixed in a future version.
If the virtual machine's file system lives on a raw hard disk partition or an LVM or EVMS volume, this poses a small risk to the integrity of your virtual machine's data if domain0 crashes or power is lost--although the situation is no worse than for storage that does not honor the barriers.
If the file system lives on an image stored as a (sparse) file in one of domain0's file systems (loopback mode), the situation is much worse: the domain0 page cache does a large amount of write buffering, so the amount of lost data may be significant and the potential for reordering is very large.
We have introduced a sync mode for the loop device to address the case where the file system image lives on a (sparse) file. domain0 will do all writes from the virtual machines synchronously, thus avoiding the situation that writes could be reordered or even lost in case of power outage or a domain0 crash. The downside of this is that the write performance suffers, especially if a sparse file is filled by the writes.
The sync mode is set by the -y flag passed to losetup in the script /etc/xen/scripts/block. You can remove this flag to get better write performance in virtual machines if you can cope with the risk of file system corruption after domain0 crashes or power outages.
An issue exists when creating fully-virtualized third-party guests. Under certain conditions, the Xen Hypervisor is unable to allocate memory during the installation process, causing a failure.
Reducing the amount of memory that Domain 0 uses can work around this and allow successful creation of the fully-virtualized guest.
The "xm mem-set 0 <Mem>" command (at a command prompt) immediately reduces the current Domain 0 memory usage. The <Mem> field unit is Megabytes.
Example:
root@linux# xm mem-set 0 512
This command sets the current Domain 0 memory usage to 512 Megabytes. Run this command immediately prior to creation of the third party guest.
The amount of memory needed is highly dependent on the individual setup. As a general rule, a <Mem> value equal to a quarter (25%) of the total physical memory is a good first approximation.
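To verify the resulting allocation, the standard xm tool lists the current memory of every domain (in the Mem column):

root@linux# xm list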
hot-add-memory is not supported at this point in time. A maintenance update will explicitly mention the availability of this function.
By default, IPv6 support is not enabled for KDE. You can enable it using the /etc/sysconfig editor of YaST. The reason for disabling this feature is that IPv6 addresses are not properly supported by all Internet service providers and, as a consequence, this would lead to error messages while browsing the Web and delays while displaying web pages.
Updates from SLES 9 to SLES 10 are supported starting from one of the following bases:
Update a system by starting the SLES 10 installation system and choosing Update instead of New installation. To verify whether one of the above variants is installed, you can use the tool SPident -vv. This shows the current level of your system.
MIT Kerberos is now used instead of Heimdal. Converting an existing Heimdal configuration automatically is not always possible. During a system update, backup copies of configuration files are created in /etc with the suffix .heimdal. YaST-generated configuration settings in /etc/krb5.conf are converted, but check whether the results match your expectations.
Before starting the update, you should decrypt an existing Heimdal database into a human-readable file with the command kadmin -l dump -d heimdal-db.txt. This way, you can create a list of available principals that you can then re-create one-by-one in the MIT Kerberos KDC. Find more information about setting up a KDC in the documentation in the "krb5-doc" package.
To configure a Kerberos client, start the YaST Kerberos Client module and enter your values for "Standard Domain", "Standard Realm", and "KDC Server Address".
Do not set the LD_ASSUME_KERNEL environment variable any longer. In the past, it could be used to enforce LinuxThreads support, which has been dropped. If you set LD_ASSUME_KERNEL to a kernel version lower than 2.6.5, everything breaks because ld.so then looks for libraries in a location that no longer exists.
SUSE Linux Enterprise Server 9 set up the user environment with an unlimited stack size resource limit to work around restrictions in stack handling of multithreaded applications. With SUSE Linux Enterprise Server 10, this is no longer necessary and has been removed. The login environment now defaults to the kernel default stack size limit. To restore the old behavior, add "ulimit -Ss unlimited" to /etc/profile.local. If you want an automatic configuration of your resource limits suited to protect desktop systems, you may want to install the "ulimit" package.
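For example, to restore the old behavior for all login shells (assuming /etc/profile.local is sourced by /etc/profile, as it is by default):

echo "ulimit -Ss unlimited" >> /etc/profile.local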
When updating a system with the snd-intel8x0 module (for Intel, SIS, AMD, and Nvidia on-board chips), the system might be unable to load the module at reboot, because the module option joystick was removed from the newer version. To fix the problem, reconfigure the sound system using YaST.
Although most existing PHP 4 code should work without changes, there are a few backward-incompatible changes. Find a list of these changes at:
http://www.zend.com/manual/migration5.incompatible.php
To use iSCSI disks during installation, it is necessary to add the following parameter to the kernel parameter line:

withiscsi=1

During installation, an additional screen appears that provides the possibility to attach iSCSI disks to the system and use them in the installation process.
Do not use the /dev/mapper device path for the root= kernel parameter. /dev/mapper is an internal name of the LVM2 system. Instead use the proper LVM notation /dev/VG/LV, as in /dev/system/root for the logical volume root on volume group system.
If installing to a multiboot setup, there might be cases where one of the already installed systems is not detected. As a result, the proposal for a boot partition to use for SLES 10 might not be what the user expects. This could leave one of the preinstalled systems without a working boot loader setup.
In a multiboot setup, it is therefore strongly recommended to go to the boot loader configuration dialog during installation and verify that all installed systems are included in the boot menu as expected.
If you want to use EDD information (/sys/firmware/edd/<device>) to identify your storage devices, change the installer default settings using an additional kernel parameter.
Requirements:
Procedure:
If you have installed and configured an iSCSI SAN, and have created and configured EVMS Disks/Volumes on that iSCSI SAN, your EVMS volumes might not be visible or accessible. This problem is caused by EVMS starting before the iSCSI service. iSCSI must be started and running before any disks/volumes on the iSCSI SAN can be accessed.
To resolve this problem, enter either chkconfig evms on or chkconfig boot.evms on at the Linux server console of every server that is part of your iSCSI SAN. This will ensure that EVMS and iSCSI start in the proper order each time your servers reboot.
If you plan to add additional storage devices to your system _after_ the OS installation, we strongly recommend using persistent device names for all storage devices during installation. By default, the installer uses the kernel device names.
How to proceed:
During installation, enter the partitioner. For each partition, select "Edit" and go to the "FStab Options" dialog. Any mount option except "Device name" gives you persistent device names.
To switch an already installed system to persistent device names, proceed as described above for all existing partitions. In addition, rerun the boot loader module in YaST to switch the boot loader to the persistent device names as well. Just start the module and select "Finish" to write the new proposed configuration to disk. This needs to be done _before_ adding the new storage devices.
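As an illustration, a persistent /etc/fstab entry might use a udev-provided link such as the following (the by-id name is hypothetical; your devices will have different identifiers):

/dev/disk/by-id/scsi-SATA_ST380013AS_5JVA-part1  /  reiserfs  acl,user_xattr  1 1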
With SUSE Linux Enterprise Server 10, we switched to "cryptoloop" as the default encryption module. SUSE Linux Enterprise Server 9 used twofish256 with 256 bits via loop_fish2; we now use twofish256 with 256 bits via cryptoloop. The old twofish256 variant is available as twofishSL92.
When the way the root device is mounted (e.g. by UUID or by label) is changed in YaST2, the bootloader configuration needs to be saved again to make the change effective for the bootloader.
Please note that the "mount by" setting displayed by YaST2 bootloader is the setting that will be in effect after saving the configuration.
JFS is no longer supported for new installations. The kernel file system driver is still there, but YaST does not offer partitioning with JFS.
Hotplug events are now completely handled by the udev daemon (udevd). We do not use the event multiplexer system in /etc/hotplug.d and /etc/dev.d anymore. Instead udevd calls all hotplug helper tools directly, according to its rules. Udev rules and helper tools are provided by udev and various other packages.
By default, calling su to become root does not set the PATH for root. Either call su - to start a login shell with the complete environment for root or set ALWAYS_SET_PATH to yes in /etc/default/su if you want to change the default behavior of su.
The shell script sux was removed. The functionality of forwarding xauth keys between users is now handled by the pam_xauth module and su.
By default, the kernel tries to keep threads on the local CPU (and on the local node on NUMA machines). Depending on the application, this may not deliver the best performance; in particular, applications with a large working set per thread tend to perform better when scheduled to different nodes, because they can then use the caches of multiple nodes.
This behavior can be changed with a sysctl: setting the sysctl variable kernel.affinity_load_balancing to 1 makes the scheduler no longer try to keep threads local to a CPU.
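For example, to change the setting at runtime (add the variable to /etc/sysctl.conf to persist it across reboots):

root@linux# sysctl -w kernel.affinity_load_balancing=1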
Using this sysctl in the wrong application scenario may degrade system performance.
cardmgr no longer manages PC cards. Instead, as with Cardbus cards and other subsystems, a kernel module manages them. All necessary actions are executed by hotplug. The pcmcia start script has been removed and cardctl is replaced by pccardctl. For more information, see /usr/share/doc/packages/pcmciautils/README.SUSE.
Java packages are changed to follow the "JPackage Standard" (http://www.jpackage.org/). Read the documentation in file:///usr/share/doc/packages/jpackage-utils/ for information.
If you are not satisfied with locale system defaults, change the settings in ~/.i18n. Entries in ~/.i18n override system defaults from /etc/sysconfig/language. Use the same variable names but without the RC_ namespace prefixes, for example, use LANG instead of RC_LANG. For information about locales in general, see "Language and Country-Specific Settings" in the Reference Manual.
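As an illustration, a ~/.i18n that overrides only the language and the collation order might contain (the values shown are examples):

LANG=en_US.UTF-8
LC_COLLATE=POSIX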
Many applications now rely on D-BUS for interprocess communication (IPC). Calling dbus-launch starts dbus-daemon. The systemwide /etc/X11/xinit/xinitrc uses dbus-launch to start the window manager.
If you have a local ~/.xinitrc file, you must change it accordingly. Otherwise applications might fail. Save your old ~/.xinitrc. Then copy the new template file into your home directory with:
cp /etc/skel/.xinitrc.template ~/.xinitrc
Finally, add your customizations from the saved .xinitrc.
For reasons of compatibility with LSB (Linux Standard Base), most configuration files and the init script were renamed from xntp to ntp. The new filenames are:
/etc/slp.reg.d/ntp.reg
/etc/init.d/ntp
/etc/logrotate.d/ntp
/usr/sbin/rcntp
/etc/sysconfig/ntp
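Accordingly, the service is now controlled through the renamed script, for example:

rcntp restart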
KDB is no longer available as a loadable module. KDB is only supported in the debug kernel.
Entering KDB code breakpoints on multiple CPUs in parallel can lead to deadlocks.
For reasons of compatibility with SLES 9, the mapped-base functionality is present in SLES 10. This functionality is used by 32-bit applications that need a larger dynamic data space (e.g. database management systems).
With SLES 10, a similar functionality called flexmap is available. As flexmap is now the preferred way, mapped-base is deprecated and will vanish in future releases.
SLES 10 provides different I/O-schedulers. The scheduler can be set per disk. The general default is CFQ. This default may be modified by the device driver or by the user through
echo keyword > /sys/block/dasda/queue/scheduler

where keyword is one of the following:

noop anticipatory [deadline] cfq

Changing the scheduler may seriously impact the system performance.
The default (chosen by the kernel or the device driver) has been shown to be the best selection. There may, of course, be setups where this is not true.
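The currently active scheduler for a disk can be queried by reading the same sysfs file; the active scheduler is shown in square brackets:

root@linux# cat /sys/block/dasda/queue/scheduler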
The libhugetlbfs project shipped with SLES 10 is a preview feature that gives applications transparent access to system huge pages. While the library provides an application with easy access to huge pages when sufficient huge pages have been allocated on the system beforehand, additional development and testing is required to provide a stable transition to normal pages in a production environment.
The default mdadm.conf (and lvm.conf) will not work properly with multipathed devices. By default both md and LVM2 scan physical devices _only_ and ignore any symlinks or device-mapper devices.
This does not work for multipathed devices, because there all physical devices have to be omitted and only the devices in /dev/disk/by-name may be scanned (as these are the correct multipathed devices).
So if there was a previous MD installation, you will have to either modify mdadm.conf to handle the devices correctly (by using the line 'DEVICE /dev/disk/by-name/*') or clear the md superblock altogether.
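mdadm provides a dedicated option for clearing the superblock; a sketch, assuming the hypothetical member device /dev/sdb1:

root@linux# mdadm --zero-superblock /dev/sdb1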
A root partition on multipath is only supported if the /boot partition is on a separate, non-multipathed partition. Otherwise, no boot loader will be written.
During boot, drivers may be loaded that are not needed at runtime. To prevent this loading at boot time, insert the following line into /etc/modprobe.conf.local:
install driver-name /bin/true

Replace driver-name with the actual name of the module.

Attention: Be very careful. Inserting the wrong module name may lead to an unusable system.
With SLES 10 running on a Compaq MSA1000 SAN, whenever a disk fails or faults, the MSA1000 requires the failed or faulted disk to be removed from the disk array and recreated. By recreating the disk, the disk array reshuffles the order of the disks in the SAN; the recreated disk is pushed to the last position in the array.
An iSCSI shared device should never be mounted directly on the local machine. In an OCFS2 environment, doing so will cause a hard hang.
On the top level of the first CD you will find a very detailed ChangeLog. Please also read the READMEs on the CD.
If you encounter a bug, please report it through your support contact.
Your SUSE Linux Enterprise Team
Wed Jul 5 21:00:03 UTC 2006