Sunday, January 25, 2009

Lustre 1.6.6 with MX 1.2.7

Below is the process for installing Lustre 1.6.6 using Myricom MX as the network transport.

1) Compile and install Lustre Kernel
- yum install rpm-build redhat-rpm-config
- mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
- echo '%_topdir %(echo $HOME)/rpmbuild' > .rpmmacros
- rpm -ivh kernel-lustre-source-2.6.18-92.1.10.el5_lustre.1.6.6.x86_64.rpm (can be obtained from http://www.sun.com/software/products/lustre/get.jsp)
- cd /usr/src/linux-2.6.18-92.1.10.el5_lustre.1.6.6 (or wherever the kernel-lustre-source RPM installed the source tree)
- make distclean
- make oldconfig dep bzImage modules
- cp /boot/config-`uname -r` .config
- make oldconfig || make menuconfig
- make include/asm
- make include/linux/version.h
- make SUBDIRS=scripts
- make rpm
- rpm -ivh ~/rpmbuild/kernel-lustre-2.6.18-92.1.10.el5_lustre.1.6.6.x86_64.rpm (the exact path and file name of the built kernel RPM are shown at the end of the make rpm output)
- mkinitrd /boot/initrd-2.6.18-92.1.10.el5_lustre.1.6.6.img 2.6.18-92.1.10.el5_lustre.1.6.6
- Update /etc/grub.conf with new kernel boot information (a sample stanza is shown after this list)
- /sbin/shutdown -r now
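
For reference, a grub.conf stanza for the new kernel might look like the following. The exact kernel and initrd file names depend on what make rpm and mkinitrd produced on your system, and the root device and partition below are only placeholders:

title CentOS (2.6.18-92.1.10.el5_lustre.1.6.6)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-92.1.10.el5_lustre.1.6.6 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-92.1.10.el5_lustre.1.6.6.img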

2) Compile and install MX Stack
- cd /usr/src/
- gunzip mx_1.2.7.tar.gz (can be obtained from www.myri.com/scs/)
- tar -xvf mx_1.2.7.tar
- cd mx-1.2.7
- ln -s common include
- ./configure --with-kernel-lib
- make
- make install

3) Compile and install Lustre
- cd /usr/src/
- gunzip lustre-1.6.6.tar.gz (can be obtained from http://www.sun.com/software/products/lustre/get.jsp)
- tar -xvf lustre-1.6.6.tar
- cd lustre-1.6.6
- ./configure --with-linux=/usr/src/linux --with-mx=/usr/src/mx-1.2.7
- make
- make rpms (at the bottom of the output it will show location of the generated RPMs)
- rpm -ivh lustre-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm \
    lustre-modules-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm \
    lustre-ldiskfs-3.0.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm

4) Add the following lines to /etc/modprobe.conf
options kmxlnd hosts=/etc/hosts.mxlnd
options lnet networks=mx0(myri0),tcp0(eth0)

5) Populate myri0 Configuration with proper IP addresses
- vim /etc/sysconfig/network-scripts/ifcfg-myri0
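
A minimal ifcfg-myri0 might look like the following; the IP address and netmask are placeholders and should match the addressing used on your MX fabric:

DEVICE=myri0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes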

6) Populate /etc/hosts.mxlnd with the following information
# IP HOST BOARD EP_ID
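
For example, a small cluster might be described like this, with one line per MX host (the IP addresses, hostnames, board numbers and endpoint IDs below are placeholders):

192.168.1.10    mds01       0    3
192.168.1.11    oss01       0    3
192.168.1.20    client01    0    3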

7) Start Lustre by mounting the disks that contain the MGS, MDT and OSS data stores
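
As a minimal sketch, assuming a combined MGS/MDT on /dev/sdb, a single OST on /dev/sdc, a filesystem named testfs and an MGS reachable at 192.168.1.10@mx0 (all placeholders; substitute your own devices, filesystem name and NIDs), the formatting and mounting steps would look roughly like this:

On the MGS/MDT server:
- mkfs.lustre --fsname=testfs --mgs --mdt /dev/sdb
- mkdir -p /mnt/mdt && mount -t lustre /dev/sdb /mnt/mdt

On each OSS:
- mkfs.lustre --fsname=testfs --ost --mgsnode=192.168.1.10@mx0 /dev/sdc
- mkdir -p /mnt/ost0 && mount -t lustre /dev/sdc /mnt/ost0

On the clients:
- mkdir -p /mnt/testfs && mount -t lustre 192.168.1.10@mx0:/testfs /mnt/testfs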

Monday, January 19, 2009

Automated OS Provisioning

Automated operating system (OS) provisioning is an important capability for any IT department. It allows staff to rapidly build new servers or virtual machines that are identical to all others currently in use. This lack of variation allows for easy tracking of necessary upgrades, and minimizes the number of combinations of software and hardware that must be tested before deploying updates and patches. This commonality also means system administrators can work on new systems with very little ramp-up time spent understanding how one system differs from the others.

I have been working in several large, heterogeneous environments recently that required the ability to build common OS images across a variety of OSs and platforms. Here, in one clean list, are the common utilities used to deploy OS images on each platform (a minimal Kickstart example follows the list):

Solaris
Jumpstart

HP-UX
Ignite-UX

Linux (SLES)
AutoYAST

Linux (Fedora/RHEL/CentOS)
Kickstart
Cobbler

Linux (Ubuntu)
Kickstart
preseed

AIX
Network Installation Manager (NIM)

Windows (Server 2003)
Unattended

Windows (Server 2008)
Automated Installation Kit (AIK)

Windows (XP)
Unattended

Windows (Vista)
Automated Installation Kit (AIK)

Multi-OS Support
Altiris
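
To give a sense of what these tools consume, here is a minimal Kickstart file of the sort Cobbler or a plain RHEL/CentOS network install would use; the mirror URL, root password, time zone and package selection below are placeholders only:

install
url --url http://mirror.example.com/centos/5/os/x86_64
lang en_US.UTF-8
keyboard us
rootpw changeme
firewall --enabled --port=22:tcp
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@base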

Monday, January 5, 2009

Virtualizing your Quality Assurance Environment

Today virtualization is used in many Information Technology (IT) shops for a variety of purposes including:
  • Disaster recovery
  • Better utilization of hardware assets
  • Demonstration environments for new applications
  • Compartmentalization of applications for security
  • Legacy application support
Virtualization is a double-edged sword: it allows much better utilization of resources and a much cleaner separation of services and applications, but it also adds a level of complexity to environments that are often already complicated. Because of this complexity, virtualization must be properly planned and evaluated before it is implemented, to ensure that its benefits outweigh the additional cost and time of managing a more complex environment.

How can virtualization improve QA?
I am going to talk about using virtualization in an IT environment for a specific goal: creating an automated environment that allows test-driven development and automated quality assurance of software builds, facilitating automated movement of software into production.

This targeted goal is specifically meant to speed up the testing process for in-house-developed applications, traditionally web based. While speeding up the testing process, we want to ensure that we do not increase the need for quality assurance (QA) staff. A goal is to ensure that the tools we implement maintain a strong balance of automation and human interaction, without increasing the time commitment for individual staff or their teams. By properly balancing automated and manual testing, we can ensure development teams can scale to support a growing number of deployed versions in the field and a constantly growing list of features.

In addition to ensuring a scalable model for growing a QA environment, we should ensure that any QA process enables testing of the entire software stack, including libraries, OS patches, third-party software, application servers and associated databases. This testing should be isolated from any underlying hardware to ensure applications are portable between various hardware platforms. This isolation and integrated testing will be completed by creating a new VM as part of each software build. Each of these VMs will contain all necessary applications, services, libraries, third-party applications and data to complete the testing process. This generation of a single VM for each software build will ensure that testing is completed on the entire stack, and all results are easily reproducible.

Tools necessary for success
The key to a successful QA environment that can dynamically test new software builds and automatically move them between environments is having the correct tools in place: tools that give staff the appropriate level of visibility into the automated process without adding an excessive workload for the developers and software testing teams.

Here are the most common tools I see the need for in a QA environment:
  1. Bug/defect/feature tracking.
  2. Reporting for patches applied to the source tree and the source of the patches.
  3. Association of patches and the bug/defect corrected.
  4. Automated testing framework.
  5. Association of testing results with specific bug/defect reports.
  6. Reporting capabilities for number of defects/bugs per line of code.
  7. Reporting capabilities for number of bugs/defects per developer.
  8. Tool to show what features and bug fixes will be available in each given release, and the progress towards version completion.
  9. Tool to show time necessary to complete each new feature, and time necessary to correct reported bugs/defects.
  10. Tagging of builds with unique identifiers that associate a build with a list of included features, corrected bugs and patches.
While these requirements are listed as separate capabilities, the fewer discrete tools involved and the more integrated their data, the more efficiently decisions can be made and the less effort staff spend entering the required data. The more tightly the tools supporting development and testing are integrated, and the more this data can be gathered and reported automatically, the easier it is for development teams to spot areas for improvement and act on them.

Environments
In my opinion, most development teams can complete their testing activities using a system of five separate environments, each with specific purposes and goals. Larger development teams may very well have more than this, but smaller teams should seldom have fewer, as that creates the potential for different stages of testing to overlap in unpredictable ways. I envision the following environments and associated usage patterns, in the order they would be used to complete a final application build:
  • Sandbox – New library testing, manual builds, developing unit tests.
  • Development – Ensure error-free builds, ensure library versions match and the test database structure is correct.
  • Quality – Test the application against unit tests, test application response time and test data-level integrity.
  • User Acceptance Testing (UAT) – Test user input (both correct and invalid data handling), test application response time and test interactions with outside applications and tools. Limited testing by knowledgeable end users of the application.
  • Production – Customer-facing application implemented in a way that meets all required Service Level Agreements (SLAs).

Development to Production
Now that we have defined our tools for a consistent, reproducible build process, let's compartmentalize that process within a virtual machine (VM) for complete control and reproducibility. The goal of compartmentalizing it is to completely remove external influences from testing results; these external influences can include varying hardware platforms, inconsistent library versions and updated data models.

The process of safely compartmentalizing an application and testing it within a VM has three steps:

First – Define a clear process for moving a build from one environment to the next in the testing process. This process should include testing for errors and warnings as part of the build, defining acceptable pass and fail ranges for all unit tests, and defining the performance benchmarks the application must meet at each stage of testing. This step also includes defining any manual or management approvals required for moving software builds from one testing environment to another.

Second – After defining the process for properly testing each component, we must clearly define what is part of a build and what components are external. This will assist us in developing our testing matrix for versions of our application, any outside data models and applications, and all associated libraries. This step also includes properly defining the boundary between the automated testing process and the testing that will require manual review and intervention.

Third – The final step is developing and testing the process for moving builds from one environment to another, while ensuring that no changes are made between testing phases and that all builds are archived in a way that allows them to be referenced at a later date if necessary. This step ensures both that testing from one stage remains valid in the next, and that any testing failures and findings are archived for future analysis and review.

Final Thoughts
By utilizing VMs to compartmentalize applications during the development and testing cycle, you not only gain isolated, reproducible environments, you can also easily roll back to archived versions of the production application if a deployment fails. VMs provide a way to test an entire application stack, including outside software, libraries and data models, in an integrated way to ensure stability when deployed in a production environment. As VM technology continues to evolve, developers will only gain more capabilities: richer snapshots, the ability to roll a VM back in time, and better performance modeling and characterization.

I did not discuss any specific hypervisors because this is meant to be a discussion of the business objectives of a testing environment. Most hypervisors on the market can be automated to support this kind of testing and movement between environments, and most also have associated tools for quickly cloning test environments for analysis, performance monitoring or simple archiving. When reviewing your environment and choosing a hypervisor for your development environment, these tools can provide invaluable capabilities to both development and testing staff, as well as the system administrators responsible for the production environments.