EX-10.67 16 dex1067.htm STATEMENT OF WORK UNDER THE TECHNICAL COLLABORATION AGREEMENT Statement of Work under the Technical Collaboration Agreement

Exhibit 10.67

Confidential treatment has been requested for portions of this exhibit. The copy filed herewith omits the information subject to the confidentiality requested. Omissions are designated as [***]. A complete version of this exhibit has been filed separately with the United States Securities and Exchange Commission.

Exhibit B

Form Statement of Work

This Statement of Work is entered into by and between Microsoft and Novell on             , 2010 (“Statement of Work Effective Date”) under that Technical Collaboration Agreement between the parties with an Effective Date as of November 2, 2006 (“Agreement”).

 

1.1 PROJECT TITLE AND DESCRIPTION

Title: Virtualization – Enhance Linux Integration Components (LIC) for Hyper-V

Description:

Novell will work with Microsoft to enhance the Linux Integration Components for Hyper-V in order to add several customer-critical features. Feature enhancements will occur in two phases.

Phase 1: Critical customer features to be targeted on existing SLES releases (SLES 10 and SLES 11) as well as RHEL 5.4

Phase 2: New customer features to be targeted for the mainline kernel version of the LIC. These features will also be validated on SLES 10, SLES 11 and RHEL 5 (with the latest service packs available)

Each Phase is described in more detail below.

 

1.2 DEFINITIONS

 

1.2.1. “LIC” means the Linux Integration Components for Hyper-V

1.2.2. “Subcontractor” means a third party with whom Novell contracts on a work made for hire basis to meet Novell’s obligations under this SOW on Novell’s behalf.

 

1.2.3. “RTM” means Release to Manufacturing

 

2. PHASE 1

 

2.1 PARTIES’ OBLIGATIONS

Microsoft and Novell will modify the Linux Integration Components for Hyper-V to implement customer critical features. All features will be developed for SLES10 and SLES11 guests (with the latest service packs available) for Hyper-V Release 2. Phase 1 features are:

 

 

SMP (up to 4 vCPUs per guest to be supported)

 

 

Time Synchronization/Time Source

 

 

Graceful Shutdown

 

 

Modify build process to deliver binary RPMs for LIC (this is a low priority feature, and only to be done if we have time)

 

[*** Confidential Treatment Requested]    
   

 

Page 1

   


Novell will also provide a patch to RHEL 5.4 for Time Synchronization / Time Source. Microsoft will integrate this patch into RHEL 5.4 and distribute this patch to their customers. Novell will not support this code after delivering the patch to Microsoft.

The division of labor follows:

Novell:

Novell will be primarily responsible for development of the following features.

 

 

SMP (1, 2 and 4 vCPUs per guest to be supported)

 

 

Time Synchronization/Time Source (Linux side) for both SLES and RHEL distributions as noted above. For RHEL, Novell will develop the code, but will not be responsible for support of that code thereafter.

 

 

Advise Microsoft, as necessary, in their effort to move the Linux IC drivers from the staging area to the kernel mainline

 

 

Modify build process to deliver binary RPMs for LIC (this is a low priority feature, and only to be done if we have time)

 

 

Component Level Testing

 

 

Developer Documentation

In addition, Novell will provide technical consultation to Microsoft in the development of Graceful Shutdown.

Microsoft:

Microsoft will be primarily responsible for development of the following features

 

 

Time Synchronization/Time Source (VMBus side)

 

 

Graceful Shutdown

 

 

Mouse Integration Component restructuring

 

 

Component Level Testing

 

 

Developer Documentation

 

 

End User Documentation

In addition, Microsoft will provide technical consultation to Novell for understanding Hyper-V and its virtualization architecture as needed. This includes providing:

 

 

Implementation of the VMBus to the Hyper-V API interface

Microsoft will be responsible for the testing of the Linux Integration Components for Hyper-V on Release 2 of Hyper-V.*

 

 

Test Plan

 

 

System Level Testing

 

 

Performance Testing

 



* Interoperability lab staff from Novell will be available to assist in this testing as well. This staff is covered under current TCA funding.

 

2.4 PHASE 1 DELIVERY AND PAYMENT SCHEDULE

 

Milestone   Deliverable: Description/Payment
   

Phase 1 Beta Release (Code completed and deliver to Microsoft by March 20, 2010)

 

Phase 1 features completed at Beta quality and accepted by Microsoft as per the Exit criteria for Phase 1 (Attachment 1)

 

On acceptance of the code by Microsoft

payment to Novell: [***]

   

Phase 1 Final Release (Code completed and delivered to Microsoft by Jun 01, 2010)

 

Phase 1 features completed at RTM quality and accepted by Microsoft as per the Exit criteria for Phase 1 (attached as Attachment 1)

 

On acceptance of the code by Microsoft payment to Novell: [***]

   

Phase 1 maintenance of code (fixing of bugs as mutually agreed to by Microsoft and Novell)

 

At the commencement of Phase 1 maintenance period (Jul 01, 2010) payment to Novell: [***]

 

At the middle of the Phase 1 maintenance period (Dec 15, 2010) payment to Novell: [***]

 

At the end of Phase 1 maintenance period (Jun 30, 2011) payment to Novell: [***]

 

2.5 PHASE 1 LICENSING TERMS

All code developed by Novell will be distributed by Novell to Microsoft under the GPLv2 License.

 

3. PHASE 2

 

3.1 PARTIES’ OBLIGATIONS

Microsoft and Novell will modify the Linux Integration Components for Hyper-V to implement new customer features. All features will be developed for the mainline kernel version implemented by the LIC. In addition, features will be ported to and tested on various SLES and RHEL releases (with the latest service packs available) by Microsoft. Phase 2 features are:

 

 

Jumbo Frames

 

 

Hot Add/Remove Storage

 

 

Bidirectional communication between Host and Guest (KVP)

 



 

 

Heartbeat

 

 

Memory Management

 

 

Mouse Integration Component

Microsoft and Novell to agree upon a “Technical Specification for Enhancing the Linux Integration Components for Hyper-V: Phase 2” which defines the technical requirements and high level design approach for the above features prior to the start of Phase 2 (June 2010).

Microsoft and Novell need to agree on the “Exit Criteria for Phase 2 of the Microsoft Novell Collaborative SOW for LIC Development” prior to the start of work on the Phase 2 work items.

The division of labor follows:

Novell:

Novell will be primarily responsible for development of the following features.

 

 

Bidirectional communication between Host and Guest (KVP)

 

 

Porting the Mouse IC to the Linux kernel mainline after Microsoft has restructured it (in Phase 1)

 

 

Heartbeat

 

 

Memory Management (Linux side)

 

 

If Novell did not deliver binary RPMs for LIC during Phase 1, Novell will modify the build process to deliver binary RPMs for LIC

 

 

Advise Microsoft, as necessary, in its effort to move the Linux IC drivers from the staging area to the kernel mainline if this was not completed during Phase 1

 

 

Component Level Testing

 

 

Developer Documentation

Microsoft:

Microsoft will be primarily responsible for development of the following features

 

 

Jumbo Frames

 

 

Hot Add/Remove Storage

 

 

Share details with Novell around Memory Management support on Hyper-V for guests

 

 

VMBus handling for memory management

 

 

VMBus handling for heartbeat

 

 

Move the Linux IC drivers from the staging area to the kernel mainline

 



 

 

Port all Phase 2 features from the kernel mainline to the appropriate SLES and RHEL

 

 

Component Level Testing

 

 

Developer Documentation

 

 

End User Documentation

In addition, Microsoft will provide technical support for understanding Hyper-V and its virtualization architecture as needed.

Microsoft will be responsible for the testing of the Linux Integration Components for Hyper-V on Release 2 of Hyper-V.*

 

 

Test Plan

 

 

System Level Testing

 

 

Performance Testing

 

* Interoperability lab staff from Novell will be available to assist in this testing as well. This staff is covered under current TCA funding.

 

3.3 PHASE 2 SUCCESS METRICS

Microsoft and Novell need to agree on the Technical Specification for Phase 2 features prior to the start of work on the Phase 2 work items.

Microsoft and Novell need to agree on the “Exit Criteria for Phase 2 of the Microsoft Novell Collaborative SOW for LIC Development” prior to the start of work on the Phase 2 work items.

Novell to deliver code for the mainline kernel which demonstrates the features listed above. Microsoft to run acceptance tests on this code; delivery to be completed no later than December 15, 2010.

Microsoft will be responsible for porting Phase 2 LIC features as relevant to the appropriate SLES and RHEL distributions.

 

3.4 PHASE 2 DELIVERY AND PAYMENT SCHEDULE

 

Milestone   Deliverable: Description/Payment
   

Phase 2 development phase (delivery of all features and acceptance by Microsoft by Dec 15, 2010 and submission to the mainline Linux kernel)

 

Phase 2 features completed and accepted by Microsoft as per mutually accepted Exit criteria and submission of these features to the Linux mainline kernel no later than Dec 15, 2010.

 

On acceptance by Microsoft and submission into the Linux kernel payment to Novell: [***]

 

On acceptance of the submission into the mainline Linux kernel, payment to Novell: [***]

 



   

Phase 2 maintenance of code (fixing of bugs as mutually agreed to by Microsoft and Novell)

 

At the commencement of Phase 2 maintenance period (Jan 01, 2011) payment to Novell: [***]

 

At the middle of the Phase 2 maintenance period (Jun 30, 2011) payment to Novell: [***]

 

At the end of Phase 2 maintenance period (Dec 30, 2011) payment to Novell: [***]

Phase 2 will begin at the end of Phase 1 development. Ideally, this work can begin in April, 2010. However, unforeseen Phase 1 complexities could delay the start of work on Phase 2 until June 2010 (at the latest).

 

3.5 PHASE 2 LICENSING TERMS

All code developed by Novell will be distributed by Novell to Microsoft under the GPLv2 License.

 

4. OTHER

Microsoft agrees to reimburse Novell for any of the costs identified in Sections 2 and 3 above that are incurred by Novell, up to the amounts identified in this SOW and in any case subject to Section 5 of the Agreement. Neither party will exceed the anticipated project costs attributed to such party in this SOW without the other party’s prior written agreement. Further, both parties acknowledge and agree that total accumulated actual project costs for this and any other SOWs shall not exceed the Total Expense Cap set forth in Section 5 of the Agreement without the parties’ express written agreement to such effect.

Notwithstanding anything to the contrary in this SOW, the total amount to be paid by Microsoft under this SOW shall not exceed [***].

Novell may contract with a Subcontractor on terms requiring the Subcontractor to comply with the applicable terms of this SOW and the TCA.

As between Microsoft and Novell, Novell shall own the entire right, title and interest in and to the work developed under this SOW, including any applicable intellectual property rights other than Microsoft patents; provided, however, that Novell remains subject to and must comply with the license terms accompanying or provided with any Microsoft materials provided under this SOW.

This SOW may be terminated by either party in the event that the other party has materially breached any term of this SOW upon receipt of written notice thereof if the nonperformance or breach is incapable of cure, or upon the expiration of ten (10) days (or such additional cure period as the non-defaulting party may authorize) after receipt of written notice thereof if the nonperformance or breach is capable of cure and has not been cured.

The term of this SOW shall commence on the Statement of Work Effective Date. Under no circumstance will termination or expiration of this SOW result in termination or expiration of the TCA.

 



Notwithstanding anything to the contrary in this SOW, in no event shall Novell release, or otherwise distribute, contribute or submit the code developed under this SOW, to any third party with attribution to Microsoft or otherwise attribute (or cause any attribution of) this SOW or the initiation or funding of the work performed under this SOW, to Microsoft. Novell shall have no authority under this SOW to enter into any contract, or convey any intellectual property rights by license, implication, estoppel or otherwise, in the name of or on behalf of Microsoft.

Microsoft does not grant, under or pursuant to this SOW, to Novell or any third party to which Novell releases or otherwise distributes, submits or contributes the software developed under this SOW, any license under any patents or patent applications that Microsoft (or any of its affiliates) now or hereafter owns or has the right to license (the “Microsoft Patents”); provided, however, that nothing in this section shall limit Novell’s right to use the Microsoft materials in its performance of the work in accordance with this SOW.

Except for the limited license grants in this SOW, Microsoft does not grant Novell any license, covenant or other right under this SOW by implication, estoppel or otherwise, including, without limitation, (A) to any Microsoft intellectual property (including any Microsoft patents), (B) any distribution rights regarding any materials or software provided by Microsoft under this SOW, including any Microsoft software or materials provided in connection with the other Microsoft reference materials or Microsoft technical support, or any related documentation, or (C) in relation to the Linux operating system.

The parties consider this SOW and all discussions regarding it as confidential information subject to the NDA between the parties dated as of April 1, 2004, as amended on May 12, 2004.

Novell agrees that the following will be deemed confidential information under the NDA, (a) the existence, or the terms and conditions, of this SOW, or any of the discussions, negotiations, correspondence, or documentation related thereto (b) the relationship of parties with respect to the work or this SOW; and (c) Microsoft’s funding of or other relationship to the work or the development and release of the software developed under this SOW.

THIS STATEMENT OF WORK is entered into by the parties as of the date of the last signature below.

 

Novell, Inc.     Microsoft Corporation
By:  

/s/ Jim Ebzery

    By:  

/s/ Mike Neil

Name:  

Jim Ebzery

    Name:  

Mike Neil

Title:  

SVP & General Manager

    Title:  

General Manager

Date Signed:  

3/16/2010

    Date Signed:  

3/24/2010

 



Attachment 1

EXIT CRITERIA FOR PHASE 1

Objective:

The objectives of this document are

 

   

Determine a set of specific test criteria that will allow both Novell and Microsoft to determine whether the SMP and Time Sync features being incorporated into the Linux Integration Components (LIC) in the context of Phase 1 of this agreement are ready to be shipped to the customer. These criteria include a set of functional tests, separate from the Phase 1 enhancements, but critical to ensure readiness to ship the LIC.

 

   

Outline the timeline of an iterative process that will verify the aforementioned criteria on an ongoing basis, specifically for Phase 1 of the agreement. There will be a separate document (TBD) for Phase 2 of this agreement.

Exit Criteria

These are a set of qualifying tests that allow Microsoft and Novell to determine the maturity and suitability of the LIC Phase 1 features for customer ship.

Phase 1 Functional Exit Criteria

Symmetric Multi-Processing (SMP) in the Linux IC will need to pass the following test cases:

 

Area    Subarea    Description
Installation of Guest VM          
     
    

Install from ISO Image

  

User should be able to read from ISO/PT CD, boot from CD/PT CD, and install from CD/PT CD media

     
     Passthru CDROM   

User should be able to install Guest VM through physical CD-ROM

     
     PassThru DVD-ROM   

User should be able to install Guest VM through physical DVD-ROM

Boot/read/write storage

         
     
     Floppy   

User should be able to read/write from/to floppy.

     
    

VHD IDE

  

1. User should be able to add an IDE hard drive to the guest VM through the settings pane.

 

2. We need to make sure small disks and disks < and > 127 GB can both be used (LBA issues)

 

3. Verify that disk has been added successfully inside guest VM (Verify size).

     
    

VHD SCSI

  

1. User should be able to add a SCSI hard drive to the guest VM through the settings pane.

 

2. We need to make sure small disks and disks < and > 127 GB can both be used (LBA issues)

3. Verify that disk has been added successfully inside guest VM (Verify size).

 

    

VHD Dynamically expanding

 

  

1. While adding an IDE/SCSI disk to the guest VM, choose the dynamically expanding disk type.

 

2. Verify disk is added successfully.

 

3. Verify that the size of the disk expands dynamically as more data is added to it.

 

    

VHD Fixed size

 

  

1. While adding an IDE/SCSI disk to the guest VM, choose the fixed-size disk type.

2. Verify disk is added successfully.

3. Verify that the disk size inside the guest VM matches the size given in step 1.

 

    

VHD Differencing

 

  

1. While adding an IDE/SCSI disk to the guest VM, choose the differencing disk type.

 

2. Verify disk is added successfully.

 

     Passthru HD   

1. Make sure passthru HD works

Networking          
    

Internal

network

  

1. Add an internal NIC to the guest VM.

2. Verify the internal network is added successfully to the guest VM.

3. Verify the guest VM cannot access the external network.

     
    

External

Network

  

1. Add an external NIC to the guest VM.

2. Verify the external network is added successfully to the guest VM.

3. Verify the guest VM can access the external network.

     
    

Guest

only network

  

1. Add a guest only network to guest VM.

 

2. Verify guest only network added successfully to guest VM.

     
    

Boot

from network

(PXE)

  

1. Verify that a user can install a guest VM through PXE.

Video     
     
    

Screen

resolutions

  

1. Go into the guest VM and change its resolution to an available resolution.

2. Verify the screen can change to the available resolution.

Keyboard     
     International Languages   

1. Go into the guest VM and change it to an available language.

2. Verify the language can be changed to another available international language.

 



 

IC’s          
     
    

IC Setup

test

   1. Installing the Linux IC
     
    

Heartbeat/Time Sync tests

  

Not Implemented

     
    

KVP/Shutdown IC

  

Not Implemented

     
    

Storage VSC

  

1. Verify SCSI drivers are installed.

     
    

Network VSC

  

1. Verify network components are installed.

     
    

Input VSC

  

1. Verify Mouse integration is working inside guest VM.

   
Migration     
     
    

Live Migration

  

1. Live Migration Succeeded

     

Core devices

         
     

SMP Scalability

         
     
         

Manual execution of the benchmarking tools

     
    

Kernbench (CPU/memory)

  

See following section for Performance Characterization Expected Results

     
    

XDD (disk I/O)

    
     
    

netperf (network I/O)

    

 



Expected Results For SMP Scalability Tests

These are the expected results for comparative performance of Linux guests on Hyper-V vs. Linux on metal with comparable resources. Reference hardware used will be 2 GB [ii] RAM, 3.2 GHz quad-core [iii] with 50 GB disk space. The host is a 32 GB RAM, 3.2 GHz quad-core Xeon with 250 GB disk space. The virtual configuration has the same RAM and disk space (2 GB RAM/50 GB disk space) as the reference hardware. During the course of these performance tests, we shall monitor vCPU utilization and memory consumption on the guests, as well as the physical CPU utilization and memory consumption on the host.

Kernbench on Single VM (CPU/Memory)

[Figure omitted]

We will measure the scalability of Kernbench when running on bare metal with 1, 2 and 4 CPUs respectively. We will then repeat these tests running virtualized (with the LIC installed) with 1, 2 and 4 vCPUs. We will assure that the results scale in the virtualized test cases to the same degree as they scale on bare metal, within a 10% tolerance.

Also, no error messages or panics should be encountered as this test is repeatedly executed over 36 hours.
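The 10% scaling tolerance above can be expressed as a mechanical check. The sketch below is illustrative only (the elapsed-time numbers are made up, not measured results); it compares the speedup curves of bare metal and virtualized runs:

```python
def scales_within_tolerance(bare_metal, virtualized, tol=0.10):
    """Check that virtualized Kernbench results scale to the same degree
    as bare metal, within the stated 10% tolerance.

    Inputs map cpu_count -> elapsed_seconds (lower is better), so scaling
    is measured as speedup relative to the 1-CPU run."""
    base_bm, base_vm = bare_metal[1], virtualized[1]
    for n in bare_metal:
        speedup_bm = base_bm / bare_metal[n]
        speedup_vm = base_vm / virtualized[n]
        if abs(speedup_vm - speedup_bm) / speedup_bm > tol:
            return False
    return True

# Illustrative numbers only (not measured results):
bm = {1: 600.0, 2: 320.0, 4: 180.0}
vm = {1: 660.0, 2: 355.0, 4: 200.0}
print(scales_within_tolerance(bm, vm))  # True: VM speedups track bare metal
```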

 



Netperf on single VM (For Network IO)

[Figure omitted]

Netperf Throughput for 1, 2 and 4 CPUs

[Figure omitted]

 



Netperf Testing

We have measured the scalability of Netperf when running on bare metal with 1, 2 and 4 CPUs respectively (see Figure 3). We have observed that 1 CPU can saturate the NIC and, therefore, adding additional CPUs to the test does not increase network performance. The observed result is that the network throughput when running with 1, 2 and 4 CPUs is relatively the same.

To assure sanity of SMP operation, we will run Netperf virtualized (with the LIC installed) with 1, 2 and 4 vCPUs (see Figure 2). We will assure that the network throughput when running with 1, 2 and 4 vCPUs is also relatively the same.

The way we utilize more than one core is by using netperf's CPU binding options (-T $locCPUnum,$remoteCPUnum) and binding the NIC's interrupt handler(s) to different cores than the netperf/netserver processes.

Here is the global -T option:

-T N # bind netperf and netserver to CPU id N on their respective systems

-T N, # bind netperf only, let netserver run where it may [iv]

-T ,M # bind netserver only, let netperf run where it may

-T N,M # bind netperf to CPU N and netserver to CPU M

Please note that this is a netperf option, not a netserver option. However, during the course of this research, we have determined that there is enough confusion over these options to at least warrant another characterization with an alternative benchmark. Towards this end, we are looking at iperf as well [v], solely for the purpose of verifying what we see with netperf.
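One plausible way to enumerate the -T binding matrix for the 1/2/4-vCPU runs is sketched below. The host name, 60-second duration, and the choice of binding both ends to the same CPU index are placeholders; only the -T N,M syntax comes from the option list above, and the commands are constructed, not executed:

```python
def netperf_commands(host="netserver-host", cpu_counts=(1, 2, 4)):
    """Build one netperf invocation per vCPU count, binding netperf to
    the highest local CPU index and netserver to the same remote index
    via netperf's global -T N,M option."""
    cmds = []
    for n in cpu_counts:
        cpu = n - 1  # highest CPU index for this configuration
        cmds.append(["netperf", "-H", host, "-l", "60", "-T", f"{cpu},{cpu}"])
    return cmds

for cmd in netperf_commands():
    print(" ".join(cmd))
```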

 



XDD on single VM (Disk IO)

 


   


[Figure omitted]

Figure 4: xdd disk IO operations per second, bare metal vs. VM, for different CPU/vCPU counts

[Figure omitted]

Figure 5: Total time required to finish the same test across different CPU/vCPU counts

 



[Figure omitted]

Figure 6: Disk throughput across different CPU/vCPU counts

XDD Testing

We have measured the scalability of XDD when running on bare metal with 1, 2 and 4 CPUs respectively (see Figures 4, 5 & 6). We have observed that XDD does not scale when additional CPUs are added to the test when running on bare metal. The observed XDD IOPS, disk throughput and test completion times when running with 1, 2 and 4 CPUs are as follows:

 

   

IOPS measurements on 1, 2 and 4 CPUs are within 12%

 

   

Disk throughput measurements on 1, 2 and 4 CPUs are within 12%

 

   

Test completion times on 1, 2 and 4 CPUs are within 11%
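The "within N%" criterion above can be checked mechanically as the spread between the best and worst measurement. A minimal sketch, with illustrative IOPS numbers only (not measured results):

```python
def within_spread(values, tol):
    """Check that a set of measurements (e.g. IOPS at 1, 2 and 4 CPUs)
    stays within the stated tolerance of one another, measured as
    (max - min) / min <= tol."""
    lo, hi = min(values), max(values)
    return (hi - lo) / lo <= tol

# Illustrative numbers only:
print(within_spread([5200, 5500, 5700], 0.12))  # spread ~9.6%  -> True
print(within_spread([5200, 5500, 6100], 0.12))  # spread ~17.3% -> False
```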

To assure sanity of SMP operation, we will run XDD virtualized (with the LIC installed) with 1, 2 and 4 vCPUs. We will assure that the XDD IOPS, disk throughput and test completion times when running with 1, 2 and 4 vCPUs are within the same ranges as when running on bare metal, as above.

On multi-processor systems it is possible to assign xdd threads to specific processors. This is accomplished with the -processor, -singleproc and -roundrobin options.

The -processor option allows the explicit assignment of a processor to a specific xdd thread.

The -singleproc option will assign all xdd threads to a single processor, specified as an argument to this option.

 



The -roundrobin option will distribute the xdd threads across M processors, where M is the number of processors specified; M should be less than or equal to the number of processors in the system. The processor-numbering scheme used is 0 to N-1, where N is the number of processors in the system. For example, if there are five xdd threads running on a computer with eight processors, then the round-robin processor assignment will assign threads 0 through 4 to processors 0 through 4. However, if there were only two processors on the computer, then xdd threads 0, 2 and 4 will be assigned to processor 0 and threads 1 and 3 will be assigned to processor 1.
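The round-robin assignment described above reduces to thread-index modulo processor-count. A small sketch of that rule (our own restatement, not xdd's actual source code):

```python
def round_robin_assignment(num_threads, num_procs):
    """Replicate the described -roundrobin rule: thread t runs on
    processor t mod M, with processors numbered 0..M-1."""
    return {t: t % num_procs for t in range(num_threads)}

# Five threads on eight processors: threads 0-4 land on processors 0-4.
print(round_robin_assignment(5, 8))
# Five threads on two processors: threads 0, 2, 4 -> 0; threads 1, 3 -> 1.
print(round_robin_assignment(5, 2))
```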

We shall be testing with all these xdd options during the course of this validation effort to be absolutely certain.

Scale Tests

For the purposes of scale testing, the aforementioned performance tests will be executed across six VMs on a 32 GB, 3.2 GHz quad-core Xeon with 500 GB of disk space. The following test scenarios [vi] will be executed:

 

   

One VM running xdd and netperf (2 and 4 vCPUs)

 

   

All VMs running xdd (disk I/O) with 1, 2 and 4 vCPUs

 

   

All VMs running netperf with 1, 2 and 4 vCPUs

 

   

1 VM running kernbench (2 vCPUs), 2 running xdd (4 vCPUs) and 3 running netperf (2 vCPUs)

 

   

1 VM running kernbench (2 vCPUs), 2 running netperf (4 vCPUs) and 3 running xdd (2 vCPUs)

We will be monitoring the host's and guests' processor and memory usage as well as determining the impact on performance vis-a-vis the single-VM characterizations performed earlier. Typically, the anticipated gains in subsystem performance mentioned previously for single-VM benchmarks should not degrade by more than 20% for each VM [vii] involved in these scale test scenarios. We intend to execute each scale scenario for at least 24 hours. However, when executing these scale tests for Time Sync, we shall sustain them for 7 days (168 hours).
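The 20% per-VM degradation limit above can also be stated as a simple check. A minimal sketch, with illustrative throughput figures only (not measured results):

```python
def within_scale_degradation(single_vm, scale_vm, limit=0.20):
    """Check the scale-test criterion: each VM's result in the multi-VM
    scenario should not fall more than `limit` (20%) below its single-VM
    baseline. Inputs map VM name -> throughput (higher is better)."""
    return all(scale_vm[vm] >= single_vm[vm] * (1 - limit) for vm in single_vm)

# Illustrative numbers only:
single = {"vm1": 940.0, "vm2": 910.0}
print(within_scale_degradation(single, {"vm1": 800.0, "vm2": 790.0}))  # True
print(within_scale_degradation(single, {"vm1": 700.0, "vm2": 790.0}))  # False
```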

 



Time Sync Exit Criteria

A single VM with the drift fix and SMP is going to be set up (idle) and observed for 7 days for clock drift. With the clock drift fix, it should be possible to effectively run NTP to keep the clock synchronized.

During the course of the SMP functional, performance and scale characterizations mentioned above, we are going to keep track of any clock drift as well, since these VMs will eventually have the correction for clock drift. However, specifically for verifying time sync, we'll be executing the following additional tests:

1. Idle VM.

2. Busy VM (60-75% utilization)

3. Busy host (multiple virtual machines consuming resources, but VM near idle)

4. Busy VM + Host (combine 2 + 3 above)

The acceptance criteria for the time sync mechanism will be as follows:

 

   

At no time should a clock correction cause a VM kernel panic

 

   

Clock drifts should not be visible in the system logs

 

   

The monotonicity and continuity of the time value are maintained.

 

   

The time value should increase monotonically

 

   

The time value shouldn’t jump back and forth by a large amount (observable) except during rebooting or resuming from saved state.

 

   

No clock drift should be observed on repeated VM restarts

 

   

The VM should display correct time when it is started
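The monotonicity and continuity criteria above lend themselves to an automated check over sampled guest clock readings. A minimal sketch; the thresholds are illustrative assumptions, not values from this SOW:

```python
def time_samples_acceptable(samples, max_jump=5.0):
    """Sketch of the monotonicity/continuity acceptance check: sampled
    guest clock readings (seconds) must never decrease, and must not
    jump forward by a large observable amount between samples."""
    for prev, cur in zip(samples, samples[1:]):
        if cur < prev:                 # clock went backwards
            return False
        if cur - prev > max_jump:      # large forward discontinuity
            return False
    return True

print(time_samples_acceptable([0.0, 1.0, 2.01, 3.0]))  # steady clock: True
print(time_samples_acceptable([0.0, 1.0, 0.5, 2.0]))   # backward jump: False
```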

Timeline

Because of the amount of testing involved in this effort, no more than three of these test cycles are envisioned from now until the end of February 2010 (beta).

Test Cycle 1: SMP-with-drift-fix acceptance validation using Ky's SMP fix and clock drift fix on a modified SLES 10 SP3 kernel (January 27th to February 5th, 2010)

Test Cycle 2: SMP-only acceptance validation using Ky's SMP fix on RHEL 5.4 (January 27th to February 15th, 2010)

Test Cycle 3: Repeat test cycle 1 with SLES 11 (February 16th to the 26th 2010)

Caveats:

 

   

It may be that we have to do Time Sync only on RHEL (since it is not clear who will make the drift fix in the kernel on RHEL 5.4). If this is the case, then we'll make the time sync script available for RHEL users.

 

   

It might be determined that the various criteria listed above are not entirely achievable because of factors that would require changes to the hypervisor design or other factors outside the scope of this collaboration. In such a case, a best-case mitigation will be effected and the criteria will be revised with mutual consultation between Novell and Microsoft.

 

   

The focus will always be to verify the impact of the features that Novell is contributing to the Linux IC. Towards this end, Novell will help in troubleshooting and fixing issues that are clearly identified as related to SMP scale and time sync (subject to the caveats mentioned previously).

 



i. We will have an ongoing dialog with our Novell partners around these acceptance criteria, and changes could very well be made to these as new findings come to light during the course of the development effort. However, all changes will need to be made with mutual agreement between Novell and Microsoft.

ii. Actually limited to this value by configuring Linux parameters.

iii. Similarly, core counts will be controlled through OS configuration.

iv. We use this option, except that we run netserver on a different machine altogether.

v. Recommended by Novell as well.

vi. Limiting a potential state explosion at every turn in this effort is essential.

vii. Each VM is 50 GB disk space and 2 GB RAM, as was the case for the single-VM functional tests.

 
