When we were little we were taught to always look both ways before crossing the street. Companies that make software but choose to assert their patents against others should learn the same lesson. Any time a producer makes a claim of patent infringement, it faces the possibility of counterclaims unrelated to its patents, and those counterclaims are not limited to patent infringement. This is a lesson now being learned by Twin Peaks Software in what may be the most important GPL enforcement action to date. Twin Peaks, having sued Red Hat and Red Hat subsidiary Gluster for patent infringement, now faces a counterclaim for copyright infringement for including critical GPL code in its products while failing to comply with the GPL. If Red Hat is successful on that claim and obtains the permanent injunction it is requesting, Twin Peaks could be out of business.

It's always interesting to watch the timing of these infringement suits. Twin Peaks obviously wasn't interested in pursuing Gluster, but once Red Hat and its deep pockets entered the picture, Twin Peaks couldn't resist and filed this complaint [PDF] alleging infringement of its U.S. Patent No. 7,418,439. When the defendants complained of indefiniteness in the original complaint, Twin Peaks filed an amended complaint [PDF] . The infringement allegations directed at Red Hat and Gluster now read:

12. On information and belief, Defendants have infringed and continue to infringe one or
more claims of the ’439 patent in the State of California, in this judicial district, and elsewhere in the
United States by, among other things, making, using, importing, offering to sell, and/or selling in the
United States software products for managing data on computer networks, including but not limited to
GlusterFS and other products that incorporate GlusterFS technology such as Red Hat Storage Software
Appliance and Red Hat Virtual Storage Appliance (“the accused software products”).

13. On information and belief, Defendants indirectly infringe one or more claims of the ’439
patent by inducing their customers’ infringement using such software products. On information and
belief, Defendants have had knowledge of the ‘439 patent since at least February 23, 2012, the filing
date of this action. Despite this knowledge of the ‘439 patent, Defendants have continued to engage in
activities to encourage and assist their customers who use the accused software products to directly
infringe one or more claims of the ‘439 patent. For example, through their websites at
http://www.redhat.com and http://www.gluster.org, Defendants advertise and provide instructions on
how to use the feature in the accused software products of managing data through replicated volumes.

Such advertisements and instructions are provided in, for example, technical documentation made
available by Defendants through their websites, including but not limited to Administration Guides and
User Guides for the accused software products. On information and belief, by using features in the
accused software products such as this feature of managing data through replicated volumes,
Defendants’ customers have directly infringed and continue to directly infringe one or more claims of
the ‘439 patent. On information and belief, Defendants knew or should have known their activities in
encouraging and instructing customers in the use of the accused software products, including but not
limited to the activities set forth above, would induce their customers’ direct infringement of the ’439
patent.

While Twin Peaks suggests the defendants are continuing to knowingly infringe the '439 patent, Twin Peaks does not allege willful infringement.

For their part, Red Hat and Gluster have answered [PDF] the complaint with the typical responses to a patent infringement claim, i.e., we don't infringe, your claims are invalid, you prosecuted the patent in an improper manner, etc. For its part, Twin Peaks denied all of the Red Hat-Gluster counterclaims in its response [PDF].

And, as of Thursday morning, September 13, that's where things stood. Nothing particularly surprising about this case up to that time. But that changed dramatically Thursday afternoon. Twin Peaks forgot to look both ways, i.e., Twin Peaks forgot that there is no limit to what a defendant can allege in a counterclaim once a lawsuit has commenced, and the stakes in this one have been upped substantially. For on Thursday afternoon Red Hat and Gluster amended their answer and counterclaims [PDF] to add a new counterclaim.

In its amended answer and counterclaim Red Hat explains what the "mount" program is (see paragraph 44 et seq.), the fact that it is licensed under the GPL, and what that license requires. When your software, like that of Twin Peaks, is intended for file management, being able to mount a file system is fairly critical.

Red Hat then asserts:

Twin Peaks’ Improper Use of Red Hat’s Source Code

49. Like Red Hat, Twin Peaks distributes software that runs on the Linux operating
system. Unlike Red Hat, however, Twin Peaks distributes software only under a proprietary license
that forbids copying, and does not make any of the source code for any of its products publicly
available.

50. Twin Peaks sells, subject to its proprietary license, and without providing any source
code, software that it calls an “innovative replication solution.” That software is branded as “TPS
Replication Plus.”

51. Twin Peaks also provides a “free” version of its TPS Replication Plus software,
called “TPS My Mirror.” This version is also provided only under a proprietary license, and also
without any source code or copy of the GPL.

52. On its website, Twin Peaks represents that the “TPS Replication Plus” and “TPS My
Mirror” software packages are covered by the same patent it accuses Red Hat of infringing in this
action (the ’439 Patent).

54. On information and belief, rather than develop its own source code to create its
proprietary software replication products, Twin Peaks copied substantial portions of open source
code into those products, including source code originally authored by Red Hat. Among the code
Twin Peaks improperly copied was that embodied in the “mount” program released in util-linux
version 2.12a, which Twin Peaks copied into the source code for its own “mount.mfs” tool. Twin
Peaks’ verbatim and near-verbatim copying of open source and Red Hat source code into its
“mount.mfs” tool is pervasive and extensive.

55. By selling or providing “TPS Replication Plus” and “TPS My Mirror” under
proprietary license agreements and not making any of their source code available to the public,
Twin Peaks has failed to comply with the explicit conditions of the GPL. Twin Peaks is thus
illegally free-riding off of Red Hat’s contributions to util-linux, as well as the contributions of many
others in the FOSS community to that software.

56. By reproducing, copying, and distributing Red Hat’s original source code in “TPS
Replication Plus” and “TPS My Mirror,” without approval or authorization by Red Hat and only
subject to its own proprietary license agreement, Twin Peaks is infringing and has infringed Red
Hat’s exclusive copyrights, and likewise is inducing and has induced its customers to infringe.

57. Red Hat has not licensed or otherwise authorized Twin Peaks to reproduce, copy or
distribute Red Hat’s copyrighted source code or any works derived from it, except under the
conditions of the GPL, which Twin Peaks has failed to satisfy.

And it doesn't stop there. Red Hat is not merely asking that Twin Peaks come into compliance with the GPL. Red Hat is seeking damages for copyright infringement and a permanent injunction against those Twin Peaks products.

That changes the stakes in this litigation substantially. Twin Peaks is suing Red Hat and Gluster for patent infringement related to a product that has no substantial commercial market at this writing. In other words, the exposure to Red Hat and Gluster is not that substantial when considering Red Hat's overall income and assets. On the other hand, Twin Peaks now faces the potential loss of half its product offerings, and perhaps its entire business, if Red Hat were to be successful in establishing copyright infringement and obtaining a permanent injunction. This could potentially be the most important GPL case to date.

Of course, right now these are all just allegations. Neither party has proven its claims. But the balance of risk weighs heaviest on Twin Peaks' side of the scale.

Twin Peaks should have learned that early lesson - Look Both Ways Before You Cross The Street!

2. On information and belief, Defendant Red Hat, Inc. ("Red Hat") is a corporation duly organized and existing under the laws of the State of Delaware, having its principal place of business at
1801 Varsity Drive, Raleigh, North Carolina 27606.

3. On information and belief, Defendant Gluster, Inc. ("Gluster") is a corporation duly organized and existing under the laws of the State of California, having its principal place of business at 640 W. California Ave., Suite 200, Sunnyvale, California, 94086.

4. On or about October 4, 2011, Red Hat publicly announced that it was entering into an agreement to acquire full ownership of Gluster.

5. On information and belief, Gluster currently operates as a fully-owned subsidiary of Red Hat.

JURISDICTION AND VENUE

6. This is an action for patent infringement arising under the Patent Act, 35 U.S.C. §§101 et seq. This Court has jurisdiction over Plaintiff's federal law claims under 28 U.S.C. §§1331 and 1338(a).

7. This Court has specific and/or general personal jurisdiction over Defendants Red Hat and Gluster (collectively, "Defendants") because they have committed acts giving rise to this action within this judicial district and/or have established minimum contacts within California and within this judicial district such that the exercise of jurisdiction over Defendants would not offend traditional notions of fair play and substantial justice.

8. Venue is proper in this District pursuant to 28 U.S.C. §§1391(b)-(c) and 1400(b) because Defendants have committed acts within this judicial district giving rise to this action, and continue to conduct business in this district, and/or have committed acts of patent infringement within this District giving rise to this action.

BACKGROUND AND INFRINGEMENT OF U.S. PATENT 7,418,439

9. Twin Peaks re-alleges and incorporates by reference the allegations set forth in the preceding paragraphs as if fully set forth herein.

10. On August 26, 2008, the United States Patent and Trademark Office duly and lawfully issued United States Patent Number 7,418,439 ("the '439 patent") entitled "Mirror File System" to the inventor John P. Wong. Mr. Wong is the founder, Chairman, and Chief Technology Officer of Twin Peaks. A true and correct copy of the '439 patent is attached hereto as Exhibit A.

11. Twin Peaks is the owner and assignee of all right, title, and interest in and to the '439 patent, including the right to assert all causes of action arising under said patent and the right to any remedies for infringement of it.

12. On information and belief, Defendants have infringed and continue to infringe one or more claims of the '439 patent in the State of California, in this judicial district, and elsewhere in the United States by, among other things, making, using, importing, offering to sell, and/or selling in the United States software products for managing data on computer networks, including but not limited to GlusterFS and other products that incorporate GlusterFS technology such as Red Hat Storage Software Appliance and Red Hat Virtual Storage Appliance. On information and belief, Defendants indirectly infringe one or more claims of the '439 patent by contributing to and/or inducing their customers' infringement using such software products. On information and belief, Defendants knew or should have known their actions would induce and/or contribute to infringement of the '439 patent.

13. On information and belief, Defendants will continue to infringe the '439 patent unless enjoined by this Court.

14. Defendants' acts of infringement have damaged Twin Peaks in an amount to be proven at trial, but in no event less than a reasonable royalty. Defendants' infringement of Twin Peaks' rights under the '439 patent will continue to damage Twin Peaks causing irreparable harm, for which there is no adequate remedy at law, unless enjoined by this Court.

PRAYER FOR RELIEF

WHEREFORE, Twin Peaks prays for relief as follows:

a. For judgment that Defendants have infringed and continue to infringe the claims of the '439 patent;

b. For a preliminary and/or permanent injunction against Defendants and their respective officers, directors, agents, servants, affiliates, employees, divisions, branches, subsidiaries, parents, and all others acting in active concert therewith from infringement of the '439 patent;

c. For an accounting of all damages caused by Defendants' acts of infringement;

d. For a judgment and order requiring each Defendant to pay Twin Peaks' damages, costs, expenses, and pre- and post-judgment interest for its infringement of the '439 patent as provided under 35 U.S.C. §284;

e. For a judgment and order finding this to be an exceptional case, and awarding Twin Peaks attorney fees under 35 U.S.C. §285; and

f. For such relief at law and in equity as the Court may deem just and proper.

A mirror file system (MFS) is a virtual file system that links two or more file systems together and mirrors between them in real time. The file systems linked and mirrored through the mirror file system can be a local file system connected to a physical device, or a network file system exported by a remote system on a network. The mirroring mechanism is established by linking a file system to another file system on a single directory through an MFS mounting protocol. User applications perform normal file system operation and file/directory operation system calls like open, read, write and close functions from the pathname of either file system. When updates occur, such as a write operation, the MFS mechanism ensures that all data updates go to both the file systems in real time.

18 Claims, 10 Drawing Sheets


U.S. Patent Aug. 26, 2008 US 7,418,439 B2

[Drawing Sheets 1 through 10, Figs. 1 through 10, omitted.]

US 7,418,439 B2

MIRROR FILE SYSTEM

This application claims priority under 35 U.S.C. §§119 and/or 365 to 60/189,979 filed in the United States of America on 17 Mar. 2000; the entire content of which is hereby incorporated by reference.

BACKGROUND OF THE INVENTION

In a computer network environment, hundreds or even thousands of computer systems may be connected by a communication channel. They can all communicate with each other through many different communication protocols. Each protocol has a particular way to link the systems together to transmit data from one to another. To help the systems cooperate more closely, resource sharing mechanisms have been developed to allow computer systems to share files across the computer network. One example of such a mechanism is the client-server Network File System (NFS) developed by Sun Microsystems. By sharing the files across the network, every client system on the network can access the shared files as if the files were local files on the client system, although the files may be physically located on and managed by a network server system at a remote location on the network. The file sharing provided by the NFS enhances the network operation with the following features:

a. Each client system no longer needs to physically keep a local copy of the files.

b. Every client system can access the shared files in the same manner as it accesses its own local files.

c. There is only one copy of files located on and managed by a network server, so it is always the only version and always up-to-date.

This file sharing provided by the NFS works well in a small or middle size network environment. As more client systems are added to the network, and more subnets are connected to the network, more routers and switches are needed to interconnect many different small networks or sub-networks to form a large network. A network server that shares its files across such a network to the client systems faces the following problems:

1. The network server is loaded heavily by increasing requests from many client systems on the network. To alleviate the load problem, the network server can be upgraded to add more CPUs on the system, and the storage devices which store the shared information can also be upgraded to provide more bandwidth on their data channels, so that requests for the information from client systems on the network can be serviced without delays.

2. The network is congested with the traffic generated by the client systems' requests from all different directions and the server's return. To alleviate the congestion problem, the bandwidth of network communications media can be increased to accommodate more traffic and faster routers and/or switches can be added to transfer data packets faster on the network.

By using more CPUs on the system, faster data channels on the storage media, increased network bandwidth, and adding faster routers and/or switches, the overloading problem on the network server and the traffic congestion problem on the network are reduced to some degree. But this single centralized network server configuration and topology still faces other problems:

3. If the storage device that stores the shared files is not available due to a) power outage, b) hardware failure, or c) scheduled maintenance, then the network clients that depend on the network server to store and to retrieve critical information from the shared files on that storage device will not function properly. To reduce the risk from such disasters, a disk array technology known as RAID (Redundant Array of Independent Disks) was developed to minimize the damage and more easily recover from failure due to the above mentioned situations. The RAID disk array technology can protect the files on the disk from damage or corruption by using the techniques of striping, mirroring and parity checking, etc. But this only protects the storage system, and not the network server.

4. If the network server goes down for any reason, it cannot store or retrieve critical information for the network clients. To deal with the problem caused when a network server goes down, the following two computer systems were designed:

a. Fault-tolerant computer systems that require duplicate copies of every hardware component in the system as stand-by parts.

b. Clustering systems which have more than one network server physically connected to the same storage system on which the shared files are located. All these network servers (or nodes) are running at the same time, but only one of them actually serves the clients' requests; the others function as stand-bys. When the primary server is down, a stand-by server kicks in and takes over the operation.

With more CPUs on the system, RAID disk arrays, fault-tolerant computer systems and clustering network systems, many of the problems that are associated with sharing files by means of a server on the network seem to be overcome or reduced. However, in contrast to these expensive and cumbersome hardware solutions, a simpler and better way to achieve the same results is through a software solution.

The root cause of the problems mentioned previously is the fact that there is only a single copy of shared files stored on the disk of the network server. The advantage of keeping one single copy of the shared files on the network is that it is easy to maintain and update the files. However, since there is only one copy of the shared files on the network, the following disadvantages result:

1. All client systems on the network have to send their requests through multiple routers and/or switches before they reach the network server. Consequently, the network server is overloaded and the network becomes congested.

2. No network can afford to let this single copy of shared information become unavailable, so a disk array with a RAID level is needed to protect the sole copy of files on the disk from becoming unavailable.

3. In addition to using the disk array to protect the shared information on the disk, a fault-tolerant system or clustering system is also needed as protection against network server failures, which can result from failures in any of several key components as well as from failure of the network server itself.

SUMMARY OF THE INVENTION

These disadvantages can be mitigated or eliminated by using multiple network servers on the network, preferably one per sub-network. Each network server contains a copy of the shared files on its disk and shares them across the network. This arrangement works successfully as long as every copy of the files is identical and all copies are updated in real time whenever an update occurs on any copy.


In accordance with the present invention, this objective is achieved by means of a mirror file system (MFS). A MFS is a virtual file system that links two or more file systems together and mirrors between them in real time. When the MFS receives updated data from an application, all file systems linked by the MFS are updated in real time. The file systems linked and mirrored through the mirror file system can be a local file system connected to a physical device, or a network file system exported by a remote system on a network. The real-time mirroring mechanism provided by the MFS is transparent to user applications. The system administrator first sets up the mirroring mechanism by linking a file system to another file system on a single directory through an MFS mounting protocol. These two file systems and their files are linked together and become a mirroring pair. Both copies are owned by, and under the management of, the MFS. All access to files or directories in both file systems goes through the MFS. The user applications perform normal file system operation and file/directory operation system calls like open, read, write and close functions from the pathname of either file system. Most of the file operations (such as a read operation) only need to go to one file system under the MFS to get the data. Only when updates occur, such as a write operation, does the MFS mechanism ensure that all data updates go to both file systems. With this mirroring mechanism of the MFS, the files/directories in one file system are mirrored to their mirroring counterparts in another file system in real time. With the MFS technology, a standalone system is able to make multiple copies of data available to the application. In the network environment, multiple servers owning the same data copy can be distributed on the network and mirror the data to each other in real time to provide more efficient and more reliable service to their clients.

Hence, the mirror file system links any two regular file systems together and provides data management to make sure that the two file systems contain identical data and are synchronized with each other in real time. There are several benefits associated with the use of the mirror file system. A network server with the mirror file system on a sub-network can mirror its file system to another file system located on another network server, or on a different sub-network, in real time. Thus, the mirror file system allows critical information to be reflected simultaneously on multiple servers at different sub-networks, which synchronize with one another instantaneously so that neither time nor information is lost during updates. With real-time mirroring of critical information over the larger network, a client system can access the information on any network server. Although it is preferable to use the closest network server on its sub-network, a client system can switch seamlessly to an alternate network server on another sub-network whenever necessary and continue to access the critical information without interruption.

The mirror file system achieves the following major objectives of network operation:

1. It provides a complete solution to the RAS (Reliability, Availability, Serviceability) problem on all levels (storage, system, and network). Whenever a disk storage system, a system connected to it, or any network (or sub-network) component becomes unavailable due to power outage, system crash, hardware failure, or scheduled maintenance, the critical information remains available on another network server. All clients that cannot be served by their primary network server can switch to their secondary network server for virtually continuous access to the same critical information. The secondary network server can be deployed on a different subnetwork of a large enterprise network, and can be located as far away as desired.

2. It provides fast service for mission-critical applications. With more than one network server deployed on different sub-networks, a client can access the closest network server to get critical information faster; without the need to traverse many switches or routers on the enterprise network, which is the case when there is only one network server.

3. It reduces network traffic congestion by serving identical information on multiple network servers. When a client can get the critical information from the closest network server on its sub-network, there is no need to travel outside the sub-network. This reduces total traffic on the large enterprise network as well as the cost of purchasing and maintaining multiple fast switches and routers.

4. It eliminates the problem of overloading a single network server. When a single network server is overloaded by an increasing number of requests from the clients, IT professionals can simply add more network servers on the enterprise's network instead of getting more CPUs for the single network server. Several small to mid-size network servers work better than a single centralized network server in terms of dealing with the RAS problem, providing fast service, and reducing network traffic.

5. It distributes and balances workload and traffic among multiple network servers. With multiple network servers containing the same critical information, IT professionals can distribute and balance the workload and traffic on the enterprise's sub-network to make overall network operation considerably faster and smoother.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of a conventional file system framework;

FIG. 2 is a block diagram of a file system incorporating the present invention;

FIG. 3 is a schematic illustration of mirroring between two file structures;

FIG. 4 is a schematic illustration of the manner in which the present invention links and mirrors between two file structures;

FIG. 5 is an illustration of a first embodiment of the invention, in which a standalone system mirrors between two local file systems;

FIG. 6 is an illustration of a second embodiment of the invention, comprising a master mirror server and a slave mirror server;

FIG. 7 is an illustration of a third embodiment of the invention, comprising two master mirror servers;

FIG. 8 is an illustration of a fourth embodiment of the invention, in which a client mirrors two imported network file systems;

FIG. 9 is an illustration of a fifth embodiment of the invention, comprising two master mirror servers on a network;

FIG. 10 is an illustration of a sixth embodiment of the invention, in which a client links two imported mirror file systems on a network.


DETAILED DESCRIPTION

A. Overview

To facilitate an understanding of the invention, an overview will first be provided of a typical example of a file system. Most operating systems, such as Unix, provide multiple and different file system types within their operating environments. Some file systems, e.g. the Unix File System (UFS) or High Sierra File System (HSFS) for CD-ROMs, have the physical storage to hold the actual file data; other file systems, e.g. the Special Device File System (Specfs) or Network File System (NFS), do not have the physical storage. All these file systems observe interface conventions defined by the operating system, so that they can be loaded and plugged into the operating system easily. An application program can access the file data or device in these file systems by using the standard system calls provided by the operating system without the need to know the idiosyncrasies of each file system.

The interfaces can be used:

1. Between the system calls and the underlying file systems: An application program makes system calls to access the file system or individual file/directory. The system calls convert those access requests into one or more file system or file/directory operation requests for the intended file system through the interface. The file system then presents those requests to its physical storage and returns the result back to the application program.

2. Between the file systems: A file system gets the file system and file/directory operation requests from the system call through the interface. It can present those requests to its physical storage, or send the request to another file system through the interface again and let another file system handle the activities of physical storage.

The interfaces defined by the operating system fall into two categories: one is the interface for the file system itself; the other is the interfaces for individual files or directories within the file system. For ease of understanding, the terminology for interfaces as defined by the UNIX Operating System will be employed hereinafter. The interface for the file system is called the Virtual File System interface (VFS), and the interface for the individual file or directory is called the Virtual Node (VNODE) interface.

1. The Virtual File System (VFS) Interface

The VFS interface has seven or eight interfaces/operations for a File System:

1) vfs_mount( ) mounts a file system
2) vfs_unmount( ) unmounts a file system
3) vfs_root( ) finds the root for a file system
4) vfs_statvfs( ) gets the statistics of a file system
5) vfs_sync( ) syncs the file system
6) vfs_vget( ) finds the vnode that matches a file ID
7) vfs_mountroot( ) mounts the file system on the root directory

All VFS interfaces are intended for the operations on a file system, such as mounting, unmounting, or synchronizing a file system. The VFS interface consists of two parts. One is the vfs structure, the other is the MACRO definitions for the vfs operation of the file system.

The vfs structure is as follows:

Within the vfs structure, there is a vfsops struct containing file system operations like mount, unmount, sync, etc. that can be performed on the file system.

The vfsops structure looks like the following:

All of the functions in the vfsops structure are invoked through VFS MACROs, which are defined as follows:
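The patent's actual struct and macro listings did not survive in this copy of the text. As a rough sketch of the shape these definitions take, modeled loosely on the classic SunOS/Solaris VFS design rather than on the patent's own listing (all names here, such as `demo_mount` and `mount_via_macro`, are illustrative, not from the patent):

```c
#include <stddef.h>

struct vfs;  /* forward declaration: one vfs per mounted file system */

/* File-system-wide operations; each concrete file system
   (UFS, NFS, MFS, ...) supplies its own table of functions. */
struct vfsops {
    int (*vfs_mount)(struct vfs *vfsp, const char *path);
    int (*vfs_unmount)(struct vfs *vfsp);
    int (*vfs_sync)(struct vfs *vfsp);
};

struct vfs {
    struct vfsops *vfs_op;    /* operation table for this file system */
    void          *vfs_data;  /* private per-file-system data */
};

/* File-system-independent dispatch macros: the kernel calls
   through the table without knowing which file system it has. */
#define VFS_MOUNT(vfsp, path) ((vfsp)->vfs_op->vfs_mount((vfsp), (path)))
#define VFS_UNMOUNT(vfsp)     ((vfsp)->vfs_op->vfs_unmount((vfsp)))
#define VFS_SYNC(vfsp)        ((vfsp)->vfs_op->vfs_sync((vfsp)))

/* A stub file system, just to demonstrate the dispatch. */
static int demo_mount(struct vfs *vfsp, const char *path)
{
    (void)vfsp; (void)path;
    return 0;  /* success */
}
static int demo_unmount(struct vfs *vfsp) { (void)vfsp; return 0; }
static int demo_sync(struct vfs *vfsp)    { (void)vfsp; return 0; }

static struct vfsops demo_ops = { demo_mount, demo_unmount, demo_sync };

/* Kernel-side code sees only the generic struct vfs; the macro
   routes the call to whatever file system is mounted there. */
int mount_via_macro(void)
{
    struct vfs v = { &demo_ops, NULL };
    return VFS_MOUNT(&v, "/mnt/demo");
}
```

This table-plus-macro indirection is what lets a new file system type, such as the MFS described here, plug into the kernel without changes to the system-call layer.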


In the Unix operating system, every file system is allocated a vfs structure. When the operating system kernel receives a system call from an application program that intends to perform a file system operation on a file system, it uses the above MACROs with the vfs structure pointed to by the vfsp input parameter to invoke the file system operation on the file system. The MACROs are defined in a file-system-independent manner. With the input vfsp parameter, the kernel invokes the desired file system operation of a file system.

2. The Virtual Node Interface

The Vnode interface has about 30 to 40 interfaces/operations for a file/directory:

All Vnode interfaces are intended for operations on an individual file or directory within a file system. Like the file system operations in the VFS interface, the Vnode interface also consists of two parts: one part is the vnode structure, the other is the MACRO definitions for the vnode operations of the file/directory.

The following is the vnode structure:

Within the vnode structure, there is a vnodeops struct containing the file/directory operations such as vop_access( ), vop_open( ), vop_create( ) and vop_write( ), etc., that can be performed on the associated vnode of a file/directory.

The vnodeops structure looks like the following:

The functions in the vnodeops structure are invoked through the vnode operations MACROs. The MACRO definitions of the vnode operations are the following:


Every file or directory in the file system is allocated a vnode structure that holds all information about that file or directory.

When the operating system kernel receives a file or directory operation system call from an application program that intends to perform an operation on a file or directory, it uses the foregoing macros with the information in the vnode structure pointed to by the vp input parameter to invoke the desired vnode operation on the file or directory.

FIG. 1 shows the layout of several file systems and the VFS, Vnode interfaces in the operating system.

The operation and data flow proceed as follows:

a. A user application 10 makes a file system operation or file operation system call into the operating system 12.

b. The system call generates one or more VFS and Vnode operation calls 14.

c. The VFS and Vnode operation calls then go through the VFS and Vnode interface layer 16 to switch to the intended file system 18.

d. The intended file system sends the VFS and Vnode operation to its physical storage 20 and gets the result.

e. The intended file system returns the result back to the application program 10.

3. The mirror file system interface

The mirror file system of the present invention, like other file systems, also follows the VFS and Vnode interfaces, so it can be loaded and plugged into the operating system. The application 10 uses the same system calls to access the file system and individual file/directory within the mirror file system. The mirror file system does not have physical storage; instead it has two or more file systems under it. Each of the file systems under the mirror file system has a local physical storage 18a, e.g. UFS, or a remote physical storage 18b on another system, e.g. NFS. The UFS or NFS under the mirror file system has the same VFS and Vnode interfaces as it normally would. The mirror file system uses these standard interfaces to perform the operations on the UFS/NFS file systems and their individual files or directories.

FIG. 2 shows several file systems and the mirror file system in an operating system. The mirror file system 22 is loaded on top of a Unix File System (UFS) and a Network File System (NFS). Other UFS and NFS file systems can co-exist in parallel with MFS as shown in the figure. When the MFS is loaded into the system, it links the two file systems, UFS and NFS, together through its mount protocol. After the mount operation, the UFS and NFS are under the management of MFS. All system calls for file system operations and individual file/directory operations from the application are directed to the MFS first via the same VFS and Vnode interfaces. When it receives the VFS or Vnode operation from the system calls originated by the application, the MFS first performs some housekeeping tasks on the operations, and then sends the operations to UFS and NFS via the VFS and Vnode interface again. By keeping the same VFS and Vnode interface between the system call and MFS, and between the MFS and the underlying UFS and NFS, the MFS achieves the following goals:

1) The application does not need to be re-compiled or re-linked. The path name for a file or directory accessed by the application remains intact. No new directories or symbolic links are created or needed for the application to function properly with the MFS mounted. Consequently, the application need not be aware of the existence of the MFS in the system. The application can access the mirror file system and its file/directory in the same manner as it did before the MFS was loaded into the system.

2) The UFS and NFS do not need any changes. They can a) co-exist in parallel with the MFS as a standalone file system like UFS(1) and NFS(1), or b) be linked and managed by the MFS as a sub-file system like UFS(2) and NFS(2) in FIG. 2. In the first case, the UFS or NFS receives the VFS and Vnode operations from the system call originated by the application and sends the operations to its physical storage; in the second case the UFS and NFS receive VFS and Vnode operations from the MFS, and then send the operations to their physical storage.

3) It is a building block approach. The MFS is built on top of existing UFS and NFS. Another file system can also be built on top of the MFS and other file system jointly or independently, and be plugged into the operating system.

B. Exemplary Embodiment

A more detailed description of the mirror file system of the present invention is presented hereinafter.

1. The MFS Mount Protocol

In the Unix operating system, every file system mounted by the system has a virtual file system data structure named vfs that contains information about the file system and its operations as described before. Normally only one file system can be mounted on a directory. When a new file system is mounted on a directory, the directory's previous contents are hidden and cannot be accessed until the new file system is unmounted from the directory. Hence, the application can only see the contents of the new file system when accessing the directory. In contrast, when the MFS links file systems together and forms a mirroring pair, the MFS mount protocol mounts two file systems on a single directory. This protocol provides a new approach for mounting a file system on a directory.

a. The MFS mount protocol allows either an entire file system or part of a file system (represented by a directory) to be mounted on a directory.

b. When the MFS mounts a directory with a file system or a part of a file system, the previous contents of the mounted directory are not hidden.

c. The MFS inherits all of the contents of the mounted directory into its mfs_vfs virtual file system data structure. The inherited content is a copy of a mirroring pair. The new file system mounted on the directory is the other copy of the mirroring pair; all its contents are also inherited by MFS and put into the MFS file system data structure mfs_vfs.

d. The application still sees the previous contents of the mounted directory through its previous path name. The application also sees the contents of the newly mounted file system through its previous path name.

The mfs_vfs file system is a virtual file system that holds the information for itself and two other file systems, one of which is inherited from the mounted directory, and the other of which is the new file system that was mounted. Hence, the mfs_vfs structure contains three vfs data structures: one is for the MFS itself, and the other two vfs structures are for the two file systems linked by the MFS. The super mfs_vfs data structure looks like the following:

After a file system represented by a directory is mounted on another directory by the MFS mount protocol, these two file systems are linked together and become a mirroring pair under the management of MFS operation. FIG. 2 shows that the UFS(2) and NFS(2) are linked together by the MFS mount protocol and become a mirroring pair. The MFS can mirror the entire file systems or a portion of the file systems between a mirroring pair.

FIGS. 3 and 4 illustrate how the File System A 201 and the File System B 202 link and mirror each other. In FIG. 3, the structure B 220 under directory b 211 of the File System A 201 is to be linked to structure Y 221 of the File System B 202 to mirror each other. The file system mount operation of MFS is the key for linking up these two file structures 220 and 221, so the two file structures become a mirroring pair. The file system mount operation is described in detail below.

To link up these two file structures and make them a mirroring pair, the MFS can do one of the following two things:

a. Mount the directory y 221 of the File System B 202 onto the directory b 211 of the File System A 201.

b. Mount the directory b 211 of the File System A 201 onto the directory y 221 of the File System B 202.

It is not significant which directory of which file system is to be the mount point for the other file system. Since the file systems are a mirroring pair, they all have the same privileges by default.

The MFS mount operation sets up the data structure mfs_vfs to contain the vfs data structures for these two file system structures. After the mount operation, the following structures and relationships are created as depicted in FIG. 4:

1) A new virtual file system, mirror file system 203, is created. The new mirror file system 203 is a layered virtual file system on top of File System A 201 and File System B 202. It has a data structure containing the file system and file operation information of File System 201 and File System 202.

2) The newly created mirror file system 203 has all the elements of both the Structure B 220 (FIG. 3) of the File System A 201 and Structure Y 221 of the File System B 202. It has directories b/y 231, c 232, d 233, z 238 and f 235, and files e 234, g 236, h 237 and y 239. Each element is either a file or a directory.

3) The directory b/y 231 of the mirror file system 203 becomes the root directory of mirror file system 203.

4) All elements of structure B 220 (FIG. 3) of File System A 201 are mirrored to directory y of File System B 202. All elements of structure Y 221 of File System B 202 are also mirrored to directory b of File System A 201. In other words, all of the elements of structure B and structure Y are copied to a physical storage device of File Systems A and B, so the structures in the two file systems are synchronized with each other after the MFS mount operation.

5) If there is a file or directory that exists on both file systems, then the timestamp of the file or directory is used to decide which copy is to be preserved.

6) An application can access the root directory b/y of MFS by using the path name from either file system, /A/b or /X/y, and get to the root node of the newly created MFS. All file system operations, as well as individual file or directory operations, are handled by the MFS for all the files and directories under the root directory b/y of the newly created MFS.

2. The MFS Unmount Protocol

To break the mirroring setup, the mirror file system unmounts directory y of File System B 202 from the directory b of the File System A 201. Then all relationships are reverted back to their original state. The two file systems that were linked and mirrored to each other by the MFS are independent of one another again.

After two file systems are linked and mounted on a directory by the MFS mount protocol, the individual files and directories within the two file systems are ready to accept operations from the MFS and the application.

Every element of a file system, file or directory, has a vnode data structure containing information about, and the operations that can be performed on, this file or directory.

In Unix and other operating systems, normally only one vnode data structure is allocated per file or directory. Since the MFS has two file systems under its management, each file or directory in the MFS has two files or directories under its management, one for each of the two file systems. Every file or directory of MFS will have a super vnode structure called mnode. This mnode contains a vnode structure and two vnode pointers. The vnode named m_vnode is the vnode for the file or directory within MFS; the two vnode pointers, *m_Xvp and *m_Yvp, point to the real vnodes of the file or directory within the two file systems. The mnode data structure of the MFS File System looks like the following:

FIG. 4 shows a detailed picture of what the MFS looks like and its relationship with the two underlying file systems. The directory b/y 231 is a newly created directory, the root of the new mirror file system 203. The directory b/y 231 of mirror file system 203 is a virtual directory; there is no physical storage for any file or directory within the mirror file system. But the directory b/y 231 of mirror file system 203 has a mnode data structure allocated by the MFS. Within its mnode, it has two pointers: one pointer named m_Xvp points to the b directory of File System A 201; the other pointer named m_Yvp points to the y directory of File System B 202. These two directories pointed to by the two pointers of the mnode reside in the physical storage devices.

When an application program 10 accesses either 1) the b directory of File System A 201 by using the path name /A/b from the File System A, or 2) the y directory of File System B 202 by using the path name /X/y from the File System B 202, as it did before the MFS was mounted, the system detects that the directory b or directory y has the mirror file system 203 mounted on it (by checking the v_vfsmountedhere field of the vnode), and it becomes the root directory b/y 231 of mirror file system 203. All file access requests (open, read, write, seek, close, etc.) are directed to the vnode operation (struct vnodeops *v_op) of the vnode for the virtual directory b/y 231 of mirror file system 203. When the vnode operation (for example, the vop_open( ) operation for an open request from the application) of directory b/y 231 gets the open request, it will first get the mnode from the private data field v_data of its vnode. From the mnode, the vop_open( ) operation finds both the vnode of directory b of File System A 201 and the vnode of directory y of File System B 202. The open request is then sent to the vop_open( ) operations of both vnodes. The code for vop_open( ) in the mirror file system looks like the following:

All other vnode operations like mfs_read( ), mfs_write( ), mfs_setattr( ), mfs_close( ), etc., follow the same procedure as described for mfs_open( ) to perform the identical operations with the same parameters on both copies of the mirroring pair. This is how the mirror file system achieves the real-time mirroring effect between the mirroring pair.

4. One Read and Two Write Operations

Since both X and Y copies contain identical information, not every operation needs to be performed on both X and Y copies. For example, the read operation can get all information from either the X or Y copy.

The mirror file system basically applies the following rules in deciding which operation goes to which copy:

a. For Open and Create operations, the mirror file system will invoke the operations that go to both X and Y copies.

b. For a Read operation, the mirror file system only needs to invoke the operation that goes to one copy to obtain the requested data. Which copy a file operation goes to is configurable during the MFS mount operation.

c. For Write file operations, the mirror file system will invoke the operations that go to both X and Y copies.

5. Configuration of Master and Slave

The preceding section describes how the MFS mount protocol sets up a mirroring pair and how the file operations
operate on the mirroring pair. The privileges of the pairs arc equal, that is, either one can mirror its contents to its counterpart in real time. The user can also configure the pairs into a Master and Slave relationship. One file system is the Master; the other one is the Slave. The Master can mirror its contents to its Slave, but not the other way. The Master-Slave configuration may be desirable when one of the mirroring pair is a Network File System that has the physical storage on the remote host.

6. Data Coherency and Consistency

As discussed previously, the write operation will go to both copies. To make sure that the two copies will be identical at all times, the write operation on both copies should be atomic; in other words, during the data writing to both copies, no other operations (read and/or write) should be allowed on the two copies. To achieve this, a locking mechanism is needed. The MFS's vop_write( ) operation acquires the locks by calling the vop_rwlock( ) operation of the first vnode, then acquires the lock for the second vnode. Both vnode locks have to be secured before the writing can proceed. If only one lock is granted, and the other one is held by another process, the MFS releases the first lock it is holding to avoid a deadlock in the case that the other process holding the second lock is also trying to acquire the first lock. After releasing the lock of the first vnode, the vop_write( ) operation uses a backoff algorithm to wait for a period of time before trying to acquire the locks on both vnodes again.

7. MFS Failover and Recover Operations

Most of the file operation requests from the application can be executed on the X copy 204 and get all correct data. The X copy 204 may become unavailable due to:

a. Maintenance work on the physical device of the X copy, or

b. Hardware failure on the controller or disk, or the network is down and the Network File System under MFS cannot be reached.

When this occurs, the mirror file system 203 switches the file operations to the Y copy to get the correct information. The recover or re-sync operation of MFS after the failover is the following:

1) In case a, the MFS is signaled by an application that issues IOCTL calls to tell the MFS that the X copy will be taken down. When the MFS receives the call, it flags the state of X copy in the mfs_vfs structure to be an unavailable state.

2) In case b, the MFS flags the state of X copy after retrying the operation a pre-defined number of times without success.

From that point on, the state of the X copy is changed, the MFS does not invoke any file operation on the X copy, and it keeps a log of which vnodes (files or directories) have been updated on the Y copy. When the X copy comes back on line again, the application issues another call to signal the MFS that the X copy is back on line. The MFS then syncs the X copy with the vnodes that were updated in the meantime, as stored in the log, and changes the state of the X copy in the mfs_vfs structure back to the available state.

If the down time of the X copy becomes too long, so that the log of vnodes overflows, then the MFS re-syncs the entire X copy with the contents of the Y copy, similar to the re-sync operation of the MFS mount protocol, when it receives the signal from the application.

8. Sharing the Mirror File System on the Network

Once the two file systems are linked by the MFS and set up on a network server, the mirror file system can be exported and shared by all clients on the network using the NFS share command and protocol. The clients can mount the mirror file system from the network server across the network and access it as if it were a local file system. All the mirroring is carried out on the network server. The command that shares or exports the mirror file system is the same command that is used to share any other file system; there is no additional file or database required to do the sharing or exporting. For the client to import or to mount the shared mirror file system on its system, it uses the same command as that which is used for importing or mounting other shared file systems.

C. Configuration and Application

The preceding sections describe how the mirror file system links and mirrors between file systems within a computer system. This section discusses how the mirror file system can be configured in the following system environments:

a. Standalone system

b. A server system in a network environment

c. A client system using the mirror file system

1. Mirror Between Two Local File Systems

FIG. 5 illustrates how a standalone mirror system X uses mirror file system X linking and mirroring between a local file system A and a local file system B. The local file system A has its data stored on a physical device Disk A; the local file system B has its data stored on Disk B. The two local file systems are linked and become a mirroring pair by the MFS mount protocol.

When Application 1 sends a file operation request 11 to mirror file system X, the mirror file system X will:

a. Invoke the file operation 13 on local file system A. The local file system A then sends the request 15 to the physical device Data A;

b. Then the mirror file system X invokes the file operation 14 on the local file system B. The local file system B then sends the request 16 to the physical device Data B.

In the case of a read file operation, MFS only needs to invoke the operation in local file system A. The manner in which MFS links and mirrors between these two file systems is described in the preceding sections.

2. Mirror Between One Local File System and One Network File System

FIG. 6 illustrates how a network server Master Mirror Server X uses mirror file system X to link and mirror between a local file system A and an Imported Network File system B. The local file system A has a physical device Data A on the Master Mirror Server X system; the Imported Network File system B is a Network File System (NFS) exported from a Slave Mirror Server Y on the network. Its physical storage is the Data B on the Slave Mirror Server Y. The mounting protocol, file system and file operations are the same as in the two local file systems mirroring configuration described previously.

In this configuration, the Master Mirror Server X acts as a Master mirror system and the Slave Mirror Server Y acts as a Slave mirror system. The following two scenarios illustrate the Master-Slave relationship:

a. When the Master Mirror Server X updates one of the MFS pair—the local file system A or Imported Network File System B, the physical storage Data A will get updated and the Physical storage Data B of Imported Network File System B on the Slave Mirror Server Y will also get updated via the NFS protocol.

b. When the Slave Mirror Server Y updates its physical storage B through its local file system B, the updates will not go to physical storage Data A of Master Mirror Server X because the Slave Mirror Server Y does not have the MFS to carry out the mirroring. In that regard, the system is only a mirror slave system. It can receive the update from the Master Mirror Server X, but it cannot mirror its contents to the Master Mirror system.

For a mirror server to be a master mirror server on the network, it needs an imported network file system that is exported or shared by, and has its physical storage on, another network server. In the above example, the Master Mirror Server X can be a master mirror server due to the fact that it has an Imported Network File System B that it can link together with its local file system A through MFS.

FIG. 7 shows how the Slave Mirror Server Y in FIG. 6 can be turned into a Master Mirror Server Y. To do that, as shown in FIG. 7, the Mirror Server X needs to export 60 its local file system A as the exported local file system A to the Master Mirror Server Y via network file system protocol 61 over the network, e.g. via Ethernet. The Master Mirror Server Y then links the Imported Network File System A and its local file system B together with the mirror file system Y.

When that is done, two master mirror servers reside on the network. These two master mirror servers mirror and backup each other on the network. An application can run on either master mirror server and get all needed information.

3. Mirror Between Two Imported Network File Systems

FIG. 8 illustrates how the Client Mirror System Z uses mirror file system Z linking and mirroring between imported Network File system A and imported Network File System B. In this configuration, the two imported network file systems are the network file systems imported from remote systems on the network. The physical devices of the imported network file systems are on the remote computer systems on the network.

These two imported network file systems are mounted on a single directory by MFS, preferably the same directory that the applications have accessed. Since there are two file systems to be mounted, the mount protocol provides a new argument to indicate that the previous contents of the mounted directory should be hidden after the MFS mount operation. The contents of the two imported file systems are inherited into the mfs_vfs structure, as described previously.

In this configuration, the Client Mirror System Z is a client that accesses file systems on two servers; one is designated the primary server, and the other is designated a secondary server. The primary server may be deployed on the client's subnetwork; the secondary server can be deployed on a different subnet and be far away physically. When the primary server becomes unavailable, the client can switch to the secondary server. For most file system or file operations, especially the read-related operations, the client only needs to access the primary server. The client only needs to access the secondary server when doing the write-related operations.


4. Sharing Mirror File System on the Network

FIG. 9 depicts how two Master Mirror Servers on the network can serve their users better by mirroring and backing up each other. One can make this configuration even better by sharing the Mirroring File Systems across the network to let the clients access them as the local file system. Every client can choose the closest Master Mirror Server on the network as its primary Mirror Server and the other one as its secondary Mirror Server. Ideally, the Master Mirror Server will be on the same subnet as all its clients to save much of the traffic from going through network routers and switches. If the primary Mirror Server becomes unavailable, the clients can switch to the secondary Mirror Server.

With reference to FIG. 9, the following is a scenario describing how the data flows between the Client and Mirror Servers:

1) Server exports the mirror file system. To share its mirror file system X 655 with client systems on the network, the Master Mirror Server X 650 needs to export 605 its mirror file system X 655 as the Exported mirror file system X 651 using the Network File System Protocol 606 to its Client System X 670 on the network, ideally on the same subnet.

2) Client imports the mirror file system. The Client System X 670 on the network imports the Exported mirror file system X 651 as its Imported mirror file system X 671 by using the Network File System protocol 606.

3) Applications on the client access shared mirror file system. When an Application 6 on the Client System X 670 makes an update 632 on the Imported mirror file system X 671, the update is sent 606 by using the Network File System Protocol to the Exported mirror file system X 651 on the Master Mirror Server X 650.

4) The mirror file system X updates two file systems under its management. When the mirror file system X 655 of Master Mirror Server X 650 receives 603 the update through its Exported mirror file system X 651, it does the following:

a. Send 604 the update to the local file system A 652 first. The local file system A 652 then sends 607 the update to its physical device Data A 656.

b. Send 605 the update to its Imported Network File System B 654. The Imported Network File System 654 then sends 609 the update via a Network File System protocol to the Exported Network File System B 664 on the Master Mirror Server Y 660.

c. The local file system B 663 of Master Mirror Server Y 660 receives 625 the update from its Exported Network File System B 664 and sends it 624 to the physical device Data B 666.

After the above steps are done, a copy of the update is stored in Data A 656 of Master Mirror Server X 650, and another copy is stored in Data B 666 of Master Mirror Server Y 660.

5. A Client Links Two Imported Mirror File Systems

FIG. 10 shows a client that links two Mirror File Systems imported from Master Mirror Servers. The configuration is the combination of the configurations illustrated in FIGS. 8 and 9. In this configuration, the MFS mount protocol allows a file system like the Imported mirror file system 702 to be designated as the Active File System and the other file system, the Imported mirror file system 703, to be designated as the Passive File System during the MFS mount. The client only accesses the Active File System until the Active File System becomes unavailable. When the Active File System is not responding to the client's request, the client will failover to the Passive File System and continue its operation.

In this configuration, the client system X 670 imports and links two Mirror File systems, one from Master Mirror Server X and the other from Master Mirror Server Y. Since these two imported Mirror file systems mirror each other on their own Master Mirror Servers X and Y, the Client system X 670 does not need to do any mirroring between these two imported Mirror file systems; all the mirroring is done on the Master Mirror Servers X and Y. The configuration is different from the configuration of FIG. 8 in the following respects:

1. The client does not have to do the mirroring between the two imported mirror file systems.

2. The client uses one imported mirror file system as its active file system, the other one as the passive file system.

3. The client only needs to access the active file system to get all needed information at any given time; this includes read and write operations. When the client does a write operation on the active file system, the Master Mirror Server X will carry out the mirroring to the file system on the Master Mirror Server Y.

4. If the active file system becomes unavailable, the client can failover to the passive file system and continue its operation seamlessly.

5. When the active file system is back on line again, all recovery and re-sync are done on the master mirror server, not on the client.

This configuration can provide clients a very smooth, reliable and efficient network operation.

What is claimed is:

1. A virtual file system which provides mirroring and linking of two file systems, comprising:

means for mounting components of each of said two file systems on a single mount point constituting a single root directory for the components of both of said two file systems such that each mounted component of one of said two file systems is linked together with and becomes a mirroring pair with a corresponding mounted component in the other one of said two file systems, each of said two file systems having an application interface data structure constituting a programming interface for management thereof and access thereto; and

a virtual file system configured to manage the linking and mirroring of the corresponding mounted components of each of said two file systems, and including a super application interface data structure containing an application interface data structure of said virtual file system, and said application interface data structures of each of said two file systems.

2. The virtual file system of claim 1, wherein said super application interface data structure of said virtual file system contains said application interface data structure of said virtual file system for managing said virtual file system as a whole, and said application interface data structures of each of said mounted file systems for management of said mounted file systems as a whole, respectively, and

wherein said super application interface data structure of said virtual file system is configured to serve as a fundamental interface frame structure to link said mounted file systems together as a mirroring pair.

3. The virtual file system of claim 1, wherein said components are one of a file system and a sub-structure of a file system that comprises directories and files.


4. A method for mirroring files and directories between file systems on a computer system or on two computer systems connected to each other via a network, comprising the steps of:

mounting components of each of two file systems on a single mount point constituting a single root directory to create a virtual file system in which each mounted component of one of said two file systems is linked together with a corresponding component in the other one of said two file systems, each of said mounted components being one of a directory and a file;

configuring said virtual file system so that each component of said virtual file system has a super application interface data structure containing an application interface data structure of said component in said virtual file system, an application interface data structure of a linked component in said one of said two file systems, and an application interface data structure of said corresponding linked component in said other one of said two file systems, said application interface data structure of said component in said virtual file system providing a mechanism for managing said component within said virtual file system and the corresponding linked components within said two file systems;

upon receiving a request to perform a write operation on one of said mounted components, using said application interface data structure of said component in said virtual file system to perform the write operation on said linked component in said one of said two file systems and on the corresponding linked component in said other one of said two file systems in real time in response to said request.

5. The method of claim 4 wherein said request designates said one component, on which the write operation is to be performed, by means of a path name that is common to both of said file systems.

6. The method of claim 4 wherein the step of performing said write operation includes the steps of acquiring a lock for each of said one component and said corresponding component of said one component, and inhibiting said write operation until both locks can be acquired.

7. The virtual file system of claim 1, wherein said mounting means mounts a directory of one of said file systems to a directory of the other file system via said single mount point.

8. The virtual file system of claim 1 wherein said single mount point constituting the single directory functions as a single mount point for access to the components of both of said two file systems.

9. The virtual file system of claim 1 wherein the mounted components of each file system are replicated in the other file system.

10. The method of claim 4 wherein said mounting step comprises mounting a directory of one of said file systems to a directory of the other file system via said single mount point.

11. A mirrored file system, comprising:

a first server having a first local file system and a first physical storage device associated therewith;

a second server having a second local file system and a second physical storage device associated therewith; and

a client device having a virtual file system which mounts an imported file system from said first server and an imported file system from said second server on a single mount point constituting a single root directory to provide a single point of access for mounted components stored in each of said first and second local file systems, such that each mounted component in one of said first and second local file systems has a corresponding copy in the other one of said first and second local file systems.

12. The mirrored file system of claim 11, wherein said first local file system and said second local file system are each imported into said client device, and said virtual file system mounts components of each of said two imported file systems to a single directory via said single mount point.

13. The mirrored file system of claim 12 wherein said virtual file system contains a super application interface data structure including an application interface data structure of said virtual file system, an application interface data structure of said first local file system, and an application interface data structure of said second local file system, and

wherein said virtual file system is configured to access said application interface data structures of said first and
second local file systems to manage said first and second
local file systems mounted on said single mount point.

14. The mirrored file system of claim 11 wherein each of said first and second servers includes a virtual file system that mounts components of said server's local file system and components of the other server's local file system in a single directory on said single mount point.

15. The mirrored file system of claim 14 wherein the file systems that are imported into said client device comprise the virtual file systems of said first and second servers.

16. The mirrored file system of claim 14 wherein the virtual file system in each server contains a super application interface data structure including an application interface data structure of said virtual file system, an application interface data structure of said first local file system, and an application interface data structure of said second local file system, and

wherein said virtual file system in each server is configured to access said application interface data structures of
said first and second local file systems to manage said first and second local file systems mounted on said single mount point.

17. The method of claim 4, wherein said virtual file system causes the write operation performed on said one component stored in one of said two file systems to be replicated in the corresponding component of said one component stored in the other one of said two file systems in real time.

18. The mirrored file system of claim 11, wherein said virtual file system is configured to cause an operation performed on a component stored in one of said first and second local file systems to be replicated in the corresponding copy of said component stored in the other one of said first and second local file systems in real time.

* * * * *
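Claims 4 and 6 recite the mechanics at the heart of the patent: a component in the virtual file system is linked to a mirroring pair of components in the two underlying file systems, and a write is inhibited until a lock is acquired on both linked components, then performed on both in real time. A minimal sketch of that write path, with invented names (`Component`, `VirtualNode`) and no claim to match Twin Peaks' actual code:

```python
import threading

# Hypothetical illustration of the write path recited in claims 4 and 6.
# Each virtual-file-system node links one component in each of the two
# underlying file systems; a write is inhibited until locks are held on
# BOTH linked components, then performed on both.

class Component:
    def __init__(self):
        self.data = None
        self.lock = threading.Lock()

class VirtualNode:
    """One mounted component linked to its mirroring pair."""
    def __init__(self, left, right):
        self.left, self.right = left, right   # the two file systems

    def write(self, data):
        # Claim 6: acquire a lock for each of the two linked
        # components, inhibiting the write until both are held.
        while True:
            if self.left.lock.acquire(timeout=0.1):
                if self.right.lock.acquire(timeout=0.1):
                    break
                self.left.lock.release()      # back off and retry both
        try:
            # Claim 4: perform the write on the linked component in
            # each of the two file systems in response to one request.
            self.left.data = data
            self.right.data = data
        finally:
            self.right.lock.release()
            self.left.lock.release()

a, b = Component(), Component()
node = VirtualNode(a, b)
node.write(b"hello")
```

The release-and-retry step when only one lock can be acquired is one conventional way to satisfy the "inhibiting said write operation until both locks can be acquired" limitation without deadlocking against a writer approaching the pair from the other side.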


22 - Amended Complaint

UNITED STATES DISTRICT COURT
NORTHERN DISTRICT OF CALIFORNIA
SAN JOSE DIVISION

2. On information and belief, Defendant Red Hat, Inc. (“Red Hat”) is a corporation duly
organized and existing under the laws of the State of Delaware, having its principal place of business at
1801 Varsity Drive, Raleigh, North Carolina 27606.

3. On information and belief, Defendant Gluster, Inc. (“Gluster”) is a corporation duly
organized and existing under the laws of the State of California, having its principal place of business at
640 W. California Ave., Suite 200, Sunnyvale, California, 94086.

4. On or about October 4, 2011, Red Hat publicly announced that it was entering into an
agreement to acquire full ownership of Gluster.

5. On information and belief, Gluster currently operates as a fully-owned subsidiary of Red
Hat.

JURISDICTION AND VENUE

6. This is an action for patent infringement arising under the Patent Act, 35 U.S.C. §§101 et
seq. This Court has jurisdiction over Plaintiff’s federal law claims under 28 U.S.C. §§1331 and 1338(a).

7. This Court has specific and/or general personal jurisdiction over Defendants Red Hat and
Gluster (collectively, “Defendants”) because they have committed acts giving rise to this action within
this judicial district and/or have established minimum contacts within California and within this judicial
district such that the exercise of jurisdiction over Defendants would not offend traditional notions of fair
play and substantial justice.

8. Venue is proper in this District pursuant to 28 U.S.C. §§1391(b)-(c) and 1400(b) because
Defendants have committed acts within this judicial district giving rise to this action, and continue to


conduct business in this district, and/or have committed acts of patent infringement within this District
giving rise to this action.

BACKGROUND AND INFRINGEMENT OF U.S. PATENT 7,418,439

9. Twin Peaks re-alleges and incorporates by reference the allegations set forth in the
preceding paragraphs as if fully set forth herein.

10. On August 26, 2008, the United States Patent and Trademark Office duly and lawfully
issued United States Patent Number 7,418,439 (“the ’439 patent”) entitled “Mirror File System” to the
inventor John P. Wong. Mr. Wong is the founder, Chairman, and Chief Technology Officer of Twin
Peaks. A true and correct copy of the ’439 patent is attached hereto as Exhibit A.

11. Twin Peaks is the owner and assignee of all right, title, and interest in and to the ’439
patent, including the right to assert all causes of action arising under said patent and the right to any
remedies for infringement of it.

12. On information and belief, Defendants have infringed and continue to infringe one or
more claims of the ’439 patent in the State of California, in this judicial district, and elsewhere in the
United States by, among other things, making, using, importing, offering to sell, and/or selling in the
United States software products for managing data on computer networks, including but not limited to
GlusterFS and other products that incorporate GlusterFS technology such as Red Hat Storage Software
Appliance and Red Hat Virtual Storage Appliance (“the accused software products”).

13. On information and belief, Defendants indirectly infringe one or more claims of the ’439
patent by inducing their customers’ infringement using such software products. On information and
belief, Defendants have had knowledge of the ‘439 patent since at least February 23, 2012, the filing
date of this action. Despite this knowledge of the ‘439 patent, Defendants have continued to engage in
activities to encourage and assist their customers who use the accused software products to directly
infringe one or more claims of the ‘439 patent. For example, through their websites at
http://www.redhat.com and http://www.gluster.org, Defendants advertise and provide instructions on
how to use the feature in the accused software products of managing data through replicated volumes.


Such advertisements and instructions are provided in, for example, technical documentation made
available by Defendants through their websites, including but not limited to Administration Guides and
User Guides for the accused software products. On information and belief, by using features in the
accused software products such as this feature of managing data through replicated volumes,
Defendants’ customers have directly infringed and continue to directly infringe one or more claims of
the ‘439 patent. On information and belief, Defendants knew or should have known their activities in
encouraging and instructing customers in the use of the accused software products, including but not
limited to the activities set forth above, would induce their customers’ direct infringement of the ’439
patent.

14. On information and belief, Defendants will continue to infringe the ’439 patent unless
enjoined by this Court.

15. Defendants’ acts of infringement have damaged Twin Peaks in an amount to be proven at
trial, but in no event less than a reasonable royalty. Defendants’ infringement of Twin Peaks’ rights
under the ’439 patent will continue to damage Twin Peaks causing irreparable harm, for which there is
no adequate remedy at law, unless enjoined by this Court.

PRAYER FOR RELIEF

WHEREFORE, Twin Peaks prays for relief as follows:

a. For judgment that Defendants have infringed and continue to infringe the claims of
the ’439 patent;

b. For a preliminary and/or permanent injunction against Defendants and their
respective officers, directors, agents, servants, affiliates, employees, divisions,
branches, subsidiaries, parents, and all others acting in active concert therewith from
infringement of the ’439 patent;

c. For an accounting of all damages caused by Defendants’ acts of infringement;

d. For a judgment and order requiring each Defendant to pay Twin Peaks’ damages,
costs, expenses, and pre- and post-judgment interest for its infringement of the ’439


patent as provided under 35 U.S.C. §284;

e. For a judgment and order finding this to be an exceptional case, and awarding Twin
Peaks attorney fees under 35 U.S.C. §285; and

f. For such relief at law and in equity as the Court may deem just and proper.

Pursuant to Rules 8 and 15(a)(3) of the Federal Rules of Civil Procedure, Defendants Red
Hat, Inc. and Gluster, Inc. (collectively, “Red Hat” or “Defendants”) hereby respond to Plaintiff
Twin Peaks Software Inc.’s (“Plaintiff”) First Amended Complaint for Patent Infringement (D.I.
22). Red Hat denies each and every allegation contained in the First Amended Complaint that is not
expressly admitted below. Any factual allegation below is admitted only as to the specific admitted
facts, not as to any purported conclusions, characterizations, implications or speculations that
arguably follow from the admitted facts. Red Hat denies that Plaintiff is entitled to the relief
requested or any other relief.

ANSWER

1. On information and belief, Red Hat admits that Plaintiff is a California corporation
with its principal place of business at 46732 Fremont Blvd., Fremont, California 94538. Red Hat
lacks sufficient knowledge to admit or deny the remaining allegations of Paragraph 1 and therefore
denies those allegations.

2. Red Hat, Inc. admits that it is a Delaware corporation with its principal place of
business at 1801 Varsity Drive, Raleigh, North Carolina 27606.

3. Gluster, Inc. denies that it is a California corporation having its principal place of
business at 640 W. California Ave., Suite 200, Sunnyvale, California, 94086.

4. Red Hat admits that on or about October 4, 2011, Red Hat, Inc. announced that it
was entering into an agreement to acquire Gluster, Inc. Except as expressly admitted, Red Hat
denies the remaining allegations of Paragraph 4.

7. Red Hat admits for purposes of this action only that this Court has personal
jurisdiction over Red Hat. Red Hat expressly denies that it has committed any act of infringement
in this judicial district or elsewhere. Except as expressly admitted, Red Hat denies the remaining
allegations of Paragraph 7.

8. Red Hat admits for purposes of this action only that venue is proper in this District.
Red Hat expressly denies that it has committed any acts of infringement in this judicial district or
elsewhere. Except as expressly admitted, Red Hat denies the remaining allegations of Paragraph 8.

9. Red Hat admits that Plaintiff has realleged and incorporated by reference its
allegations in the preceding paragraphs of the First Amended Complaint. Except as expressly
admitted, Red Hat denies the remaining allegations of Paragraph 9.

10. Red Hat admits that U.S. Patent No. 7,418,439 (“the ’439 Patent”) bears an issue
date of August 26, 2008, and that a copy of the ’439 Patent appears to be attached as Exhibit A to
the First Amended Complaint. Red Hat admits that the first page of the ’439 Patent states that the
patent’s title is “Mirror File System,” and identifies its inventor as John P. Wong of Fremont,
California. Red Hat denies that the ’439 Patent was duly and lawfully issued. Red Hat lacks
sufficient knowledge to admit or deny the remaining allegations of Paragraph 10 and therefore
denies those allegations.

11. Red Hat admits that the first page of the ’439 Patent identifies Twin Peaks Software,
Inc. as the Patent’s assignee. Red Hat lacks sufficient knowledge to admit or deny the allegations of
Paragraph 11 and therefore denies those allegations.

12. Red Hat admits that it has sold the Gluster FS, Red Hat Storage Software Appliance,
and Red Hat Virtual Storage Appliance products. Red Hat expressly denies that it has directly
and/or indirectly infringed any claim of the ’439 Patent, whether in this District or elsewhere in the
United States. Red Hat further denies the remaining allegations of Paragraph 12.

13. Red Hat admits that Plaintiff filed its original Complaint on February 23, 2012,
accusing Red Hat of infringing the ’439 Patent. Red Hat further admits that it owns and is
responsible for the Red Hat-branded content available on the www.redhat.com and
www.gluster.com websites, including technical documentation such as Administration Guides and


User Guides for its various products. Red Hat expressly denies that it has directly and/or indirectly
infringed any claim of the ’439 Patent, and denies that any of its customers directly infringe any
claim of the ’439 Patent by using any Red Hat product(s). Red Hat further denies the remaining
allegations of Paragraph 13.

14. Red Hat expressly denies that it has directly and/or indirectly infringed any claim of
the ’439 Patent. Red Hat denies that Plaintiff is entitled to injunctive relief of any kind. Red Hat
further denies the remaining allegations of Paragraph 14.

15. Red Hat denies that it has directly and/or indirectly infringed any claim of the ’439
Patent and denies causing any damage to Plaintiff of any kind. Red Hat further denies the
remaining allegations of Paragraph 15.

PLAINTIFF’S PRAYER FOR RELIEF

Red Hat denies the allegations of Plaintiff’s Prayer for Relief against Red Hat and denies
that Plaintiff is entitled to any relief whatsoever from Red Hat. Red Hat asks that judgment be
entered for Red Hat and that this action be found to be an exceptional case under 35 U.S.C. § 285,
entitling Red Hat to an award of its reasonable attorneys’ fees incurred in connection with Red
Hat’s defense against Plaintiff’s claims, together with such other and further relief as the Court
deems appropriate.

PLAINTIFF’S DEMAND FOR A JURY TRIAL

Red Hat acknowledges that Plaintiff has demanded a jury trial of this action.

AFFIRMATIVE DEFENSES

Red Hat asserts the following affirmative defenses in response to Plaintiff’s First Amended
Complaint. Red Hat reserves the right to allege additional affirmative defenses as they become
known throughout the course of discovery.

FIRST AFFIRMATIVE DEFENSE

(Non-Infringement)

16. Red Hat has not infringed and does not currently infringe (either directly,
contributorily, or by inducement) any valid claim of the ’439 Patent.


SECOND AFFIRMATIVE DEFENSE

(Invalidity)

17. The claims of the ’439 Patent are invalid and unenforceable because they fail to
satisfy one or more conditions for patentability set forth in 35 U.S.C. § 101 et seq., including,
without limitation, Sections 101, 102, 103, and 112, because the alleged invention of the ’439
Patent lacks utility, is taught by, suggested by, and/or anticipated or obvious in view of the prior
art, is not enabled, and/or is unsupported by the written description of the patented invention, and no
claim of the ’439 Patent can be validly construed to cover any Red Hat product.

THIRD AFFIRMATIVE DEFENSE

(Laches/Unclean Hands/Equitable Estoppel/Waiver)

18. Plaintiff’s claims are barred, in whole or in part, by the equitable doctrines of laches,
unclean hands, estoppel and/or waiver.

FOURTH AFFIRMATIVE DEFENSE

(Prosecution History Estoppel)

19. Plaintiff’s claims are barred by the doctrine of prosecution history estoppel based on
statements, representations and admissions made during prosecution of the patent application
resulting in the ’439 Patent and/or in related patent applications.

23. Plaintiff’s claim for injunctive relief is barred because there exists an adequate
remedy at law and Plaintiff’s claims otherwise fail to meet the requirements for such relief.

NINTH AFFIRMATIVE DEFENSE

(Reservation of Rights)

24. Red Hat reserves the right to add any additional defenses (including but not limited
to inequitable conduct) or counterclaims which may now exist or in the future may be available
based on discovery and further factual investigation in this case.

25. These counterclaims seek declaratory judgments of non-infringement and invalidity
of the ’439 Patent asserted by Plaintiff in this action. Red Hat seeks judgment under the patent laws
of the United States, 35 U.S.C. § 101, et seq., and the Declaratory Judgment Act, 28 U.S.C. §§ 2201
and 2202.

Parties

26. Red Hat, Inc. is a Delaware corporation with a principal place of business at 1801
Varsity Drive, Raleigh, North Carolina 27606.

27. Gluster, Inc. is a Delaware corporation and a wholly owned subsidiary of Red Hat,
Inc., with a principal place of business at 1801 Varsity Drive, Raleigh, North Carolina 27606.

28. On information and belief, Twin Peaks Software Inc. is a California corporation with
a principal place of business at 46732 Fremont Blvd., Fremont, California 94538.


Jurisdiction and Venue

29. This Court has subject matter jurisdiction over these counterclaims pursuant to 28
U.S.C. §§ 1331 and 1338, the patent laws of the United States set forth at 35 U.S.C. §§ 101 et seq.,
and the Declaratory Judgment Act, 28 U.S.C. §§ 2201 and 2202.

30. Plaintiff has consented to the personal jurisdiction of this Court by commencing its
action against Red Hat for patent infringement in this judicial district, as set forth in Plaintiff’s First
Amended Complaint.

31. Venue is proper in this judicial district pursuant to 28 U.S.C. §§ 1391(b), (c) and
1400(b).

COUNT I

(Declaratory Judgment of Non-Infringement)

32. Red Hat incorporates by reference the allegations of Paragraphs 26-31 above as
though fully set forth herein.

33. An actual case or controversy exists between Red Hat and Twin Peaks Software Inc.
as to whether the ’439 Patent is or is not infringed by Red Hat.

34. Pursuant to the Federal Declaratory Judgment Act, 28 U.S.C. § 2201 et seq., Red Hat
requests a declaration of the Court that Red Hat has not infringed and does not currently infringe
any claim of the ’439 Patent, either directly, contributorily, or by inducement.

35. On information and belief, prior to filing its First Amended Complaint and at a
minimum prior to the filing of this Answer, Plaintiff knew, or reasonably should have known, that
the claims of the ’439 Patent are not infringed by Red Hat, and/or that its claims against Red Hat are
barred in whole or in part. Plaintiff’s filing of the First Amended Complaint and continued pursuit
of its present claims against Red Hat in view of this knowledge makes this case exceptional within
the meaning of 35 U.S.C. § 285.

COUNT II

(Declaratory Judgment of Invalidity)

36. Red Hat incorporates by reference the allegations of Paragraphs 26-35 above as
though fully set forth herein.


37. An actual case or controversy exists between Red Hat and Twin Peaks Software Inc.
as to whether the ’439 Patent is or is not invalid.

38. Pursuant to the Federal Declaratory Judgment Act, 28 U.S.C. § 2201 et seq., Red Hat
requests a declaration of the Court that the ’439 Patent is invalid because it fails to satisfy
conditions for patentability specified in 35 U.S.C. § 101 et seq., including, without limitation,
Sections 101, 102, 103, and/or 112, because the alleged invention of the ’439 Patent lacks utility, is
taught by, suggested by, and/or anticipated or obvious in view of the prior art, is not enabled,
and/or is unsupported by the written description of the patented invention, and no claim of the ’439
Patent can be validly construed to cover any Red Hat product.

39. On information and belief, prior to filing its First Amended Complaint and at a
minimum prior to the filing of this Answer, Plaintiff knew, or reasonably should have known, that
the claims of the ’439 Patent are invalid. Plaintiff’s filing of the First Amended Complaint and
continued pursuit of its present claims against Red Hat in view of this knowledge makes this case
exceptional within the meaning of 35 U.S.C. § 285.

1. That Red Hat has not infringed and is not infringing, either directly, indirectly, or
otherwise, any valid claim of the ’439 Patent;

2. That the claims of the ’439 Patent are invalid;

3. Issuing a permanent injunction preventing Twin Peaks Software Inc., including its
officers, agents, employees and all persons acting in concert or participation with Twin Peaks
Software Inc., from charging that the ’439 Patent is infringed by Red Hat;

4. That Twin Peaks Software Inc. take nothing by its First Amended Complaint;

32. Twin Peaks incorporates by reference its responses to Paragraphs 26-31 of the
Counterclaims as if fully set forth herein.

33. Twin Peaks admits Paragraph 33 of the Counterclaims.

34. Twin Peaks admits that Red Hat requests, pursuant to the Federal Declaratory
Judgment Act, 28 U.S.C. § 2201 et seq., a declaration of the Court that Red Hat has not infringed
and does not currently infringe any claim of the ’439 Patent, either directly, contributorily, or by
inducement. Twin Peaks denies the remaining allegations in Paragraph 34 of the Counterclaims.

35. Twin Peaks denies Paragraph 35 of the Counterclaims.

COUNT II

(Declaratory Judgment of Invalidity)

36. Twin Peaks incorporates by reference its responses to Paragraphs 26-35 of the

Pursuant to Rules 8 and 15(a)(3) of the Federal Rules of Civil Procedure, Defendants Red
Hat, Inc. (“Red Hat”) and Gluster, Inc. (“Gluster”) (collectively, “Defendants”) hereby amend their
answer and counterclaims in response to Plaintiff Twin Peaks Software Inc.’s (“Plaintiff” or “Twin
Peaks”) First Amended Complaint for Patent Infringement (D.I. 22). Defendants deny each and
every allegation contained in the First Amended Complaint that is not expressly admitted below.
Any factual allegation below is admitted only as to the specific admitted facts, not as to any
purported conclusions, characterizations, implications or speculations that arguably follow from the
admitted facts. Defendants deny that Plaintiff is entitled to the relief requested or any other relief.

ANSWER

1. On information and belief, Defendants admit that Plaintiff is a California corporation
with its principal place of business at 46732 Fremont Blvd., Fremont, California 94538. Defendants
lack sufficient knowledge to admit or deny the remaining allegations of Paragraph 1 and therefore
deny those allegations.

2. Defendants admit that Red Hat is a Delaware corporation with its principal place of
business at 1801 Varsity Drive, Raleigh, North Carolina 27606.

3. Defendants deny that Gluster is a California corporation having its principal place of
business at 640 W. California Ave., Suite 200, Sunnyvale, California 94086.

4. Defendants admit that on or about October 4, 2011, Red Hat publicly announced that
it was entering into an agreement to acquire Gluster. Except as expressly admitted, Defendants
deny the remaining allegations of Paragraph 4.

7. Defendants admit for purposes of this action only that this Court has personal
jurisdiction over Defendants. Defendants expressly deny that they have committed any act of
infringement in this judicial district or elsewhere. Except as expressly admitted, Defendants deny
the remaining allegations of Paragraph 7.

8. Defendants admit for purposes of this action only that venue is proper in this District.
Defendants expressly deny that they have committed any acts of infringement in this judicial district
or elsewhere. Except as expressly admitted, Defendants deny the remaining allegations of
Paragraph 8.

9. Defendants admit that Plaintiff has realleged and incorporated by reference its
allegations in the preceding paragraphs of the First Amended Complaint. Except as expressly
admitted, Defendants deny the remaining allegations of Paragraph 9.

10. Defendants admit that U.S. Patent No. 7,418,439 (“the ’439 Patent”) bears an issue
date of August 26, 2008, and that a copy of the ’439 Patent appears to be attached as Exhibit A to
the First Amended Complaint. Defendants admit that the first page of the ’439 Patent states that the
patent’s title is “Mirror File System,” and identifies its inventor as John P. Wong of Fremont,
California. Defendants deny that the ’439 Patent was duly and lawfully issued. Defendants lack
sufficient knowledge to admit or deny the remaining allegations of Paragraph 10 and therefore deny
those allegations.

11. Defendants admit that the first page of the ’439 Patent identifies Twin Peaks
Software, Inc. as the Patent’s assignee. Defendants lack sufficient knowledge to admit or deny the
allegations of Paragraph 11 and therefore deny those allegations.

12. Gluster admits that it has sold the Gluster FS product, and Red Hat admits that it has
sold the Red Hat Storage Software Appliance and Red Hat Virtual Storage Appliance products.
Defendants expressly deny that they have directly and/or indirectly infringed any claim of the ’439
Patent, whether in this District or elsewhere in the United States. Except as expressly admitted,
Defendants deny the remaining allegations of Paragraph 12.


13. Defendants admit that Plaintiff filed its original Complaint on February 23, 2012,
accusing Defendants of infringing the ’439 Patent. Red Hat further admits that it owns and is
responsible for the Red Hat-branded content available on the www.redhat.com and
www.gluster.com websites, including technical documentation such as Administration Guides and
User Guides for its various products. Defendants expressly deny that they have directly and/or
indirectly infringed any claim of the ’439 Patent, and deny that any of their customers directly
infringe any claim of the ’439 Patent by using any of Defendants’ products. Except as expressly
admitted, Defendants deny the remaining allegations of Paragraph 13.

14. Defendants expressly deny that they have directly and/or indirectly infringed any
claim of the ’439 Patent. Defendants deny that Plaintiff is entitled to injunctive relief of any kind.
Defendants further deny the remaining allegations of Paragraph 14.

15. Defendants deny that they have directly and/or indirectly infringed any claim of the
’439 Patent and deny causing any damage to Plaintiff of any kind. Defendants further deny the
remaining allegations of Paragraph 15.

PLAINTIFF’S PRAYER FOR RELIEF

Defendants deny the allegations of Plaintiff’s Prayer for Relief and deny that Plaintiff is
entitled to any relief whatsoever from Defendants. Defendants ask that judgment be entered for
Defendants and that this action be found to be an exceptional case under 35 U.S.C. § 285, entitling
Defendants to an award of their reasonable attorneys’ fees incurred in connection with their defense
against Plaintiff’s claims, together with such other and further relief as the Court deems appropriate.

PLAINTIFF’S DEMAND FOR A JURY TRIAL

Defendants acknowledge that Plaintiff has demanded a jury trial of this action.

AFFIRMATIVE DEFENSES

Defendants assert the following affirmative defenses in response to Plaintiff’s First
Amended Complaint. Defendants reserve the right to allege additional affirmative defenses as they
become known throughout the course of discovery.


FIRST AFFIRMATIVE DEFENSE

(Non-Infringement)

16. Defendants have not infringed and do not currently infringe (either directly,
contributorily, or by inducement) any valid claim of the ’439 Patent.

SECOND AFFIRMATIVE DEFENSE

(Invalidity)

17. The claims of the ’439 Patent are invalid and unenforceable because they fail to
satisfy one or more conditions for patentability set forth in 35 U.S.C. § 101 et seq., including,
without limitation, Sections 101, 102, 103, and 112, because the alleged invention of the ’439
Patent lacks utility, is taught by, suggested by, and/or anticipated or obvious in view of the prior
art, is not enabled, and/or is unsupported by the written description of the patented invention, and no
claim of the ’439 Patent can be validly construed to cover any of Defendants’ products.

THIRD AFFIRMATIVE DEFENSE

(Laches/Unclean Hands/Equitable Estoppel/Waiver)

18. Plaintiff’s claims are barred, in whole or in part, by the equitable doctrines of laches,
unclean hands, estoppel and/or waiver.

FOURTH AFFIRMATIVE DEFENSE

(Prosecution History Estoppel)

19. Plaintiff’s claims are barred by the doctrine of prosecution history estoppel based on
statements, representations and admissions made during prosecution of the patent application
resulting in the ’439 Patent and/or in related patent applications.

22. Plaintiff’s First Amended Complaint fails to state a claim for relief against
Defendants. Plaintiff’s First Amended Complaint identifies no person or entity who directly
infringes the claims of the ’439 Patent, as required to prove indirect infringement. Furthermore,
Plaintiff’s First Amended Complaint does not (and cannot) allege that Defendants’ accused
products lack substantial non-infringing uses, as required to prove contributory infringement.

EIGHTH AFFIRMATIVE DEFENSE

(No Right to Injunctive Relief)

23. Plaintiff’s claim for injunctive relief is barred because there exists an adequate
remedy at law and Plaintiff’s claims otherwise fail to meet the requirements for such relief.

NINTH AFFIRMATIVE DEFENSE

(Reservation of Rights)

24. Defendants reserve the right to add any additional defenses (including but not limited
to inequitable conduct) or counterclaims which may now exist or in the future may be available
based on discovery and further factual investigation in this case.

DEFENDANTS’ COUNTERCLAIMS AGAINST PLAINTIFF TWIN PEAKS

For their counterclaims against Plaintiff Twin Peaks, Defendants state and allege as follows:

Nature of the Action

25. These counterclaims seek declaratory judgments of non-infringement and invalidity
of the ’439 Patent asserted by Plaintiff in this action, and judgment against Twin Peaks for
copyright infringement. Defendants seek judgment under the patent laws of the United States, 35
U.S.C. § 101, et seq., the Declaratory Judgment Act, 28 U.S.C. §§ 2201 and 2202, and the copyright
laws of the United States, 17 U.S.C. § 101, et seq.

Parties

26. Defendant and Counterclaim-Plaintiff Red Hat is a Delaware corporation with a
principal place of business at 1801 Varsity Drive, Raleigh, North Carolina 27606.

27. Defendant and Counterclaim-Plaintiff Gluster is a Delaware corporation and a
wholly owned subsidiary of Red Hat, with a principal place of business at 1801 Varsity Drive,
Raleigh, North Carolina 27606.

28. On information and belief, Plaintiff and Counterclaim-Defendant Twin Peaks is a
California corporation with a principal place of business at 46732 Fremont Blvd., Fremont,
California 94538.

Jurisdiction and Venue

29. This Court has subject matter jurisdiction over these counterclaims pursuant to 28
U.S.C. §§ 1331 and 1338, the patent laws of the United States set forth at 35 U.S.C. §§ 101 et seq.,
the Declaratory Judgment Act, 28 U.S.C. §§ 2201 and 2202, and the Copyright Act, 17 U.S.C. §§
101 et seq.

30. Plaintiff has consented to the personal jurisdiction of this Court by commencing its
action against Defendants for patent infringement in this judicial district, as set forth in Plaintiff’s
First Amended Complaint.

31. Venue is proper in this judicial district pursuant to 28 U.S.C. §§ 1391(b), (c) and
1400(b), because a substantial part of the events giving rise to the counterclaims asserted herein
arose in this district, and Plaintiff, upon information and belief, is and at all relevant times was doing
business in this district.

Free and Open Source Software

32. Free and open source software (“FOSS”) is software in which the source code is
made available to users for inspection, modification, and distribution. Generally, when a computer
program is authored, the programmer writes code in a human-readable programming language.
This code is called “source code” and can be compiled into another form, called “object code,” that
is executable by a computer microprocessor. A software product (e.g., a collection of computer
programs) can be distributed solely in object code form, which allows the software product to be
fully functional on a computer system but which does not enable users easily to understand or
modify the software. By contrast, the source code to FOSS is made available to the recipient under
conditions set forth in an accompanying license, which grants relatively broad rights for recipients
to use, copy, modify, and distribute the software, but may also limit the ways in which the code or
derivative works of the code can be distributed so as to benefit the broader developer community.
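[As an editorial aside, not part of the filing: the source-code/object-code distinction paragraph 32 draws can be illustrated with a small sketch, using Python's built-in `compile()` as a loose stand-in for a traditional compiler; all names here are illustrative.]

```python
# Illustrative sketch: source code is human-readable text; compiling it
# produces a machine-oriented form that runs but is not readable prose.
source = "x = 2 + 3"  # "source code": text a person can read and modify

# compile() translates the text into a code object (Python's analogue of
# object code); exec() then runs that compiled form.
code_obj = compile(source, "<example>", "exec")

namespace = {}
exec(code_obj, namespace)

print(namespace["x"])      # the compiled form is fully functional: prints 5
print(code_obj.co_code)    # but its raw bytecode is opaque bytes, not prose
```

The analogy is loose (Python bytecode is not native machine code), but it captures the filing's point: the compiled form works on a computer while remaining difficult for a recipient to inspect or modify.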

33. The benefits of the FOSS development model are widely recognized. For example,
in holding an open source license enforceable under copyright law, the Court of Appeals for the
Federal Circuit noted that “[o]pen source licensing has become a widely used method of creative
collaboration that serves to advance the arts and sciences in a manner and at a pace that few could
have imagined just a few decades ago.” Jacobsen v. Katzer, 535 F.3d 1373, 1378 (Fed. Cir. 2008).
The Federal Circuit explained that these advances depend on the conditions provided in open source
licenses:

Open Source software projects invite computer programmers from
around the world to view software code and make changes and
improvements to it. Through such collaboration, software programs
can often be written and debugged faster and at lower cost than if the
copyright holder were required to do all of the work independently. In
exchange and in consideration for this collaborative work, the
copyright holder permits users to copy, modify and distribute the
software code subject to conditions that serve to protect downstream
users and to keep the code accessible.

Id. at 1379.

Red Hat and FOSS

34. Red Hat is a leading contributor to FOSS, including the many software packages that
make up the Linux operating system. Red Hat makes source code to Linux and its other FOSS
software offerings freely available to anyone, subject to certain conditions. Although it makes
software available under open source licenses, Red Hat derives revenues from aggregating,
certifying, testing, enhancing, packaging, maintaining, supporting and influencing the future
direction of the software, among other value-added offerings.

35. Over the past two decades, Red Hat, a publicly-traded company, has grown from a
handful of employees to over 4,500 employees and achieved annual revenue in excess of $1 billion.
Throughout this growth, Red Hat has remained committed to the open source development model.
Many of Red Hat’s thousands of employees have contributed and continue to contribute to the
FOSS ecosystem, including by developing and releasing code under FOSS licenses. By way of
example, Red Hat is the largest corporate contributor to the Linux kernel, which is a collection of
programs at the heart of the Linux operating system.

The GNU General Public License

37. The software that Red Hat makes available is typically distributed under a variety of
well-established, open source licenses, such as the GNU General Public License (the “GPL”), that
permit access to human-readable software source code as authored by contributors. These licenses
also provide relatively broad rights for licensees to use, copy, modify and distribute open source
software. These broad rights afford significant latitude for Red Hat’s customers to inspect, suggest
changes, customize or enhance the software if they so choose. A copy of version 2 of the GNU
General Public License (the “GPLv2”) is attached hereto as Exhibit A.

38. Although the GPL affords broad rights to software users, the GPL also includes
protections to prevent misappropriation of source code. Under the terms of the GPL, when
someone obtains software subject to the GPL and then redistributes it, with or without
modifications, in object code form, that person must make the complete corresponding source code
freely available to recipients of the software, including any modifications to that code, under the
same license—the GPL. This critical condition of making the source code to all modifications
available with the same freedoms that came with the original is the quid pro quo for having
benefited from the work of other developers. That quid pro quo is enforced through copyright law.
As Judge Easterbrook has noted, copyright law “ensures that open-source software remains free:
any attempt to sell a derivative work will violate the copyright laws, even if the improver has not
accepted the GPL.” Wallace v. Int'l Bus. Machines Corp., 467 F.3d 1104, 1105 (7th Cir. 2006). In
addition to requiring distributors of object code derived from GPL-licensed programs to provide
complete corresponding source code, the GPL also requires that distributors provide their recipients
with notice of the licensing terms through providing a copy of the GPL text.

39. When Red Hat distributes works licensed under the GPL, Red Hat grants certain
permissions to other parties to copy, modify and redistribute those works so long as those parties
satisfy certain conditions. In particular, Section 2(b) of the GPLv2, addressing each licensee, states:

You must cause any work that you distribute or publish, that in whole
or in part contains or is derived from the Program or any part thereof,
to be licensed as a whole at no charge to all third parties under the
terms of this License.

40. Thus, if a licensee redistributes works licensed under the GPL (including works
developed by Red Hat), it may do so only under the terms of the GPL.

41. The GPL permits a licensee to distribute licensed works, or works based on those
works, in object code form, on the condition (inter alia) that the licensee gives recipients access to
the source code corresponding to what they distribute. Specifically, Section 3 of the GPLv2
provides:

You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the
following:

a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the
terms of Sections 1 and 2 above on a medium customarily
used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to
be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange . . . .

42. Furthermore, Section 4 of the GPLv2 states:

You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.

43. Therefore, under the GPL, any party that redistributes a work in a manner that does
not comply with the terms of the GPL (including, without limitation, those set out in the paragraphs
above) immediately and automatically loses all rights granted under it, including the right to
distribute the work or modified versions thereof.

util-linux and the “mount” Program

44. util-linux is a standard software package that is included in Red Hat’s distribution of
the Linux operating system. util-linux was created in the 1990s and has undergone continuous
revision and improvements by many authors since then. It includes numerous tools that provide
critical basic functionality within the Linux operating system, such as making files on disks
available to the user of a computer system on which Linux is running. One such tool is a program
called “mount.”

45. The “mount” program in util-linux is licensed under the GPLv2. Both the object
code and the source code for the util-linux “mount” program can be freely downloaded and
redistributed, provided that the person doing so complies with the conditions of the GPLv2,
including the requirement to provide recipients of object code with complete corresponding
GPL-licensed source code.

46. Red Hat, through its employee-developers, has made significant contributions to the
tools in the util-linux package, and to its “mount” program in particular, in the form of
improvements implemented in the source code.

47. In February 2005, Red Hat released Red Hat Enterprise Linux 4. Red Hat Enterprise
Linux (or “RHEL”) is Red Hat’s Linux-based operating system that is especially targeted toward
the commercial market. RHEL 4 included many software packages, including a version of util-linux
numbered 2.12a.

Red Hat’s Copyright Registrations

48. Red Hat has obtained registrations from the United States Copyright Office for its
original contributions to the “mount” program in util-linux. In particular, Red Hat is and at all
relevant times has been the owner of Copyright Reg. Nos. TX 7-557-456 (August 13, 2012),
entitled “Mount – 2.10m” and TX 7-557-458 (August 13, 2012), entitled “Mount – 2.12a.” True
and correct copies of these registration certificates are attached hereto as Exhibit B.

Twin Peaks’ Improper Use of Red Hat’s Source Code

49. Like Red Hat, Twin Peaks distributes software that runs on the Linux operating
system. Unlike Red Hat, however, Twin Peaks distributes software only under a proprietary license
that forbids copying, and does not make any of the source code for any of its products publicly
available.

50. Twin Peaks sells, subject to its proprietary license, and without providing any source
code, software that it calls an “innovative replication solution.” That software is branded as “TPS
Replication Plus.”

51. Twin Peaks also provides a “free” version of its TPS Replication Plus software,
called “TPS My Mirror.” This version is also provided only under a proprietary license, and also
without any source code or copy of the GPL.

52. On its website, Twin Peaks represents that the “TPS Replication Plus” and “TPS My
Mirror” software packages are covered by the same patent it accuses Red Hat of infringing in this
action (the ’439 Patent).

54. On information and belief, rather than develop its own source code to create its
proprietary software replication products, Twin Peaks copied substantial portions of open source
code into those products, including source code originally authored by Red Hat. Among the code
Twin Peaks improperly copied was that embodied in the “mount” program released in util-linux
version 2.12a, which Twin Peaks copied into the source code for its own “mount.mfs” tool. Twin
Peaks’ verbatim and near-verbatim copying of open source and Red Hat source code into its
“mount.mfs” tool is pervasive and extensive.

55. By selling or providing “TPS Replication Plus” and “TPS My Mirror” under
proprietary license agreements and not making any of their source code available to the public,
Twin Peaks has failed to comply with the explicit conditions of the GPL. Twin Peaks is thus
illegally free-riding off of Red Hat’s contributions to util-linux, as well as the contributions of many
others in the FOSS community to that software.

56. By reproducing, copying, and distributing Red Hat’s original source code in “TPS
Replication Plus” and “TPS My Mirror,” without approval or authorization by Red Hat and only
subject to its own proprietary license agreement, Twin Peaks is infringing and has infringed Red
Hat’s exclusive copyrights, and likewise is inducing and has induced its customers to infringe.

57. Red Hat has not licensed or otherwise authorized Twin Peaks to reproduce, copy or
distribute Red Hat’s copyrighted source code or any works derived from it, except under the
conditions of the GPL, which Twin Peaks has failed to satisfy.

COUNT I

(Declaratory Judgment of Non-Infringement)

59. Defendants incorporate by reference the allegations of Paragraphs 26-31 above as
though fully set forth herein.

60. An actual case or controversy exists between Defendants and Twin Peaks as to
whether or not Defendants have infringed and/or are infringing the ’439 Patent.

61. Pursuant to the Federal Declaratory Judgment Act, 28 U.S.C. § 2201 et seq.,
Defendants request a declaration of the Court that Defendants have not infringed and are not
currently infringing any claim of the ’439 Patent, either directly, contributorily, or by inducement.

62. On information and belief, prior to filing its First Amended Complaint and at a
minimum prior to the filing of this Answer, Plaintiff knew, or reasonably should have known, that
the claims of the ’439 Patent are not infringed by Defendants, and/or that its claims against
Defendants are barred in whole or in part. Plaintiff’s filing of the First Amended Complaint and
continued pursuit of its present claims against Defendants in view of this knowledge makes this
case exceptional within the meaning of 35 U.S.C. § 285.

COUNT II

(Declaratory Judgment of Invalidity)

63. Defendants incorporate by reference the allegations of Paragraphs 26-31 above as
though fully set forth herein.

64. An actual case or controversy exists between Defendants and Twin Peaks as to
whether or not the ’439 Patent is invalid.

65. Pursuant to the Federal Declaratory Judgment Act, 28 U.S.C. § 2201 et seq.,
Defendants request a declaration of the Court that the ’439 Patent is invalid because it fails to
satisfy conditions for patentability specified in 35 U.S.C. § 101 et seq., including, without
limitation, Sections 101, 102, 103, and/or 112, because the alleged invention of the ’439 Patent
lacks utility, is taught by, suggested by, anticipated by, and/or obvious in view of the prior art, is not
enabled, and/or is unsupported by the written description of the patented invention, and no claim of
the ’439 Patent can be validly construed to cover any of Defendants’ products.

66. On information and belief, prior to filing its First Amended Complaint and at a
minimum prior to the filing of this Answer, Plaintiff knew, or reasonably should have known, that
the claims of the ’439 Patent are invalid. Plaintiff’s filing of the First Amended Complaint and
continued pursuit of its present claims against Defendants in view of this knowledge makes this
case exceptional within the meaning of 35 U.S.C. § 285.

COUNT III

(Copyright Infringement)

67. Red Hat incorporates by reference the allegations of Paragraphs 26-58 above as
though fully set forth herein.

68. Red Hat is, and at all relevant times has been, a copyright owner under United States
copyright law in its contributions to the “mount” program in util-linux. Its copyright registrations
for its contributions to the “mount” program include: Copyright Reg. Nos. TX 7-557-456 (August
13, 2012), entitled “Mount – 2.10m” and TX 7-557-458 (August 13, 2012), entitled “Mount –
2.12a.”

69. As the copyright owner in the “mount” program, Red Hat has the exclusive rights to
do and to authorize any of the following: to reproduce the copyrighted work in copies; to prepare
derivative works based upon the copyrighted work; and to distribute copies of the copyrighted work
pursuant to 17 U.S.C. § 106.

71. Twin Peaks’ development of software products derived from Red Hat’s copyrighted
code, without approval or authorization by Red Hat, infringes Red Hat’s exclusive copyrights in its
contributions to the “mount” program pursuant to 17 U.S.C. § 501.

72. Twin Peaks’ distribution of software products that contain Red Hat’s copyrighted
code, and which are derivative works based on Red Hat’s “mount” program, without approval or
authorization by Red Hat, infringes Red Hat’s exclusive copyrights in its contributions to the
“mount” program pursuant to 17 U.S.C. § 501.

73. Red Hat is entitled to recover from Twin Peaks for infringement of each copyright
the amount of its actual damages and any additional profits of Twin Peaks attributable to Twin
Peaks’ infringement pursuant to 17 U.S.C. § 504.

74. For each copyright, Red Hat is also entitled to permanent injunctive relief pursuant
to 17 U.S.C. § 502 because Red Hat has no adequate remedy at law for Twin Peaks’ wrongful
conduct because, among other things, (a) Red Hat’s copyrights are unique and valuable property
whose market value is impossible to assess, thus causing irreparable harm; (b) Twin Peaks’
infringement harms Red Hat such that Red Hat cannot be made whole by any monetary award; and
(c) Twin Peaks’ wrongful conduct, and the resulting damage to Red Hat, is continuing.

75. As of each copyright’s registration, Red Hat is also entitled to an order impounding
any and all infringing materials pursuant to 17 U.S.C. § 503.

PRAYER FOR RELIEF

WHEREFORE, Defendants respectfully request that this Court enter judgment in
Defendants’ favor against Plaintiff, and issue an order:

1. That Defendants have not infringed and are not infringing, either directly, indirectly,
or otherwise, any valid claim of the ’439 Patent;

2. That the claims of the ’439 Patent are invalid;

3. Granting a permanent injunction preventing Twin Peaks, including its officers,
agents, employees and all persons acting in concert or participation with Twin Peaks, from charging
that the ’439 Patent is infringed by Defendants;

4. That Twin Peaks take nothing by its First Amended Complaint;

5. Denying Twin Peaks’ request for injunctive relief;

6. Dismissing Twin Peaks’ First Amended Complaint with prejudice;

7. Declaring this case to be exceptional and awarding Defendants their costs, expenses
and reasonable attorney fees incurred in this action under 35 U.S.C. § 285, the Copyright Act, or
otherwise;

8. Granting a permanent injunction preventing Twin Peaks Software Inc. from copying,
modifying, distributing, or making any other infringing use of Red Hat’s software;