This paper describes experience gained in the development of a multimedia application within a distributed architecture. The requirements of such an environment are the following: (1) to support high-quality multimedia data; (2) to be independent of the hardware platform and of the transport protocol; (3) to be inexpensive on the client side. The goals were twofold: to develop a client/server system for the management and transmission of MPEG bitstreams, and to optimize the transmission of MPEG over ATM networks. The first goal was reached using standardized technologies in the implementation of the system components: a DSM-CC server based on CORBA Services has been realized, which is able to manage MPEG streams as specified in the ISO DSM-CC document. The client module has been realized using Java. The system, originally written for Sun Solaris, has been successfully tested on different Unix and NT platforms. The quality and performance of MPEG transmission over ATM were evaluated using three types of signaling: Classical IP over ATM, LAN Emulation, and native ATM implemented over the FORE API. Both Classical IP over ATM and LAN Emulation allow the transparent use of ATM by applications written with the widespread TCP/IP family of protocols, but they introduce an overhead which may not be suitable for real-time transmission. Of course, native ATM achieves better performance than Classical IP over ATM and LAN Emulation, but it binds the application to run only on ATM networks, and at the moment it is hardware dependent.

In this paper we propose an adaptive region-based, multi-scale, motion-compensated video compression algorithm designed for transmission over hostile communication channels. Our codec extracts spatial information from video frames to create video regions that are then decomposed into sub-bands of different perceptual importance before being compressed and transmitted independently. This allows the system to apply unequal error protection, prioritized transmission, and 'lego-reconstruction' to guarantee a minimum spatial and temporal resolution at the receiver. Furthermore, the region-segmented frames bound both spatial and temporal error propagation within frames and, when combined with our novel connection-level inter-region statistical multiplexing scheme, ensure optimal utilization of the reserved transmission bandwidth. Simulation results demonstrate that in the presence of severe time-varying error conditions and severe bandwidth constraints, our video codec exhibits better error concealment, better temporal resolution, and better bandwidth utilization properties than the popular video coding standards of the International Telecommunication Union and the International Organization for Standardization.

Data transmission across a network using constant-bit-rate (CBR) service simplifies admission control and resource management techniques. We consider lossless, starvation-free, streaming CBR transmission of compressed digital video, which is known to exhibit significant, multi-time-scale rate variability. This transmission uses work-ahead transfer into available client buffers to send data at a rate significantly below the peak rate of the original video. The goal of any video transmission scheme is to minimize resource requirements such as client buffer size, transmission rate, channel holding time, and playback startup latency. We identify, for CBR video transmissions, formal structural properties of the tradeoffs among these resources. Specifically, we show that (i) the minimum feasible client buffer requirement as a function of playback startup latency is unimodal with one minimal value, (ii) the minimum feasible CBR rate is a convex decreasing function of the startup latency, and (iii) the corresponding channel holding time is a piecewise-linear, concave, increasing function of the startup latency. Using these structural properties, we then develop an O(N log N) algorithm that computes the minimum client buffer size and the associated CBR rate and playback startup latency required to transmit a VBR video. This is a significant improvement over an existing O(N² log N) algorithm for the same problem. We next quantitatively examine the resource tradeoffs using MPEG-1 traces, and find that both the CBR transmission rate and the minimum client buffering requirement can be substantially reduced by requiring only very small playback startup latencies.
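As a hedged illustration of property (ii), the minimum feasible CBR rate for a given startup latency can be computed from the cumulative consumption curve of a VBR trace. This is a minimal sketch, assuming an unbounded client buffer and per-frame time units; `min_cbr_rate` and its parameter names are our own illustration, not the paper's algorithm:

```python
def min_cbr_rate(frame_sizes, latency_frames):
    """Smallest constant rate (bits per frame interval) at which the client
    never starves, given `latency_frames` extra intervals of work-ahead
    before playback starts. Assumes an unbounded client buffer."""
    cum = 0
    best = 0.0
    for t, size in enumerate(frame_sizes):
        cum += size
        # By the end of frame slot t the client has consumed `cum` bits,
        # while the server has been sending for t + 1 + latency_frames slots;
        # the rate must cover the worst such ratio.
        best = max(best, cum / (t + 1 + latency_frames))
    return best
```

Evaluating this over a range of latencies reproduces the decreasing-rate tradeoff the paper formalizes: a larger startup latency never requires a higher CBR rate.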

In recent years, buffering and traffic smoothing techniques for VBR video transmissions have received much attention and many effective algorithms have been proposed. It becomes essential that a unified abstraction model be introduced in order to better understand these algorithms and their relationships. In this paper, we present a simple abstraction model and show how these algorithms can be represented and how their corresponding problems relate to each other under this model. For example, we intuitively illustrate how one algorithm can be transformed to another with the addition of certain constraints. In particular, we focus on examining the similarities and the differences of two algorithms under our abstraction model: the minimum polyline smoothing and the majorization smoothing. The common smoothness parameters of these two algorithms are then thoroughly compared through experimental results based on some MPEG-1 video traces. The discrepancies in these results are explained and their implications on both transmission scheduling and network resource negotiation are discussed.

The derivation of the constraints on the encoded bit rate of a video signal imposed by the channel and by the encoder and decoder buffers is important in the context of video transmission over a wireless channel. In this paper, we derive the conditions for buffer control that ensure the buffers neither overflow nor underflow when video is transmitted over a variable bit rate channel. Using these conditions and a commonly proposed network-user contract, the effect of a network policing function on the allowable variability of the encoded video bit rate is examined. The details of how these effects might be implemented in a system that controls both the encoded and transmitted bit rates are also presented.
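The overflow/underflow conditions of this kind can be pictured with a simple discrete-time buffer simulation. This is an illustrative sketch, not the paper's formulation; the function name and interval model are our own assumptions:

```python
def buffer_feasible(sent, consumed, buf_size):
    """Check that a decoder buffer of `buf_size` bits neither overflows
    nor underflows when `sent[i]` bits arrive over the channel and
    `consumed[i]` bits are drained by the decoder in interval i."""
    level = 0
    for s, c in zip(sent, consumed):
        level += s            # bits arriving during the interval
        if level > buf_size:  # overflow: arrivals exceed buffer capacity
            return False
        level -= c            # decoder drains the interval's data
        if level < 0:         # underflow: not enough data to decode
            return False
    return True
```

A buffer-control scheme in this spirit would adjust the encoded or transmitted rate whenever a projected schedule fails this feasibility check.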

This paper presents an interactive multimedia conferencing system designed in compliance with the Telecommunication Information Networking Architecture software concepts and principles. The interactive multimedia conference service is implemented on Iona's Orbix as a distributed processing environment, running on Windows NT PCs and UNIX workstations interconnected by ATM networks. The multimedia conference system is developed to provide network-based multimedia conferencing services for large public ATM networks.

We are truly in the midst of the "Information Age". Communications equipment ranges from devices that fit in your pocket to equipment that brings interactive television viewing and the Internet to your living room. Market forces on these communication devices are demanding indeed. All communications equipment must be interoperable. Multi-protocol support is required for interoperability with different networks or even different equipment on the same network. Minimizing the ROM and RAM footprint of the devices is essential in order to compete in the marketplace. CPU utilization due to the network protocol stacks must be kept to a minimum, both to allow the lowest cost CPU and to maximize CPU availability for multimedia applications. Many of these market forces seem to compete with each other. As we look at various software architecture approaches for these devices, we will uncover inefficiencies and options that create cost, performance, and footprint tradeoffs for the equipment manufacturer. The goal of this paper is to functionally specify a software environment to support the network interface requirements of a networked embedded system. First, the general characteristics of a networked embedded system will be defined. From these characteristics, requirements of a networked embedded system will be identified and potential approaches will be outlined. The potential approaches will then be examined with respect to the tradeoffs to be made in order to fulfill the requirements.

This paper describes a solution to multimedia networking using presently installed home cabling. To reach this goal, a concept based on a high-performance single chip has been developed. This concept has been validated in several applications and is now available in an open VHDL or silicon format. The flexibility of this network allows high-speed data and power multiplexing on the same wire. The MediaFlow solution, described in this paper, proposes a remote management tool for interconnecting distributed devices such as phones, hi-fi equipment, video controls, and computers in the home. This management can be controlled either locally or remotely through ATM or ISDN networks. The concept allows a service provider to take over the installation, management, and maintenance of the whole network.

Recent advances in access technologies are creating several new network elements that enable the delivery of multiple services over high-bandwidth access streams. However, these advances are often slowed by the substantial amount of legacy networks and systems, as well as by the lack of a unifying end-to-end architectural vision. This paper presents our view of the evolution of the network to an 'interconnection network' supporting connectivity to diverse end-to-end services using varied access methods. We emphasize the importance of a Residential Gateway to support emerging access technologies, legacy home networks, and new home network technologies in a seamless, modular way. An architecture for an interconnection network, including a low-cost modular Residential Gateway, is presented. The architecture provides a framework for supporting self-configuring 'plug and play' network elements that provide dynamic access to multiple services.

An experiment has been conducted with emphasis on the home network transmission layer and the Internet gateway device. IEEE 1394 is chosen as the transmission mechanism for its capability of handling both isochronous and asynchronous transmissions. To cover the reach of a conventional home, a 1394/100BaseTX converter has been developed to carry the 1394 signaling protocol over a Category 5 unshielded twisted pair cable with a maximum point-to-point distance of 100 meters. A gateway device to the Internet is designed and configured based on a PC. The gateway consists of a 1394 transceiver with a PCI interface, a digital subscriber line transceiver, and Windows-based networking software. This paper focuses on the general long-distance 1394 transceiver hardware, the gateway system configuration, and the software architecture. Issues such as DHCP, private IP addresses, and DNS are also discussed.

This paper provides a general overview of the capabilities provided by the JTV API. The JTV API defines an extension of Java and the Java Media Framework for TV-centric devices and services. This API enables programmers to develop and distribute interactive applications and applets that combine the power of Java with both enhanced broadcast and interactive television content. JTV provides an open Java-based platform for developers of set-top and TV software.

In this paper we describe the work being done at HP Labs Bristol in the area of home networks and gateways. This work is based on the idea of breaking open the set-top box by physically separating the access-network-specific functions from the application-specific functions. The access-network-specific functions reside in an access network gateway that can be shared by many end-user devices. The first section of the paper presents the philosophy behind this approach. The end-user devices and the access network gateways must be interconnected by a high-bandwidth network which can offer a bounded-delay service for delay-sensitive traffic. We are advocating the use of IEEE 1394 for this network, and the next section of the paper gives a brief introduction to this technology. We then describe a prototype digital video broadcasting satellite-compliant gateway that we have built. This gateway could be used, for example, by a PC for receiving a data service or by a digital TV for receiving an MPEG-2 video service. A control architecture is then presented which uses a PC application to provide a web-based user interface to the system. Finally, we provide details of our work on extending the reach of IEEE 1394 and its standardization status.

An important element in the performance of the Medium Access Control (MAC) layer of Hybrid Fiber Coax (HFC) systems is the set of medium access mechanisms employed: e.g. requests via contention mini-slots, piggybacked requests, request polling, etc. Therefore, the starting point in the design of the MAC of ATM-cell-based HFC systems is an evaluation of the candidate mechanisms, which is presented in this paper. Furthermore, it is shown that different traffic types need different ideal MAC mechanisms, or different ideal combinations of them. Several proposals are given for different traffic classes; and while much attention in standardization and early deployment today goes to Internet traffic, this paper additionally considers the very different traffic of ATM terminals, which is rate controlled according to the ATM traffic characteristics.

It is difficult to simultaneously serve the millions of video streams that will be needed in the age of 'Mega-Media' networks by using only one high-performance server. To distribute the service load, caching servers should be located near users. However, in previously proposed caching mechanisms, the grade of service depends on whether the data is already cached at a caching server. To make the caching servers transparent to the users, the ability to randomly access the large volume of data stored in the central server should be supported, and the operational functions of the provided service should not be narrowly restricted. We propose a mechanism for constructing a video-stream-caching server that is transparent to the users and that always supports all special playback functions for all available programs and content, with a latency of only 1 or 2 seconds. This mechanism uses a variable-sized-quantum-segment-caching technique derived from an analysis of the historical usage log data generated by an on-demand-type service experiment, and is based on the basic techniques used by a time-slot-based multiple-stream video-on-demand server.

In this paper, we propose a new striping control technique for a distributed video server that makes it possible to provide a large number of video streams and enables system scalability. It is based on three key techniques: (1) multiple access using time-slot sequences that each server module controls independently, to reduce the communication overhead between the modules and to shorten the response time; (2) load balancing using replicated segments to provide video streams without interruption; and (3) logical server modules and disks, which make it possible to manage various types of server modules and disks uniformly. Simulations confirmed the superiority of our technique compared to other proposed techniques. We also built an experimental system, which likewise confirmed the effectiveness of our technique.

Application sharing is a key function for collaboration in multimedia conferencing. Application sharing plays the role of a shared workspace in a virtual space by allowing an application belonging to a single user on a particular system to be shared in real time by multiple geographically distributed users. In this paper, we develop the application sharing development toolkit (ASDT) for collaboration in multimedia conferencing and use ASDT to implement a multimedia application sharing system (MASS) with a fully distributed control architecture. MASS provides functions for multipoint application sharing, a whiteboard, text chat, and voting.

This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. In this way, the best combination of these parameters fits the distributed multimedia database system.

The purpose of Reflection of Presence is to create a framework for a telepresence environment that intelligently responds and adapts itself to its inhabitants in order to enhance interpersonal communication. Participants see reflections of themselves together with those of the other remotely-located participants, just as if everyone were standing in the same room looking at each other through a real mirror. Using visual and auditory cues, segmented images of participants are dynamically layered into a single display using varying transparency, position, and scale to reflect center of attention and degree of presence. Wireless tangible interfaces allow participants to customize their shared space and collaboratively manipulate and annotate media objects in the background. The system is novel in that it is implemented entirely as a set of cooperating scripts instead of through a low-level programming language, enabling rapid experimental changes in the behavior of the prototype.

This paper presents the significance of a multimedia medical consulting system, together with recent related developments and studies, and an experiment on medical consultation via a teleconference system. The results revealed the requirements and subjects for further development.

Conference applications facilitate communication and cooperation between users at different locations. Increased costs and the distribution of institutions and companies lead to a stronger need for conference applications as an efficient utility for collaboration. The deployment of conference applications was enabled by research progress in communications and distributed algorithms as well as by the increased performance of computers and networks. Efficient standardized protocols become mandatory to enable connectivity among systems of different vendors and to facilitate the implementation of conference applications. The ITU defined a number of standards for conference applications in the T.120 series of standards. The paper describes and evaluates the features of the ITU T.120 series of standards. The concepts and algorithms are analyzed with regard to the application scenarios that can be covered efficiently. The analysis shows two main conceptual weaknesses. First, efficient multicast capabilities of networks are not directly used, since multipoint connections are inefficiently mapped onto reliable point-to-point connections. Second, the administration of the conference database is defined in a way not suited for conferences with many participants. The theoretical analysis is backed by a number of measurements performed with our implementation of the T.120 standards.

In Austria, some companies have exported many complete sets of equipment abroad, including to Asia and North America. These are built around local network systems that implement the process controls. Once there is any trouble in a system, the companies have to dispatch specialists or technicians to the field. Because of the long distances involved, this takes a great deal of time, manpower, and money. In recent years, network communication technology has been rapidly popularized and commercialized, which provides a good means to realize remote troubleshooting and monitoring of process control systems. This paper proposes a remote troubleshooting method based on the Internet. The basic ideas are: (1) recording the relevant process data of the control system in the local network as a historical database; (2) a field server acting as the front end of the database; (3) a proxy server mediating between the dial-in line and the Internet for network communication; (4) a client accessing the field servers through the proxy server. Through this system layout, a specialist may retrieve process information from the remote fields. He may analyze the data through various diagnostic programs to find the cause of the problem, and then send the results to the field technicians to fix it. Moreover, remote monitoring of the process control systems is also possible, so that production conditions and device status can be understood in a timely manner.

This paper proposes a hair volume algorithm that is effective in producing realistic hair models for individuals. The hair volume is produced from three images of the head taken from the right, back, and top views. This hair volume represents the real space volume where hairs are found and guides the hair strands to move in the desired direction. Hair strands are randomly generated on the skull. The outline region acquired through image processing, together with the hair volume, ensures that the randomly generated hair strands fall neatly into the hair volume to produce a hair model resembling the input images. This hair model can find many applications in the generation of synthetic humans and creatures in movie, multimedia, and computer game productions.

In an age where the wealth of information continuously grows, color not only fulfills the purpose of optically enriching data in the sense of successful presentations, but is also an important carrier of a large share of this information. Computer graphics without the use of color is something that can no longer be imagined. Application areas such as technical and scientific visualization, document processing, and medical image diagnostics are increasingly dependent on the exact reproducibility of color tints. Due to this necessity, great demands are made on the input and output equipment used in these areas with respect to color fidelity. This is especially true for display systems, as the numerically most common data terminals. When using color as a carrier of information in computer graphics applications, there are two essential problematic aspects:
• reproducibility of an exact chrominance on the display at various times, as well as on different displays at the same time;
• distinguishability of a row of hues, i.e. a color progression in all of its shades.
Both aspects are to be regarded as fundamental prerequisites. The reason for the existence of these problems lies in the physical limitations of a display system. The need for a system to calibrate the hues, i.e. for color correction, is evident. In order to analyze the possible starting points of such a correction more exactly, the essential factors influencing color representation are briefly characterized in the following section.

These days, industrial surveillance and monitoring applications such as plant control or building security must be handled quickly and precisely by fewer people. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as immediately comprehending the camera field and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.

A patent has been awarded for a new CRT design that eliminates the internal shadow mask, and promises a bright, rugged, high-resolution/large-format, multiscan/multi-sync display with potentially low manufacturing costs. The design incorporates a light-valve within the CRT. Modulation of the electron gun and the function of the shadow mask are replaced by two chemical reactions typical of phosphors: decay stimulation and quenching. These reactions are controlled through illumination of the phosphors by non-visible light projected through the light-valve. The image is generated in traditional raster form, but with a single, non-modulated electron gun; control of each pixel is made through modulation of the light-valve. The design decreases manufacturing alignment requirements and eliminates the need to focus the electron gun. Advantages include: high-G resistance due to elimination of the shadow mask; brighter displays because the electron beam is not occluded or absorbed by the shadow mask; elimination of decay rate as a parameter in phosphor choice and frame rate; and digital interfacing to image sources. Applications include: large-format displays, freed from the physical constraints of the shadow mask and from the distortion-control requirements of the electron beam at the image edges; variable frame rates by modulating the decay rate of the phosphors through quenching; and stereoscopic displays without crosstalk through control of left-right phosphor decay rates.

All-optical systems are a promising technology for terabit-per-second fiber-optic communication networks. The transmission, switching, and routing characteristics of the all-optical networks that enable these high rates are intrinsically different from their electro-optic counterparts, particularly when considered with respect to vulnerability to service denial attacks. The characteristics of both the components and the architecture of all-optical networks appear to have new and little-studied security vulnerabilities. Along with those vulnerabilities comes a new set of countermeasures, which are also different from the electro-optic scenario. This paper addresses the vulnerabilities of all-optical networks to attacks from both inside and outside the network, and presents some preliminary results on countermeasures. This work concentrates on the physical security differences between all-optical networks and more conventional electro-optic networks, with the goal of understanding the differences in attack mechanisms. These differences suggest new countermeasures that may significantly reduce the infrastructure vulnerability of all-optical networks. The work is timely in that it considers the physical security of all-optical networks at the design stage, in a way that will be difficult to match using post-deployment techniques.

Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high-speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.

We introduce a meshed ring communications network which employs cross-connect switches. Such a network provides higher reliability, security, and throughput capabilities than those offered by a SONET ring. The cross-connect switches can be implemented as ATM Virtual Path switches, leading to an ATM-compatible network system, or as wavelength routers, yielding a WDM operation. We show here that this network architecture results in a significant increase in throughput performance in comparison with ring networks. The meshing of the ring topology contributes to increasing the network's reliability, survivability, and security features. For a certain class of meshed rings, under a uniform traffic matrix, we derive the optimal topology which achieves maximum throughput efficiency. We also obtain lower and upper bounds on the number of identifiers required to achieve that efficiency. Wavelength graphs are constructed to demonstrate the implementation of the routing scheme when the above-mentioned bounds are used to synthesize the optimal architecture. For practical implementation reasons, we also investigate the performance of networks which employ a reduced number of identifiers. We demonstrate the extent to which the attainable throughput efficiency is decreased as the number of wavelengths is reduced.

An authentication protocol is proposed in this paper to implement security functions, including two-way authentication and key management between end users and the head-end. The protocol can protect the transmission from fraud and from attacks such as replay and wiretapping. Location privacy is also achieved. A reset protocol is designed to restore the system when failures occur. The security is verified by taking several security and privacy requirements into consideration.

The growth of networked multimedia systems has created a need for the copyright protection of digital images and video. Copyright protection involves the authentication of image content and/or ownership, and can be used to identify illegal copies of an image. One approach is to mark an image by adding an invisible structure, known as a digital watermark, to the image. Techniques for incorporating such a watermark into digital images include spatial-domain techniques, transform-domain algorithms, and sub-band filtering approaches.
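As a hedged illustration of the spatial-domain family mentioned above, a toy watermark can be hidden in the least-significant bits of pseudo-randomly selected pixels. This is a minimal sketch, not any standard's scheme; the function names and the key-seeded pixel selection are our own illustration:

```python
import random

def embed_lsb(pixels, bits, key=0):
    """Embed watermark `bits` into the least-significant bits of pixels
    chosen pseudo-randomly by `key` (spatial-domain toy example)."""
    out = list(pixels)
    positions = random.Random(key).sample(range(len(out)), len(bits))
    for pos, b in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | b  # overwrite the LSB only
    return out

def extract_lsb(pixels, n_bits, key=0):
    """Recover the watermark: regenerate the same positions from `key`
    and read back the least-significant bits."""
    positions = random.Random(key).sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

Because only the LSB of each selected pixel changes, the marked image differs from the original by at most one intensity level per pixel, which is what makes the mark invisible; transform-domain schemes trade this simplicity for robustness against compression.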

An attempt to eavesdrop on a quantum cryptographic channel reveals itself through the errors it inevitably introduces into the transmission. We investigate the relationship between the induced error rate and the maximum amount of information the eavesdropper can extract, both in the two-state B92 and the four-state BB84 quantum cryptographic protocols. In each case, the optimal eavesdropping method that on average yields the most information for a given error rate is explicitly constructed. Analysis is limited to eavesdropping strategies where each bit of the quantum transmission is attacked individually and independently from other bits. Subject to this restriction, however, we believe that all attacks not forbidden by physical laws are included. Unlike previous work, the eavesdropper's advantage is measured in terms of Renyi information, and with respect only to bits received error-free by Bob. This alters both the maximum extractable information and the optimal eavesdropping attack. The result can be used directly at the privacy amplification stage of the protocol to accomplish secure communication over a noisy channel.
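The error-versus-information trade-off is easiest to see in the crudest attack, intercept-resend, which is far from the optimal strategy constructed in the abstract but illustrates why eavesdropping inevitably induces errors. A minimal BB84 simulation (assumed names and structure are ours):

```python
import random

def intercept_resend_error_rate(n=100000):
    """Simulate the intercept-resend attack on BB84.

    Eve measures every photon in a random basis and resends her result.
    When her basis differs from Alice's (probability 1/2), the sifted bit
    Bob receives is wrong with probability 1/2, giving a 25% error rate.
    """
    errors = sifted = 0
    for _ in range(n):
        a_basis, a_bit = random.randint(0, 1), random.randint(0, 1)
        e_basis = random.randint(0, 1)
        # Eve gets the true bit only if she guessed Alice's basis.
        e_bit = a_bit if e_basis == a_basis else random.randint(0, 1)
        b_basis = random.randint(0, 1)
        if b_basis != a_basis:
            continue  # discarded during public basis sifting
        # Bob gets Eve's bit only if his basis matches the resent photon's.
        b_bit = e_bit if b_basis == e_basis else random.randint(0, 1)
        sifted += 1
        errors += (b_bit != a_bit)
    return errors / sifted

rate = intercept_resend_error_rate()
assert abs(rate - 0.25) < 0.02
```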

A crucial phase in performing the quantum cryptographic protocol is the error-correction phase, which commences after the raw bit transmission and consists of publicly disclosing the parity of blocks of bits and discarding bits until almost all errors are removed. We develop a formal mathematical model for analyzing the procedure based on specific assumptions about the length of the blocks. We further derive a simple analytical bound which can serve as an estimate for the cost of performing the error correction. Finally, we present a simulation model developed to test the analytical derivations and conclude that there is very good agreement between the simulations and the bound for moderate error rates.
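The core step of such parity-based reconciliation is a binary search within a block whose parities disagree, as in the BINARY subroutine of Cassidy-style protocols such as Cascade. A minimal sketch (our own names and framing, not the paper's model):

```python
import random

def parity(bits, lo, hi):
    """Parity of the block bits[lo:hi] — the value disclosed publicly."""
    return sum(bits[lo:hi]) % 2

def locate_error(alice, bob, lo, hi):
    """Binary search for a single error in a block with mismatched parity.

    Each probe discloses one more parity bit, so locating one error in a
    block of size B leaks about log2(B) bits — the 'cost' the abstract's
    analytical bound estimates.
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid  # the error is in the left half
        else:
            lo = mid  # the error is in the right half
    return lo

alice = [random.randint(0, 1) for _ in range(16)]
bob = list(alice)
bob[5] ^= 1  # one transmission error
assert parity(alice, 0, 16) != parity(bob, 0, 16)  # block parities differ
pos = locate_error(alice, bob, 0, 16)
bob[pos] ^= 1  # flip the located bit
assert bob == alice
```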

We discuss three problems of practical quantum cryptography: continuous alignment, noise of the photon counters, and eavesdropping. We present a new self-balanced interferometric setup using Faraday mirrors. This phase-coding setup needs no alignment of the interferometer nor polarization control, and therefore considerably facilitates the experiment. Moreover, it features excellent fringe visibility. The importance of the detector noise is illustrated and means of reducing it are presented. Maximal distances and bit rates achievable with present-day technologies are evaluated. Finally, practical eavesdropping strategies that take advantage of the optical fiber and could open a gate into the transmitter's or receiver's offices are discussed.

High speed optical communications systems are evolving rapidly. Commercial systems achieve high aggregate data rates utilizing wavelength division multiplexing, where multiple wavelength channels carry information at electronic rates, typically 2.5 Gb/s. Data encryption in these systems will most likely be implemented electronically. However, future systems may also utilize time division multiple access (TDMA) schemes, and technologies for 100 Gb/s, single-stream TDMA networks are currently being developed. These high speed TDMA networks will rely on all-optical switches and processors to interface the high-speed electronics in the users' nodes to the ultra-high-speed optical data bus. Data encryption in these networks may need to be implemented using optical logic gates. Straightforward duplication of electronic encryption circuits using optical logic gates is not feasible because optical logic gates have low fan-out, require high optical powers, are difficult to synchronize and have high latency. In this paper, we propose a high-speed electro-optic scheme for reconfigurable feedback shift registers (RFSRs) that relies upon electronic encryption circuits to reconfigure a sequence of optical logic gates and which makes use of the latency in the optical gates as memory. We show that, for linear RFSRs, the low number of optical gates is not a drawback and that the period of the sequences is generally very large. Non-linear feedforward functions, such as all-optical bit swapping, may also be introduced to improve the pseudo-random properties of the sequences.
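The linear-RFSR behavior described above reduces, at the logic level, to an ordinary linear feedback shift register; the optical gates' latency plays the role of the register stages. A minimal software sketch of that underlying logic (tap positions and update rule are our illustrative choice, not the paper's optical design):

```python
def lfsr_stream(state, taps, nbits):
    """Fibonacci-style LFSR: output the last stage, feed back the XOR of
    the tapped stages. With a primitive feedback polynomial the period is
    maximal, 2**len(state) - 1, which is why even a short register yields
    very long pseudo-random sequences."""
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]  # shift right, insert feedback bit
    return out

# 4-stage register with taps giving the maximal period 2**4 - 1 = 15
seq = lfsr_stream([1, 0, 0, 0], taps=[0, 3], nbits=30)
assert seq[:15] == seq[15:30]  # the sequence repeats with period 15
```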

Direct sequence spread spectrum (DS/SS) is often employed for non-centralized multiple access, and for an added degree of system security, in coherent optical communications. With advances in coherent optical processing using Spatial-Spectral Holographic (SSH) devices, it is possible to implement an all-optical encoder/decoder for M-ary phase shift keying DS/SS. We address the issue of code selection for length-N code sequences drawn from an alphabet consisting of M unit-length phasors and designed for a multiuser environment. The codes are selected using very fast simulated re-annealing based on the criterion of minimizing the periodic auto- and cross-correlation sidelobes. The all-optical implementation is discussed.
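The cost criterion driving the code search is the periodic correlation sidelobe level. The sketch below shows only that criterion, evaluated on a known perfect code, not the simulated re-annealing search itself:

```python
def periodic_autocorrelation(code):
    """|R(tau)| for all cyclic shifts tau of a unit-modulus phase code.

    code is a list of complex phasors exp(j*2*pi*k/M). Good spreading
    codes have small sidelobes (tau != 0) relative to the peak R(0) = N.
    """
    n = len(code)
    return [abs(sum(code[i] * code[(i + tau) % n].conjugate()
                    for i in range(n)))
            for tau in range(n)]

# Length-4 Frank code (M = 2): all periodic autocorrelation sidelobes vanish.
code = [1 + 0j, 1 + 0j, 1 + 0j, -1 + 0j]
R = periodic_autocorrelation(code)
assert abs(R[0] - 4) < 1e-9      # in-phase peak equals N
assert all(r < 1e-9 for r in R[1:])  # zero sidelobes
```

For a multiuser code set the same machinery applies to cross-correlations between pairs of codes, and the annealing objective sums the worst sidelobes over all pairs and shifts.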

This paper describes a family of interconnect systems based on optical time division multiplexing (OTDM) for use as system area networks (SANs). The low latency and high bisection bandwidth attainable with this type of system are ideally suited to scalable multiprocessor applications. The ability to rapidly switch the transmission channels of these OTDM systems gives them the logical appearance of a fully connected crossbar switch. However, unlike large electronic crossbar switches, which are difficult to control, the synchronicity inherent in OTDM allows a class of efficient, distributed arbitration and control algorithms with incremental extension capability. This paper describes the optical and electronic architecture of an OTDM interconnect system and discusses implementation, security and fault tolerance issues.

Parallel architectures and algorithms offer a solution to the system bottleneck arising from the need to encrypt very large amounts of data without compromising security. In this respect the use of cellular automata (CA), with their parallel, simple, regular and modular structure, is very promising. Cryptosystems proposed so far based on CA use iterations of binary, 1D CA. We extend the block-cipher algorithm, based on the backward and forward iteration of so-called 'toggle' CA rules, to two dimensions. Higher-dimensional CA have more complex behavior and in general their inversion is an NP problem; they are therefore potentially resistant to cryptanalytic attacks. Other advantages are a substantial increase in the speed of the algorithm, in parallel with the increase in the block size and key length. The algorithm allows customized block and key sizes. It uses two independent keys, each of them sufficient for secure encryption. This allows one of the keys to be replaced by a time stamp, user identification information or other relevant information. Hardware implementations of the algorithm are considered.
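The backward/forward iteration idea can be sketched in the 1D case that the paper generalizes. A 'toggle' rule XORs one neighborhood cell into the output, which makes the preimage computable cell by cell: encryption builds a preimage seeded by key bits, decryption is one ordinary forward step. This is a minimal Gutowitz-style illustration with our own choice of nonlinear function, not the paper's 2D construction:

```python
def g(a, b):
    # Nonlinear part of the rule; any boolean function works here because
    # the XOR with the leftmost cell is what makes the rule a toggle rule.
    return (a & b) ^ b ^ 1

def forward(state):
    """One forward CA step (decryption): t_i = s_i XOR g(s_{i+1}, s_{i+2}).
    The configuration shrinks by 2 cells per step."""
    return [state[i] ^ g(state[i + 1], state[i + 2])
            for i in range(len(state) - 2)]

def backward(block, key_bits):
    """One backward step (encryption): construct the unique preimage whose
    two rightmost cells are the given key bits, solving right to left."""
    s = [0] * len(block) + list(key_bits)
    for i in range(len(block) - 1, -1, -1):
        s[i] = block[i] ^ g(s[i + 1], s[i + 2])
    return s

plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = backward(plaintext, (1, 0))   # round 1, first key pair
cipher = backward(cipher, (0, 1))      # round 2, second key pair
assert forward(forward(cipher)) == plaintext
```

Each backward round grows the block by two cells and absorbs fresh key material; forward iteration is trivially parallel across cells, which is the speed advantage the abstract points to.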

Spectrally encoded code division multiple access (CDMA) systems using a complementary bipolar mechanism have been widely considered the most suitable way to implement CDMA on optical fiber systems. In this paper, we present a proposed bipolar complementary spectrally encoded optical CDMA system based on a cascaded Mach-Zehnder encoder chain filter and give an analysis of the capacity limit of such systems.

1D photonic crystals offer extraordinarily low group velocities and high dispersion near their bandgaps. They therefore have an immediate application in CDMA and optical encoding. In our approach we have chosen photonic crystal implementations using long fiber optic Bragg gratings. This system has been numerically investigated and modeled for experimentally realizable structures. We have developed an experimental technique to measure the group delay produced by 1D photonic bandgap structures in the time domain. At the core of this technique we used a tunable optical pulse source. It consisted of a 1.55 micrometer, 160 fs mode-locked fiber laser with a bandwidth of 50 nm. One-nanometer spectral slices were taken from this laser to obtain tunable picosecond pulses. The group delay was measured using a commercial autocorrelator as an ultrafast optical detector and cross-correlator. This technique allowed us to measure the effective dispersion incurred by optical pulses propagating through the grating. A maximum group delay of 10 ps was measured for a 3 mm fiber Bragg grating. We have experimentally demonstrated the sensitivity of the group delay as a function of wavelength in the vicinity of the grating bandgap. Our experimental results were quantitatively confirmed by both theoretical and simulation predictions. We also performed experimental studies on cascaded fiber gratings and showed that the group delay was additive for the two gratings measured. The use of 1D photonic crystals for pulse shaping and coding applications is important because of their inherent flexibility. By going to the other side of the photonic bandgap it is possible to make conjugate gratings, reversing the dispersion and performing the task of decoding. Transmission losses and dispersion compensation in this spread spectrum approach will also be presented.

Optical code division multiple access is one of the candidates for future broadband optical networks. We investigate here the performance of such systems from the security point of view. We demonstrate that 2D coding can enhance the security aspect while still maintaining the multiuser ability. Analysis of the eavesdropping activity in the presence of multiuser interference is presented and ways to further strengthen the security aspects are discussed.
