Abstract:
Haptic technology, or haptics, is a feedback technology that takes advantage of a user's sense of touch by applying forces, vibrations, and/or motions to the user. This mechanical stimulation may be used to assist in the creation of virtual objects (objects existing only in a computer simulation), to control such virtual objects, and to enhance the remote control of machines and devices (teleoperators). It has been described as "doing for the sense of touch what computer graphics does for vision". However, computer scientists have had great difficulty transferring this basic understanding of touch into their virtual reality systems without error. This seminar describes how haptic technology works and what may be expected of it in the future.

How haptic technology works
In the real world, people receive and disseminate information in three-dimensional space. In a virtual world, the user can access information by imitating that three-dimensional space. To incorporate the sense of touch (the haptic sense), a device is created that allows the user to interact with a computer by receiving tactile feedback. A haptic device achieves this feedback by applying a degree of opposing force to the user along the x, y, and z axes. While some haptic software now exists, much of the design is algorithmic.
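As a rough sketch of that opposing-force idea, the snippet below renders a one-sided virtual wall: while the probe is in free space no force is applied, and once it penetrates the wall a spring-like force pushes back along the x axis. The wall position, stiffness value, and function name are illustrative assumptions, not taken from any particular device.

```python
import numpy as np

def opposing_force(position, wall_x=0.0, stiffness=500.0):
    """Force (N) a haptic device would apply along the x, y, z axes.

    Illustrative model: a rigid virtual wall occupies x < wall_x.
    stiffness (N/m) and wall_x are assumed values, not device specs.
    """
    x, _, _ = position
    penetration = wall_x - x                 # metres inside the wall
    if penetration <= 0.0:
        return np.zeros(3)                   # free space: no opposing force
    return np.array([stiffness * penetration, 0.0, 0.0])

# Probe 2 mm inside the wall -> 1 N restoring force along +x
print(opposing_force((-0.002, 0.0, 0.0)))    # [1. 0. 0.]
```

The same spring-like response, generalized to arbitrary geometry, is what the force-rendering algorithms discussed later compute each servo cycle.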
However, creating a force feedback device still requires a great deal of mathematics and engineering, as well as computer graphics and programming skills. In the Force Feedback Data Glove [http://www.caip.rutgers.edu/~bouzit/lrp/glove.html], for example, the principle of force feedback is simple, as its engineers state: it consists of opposing the movement of the hand in the same way that an object squeezed between the fingers resists the movement of the latter. In the absence of a real object, the glove must be capable of recreating the forces applied by the object on the human hand with the same intensity and the same direction. The mechanical structure is made up of five fingers and has 19 degrees of freedom, five of which are passive. It adapts to different sizes of human hands and has a physical stop to protect the operator. The glove is controlled by 14 continuous-current torque motors with a maximal torque of 1.4 Nm. The global control scheme has two command loops: the human is considered a displacement generator, while the glove is considered a force generator.
Another example of multiple disciplines contributing to a haptic device is the research at MIT to create a simulator for a mastoidectomy [http://www.crs4.it/vic/data/papers/presence-2003.pdf]. A real-time haptic and visual implementation of a bone-cutting burr is being developed. The current implementation, operating directly on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time feedback on a low-end multi-processing PC platform [13]. In the experiment the simulator performed well, but the researchers would like data from actual (non-simulated) drilling samples.
Advantages and Disadvantages of haptic technology
Advantages include that communication is centered on touch and that the digital world can behave like the real world. When objects can be captured, manipulated, modified, and rescaled digitally, working time is reduced. Medical simulators allow would-be surgeons to practice digitally, gaining confidence in a procedure before working on live patients. With haptic hardware and software, a designer can maneuver a part and feel the result, as if handling the physical object.
Disadvantages include debugging issues: these are complicated because they involve real-time data analysis. Links in telemedicine must have 0% fault rates for extended periods of time. The precision of touch requires a great deal of advance design. And with only a sense of touch, haptic interfaces cannot deliver warnings.
Future
All of the research studies and papers state that more research is needed. From the original gaming applications, much has come about in less than 10 years, and it is exciting to think what might happen in the next 10. Researchers at SUNY have completed experiments in which they transmitted, from one person to another over the Internet, the sensation of touching a hard or soft object. Medical researchers at Rutgers filed a patent application for a new PC-based virtual reality system that provides stroke patients with virtual hands. Artists and researchers at USC have developed a technology that lets visitors feel what a sculpture feels like at an art exhibit at a Haptic Museum. Virtual reality systems are also making headway into training for manned space operations. There is a call for papers for a March/April 2004 conference on Haptic Rendering: Beyond Visual Computing. Haptics presents new challenges for the development of novel data structures to encode shape and material properties, as well as new techniques for data processing, analysis, physical modeling, and visualization.

Haptics is the technology of adding the sensation of touch and feeling to computers.

A haptic device gives people a sense of touch with computer-generated environments, so that when virtual objects are touched, they seem real and tangible.

The aim is to understand and enable a compelling experience of presence, not limited to "being there" but extended to "being in touch" with remote or virtual surroundings.

Haptics is implemented through different types of interaction with a haptic device communicating with the computer. These interactions can be categorized by the different types of touch sensation a user can receive.

In order to complete the imitation of the real world, one should be able to interact with the environment and get feedback.

The user should be able to touch a virtual object and feel a response from it.

This feedback is called Haptic Feedback.

Haptics is the next important step towards the realistically simulated environments that have been envisioned by science fiction authors and futurists alike.
It has large potential for applications in critical fields as well as for leisurely pleasures.
Haptic devices must also be miniaturized so that they are lighter and simpler.

ABSTRACT
'Haptics' is a technology that adds the sense of touch to virtual environments. Users are given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics and some of the most commonly used haptic systems, such as the Phantom, the Cyberglove, the Novint Falcon, and similar devices. Following this, a description of how sensors and actuators are used for tracking the position and movement of haptic systems is provided.
The different types of force-rendering algorithms are discussed next, and the seminar explains the blocks in force rendering. A few applications of haptic systems are then taken up for discussion.
Chapter 2
INTRODUCTION
2. a) What is 'Haptics'?
Haptic technology refers to technology that interfaces the user with a virtual environment via the sense of touch by applying forces, vibrations, and/or motions to the user. This mechanical stimulation may be used to assist in the creation of virtual objects (objects existing only in a computer simulation), for control of such virtual objects, and to enhance the remote control of machines and devices (teleoperators). This emerging technology promises to have wide-reaching applications, as it already has in some fields. For example, haptic technology has made it possible to investigate in detail how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. These new research tools contribute to our understanding of how touch and its underlying brain functions work. Although haptic devices are capable of measuring bulk or reactive forces that are applied by the user, they should not be confused with touch or tactile sensors, which measure the pressure or force exerted by the user on the interface.
The term haptic originated from the Greek ἁπτικός (haptikos), meaning "pertaining to the sense of touch", and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning "to contact or touch".
2. b) History of Haptics
In the early 20th century, psychophysicists introduced the word haptics to label the subfield of their studies that addressed human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was much more complex and subtle than their initial naive hopes had suggested.
In time these two communities, one that sought to understand the human hand and one that aspired to create devices with dexterity inspired by human abilities found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks.
In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner. However, computer haptics uses a display technology through which objects can be physically palpated.
Chapter 3
WORKING OF HAPTIC SYSTEMS
3. a) Basic system configuration.

Basically, a haptic system consists of two parts, namely the human part and the machine part. In the figure shown above, the human part (left) senses and controls the position of the hand, while the machine part (right) exerts forces on the hand to simulate contact with a virtual object. Both systems are provided with the necessary sensors, processors, and actuators. In the human system, nerve receptors perform sensing, the brain performs processing, and muscles perform actuation of the motion of the hand; in the machine system, these functions are performed by encoders, a computer, and motors respectively.
3. b) Haptic Information
Basically, the haptic information provided by the system is a combination of (i) tactile information and (ii) kinesthetic information.
Tactile information refers to the information acquired by sensors connected to the skin of the human body, with particular reference to the spatial distribution of pressure, or more generally tractions, across the contact area.
For example when we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. This is actually a sort of tactile information. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands.
Kinesthetic information refers to the information acquired through the sensors in the joints.
Interaction forces are normally perceived through a combination of these two types of information.
3. c) Creation of Virtual environment (Virtual reality).
Virtual reality is the technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and the omnidirectional treadmill. The simulated environment can be similar to the real world, for example in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution, and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging, and data communication technologies become more powerful and cost-effective over time.
Virtual reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization have helped popularize the notion. The most successful use of virtual reality is the computer-generated 3-D simulator. Pilots use flight simulators, which are designed just like the cockpit of an airplane or helicopter. The screen in front of the pilot creates the virtual environment, and trainers outside the simulator command it to adopt different modes. Pilots are trained to control the plane in difficult situations and emergency landings, with the simulator providing the environment. These simulators cost millions of dollars.
Virtual reality games are used in much the same fashion. The player wears special gloves, headphones, goggles, a full-body suit, and special sensory input devices, and feels that he is in a real environment. The special goggles contain monitors for viewing, and the environment changes according to the movements of the player. These games are very expensive.
3. d) Haptic feedback
Virtual reality (VR) applications strive to simulate real or imaginary scenes with which users can interact and perceive the effects of their actions in real time. Ideally, the user would interact with the simulation via all five senses. However, today's typical VR applications rely on a smaller subset, typically vision, hearing, and more recently, touch.
Figure below shows the structure of a VR application incorporating visual, auditory, and haptic feedback.
The application's main elements are:
1) The simulation engine, responsible for computing the virtual environment's behavior over time;
2) Visual, auditory, and haptic rendering algorithms, which compute the virtual environment's graphic, sound, and force responses toward the user; and
3) Transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive.
The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio displays (computer speakers, headphones, and so on) and visual displays (for example a computer screen or head-mounted display). Whereas the audio and visual channels feature unidirectional information and energy flow (from the simulation engine toward the user), the haptic modality exchanges information and energy in two directions, from and toward the user. This bidirectionality is often referred to as the single most important feature of the haptic interaction modality.
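The three elements above, and the bidirectional haptic channel, can be sketched structurally. All class and method names below are hypothetical, chosen only to mirror the description; the haptic renderer uses a trivial one-dimensional wall at x = 0 with an assumed stiffness.

```python
class SimulationEngine:
    """Computes the virtual environment's behaviour over time."""
    def __init__(self):
        self.state = {"probe_x": 0.0}
    def update(self, probe_x, dt):
        self.state["probe_x"] = probe_x

class HapticRenderer:
    """Computes the force response toward the user (1-D wall at x = 0)."""
    def render(self, state, stiffness=300.0):
        depth = -state["probe_x"]                    # penetration into the wall
        return stiffness * depth if depth > 0 else 0.0

class HapticDevice:
    """Transducer: bidirectional, unlike the audio/visual channels."""
    def __init__(self, position=0.0):
        self.position = position
        self.last_force = 0.0
    def read_position(self):      # user -> simulation
        return self.position
    def apply_force(self, f):     # simulation -> user
        self.last_force = f

# One pass through the loop: the device reads the user's position, the
# engine updates, the renderer computes the force, the device applies it.
engine, renderer, device = SimulationEngine(), HapticRenderer(), HapticDevice(-0.01)
engine.update(device.read_position(), dt=0.001)
device.apply_force(renderer.render(engine.state))
print(device.last_force)   # 300 N/m * 0.01 m = 3.0 N
```

The point of the sketch is the asymmetry: a visual renderer only sends output toward the user, while the haptic device both reads position and receives force on every step.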
Chapter 4
HAPTIC DEVICES
A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body's movement, such as a joystick or data glove. Using haptic devices, the user can not only feed information to the computer but also receive information from it in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.
Haptic devices can be broadly classified into
4. a) Virtual reality/ Telerobotics based devices
i) Exoskeletons and Stationary device
ii) Gloves and wearable devices
iii) Point-sources and Specific task devices
iv) Locomotion Interfaces
4. b) Feedback devices
i) Force feedback devices
ii) Tactile displays
4. a. i) Exoskeletons and Stationary devices
The term exoskeleton refers to the hard outer shell that exists on many creatures. In a technical sense, the word refers to a system that covers the user or that the user has to wear. Current haptic devices classified as exoskeletons are large and immobile systems to which the user must attach him- or herself.
4. a. ii) Gloves and wearable devices
These are smaller, exoskeleton-like devices that are often, but not always, attached to a large exoskeleton or other immobile device. Since the goal of building a haptic system is to immerse the user in the virtual or remote environment, it is important to leave as small a reminder of the user's actual environment as possible. The drawback of wearable systems is that, since the weight and size of the devices are a concern, they have a more limited set of capabilities.
4. a. iii) Point sources and specific task devices
This is a class of devices that are very specialized for performing a particular given task. Designing a device to perform a single type of task restricts the application of that device to a much smaller number of functions. However it allows the designer to focus the device to perform its task extremely well. These task devices have two general forms, single point of interface devices and specific task devices.
4. a. iv) Locomotion interfaces
An interesting application of haptic feedback is full-body force feedback, known as locomotion interfaces. Locomotion interfaces are force-restricting devices that, in a confined space, simulate unrestrained mobility such as walking and running in virtual reality. These interfaces overcome the limitations of joysticks for maneuvering, of whole-body motion platforms, in which the user is seated and does not expend energy, and of room environments, where only short distances can be traversed.
4. b. i) Force feedback devices
Force feedback input devices are usually, but not exclusively, connected to computer systems and are designed to apply forces to simulate the sensation of weight and resistance in order to provide information to the user. As such, feedback hardware represents a more sophisticated form of input/output device, complementing others such as keyboards, mice, or trackers. Input from the user is in the form of the position of the hand or other body segment, whereas feedback from the computer or other device is in the form of force or position. These devices translate digital information into physical sensations.
4. b. ii) Tactile display devices
Simulation tasks involving active exploration or delicate manipulation of a virtual environment require additional feedback data that presents an object's surface geometry or texture. Such feedback is provided by tactile feedback systems, or tactile display devices. Tactile systems differ from haptic systems in the scale of the forces being generated: while haptic interfaces present the shape, weight, or compliance of an object, tactile interfaces present the surface properties of an object, such as its surface texture. Tactile feedback applies sensation to the skin.
Chapter 5
COMMONLY USED HAPTIC
INTERFACING DEVICES
5. a) PHANTOM

It is a haptic interfacing device developed by the company SensAble Technologies. It is primarily used for providing a 3D touch to virtual objects. It is a very high-resolution 6-DOF device in which the user holds the end of a motor-controlled jointed arm. It provides a programmable sense of touch that allows the user to feel the texture and shape of a virtual object with a very high degree of realism. One of its key features is that it can model free-floating three-dimensional objects.

The figure above shows the contact display design of a Phantom device. When the user puts a finger in the thimble connected to the metal arm of the Phantom and moves it, he can really feel the shape and size of the virtual three-dimensional object already programmed into the computer. The virtual three-dimensional space in which the Phantom operates is called the haptic scene, a collection of separate haptic objects with different behaviors and properties. The DC motor assembly converts the movement of the finger into a corresponding virtual movement.
5. b) Cyberglove

The principle of a Cyberglove is simple. It consists of opposing the movement of the hand in the same way that an object squeezed between the fingers resists the movement of the latter. The glove must therefore be capable, in the absence of a real object, of recreating the forces applied by the object on the human hand with (1) the same intensity and (2) the same direction. These two conditions can be simplified by requiring the glove to apply a torque equal to that at the interphalangian joint.
The solution that we have chosen uses a mechanical structure with three passive joints which, with the interphalangian joint, make up a flat four-bar closed-link mechanism. This solution uses cables placed in the interior of the four-bar mechanism, following a trajectory identical to that of the extensor tendons which, by nature, oppose the movement of the flexor tendons in order to harmonize the movement of the fingers. Among the advantages of this structure one can cite:
• Allows 4 DOF for each finger
• Adapts to different sizes of fingers
• Located on the back of the hand
• Applies different forces on each phalanx (with the possibility of applying a lateral force on the fingertip by motorizing the abduction/adduction joint)
• Measures finger angular flexion (the measures of the joint angles are independent and can have good resolution, given the long paths traveled by the cables when the fingers close)
5. b. i) Mechanical structure of a Cyberglove

The glove is made up of five fingers and has 19 degrees of freedom, 5 of which are passive. Each finger is made up of a passive abduction joint, which links it to the base (palm), and of 9 rotoid joints which, with the three interphalangian joints, make up 3 closed-link four-bar mechanisms with 1 degree of freedom each.
The structure of the thumb is composed of only two closed links, for 3 DOF, of which one is passive. The segments of the glove are made of aluminum and can withstand high loads; their total weight does not surpass 350 grams. The length of the segments is proportional to the length of the phalanxes. All of the joints are mounted on miniature ball bearings in order to reduce friction.
The mechanical structure offers two essential advantages. The first is the ease with which it adapts to different sizes of the human hand; lateral adjustment is also provided in order to adapt the interval between the fingers at the palm. The second advantage is the presence of physical stops in the structure, which offer complete security to the operator.
The force sensor is placed on the inside of a fixed support on the upper part of the phalanx. The sensor is made up of a steel strip onto which a strain gauge is glued. The position sensors used to measure cable displacement are incremental optical encoders offering an average theoretical resolution of 0.1 degree for the finger joints.
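The quoted encoder resolution translates directly from counts to joint angle. The helper below is a hypothetical illustration of that conversion (the glove's actual firmware interface is not described in the text):

```python
def counts_to_angle(counts, resolution_deg=0.1):
    """Joint angle (degrees) from incremental-encoder counts.

    resolution_deg = 0.1 deg/count is the average theoretical
    resolution quoted for the finger joints; the function itself
    is an illustrative sketch, not the glove's real interface.
    """
    return counts * resolution_deg

print(round(counts_to_angle(900), 1))   # 90.0 degrees of flexion
```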
5. b. ii) Control of Cyberglove

The glove is controlled by 14 continuous-current torque motors which can develop a maximal torque of 1.4 Nm and a continuous torque of 0.12 Nm. On each motor a pulley with an 8.5 mm radius is fixed, onto which the cable is wound. The force that a motor can continuously exert on the cable is thus about 14 N, a value sufficient to oppose the movement of the finger. The electronic interface of the force feedback data glove is made of a PC with several acquisition cards. The global control scheme is given in the figure shown below. One can distinguish two command loops: an internal loop, which corresponds to classic force control with constant gains, and an external loop, which integrates the model of deformation of the virtual object in contact with the fingers. In this scheme, the action of the human on the position of the finger joints is taken into account by the two control loops. The human is considered a displacement generator, while the glove is considered a force generator.
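The cable force follows from the torque-to-force relation for a pulley, F = τ / r. As a quick check with the values from the text (note that the ≈14 N figure corresponds to the 0.12 Nm continuous torque on the 8.5 mm pulley):

```python
def cable_force(torque_nm, pulley_radius_m):
    """Force (N) a motor torque transmits to a cable on a pulley: F = tau / r."""
    return torque_nm / pulley_radius_m

# Continuous torque 0.12 Nm on an 8.5 mm radius pulley
print(round(cable_force(0.12, 0.0085), 1))   # 14.1 N, matching the ~14 N quoted above
```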

Chapter 6
HAPTIC RENDERING
6. a) Principle of haptic interface

As illustrated in the figure given above, haptic interaction occurs at an interaction tool of a haptic interface that mechanically couples two controlled dynamical systems: the haptic interface with a computer, and the human user with a central nervous system. The two systems are exactly symmetrical in structure and information: they sense the environment, make decisions about control actions, and provide mechanical energy to the interaction tool through motions.
6. b) Characteristics commonly considered desirable for haptic interface devices
1) Low back-drive inertia and friction;
2) Minimal constraints on motion imposed by the device kinematics so free motion feels free;
3) Symmetric inertia, friction, stiffness, and resonant frequency properties (thereby regularizing the device so users don't have to unconsciously compensate for parasitic forces);
4) Balanced range, resolution, and bandwidth of position sensing and force reflection; and
5) Proper ergonomics that let the human operator focus when wearing or manipulating the haptic interface, since pain, or even discomfort, can distract the user, reducing overall performance.
6. c) Creation of an AVATAR
An avatar is the virtual representation of the haptic interface through which the user physically interacts with the virtual environment. Clearly, the choice of avatar depends on what is being simulated and on the haptic device's capabilities. The operator controls the avatar's position inside the virtual environment. Contact between the avatar and the virtual environment sets off action and reaction forces, which are regulated by the avatar's geometry and the type of contact it supports.
Within a given application the user might choose among different avatars. For example, a surgical tool can be treated as a volumetric object exchanging forces and positions with the user in a 6D space, or as a pure point representing the tool's tip, exchanging forces and positions in 3D space.
6. d) System architecture for haptic rendering

Haptic-rendering algorithms compute the correct interaction forces between the haptic interface representation inside the virtual environment and the virtual objects populating the environment. Moreover, haptic-rendering algorithms ensure that the haptic device correctly renders such forces on the human operator. Several components compose typical haptic-rendering algorithms; we identify three main blocks, illustrated in the figure shown above.
Collision-detection algorithms detect collisions between objects and avatars in the virtual environment and yield information about where, when, and ideally to what extent collisions (penetrations, indentations, contact area, and so on) have occurred.
Force-response algorithms compute the interaction force between avatars and virtual objects when a collision is detected. This force approximates as closely as possible the contact forces that would normally arise during contact between real objects. Force-response algorithms typically operate on the avatars' positions, the positions of all objects in the virtual environment, and the collision state between avatars and virtual objects. Their return values are normally force and torque vectors that are applied at the device-body interface. Hardware limitations prevent haptic devices from applying the exact force computed by the force-response algorithms to the user.
Control algorithms command the haptic device in a way that minimizes the error between ideal and applicable forces. The discrete-time nature of the haptic-rendering algorithms often makes this difficult, as we explain later. Desired force and torque vectors computed by the force-response algorithms feed the control algorithms, whose return values are the actual force and torque vectors that will be commanded to the haptic device.
A typical haptic loop consists of the following sequence of events:
1) Low-level control algorithms sample the position sensors at the haptic interface device joints.
2) These control algorithms combine the information collected from each sensor to obtain the position of the device-body interface in Cartesian space, that is, the avatar's position inside the virtual environment.
3) The collision-detection algorithm uses position information to find collisions between objects and avatars and report the resulting degree of penetration.
4) The force-response algorithm computes interaction forces between avatars and virtual objects involved in a collision.
5) The force-response algorithm sends interaction forces to the control algorithms, which apply them on the operator through the haptic device while maintaining a stable overall behavior.
The simulation engine then uses the same interaction forces to compute their effect on objects in the virtual environment. Although there are no firm rules about how frequently the algorithms must repeat these computations, a 1-kHz servo rate is common. This rate seems to be a subjectively acceptable compromise, permitting presentation of reasonably complex objects with reasonable stiffness. Higher servo rates can provide crisper contact and texture sensations, but only at the expense of reduced scene complexity (or more capable computers).
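The five steps of the loop can be sketched compactly for the simplest case: a point avatar against a spherical virtual object, with a spring-like contact force and a clamp standing in for the control step. All names, the sphere geometry, the stiffness, and the force limit are illustrative assumptions.

```python
import numpy as np

SERVO_RATE = 1000          # 1-kHz servo rate, as mentioned above
CENTER, RADIUS = np.array([0.0, 0.0, 0.0]), 0.05   # virtual sphere (assumed)
STIFFNESS = 400.0          # N/m, assumed contact stiffness
FORCE_LIMIT = 5.0          # N, hardware cap the control step must respect

def joint_to_cartesian(joint_samples):
    """Steps 1-2: low-level control turns sensor samples into the
    avatar's Cartesian position (identity map here for simplicity)."""
    return np.asarray(joint_samples, dtype=float)

def detect_collision(avatar):
    """Step 3: penetration depth of the point avatar into the sphere
    (positive means the avatar is inside the object)."""
    return RADIUS - np.linalg.norm(avatar - CENTER)

def force_response(avatar, depth):
    """Step 4: spring-like contact force along the surface normal."""
    normal = (avatar - CENTER) / np.linalg.norm(avatar - CENTER)
    return STIFFNESS * depth * normal

def control(force):
    """Step 5: clamp to what the device can actually apply."""
    mag = np.linalg.norm(force)
    return force if mag <= FORCE_LIMIT else force * (FORCE_LIMIT / mag)

# One servo cycle, run every 1/SERVO_RATE seconds:
avatar = joint_to_cartesian([0.04, 0.0, 0.0])   # 1 cm inside the sphere
depth = detect_collision(avatar)
force = control(force_response(avatar, depth)) if depth > 0 else np.zeros(3)
print(force)    # ~[4. 0. 0.]: 400 N/m * 0.01 m along +x, under the 5 N cap
```

In a real system the collision and force computations operate on arbitrary meshes or voxel data, but the cycle structure, sample, collide, respond, control, is the same.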
6. e) Computing contact-response forces
Humans perceive contact with real objects through sensors (mechanoreceptors) located in their skin, joints, tendons, and muscles. We make a simple distinction between the information these two types of sensors can acquire, i.e. tactile information and kinesthetic information. A tool-based interaction paradigm provides a convenient simplification, because the system need only render forces resulting from contact between the tool's avatar and objects in the environment. Thus, haptic interfaces frequently use a tool handle as the physical interface for the user.
To provide a haptic simulation experience, we've designed our systems to recreate the contact forces a user would perceive when touching a real object. The haptic interfaces measure the user's position to recognize if and when contacts occur, and to collect the information needed to determine the correct interaction force. Although determining user motion is easy, determining appropriate display forces is a complex process and a subject of much research. Current haptic technology effectively simulates interaction forces for simple cases, but is limited when tactile feedback is involved.
Compliant object response modeling adds a dimension of complexity because of non negligible deformations, the potential for self-collision, and the general complexity of modeling potentially large and varying areas of contact.
We distinguish between two types of forces: forces due to object geometry and forces due to object surface properties, such as texture and friction.
6. f) Geometry-dependant force-rendering algorithms
The first type of force-rendering algorithm aspires to recreate the force interaction a user would feel when touching a frictionless, textureless object. Such interaction forces depend on the geometry of the object being touched, its compliance, and the geometry of the avatar representing the haptic interface inside the virtual environment.
Although exceptions exist, the number of DOF necessary to describe the interaction forces between an avatar and a virtual object typically matches the actuated DOF of the haptic device being used. Thus, for simpler devices, such as a 1-DOF force-reflecting gripper, the avatar consists of a couple of points that can only move and exchange forces along the line connecting them. For this device type, the force-rendering algorithm computes a simple 1-DOF squeeze force between the index finger and the thumb, similar to the force you would feel when cutting an object with scissors. When using a 6-DOF haptic device, the avatar can be an object of any shape. In this case, the force-rendering algorithm computes all the interaction forces between the object and the virtual environment and applies the resultant force and torque vectors to the user through the haptic device. We group current force-rendering algorithms by the number of DOF necessary to describe the interaction force being rendered.
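The 1-DOF gripper case above can be illustrated with a simple spring model. The function below is a hypothetical sketch, assuming the squeeze force grows in proportion to how far the finger-thumb gap compresses the virtual object; the parameter values in the usage are illustrative.

```python
def squeeze_force(gap, object_width, stiffness):
    """1-DOF force for a force-reflecting gripper: when the finger-thumb
    gap becomes smaller than the virtual object's width, the object
    resists with a force proportional to how much it is compressed."""
    compression = object_width - gap
    return stiffness * compression if compression > 0 else 0.0
```

For example, squeezing a 5 cm object down to a 3 cm gap with a 500 N/m stiffness yields a 10 N resisting force, while a gap wider than the object yields no force at all.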
6. g) Surface property-dependent force-rendering algorithms
All real surfaces contain tiny irregularities or indentations. Obviously, it's impossible to distinguish each irregularity when sliding a finger over an object. However, tactile sensors in the human skin can feel their combined effects when rubbed against a real surface.
Micro-irregularities act as obstructions when two surfaces slide against each other and generate forces tangential to the surface and opposite to motion. Friction, when viewed at the microscopic level, is a complicated phenomenon. Nevertheless, simple empirical models exist, such as the one Leonardo da Vinci proposed and Charles Augustin de Coulomb later developed in 1785. Such models served as a basis for the simpler frictional models in 3-DOF rendering. Researchers outside the haptic community have developed many models to render friction with higher accuracy, for example, the Karnopp model for modeling stick-slip friction, the Bristle model, and the reset integrator model. Higher accuracy, however, sacrifices speed, a critical factor in real-time applications. Any choice of modeling technique must consider this trade-off. Keeping this trade-off in mind, researchers have developed more accurate haptic-rendering algorithms for friction. A texture or pattern generally covers real surfaces. Researchers have proposed various techniques for rendering the forces that touching such textures generates.
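The Coulomb model mentioned above is simple enough to state in a few lines. The sketch below assumes a constant friction coefficient and, for simplicity, returns zero force when there is no sliding; a full stick-slip model such as Karnopp's would handle that stiction phase explicitly. The coefficient value is illustrative.

```python
def coulomb_friction(normal_force, tangential_velocity, mu=0.3):
    """Coulomb friction: magnitude mu * N, direction opposing the
    sliding motion. Returns 0 when the surfaces are not sliding."""
    if tangential_velocity == 0.0:
        return 0.0
    sign = 1.0 if tangential_velocity > 0 else -1.0
    return -sign * mu * normal_force
```

The model's appeal for real-time haptics is exactly its cost: one multiply and a sign test per servo tick, versus the state integration the more accurate models require.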
6. h) Haptic interaction techniques
Many of these techniques are inspired by analogous techniques in modern computer graphics. In computer graphics, texture mapping adds realism to computer-generated scenes by projecting a bitmap image onto surfaces being rendered. The same can be done haptically. Minsky first proposed haptic texture mapping for 2D and later extended her work to 3D scenes. Existing haptic rendering techniques are currently based upon two main principles: "point-interaction" or "ray-based rendering".
In point interactions, a single point, usually the distal point of a probe, thimble or stylus employed for direct interaction with the user, is employed in the simulation of collisions. The point penetrates the virtual objects, and the depth of indentation is calculated between the current point and a point on the surface of the object. Forces are then generated according to physical models, such as spring stiffness or a spring-damper model.
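The spring-damper model above can be sketched directly: the spring term resists the depth of indentation and the damper term resists the speed of penetration. The gains below are illustrative assumptions, not values from any specific system.

```python
def spring_damper_force(penetration_depth, penetration_velocity,
                        k=1000.0, b=5.0):
    """Point-interaction contact force, F = k*d + b*v: spring term
    proportional to indentation depth, damper term proportional to
    penetration velocity. Non-positive depth means no contact."""
    if penetration_depth <= 0.0:
        return 0.0
    return k * penetration_depth + b * penetration_velocity
```

A pure spring (b = 0) reproduces the simple stiffness model; the damping term is what lets the rendered contact feel less "springy" and helps keep the interaction stable.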
In ray-based rendering, the user interface mechanism (for example, a probe) is modeled in the virtual environment as a finite ray. Orientation is thus taken into account, and collisions are determined between the simulated probe and virtual objects. Collision detection algorithms return the intersection point between the ray and the surface of the simulated object.
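The core geometric step of ray-based rendering, finding where the probe ray meets an object's surface, can be sketched for the simplest case of a planar surface. This is a standard ray-plane intersection, written here without any external math library; all names are illustrative.

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a probe ray hits a planar surface, or
    None if the ray is parallel to the plane or points away from it.
    Arguments are 3-tuples; the vectors need not be normalized."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the surface
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = dot(diff, plane_normal) / denom
    if t < 0:
        return None                      # surface behind the probe
    return tuple(o + t * d for o, d in zip(origin, direction))
```

A real ray-based renderer repeats this test against the triangles of a mesh and keeps the nearest hit, but the per-surface computation is exactly this.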

Chapter 7
APPLICATIONS
The following are the major applications of haptic systems.
7. a) Graphical user interfaces.
Video game makers have been early adopters of passive haptics, which takes advantage of vibrating joysticks, controllers and steering wheels to reinforce on-screen activity. But future video games will enable players to feel and manipulate virtual solids, fluids, tools and avatars. The Novint Falcon haptics controller is already making this promise a reality. The 3-D force feedback controller allows you to tell the difference between a pistol report and a shotgun blast, or to feel the resistance of a longbow's string as you pull back an arrow.

Graphical user interfaces, like those that define Windows and Mac operating environments, will also benefit greatly from haptic interactions. Imagine being able to feel graphic buttons and receive force feedback as you depress a button. Some touchscreen manufacturers are already experimenting with this technology. Nokia phone designers have perfected a tactile touchscreen that makes on-screen buttons behave as if they were real buttons. When a user presses the button, he or she feels movement in and movement out, and hears an audible click. Nokia engineers accomplished this by placing two small piezoelectric sensor pads under the screen and designing the screen so it could move slightly when pressed. Movement and sound are synchronized perfectly to simulate real button manipulation.
7. b) Surgical Simulation and Medical Training.

Various haptic interfaces for medical simulation may prove especially useful for training of minimally invasive procedures (laparoscopy/interventional radiology) and remote surgery using teleoperators. In the future, expert surgeons may work from a central workstation, performing operations in various locations, with machine setup and patient preparation performed by local nursing staff. Rather than traveling to an operating room, the surgeon instead becomes a telepresence. A particular advantage of this type of work is that the surgeon can perform many more operations of a similar type, and with less fatigue. It is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for his patients. Haptic interfaces are also used in rehabilitation robotics.
In ophthalmology, "haptic" refers to a supporting spring, two of which hold an artificial lens within the lens capsule (after surgical removal of cataracts).
A 'Virtual Haptic Back' (VHB) is being successfully integrated in the curriculum of students at the Ohio University College of Osteopathic Medicine. Research indicates that VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems via touch). The VHB simulates the contour and compliance (reciprocal of stiffness) properties of human backs, which are palpated with two haptic interfaces (Sensable Technologies, Phantom 3.0).
Reality-based modeling for surgical simulation consists of a continuous cycle. In the figure given above, the surgeon receives visual and haptic (force and tactile) feedback and interacts with the haptic interface to control the surgical robot and instrument. The robot with instrument then operates on the patient at the surgical site per the commands given by the surgeon. Visual and force feedback is then obtained through endoscopic cameras and force sensors that are located on the surgical tools and are displayed back to the surgeon.
7. c) Military Training in virtual environment.
From the earliest moments in the history of virtual reality (VR), the United States military forces have been a driving factor in developing and applying new VR technologies. Along with the entertainment industry, the military is responsible for the most dramatic evolutionary leaps in the VR field.
Virtual environments work well in military applications. When well designed, they provide the user with an accurate simulation of real events in a safe, controlled environment. Specialized military training can be very expensive, particularly for vehicle pilots. Some training procedures have an element of danger when using real situations. While the initial development of VR gear and software is expensive, in the long run it's much more cost effective than putting soldiers into real vehicles or physically simulated situations. VR technology also has other potential applications that can make military activities safer.

Today, the military uses VR techniques not only for training and safety enhancement, but also to analyze military maneuvers and battlefield positions. In the next section, we'll look at the various simulators commonly used in military training. Out of all the earliest VR technology applications, military vehicle simulations have probably been the most successful. Simulators use sophisticated computer models to replicate a vehicle's capabilities and limitations within a stationary -- and safe -- computer station.
Possibly the most well-known of all the simulators in the military are the flight simulators. The Air Force, Army and Navy all use flight simulators to train pilots. Training missions may include how to fly in battle, how to recover in an emergency, or how to coordinate air support with ground operations.
Although flight simulators may vary from one model to another, most of them have a similar basic setup. The simulator sits on top of either an electronic motion base or a hydraulic lift system that reacts to user input and events within the simulation. As the pilot steers the aircraft, the module he sits in twists and tilts, giving the user haptic feedback. The word "haptic" refers to the sense of touch, so a haptic system is one that gives the user feedback he can feel. A joystick with force-feedback is an example of a haptic device.
Some flight simulators include a completely enclosed module, while others just have a series of computer monitors arranged to cover the pilot's field of view. Ideally, the flight simulator will be designed so that when the pilot looks around, he sees the same controls and layout as he would in a real aircraft. Because one aircraft can have a very different cockpit layout than another, there isn't a perfect simulator choice that can accurately represent every vehicle. Some training centers invest in multiple simulators, while others sacrifice accuracy for convenience and cost by sticking to one simulator model.
Ground Vehicle Simulators - Although not as high profile as flight simulators, VR simulators for ground vehicles are an important part of the military's strategy. In fact, simulators are a key part of the Future Combat System (FCS) -- the foundation of the armed forces' future. The FCS consists of a networked battle command system and advanced vehicles and weapons platforms. Computer scientists designed FCS simulators to link together in a network, facilitating complex training missions involving multiple participants acting in various roles.
The FCS simulators include three computer monitors and a pair of joystick controllers attached to a console. The modules can simulate several different ground vehicles, including non-line-of-sight mortar vehicles, reconnaissance vehicles or an infantry carrier vehicle

The Army uses several specific devices to train soldiers to drive specialized vehicles like tanks or the heavily-armored Stryker vehicle. Some of these look like long-lost twins to flight simulators. They not only accurately recreate the handling and feel of the vehicle they represent, but also can replicate just about any environment you can imagine. Trainees can learn how the real vehicle handles in treacherous weather conditions or difficult terrain. Networked simulators allow users to participate in complex war games.
7. d) Telerobotics
In a telerobotic system, a human operator controls the movements of a robot that is located some distance away. Some teleoperated robots are limited to very simple tasks, such as aiming a camera and sending back visual images. In a more sophisticated form of teleoperation known as telepresence, the human operator has a sense of being located in the robot's environment. Haptics now makes it possible to include touch cues in addition to audio and visual cues in telepresence models. It won't be long before astronomers and planetary scientists actually hold and manipulate a Martian rock through an advanced haptics-enabled telerobot, a high-touch version of the Mars Exploration Rover.
Chapter 8
LIMITATIONS OF HAPTIC SYSTEMS
Limitations of haptic devices sometimes make it impossible to apply the exact force value computed by the force-rendering algorithms.
The various issues that limit a haptic device's capability to render a desired force or, more often, a desired impedance are given below.
1) Haptic interfaces can only exert forces with limited magnitude and not equally well in all directions, thus rendering algorithms must ensure that no output components saturate, as this would lead to erroneous or discontinuous application of forces to the user. In addition, haptic devices aren't ideal force transducers.
2) An ideal haptic device would render zero impedance when simulating movement in free space, and any finite impedance when simulating contact with an object featuring such impedance characteristics. The friction, inertia, and backlash present in most haptic devices prevent them from meeting this ideal.
3) A third issue is that haptic-rendering algorithms operate in discrete time whereas users operate in continuous time, as the figure below illustrates. While moving into and out of a virtual object, the sampled avatar position will always lag behind the avatar's actual continuous-time position. Thus, when pressing on a virtual object, a user needs to perform less work than in reality.
When the user releases, however, the virtual object returns more work than its real-world counterpart would have. In other terms, touching a virtual object extracts energy from it. This extra energy can cause an unstable response from haptic devices.
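This energy leak can be demonstrated numerically. The sketch below pushes a point into a virtual spring wall at constant speed and pulls it back out, while the wall force is only recomputed at servo ticks (zero-order hold); because the held force lags the true penetration, the wall returns more work than it absorbs. All parameter values are illustrative assumptions.

```python
def simulate_wall_energy(servo_period=1e-3, dt=1e-5, k=2000.0, speed=0.1):
    """Push a point 1 cm into a sampled virtual spring wall, then pull
    it back out.  Returns the net work the wall does on the user; a
    positive value means the sampled wall pumped out extra energy."""
    x = 0.01                 # start 1 cm outside the wall (wall at x = 0)
    t, t_turn, t_end = 0.0, 0.2, 0.4   # move in for 0.2 s, out for 0.2 s
    next_sample = 0.0
    held_force = 0.0
    net_work = 0.0
    while t < t_end:
        if t >= next_sample:             # servo tick: sample and hold
            held_force = k * max(0.0, -x)
            next_sample += servo_period
        v = -speed if t < t_turn else speed
        dx = v * dt
        net_work += held_force * dx      # work the wall does on the user
        x += dx
        t += dt
    return net_work
```

Running this with a 1-kHz servo rate yields a small positive net work, and refining the servo period toward the simulation step shrinks the leak, consistent with the discrete-time explanation above.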

4) Finally, haptic device position sensors have finite resolution. Consequently, attempting to determine where and when contact occurs always results in a quantization error. Although users might not easily perceive this error, it can create stability problems.
All of these issues, well known to practitioners in the field, can limit a haptic application's realism. The first two issues usually depend more on the device mechanics; the latter two depend on the digital nature of VR applications.
Chapter 9
FUTURE VISION
As haptics moves beyond the buzzes and thumps of today's video games, technology will enable increasingly believable and complex physical interaction with virtual or remote objects. Already haptically enabled commercial products let designers sculpt digital clay figures to rapidly produce new product geometry, museum-goers feel previously inaccessible artifacts, and doctors train for simple procedures without endangering patients.
Past technological advances that permitted recording, encoding, storage, transmission, editing, and ultimately synthesis of images and sound profoundly affected society. A wide range of human activities, including communication, education, art, entertainment, commerce, and science, were forever changed when we learned to capture, manipulate, and create sensory stimuli nearly indistinguishable from reality. It's not unreasonable to expect that future advancements in haptics will have equally deep effects. Though the field is still in its infancy, hints of vast, unexplored intellectual and commercial territory add excitement and energy to a growing number of conferences, courses, product releases, and invention efforts.
For the field to move beyond today's state of the art, researchers must surmount a number of commercial and technological barriers. Device and software tool-oriented corporate efforts have provided the tools we need to step out of the laboratory, yet we need new business models. For example, can we create haptic content and authoring tools that will make the technology broadly attractive?
Can the interface devices be made practical and inexpensive enough to make them widely accessible? Once we move beyond single-point, force-only interactions with rigid objects, we should explore several technical and scientific avenues. Multipoint, multi-hand, and multi-person interaction scenarios all offer enticingly rich interactivity. Adding sub-modality stimulation such as tactile (pressure distribution) display and vibration could add subtle and important richness to the experience. Modeling compliant objects, such as for surgical simulation and training, presents many challenging problems: enabling realistic deformations, arbitrary collisions, and topological changes caused by cutting and joining actions.
Improved accuracy and richness in object modeling and haptic rendering will require advances in our understanding of how to represent and render psychophysically and cognitively germane attributes of objects, as well as algorithms and perhaps specialty hardware (such as haptic or physics engines) to perform real-time computations.
Development of multimodal workstations that provide haptic, visual, and auditory engagement will offer opportunities for more integrated interactions. We're only beginning to understand the psychophysical and cognitive details needed to enable successful multimodal interactions. For example, how do we encode and render an object so that there is seamless consistency and congruence across sensory modalities, that is, does it look like it feels? Are the object's density, compliance, motion, and appearance familiar and unconsciously consistent with context? Are sensory events predictable enough that we consider objects to be persistent, and can we make correct inferences about their properties? Hopefully we will find good solutions to all these questions in the near future.

ABSTRACT
Touch is a fundamental aspect of interpersonal communication. Whether a greeting handshake, an encouraging pat on the back, or a comforting hug, physical contact is a basic means through which people achieve a sense of connection, indicate intention, and express emotion. In close personal relationships, such as family and friends, touch is particularly important as a communicator of affection.
Current interpersonal communication technology, such as telephones, video conferencing systems, and email, provides mechanisms for audio-visual and text-based interaction. Communication through touch, however, has been left largely unexplored. In this paper, we describe an approach for applying haptic feedback technology to create a physical link between people separated by distance. The aim is to enrich current real-time communication by opening a channel for expression through touch.

1. INTRODUCTION
For quite some time, most computer-based simulations of objects were only visual. The user usually had to look at a computer screen or don a headset to give him or her access to three-dimensional objects. Later, sound became possible and it improved the simulation experience. Until recently, however, one key element has been missing: the ability to feel the object, to get a sense of:
Its shape
How heavy it is
How the surface texture feels
How hot or cold it is
For example, if a user tried to grab a virtual ball, there was no non-visual way to let the user know that the ball was in contact with the user's virtual hand. Also, there was no mechanism to keep the virtual hand from passing through the ball.
Haptic research attempts to solve these problems. Haptics (from the Greek haptesthai, "to touch") refers to the modality of touch and its associated sensory feedback.
Haptic feedback devices and supporting software permit users to sense ("feel") and manipulate three-dimensional virtual objects in terms of shape, weight, surface textures, and temperature.
Many Haptic devices are employed in the area of Virtual Reality; however, that term refers more commonly to an artificial environment created with computers and software and presented to the user in such a way that it appears and feels like a real environment.
2. HAPTICS
The term haptics refers to the sense of touch, conveying information on physical properties of tactile reception such as temperature, compliance, and change in texture.
Because of this, haptic technology holds great potential for interaction designers. Understanding the details of haptic perception and the feasibility of incorporating haptic technology into tangible user interfaces adds a powerful idiom to the interaction design vocabulary, one from which we can develop new interaction paradigms.
This project comprised two parts:
• A cursory survey of the technology, research, applications, and industry
• A field trip to a research center where new haptics devices are being developed
This sense can be divided into two categories:
• The kinesthetic sense, through which we sense movement or force in muscles and joints;
• The tactile sense, through which we sense shapes and textures.
Today, there are many devices that use haptic technology. For example, joysticks and similar devices that employ force feedback were among the earliest developed for a mass market, and were popularized in the mid-1990s when these joysticks became the rage.
Tactile perception provides us with a wide range of immediate information, which we process both consciously and unconsciously. Touching with one's hands is always a deliberate action and can be used as an effective means of input to a digital device.
2.1 TOWARDS THE TECHNOLOGY
The idea behind inTouch is to create the illusion that two people, separated by distance, are interacting with a shared physical object. In reality, each user is interacting with his/her own object; however, when one of the objects is manipulated, both users' objects are affected. In our current design, the two connected objects each consist of three cylindrical rollers mounted on a base (Figure 1). When one of the rollers is rotated, the corresponding roller on the remote object rotates in the same way. This behavior can be achieved using haptic (force-feedback) technology with sensors to monitor the physical states of the rollers and internal motors to synchronize these states.
Figure 1. inTouch conceptual sketch
Two geographically distant people can then cooperatively move the rollers, fight over the state of the rollers, or more passively feel the other person's manipulation of the device. The presence of the other person is thus made tangible through physical interaction with the seemingly shared object. Since the two objects are not mechanically linked in reality, inconsistencies in their states must be resolved by the system agreeing on a single consistent state and then employing the motors to guide the objects into that state.
When we examine objects and surfaces in the real world, our sense of feeling (touch) is as important as seeing and hearing. Normally we use all of our senses in continuous and parallel cooperation to observe, orientate, learn and receive information. The most important combination of our senses is seeing, hearing and feeling.
For many tasks feeling provides vital information to the operator, such as situations with poor lighting conditions and jobs where details are so small that they are covered by the hands and tools that do the job. An example may illustrate this: modern surgical robot equipment is still operated by doctors, who have to do their job without the sense of feeling since the robots do not have the ability to pick up and replay this touch information. Currently, doctors only receive visual (camera) information and, in some cases, "force feedback" when they attempt to access "out-of-bounds" areas.
When using computers and software for modeling processes and analyzing complex systems, we often need more than 2 dimensions, even when using colours and shades. Techniques for visual representation of the 3rd dimension have been introduced but are still lacking in their presentation of the "real world". Here also the touch channel could provide significant additional information.
2.2 HAPTIC DEVICES AND CLASSIFICATION
The word "haptic" means "relating to or proceeding from the sense of touch". A haptic interface is a device which allows a user to interact with a computer by receiving tactile feedback. This feedback is achieved by applying a degree of opposing force to the user along the x, y, and z axes. These devices can be used by people with disabilities or people who learn best through tactile or kinesthetic experiences. Haptic devices that once were cost prohibitive are now incorporated into mainstream products such as the iFeel Mouse and the iFeel MouseMan, promoting inclusion and acceptance of "adaptive" technology into the "daily computer experience" of people with and without disabilities.
A haptic interface is a device which allows a user to interact with a computer by receiving tactile and kinesthetic feedback. Haptic interface devices share the unparalleled ability to provide simultaneous information exchange between a user and a machine, as depicted below.
An illustration of the unique bi-directional information exchange of a haptic interface.
There are two main types of haptic devices:
• Glove or pen-type devices that allow the user to "touch" and manipulate 3-dimensional virtual objects
• Devices that allow users to "feel" textures of 2-dimensional objects with a pen or mouse-type interface
The 3-dimensional haptic devices can be used for applications such as surgical simulations and remote operation of robotics in hazardous environments.
The 2-dimensional haptic devices can be used to aid computer users who are blind or visually disabled, or who are tactile/kinesthetic learners, by providing a slight resistance at the edges of windows and buttons so that the user can "feel" the Graphical User Interface (GUI). This technology can also provide resistance to textures in computer images, which enables computer users to "feel" pictures such as maps and drawings.
Two Dimensional Devices
• The WingMan Force Feedback Mouse and the iFeel mouse are some of the haptic devices produced by Logitech.
Three Dimensional Devices
• The Phantom from SensAble Technologies is a 3-dimensional pen-style haptic interface, and comes in five models.
• CyberGrasp is a glove-style haptic interface that allows users to touch computer-generated objects and experience realistic force feedback.
2.3 PHANToM
It is a haptic device which enables tactile interaction with a computer. By means of this device, visually impaired people can interact with the computer.
Computers are becoming everyday technology for more and more people. Computers have opened up many doors for disabled people; for example, it is now rather easy for a blind person to access written text. Any text in a computer can be read either with a one-row Braille display or a speech synthesizer. This is done in real time and is of course much more flexible and less space consuming than books with Braille text on paper. There is a big problem though: since the introduction of graphical user interfaces, computers are getting easier and easier to use for sighted people, but GUIs have the opposite effect for non-sighted people. The many quickly accessible objects on a Windows desktop become a big mess for a user who can't see them. And if you don't know what is on the screen, it is almost impossible to control the computer.
This is where the haptic interfaces can be a good help. With the PHANToM or a similar device it is possible to feel things represented in a computer. CERTEC has
developed a set of programs which demonstrate different ways for a blind user to control a virtual reality with finger movements and to get feedback via the sense of touch. One of the big tasks for the future is to make the Microsoft Windows environment completely accessible through haptics. If you can feel the START-button in the lower left corner etc. it is not only possible to control the environment, but it is also possible to start speaking about how to do things since both sighted and non sighted users have a common ground to start from.
CERTEC is working with the meeting between human needs and technical possibilities. Normally we start with the human needs and develop technical solutions from that aspect. Sometimes though it is motivated to start from the other side.
In this case we have used a technical solution which has been developed for other purposes and modified it to correspond to the needs of a disabled person. We have turned the PHANToM into the Phantasticon.
2.3.1 A short description of the PHANToM
The PHANToM is a small robot which acts as a haptic interface between a human and a computer. A normal program uses vision, sound, a keyboard and a mouse for the interaction with a user. The PHANToM adds a new dimension to human computer interaction, namely haptic interaction. Haptic interaction uses both the sense of touch in a small scale and movement in a slightly bigger scale.

It is not unusual to connect a robot to a computer as is done with the PHANToM. The special thing in this case is that movement and sense is used for interaction between the human and the computer. With the PHANToM a user can feel objects which are represented inside the computer. At the same time one can use movement to give commands and to get feedback from the program.
When activated, the PHANToM works together with the computer to interpret the user's finger position in three-dimensional space and apply an appropriate and variable resisting force. This process is completed 1000 times per second. It is a 6-DOF device. DOF refers to degrees of freedom, the number of independent dimensions required to completely specify the position and orientation of an object.
2.3.2 Making a Phantasticon out of the PHANToM
When the PHANToM is extended to meet the needs of disabled persons it becomes a complete system. This system includes the PHANToM itself, and the software from CERTEC. It also includes a lot of ideas and thoughts about what can be done for people with special needs using this hardware and software.
SensAble Technologies Inc., a spinoff from work Salisbury and colleagues did when he was at the Massachusetts Institute of Technology, commercialized one such haptic interface in 1993. Designers have used it to carve out of thin air products from Nike shoe soles to Chicken Run collectibles.
Salisbury's Stanford lab also uses a haptic interface from Force Dimension, a company co-founded by graduate student Francois Conti. Conti is using one such device to take tactile "pictures." The spiderlike robot handle presses on a surface and records the forces causing deformation. It can then play back the forces it experienced and make a person holding the handle feel like he's poking the surface himself. The computer communicates sensations through interfaces such as the PHANTOM Haptic Interface, produced by SensAble Technologies, Inc. of Woburn, Mass.
2.3.3 Software development for the Phantasticon
CERTEC is continuously developing programs for the Phantasticon. At this moment we have the following programs ready:
"Paint with your fingers"
A program with which the user can paint computer pictures with a finger. One can choose a colour from a palette and paint with it on the screen. The harder you push with your finger, the thicker the line becomes. Each colour has an individual structure. When you are painting you can feel the structure which is being painted. You can also feel the structure of the whole picture by changing program mode with a simple click on the space key.
"Mathematical curves and surfaces"
Mathematics is partly a visual subject. That is often noticed by people who try to explain mathematics to blind persons. With the help of the Phantasticon, blind persons too can learn to understand equations as curves and surfaces. CERTEC has developed a program which makes it possible to feel any mathematical curve or surface with the PHANToM.
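A minimal sketch of such a program might map a surface z = f(x, y) to a restoring force whenever the probe tip sinks below it, so the user can trace the surface with the PHANToM stylus. The example surface, stiffness value, and function names below are illustrative assumptions, not CERTEC's implementation.

```python
import math

def surface_height(x, y):
    """Example mathematical surface to be felt: z = sin(x) * cos(y)."""
    return math.sin(x) * math.cos(y)

def surface_force(x, y, z, k=600.0):
    """Push the probe tip back up onto the surface whenever it sinks
    below z = f(x, y); free motion anywhere above the surface."""
    depth = surface_height(x, y) - z
    return k * depth if depth > 0 else 0.0
```

Swapping in a different `surface_height` is all it takes to feel another equation, which is what makes the approach attractive for teaching.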
"Submarines"
"Submarines" is a PHANToM variant of the well-known battleship game. The player can feel 10x10 squares in a coordinate system. In the game your finger is a helicopter which is hunting submarines with depth charge bombs. If you put your finger on the "water surface" you can feel the smooth waves moving up and down. The surface feels different after you have dropped a bomb, and it also feels different if a submarine has been sunk. This computer game uses the PHANToM, the screen and the keyboard for the interaction with the user.
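The smoothly moving water surface could be rendered as a travelling sine wave whose height is sampled at the finger's position on every update. This sketch uses illustrative amplitude, wavelength and period values, not the game's actual parameters:

```python
import math

# Illustrative sketch of the "water surface": a travelling sine wave whose
# height the device tracks, so the fingertip bobs smoothly up and down.

AMPLITUDE = 0.01    # metres, assumed wave height
WAVELENGTH = 0.10   # metres, assumed spatial period
PERIOD = 2.0        # seconds, assumed temporal period

def water_height(x: float, t: float) -> float:
    """Surface height at position x (m) and time t (s)."""
    k = 2 * math.pi / WAVELENGTH   # spatial frequency
    w = 2 * math.pi / PERIOD       # temporal frequency
    return AMPLITUDE * math.sin(k * x - w * t)
```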
2.3.4 Touch Windows and Work in Progress
The big effort at this moment is directed toward developing a general user interface that is easily accessible for blind people. As a test bench for haptic interface objects, and at the same time a haptic computer game, we have developed Haptic Memory. The task for the user is to find pairs of sounds which are played when the user pushes different buttons. The Memory program is a good base for finding out how the different parts of a haptic interface should be designed to work as well as possible for low-vision users.
The Haptic Memory has also been expanded into "the HOuSe", CERTEC's "first steps towards a Haptic Operating System". The HOuSe is a bigger haptic memory with five floors and five buttons on each floor. With this program we can gain experience about how blind persons can use haptics to build inner pictures of complex environments. That knowledge is an important cornerstone when we start building a complete haptic windows system or other haptic programs for visually disabled people.
"The HOuSe" has been tested with four children and five adults, all of them blind. A reference group of 21 children aged 12 has tested a program with the same function and layout, but with a graphical user interface. Since the haptic interface has the same layout as the graphical interface, and both programs work exactly the same except for the way of interacting with the user, it is possible to compare the results of the blind users with those of the sighted users. All of the blind testers had a little more than one hour of experience with the PHANToM before the test started.
The tests show that a majority of the blind users could complete the task using about as many button pushes as the sighted users, but the blind users generally needed more time. The differences in results were bigger in the group of blind users, and two of them did not finish the task at all.
The results imply that it is meaningful to keep trying to make graphical user interfaces accessible to blind people using haptic technology. Most of the blind users showed great confidence when using the haptic interface even with the rather limited experience they had, and the differences in time will probably shrink with more training.
As mentioned in the introduction, it is a big problem for non-sighted users that computers nowadays mostly have graphical user interfaces. However, Windows and other graphical user interfaces are widespread and accepted, so almost all new programs are made for those environments.
3. METHODS OF SUPPLYING GOOD SIMULATION
Good simulation creates the illusion of real objects by reflecting the proper amount of force feedback to the user. The difficulty in providing this illusion is what makes haptics a challenging area of robotics. The three main components of haptic simulation are haptic software, human perception and haptic hardware.
1. HAPTIC SOFTWARE: Haptic devices possess the ability to simulate different objects in varying environments. This ability comes from the control algorithm governing the haptic simulation. Because haptic simulations are governed in software, a great deal of research has focused on designing the control algorithms.
2. HUMAN PERCEPTION: Another component of haptic simulation is human perception of virtual objects. By suitably controlling the relationship between visual and haptic displays in multimodal virtual environments, it may be possible to overcome the limitations of haptic interfaces.
3. HAPTIC HARDWARE: The final component of haptic simulation, haptic hardware, encompasses issues regarding motor/actuator specification, transmissions and overall design. Some of these issues include elimination of backlash by using belt drives rather than gears, and use of composite materials to reduce the overall inertia, thereby decreasing momentum.
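A minimal example of such a control algorithm is the classic "virtual wall": on each servo tick the software reads the probe's position and velocity and returns a spring-damper penalty force. The stiffness and damping constants below are illustrative assumptions:

```python
# Sketch of a virtual-wall control law (assumed constants): a wall at x = 0
# is rendered by a spring-damper force whenever the probe penetrates it.

K = 1000.0  # N/m, assumed spring stiffness
B = 5.0     # N*s/m, assumed damping coefficient

def wall_force(x: float, v: float) -> float:
    """Force pushing the probe out of the wall (x < 0 means penetration)."""
    if x >= 0.0:
        return 0.0            # free space: no force
    return -K * x - B * v     # stiffness plus damping inside the wall
```

The damping term resists motion into the wall, which helps keep the rendered contact stable at realistic servo rates.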
4. APPLICATIONS
Gaming is one of the first applications of haptics that is being realized. Two students in Salisbury's experimental haptics course last spring programmed a forceful version of virtual ping-pong they called "Haptic Battle Pong." Interest in the game caused an Internet traffic jam that shut down the haptic interface manufacturer's website for a day.
Haptic technology, the technology of touch, has already been explored in contexts as diverse as modeling and animation, geophysical analysis, virtual museums, assembly planning, mine design, surgical simulation, design evaluation, control of specific instruments and robotic simulation. Thus it is a most exhilarating possibility to use the Phantasticon for widening touch-based experiences and learning beyond an arm's length. So far, we have not come across any unclimbable obstacles.

This technology has been used for cultural applications, namely in museums. It is used in interacting with three-dimensional ultrasound data. It has been applied in the medical field too, especially in maxillo-facial surgery. It is used as a chemist's tool in viewing the internal orbitals of an atom. By means of this it is possible to visualize cells clearly. It has been widely used in automobiles.
Haptics is also at work in simulated surgery. Just as commercial pilots train in flight simulators before they're unleashed on real passengers, surgeons will be able to practice their first incisions without actually cutting anyone. Simulation for surgical training is a major focus in Salisbury's lab. This work is funded by the National Institutes of Health and Stanford's Bio-X Program.
Creating a realistic, interactive internal organ is no easy feat. "It's not just something that you can touch and say, 'OK, it's round and it's squishy and it's got a bump here,' but something that you can then cut and it will bleed, or sew and it will stop bleeding," Salisbury says.
Haptic technology has great potential in many applications. This paper introduces our work on delivering haptic information via the Web. A multimodal tool has been developed to allow blind people to create virtual graphs independently. Multimodal interactions in the process of graph creation and exploration are provided by using a low-cost haptic device, the Logitech WingMan Force Feedback Mouse, and Web audio. The Web-based tool also provides blind people with the convenience of receiving information at home. In this paper, we present the development of the tool.
New Virtual Reality technologies provide the possibility of widening access to information. Haptics, the technology of touch, could be an interesting future aid and have a large impact on medical applications. The use of haptic devices allows computer users to use their sense of touch in order to feel virtual objects with a high degree of realism.
4.1 HAPTIC TECHNOLOGY IN MEDICINE: MAXILLO-FACIAL SURGERY
The aim of the thesis is to investigate the potential deployment and the benefits of using haptic force feedback instruments in maxillo-facial surgery. Based on a produced test application, the thesis includes suggested recommendations for future haptic implementations.
At the Department of Maxillo-Facial Surgery, at the Karolinska Hospital in Stockholm, Virtual Reality technologies are used as an aid to a limited extent during the production of physical medical models. The physical medical models are produced with Rapid Prototyping techniques. This process is examined and described in the thesis. Moreover, the future of the physical medical models is outlined, and a future alternative, visualizing patient data in 3D and using haptics as an interaction tool, is described. Furthermore, we have examined the present use of haptic technology in medicine, and the benefits of using the technology as an aid for diagnostic and treatment planning.
Based on a presented literature study and an international outlook, we found that haptics could improve the management of medical models. The technology could be an aid, both for physical models as well as for virtual models. We found three different ways of implementing haptics in maxillo-facial surgery.
A haptic system could be developed to manage only virtual medical models, as an alternative to the complete Rapid Prototyping process. A haptic system could serve as software handling the image processing and interfacing from a medical scanner to a Rapid Prototyping system. A haptic system could also be developed as an alternative interaction tool, implemented as an additional function in currently used image-processing software, in order to improve the management of virtual medical models before the Rapid Prototyping process.
An implementation for planning and examination in maxillo-facial surgery, using haptic force feedback interaction, is developed and evaluated. The test implementation underlies our aim of investigating the potential deployment and the benefits of using haptic force feedback instruments in maxillo-facial surgery.

4.2 IMPACT OF HAPTIC TECHNOLOGY ON CULTURAL APPLICATIONS
Four main benefits might come from the use of haptics. A number of these extend the use of graphics and three-dimensional models already used in some museums; others provide new experiences that are not currently available.
1. Allow rare, fragile or dangerous objects to be handled
Objects that are very fragile, rare or dangerous may not be handled by museum visitors or scholars. Visual models can be created, but there are many aspects that these do not capture: for example, how heavy does it feel? How rough is its surface? To solve this problem, objects could be haptically modelled, and then visitors or researchers could feel them using a haptic device. This means that these objects can be made available to large numbers of people.
2. Allow long-distance visitors
There are many potential visitors to a museum who cannot get there to visit. They might live far away or be immobile. If objects are haptically modelled and then made available on a museum's website, other access methods become possible. A school could buy a haptic device so that children can continue to feel and manipulate objects after they have been for a visit. A scholar could examine the haptic aspects of an object from a university across the world. With a haptic device at home, a visitor could feel and manipulate the object via the internet.
3. Improve access for visually disabled people
Visually impaired and blind people often lose out when going to museums because objects are behind glass. There are over one million people in the UK who are blind or partially sighted. Some museums provide special exhibits that the blind can feel. However, these exhibits are usually small and may not contain the objects that the blind visitor is interested in. With haptic technologies, such visitors could feel and interact with a much wider range of objects, enriching their experiences in the museum. Many normally sighted users could also enjoy the opportunity to touch museum exhibits.
4. Increase the number of artefacts on display
With a limited amount of space, museums can only show a limited range of artefacts from their collections. If other objects that are not on show are modelled graphically and haptically, then visitors could experience these on a computer, without taking up museum space. With several haptic devices, a museum could allow many people to feel objects at the same time, sharing the experience.
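One common way such haptic models convey surface roughness (a sketch with illustrative constants, not any museum system's actual method) is to superimpose a fine sinusoidal grating on the smooth surface model, so the stylus vibrates slightly as it slides across the object:

```python
import math

# Illustrative texture-rendering sketch: a fine sinusoidal grating added to
# the smooth shape model makes the surface feel rough. Values are assumed.

GRATING_DEPTH = 0.0002   # m, how pronounced the roughness feels
GRATING_PITCH = 0.001    # m, spacing of the texture bumps

def texture_offset(distance_along_surface: float) -> float:
    """Height perturbation added to the smooth surface model."""
    return GRATING_DEPTH * math.sin(
        2 * math.pi * distance_along_surface / GRATING_PITCH)
```

Varying the depth and pitch per object gives each artefact its own recognizable feel.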
4.3 AUTOMOTIVE USE
ALPS Corporation has developed and commercialized products applying numerous haptic-technology patents for automotive use. Immersion's haptic technologies control touch sensation and operational direction using a programmed operation knob, with various resistance forces and alarms produced mechanically and electrically by an actuator and position sensor. Since the touch sensation can be adjusted easily, multiple operations of car equipment can be controlled by a single knob.
When this technology is applied to automotive devices, the unique operational features of such electronic components as car audio, air conditioning, power mirrors and power seats are combined to enable control through a single knob. When a fault or operating error occurs, the product notifies and alerts the driver via restricted movement or vibration of the knob. Consequently, the driver is ensured safe, comfortable control of devices while in motion without averting attention from the road.
ALPS and Immersion signed a joint development contract in July 2000 to apply haptic technology to automotive devices and carried out development under it. This cooperative endeavor bore fruit in the development of the iDrive for the recently released BMW 7 Series.
ALPS will expand its line of haptic products on a global scale, and continue advanced development for various automotive products currently undergoing computerization, together with advanced safety concepts.
"The Immersion-ALPS partnership comes at a time when automotive manufacturers are actively seeking compelling alternatives to traditional user interface methods. Since our first discussions together, our collective mission for haptics in the automotive space has been to enable users to more intuitively control user interfaces through their sense of touch thereby allowing them to visually focus on the road," said Vic Viegas, president and chief operating officer for Immersion. "The collaboration that started with the new BMW iDrive Controller is continuing with this multi-million dollar opportunity which will help to proliferate the use of haptic interfaces in the automotive industry."
How Haptic Technology Works in Cars
In today's cars, various functions require individual controls including dials, sliders, and buttons -- many with a distinct mechanical feel. With haptics, a single user interface can replace the complex array of dials and buttons, while still maintaining the unique feel of each function. For example, as the driver operates the control in radio-tuning mode, the device will move smoothly until a preset station is reached. Tactile cues then indicate to the driver that a desired station is 'locked' in. Similarly, for balance or tone controls, programmable tactile sensations can be used to represent the neutral position and end of range. This interface can also recreate the unique and intuitive feel of many other basic functions including climate control, seat adjustments, and navigation systems.
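The station "lock-in" feel described above can be sketched as a programmable detent: a restoring torque that pulls the knob toward the nearest preset angle. The preset angles and torque gain below are hypothetical values, not any manufacturer's tuning:

```python
# Illustrative detent sketch: the knob is pulled toward the nearest preset,
# producing the tactile "click into station" cue. Constants are assumed.

PRESETS_DEG = [0.0, 30.0, 60.0, 90.0]  # knob angles of preset stations
K_DETENT = 0.02                        # N*m per degree, assumed gain

def detent_torque(angle_deg: float) -> float:
    """Torque (N*m) pulling the knob toward the nearest preset angle."""
    nearest = min(PRESETS_DEG, key=lambda p: abs(p - angle_deg))
    return K_DETENT * (nearest - angle_deg)
```

Re-programming `PRESETS_DEG` and `K_DETENT` changes the knob's feel per mode, which is exactly what lets one physical control stand in for many.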
Salient features
Demonstrating the most advanced auto-interface design technologies for the first time, four leading technology companies have fully integrated haptic controls with the industry's standard graphics, operating system and API. The term "haptics" refers to the science of touch, also known as force feedback. The demonstration in Wind River's booth No. 10-11 is a result of the collaboration among Immersion Corp. (NASDAQ: IMMR), a leading developer of haptic technology; Tilcon Software Ltd, one of the world's leading suppliers of visual user interface technology; Wind River Systems Inc. (NASDAQ:WIND), the worldwide market leader in embedded software and services; and Renesas Technology America Inc., a subsidiary of the new joint-venture semiconductor company established by Hitachi, Ltd. and Mitsubishi Electric Corp. The new all-in-one solution brings together graphic and haptic human-machine interaction (HMI) interfaces in an embedded platform with a real-time OS.
"In support of safety and new vehicle information systems, haptics is happening in the auto industry," said Joe DiNucci, Vice President of Sales and Marketing at Immersion Corp. "The sense of touch is already in high-end cars, and is scheduled to reach the broader auto market over the next few years. Previously a complex, laborious process, our four-way collaboration now fully integrates haptics with other state-of-the-art automotive design tools. For auto manufacturers and integrators, this integration means a significant savings of time and money while also giving them more design flexibility."
"Working with Immersion enables us to offer automotive manufacturers a ready-made solution combining cutting-edge haptic technology with superior dashboard display technology," says Ernest Hollander, Vice-President of Engineering at Tilcon. "Car manufacturers plan to deploy very elaborate information systems within automotive dashboards, a plan that will fail if these systems aren't easy to use. Drivers employ their sense of sight and touch to control a car. Integration of haptic and visual display technologies provides superior feedback, both in what the driver sees by way of Tilcon's dashboard instruments and what the driver feels through Immersion's haptic controls. The integration of these technologies makes the information system usable and the car easier to drive."
Fig: Haptic steering wheel and haptic pedal
Systems that operate automobiles and aircraft by means of computer-controlled electronic signals instead of direct action on control devices are called "X-by-wire" systems. For instance, previous systems used hydraulics to operate control surfaces when the pilot of an aircraft manipulated the joystick. In fly-by-wire systems, the action of the joystick is converted into an electronic signal used by a computer to control the movement of aircraft and the operation of other equipment.
Today, most automobiles are operated by mechanical steering systems that transmit steering-wheel movement to the wheels by means of shafts and gears. However, in response to demands for greater safety, we are beginning to see the increasing use of drive-by-wire systems that employ computer control.
These mechanical systems convey haptic information on road-surface conditions to the driver through the mechanical linkage between wheel and steering wheel. A drive-by-wire system controls the information on road-surface conditions conveyed to the driver, thereby enhancing safety. In addition, the steering yoke required in mechanical systems is eliminated, allowing greater freedom in design of steering systems and auto interior layouts. This reduces the number of chassis and body components, contributing to reductions in vehicle weight.
One problem with computer control of steering systems is that, because operational information is conveyed by means of electronic signals, there is no inherent mechanism for feeding back information to the driver on actual road-surface conditions, vehicle status, and cautionary information. ALPS turned to the capabilities of haptic technology to solve this problem and further improve safety. Employing the force feedback technology of the Haptic Commander™, ALPS' drive-by-wire system conveys the required information to the driver in tactile form. For instance, feedback on irregularities in the surface of an unpaved road is transmitted via movement of the steering wheel, while information on the gradient of a hill is conveyed via an increase or decrease in the resistance of the accelerator pedal.
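As an illustration of the pedal feedback described above (the constants and the function name are assumptions, not ALPS specifications), a drive-by-wire controller might map road gradient to pedal resistance like this:

```python
# Illustrative sketch: steeper uphill -> stiffer accelerator pedal,
# downhill -> lighter pedal. All constants are assumed values.

BASE_RESISTANCE_N = 20.0     # pedal resistance on a level road
GAIN_N_PER_PERCENT = 1.5     # extra newtons per percent of uphill gradient
MIN_RESISTANCE_N = 5.0       # never let the pedal go completely limp

def pedal_resistance(gradient_percent: float) -> float:
    """Pedal resistance (N) as a function of road gradient (% slope)."""
    r = BASE_RESISTANCE_N + GAIN_N_PER_PERCENT * gradient_percent
    return max(MIN_RESISTANCE_N, r)
```

The lower clamp matters for safety: even on a steep descent the pedal must retain enough resistance to feel controllable.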
Merits
ALPS has developed a proprietary drive-by-wire system using a force feedback function based on haptic technology.
1. Haptic technology realizes tactile feedback to the driver in a drive-by-wire system.
2. Through the application of haptic technology, artificially generated cautionary information can be conveyed to the driver.
3. Contributes to increased safety in the operation of automobiles.
4. Contributes to increased freedom in auto interior layout and lighter vehicle weight.
4.4 HAPTIC MOUSE
This mouse employs haptic technology, or virtual touch. A growing rage in computer circles, haptic technology builds the illusion of tactile dimension into virtual worlds. In a flight simulator, it rocks the real hand of a pilot during feigned turbulence. On a standard computer desktop, it lends the feeling of mass and texture to icons made of nothing more than pixels.

Fig: Haptic mouse
The haptic mouse is motorized, which creates sensations such as resistance and vibration that are clearly felt by the user. The concept is called force feedback. When the pointer of the mouse sweeps across a computer screen, software recognizes the motion and affects the mouse's response. Icons feel different from the desktop pattern and can be made to feel distinguishable from each other. More than that, the pointer reacts as if it is being pulled towards icons. The point-and-click interface shifts into a pull-and-click interface.
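The pull-toward-icons effect can be sketched as a small "gravity well" around each icon: inside a capture radius, the motors apply a spring-like force toward the icon centre. The radius and force scale below are illustrative assumptions, not the product's actual tuning:

```python
import math

# Illustrative "gravity well" sketch for a force-feedback mouse: inside a
# capture radius the pointer is pulled toward the icon centre.

CAPTURE_RADIUS = 40.0   # pixels, assumed size of the well
MAX_PULL = 1.0          # assumed force units at the rim of the well

def pull_force(px: float, py: float, ix: float, iy: float):
    """Force (fx, fy) pulling the pointer at (px, py) toward icon (ix, iy)."""
    dx, dy = ix - px, iy - py
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > CAPTURE_RADIUS:
        return (0.0, 0.0)                  # outside the well, or already there
    k = MAX_PULL / CAPTURE_RADIUS          # spring-like pull inside the well
    return (k * dx, k * dy)
```

Because the pull grows with distance from the centre (up to the rim), the pointer settles onto the icon rather than oscillating around it.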
The mouse offers "an artificial model of what the icons would feel like if you could actually pick them up," said Dennerlein. More importantly, the haptic mouse saves time. Users of the haptic mouse performed point-and-click tasks 25 percent faster than standard-mouse users. The technology decreases the need for precision, but can it also decrease the risk of work-related musculoskeletal injuries?
Musculoskeletal disorders include conditions such as carpal tunnel syndrome, lower back pain and some head and neck injuries. These types of injuries often result from repetition and force, and were formerly called repetitive stress disorders. Each year, more than half a million American workers take time off work because of these disorders, according to the Occupational Safety and Health Administration. "Haptic technology could help prevent musculoskeletal disorders or perhaps rehabilitate them, because it could be used to teach good habits," said Dennerlein. The haptic mouse, for example, controls the range of motion of the hand and wrist and, for some tasks, decreases the physical stress on the body. Researchers, with the goal of building a better mouse, have focused on changing how one holds the mouse but had not examined incorporating force-feedback models into the workplace.
"People have been suffering from musculoskeletal disorders for years," said Dennerlein, "but now it is a hot topic because the computer has brought manual labor into the office environment. What is known is that the more time you spend in front of a computer, the more likely you are to develop one of these disorders. The specific causes remain unknown." He is developing a monitoring system to be used in the field, as opposed to the lab, to characterize the physical exposures that may lead to injuries such as carpal tunnel syndrome. He also is investigating why more women than men suffer from chronic musculoskeletal disorders of the upper body.
In the meantime, haptic technology is being applied to an astonishing variety of fields. Joysticks and controllers in some video game systems use the technology to create the feel of rapid gunfire.
4.5 Haptic Phones
The announcement from Samsung opens up a myriad of possible applications that could be enabled by haptic technology on mobiles. The most obvious of these would be gaming, similar to what we have experienced on console game pads and controllers. Games could provide the stunning realism of effects like explosions and collisions, making the experience more fun and enjoyable. Music, video, and other mobile applications could also be enhanced with tactile rhythms, beats, effects, and usability cues. A mobile device could support a different tactile sensation to identify each caller while in silent mode, or even draw focus to a specific email message from the clutter of mail downloaded to the mobile, to signify higher-priority messages when scrolling through.
Just like graphics and sound, touch can be coded as digital bits and sent in packets over the internet or a cell phone network, then reassembled or "rendered" in some form on the receiving device. The transmission of data would, however, need to be fairly robust: a delay of 200 milliseconds, which is acceptable for a phone conversation or for watching video on the mobile, would noticeably degrade the accuracy of tactile feedback on the receiving device.
Samsung's haptic technology is based on Immersion's VibeTonz platform. The platform comprises an SDK library that allows the developer to simulate four basic sensations, namely vibrational (periodic), positional (texture, enclosure, ellipse, spring, grid), directional (constant, ramp) and restrictive (damper, friction) effects. Currently the SDK is only available as a BREW extension for CDMA phones, but Immersion is already working on making the VibeTonz platform available on the Symbian smartphone OS, which is widely supported by handset manufacturers like Nokia and Sony Ericsson. Immersion will provide the handset manufacturers with a hardware design spec for the power amplifier and associated circuitry, as well as the VibeTonz mobile player for embedding into the phone.
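The following is a hypothetical sketch only, not the real VibeTonz API: it shows how effect families like those above (a periodic vibration plus a directional ramp) could be mixed into one sample stream and clamped to the motor's drive range:

```python
import math

# Hypothetical sketch -- not Immersion's API. Two effect families are mixed
# into one vibration sample stream for the phone's motor.

def periodic(t: float, freq_hz: float = 160.0, strength: float = 0.6) -> float:
    """Periodic effect: a sine vibration in [-strength, strength]."""
    return strength * math.sin(2 * math.pi * freq_hz * t)

def ramp(t: float, duration: float = 1.0, strength: float = 0.4) -> float:
    """Directional ramp effect: intensity grows linearly, then holds."""
    return strength * min(t / duration, 1.0)

def render(t: float) -> float:
    """Mix the effects and clamp to the motor's [-1, 1] drive range."""
    s = periodic(t) + ramp(t)
    return max(-1.0, min(1.0, s))
```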
For general users, tactile feedback on a mobile may merely be good to have, but it would be extremely beneficial for the visually handicapped. Imagine RFID tags implanted around an MRT station sending directional information to a haptic phone that provides accentuated feedback, more innate to the human sense of touch. The world would surely become more interesting for these less fortunate folks, who may then be able to interact and move freely amongst us without fear or embarrassment. And advocates for the blind hope that the haptic computer interface will help blind people navigate desktops more easily, because they will be able to distinguish icons by the resistance of the mouse combined with acoustic cues.
4.6 HAPTICS AND EDUCATION
Haptics involves both kinesthetic movement and tactile perception. The term tactile is used primarily in referring to passive touch (being touched); but haptics involves active touch such as a student manipulating an object during hands-on science explorations. This active touch involves intentional actions that an individual chooses to do, whereas passive touch can occur without any initiating action.
For educators, involving students in consciously choosing to investigate the properties of an object is a powerful motivator and increases attention to learning. Contrast this active manipulation with passive learning, such as watching a science video. In active manipulation the student expends energy and makes a decision to manipulate materials. In more passive learning, such as watching a video, the student is asked only to sit and observe. It is more difficult to maintain attention and motivation in a passive learning context than in an active one. Associated with active manipulation is the opportunity for the student to control actions, learning, and even the speed of exploration. Control has been shown to be an important part of intrinsic motivation. Thus far, the results of our studies have supported these assertions: students find the haptic technology exciting, engaging, and interesting.
Research results
One of the most commonly used devices, and one of the interfaces employed in our studies, is SensAble Technologies' PHANToM (shown below). It is a small, desk-grounded robot-like arm that permits simulation of fingertip contact with virtual objects through a pen-like stylus.
The PHANToM desktop device from SensAble Technologies, Inc.
With this new haptics application, students are able to feel nano-sized materials such as viruses that are imaged under the AFM. In essence, the user is afforded the opportunity to have a hands-on experience with objects at the nanometer scale that are too small to be touched or even seen otherwise. The study experience was engaging and developed more positive attitudes about science. Additionally, students showed significant gains in their understanding of viruses. The cost and logistics of delivering live interaction with the atomic force microscope and virus samples limit the availability of this type of haptic instruction, and prompted a second study. Here, students experienced a computer-mediated inquiry program that incorporated stored images of the nanoManipulator's interaction with a virus sample. The goal of this exploratory study was to examine the differential impact of augmenting the computer-mediated inquiry with three feedback devices: the PHANToM, a Sidewinder, and a mouse. Results suggest that the addition of haptic feedback provides a more immersive learning environment that not only makes the instruction more engaging but may also influence the way in which students construct their understandings about viruses, as evidenced by an increase in their use of spontaneously generated analogies.
Currently work is underway to explore how the addition of haptic feedback to computer-generated 3-D virtual models of an animal cell influences middle school students' understandings of cell concepts.
Fig: The Haptic Cell passive transport simulation: users can feel the organelles
The structural differences (i.e., relative size, surface area, texture, shape, elasticity and rigidity) of the parts are emphasized. Students can 'poke' through the cell membrane, feel the viscosity of the cytoplasm, and touch the rough endoplasmic reticulum. The program also highlights the mechanisms behind the cell membrane's selective permeability. Students learn how certain molecules traverse the membrane via the various types of passive transport by trying to pass these substances through the membrane and feeling the associated forces.
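The selective-permeability forces could be sketched as a simple lookup from molecule type to the resistance the student feels when pushing it against the membrane. The molecule names and force values below are illustrative assumptions, not the simulation's actual data:

```python
# Illustrative sketch: per-molecule resistance when pushed against the
# virtual cell membrane. Names and values are assumed for illustration.

RESISTANCE_N = {
    "oxygen": 0.1,      # small, non-polar: slips through the bilayer easily
    "water": 0.5,       # passes, but less freely
    "glucose": 3.0,     # needs a carrier protein: strong resistance
    "sodium_ion": 5.0,  # charged: effectively blocked without a channel
}

def membrane_force(molecule: str, pushing: bool) -> float:
    """Opposing force (N) when pushing a molecule against the membrane."""
    if not pushing:
        return 0.0
    return RESISTANCE_N.get(molecule, 5.0)  # unknown molecules feel blocked
```

Feeling the difference between, say, oxygen and a sodium ion is exactly the kind of embodied contrast the passage describes.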
Passive transport simulation
Haptic Learning
Haptic learning plays an important role in a number of different learning environments. Students with visual impairments depend on haptics for learning through the use of Braille as well as other strategies. Looked at from a constructivist's perspective, the haptic augmentation of computer-generated 3-D virtual environments, in which the student is an active participant, can be a powerful teaching tool. Learning is often defined as the construction of knowledge as sensory data are given meaning in terms of prior knowledge. The addition of haptics affords students the opportunity to become more fully immersed in this process of meaning-making, taking advantage of tactile, kinesthetic, experiential, and embodied knowledge in new ways. This prospective new instructional tool can have direct implications for the way in which students are taught. Perhaps soon students will be able to become immersed in a virtual animal cell, more fully exploring its structure and functioning. Physics instruction will make use of haptic feedback devices to teach students about invisible forces like gravity and friction more completely. Visually impaired students will learn math by touching data represented in tangible graphs, and chemistry by feeling the attractive and repulsive forces associated with various compounds. In the end, the use of haptics in education is bound only by our imagination. "Blind people, especially those who are blind since birth, can't perceive depth, so they don't read graphics well," said Brent Gillespie, assistant professor of mechanical engineering at the University of Michigan in Ann Arbor. "But you can motorize a mouse and then they can feel things on the screen and navigate a little more easily." Haptic researchers are also working on hardware and software that will enable people to feel fabric in great detail, right down to the grain of the thread and the bias. But, he said, mainstream commercial use of haptics for e-commerce is years away.
4.7. HAPTIC INTERACTION WITH 3D ULTRASOUND DATA
The initial focus on parental interaction with ultrasound allows us to introduce haptic technology into the clinical environment in a way that has clear benefits today, and allows the medical/diagnostic uses of haptic interaction with 3D ultrasound to be explored more gradually.
The e-Touch Sono™ System:
The e-Touch Sono™ System is a turnkey hardware and software system that allows users to interactively feel and see 3D ultrasonic images. Sono also allows the 3D images to be interactively cleaned up and exported for the generation of 3D hardcopy or imagery.

ABSTRACT
The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software. Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of supercomputers. In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine. A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer. This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently Grid computing.
The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent. In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing. Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started and aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications. In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure.
This paper aims to present the state of the art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.

1. INTRODUCTION
The popularity of the Internet as well as the availability of powerful computers and high-speed network technologies as low-cost commodity components is changing the way we use computers today. These technology opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing. The term Grid is chosen as an analogy to a power grid that provides consistent, pervasive, dependable, transparent access to electricity irrespective of its source. The ideas of the Grid were brought together by Ian Foster, Carl Kesselman and Steve Tuecke, the so-called "fathers of the Grid." A detailed analysis of this analogy can be found in the literature. This new approach to network computing is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and more recently peer-to-peer (P2P) computing.

This leads to the creation of virtual organizations and enterprises: temporary alliances of enterprises or organizations that come together to share resources, skills, and core competencies in order to better respond to business opportunities or large-scale application-processing requirements, and whose cooperation is supported by computer networks.
The concept of Grid computing started as a project to link geographically dispersed supercomputers, but now it has grown far beyond its original intent. The Grid infrastructure can benefit many applications, including collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.
A Grid can be viewed as a seamless, integrated computational and collaborative environment (see Figure 1). The users interact with the Grid resource broker to solve problems, which in turn performs resource discovery, scheduling, and the processing of application jobs on the distributed Grid resources.
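The broker's workflow described above (resource discovery, then scheduling of application jobs onto distributed resources) can be illustrated with a small sketch. This is a toy model under stated assumptions: the resource names and the jobs-per-resource load metric are invented for the example, and real brokers such as the Gridbus Broker use far richer resource descriptions.

```python
# Toy model of a Grid resource broker: discover resources, then schedule
# each job onto the currently least-loaded resource.
# Resource names and the "load = number of jobs" metric are illustrative.

def discover_resources():
    """Stand-in for resource discovery: name -> current load (jobs)."""
    return {"clusterA": 0, "clusterB": 0, "clusterC": 0}

def schedule(jobs, resources):
    """Greedy scheduling: place each job on the least-loaded resource."""
    placement = {}
    for job in jobs:
        target = min(resources, key=resources.get)  # least-loaded resource
        placement[job] = target
        resources[target] += 1                      # account for the new job
    return placement

if __name__ == "__main__":
    plan = schedule(["job1", "job2", "job3", "job4"], discover_resources())
    print(plan)  # jobs spread across clusterA, clusterB, clusterC
```

The greedy least-loaded rule is only one of many scheduling policies a real broker might apply; deadline- and budget-driven policies are common in practice.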

2. TYPES OF SERVICES
From the end-user point of view, Grids can be used to provide the following types of services.
• Computational services. These are concerned with providing secure services for executing application jobs on distributed computational resources individually or collectively. Resource brokers provide the services for collective use of distributed resources. A Grid providing computational services is often called a computational Grid. Some examples of computational Grids are: NASA IPG, the World Wide Grid, and the NSF TeraGrid.
• Data services. These are concerned with providing secure access to distributed datasets and their management. To provide scalable storage and access to the datasets, they may be replicated, catalogued, and even stored in different locations to create an illusion of mass storage. The processing of datasets is carried out using computational Grid services, and such a combination is commonly called a data Grid. Sample applications that need such services for the management, sharing, and processing of large datasets are high-energy physics and accessing distributed chemical databases for drug design.
• Application services. These are concerned with application management and providing transparent access to remote software and libraries. Emerging technologies such as Web services are expected to play a leading role in defining application services. They build on the computational and data services provided by the Grid. An example system that can be used to develop such services is NetSolve.
• Information services. These are concerned with the extraction and presentation of data with meaning by using the services of computational, data, and/or application services. This level handles the low-level details of how information is represented, stored, accessed, shared, and maintained. Given its key role in many scientific endeavors, the Web is the obvious point of departure for this level.
• Knowledge services. These are concerned with the way that knowledge is acquired, used, retrieved, published, and maintained to assist users in achieving their particular goals and objectives. Knowledge is understood as information applied to achieve a goal, solve a problem, or execute a decision. An example of this is data mining for automatically building new knowledge.
To build a Grid, a number of services must be developed and deployed. These include security, information, directory, resource allocation, and payment mechanisms in an open environment, and high-level services for application development, execution management, resource aggregation, and scheduling.
Grid applications (typically multidisciplinary and large-scale processing applications) often couple resources that cannot be replicated at a single site, or which may be globally located for other practical reasons. These are some of the driving forces behind the foundation of global Grids. In this light, the Grid allows users to solve larger or new problems by pooling
together resources that could not be easily coupled before. Hence, the Grid is not only a computing infrastructure for large applications; it is a technology that can bond and unify remote and diverse distributed resources, ranging from meteorological sensors to data vaults and from parallel supercomputers to personal digital organizers. As such, it will provide pervasive services to all users that need them.
This paper aims to present the state of the art of Grid computing and attempts to survey the major international efforts in this area.
3. LEVELS OF DEPLOYMENT
Grid computing can be divided into three logical levels of deployment: Cluster Grids, Enterprise Grids, and Global Grids.
• Cluster Grids
The simplest form of a grid, a Cluster Grid consists of multiple systems interconnected through a network. Cluster Grids may contain distributed workstations and servers, as well as centralized resources in a datacenter environment. Typically owned and used by a single project or department, Cluster Grids support both high throughput and high performance jobs. Common examples of the Cluster Grid architecture include compute farms, groups of multi-processor HPC systems, Beowulf clusters, and networks of workstations (NOW).

Figure 2. Three levels of Grid computing: Cluster, Enterprise, and Global Grids.
• Enterprise Grids
As capacity needs increase, multiple Cluster Grids can be combined into an Enterprise Grid. Enterprise Grids enable multiple projects or departments to share computing resources in a cooperative way. Enterprise Grids typically contain resources from multiple administrative domains, but are located in the same geographic location.

increased support for the needs of portal and other eScience application developers. The Gridbus Broker also features a WSRF-compliant service, allowing access to most of the broker's features through a WSRF interface.
WHAT'S NEW:
1) WSRF Broker Service, deployable on a standard GT4-based WSRF container, providing web-service access to all the broker functionality
2) Gridbus Broker Workbench GUI, for easier composition, initialisation, monitoring and management of grid applications
3) New service-oriented modular design to allow vast improvements in scalability, reliability and robust failure management
4) Enhanced flexibility, usability and adaptability
5) Support for GGF JSDL (for non-parametric jobs)
6) Improved stability for all middleware
7) Improvements in data-aware scheduling
8) Various bug fixes

7. GRIDSCAPE II
Grid computing has emerged as an effective means of facilitating the sharing of distributed heterogeneous resources, enabling collaboration in large-scale environments. However, the nature of Grid systems, coupled with the overabundance and fragmentation of information, makes it difficult to monitor resources, services, and computations in order to plan and make decisions. In this paper we present Gridscape II, a customisable portal component that can be used on its own or plugged in to complement existing Grid portals. Gridscape II manages the gathering of information from arbitrary, heterogeneous and distributed sources and presents it together seamlessly within a single interface. It also leverages the Google Maps API in order to provide a highly interactive user interface. Gridscape II is simple and easy to use, providing a solution for those users who don't wish to invest heavily in developing their own monitoring portal from scratch, and also for those users who want something that is easy to customise and extend for their specific needs.
DESIGN AIMS
The design aims of Gridscape II are that it should:
1) Manage diverse forms of resource information from various types of information sources;
2) Allow new information sources to be easily introduced;
3) Allow for simple portal management and administration;
4) Provide a clear and intuitive presentation of resource information in an interactive and dynamic portal; and
5) Have a flexible design and implementation such that core components can be reused in building new components, presentation of information can be easily changed, and a high level of portability and accessibility (from the web-browser perspective) can be provided.
8. ADVANTAGES
Grid computing can provide many benefits not available with traditional computing models:
• Better utilization of resources – Grid computing uses distributed resources more efficiently and delivers more usable computing power. This can decrease time-to-market, allow for innovation, or enable additional testing and simulation for improved product quality. By employing existing resources, grid computing helps protect IT investments, containing costs while providing more capacity.
• Increased user productivity – By providing transparent access to resources, work can be completed more quickly. Users gain additional productivity as they can focus on design and development rather than wasting valuable time hunting for resources and manually scheduling and managing large numbers of jobs.
• Scalability – Grids can grow seamlessly over time, allowing many thousands of processors to be integrated into one cluster. Components can be updated independently and additional resources can be added as needed, reducing large one-time expenses.
• Flexibility – Grid computing provides computing power where it is needed most, helping to better meet dynamically changing workloads. Grids can contain heterogeneous compute nodes, allowing resources to be added and removed as needs dictate.
9. DISADVANTAGES
Microsoft is developing a security language for grids, designed to deal with some of the security issues raised by grids' decentralized nature. Grids are becoming widely used in enterprises, as well as for sharing computing resources among academic research institutions. However, there is no single, widely used approach to dealing with grid security.
The nature of Grid systems, coupled with the overabundance and fragmentation of information, makes it difficult to monitor resources, services, and computations in order to plan and make decisions.
In short, grids also have the following disadvantages:
• Grid software and standards are still evolving
• Learning curve to get started
• Non-interactive job submission

10. GRID APPLICATIONS
What types of applications will grids be used for? Building on experiences with gigabit testbeds, the I-WAY network, and other experimental systems, we have identified five major application classes for computational grids, which are described briefly in this section.
Distributed Supercomputing
Distributed supercomputing applications use grids to aggregate substantial computational resources in order to tackle problems that cannot be solved on a single system. Depending on the grid on which we are working, these aggregated resources might comprise the majority of the supercomputers in the country or simply all of the workstations within a company. Here are some contemporary examples:
• Distributed interactive simulation (DIS) is a technique used for training and planning in the military. Realistic scenarios may involve hundreds of thousands of entities, each with potentially complex behavior patterns. Yet even the largest current supercomputers can handle at most 20,000 entities. In recent work, researchers at the California Institute of Technology have shown how multiple supercomputers can be coupled to achieve record-breaking levels of performance.
• The accurate simulation of complex physical processes can require high spatial and temporal resolution in order to resolve fine-scale detail. Coupled supercomputers can be used in such situations to overcome resolution barriers and hence to obtain qualitatively new scientific results. Although high latencies can pose significant obstacles, coupled supercomputers have been used successfully in cosmology, high-resolution ab initio computational chemistry computations, and climate modeling.
Challenging issues from a grid architecture perspective include the need to co-schedule what are often scarce and expensive resources, the scalability of protocols and algorithms to tens or hundreds of thousands of nodes, latency-tolerant algorithms, and achieving and maintaining high levels of performance across heterogeneous systems.
High-Throughput Computing
In high-throughput computing, the grid is used to schedule large numbers of loosely coupled or independent tasks, with the goal of putting unused processor cycles (often from idle workstations) to work. The result may be, as in distributed supercomputing, the focusing of available resources on a single problem, but the quasi-independent nature of the tasks involved leads to very different types of problems and problem-solving methods. Here are some examples:
1) Platform Computing Corporation reports that the microprocessor manufacturer Advanced Micro Devices used high-throughput computing techniques to exploit over a thousand computers during the peak design phases of their K6 and K7 microprocessors. These computers are located on the desktops of AMD engineers at a number of AMD sites and were used for design verification only when not in use by engineers.
2) The Condor system from the University of Wisconsin is used to manage pools of hundreds of workstations at universities and laboratories around the world. These resources have been used for studies as diverse as molecular simulations of liquid crystals, studies of ground-penetrating radar, and the design of diesel engines.
3) More loosely organized efforts have harnessed tens of thousands of computers distributed worldwide to tackle hard cryptographic problems.
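The high-throughput pattern in the examples above — many independent tasks farmed out to whatever workers happen to be free — can be sketched with Python's standard-library executor. This is only an illustration: the squaring "task" stands in for a design-verification or simulation job, and a thread pool on one machine stands in for a pool of idle workstations.

```python
# Sketch of high-throughput computing: schedule many independent,
# loosely coupled tasks across a pool of workers. The task body and
# worker count are illustrative placeholders.

from concurrent.futures import ThreadPoolExecutor

def task(n):
    """Stand-in for an independent job (e.g. one verification run)."""
    return n * n

def run_high_throughput(inputs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Tasks share no state, so they can run in any order;
        # map() still returns results in input order.
        return list(pool.map(task, inputs))

if __name__ == "__main__":
    print(run_high_throughput(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

A real system such as Condor adds exactly what this sketch lacks: discovering which machines are idle, migrating jobs when an owner returns, and surviving worker failures.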
On-Demand Computing
On-demand applications use grid capabilities to meet short-term requirements for resources that cannot be cost-effectively or conveniently located locally. These resources may be computation, software, data repositories, specialized sensors, and so on. In contrast to distributed supercomputing applications, these applications are often driven by cost-performance concerns rather than absolute performance. For example:
• The NEOS and NetSolve network-enhanced numerical solver systems allow users to couple remote software and resources into desktop applications, dispatching to remote servers calculations that are computationally demanding or that require specialized software.
• A computer-enhanced MRI machine and scanning tunneling microscope (STM) developed at the National Center for Supercomputing Applications use supercomputers to achieve real-time image processing. The result is a significant enhancement in the ability to understand what we are seeing and, in the case of the microscope, to steer the instrument.
• A system developed at the Aerospace Corporation for processing data from meteorological satellites uses dynamically acquired supercomputer resources to deliver the results of a cloud detection algorithm to remote meteorologists in quasi real time.
The challenging issues in on-demand applications derive primarily from the dynamic nature of resource requirements and the potentially large populations of users and resources. These issues include resource location, scheduling, code management, configuration, fault tolerance, security, and payment mechanisms.
Data-Intensive Computing
In data-intensive applications, the focus is on synthesizing new information from data that is maintained in geographically distributed repositories, digital libraries, and databases. This synthesis process is often computationally and communication intensive as well.
• Future high-energy physics experiments will generate terabytes of data per day, or around a petabyte per year. The complex queries used to detect "interesting" events may need to access large fractions of this data. The scientific collaborators who will access this data are widely distributed, and hence the data systems in which data is placed are likely to be distributed as well.
• The Digital Sky Survey will, ultimately, make many terabytes of astronomical photographic data available in numerous network-accessible databases. This facility enables new approaches to astronomical research based on distributed analysis, assuming that appropriate computational grid facilities exist.
• Modern meteorological forecasting systems make extensive use of data assimilation to incorporate remote satellite observations. The complete process involves the movement and processing of many gigabytes of data.
Challenging issues in data-intensive applications are the scheduling and configuration of complex, high-volume data flows through multiple levels of hierarchy.
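The data-intensive pattern described above — synthesizing information from data held in geographically distributed repositories — often works by shipping the query to each repository and moving back only the small partial results. The sketch below is a toy scatter-gather illustration; the repository names and their contents are invented for the example.

```python
# Toy scatter-gather query over distributed repositories: each site
# evaluates the query locally, and only per-site counts travel back.
# Site names and data are illustrative assumptions.

repositories = {
    "site_eu": [3, 8, 15, 4],
    "site_us": [7, 2, 9],
    "site_asia": [11, 6],
}

def local_query(data, threshold):
    """Runs at the repository: count 'interesting' events locally."""
    return sum(1 for x in data if x > threshold)

def distributed_query(threshold):
    """Gather only the small per-site counts, never the raw data."""
    return sum(local_query(d, threshold) for d in repositories.values())

print(distributed_query(5))  # number of events > 5 across all sites
```

Keeping the computation next to the data is what makes petabyte-scale queries feasible: the high-volume flows stay inside each site, and only the synthesized results cross the wide-area network.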
Collaborative Computing
Collaborative applications are concerned primarily with enabling and enhancing human-to-human interactions. Such applications are often structured in terms of a virtual shared space. Many collaborative applications are concerned with enabling the shared use of computational resources such as data archives and simulations; in this case, they also have characteristics of the other application classes just described. For example:
• The BoilerMaker system developed at Argonne National Laboratory allows multiple users to collaborate on the design of emission control systems in industrial incinerators. The different users interact with each other and with a simulation of the incinerator.
• The CAVE5D system supports remote, collaborative exploration of large geophysical data sets and the models that generate them; for example, a coupled physical/biological model of the Chesapeake Bay.
• The NICE system developed at the University of Illinois at Chicago allows children to participate in the creation and maintenance of realistic virtual worlds, for entertainment and education.
Challenging aspects of collaborative applications from a grid architecture perspective are the real-time requirements imposed by human perceptual capabilities and the rich variety of interactions that can take place.
We conclude this section with three general observations. First, we note that even in this brief survey we see a tremendous variety of already successful applications. This rich set has been developed despite the significant difficulties faced by programmers developing grid applications in the absence of a mature grid infrastructure. As grids evolve, we expect the range and sophistication of applications to increase dramatically. Second, we observe that almost all of the applications demonstrate a tremendous appetite for computational resources (CPU, memory, disk, etc.) that cannot be met in a timely fashion by expected growth in single-system performance. This
emphasizes the importance of grid technologies as a means of sharing computation as well as a data access and communication medium. Third, we see that many of the applications are interactive, or depend on tight synchronization with computational components, and hence depend on the availability of a grid infrastructure able to provide robust performance guarantees.
11. CONCLUSION
Grid computing is an emerging computing model that provides the ability to perform higher throughput computing by taking advantage of many networked computers to model a virtual computer architecture that is able to distribute process execution across a parallel infrastructure. Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems.
Grids provide the ability to perform computations on large data sets, by breaking them down into many smaller ones, or provide the ability to perform many more computations at once than would be possible on a single computer, by modeling a parallel division of labour between processes. Today resource allocation in a grid is done in accordance with SLAs (service level agreements).
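The split-and-combine idea just described — breaking a large computation into many smaller ones and merging the partial results — can be sketched directly. The chunk size and the summation workload are illustrative assumptions; on a real grid each chunk would be dispatched to a different node.

```python
# Sketch of a grid-style computation: split a large problem into smaller
# independent pieces, solve each piece separately, combine the results.
# Chunk size and workload (a sum) are illustrative.

def split(data, chunk_size):
    """Break one large input into many smaller ones."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def combine(partials):
    """Merge the partial results into the final answer."""
    return sum(partials)

data = list(range(1, 101))            # the "large data set"
chunks = split(data, 10)              # ten smaller problems
partials = [sum(c) for c in chunks]   # each could run on a different node
print(combine(partials))              # 5050, identical to sum(data)
```

The correctness of the scheme rests on the operation being decomposable (here, addition is associative); the service-level agreements mentioned above then govern which resources actually execute each piece.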
One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of abstraction is that it allows resource substitution to be more easily accomplished. Some of the overhead associated with this flexibility is reflected in the middleware layer and the temporal latency associated with the access of a Grid (or any distributed) resource. This overhead, especially the temporal latency, must be evaluated in terms of the impact on computational performance when a Grid resource is employed.

12. FUTURE TRENDS
There are currently a large number of projects and a diverse range of new and emerging Grid developmental approaches being pursued. These systems range from Grid frameworks to application testbeds, and from collaborative environments to batch submission mechanisms.
It is difficult to predict the future in a field such as information technology where the technological advances are moving very rapidly. Hence, it is not an easy task to forecast what will become the 'dominant' Grid approach. Windows of opportunity for ideas and products seem to open and close in the 'blink of an eye'. However, some trends are evident. One of those is growing interest in the use of Java and Web services for network computing.
The Java programming language successfully addresses several key issues that accelerate the development of Grid environments, such as heterogeneity and security. It also removes the need to install programs remotely; the minimum execution environment is a Java-enabled Web browser. Java, with its related technologies and growing repository of tools and utilities, is having a huge impact on the growth and development of Grid environments. From a relatively slow start, the developments in Grid computing are accelerating fast with the advent of these new and emerging technologies. It is very hard to ignore the presence of the Common Object Request Broker Architecture (CORBA) in the background. We believe that frameworks incorporating CORBA services will be very influential on the design of future Grid environments.
The two other emerging Java technologies for Grid and P2P computing are Jini and JXTA . The Jini architecture exemplifies a network-centric service-based approach to computer systems. Jini replaces the notions of peripherals, devices, and applications with that of network-available services. Jini helps break down the conventional view of what a computer is, while including new classes of services that work together in a federated
architecture. The ability to move code from the server to its client is the core difference between the Jini environment and other distributed systems, such as CORBA and the Distributed Common Object Model (DCOM).

Whatever the technology or computing infrastructure that becomes predominant or most popular, it can be guaranteed that at some stage in the future its star will wane. Historically, in the field of computer research and development, this fact can be repeatedly observed. The lesson from this observation must therefore be drawn that, in the long term, backing only one technology can be an expensive mistake. The framework that provides a Grid environment must be adaptable, malleable, and extensible. As technology and fashions change it is crucial that Grid environments evolve with them.
Smarr observes that Grid computing has serious social consequences and is going to have as revolutionary an effect as railroads did in the American Midwest in the early 19th century. Instead of a 30-40 year lead-time to see its effects, however, its impact is going to be much faster. Smarr concludes by noting that the effects of Grids are going to change the world so quickly that mankind will struggle to react and change in the face of the challenges and issues they present. Therefore, at some stage in the future, our computing needs will be satisfied in the same pervasive and ubiquitous manner that we use the electricity power grid. The analogies with the generation and delivery of electricity are hard to ignore, and the implications are enormous. In fact, the Grid is analogous to the electricity (power) grid, and the vision is to offer (almost) dependable, consistent, pervasive, and inexpensive access to resources irrespective of where they physically reside and from where they are accessed.
13. BIBLIOGRAPHY
1) I. Foster and C. Kesselman, editors. The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, San Francisco, Calif. (1999).
2) Foster, I., Kesselman, C. and Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of High Performance Computing Applications.
3) Rajkumar Buyya, Mark Baker. Grids and Grid Technologies for Wide-Area Distributed Computing, SP&E.
4) Ian Foster. The Grid: A New Infrastructure for 21st Century Science, Physics Today.
5) http://www.globus.org
6) http://www.en.wikipedia.org
7) http://www.gridcomputing.com
CONTENTS
1) INTRODUCTION
2) TYPES OF SERVICES
3) LEVELS OF DEPLOYMENT
4) GRID CONSTRUCTION: GENERAL PRINCIPLES
5) DESIGN FEATURES
6) GRID SERVICE BROKER
7) GRIDSCAPE II
8) ADVANTAGES
9) DISADVANTAGES
10) GRID APPLICATIONS
11) CONCLUSION
12) FUTURE TRENDS
13) BIBLIOGRAPHY

Haptics: 'Touch the Virtual'
Presented By:
NIKHIL JS
Roll no: 20
'Haptics' is derived from the Greek word 'haptikos', which means being able to come into contact with.
THE SCIENCE OF TOUCH
Virtual Reality
Virtual reality is a form of human-computer interaction providing a virtual environment that one can explore through direct interaction with the senses.
It is an imitation of the real world.
To complete this imitation, one should be able to interact with the environment and get feedback.
The user should be able to touch a virtual object and feel a response from it.
This feedback is called Haptic Feedback.
Basic idea
Haptics is the technology of adding the sensation of touch and feeling to computers.
A haptic device gives people a sense of touch with computer-generated environments, so that when virtual objects are touched, they seem real and tangible.
The goal is understanding and enabling a compelling experience of presence not limited to "being there", but extended to "being in touch" with remote or virtual surroundings.
The Technology
Haptics is implemented through different types of interactions with a haptic device communicating with the computer. These interactions can be categorized into the different types of touch sensations a user can receive:
Force feedback
Tactile feedback
Force feedback
It reproduces the directional forces that can result from solid boundaries, e.g. the weight of virtual objects, inertia, etc.
Tactile feedback
Refers to the sensations felt by the skin.
It allows the user to feel things such as the texture of surfaces, temperature and vibration.
Data transfer
The Virtual Reality Modeling Language (VRML) tells the interface how much force the haptic device should return when it is touched.
How it works
Haptic devices
They allow users to touch, feel and manipulate 3-D objects in virtual environments. Two types are:
Ground based
Body based
How are Haptic devices different?
Common interface devices like the mouse and joystick are input-only devices; they provide no feedback.
Haptic devices are input-output devices.
THE NOVINT FALCON
The Novint Falcon is the first haptic interface device to bring 3D touch to consumers.
As the Novint Falcon is moved, the computer keeps track of a 3D cursor.
When the 3D cursor touches a virtual object, the computer registers contact with that object and updates the currents to the motors in the device, creating an appropriate force on the device's handle, which the user feels.
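The position-to-force loop just described can be illustrated with a common penalty (spring) model of haptic rendering: when the cursor penetrates a virtual surface, the device pushes back in proportion to the penetration depth. This is a generic sketch, not the Falcon's actual firmware; the stiffness value and the wall placement are illustrative assumptions.

```python
# Penalty-based haptic rendering sketch: force proportional to how far
# the 3D cursor has penetrated a virtual wall. Stiffness and geometry
# are illustrative; real devices tune them to the motors' limits.

STIFFNESS = 500.0   # N/m, assumed spring constant
WALL_Z = 0.0        # virtual wall at z = 0; free space is z > 0

def render_force(cursor_z):
    """Return the force (in N, along z) the motors should apply."""
    penetration = WALL_Z - cursor_z
    if penetration <= 0:
        return 0.0                      # not touching the wall: no force
    return STIFFNESS * penetration      # push the handle back out

# One step of the haptic loop: cursor 2 mm inside the wall -> ~1 N outward.
print(render_force(-0.002))
```

In practice this function runs inside a control loop at roughly 1 kHz, because slower update rates make rigid surfaces feel soft or unstable.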
Exoskeletons
Large and immobile systems that the user must attach himself or herself to.
Their large size and immobile nature allow for the generation of large and varied force information.
Gloves and wearable devices
The user can move naturally without being weighed down by a large exoskeleton or immobile device,
e.g. the Hand Master.
Working
Locomotion interface and full body force feedback
These simulate, within a confined space, unrestrained human mobility such as walking and running for virtual reality.
Applications
Virtual reality
Virtual surgery
Tele-presence
Human assistive devices
Games
Conclusion
Haptics is the next important step towards the realistically simulated environments that have been envisioned by science fiction authors and futurists alike.
It holds large potential for applications in critical fields as well as for leisure.
Haptic devices must be miniaturized so that they are lighter and simpler.

Presented By
L.Madhuri
06R51A0526
Introduction
How does it work?
Important
Haptic Interaction
Advantages
References
Conclusion
Approaching

INTRODUCTION
Haptics is defined as a technology that adds the sense of touch to virtual environments.
Haptics refers to sensing and manipulation through touch. The word comes from the Greek ‘haptesthai’, meaning ‘to touch’.

How Does Haptics Work?
Sensor(s)
Actuator (motor) control circuitry
One or more actuators that either vibrate or exert force
Real-time algorithms (actuator control software, which we call a player) and a haptic effect library
Application programming interface (API), and often a haptic effect authoring tool
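As a toy illustration of the "player plus effect library" idea above, the control software can be pictured as sampling named effects at the actuator control rate. The effect names, waveforms, and rate here are made up for the sketch:

```python
import math

# A toy "haptic effect library": each effect maps time (s) to a
# normalized actuator drive level in [-1, 1]. Names are invented.
EFFECTS = {
    "click":  lambda t: 1.0 if t < 0.01 else 0.0,        # 10 ms pulse
    "rumble": lambda t: math.sin(2 * math.pi * 30 * t),  # 30 Hz vibration
}

def play(effect, duration, rate=1000):
    """Sample an effect at the control rate, as the 'player' would
    when driving the actuator control circuitry."""
    return [EFFECTS[effect](i / rate) for i in range(round(duration * rate))]
```

An application's API call would then reduce to choosing an effect name and duration, leaving waveform details to the player.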

Important
Bill Buxton: "hands-on" means "fingers-on"
User interfaces are not exploiting the available computing power
More 3D and VR environments in games and elsewhere
Demand for richer input and output possibilities

Advantages
Reduced energy expenditure
Increase in productivity and comfort
Decreased learning times
Large reduction in manipulation error

Initially, computers could deal only with numbers. It took many years to realize the importance of operating with text. The introduction of CRT display technologies allowed graphics to be displayed, giving us a new way to interact with computers. As processing power increased over time, three-dimensional (3D) graphics became more common, and we may now peer into synthetic worlds that seem solid and almost real. Likewise, until recently, the notion of carrying on a “conversation” with our computer was far-fetched. Now, speech technology has progressed to the point that many interesting applications are being considered. Just over the horizon, computer vision is destined to play a role in face and gesture recognition. It seems clear that as the art of computing progresses, even more of the human sensory palette will become engaged.
It is likely that the sense of touch (haptics) will be the next sense to play an important role in this evolution.
We use touch pervasively in our everyday lives, and are accustomed to easy manipulation of objects in three dimensions. Even our conversation is peppered with references to touching. We principally use our hands to explore and interact with our surroundings. The hand is unique in this respect because it is both an input device and an output device: sensing and actuation are integrated within the same active living mechanism. Just as primitive man forged hand tools to triumph over harsh nature, we need to develop smart devices to interface with information-rich real and virtual worlds. Given the ever-increasing quantities and types of information that surround us, and to which we need to respond rapidly, there is a critical need to explore new ways to interact with information. In order to be efficient in this interaction, it is essential that we utilize all of our sensorimotor capabilities. Our haptic system – with its tactile, kinesthetic, and motor capabilities together with the associated cognitive processes – presents a uniquely bi-directional information channel to our brains, yet it remains underutilized. Haptics is poised for rapid growth and is receiving broad, global acceptance. There are many terms used to describe haptics technology in user interfaces, including “full force feedback,” “rumble feedback,” “tactile feedback,” “touch-enabled,” “vibration,” and “vibrotactile.”

INTRODUCTION
Haptics is derived from the Greek word ‘haptikos’, which means “being able to come into contact with”.

Haptics is the technology of adding the sensation of touch and feeling to computers.

A haptic device gives people a sense of touch with computer-generated environments, so that when virtual objects are touched, they seem real and tangible.

Understanding and enabling a compelling experience of presence is not limited to "being there", but extends to "being in touch" with remote or virtual surroundings.

TECHNOLOGY
Haptic interactions can be categorized by the different types of touch sensations a user can receive:
Tactile feedback
Force feedback

Tactile feedback

Refers to the sensations felt by the skin.
It allows the user to feel things such as the texture of surfaces, temperature and vibration.

Force feedback

It reproduces the directional forces that can result from solid boundaries.
E.g. the weight of virtual objects, inertia, etc.

Haptic Devices

By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.

A haptic interface device has the ability to provide simultaneous information exchange between a user and a machine.

The 2-dimensional haptic devices can be used to aid computer users who are blind or visually disabled by providing a slight resistance at the edges of windows and buttons, so that the user can "feel" the Graphical User Interface (GUI).

This technology can also provide resistance to textures in computer images, which enables computer users to "feel" pictures such as maps and drawings.
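As a rough sketch of the edge-resistance idea, a 1-D resistive force can be keyed to the pointer's distance past a window edge. The band width, gain, and function name here are hypothetical, not any vendor's actual implementation:

```python
def edge_resistance(x, edge_x, band=6.0, k=0.5):
    """Resistive force (arbitrary units) for a pointer at horizontal
    position x crossing a window edge at edge_x (pixels)."""
    depth = x - edge_x
    if 0.0 <= depth < band:          # just past the edge, inside the band
        return -k * (band - depth)   # push back toward the edge
    return 0.0                       # elsewhere: no resistance
```

The resistance is strongest right at the edge and fades as the pointer pushes through, which is what lets the user "feel" the boundary without being trapped by it.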

The WingMan Force Feedback Mouse and the iFeel mouse are two of the two-dimensional haptic devices produced by Logitech.

Virtual reality/ Telerobotics based devices

These allow the user to "touch" and manipulate 3-dimensional virtual objects.

The PHANToM from SensAble Technologies is a 3-dimensional pen-style haptic interface, and comes in five models.

Exoskeletons
The term exoskeleton refers to the hard outer shell worn over the body. These are large and immobile systems that the user must attach himself or herself to.
Their large size and immobile nature allow for the generation of large and varied forces.

Gloves and wearable devices
These are smaller, exoskeleton-like devices. The user can move naturally without being weighed down by a large exoskeleton or immobile device.

Point-sources and Specific task devices
Single point of contact
Specialized for performing a particular task.
This restricts the device to a few functions.

Locomotion Interface
Feedback is in the form of full-body force feedback. In a confined space, these interfaces simulate unrestrained human mobility, such as walking and running, for virtual reality.

presented by:
K.Kavitha
J.V. Lakshmi
ABSTRACT
Engineering finds a wide range of applications in every field, and the medical field is no exception. One of the technologies which aid surgeons in performing even the most complicated surgeries successfully is Virtual Reality.
Even though virtual reality is employed to carry out operations, the surgeon’s attention remains one of the most important parameters. Any mistake may lead to a dangerous end. So, one may think of a technology that reduces the burden on the surgeon by providing a more efficient interaction than VR alone. This dream became reality by means of a technology called “HAPTIC TECHNOLOGY”.
Haptics is the “science of applying tactile sensation to human interaction with computers”. In our paper we discuss the basic concepts behind haptics, along with haptic devices and how these devices interact to produce the sense of touch and force feedback mechanisms. The implementation of this mechanism by means of haptic rendering and contact detection is also discussed.
We mainly focus on ‘Application of Haptic Technology in Surgical Simulation and Medical Training’. Further, we explain the storage and retrieval of haptic data while working with haptic devices, and illustrate the necessity of haptic data compression.
Introduction:
Haptic is a term derived from the Greek word ‘haptesthai’, which means ‘to touch’. Haptics is defined as the “science of applying tactile sensation to human interaction with computers”. It enables manual interaction with real, virtual and remote environments. Haptics permits users to sense (“feel”) and manipulate three-dimensional virtual objects with respect to such features as shape, weight, surface texture, and temperature.
A haptic device is one that involves physical contact between the computer and the user. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.
In our paper we explain the basic concepts of ‘Haptic Technology and its Application in Surgical Simulation and Medical Training’.
Haptic Devices:
Force feedback is the area of haptics that deals with devices that interact with the muscles and tendons that give the human a sensation of a force being applied—hardware and software that stimulates humans’ sense of touch and feel through tactile vibrations or force feedback.
These devices mainly consist of robotic manipulators that push back against the user with forces corresponding to the environment that the virtual effector is in. Tactile feedback makes use of devices that interact with the nerve endings in the skin to indicate heat, pressure, and texture. These devices have typically been used to indicate whether or not the user is in contact with a virtual object. Other tactile feedback devices have been used to simulate the texture of a virtual object.
PHANToM and CyberGrasp are some examples of haptic devices.
PHANToM:
A small robot arm with three revolute joints, each connected to a computer-controlled electric DC motor. The tip of the device is attached to a stylus that is held by the user. By sending appropriate voltages to the motors, it is possible to exert up to 1.5 pounds of force at the tip of the stylus, in any direction.
CYBERGRASP:
The CyberGlove is a lightweight glove with flexible sensors that accurately measure the position and movement of the fingers and wrist. The CyberGrasp, from Immersion Corporation, is an exoskeleton device that fits over a 22-DOF CyberGlove, providing force feedback. The CyberGrasp is used in conjunction with a position tracker to measure the position and orientation of the forearm in three-dimensional space.
Haptic Rendering:
Haptic rendering is the process of applying forces to the user through a force-feedback device. Using haptic rendering, we can enable a user to touch, feel and manipulate virtual objects, enhancing the user’s experience in a virtual environment. It is the process of displaying synthetically generated 2D/3D haptic stimuli to the user. The haptic interface acts as a two-port system terminated on one side by the human operator and on the other side by the virtual environment.
Contact Detection
A fundamental problem in haptics is to detect contact between the virtual objects and the haptic device (a PHANToM, a glove, etc.). Once this contact is reliably detected, a force corresponding to the interaction physics is generated and rendered using the probe. This process usually runs in a tight servo loop within a haptic rendering system.
Another technique for contact detection is to generate the surface

Now then, what if we did a comprehensive review of haptic gloves with pressure sensors, and what happens if we put the pressure sensors inside the glove? What could we use this for? If we place the pressure sensors inside the glove and put the cuff on a human hand, we would know the force the hand applies while in motion. Perhaps this could use one of J. S. Callahan's innovations with LEDs.

presented by:
K. Suryakumari
N. Srujana

presented by:
Neha Jha
&
D. Naga Sivanath
ABSTRACT
“HAPTICS” is a technology that adds the sense of touch to virtual environments. Haptic interfaces allow the user to feel as well as to see virtual objects on a computer, and so we can give an illusion of touching surfaces, shaping virtual clay or moving objects around.
The sensation of touch is the brain’s most effective learning mechanism, more effective than seeing or hearing, which is why the new technology holds so much promise as a teaching tool.
Haptic technology is like exploring the virtual world with a stick. If you push the stick into a virtual balloon, you feel it push back. The computer communicates sensations through a haptic interface: a stick, scalpel, racket or pen that is connected to force-exerting motors.
With this technology we can now sit down at a computer terminal and touch objects that exist only in the "mind" of the computer. By using special input/output devices (joysticks, data gloves, or other devices), users can receive feedback from computer applications in the form of felt sensations in the hand or other parts of the body. In combination with a visual display, haptics technology can be used to train people for tasks requiring hand-eye coordination, such as surgery and spaceship maneuvers.
In this paper we explicate how sensors and actuators are used for tracking the position and movement of the haptic device moved by the operator. We mention the different types of force rendering algorithms. Then, we move on to a few applications of Haptic Technology. Finally we conclude by mentioning a few future developments.
Introduction
What is Haptics?
Haptics refers to sensing and manipulation through touch. The word comes from the Greek ‘haptesthai’, meaning ‘to touch’.
The history of the haptic interface dates back to the 1950s, when a master-slave system was proposed by Goertz (1952). Haptic interfaces grew out of the field of tele-operation, which was then employed in the remote manipulation of radioactive materials. The ultimate goal of the tele-operation system was "transparency": a user interacting with the master device in a master-slave pair should not be able to distinguish between using the master controller and manipulating the actual tool itself. Early haptic interface systems were therefore developed purely for telerobotic applications.
Working of Haptic Devices
Architecture for haptic feedback:
Basic architecture for a virtual reality application incorporating visual, auditory, and haptic feedback.
• Simulation engine:
Responsible for computing the virtual environment’s behavior over time.
• Visual, auditory, and haptic rendering algorithms:
Compute the virtual environment’s graphic, sound, and force responses toward the user.
• Transducers:
Convert visual, audio, and force signals from the computer into a form the operator can perceive.
• Rendering:
Process by which desired sensory stimuli are imposed on the user to convey information about a virtual haptic object.
The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio (computer speakers, headphones, and so on) and visual displays (a computer screen or head-mounted display, for example).
Audio and visual channels feature unidirectional information and energy flow (from the simulation engine towards the user), whereas the haptic modality exchanges information and energy in two directions, from and toward the user. This bidirectionality is often referred to as the single most important feature of the haptic interaction modality.
System architecture for haptic rendering:
An avatar is the virtual representation of the haptic interface through which the user physically interacts with the virtual environment.
Haptic-rendering algorithms compute the correct interaction forces between the haptic interface representation inside the virtual environment and the virtual objects populating the environment. Moreover, haptic rendering algorithms ensure that the haptic device correctly renders such forces on the human operator.
1.)Collision-detection algorithms detect collisions between objects and avatars in the virtual environment and yield information about where, when, and ideally to what extent collisions (penetrations, indentations, contact area, and so on) have occurred.
2.) Force-response algorithms compute the interaction force between avatars and virtual objects when a collision is detected. This force approximates as closely as possible the contact forces that would normally arise during contact between real objects.
Hardware limitations prevent haptic devices from applying the exact force computed by the force-response algorithms to the user.
3.) Control algorithms command the haptic device in such a way that minimizes the error between ideal and applicable forces. The discrete-time nature of the haptic- rendering algorithms often makes this difficult.
The force response algorithms’ return values are the actual force and torque vectors that will be commanded to the haptic device.
Existing haptic rendering techniques are currently based upon two main principles: "point-interaction" or "ray-based".
In point interactions, a single point, usually the distal point of a probe, thimble or stylus employed for direct interaction with the user, is employed in the simulation of collisions. The point penetrates the virtual objects, and the depth of indentation is calculated between the current point and a point on the surface of the object. Forces are then generated according to physical models, such as spring stiffness or a spring-damper model.
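The spring and spring-damper responses mentioned above can be sketched in a few lines. This is a minimal 1-DOF illustration along the surface normal; the stiffness and damping values are illustrative assumptions, not values from any particular device:

```python
K = 600.0  # spring stiffness, N/m (illustrative)
B = 2.5    # damping coefficient, N*s/m (illustrative)

def render_force(depth, normal_velocity):
    """Spring-damper force for a probe point penetrating a surface.

    depth           -- penetration depth (m); <= 0 means no contact
    normal_velocity -- probe velocity along the outward normal (m/s)
    """
    if depth <= 0.0:
        return 0.0                        # point is outside the object
    return K * depth - B * normal_velocity  # spring pushes out, damper resists
```

Setting B to zero recovers the pure spring-stiffness model; the damping term is what makes contact feel less "springy" and helps suppress oscillation.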
In ray-based rendering, the user interface mechanism, for example, a probe, is modeled in the virtual environment as a finite ray. Orientation is thus taken into account, and collisions are determined between the simulated probe and virtual objects. Collision detection algorithms return the intersection point between the ray and the surface of the simulated object.
Computing contact-response forces:
Humans perceive contact with real objects through sensors (mechanoreceptors) located in their skin, joints, tendons, and muscles. We make a simple distinction between the information these two types of sensors can acquire.
1.Tactile information refers to the information acquired through sensors in the skin with particular reference to the spatial distribution of pressure, or more generally, tractions, across the contact area.
To handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands.
2.Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two.
To provide a haptic simulation experience, systems are designed to recreate the contact forces a user would perceive when touching a real object.
There are two types of forces:
1. Forces due to object geometry.
2. Forces due to object surface properties, such as texture and friction.
Geometry-dependent force-rendering algorithms:
The first type of force-rendering algorithms aspires to recreate the force interaction a user would feel when touching a frictionless and textureless object.
Force-rendering algorithms are also grouped by the number of Degrees-of-freedom (DOF) necessary to describe the interaction force being rendered.
Surface property-dependent force-rendering algorithms:
All real surfaces contain tiny irregularities or indentations. Higher accuracy, however, sacrifices speed, a critical factor in real-time applications. Any choice of modeling technique must consider this tradeoff. Keeping this trade-off in mind, researchers have developed more accurate haptic-rendering algorithms for friction.
In computer graphics, texture mapping adds realism to computer-generated scenes by projecting a bitmap image onto surfaces being rendered. The same can be done haptically.
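A haptic texture can be sketched by modulating the smooth-surface contact force with a height field, analogous to projecting a bitmap onto the surface. In this sketch a sine grating stands in for the bitmap, and the amplitude and period are illustrative assumptions:

```python
import math

def textured_force(base_force, x, y, amplitude=0.3, period=0.004):
    """Modulate a smooth-surface contact force (N) with a synthetic
    height field sampled at contact position (x, y) on the surface (m)."""
    bump = (math.sin(2 * math.pi * x / period) *
            math.sin(2 * math.pi * y / period))
    return base_force * (1.0 + amplitude * bump)
```

In a real system the `bump` lookup would sample the bitmap image instead of a sine function, and the modulation would be applied every servo cycle as the probe slides across the surface.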
Controlling forces delivered through haptic interfaces:
Once such forces have been computed, they must be applied to the user. Limitations of haptic device technology, however, sometimes make it impossible to apply the exact force value computed by the force-rendering algorithms. These limitations are as follows:
• Haptic interfaces can only exert forces with limited magnitude and not equally well in all directions
• Haptic devices aren’t ideal force transducers. An ideal haptic device would render zero impedance when simulating movement in free space, and any finite impedance when simulating contact with an object featuring such impedance characteristics. The friction, inertia, and backlash present in most haptic devices prevent them from meeting this ideal.
• A third issue is that haptic-rendering algorithms operate in discrete time whereas users operate in continuous time.
• Finally, haptic device position sensors have finite resolution. Consequently, attempting to determine where and when contact occurs always results in a quantization error, which can create stability problems.
All of these issues can limit a haptic application’s realism. High servo rates (or low servo rate periods) are a key issue for stable haptic interaction.
Haptic Devices
Types of Haptic devices:
There are two main types of haptic devices:
• Devices that allow users to touch and manipulate 3-dimensional virtual objects.
• Devices that allow users to "feel" textures of 2-dimensional objects.
Another distinction between haptic interface devices is their intrinsic mechanical behavior.
Impedance haptic devices simulate mechanical impedance—they read position and send force. Simpler to design and much cheaper to produce, impedance-type architectures are most common.
Admittance haptic devices simulate mechanical admittance—they read force and send position. Admittance-based devices are generally used for applications requiring high forces in a large workspace.
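The two architectures can be contrasted with a small sketch: the impedance device maps position to force, while the admittance device maps force to position by integrating the dynamics of a simulated mass. This is a toy simulation, not a device driver; the wall stiffness, virtual mass, and time step are illustrative assumptions:

```python
def impedance_step(position, k=500.0, wall_x=0.0):
    """Impedance device: read position, return force.
    Simulates a stiff virtual wall at wall_x."""
    depth = wall_x - position              # penetration past the wall
    return k * depth if depth > 0.0 else 0.0

class AdmittanceSim:
    """Admittance device: read force, integrate virtual dynamics,
    return the position to command to the device."""
    def __init__(self, mass=1.0, dt=0.001):
        self.mass, self.dt = mass, dt
        self.x = self.v = 0.0

    def step(self, applied_force):
        a = applied_force / self.mass      # F = ma for the simulated mass
        self.v += a * self.dt              # integrate acceleration
        self.x += self.v * self.dt         # integrate velocity
        return self.x
```

The asymmetry is visible in the signatures: the impedance loop is stateless and cheap (hence cheaper hardware), while the admittance loop carries the state of a simulated object between cycles.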
LOGITECH WINGMAN FORCE FEEDBACK MOUSE
It is attached to a base that replaces the mouse mat and contains the motors used to provide forces back to the user.
The interface is used to aid computer users who are blind or visually disabled, or who are tactile/kinesthetic learners, by providing a slight resistance at the edges of windows and buttons so that the user can "feel" the Graphical User Interface (GUI). This technology can also provide resistance to textures in computer images, which enables computer users to "feel" pictures such as maps and drawings.
PHANTOM:
The PHANTOM provides single-point, 3D force feedback to the user via a stylus (or thimble) attached to a moveable arm. The position of the stylus point/fingertip is tracked, and resistive force is applied to it when the device comes into 'contact' with the virtual model, providing accurate, ground-referenced force feedback. The physical working space is determined by the extent of the arm, and a number of models are available to suit different user requirements.
The PHANTOM system is controlled by three direct current (DC) motors that have sensors and encoders attached to them. The number of motors corresponds to the number of degrees of freedom a particular PHANTOM system has, although most systems produced have 3 motors.
The encoders track the user’s motion or position along the x, y and z coordinates; the motors track the forces exerted on the user along the x, y and z axes. From the motors a cable connects to an aluminum linkage, which connects to a passive gimbal that attaches to the thimble or stylus. A gimbal is a device that permits a body freedom of motion in any direction or suspends it so that it will remain level at all times.
Used in surgical simulations and remote operation of robotics in hazardous environments.
CyberGlove:
The CyberGlove can sense the position and movement of the fingers and wrist.
The basic CyberGlove system includes one CyberGlove, its instrumentation unit, a serial cable to connect to your host computer, and an executable version of the VirtualHand graphic hand model display and calibration software.
The CyberGlove has a software-programmable switch and LED on the wristband to permit the system software developer to provide the CyberGlove wearer with additional input/output capability. With the appropriate software, it can be used to interact with systems using hand gestures, and when combined with a tracking device to determine the hand's position in space, it can be used to manipulate virtual objects.
CyberGrasp:
The CyberGrasp is a full-hand force-feedback exoskeletal device which is worn over the CyberGlove. It consists of a lightweight mechanical assembly, or exoskeleton, that fits over a motion-capture glove. About 20 flexible semiconductor sensors sewn into the fabric of the glove measure hand, wrist and finger movement. The sensors send their readings to a computer that displays a virtual hand mimicking the real hand’s flexes, tilts, dips, waves and swivels.
The same program that moves the virtual hand on the screen also directs machinery that exerts palpable forces on the real hand, creating the illusion of touching and grasping. A special computer called a force control unit calculates how much the exoskeleton assembly should resist movement of the real hand in order to simulate the onscreen action. Each of five actuator motors turns a spool that rolls or unrolls a cable. The cable conveys the resulting pushes or pulls to a finger via the exoskeleton.

Applications
Medical training applications:
Such training systems use the Phantom’s force display capabilities to let medical trainees experience and learn the subtle and complex physical interactions needed to become skillful in their art. A computer-based teaching tool has been developed using haptic technology to train veterinary students to examine the bovine reproductive tract, simulating rectal palpation. The student receives touch feedback from a haptic device while palpating virtual objects. The teacher can visualize the student's actions on a screen and give training and guidance.

ABSTRACT
‘Haptics’ is a technology that adds the sense of touch to virtual environments. Users are given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics and some of the most commonly used haptic systems, such as the ‘Phantom’, ‘CyberGlove’ and ‘Novint Falcon’. Following this, a description of how sensors and actuators are used for tracking the position and movement of haptic systems is provided.
The different types of force rendering algorithms are discussed next, and the blocks involved in force rendering are explained. Then a few applications of haptic systems are taken up for discussion.

INTRODUCTION
2. a) What is ‘Haptics’?
Haptic technology refers to technology that interfaces the user with a virtual environment via the sense of touch by applying forces, vibrations, and/or motions to the user. This mechanical stimulation may be used to assist in the creation of virtual objects (objects existing only in a computer simulation), for control of such virtual objects, and to enhance the remote control of machines and devices (teleoperators). This emerging technology promises to have wide-reaching applications, as it already has in some fields. For example, haptic technology has made it possible to investigate in detail how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. These new research tools contribute to our understanding of how touch and its underlying brain functions work. Although haptic devices are capable of measuring the bulk or reactive forces applied by the user, they should not be confused with touch or tactile sensors, which measure the pressure or force exerted by the user on the interface.
The term haptic originated from the Greek word ἁπτικός (haptikos), meaning pertaining to the sense of touch, and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to “contact” or “touch”.

2. b) History of Haptics
In the early 20th century, psychophysicists introduced the word haptics to label the subfield of their studies that addressed human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was much more complex and subtle than their initial naive hopes had suggested.
In time these two communities, one that sought to understand the human hand and one that aspired to create devices with dexterity inspired by human abilities, found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks.
In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics, possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner. However, computer haptics uses a display technology through which objects can be physically palpated.

WORKING OF HAPTIC SYSTEMS
3. a) Basic system configuration.
Basically, a haptic system consists of two parts, namely the human part and the machine part. In the figure shown above, the human part (left) senses and controls the position of the hand, while the machine part (right) exerts forces on the hand to simulate contact with a virtual object. Both systems are provided with the necessary sensors, processors and actuators. In the human system, nerve receptors perform sensing, the brain performs processing and the muscles perform actuation of the motion performed by the hand; in the machine system, these functions are performed by the encoders, computer and motors respectively.

3. b) Haptic Information
Basically the haptic information provided by the system will be the combination of (i) Tactile information and (ii) Kinesthetic information.
Tactile information refers to the information acquired by sensors that are actually in contact with the skin of the human body, with particular reference to the spatial distribution of pressure, or more generally, tractions, across the contact area.
For example when we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. This is actually a sort of tactile information. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands.
Kinesthetic information refers to the information acquired through the sensors in the joints.
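As a toy illustration of these two channels, the sketch below derives a total contact force from a tactile pressure array and a fingertip position from joint angles (a planar two-link finger). All dimensions, names and constants are illustrative assumptions.

```python
import math

# Tactile channel: pressure distribution across the contact area.
TAXEL_AREA = 1e-6  # m^2 per tactile element (assumed)

def contact_force(pressure_taxels):
    """Total normal force (N) from a grid of pressure readings in Pa."""
    return sum(pressure_taxels) * TAXEL_AREA

# Kinesthetic channel: joint-angle sensors locate the fingertip.
def fingertip_position(theta1, theta2, l1=0.04, l2=0.03):
    """Planar two-link forward kinematics (angles in radians, links in m)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```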
Interaction forces are normally perceived through a combination of these two kinds of information.

3. c) Creation of Virtual environment (Virtual reality).
Virtual reality is the technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and omnidirectional treadmill. The simulated environment can be similar to the real world, for example, simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.
Virtual Reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves and miniaturization have helped popularize the notion. The most successful use of virtual reality is in computer-generated 3-D simulators. Pilots, for example, use flight simulators, which are designed just like the cockpit of an airplane or helicopter. The screen in front of the pilot creates the virtual environment, and the trainers outside the simulator command it to adopt different modes. The pilots are trained to control the plane in difficult situations and emergency landings. The simulator provides the environment; such simulators cost millions of dollars.
Virtual reality games are used in almost the same fashion. The player has to wear special gloves, headphones, goggles, a full-body suit and special sensory input devices, so that he feels he is in the real environment. The special goggles contain the monitors through which the player sees, and the environment changes according to the movements of the player. These games are very expensive.

3. d) Haptic feedback
Virtual reality (VR) applications strive to simulate real or imaginary scenes with which users can interact and perceive the effects of their actions in real time. Ideally the user interacts with the simulation via all five senses. However, today’s typical VR applications rely on a smaller subset, typically vision, hearing, and more recently, touch.
The application’s main elements are:
1) The simulation engine, responsible for computing the virtual environment’s behavior over time;
2) Visual, auditory, and haptic rendering algorithms, which compute the virtual environment’s graphic, sound, and force responses toward the user; and
3) Transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive.

HAPTIC DEVICES
A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body’s movement, such as a joystick or data glove. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.
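The two-way exchange described above can be sketched as a single read-compute-write cycle; the class and method names here are illustrative, not taken from any real haptics SDK.

```python
# A haptic interface is both input (position out of the device) and
# output (force into the device). Names and values are illustrative.

class HapticInterface:
    def __init__(self):
        self.position = 0.0   # metres; would come from the device's sensors
        self.force = 0.0      # newtons; last force commanded to the device

    def read_position(self):
        """Input path: the user's movement flows into the computer."""
        return self.position

    def display_force(self, force):
        """Output path: the computer's response is felt by the user."""
        self.force = force

# One cycle: read where the user is, compute a response, send it back.
device = HapticInterface()
device.position = -0.004                   # user presses 4 mm into a surface
penetration = -device.read_position()
device.display_force(500.0 * penetration)  # simple spring response, k = 500 N/m
```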
Haptic devices can be broadly classified into:

4. a) Virtual reality/telerobotics-based devices
i) Exoskeletons and stationary devices
ii) Gloves and wearable devices
iii) Point-sources and Specific task devices
iv) Locomotion interfaces

4. b) Feedback devices
i) Force feedback devices
ii) Tactile displays

4. a. i) Exoskeletons and stationary devices
The term exoskeleton refers to the hard outer shell that exists on many creatures. In a technical sense, the word refers to a system that covers the user or that the user has to wear. Current haptic devices that are classified as exoskeletons are large and immobile systems that the user must attach himself or herself to.

4. a. ii) Gloves and wearable devices
These are smaller, exoskeleton-like devices that are often, but not always, tethered to a large exoskeleton or other immobile device. Since the goal of building a haptic system is to immerse the user in the virtual or remote environment, it is important to provide as small a reminder of the user’s actual environment as possible. The drawback of wearable systems is that, since the weight and size of the devices are a concern, they have more limited sets of capabilities.

4. a. iii) Point sources and specific task devices
This is a class of devices that are very specialized for performing a particular task. Designing a device to perform a single type of task restricts its application to a much smaller number of functions; however, it allows the designer to focus on making the device perform its task extremely well. These task devices have two general forms: single-point-of-interface devices and specific task devices.

4. a. iv) Locomotion interfaces
An interesting application of haptic feedback is full-body force feedback in the form of locomotion interfaces. Locomotion interfaces are movement- or force-restricting devices in a confined space that simulate unrestrained mobility, such as walking and running, for virtual reality. These interfaces overcome the limitations of joysticks for maneuvering, of whole-body motion platforms, in which the user is seated and does not expend energy, and of room environments, where only short distances can be traversed.

4. b. i) Force feedback devices
Force feedback input devices are usually, but not exclusively, connected to computer systems and are designed to apply forces that simulate the sensation of weight and resistance in order to provide information to the user. As such, force feedback hardware represents a more sophisticated form of input/output device, complementing others such as keyboards, mice or trackers. Input from the user is in the form of hand or other body-segment motion, whereas feedback from the computer or other device is in the form of force or position. These devices translate digital information into physical sensations.

4. b. ii) Tactile display devices
Simulation tasks involving active exploration or delicate manipulation of a virtual environment require the addition of feedback data that presents an object’s surface geometry or texture. Such feedback is provided by tactile feedback systems, or tactile display devices. Tactile systems differ from haptic systems in the scale of the forces being generated: while haptic interfaces present the shape, weight or compliance of an object, tactile interfaces present the surface properties of an object, such as its surface texture. Tactile feedback applies sensation to the skin.

COMMONLY USED HAPTIC SYSTEMS

With the technology advances of the past few years, computer technology is making its way to the general public at an affordable price: faster CPUs, larger hard drives, better graphics cards, better multimedia systems, and better computer tools. One of the computer technologies now finding its way to the home and business PC market is haptic technology. Haptics is "one of the growing areas in human computer interaction or new types of sensory interaction with computers besides keyboards and mice". Haptic technology is a force or tactile feedback technology which allows a user to touch, feel, manipulate, create, and/or alter simulated three-dimensional objects in a virtual environment. Such an interface could be used to train physical skills for jobs requiring specialized hand-held tools, for instance those of surgeons, astronauts, and mechanics. In addition, haptics can help doctors locate a change in temperature, or a tumor in a certain part of the body, without physically being there.


ABSTRACT
Touch is a fundamental aspect of interpersonal communication. Whether a greeting handshake, an encouraging pat on the back, or a comforting hug, physical contact is a basic means through which people achieve a sense of connection, indicate intention, and express emotion. In close personal relationships, such as family and friends, touch is particularly important as a communicator of affection.
Current interpersonal communication technology, such as telephones, video conferencing systems, and email, provides mechanisms for audio-visual and text-based interaction. Communication through touch, however, has been left largely unexplored. In this paper, we describe an approach for applying haptic feedback technology to create a physical link between people separated by distance. The aim is to enrich current real-time communication by opening a channel for expression through touch.

Haptic: pertaining to the sense of touch. Haptic technology is technology that uses computer interfaces to produce the sense of touch by applying different forces. These forces can make virtual images or virtual reality seem real to the touch. The interfaces allow somebody to touch, feel, manipulate, and alter three-dimensional objects in the virtual realm.
There are three types of haptic interaction with the computer. There is force feedback, as in the Rumble Pak, joysticks and game controllers; this type of feedback uses movement to convey forces. There is positioning feedback, which is feeling where objects are relative to the body. The last type of feedback is tactile feedback, which uses force to allow feeling temperature, pressure, and other sensations. These interfaces allow computers and virtual reality to be as lifelike as possible.
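A minimal sketch of the first kind, rumble-style force feedback, assuming a made-up mapping from game collision events to vibration commands (the function name and constants are illustrative only):

```python
# Map a collision's impact speed to a rumble amplitude and duration.
# The 10 m/s normalisation and timing constants are illustrative only.

def rumble_command(impact_speed, max_amplitude=1.0):
    """Return (amplitude in [0, 1], duration in ms) for an impact."""
    amplitude = min(impact_speed / 10.0, max_amplitude)
    duration_ms = int(50 + 20 * impact_speed)  # harder hits rumble longer
    return amplitude, duration_ms
```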
Haptic technology/systems are used in:
· Medicine
· Engineering
· Entertainment
· Education
· Telerobotics
· Military training
· Assistance for disabled individuals
Medical haptic systems are used in diagnosis, such as virtual endoscopies. Haptic systems are also used in surgery, for example to train surgeons with virtual reality, and they can be used for touch-enabled microsurgery or telesurgery. Another thing haptic systems are being used for is rehabilitation: by using this touch technology, exercises can be simulated to help rehabilitate somebody with an injury.
There are many ways haptic technology can be used in engineering. Being able to build, test and design things virtually has many possibilities.
There are many haptic systems already in the entertainment world. There are joysticks, controllers, steering wheels, and other types of gaming devices that allow the player to feel the bumps in the road or other effects.
Education uses haptic systems to teach. Students get hands-on experience in the virtual world with things that otherwise wouldn’t be experienced.
Telerobotics is a big area using haptic systems. Telesurgery is when a surgeon is not present in the room and can perform the surgery from the virtual realm. There are also systems that can remotely control vehicles. Telerobotics can also be used when handling hazardous materials.
An example of a haptic system being used today is the CyberGrasp™. This is a wired glove that covers your fingers and hand, allowing a user to “reach” into the computer and work with objects. The objects being worked with provide force feedback to each finger. This is used for handling of hazardous materials, virtual reality training, computer-aided design, and medical applications.