Index Terms—Autonomous Robots, Ad-Hoc Networking, RoboCup, Small-Size League, Search and Rescue.

I. INTRODUCTION

In order for robotics to have an impact in real world applications, researchers have to overcome extensive challenges, from single robot designs all the way to multiple robot architectures. In such efforts it is common to exploit developments from various fields beyond robotics, such as artificial intelligence, biology, software engineering, human-computer interfaces, etc.

In this paper we present work resulting from a new collaboration between researchers in robotics at ITAM and researchers in networking at UCSC in the design of new communication protocols for the coordination of multiple mobile robots. This work highlights not only the interdisciplinary nature of this research but also points out challenges in the individual domains. In the case of robotics, we are extending robotic architectures originally developed for RoboCup [1] by incorporating additional processing capabilities beyond those required for the official competitions. In the networking domain, we are extending protocols originally developed for networks with uninterrupted connectivity with new capabilities that make them applicable to scenarios with frequent and long-lived connectivity interruptions.

From the robotics perspective, the Robotics and Biorobotics Laboratories at ITAM are involved in the development of biologically inspired models to test hypotheses on animal behavior and their linkage to neuroscientific studies. These models are helping the development of new adaptive architectures such as rat-inspired learning and its application to robot exploration [2]. Additionally, in the context of RoboCup, ITAM's Eagle Knights team competes in a number of soccer leagues, including Small-Size and Four-Legged, where robots are programmed and in certain cases also built by the participating teams. RoboCup also includes non-soccer competitions.
One noteworthy example is search and rescue, known as RoboCup Rescue [3].

In recent years robots have demonstrated their usefulness in supporting life-threatening human tasks. Among these, Urban Search and Rescue (USAR) [4] has been an area where robotics is starting to have an important impact [5]. For instance, robots can play a crucial role in searching for and rescuing survivors trapped under buildings collapsed due to major disasters such as earthquakes. One of the main challenges in these rescue operations is posed by the unstable nature of the collapsed structures, hard to reach spaces, lack of oxygen, and hazards resulting from fire, toxic gases, or other chemicals. In the past, specialized sensory equipment has been used to assist rescuers, yet this technology is mainly used from outside the disaster perimeter. Rescue robots, in turn, are usually remotely operated, resulting in a number of limitations:
(a) The number of robotic devices required to control a large-scale search and rescue operation is significant, requiring a large number of trained human controllers.
(b) Coordination between human-controlled, teleoperated robotic devices is hard, limiting the possibility of shared decision support systems.
(c) Poor environmental conditions, such as low visibility, make human maneuvering of robotic devices difficult.
(d) Teleoperation relies on continuous availability of robust communication channels and power sources, including the use of wirelines.

In order to get closer to survivors, scientists are currently experimenting with mobile robots of various shapes, sizes and capabilities [6]. One unavoidable challenge is that search and rescue robots must become more autonomous, interacting with human controllers only for higher-level decision making. Robots can help in the overall search and rescue operation: in addition to producing maps of how to reach a survivor's location, robots will help in assessing survivors' conditions and existing hazards.
A key consideration in carrying out these rescue missions will be the ability for robots to communicate with base stations even if far away. Ad-hoc networking will play an increasingly important role in such sparsely connected multi-robot systems.

Multi-Robot Systems: Extending RoboCup Small-Size Architecture with Local Vision and Ad-Hoc Networking

Alfredo Weitzenfeld, Senior Member, IEEE, Luis Martínez-Gómez, Juan Pablo Francois, Alejandro Levin-Pick, Katia Obraczka, Member, IEEE, Jay Boice

The i-NRG lab at UCSC is currently involved in several ad-hoc sensor network related projects. Like the Eagle Knights Small-Size RoboCup team, these projects involve the integration of custom-built hardware with ad-hoc network protocols specifically designed for the environments in which they are used, as well as for the data that is to be delivered. Experience with each of these projects is being leveraged into the Eagle Knights project. The following are descriptions of some of these projects.

The CARNIVORE system [7] (Carnivore Adaptive Research Network in Varied Remote Outdoor Environments) was born from the desire to further understand the interplay between coyotes, their predators and their ecosystem in the Santa Cruz mountains. Custom collars have been developed that contain a 3-axis accelerometer, GPS, storage space, and communication capabilities. Collared coyotes will continually sense and transmit data to static base stations deployed in the area, and the data will later be aggregated and used in analysis of their behavior. As in the Eagle Knights project, the network topology is quite sparse, resulting in a network that is rarely connected. Similar mechanisms will be used to ensure that messages are delivered in a timely fashion to the sink nodes.

Meerkats [8] is a battery-powered wide-area surveillance system incorporating both sophisticated vision algorithms and a power-management scheme to enable long network lifetime. Unlike the Eagle Knights project, the Meerkats network is static, allowing the use of more traditional ad-hoc networking. Detailed analysis of power consumption has enabled the network to be designed such that lifetime is maximized. Power monitoring enables a distributed resource manager to instruct nodes to turn their components, such as the wireless card and USB camera, on or off.

The SEA-LABS project [9] (Sensor Exploration Apparatus utilizing Low Power Aquatic Broadcasting System) has been designed to monitor remote coral reefs.
Since this project is also battery-powered, it must adhere to strict power-consumption guidelines in both sensing and transmission. A successful deployment in the Monterey Bay has provided initial data, and a full deployment in the Midway Atoll is planned for the future. Since the devices are used in such extreme environments, they must require minimal maintenance and have extremely long lifetimes. Furthermore, the harsh environment and large distances between nodes (up to 8 km) require that the networking be designed with reliability as a key consideration.

These are just a few examples of sensor networks, both static and mobile. In the case of multi-robot systems for disaster recovery and emergency response applications, robot teams collaborating in rescue or reconnaissance operations need to be deployed in arbitrarily wide areas with tortuous terrain, subject to communication impairments such as interference, noise, signal fading, etc. Thus, new extensions to robots as mobile sensor networks are required to take into account the stringent and adverse environmental conditions in search and rescue scenes. The initial goal of the existing collaboration between ITAM's Robotics Laboratory and UCSC's i-NRG is to add ad-hoc networking capabilities by extending the existing multi-robot platform.

The remainder of this paper is organized as follows: Section II describes extensions to ITAM's existing Eagle Knights RoboCup Small-Size architecture, adding local vision and ad-hoc networking capabilities; Section III discusses current work at UCSC in developing protocols for environments with episodic connectivity; Section IV presents preliminary results from an experimental testbed composed of static and mobile nodes evaluating the ad hoc networking protocols under frequent and long-lived disconnections; finally, Section V presents conclusions.

II. MULTI-ROBOT COORDINATION

This section overviews the RoboCup Small-Size league architecture and presents the extensions to the individual robots necessary to provide local vision capabilities and support for ad-hoc networking.

A. RoboCup Small-Size Robot Architecture

RoboCup competitions started 10 years ago and have become a well-known venue where coordination among multiple robots teaming in a soccer game can be evaluated. ITAM's Eagle Knights [10] have been participating since 2003 in different soccer leagues. While there have been significant improvements in the performance of RoboCup teams over the years, several aspects of the competition were defined to simplify multi-robot coordination tasks. One clear example is the Small-Size League (SSL), where global aerial cameras simplify visual processing, with control centralized in an individual computer sending commands to all robots on the field. Additionally, the limited size of the soccer arena avoids many communication problems present in larger environments. The game involves two teams of five robots, up to 18 cm in diameter each, playing on a 4 m by 5.4 m carpeted field, as shown in Figure 1.

Fig. 1. ITAM's Eagle Knights RoboCup Small-Size league system architecture. A number of computers remotely control the state of the game. The Vision System receives images from the cameras mounted on top of the field and sends information about relevant objects to the AI System, which produces remote commands to the robots in the field. A Referee Box sends game signals to both teams.

The system architecture consists of one or two remote computers sending action commands to the robots. The computers receive video signals from cameras mounted on top of the field and provide wireless signals to the five robots on the field. The main functional components of the small-size league system are shown in Figure 2: Vision System, AI System, Referee Box, and Robots.

Fig. 2. ITAM's RoboCup Small-Size league block diagram. Visual input from cameras mounted on top of the field is processed by the Vision module to provide the AI module with robot positions and orientations. The AI module sends action commands to the robots via a transceiver.

Vision System. The Vision System is the main source of input during a game. Its main task is to capture video in real time from the two cameras mounted on top of the field. The camera system needs to recognize the set of colors assigned to the objects of interest in the game, namely robots and ball, all in accordance with the SSL rules [11]. Once objects are recognized, the system identifies and computes the position of the ball together with the position and orientation of the robots in the field. Robots of one team must have a blue colored circular patch, 50 mm in diameter, on top, while the other team must have a yellow colored patch. Additional patches are used to identify robots and compute their orientation. A particularly critical challenge in the Vision System is to adapt to different light conditions by performing dynamic color calibration. Positions and orientations of objects are transmitted to the AI System. The computation cycle is around 30 frames per second. (More details can be found in [12].)

AI System. The AI or High Level Control System receives object positions and orientations, i.e. object localization, from the Vision System in order to produce robot action commands. These actions depend on strategic decisions made a priori depending on robot roles, e.g. goalkeeper, defense, and forward, and on the current state of the game, e.g. attacking or defending. Additional game state information comes from the Referee Box, e.g. regular play, free kick, etc. The AI System is composed of three main submodules: Behaviors, Collision Detection, and Motion Control, as shown in Figure 3. Final robot action decisions are converted into commands that are transmitted to the robots via a wireless link through a transceiver. Transmission is asynchronous.
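As a hedged sketch (not the actual Eagle Knights vision code), the pose computation described above can be illustrated as follows: after color segmentation, a robot's position is the centroid of its team patch and its orientation follows from the direction toward a secondary marker patch. The pixel-list representation and function names are illustrative assumptions.

```python
import math

def centroid(pixels):
    """Mean (x, y) of a list of segmented pixel coordinates."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def robot_pose(team_pixels, marker_pixels):
    """Position is the centroid of the 50 mm team patch; orientation is
    the angle from the team patch toward a secondary marker patch."""
    cx, cy = centroid(team_pixels)
    mx, my = centroid(marker_pixels)
    theta = math.atan2(my - cy, mx - cx)  # orientation in radians
    return (cx, cy), theta
```

In the real system this runs per frame at roughly 30 frames per second, after dynamic color calibration has classified pixels into the SSL colors.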

Figure 4 shows a sample set of behaviors for a robot attacker: Reach Ball, Circle Ball and Kick Ball. These behaviors are activated by external signals such as ball_near or ball_far.

Fig. 4. Attacker behaviors described as a state machine. Three behaviors are defined: Reach Ball, Circle Ball and Kick Ball. These sample behaviors, or states, are activated by external signals such as ball_near or ball_far.
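A state machine like the one in Figure 4 can be encoded as a simple transition table. The sketch below is ours, not the team's code: ball_near and ball_far are the signals named in the text, while the "aligned" trigger for kicking and the exact transition set are assumptions for illustration.

```python
# Transition table for an attacker state machine in the spirit of Fig. 4.
# Keys are (state, signal) pairs; values are the next state.
TRANSITIONS = {
    ("reach_ball", "ball_near"): "circle_ball",
    ("circle_ball", "ball_far"): "reach_ball",
    ("circle_ball", "aligned"): "kick_ball",   # "aligned" is an assumed signal
    ("kick_ball", "ball_far"): "reach_ball",
}

def step(state, signal):
    """Advance one transition; signals with no entry leave the state unchanged."""
    return TRANSITIONS.get((state, signal), state)
```

A table-driven encoding keeps behavior logic declarative, which makes it easy to port between the remote AI System and the on-board Stargate.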

Referee Box. The Referee Box communicates additional decisions (penalties, goal scored, start of the game, etc.) generated by the human referee during a game. These decisions correspond to a set of predefined commands sent to the AI system via a serial link.

Robots. The Robots execute commands transmitted by the AI system through the transceiver to generate local robot actions, e.g. move, kick, and dribble. Robots in this league are mostly omni-directional, having either three or four wheels with corresponding motors controlling movement. An additional motor controls the dribbler that keeps the golf ball close to the robot for a limited amount of time, as specified by the rules. Additionally, the robot includes a solenoid controlled by capacitors to kick the ball. Local robot control is managed by a Texas Instruments TMS320LF2407A fixed-point single chip DSP (Digital Signal Processor) optimized for digital motor control. The DSP receives remote communication from the AI System via a Radiometrix RPC-914/869-64 local transceiver operating at either 914 MHz or 869 MHz with a 64 kbit/s transmission rate, similar to the one attached to the PC. Teams alternate in radio frequency. Finally, rechargeable 9V/1600mA batteries are incorporated in the robot. The robot block diagram is shown in Figure 6. A picture of the Eagle Knights three wheeled robot used for this project is shown in Figure 7.

Fig. 6. Eagle Knights robot block diagram. A DSP receiving remote signals via a wireless transceiver controls three (or four) motors for omni-directional movement. Additionally, the DSP controls the dribbler and kicker mechanisms.

Fig. 7. Eagle Knights robot mechanical design. The omni-directional robot includes a kicker, dribbler and motion control, all processed by a local DSP receiving signals from the remote AI System computer via a transceiver.

B. Mobile Robot Architecture

A major constraint in the small-size league architecture is the global vision system, which limits mobility of the robots to the soccer field while keeping them under full camera view. By providing a local vision system, as in the Mid-Size and Four-Legged RoboCup leagues, it becomes possible to avoid this restriction. For this purpose we have extended our robot design to include a local camera located where the dribbler and kicker used to be, while adding a Crossbow Stargate [13] as shown in Figure 8. The Stargate itself is outfitted with a webcam and an 802.11 wireless card. It is a relatively powerful, small form factor single-board computer that has found applications in ubiquitous computing and wireless sensor networking. It is based on Intel's 400 MHz XScale processor, has 32 MB flash memory and 64 MB SDRAM, and provides PCMCIA and Compact Flash connectors on the main board. It also has a daughter board with Ethernet, USB and serial connectors. A Logitech QuickCam Pro 400 webcam is connected through the USB port, and communication is carried out by an Ambicom Wave2Net IEEE 802.11b compact flash wireless card. The operating system is Stargate's version 7.2, an embedded Linux system (kernel version 2.4.19).

Fig. 8. Eagle Knights modified robot with local camera and 802.11 communication capabilities. The original robot architecture is maintained, although the transceiver is replaced with a direct linkage to the Crossbow Stargate (on top), which manages wireless communication and local vision. Note that the kicker and dribbler were replaced with the camera due to space constraints.

The original communication transceiver was replaced by a direct wire connecting the main robot board with the Stargate, while the Vision System and AI System computations were moved to the local Stargate for processing, as shown in Figure 9. Since the Stargate runs a Linux operating system, porting the previous robot code written in C was not a major issue, although not all functionality was required. Of the behaviors shown in Figure 4, we programmed the Reach_Ball behavior.

Fig. 9. Extended Small-Size robot architecture. Visual input from a camera mounted on the robot itself is processed by the Vision module to provide the AI module with robot positions and orientations. The AI module sends action commands to the robot locally. Communication control is available for networking with other robots or a remote computer.

The block diagram for the robot design is shown in Figure 10. Due to size constraints we took out the kicker and dribbler to make space for the local camera. The Stargate was put on top of the robot, as previously shown.

Fig. 10. Extended Small-Size robot block diagram. The Stargate single-board computer, with an attached camera and 802.11 wireless communication, is directly wired to the DSP, which controls three (or four) motors with encoders for omni-directional movement.

III. WIRELESS AD-HOC NETWORKING

In the RoboCup Small-Size soccer league, robots are very close to each other on the field. This means that all robots are within transmission range of one another, which makes routing of messages between computer and robot, or between robots, trivial; any robot can send a message to any other robot in a single transmission. For other applications, however, as the range of robot mobility is extended, nodes may be too far apart to communicate directly, requiring messages to be routed through intermediate robots to reach their destination. In such situations, known as multi-hop ad hoc networks, nodes must cooperatively establish routes and forward messages in order to maintain communication.

In terms of ad-hoc networking protocols, the Stargate used in our system architecture ships with AODV [14], the Ad hoc On-demand Distance Vector routing protocol. AODV has been designed under the assumption that end-to-end paths are available at least most of the time. In other words, it is assumed that the network is connected most of the time and that disconnections, when they happen, are short-lived. However, in some situations, such as disaster recovery or emergency response scenarios, end-to-end connectivity cannot be guaranteed; in fact, it may turn out that the network is disconnected for most of its operational lifetime. For this reason, we have developed StAR (Steward Assisted Routing), a routing protocol for networks in which links are often unavailable due to mobility or other interference. Figure 11 shows a sample network where typical ad-hoc protocols such as AODV will fail, highlighting the need for protocols, such as StAR, that are robust to long-lived and/or frequent network disconnections. Below, we describe both AODV and StAR.

Fig. 11. An example network in which there is no route from S to D. Existing on-demand routing protocols fail to deliver messages when a route cannot be established. StAR will buffer data at the node nearest to the destination until a route is available.

A. AODV

Unlike traditional wired networks, mobile multi-hop ad hoc networks (MANETs) require a routing protocol that can respond quickly to node failures and topology changes. AODV is an example of an on-demand routing protocol. It establishes a route between a source-destination pair only when the source node has data to send to the destination. This is in contrast to proactive routing protocols commonly used in the Internet, which can afford the luxury of maintaining all necessary routes since they rarely change. Because routes can change very quickly in a MANET, the signaling overhead required to maintain all routes at all times can be prohibitively high.

AODV's route establishment phase consists of two main control messages, the RREQ (route request) and the RREP (route reply). A robot, when desiring to send a message to another robot, must send a route request for the destination. This request is broadcast to all neighbors and relayed by intermediate nodes until it reaches the destination, or a robot with a route to the destination, at which time a route reply message is unicast back to the source robot. This message sequence establishes the (temporary) route so that packets may be forwarded from source to destination. For a much more detailed description of AODV, the reader is referred to the AODV RFC [14].

The major failing point of AODV, and of other on-demand routing protocols such as DSR [15], occurs when there is no existing end-to-end path from source to destination and the route discovery phase fails. In this case, data packets are dropped, and the destination does not receive the intended messages.

B. StAR

The objective of StAR is to nominate, for each connected partition in the network, a steward for each destination.
These stewards are the robots that are next expected to have communication with the destination. For example, if there is a single moving robot that communicates with all other stationary nodes, this robot is likely to be nominated as the steward for all destinations. Messages are sent to the associated steward, who will store them until a route to the destination (or a better steward) is available.


StAR routes messages using a combination of global (network-wide) contact information and local (intra-partition) route maintenance. The topological location of active destinations in the network is propagated through periodic broadcasts, or contact exchanges, between neighbors. These broadcasts occur at a fixed interval if there are nearby nodes, and contain only those entries in the routing table that may have changed since the last broadcast to the same set of neighbors. The message includes a unique sequence number indicating the broadcast from which the information came. Initially, each node nominates itself as the local steward for each destination, and therefore does not route messages to any neighbor. As updates are received from neighbors that advertise better local stewards, routes are formed. The ranking of stewards is based on the most recently heard sequence number for a destination, or on route length if two nodes have the same destination sequence number. In a connected network (i.e., a network in which there are routes between all robots), each tree will be rooted at the destination itself, and messages routed directly to the destination.

Thus, route maintenance results in one tree per destination of interest in each partition, where each tree is rooted at the locally nominated steward for that destination. Note that it is possible (and quite likely) for a node to be the steward for more than one destination at any given time, in which case the tree for each destination will contain precisely the same nodes and links.
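The steward ranking described above (freshest destination sequence number, with route length as the tie-breaker) can be sketched as follows. This is our illustration, not the StAR implementation; the tuple layout and function names are assumptions.

```python
# Each routing-table entry is (seq_num, hop_count) for one destination.
def better(candidate, current):
    """A candidate steward route wins with a fresher sequence number,
    or with fewer hops when sequence numbers are equal."""
    if candidate[0] != current[0]:
        return candidate[0] > current[0]
    return candidate[1] < current[1]

def merge_update(table, dest, advertised):
    """Adopt a neighbor's advertised route toward a steward if it ranks
    higher than our current entry. Initially a node has no entry, i.e.
    it is its own steward, so any advertisement is adopted."""
    if dest not in table or better(advertised, table[dest]):
        table[dest] = advertised
        return True
    return False
```

Applied to each entry of a received contact exchange, this rule converges to one forwarding tree per destination within a partition.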

IV. EXPERIMENTS AND RESULTS

In addition to outfitting each robot with a local camera and ad hoc networking capabilities, we have loaded them with a simplified surveillance application. Each robot is defined as either a source (sensor) node or a destination (sink) node. It is the responsibility of source nodes to acquire images of their surroundings through the webcam at 5-second intervals and transmit them to a designated sink. Because there may be no direct route to the sink at the time an image is taken, StAR ensures that the image is buffered at some intermediate node until a route toward the destination exists. We are currently experimenting with a wide range of network topologies using StAR on the extended Eagle Knights robot architecture for comparison with standard on-demand routing protocols.

In what follows, we define three experiments using four fully autonomous small-size robots in order to examine protocol performance under varied scenarios. In each experiment described below, we modify the mobility of the sensor and sink nodes to provide more or less connectivity in the network. All experiments last five minutes, during which time each sensor node captures a 230 KB image every five seconds, resulting in a total of 30 images per sensor. We measure the number of images that are successfully delivered to the sink to determine the effectiveness of the routing protocol.

A. Experiment 1: Static Network

We first examine behavior in a network with four static nodes, two of which are sensors. The distances and obstacles between nodes differ, as shown in Figure 12, which leads to intermittent connectivity between some node pairs. Most notably, the connectivity between the sink (node 7) and one of the sensors (node 3) is often unavailable due to the many walls between them, which requires images to be routed through node 1 at some points.
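A minimal sketch of the source-node logic in the surveillance application described above: each cycle captures an image and hands it to the routing layer, which keeps images buffered while no route toward the sink exists. The function parameters stand in for the real camera and StAR interfaces, which are not shown here.

```python
def sense_and_forward(capture_fn, route_available_fn, send_fn, buffer):
    """One 5-second cycle of the surveillance application: always
    capture, then drain buffered images oldest-first for as long as a
    route toward the sink is available."""
    buffer.append(capture_fn())
    delivered = []
    while buffer and route_available_fn():
        img = buffer.pop(0)
        send_fn(img)
        delivered.append(img)
    return delivered
```

With AODV in place of this buffering layer, an image captured while no route exists would simply be dropped, which is the difference the experiments below measure.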

Table I compares the delivery rates of AODV and StAR. Both protocols deliver more than 75% of the captured images; however, StAR is able to deliver all 60 images, since it handles the intermittent connectivity between nodes 3 and 7 by buffering the images at the source until a route can be established, either directly or through intermediate node 1.

TABLE I
PERFORMANCE OF AODV AND STAR IN TOPOLOGY 1

Protocol   Image Deliveries   Ratio Delivered
AODV       46                 76.67%
StAR       60                 100.00%

B. Experiment 2: Static Sensors with Mobile Intermediate Node

In this experiment, all sensor nodes remain static, while an intermediate relay node moves to enable network connectivity. As shown in Figure 13, two of the sensor nodes, 1 and 3, sometimes have connectivity with the sink, while the third sensor node, 4, never has direct connectivity. Mobile node 2 enables connectivity between sensor node 4 and the sink, allowing images to be transmitted over a three-hop route (4 – 2 – 1 – 7).

Table II shows the performance of the two routing protocols in experiment 2. AODV does not take advantage of the added connectivity provided by mobile node 2, and therefore fails to deliver any images from sensor node 4. Using StAR, however, the mobile node carries the images until a route can be established through node 1 to the sink. StAR is therefore able to successfully deliver all 90 images. As in the previous experiment, the poor connectivity between the sink and sensor node 3 makes it difficult for AODV to deliver images because of its inability to buffer the images until a route can be established.

TABLE II
PERFORMANCE OF AODV AND STAR IN TOPOLOGY 2

Protocol   Image Deliveries   Ratio Delivered
AODV       48                 51.11%
StAR       90                 100.00%

C. Experiment 3: Mobile Sensors with Static Intermediate Node

This experiment is representative of a situation where mobile sensor nodes are deployed to gather information before relaying it to static sink nodes. In this topology, shown in Figure 14, two mobile nodes with attached cameras had limited connectivity to static relay nodes. The static nodes all had intermittent connectivity due to obstacles and distance. The mobile nodes ranged at a distance from the sink, never coming into direct contact with it.

Again, as shown in Table III, StAR shows a large improvement over the standard AODV routing protocol. Because the source sensor nodes are able to buffer images until a relay node is available, and that relay node can in turn buffer the images until a direct path to the destination is available, the protocol delivers nearly every captured image.

Another discovery worth mentioning is that in this type of experiment, the image transmissions, although complete in terms of the number of images received, in some cases did not get the entire image across. Most probably this is because, if a mobile sensor node goes out of range in the middle of a transmission, only part of the picture arrives, making it impossible to view at the sink. This problem would likely be dealt with at the application layer.
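One plausible application-layer remedy (our sketch, not something implemented in this work) is to frame each image with its length and a checksum, so the sink can detect and discard truncated or corrupted transfers rather than storing unviewable partial images.

```python
import struct
import zlib

def frame(image_bytes):
    """Prefix the image with its length and CRC32, in network byte order."""
    return struct.pack("!II", len(image_bytes), zlib.crc32(image_bytes)) + image_bytes

def unframe(data):
    """Return the image if it arrived complete and intact, else None."""
    if len(data) < 8:
        return None
    length, crc = struct.unpack("!II", data[:8])
    body = data[8:]
    if len(body) != length or zlib.crc32(body) != crc:
        return None  # truncated mid-transfer or corrupted
    return body
```

A sink detecting `None` could then re-request the image, which fits naturally with StAR's buffer-until-connected model.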

V. CONCLUSIONS

We presented preliminary results from collaborative research between the robotics laboratory at ITAM and the internetworking research group at UCSC in incorporating vision-based sensing and ad-hoc networking capabilities into small autonomous mobile robots. The robots used were developed at ITAM in the context of the Eagle Knights RoboCup Small-Size league competitions. These robots are now starting to be used in search and rescue related applications, where extensions to their architecture are necessary in order to have them operate outside the limited soccer field. The main hardware modifications have involved the inclusion of a Crossbow Stargate single-board computer connected to a local web camera and an 802.11 communications device. In terms of software, algorithms previously designed for remote execution have been ported to the Stargate for local processing. Additionally, we have ported ad-hoc communication protocols

developed by the networking group at UCSC to operate on the Stargates.

As proof of concept, we carried out a number of experiments to showcase and evaluate the communication capabilities of the resulting robotic system. We have experimented with various static and mobile multi-node configurations to test how effectively sensor nodes can deliver images to a sink. We show that the proposed routing protocol is quite effective in handling disruptions due to both node mobility and poor link quality.

Our long-term goal in this collaborative effort is to deploy multiple robots in real world applications, such as search and rescue, where advanced communication capabilities are required. Our current work in this direction is to extend the capabilities of both the robots and the networking by adding more autonomous networking-related control to the robots, enabling them to make communication-related decisions during network failures, for example, by searching for locations where communication can be reestablished.

It should be noted that we have chosen to extend the Small-Size league architecture since the robots were built by our group and can easily be modified and extended with other devices if so desired, such as having two cameras, etc. Other robotic platforms were considered as well, such as the already discontinued Sony AIBO incorporating ad-hoc communication services. From evaluations previously done at our robotics lab, the Small-Size robot used in this project has at least twice the speed of the Sony AIBO, while our latest Small-Size generation has more than four times the AIBO speed. Current plans involve using our latest small-size robot models. Finally, this project is not limited to ground robots but also extends to unmanned aerial vehicles (UAVs) in developing hybrid ad-hoc networks.

ACKNOWLEDGMENT

This work has been supported in part by UC-MEXUS CONACYT, CONACYT grant #42440, LAFMI, and "Asociación Mexicana de Cultura" in Mexico.

REFERENCES