By means of these details and any programming language, it is possible to write a client and perform the actions supported by the interface. Another interesting scenario consists in providing a web portal with the capability to launch and manage VMs. Luckily, writing a client is not mandatory: users can rely on existing, ready-to-use tools. The OpenNebula team prepared a Ruby client implementation, the so-called econe tools. In principle, any software package supporting the given API version and signature algorithm, such as the euca2ools or the Firefox extension Hybridfox, should be able to interact with OpenNebula's EC2 access point. From now on, we will address both the econe tools and the euca2ools. Please note that they provide the same functionality, so only one of them is needed: just use what is available for your Linux distribution. Most of the following sections are split to address each software package more clearly.

Activation

Every user with access to the ONE web GUI (Sunstone) can start using the EC2 interface by means of any supported client, without additional requests to the LRZ help desk. However, the EC2 interface does not support LDAP authentication, i.e., it is not possible to use the password valid for Sunstone. A second password has to be set up from within the web interface, specifically for the interaction via EC2 commands. The procedure to enter the new credential is the following:

on the top bar, click on the username (red square) and then on Settings (first item of the drop-down menu, marked in green), a new window will open;

in the first tab of the Configuration window (labelled as Info, in green), click on the blue button Update password;

type a new password twice (the old one is not required) that will be used exclusively for the EC2 interface. Please do not use the same password as for the web GUI, since it has to be specified in clear text elsewhere. When done, click on the green Change button.

As a result, each user ends up with two passwords: the LDAP password for Sunstone, and one for the EC2 interface (not valid for Sunstone), which can be changed via Sunstone according to the procedure stated above.

Installation

econe tools

The econe tools are available for Linux from the official OpenNebula repositories. Please refer to the official ONE installation page to set up the package repository, and check carefully that the desired Linux distribution and the version of your OS match. This task mainly consists of:

downloading the public key used to sign the packages in the repository

importing and accepting the key

installing the packages via the package manager (apt or yum)

The econe tools are available for Debian, Ubuntu, CentOS and Red Hat. Please refer to the sections titled Installing on CentOS/RHEL or Installing on Debian/Ubuntu, as applicable. If your distribution or version is not supported, then the best solution is to switch to the euca2ools.

The package to install is opennebula-sunstone: i.e., apt-get install opennebula-sunstone on Debian/Ubuntu or yum install opennebula-sunstone on CentOS/RedHat. This does not mean that it is necessary to set up and/or run Sunstone on the local machine; installing this package simply ensures that all dependencies are met. At the end of the installation procedure please also run /usr/share/one/install_gems to add the latest Ruby libraries (otherwise the econe commands will fail).

At the end, the commands starting with econe- should be available in your path and ready to use after the setup.

euca2ools

The installation is straightforward, since the software is packaged for many Linux distributions and is usually already available in the default repository. Just use your package manager to look for euca2ools, for example apt-cache search euca2ools, yum search euca2ools or zypper search euca2ools. If the operation does not return any result, try to look for the provided files rather than the package name, for example by means of apt-file search euca2ools (on Debian and derivatives) or the whatprovides option of yum/rpm, if available. Once located, install the tools simply using the install option of the package manager (apt-get install euca2ools, yum install euca2ools, zypper install euca2ools, ...).

Once done, some commands starting with euca- will be available in your path. Please jump to the setup section to start using them.

Setup

This section is about setting up the authentication information needed to interact with the EC2 interface, including the end point's URL. Everything can be passed to the commands via the proper switches, but this makes their usage quite verbose. Before going into the details, some terminology:

EC2_URL: the URL of the access point, as already stated https://www.cloud.mwn.de:22;

Access key: it's the username;

Secret key: it's the password (in clear text) or its SHA1 hash, depending on the software package in use.

econe tools

The end point's URL should be contained in the environment variable EC2_URL; so, if using bash, make sure that the command export EC2_URL="https://www.cloud.mwn.de:22" is executed whenever a terminal is opened (i.e., put it in $HOME/.bashrc). The tcsh equivalent is setenv EC2_URL "https://www.cloud.mwn.de:22". The second step is the setup of the authentication information: please create the file $HOME/.one/one_auth and fill it with <username>:<password>, where password is the clear text password specified in Sunstone earlier on. Remember: do not specify here the password used to access Sunstone.
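The two steps above can be sketched as follows; the username and password are placeholders, and a scratch directory stands in for $HOME so the snippet can be run without touching a real configuration:

```shell
# Sketch of the econe tools setup. myuser:myEC2password are placeholders;
# a scratch directory stands in for $HOME.
HOME_DIR=$(mktemp -d)

# 1) End point URL, normally exported from $HOME/.bashrc
echo 'export EC2_URL="https://www.cloud.mwn.de:22"' >> "$HOME_DIR/.bashrc"

# 2) Credentials file in the <username>:<password> format (the clear text
#    EC2 password set in Sunstone, NOT the Sunstone/LDAP password)
mkdir -p "$HOME_DIR/.one"
echo 'myuser:myEC2password' > "$HOME_DIR/.one/one_auth"
```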

euca2ools

All the needed information should be enclosed in the file $HOME/.eucarc. It is parsed by the euca2ools commands when issued. The content is the following:

where the EC2 password is the password entered in Sunstone according to the procedure explained before. The password should be hashed by means of the following command: echo -n '<the EC2 password>' | sha1sum. Please beware that:

the EC2 password is the clear text password setup in Sunstone, not the password used to access Sunstone;

the password should be surrounded by single quotes (') so that eventual special characters are not interpreted;

the -n switch of the echo command is fundamental: it prevents a newline character from being appended to the string (the password) output by echo.

The hashed password is the 40-character alphanumeric string given by the above command. Paste it into eucarc as the secret key.
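For example, hashing a placeholder password works as follows; only the resulting 40-character hexadecimal digest goes into eucarc:

```shell
# Hash a placeholder EC2 password for use as the euca2ools secret key.
# The single quotes protect special characters; -n suppresses the newline.
HASH=$(echo -n 'myEC2password' | sha1sum | awk '{print $1}')
echo "$HASH"   # a 40-character hexadecimal string
```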

The format of the configuration file of recent (i.e., > 3.1.x) versions of euca2ools has changed, as well as its location. The service credentials and URL are now contained in $HOME/.euca/config.ini, though the file name is not mandatory, as long as the extension is preserved. The content is the following:

First of all, the structure is made up of named sections. On top there is the global section, which points at the default region, opennebula, described just below as region opennebula (the region name opennebula is not mandatory; it can be changed to any other alphanumerical string without spaces). Here we have to define the endpoint of the EC2 server, as ec2-url, and the user section (keyword user) containing the credentials to be used. The parameter key-id refers to the username in the LRZ Compute Cloud, while secret-key is the hashed password previously set. The user section has been named after the username (see [user <the username>] and user = <the username>), but this is just a tag. Of course, the user section's title and the content of the user keyword in the region section have to correspond, but the value itself does not matter, as long as it is purely alphanumerical and without spaces. The actual username, however, must be referenced by key-id.
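A minimal config.ini consistent with this description could look like the following sketch; all values (username, region name, hashed password) are placeholders, and the file is written to a scratch directory instead of $HOME/.euca:

```shell
# Sketch of $HOME/.euca/config.ini for euca2ools >= 3.1.x.
# All values are placeholders; a scratch directory stands in for $HOME/.euca.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/config.ini" <<'EOF'
[global]
default-region = opennebula

[region opennebula]
ec2-url = https://www.cloud.mwn.de:22
user = myuser

[user myuser]
key-id = myuser
secret-key = 0123456789abcdef0123456789abcdef01234567
EOF
```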

Template setup

The EC2 interface has no knowledge of templates; nonetheless, it is necessary to address them in order to describe the VM's configuration to ONE. EC2 only knows about the instance type, so this section is about mapping templates to instance types. Please note that this is valid for both the econe tools and the euca2ools.

A template for a VM instance launched via the EC2 interface does not include the image with the OS of the VM. This is specified when instantiating the VM (see the following section). On the other hand, it should feature:

all the volatile disks and images (except the boot disk with the OS of the VM) needed. Please refer to the Storage section of the template's description. It is good practice to specify for each disk the Target, i.e., the position of the disk on the channel (IDE, SCSI, virtio). Of course the position for the boot disk (hda, sda, vda) should be left available for the image that will be added at the deployment time;

the needed NIC(s). See the Network section of the template's description. Please do not mix this with the Elastic IP feature detailed later;

the VNC support, in the Input/Output section. Even if not needed (and accessible only via Sunstone), it is a useful fallback solution in case of problems, so it should always be present;

any SSH keys for the root user in the Context section. These keys will be added to the image by default. There is also the possibility to add SSH keys at launch time (see below): this feature needs the option Add SSH contextualization to be checked, otherwise the injection mechanism will not be triggered. Please check this option if SSH keys have to be injected via the EC2 interface call, even if the text field Public Key is left empty (no SSH keys added by default). Of course, the VM image should be prepared to put the SSH keys in place from the contextualisation disk by means of the ONE contextualisation packages or cloud-init, as explained here;

The final step to prepare the template for the usage via EC2 is to actually insert the instance type information, tagged as EC2_INSTANCE_TYPE. Update the template section labelled as Other (the tab highlighted in green in the following picture). The text should be entered as a Custom Tag. As shown in the red square of the picture, the tag's name should be EC2_INSTANCE_TYPE while its value should be picked from the list above. Do not forget to click on the Add button (in yellow) so that the pair is entered and it appears at the bottom.

Regarding the management of templates for EC2, there are a couple of issues.

First of all, if the template is updated again after adding the EC2_INSTANCE_TYPE info, the tag disappears. If modifying the template, please check that the instance type is still referenced. The bug will be corrected in future updates.

The second issue is tag management. Within a user's scope, each instance type should be unique, i.e., defined only once, and identify a single template. The user should avoid assigning the same tag to multiple templates since, as said, the instance type identifies a template and the relation should be unambiguous. In particular, special care should be given to templates that are available to the whole group, since they are visible to all members. It should be clear to everybody that a certain instance type is linked to a group-wide template, if this has been agreed. Probably the best strategy is to couple a given instance type to a private template (i.e., no permissions for the group or for users outside the group) when using the EC2 interface.

In order to avoid problems, when the EC2 interface receives an instantiation request for a certain instance type, it applies the following criteria:

it returns the template owned by the user (i.e., scope = user) whose EC2_INSTANCE_TYPE matches the requested instance type. If the template is not unique, the most recent one (i.e., the one with the highest numerical ID) is chosen;

if the previous match failed, the EC2 interface enlarges the scope and matches the instance type against the templates not belonging to the user but accessible to his/her group. Again, if the resolution is not unique, the most recent template (i.e., highest numerical ID) is picked;

if still no matches are found, the instantiation fails, since the instance type has not been found. The public templates are never taken into account in this match-making mechanism.

One final remark: the instance types are just tags; there is no relation to budgeting. The group's budget will be charged with the number of physical CPUs specified in the addressed template. For example, the instance type m1.small could point to a template envisaging more CPUs than the one linked to m1.large. Users are totally free in assigning the instance types; in the end, they are just a naming convention enforced by the libraries used to write the clients, such as the econe tools or the euca2ools.

SSH key management

The EC2 interface makes it possible to specify an SSH key to be injected into the image during the instantiation process. Here we discuss how to create, list and remove an SSH key pair for this purpose, not how to specify it at instantiation time (there is a dedicated section for that). Please note that:

this key pair does not replace the SSH key(s) possibly defined in the Context section of the template to be used (and addressed by the instance type). Both keys, the one in the template and the EC2 pair, will be injected into the VM;

as already explained, the EC2 key injection will work only if the VM template to be used has the Add SSH contextualization option checked in the Context tab, even if the Public key text area is empty. In other words, the following case is perfectly legit:

the image to be instantiated has to be capable of reading the SSH key(s) from the contextualisation disk and put them in place, by means of the ONE contextualisation packages or cloud-init, as explained here.

It is easy to recognise the SSH private key that has to be copied to a file in order to be used later together with the public part that will be injected in the VM. Please note that the private key file needs a restrictive permission mask, such as 600, i.e., not world readable.
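For instance (the file name is arbitrary):

```shell
# Store the private key with a restrictive permission mask (600):
# readable and writable by the owner only, not world readable.
KEY_FILE=$(mktemp)            # e.g. $HOME/.ssh/ec2_key in real use
# ... paste the private key into "$KEY_FILE" here ...
chmod 600 "$KEY_FILE"
stat -c '%a' "$KEY_FILE"      # prints 600
```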


Images management

The operations allowed by the EC2 interface on disk images are limited to uploading/registering new OS disks and listing what is available. For more complex actions (datablock image management, changing permissions, ...), the usage of the web GUI is mandatory.

The creation of a new OS disk image is split in two parts:

upload, i.e., the transfer of the image from the client's platform to the LRZ Compute Cloud Frontend (please be aware of the possible issues and consider moving the image to the frontend's scratch folder and then add it to the datastore via Sunstone);

registration, i.e., making the image known to the EC2 interface so that it is exposed also via this mechanism.

Note: the images already in the datastore are not exposed via the EC2 interface by default; they have to be registered first (see the later sections, according to the tool of choice). Of course, existing images do not need to be uploaded again.

Hint: it is also possible to unregister an image, so that it no longer shows up in the list fetched by the EC2 interface. In Sunstone, open the image by selecting it among those available, then remove the EC2_AMI property as shown in the picture, clicking on the trash icon bordered in green. Please do not click on the red trash icon in the top right corner: this would delete the image!

econe tools

The first operation available consists in showing the available images:

The fields are quite self-explanatory. Location is simply a hash, and Visibility takes into account whether the image is available only to the user and his/her group (private) or also to users outside the group (public). This behaviour can be adjusted in Sunstone by working on the permissions.

The upload of an image through the EC2 interface can be achieved by simply typing econe-upload <local path to the OS disk image>, e.g.

econe-upload $HOME/myimage.qcow2
Success: ImageId ami-00000100

The EC2 identifier for the OS disk is returned. At this point the image is not yet available; the registration is necessary: econe-register <the EC2 identifier of the OS disk image>, for example

econe-register ami-00000100
Success: ImageId ami-00000100

In case the image is already present in the ONE datastore, then only the command econe-register <the numerical ID of the image assigned by ONE> is needed, where the numerical ID is the first column of the image list in Sunstone (see this picture). Following our latest example:

euca2ools

We can recognise also here the unique name, the hash, the owner, the status and the scope (see the explanation in the econe tools section) of the image. The last two columns are specific to the euca2ools and represent, respectively, the architecture and the image type. The architecture is set by default and can be safely ignored; ONE puts this information in the template rather than into the image. The image type is always machine, meaning that a bootable OS image is contained.

Unfortunately the euca2ools do not have specific commands to upload an image to ONE and register it. These actions have to be performed manually in Sunstone.

For the upload of an image, please refer to this section of the cloud manual on how to import an OS disk image.

The registration of the image within the EC2 interface consists in adding the attribute EC2_AMI and setting it to YES. In order to do that, open the image from within Sunstone (selecting it from the list that opens when clicking on Virtual Resources and then Images in the menu on the left). As shown in the next picture, add the attribute and the value in the free row and then click on the Add button (in green) so that the new entry is listed above (highlighted by the red square). From now on, the image will appear in the list retrieved via the euca2ools.

As explained for the econe tools, it is also possible to unregister the image by simply removing the attribute. Please note that it is not necessary to remove the image from the datastore.

Instances management

This section is dedicated to running VMs (instances): how to list, start, stop, resume and terminate them. The list of possible statuses has been simplified, being just a subset of what can be seen in Sunstone; it only includes the following:

pending: the scheduler is trying to find resources for the VM (if it persists in this state for some minutes, the cloud could be full or not enough resources are available) or the necessary files are being copied. This is a mix of the pending and prolog statuses reported by Sunstone;

booting: the VM has been dispatched to the worker node, where the hypervisor is taking care of the deployment;

running: the hypervisor has launched the VM and it is available to the user. Please note that in this phase the guest OS may still be booting;

shutting-down: this is a transient state while the VM is being stopped or terminated;

stopped: the VM has been undeployed;

terminated: the VM has been removed from ONE, it does not exist anymore.

econe tools

Once again, we start with the basic command to show the current instances, running or stopped:

It is easy to recognise some basic information, especially the Instance ID in the first column, used later to address the VM when issuing other commands via the EC2 interface.

The command to instantiate a new VM is econe-run-instances, followed by the mandatory image identifier as returned by econe-describe-images. Here's an example with the available switches (most of them are optional, except for the instance type, -t):

-t (mandatory): instance type, in this case m1.small. Please review the available values here and this section to learn how to add this info to the VM template;

-k: the SSH public key to be injected through the contextualization mechanism into the VM's root account. Please refer to this paragraph for all the details and requirements needed by on-the-fly SSH key injection. The value key1 is the tag assigned to the SSH key pair at creation time. In order to be able to log in without a password, the SSH private key returned when the key pair was created has to be available (i.e., saved in a file).

-n: the number of VMs that should be instantiated, using exactly the same configuration, in this case 1.

The output is very similar to that of the command used for listing; the status of the VM is of course pending while the IP has not been assigned yet.

At any time it is possible to reboot the instance by means of econe-reboot-instances, followed by the instance identifier, for instance econe-reboot-instances i-00006772. It is possible to specify multiple instance identifiers by simply separating them with a comma (,), without any additional space. Please note that the operation succeeds only if the VM can correctly interpret the ACPI signal sent by ONE; otherwise, please use Sunstone to perform the action.
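As an illustration, the comma-separated argument can be built from a list of identifiers (placeholders below) like this:

```shell
# Build the comma-separated, space-free argument expected by
# econe-reboot-instances. The instance identifiers are placeholders.
INSTANCES="i-00006772 i-00006773 i-00006774"
IDS=$(echo "$INSTANCES" | tr ' ' ',')
echo "$IDS"   # prints i-00006772,i-00006773,i-00006774
# the actual call would then be: econe-reboot-instances "$IDS"
```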

When a VM has to be paused for later use, or to save resources, it can be undeployed: econe-stop-instances <instance id>. Multiple identifiers can be specified, comma separated.

Again, the operation is successful if the VM can correctly interpret the ACPI signal sent by ONE. Otherwise, please use Sunstone to perform the action. When listing, the VM will appear in the stopped state:

To remove a VM permanently, use econe-terminate-instances <instance id>. Again, the operation succeeds only if the VM can correctly interpret the ACPI signal sent by ONE; otherwise, please use Sunstone to perform the action. When listing, the VM will appear in the terminated state for some time:

euca2ools

The output of the euca2ools is richer than the ONE counterpart. Some fields are unused, such as the kernel (eki) or the RAM disk (eri) images, since ONE bundles everything in the disk image. However, it is easy to recognise some basic information: the Instance ID in the second column (used later to address the VM when issuing other commands via the EC2 interface), the IP (A.B.C.9), the hostname (vm-A-B-C-9.cloud.mwn.de), the status, the image that has been used (i.e., ami-00000003) and the tag of the injected SSH key pair (key1).

The command to instantiate a new VM is euca-run-instances, followed by the mandatory image identifier as returned by euca-describe-images. Here's an example with the available switches (most of them are optional, except for the instance type, -t):

-t (mandatory): instance type, in this case m1.small. Please review the available values here and this section to learn how to add this info to the VM template;

-k: the SSH public key to be injected through the contextualization mechanism into the VM's root account. Please refer to this paragraph for all the details and requirements needed by on-the-fly SSH key injection. The value key1 is the tag assigned to the SSH key pair at creation time. In order to be able to log in without a password, the SSH private key returned when the key pair was created has to be available (i.e., saved in a file).

-n: the number of VMs that should be instantiated, using exactly the same configuration, in this case 1.

The output is very similar to that of the command used for listing; the status of the VM is of course pending while the IP has not been assigned yet.

At any time it is possible to reboot the instance by means of euca-reboot-instances, followed by the instance identifier, for instance euca-reboot-instances i-00006772. Please note that the operation succeeds only if the VM can correctly interpret the ACPI signal sent by ONE; otherwise, please use Sunstone to perform the action.

When a VM has to be paused for later use, or to save resources, it can be undeployed: euca-stop-instances <instance id>.

Again, the operation is successful if the VM can correctly interpret the ACPI signal sent by ONE. Otherwise, please use Sunstone to perform the action. When listing, the VM will appear in the stopped state:

To remove a VM permanently, use euca-terminate-instances <instance id>. Again, the operation succeeds only if the VM can correctly interpret the ACPI signal sent by ONE; otherwise, please use Sunstone to perform the action. When listing, the VM will appear in the terminated state for some time:

Volumes management

In this context, a volume is a persistent datablock disk image, meaning that it can be mounted by one VM at a time and is updated on the fly (i.e., written directly on the file in the datastore, without moving it back and forth to the working space).

Note: creating and formatting (default: ext3) an image is an expensive operation. If the size is more than 10 or 20 gigabytes, it is far better to create an empty (i.e., without a file system) disk and then format it from inside the VM.
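As a hedged illustration, such an empty disk can be prepared locally as a sparse file (the path and size below are examples); the file system is then created later from inside the VM:

```shell
# Create a 20 GiB empty disk image as a sparse file: instantaneous and
# without a file system. Formatting (e.g. mkfs.ext3 /dev/vdc) is done
# later from inside the VM, not here.
IMG=$(mktemp -u /tmp/datablock-XXXXXX)
truncate -s 20G "$IMG"
stat -c '%s' "$IMG"   # apparent size: 21474836480 bytes
```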

econe tools

The list of available volumes (and their status) is returned by the command econe-describe-volumes:

It is worth noticing that the first column is a unique identifier and the size is expressed in GB.

Attaching a volume to an instance requires some parameters, such as the instance's identifier (-i, as returned by econe-describe-instances), the target device on the VM (-d switch, which requires access to the VM via SSH and/or VNC through Sunstone to actually find a free device) and the volume's identifier: econe-attach-volume -i <instance ID> -d <target device> <volume ID>. For example, in order to attach the volume vol-00000191 to instance i-00006772 on channel c of the virtio bus (/dev/vdc), the command to type is:
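Based on the syntax just described, the call would presumably look as follows (shown via echo rather than executed, since the econe tools are not assumed to be installed here; the exact device notation, vdc vs. /dev/vdc, may depend on the setup):

```shell
# Attach volume vol-00000191 to instance i-00006772 as /dev/vdc.
# The command is only echoed here; run it directly with the econe tools.
CMD="econe-attach-volume -i i-00006772 -d /dev/vdc vol-00000191"
echo "$CMD"
```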

euca2ools

The list of available volumes (and their status) is returned by the command euca-describe-volumes. It is worth noticing that the first column is a unique identifier and the size is expressed in GB.

Attaching a volume to an instance requires some parameters, such as the instance's identifier (-i, as returned by euca-describe-instances), the target device on the VM (-d switch, which requires access to the VM via SSH and/or VNC through Sunstone to actually find a free device) and the volume's identifier: euca-attach-volume -i <instance ID> -d <target device> <volume ID>. For example, in order to attach the volume vol-00000191 to instance i-00006772 on channel c of the virtio bus (/dev/vdc), the command to type is:

The -z option specifies the zone and it can be always set to opennebula.

The counterpart to delete a volume is euca-delete-volume <volume ID>:

euca-delete-volume vol-00000191
VOLUME vol-00000191

Elastic IPs

In the Amazon Web Services terminology, an Elastic IP (EIP for short) is a public routable address that is associated to the VM's private one via NAT. The clear advantage is that the EIP can be moved seamlessly from one VM to another without any reconfiguration.

The LRZ compute cloud cannot offer exactly the same implementation; what has been implemented is the possibility to reserve an IP and associate it to any VM. The association is nothing but an attach-NIC operation, using an IP that has been reserved ad hoc for this purpose. Please beware that:

this is a premium feature and it is not available by default, i.e., a user cannot reserve an IP that has not been assigned to him/her beforehand. If needed, please contact us, it being understood that reserving an IP is costly, since it cannot be reassigned to other VMs even when not in use (this is also the reason why all network management goes through a DHCP server);

the target VM has to be instructed to get the network configuration for the network card that will be associated to the EIP from the DHCP as well, even if the address is fixed;

the EIP network card should not be mentioned in the template: this is the whole point of the EIP concept, the NIC is added on the fly;

the most common use case is that of a VM with a single NIC (say eth0 on Linux) and a private IP, to which the EIP (a second NIC) has to be added. The behaviour of the VM after the attachment depends on the guest OS; usually it is necessary to add the new interface (i.e., eth1) to the network configuration file. Please refer to this FAQ for various Linux distributions. The entry also deals with the setup of the routing: since there are now two network cards and two gateways, it is necessary to define a priority (i.e., assign a metric). The gateway of the public IP (EIP) should have priority, while the private address should be used only to contact the other VMs of the same group. Be aware that when switching the gateway there could be a loss of connectivity (the SSH session could be broken) because of the change in routing. It is better to perform this kind of operation via VNC (after changing the root password, to gain access even without a network connection);

an EIP is not the only way to use a public IP in EC2. The user is free to add to the template a NIC associated to the Internet_access virtual network, getting a public IP as well. The only difference is that, unlike an EIP, such an address may change when the VM is stopped and then resumed.

econe tools

The bucket of available EIPs (if defined) can be shown by typing

econe-allocate-address
A.B.C.4
A.B.C.5

where A.B.C.* are free IPv4 addresses reserved specifically for the user and not yet assigned to his/her running VMs. For the association (i.e., NIC attachment) to a running instance, the command econe-associate-address <EIP> <instance ID> is used, where the EIP is picked from the output of the above command and the instance ID comes from econe-describe-instances:

the EIP A.B.C.4 is clearly listed, together with the private address a.b.c.9 already present in the template of the VM. As already explained above, the user has to take care that the guest OS recognises the new network card and that the routing is consistent with the goal. Of course, the EIP bucket is smaller now:

econe-allocate-address
A.B.C.5

Finally, the EIP can be removed from the instance it is associated to and returned to the bucket by means of econe-disassociate-address <EIP>:

econe-disassociate-address A.B.C.4

The routing should remain consistent (the private address still has its default gateway; it only had a lower priority). It may be convenient to remove the network card that was associated to the EIP from the list of interfaces of the VM, since after a reboot the guest OS could wait for an address to be assigned to it, even though it no longer exists. Usually this does not lead to a failure; it only lengthens the boot process.

euca2ools

The command euca-allocate-address is unfortunately not compatible with the message returned by the EC2 interface, so there is no automatic way to see the available EIPs. However, the reservations are made per user, so a quick inspection of the output of euca-describe-instances gives an idea of which EIPs are already associated. Assuming that the free EIP is A.B.C.4, for the association (i.e., NIC attachment) to a running instance, the command euca-associate-address -i <instance ID> <EIP> is used, where the EIP is determined by manual inspection and the instance ID comes from euca-describe-instances:

the EIP A.B.C.4 is clearly listed, together with the private address a.b.c.9 already present in the template of the VM. As already explained above, the user has to take care that the guest OS recognises the new network card and that the routing is consistent with the goal.

Note: the public IP and the associated hostname are listed twice: once together with the private IP (as one of the available interfaces) and once alone, as the public IP.

Finally, the EIP can be removed from the instance it is associated to and returned to the bucket by means of euca-disassociate-address <EIP>:

euca-disassociate-address A.B.C.4
ADDRESS A.B.C.4

The routing should remain consistent (the private address still has its default gateway; it only had a lower priority). It may be convenient to remove the network card that was associated to the EIP from the list of interfaces of the VM, since after a reboot the guest OS could wait for an address to be assigned to it, even though it no longer exists. Usually this does not lead to a failure; it only lengthens the boot process.

Script injection

This section is about passing a script to a VM on the fly, in order for it to be executed by cloud-init at boot time. We already explained how to do it via the VM template; now we want to do it via the EC2 interface, when the VM is deployed. A prerequisite for the script to be executed is that cloud-init is installed, correctly configured and added to the boot sequence in the VM image. Please refer to the previous link to learn how to set up cloud-init.

econe tools

The switch of the econe-run-instances command to specify a script to be injected is -d. The argument should be the script itself. In order to avoid escaping characters, just write the script in a file (say myscript.sh) and then dump it as the argument using the cat command, as shown in the following example:

Important note: even though the man page of econe-run-instances says that the argument of -d is the base64 encoded version of the script to inject, please pass the clear text. The tool will take care of the conversion.
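A sketch of the pattern (the script content and the image identifier are placeholders; econe-run-instances itself is only shown in a comment, not executed):

```shell
# Write a minimal user-data script and capture its clear text, as would
# be passed to the -d switch of econe-run-instances.
SCRIPT=$(mktemp)
cat > "$SCRIPT" <<'EOF'
#!/bin/bash
echo "configured at boot by cloud-init" > /root/hello.txt
EOF
ARG="$(cat "$SCRIPT")"
# actual call (placeholders): econe-run-instances -t m1.small -d "$ARG" ami-00000100
echo "$ARG" | head -n 1   # prints #!/bin/bash
```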

euca2ools

The command euca-run-instances can handle the script file (myscript.sh) directly, just pass the filename by means of the -f switch:

If desired, there is also the possibility to specify the text of the user's script directly. As seen for the econe tools, the option is -d and it expects the clear text. Once again, it is safer to dump the content of a script file rather than pasting the text.