The personal view on the IT world of Johan Louwers, especially focusing on Oracle technology, Linux and UNIX technology, programming languages and all kinds of nice and cool things happening in the IT world.

Monday, December 31, 2012

IBM is running a project called Smarter Planet in which it combines many new and upcoming technologies and ways of thinking, and puts them to use to craft a view of how to build a better world and a smarter planet. Being the tech company it is, IBM also touches the subjects of social media and big data. In the minds of many companies, social media and the capability of harvesting information from social media will be a valuable asset for the future. A future where big data is analyzed on the fly, where we can understand what humans are saying and meaning, and where we can interpret feelings and emotions and have technology act on them.

Secondly, we will have a more instrumented, interconnected and intelligent way of doing and analyzing things, where we can combine human expressions like a tweet or blogpost with, for example, traffic and weather data. All this combined will give us a good view of the state of a person or a group of persons.

One of the things that will play a big role in this big data future is social media analysis: where are we in the social media world, what are our connections, who are our opinion providers and who are our opinion consumers? Having the option to harvest and give meaning to an enormous amount of data will open up a whole new way of doing traditional business and will provide new business opportunities. New companies and new ways of doing commerce will see the light of day.

In the above presentation, Graham Mackintosh, IBM social media analytics expert, explains why social media analytics is so important and how it can be used to derive insights from the growing level of data that we generate through social media.

Sunday, December 30, 2012

A couple of posts ago I touched on the subject of big data sentiment analysis and how the field of sentiment analysis will be the next frontier in data analysis. Sentiment analysis will help developers build applications and analysis tools that improve machine-to-human interaction and make technology a more integrated and more natural way of helping humanity.

Feelings and computer algorithms are a field in which we do not have that much experience at this moment, and a lot of people feel that we will not be able to conquer all the fields of emotion. Some people feel that we should not even try to conquer the field of computerized and digitized emotions. In the below video, Intel futurist Brian David Johnson talks with Dr. Hava Tirosh-Samuelson, Director of Jewish Studies at ASU. Dr. Hava talks about transhumanism, our future and the fact that, in her opinion, not all emotions can be captured or should be captured.

According to some it will be possible to capture human emotion and extract it from a sentence or a work of art; however, it will be impossible to give a machine emotions. Giving a machine emotions will most likely be one of the hardest parts of computer science in the upcoming time, but first we will have to be able to capture and extract human emotions. This will provide us a much better way to interact with humans from a machine point of view. Some projects are working on this: some are collecting emotions and some try to make meaning out of the harvested emotions. One of the graphically most appealing emotion harvesting projects is done by Jonathan Harris and Sep Kamvar at wefeelfine.org. The wefeelfine.org project scans the internet continually to find feelings based upon blogposts containing the phrase "I feel". On the project page you can see how the world is feeling at this moment.

As already stated, capturing human emotions from social media and machine interaction and giving meaning to them will be one of the next big things a lot of companies will try to solve. An algorithm that understands human emotions could be implemented in a lot of applications and would provide a whole new era of how we work with and interact with technology. The challenge is that this cannot be done by computer researchers alone; a lot of people from a lot of fields of science will have to work together to even begin to handle the complexity of crafting such an algorithm.

Wednesday, December 19, 2012

HDFS (Hadoop Distributed File System) is the core of many big-data solutions that depend on an implementation of Hadoop. In many cases a Hadoop implementation consists of a large number of nodes working together to provide the functionality that you need. When we are talking about massive multi-node implementations there are some things to consider that might not come to mind directly. One of them, for example, is that the speed your Hadoop implementation can provide already depends on how you have arranged your network and your datacenter.

When you are planning a rather small implementation of HDFS you will most likely be fine with a simple layout for your datanodes as shown below. All your datanodes are connected to a single network segment and are most likely all within the same rack cabinet.

As stated, for a simple setup with a relatively low number of datanodes a layout as shown above will not cause any issues. The issues come when you have a larger cluster which spans multiple racks that each contain multiple nodes, for example a 100+ datanode cluster.

When you define your HDFS setup you will have to set the number of replications for your blocks. This means that every block will be replicated X times on different nodes. This is configured in your hdfs-site.xml configuration file by adjusting the variable dfs.replication. When you update data which resides in a certain block, this block will be replicated to all other nodes where this particular block resides. This means that you will have a lot of data traffic between your datanodes that is only used to keep your cluster in sync.
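As an illustration, a replication factor of 3 (a common choice) would look like this in hdfs-site.xml; the value shown is only an example, pick what fits your cluster:

```xml
<!-- hdfs-site.xml: store every HDFS block on 3 different datanodes -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```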

Within a simple setup this is not a real issue; however, when spanning 100+ nodes in multiple racks your data will have to travel a lot and will take up networking resources that, due to this, cannot be used for other things.

In the image below you can see a more complex landscape where we have 15 datanodes positioned over 3 racks. Each rack has a top-of-rack switch to connect all the HDFS datanodes to the rack network backbone, and all racks are connected to the cluster backbone for communication between the nodes of the different racks.

If we do not take any action we can end up in a situation where we have a lot of communication between datanodes in different racks, and because of this we will see a lot of traffic over the cluster backbone and multiple rack backbones. This can cause additional network latency and potential bottlenecks, especially at your cluster backbone switch. For example, it would be quite possible that datanode 2 holds data that needs to be replicated to datanode 7 and datanode 10, which are both in a different rack.

To prevent this from happening you can make HDFS datanodes aware of their physical location in the datacenter, so the replica placement policy can take rack locality into account and limit cross-rack replication traffic. The dfs.network.script setting allows you to provide a script that states the rack ID of each node and will help you limit the traffic between racks. This will give you a cluster model as shown below, where you have a root (cluster) and 2 racks, both containing 3 datanodes.
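As a sketch of what such a rack-awareness script could look like: Hadoop invokes the configured script with one or more datanode addresses and expects one rack path per argument on stdout. The IP ranges and rack names below are invented for this example:

```shell
#!/bin/sh
# Map a datanode address to a rack path; the subnets and rack names
# here are purely illustrative.
rack_of() {
  case "$1" in
    10.1.1.*) echo "/cluster/rack1" ;;
    10.1.2.*) echo "/cluster/rack2" ;;
    *)        echo "/cluster/default-rack" ;;
  esac
}

# Hadoop may pass several nodes at once; answer one line per node.
for node in "$@"; do
  rack_of "$node"
done
```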

When planning your rack setup, both physically and within your cluster configuration, there are some things to consider. If you allow replication only within a single rack, and this software-configured rack is also a physical rack, you can run into trouble if this rack fails, for example due to a power loss in the rack. You will not have access to any of the data that is stored within this rack, as all the replicas are in the same rack.

When planning you do have to consider this and think about the possibility of virtual racks, where for example all the odd-numbered servers in physical racks A and B are configured as a single (virtual) rack and all the even-numbered servers in physical racks A and B as another virtual rack. To cope with the possible loss of a switch, and to provide quick communication between the racks, you might want to consider connecting some switches with fiber to assure a quick connection between physical racks.

Tuesday, December 18, 2012

Oracle databases are very commonly used in secured environments and are in many cases considered a vital part of the IT infrastructure of a company. Oracle databases are often used to provide database solutions for core financial and operational systems. For this reason the security of your database and the entire infrastructure design needs to be in the minds of everyone involved. One of the things commonly overlooked by application administrators, database administrators and developers, however, is how the infrastructure is set up. In many companies the infrastructure is handled by a different department than the people responsible for the applications and the database.

While it is good that everyone within an enterprise has a role and is dedicated to it, in some cases this is also a big disadvantage. The below image is a representation of how many application administrators and DBA's will draw their application landscape (for a simple single database / single application server design).

In essence this is correct, however in some cases too simplified. A network engineer will draw a completely different picture, a UNIX engineer will draw another picture and a storage engineer yet another. Meaning you will have a couple of images. One area where a lot of disciplines will have to work together, however, is security. The below picture shows the same setup as the image above, but now with some vital parts added to it.

The important part added in the above picture is that it shows the different network components like (V)LANs and firewalls. In this image we excluded routers, switches and such. The reason this is important is that not understanding the location of firewalls can seriously harm the performance of your application. For many non-network people, a firewall is just something you pass your data through and it should not hinder you in any way. Firewall companies try to make this true; however, in some cases they do not succeed.

Even though having a firewall between your application server and database server is a good idea, when not configured correctly it can influence the speed of your application dramatically. This is especially the case when you have large sets of data returned by your database.

The reason for this is how a firewall appliance, in our example a Cisco PIX/ASA, handles SQL traffic, and how some default global inspection rules implemented in firewalls are based on an old SQL*Net implementation of Oracle. By default a Cisco PIX/ASA implements inspect rules which do deep packet inspection of all communication that flows via the firewall. As Oracle by default uses port 1521, the firewall by default applies this to all traffic going via port 1521 between the database and a client/application server.

The reason this is implemented dates back to the SQL*Net V1.x days, and it is still implemented in many firewalls because they want to be compatible with old versions of SQL*Net. Within SQL*Net V1.x you basically had 2 sessions with the database server:

1) Control sessions
2) Data sessions

You would connect with your client to the database on port 1521, and this is referred to as the "control session". As soon as your connection was established a new process would spawn on the server to handle the incoming requests. This new process received a new port to communicate with you specifically for this particular session. The issue with this was that the firewall in the middle would prevent the new process from communicating with you, as this port was not open. For this reason the inspect rules on the firewall would read all the data on the control session and look for a redirect message telling your client to communicate with the new process on the new port on the database server. As soon as this message was found, the firewall would open a temporary port for communication between the database and your client. From that moment on, all data would flow via the data session.

As you can see in the above image the flow is as follows:

A) Create session initiated by the client via the control session
B) Database server creates a new process and sends a redirect message for the data session port
C) All data communication goes via the newly opened port on the firewall in the data session

This model works quite OK for a SQL*Net V1.x implementation; however, in SQL*Net V2.x Oracle abandoned the implementation of a separate control session and data session and introduced a "direct hand-off" implementation. This means that no new port will be opened and the data session and the control session are one. Due to this, all the data flows via the control session, where your firewall is listening for a redirect that will never come.

The firewall is designed so that all traffic on the control session is received, placed in a memory buffer, processed by the inspect routine and then forwarded to the client. As the control session is not data intensive, the memory buffer is very small and the processors handling the inspection are not very powerful.

From SQL*Net V2.x onwards, all the data is sent over the control session and not over a specific data session. This means that all your data, all your result sets and everything your database can return will be received by your firewall and placed in the memory buffer to be inspected. The buffer is in general only around 2K, meaning it cannot hold much data, and your communication will be slowed down by your firewall. In case you are using SQL*Net V2.x this inspection is completely useless and there is no reason for having it in place in your firewall. Best practice is to disable the inspect in your firewall, as it has no use with the newest implementations of SQL*Net and will only slow things down.

You can check on your firewall what is currently turned on and what is not, as shown in the example below:
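A sketch of what this could look like on a Cisco ASA; the exact commands and policy names can differ per version and configuration, so treat this as an illustration rather than a recipe:

```
! Show which inspections are active in the running service policy
show service-policy

! Disable SQL*Net inspection in the default global policy
configure terminal
policy-map global_policy
 class inspection_default
  no inspect sqlnet
```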

Sunday, December 16, 2012

A lot of people who are currently working on, or thinking about, big data immediately start talking about how they can implement certain things: how much data you will have to store and how much data you will have to process to find meaning in big data. A lot of people start talking about how they will need to store things and how they will need to build implementations of Hadoop and the Hadoop Distributed File System to be able to cope with the load of data.

I recently had a discussion with a couple of people in which we were wondering whether we could dynamically route taxis in a city based upon big data and the social media feeds you can receive from the internet. The second thing we were discussing was whether we could measure the satisfaction of the customer by linking his tweets back to the trip he just made in the taxi.

It is true that the amount of data coming in that needs processing, and that finally will result in routing taxis, will be enormous. Secondly, it is true that you will need a lot of computing power, and yes, you will most likely need Hadoop-like solutions for processing this. However, there is a much more subtle part to this that a lot of people are overlooking.

Finding the need to route one or more taxis to a part of the city where we expect to find a lot of customers, based upon social media and historical data, is challenging. However, there is an even bigger and even more challenging part, and that is at the end. That is the part we brought in while discussing whether we could measure if the customer was happy by the things he published on social media just after he made use of the taxi.

Let's say we know a person called Tim entered a taxi and he has the Twitter account @TimTheTaxiCustomer. He is picked up and dropped off during rush hour a couple of blocks away from his original location. We are monitoring the Twitter feed from Tim and now he is sending a tweet with the following text:

"made it to my meeting, wonder what would happen if I had taken the subway"

He could also have sent out a tweet with:

"Made it to my meeting, never had this when I used the subway"

Looking at those texts, it is, with some luck, possible for a human to find out what Tim means and whether he is happy with the service. You will have to have some skills; however, we can conclude that in the first tweet he is somewhat jokingly referring to the subway and insinuating that he would never have made it with the subway. Meaning: good service from the taxi company. The second tweet, however, indicates that he never experienced this with the subway, which could mean that he had to run to make it to his meeting.

This process is, even for humans, complex from time to time and is heavily driven by local slang, local ways of saying things and cultural influences. Having a human read all the messages of all our potential taxi customers is however not feasible, due to the number of messages that need to be checked and the speed at which they are produced. To be able to cope with the load we will need an algorithm to check the messages, and to enable the algorithm to keep up with the pace at which messages are generated we will have to run it on a cluster which makes use of massive parallel computing.
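To make it concrete why a naive approach falls short, here is a deliberately simple word-list scorer; the word lists are invented for this sketch and are nowhere near a real sentiment analysis implementation:

```python
# A crude polarity scorer: count hits against fixed (invented) word lists.
POSITIVE = {"great", "good", "happy", "fast", "thanks"}
NEGATIVE = {"late", "slow", "bad", "missed", "never"}

def sentiment(text):
    """Return positive word hits minus negative word hits."""
    words = [w.strip('.,!?"\'').lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Note that this scorer marks Tim's second tweet as negative (because of "never") even though he most likely meant it positively, which is exactly the kind of nuance that makes real sentiment analysis so hard.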

Creating an algorithm capable of understanding human emotions is the field of sentiment analysis:

"Sentiment analysis or opinion mining refers to the application of natural language processing, computational linguistics, and text analytics to identify and extract subjective information in source materials. Generally speaking, sentiment analysis aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document. The attitude may be his or her judgment or evaluation (see appraisal theory), affective state (that is to say, the emotional state of the author when writing), or the intended emotional communication (that is to say, the emotional effect the author wishes to have on the reader)."

In the below video Michael Karasick touches on this subject in quite a good way in his IBM research lab video:

IBM Research - Almaden Vice President and Director Michael Karasick presents a brief overview of "Managing Uncertain Data at Scale." Part of the 2012 IBM Research Global Technology Outlook, this topic focuses on the Volume, Velocity, Variety and Veracity of data, and how to harness and draw insights from it.

Monday, December 10, 2012

On an extremely frequent basis I get the question, via all different kinds of channels, from people asking how they can hack into an Oracle database / Oracle application server / Oracle X product. I did post in the past some things about certain security options in Oracle which were not always as good as you would like them to be. I do, however, try to stay away from giving people advice on how to hack into a certain system. If you are determined enough you will find a way into every system. So please do not expect me to give you a copy/paste answer to this question.

Having stated this, I do provide some advice on things to do or not to do when securing your environment. Something I encounter every now and then is quite a security threat: half-finished installations and/or installations of Oracle products that have not been cleaned of default screens and logins.

When installing an Oracle product, for example an application server or database, you commonly get some default (HTTP) pages and screens which help you during the initial installation and deployment of code and guide you to the correct login pages. This is all fine as long as you are aware of the fact that those pages exist and take the appropriate countermeasures. The screenshot below is one of the default screens used by the Oracle application server.

Having such a page is one of the first things that can attract a potential hacker to your server. If, for example, you have connected your application server to the internet this page can be indexed by a search engine like Google. An attacker looking for a certain version of an Oracle product can use Google quite easily to find possible targets.

For those of you who have experience with security, especially the security of web-attached systems and the ways a complete system can be compromised by making use of the HTTP attack angle, this will all make sense. And even if you do not know all the default pages that might be present on a newly installed system, a good administrator will look for them and remove them where appropriate.

For those who are not that familiar with this, the below presentation gives you some insight. This presentation was given by Chris Gates during the 2011 Black Hat convention, where he showed some of the ways to use Metasploit to attack an Oracle based system.

In 2009, Metasploit released a suite of auxiliary modules targeting oracle databases and attacking them via the TNS listener. This year lets beat up on...errr security test Oracle but do it over HTTP/HTTPS. Rather than relying on developers to write bad code lets see what we can do with default content and various unpatched Oracle middleware servers that you’ll commonly run into on penetration tests. We’ll also re-implement the TNS attack against the isqlplus web portal with Metasploit auxiliary modules.

Tuesday, December 04, 2012

In a recent forum post at Oracle a user asked the following question: "I have audit trail for the table *****_INSTALL_BASE_ALL for columns attribute10, attribute21, attribute22 if someone changes these columns I want to get the alert for the same". I answered that this could be done with a database trigger and that you can write code to send you a mail or use any other way of sending an alert.

Giving this answer is the quick response; however, it triggered me, as I have received a couple of questions that all involve logging of some sort when a value in a table changes. For some reason the usage of triggers in the database, and what you can do with them, is not widespread knowledge. Triggers can be used for all kinds of reasons; one of the most important reasons I am personally in favor of them is that you can add logic to an insert or update without having to change anything in the code that initiates this insert or update.

If you have, for example, a packaged application that is allowed to update some values in a table, and you want to add a log to it without changing the application and adding customizations in the code of the application itself, you can easily achieve what you want by adding a trigger to the table.

In the below example we have an application which is used to create contracts for customers. Every contract type has a certain profit margin associated with it. Every newly created contract will look into this table and, based upon the contract type, select the right profit margin that needs to be applied. Changing this value can have a huge impact on the financial results of the company. For this reason you might want to have some extra logging and auditing in place to be able to track who changed something at which time.

What we want to achieve is that if someone changes a value in the table CONTRACT_PROFIT we will have a log entry in the table CONTRACT_AUDIT, so we have a trail of what changed when. The layout of the table is very simple; you can make this as sophisticated as you like, however for this example we have kept it as easy as possible.
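A sketch of what such a trigger could look like; the column names of CONTRACT_PROFIT and CONTRACT_AUDIT used here are assumptions based on the example, so adapt them to your own layout:

```sql
-- Audit trigger: log every change to the profit rate.
-- Column names (contract_type, contract_profit_rate, audit_message)
-- are assumed for this sketch.
CREATE OR REPLACE TRIGGER contract_profit_audit
AFTER UPDATE OF contract_profit_rate ON contract_profit
FOR EACH ROW
BEGIN
  INSERT INTO contract_audit (audit_message)
  VALUES ('Last change made on: ' || SYSTIMESTAMP
       || ' . Change made by user: ' || USER
       || ' . New value for ' || :NEW.contract_type
       || ' is now ' || :NEW.contract_profit_rate || ' .');
END;
/
```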

At the start of our example the table CONTRACT_PROFIT is filled with the following data;

If we now update the table CONTRACT_PROFIT and change the value of CONTRACT_PROFIT_RATE for DIRECT_SALES from 20 to 21 the trigger will be fired and you will see a new line in the table CONTRACT_AUDIT. An example of this line is shown below;

Last change made on: 04-DEC-12 05.58.14.3 AM US/PACIFIC . Change made by user: LOUWERSJ .New value for DIRECT_SALES is now 21 .

You might want to add other information to the log; you might even not want to create an insert statement at all but rather have a mail action defined. This can all be done when developing your trigger.

Monday, December 03, 2012

When you are working on a new product or when you are building a startup tech company it is hard to find the right people to join your startup. In many cases a department working on a new concept, or a group of people working on an idea and trying to build a startup company around it, is tight on budget. You want to hire the right people with the right skills directly and do not have the time to make a mistake in your hiring process.

When you are a startup you will not have an HR department, nor do you have big funding to get a headhunter to find the correct person for you. Due to this you see a lot of startups filled with people who knew someone already working there. This, however, can limit you a bit because you depend on the social circle of your employees.

The people behind "Work for Pie" have built a solution for this. They have a lot of experience in building a social network for open-source software developers to find the correct developer for an open-source project. Recently they launched Work For Pie for Companies. On this platform companies can interact with developers to find the correct person. You can see it a bit as a mix between LinkedIn, a dating site and a job site.

Robert Scoble interviewed the people behind "Work for Pie" for his Rackspace show, where he interviews the people behind new tech startup companies.

https://workforpie.com/ helps you build a showcase of your work, which takes less than 10 minutes. This showcase is much better than an old-style resume. Here the founders explain why in our group of startup interviews at Techcrunch Disrupt.

When working with in-memory databases a lot of people discard the fact that disk I/O can still be a bottleneck. It is true that most of your operations are done in memory when working with an in-memory database. However, having slow disk I/O can dramatically impact the performance of your operations even when using an in-memory database. You have to make some decisions in your hardware and in your application design when working with an in-memory database.

The Oracle TimesTen in memory database is a good example when talking about this subject. Oracle TimesTen In-Memory Database (TimesTen) is a full-featured, memory-optimized, relational database with persistence and recoverability. It provides applications with the instant responsiveness and very high throughput required by database-intensive applications. Deployed in the application tier, TimesTen operates on databases that fit entirely in physical memory (RAM). Applications access the TimesTen database using standard SQL interfaces. For customers with existing application data residing on the Oracle Database, TimesTen is deployed as an in-memory cache database with automatic data synchronization between TimesTen and the Oracle Database.

As a quick introduction to the TimesTen database, Oracle has provided a short, 3-minute YouTube video showing what Oracle TimesTen can do:

One of the interesting topics already touched on in the above video is: what will happen to my in-memory data when the server fails? The answer is that TimesTen will make it persistent. Making it persistent means in this case writing it to disk. Writing data to disk means disk I/O. This means that an in-memory database is not 100% pure in-memory; it will still use your disks.

Oracle TimesTen uses durable commits to make a commit persistent. When using a durable commit, the commit is not only written to the database structure which is held in memory, it is also written to disk. It is good to know that by default every commit results in a durable commit and by doing so results in disk I/O. Developers can decide whether they will do a normal (durable) commit or a non-durable commit.

As an example, if you are building a webshop application based upon a TimesTen database you can decide that every item that is placed in the basket by a user is written to memory using a non-durable commit. When the order is placed, the order itself is written with a durable commit. This means that if the database were to crash, all items that a user has placed in a basket will be lost and will have to be placed again; however, an order that was placed is stored safely on disk, and when the database is loaded into memory again during start-up it is still present.
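The pattern described above could look roughly like this in TimesTen; the table names are invented for this sketch, and ttDurableCommit is the built-in procedure that marks the current transaction for a durable commit (double-check the exact semantics in the TimesTen documentation for your version):

```sql
-- Basket updates: plain commits, which stay non-durable when the
-- connection is opened with DurableCommits=0 (fast, lost on a crash).
INSERT INTO basket_items (session_id, item_id) VALUES (42, 1001);
COMMIT;

-- Order placement: force this transaction's commit to be written
-- to the transaction log on disk.
INSERT INTO orders (order_id, session_id) VALUES (7, 42);
CALL ttDurableCommit;
COMMIT;
```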

More detailed information about durable and non-durable commits in an Oracle TimesTen database can be found in the documentation. Especially the chapter "use durable commits appropriately" is advisable for developers who are new to in-memory database architectures.

Non-durable commits are in general faster than durable commits. This can improve the performance of your application dramatically. However, at some point you will most likely want to use a durable commit to safeguard the final transaction. In case you have a large load on your system and slow disk I/O, this can hinder the performance of your application. In the below video from Intel you can see an example of this, and how solid state disks help to improve it in the showcase shown in the video.

Sunday, December 02, 2012

One of the futures of computing, and of cloud computing in particular, is the hookup between mobile technology, social media and cloud computing. The three concepts are complementary to each other from a technology point of view: combining social media concepts with a mobile (and location-aware) platform, and bringing the storage and computing of data streams to the cloud to enable an always-on computing backbone, delivers quite an interesting new concept. A concept which enables companies to quickly tap into a B2C market segment and quickly, and at low cost, build pilot projects and tests to test the SoMoClo concepts.

Every company starting a SoMoClo concept will have to be aware that cloud computing is the basis of the SoMoClo concept. One of the main reasons is the flexibility of cloud computing: you can quickly ramp computing resources up and down, and you have the advantage that you can quickly adopt services from other cloud-enabled companies who provide APIs to their functionality. Every company looking to start a SoMo(Clo) project without giving the CLOud part a good look will make a very big mistake and will most likely have (A) an unsatisfactory ROI, (B) a disappointing group of users and (C) a failed project.

The Aberdeen group has the following statement about this:

The rate of transformation in IT is the fastest it has ever been, and increasing almost daily. The rising number of cores on processors allows the compute capacity of servers to grow dramatically. With faster local and wide area networks, servers can be placed anywhere. The amount of data stored by organizations is doubling every 2 years. At the same time, new mobile computing devices are being deployed to end users, allowing them to work anyplace and anytime, with constant access to new forms of social media. These are exciting times, but where is this disruptive change taking us?

Aberdeen’s Telecom and Unified Communications, Wireless and Mobility Communications, and IT Infrastructure research analysts have been watching these trends for several years and have observed a growing convergence of Social, Mobile and Cloud computing. These three computing revolutions are creating a new computing paradigm. The future is a converged computing infrastructure, which Aberdeen has termed “SoMoClo” to underscore the integrated nature of this single overriding trend.

In a recent article at cloudtimes.org, Scott Hebner got into how we can make SoMoClo ready for the enterprise. Scott is currently Vice President at IBM for Cloud &amp; Business Infrastructure Management. The interesting thing about this article is that it goes into a couple of aspects of making SoMoClo ready for the enterprise world. On one side it goes into how you, as an enterprise, should adopt SoMoClo technology to grow your business by using it externally and internally.

It also goes into the fact that your employees are already using it and that you can most likely not stop them. Even though you might have policies around the use of public and cloud services, it is very likely that, even if you have banned these services, your employees are using, for example, Dropbox a lot. For this reason it would be good not to ban online and cloud services but rather to adopt them in such a way that you are able to channel them and make sure their use becomes more compliant with internal rules and regulations.

If you ban your staff from using any online service it will most likely not work, and your employees will use every service available online. However, if you create a policy stating that they can use certain approved services but not THIS specific one, for some good reason, you will notice that your employees will most likely accept this without any issue. It will bring issues with it, however coping with those issues and problems will bring you more benefit in the end than blocking its use by your workforce altogether.

Interestingly, this is exactly the same discussion we had some time ago on BYOD (Bring Your Own Device). There it was Bruce Schneier, CTO of BT Managed Security Solutions, who stated that he saw a big security risk in a BYOD policy yet would never oppose it, as the benefits to companies were so big and users would do it anyway in the end, so it would be better to channel it.

The same goes for the usage of cloud, mobile cloud and social mobile cloud solutions. Your employees will be using it; they are already using it. You can better channel it and make use of it internally and externally. The same goes for building your own SoMoClo: as the adoption of SoMoClo applications is picking up at an ever increasing pace, your company can get a big benefit from a SoMoClo strategy if it is understood, adopted and executed in the correct way.

When connecting remotely to a Linux host most of us will (I hope) use SSH to establish a secure connection. Some people will use a single key pair for all the hosts they need to connect to when there is a need for password-less authentication to a Linux machine. However, when you are working in different environments and have different keys (and usernames) you can end up with a list of key pairs for all kinds of hosts. To instruct the SSH client to use a specific key when connecting to a specific machine you can use the -i option.

For example, if you have your private key stored in /user/jolouwer/backup_key for the server somehost.example.com you can use the following way to connect to this host. In our case we connect as the user dummy:

ssh -i /user/jolouwer/backup_key dummy@somehost.example.com

In this way it will read the specific key located in /user/jolouwer/backup_key and use it to connect to the server. You can use this approach to connect to every server you like, however you will have to find a way to remember which key to use for which server. It is easier to list them in your ssh config.

You can do so by adding some config to ~/.ssh/config.
For our example we should add the following 2 lines to the file:

Host somehost.example.com
    IdentityFile /user/jolouwer/backup_key

This will ensure that the mentioned key is used every time you connect from your workstation to the somehost.example.com host.
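Since we mentioned that different environments often mean different usernames as well, the config file can carry those too via a User directive. A sketch of what a multi-host ~/.ssh/config could look like (the second host name and key path here are made up for illustration):

```
# ~/.ssh/config -- per-host usernames and private keys
Host somehost.example.com
    User dummy
    IdentityFile /user/jolouwer/backup_key

# hypothetical second host with its own key and login name
Host build.example.org
    User jolouwer
    IdentityFile ~/.ssh/build_key
```

With this in place a plain "ssh somehost.example.com" picks up both the key and the username, so the -i option and the dummy@ prefix are no longer needed on the command line.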