Month: February 2017

Introduction

In this tutorial we’ll take a look at how to get git to ignore files correctly when it seems that the files you have specified in the .gitignore file are still being tracked.

Let’s examine why this would happen.

gitignore will only ignore files that are not currently under source control. This means that if a file was committed before it was added to the .gitignore file, git will continue to track changes to that file.

This is the basic principle of how gitignore works.

It is good practice to always update your .gitignore file before you commit any files that you do not want or need under source control. These files are commonly binaries, executables or any generated files that are automatically created when you build a project.

There may be cases where unwanted code is checked in by mistake with the initial commit or subsequent commits. This is when the problem described above arises. No matter how many times you specify the file in the .gitignore file, it will still appear in the staging area.

Prerequisite Knowledge Assumed

The resolution example below makes use of the following software:

git

git extensions

If you are unfamiliar with git extensions, there is a section right at the bottom that contains just the git commands that are required to resolve the problem.

Resolution

Steps that are required to resolve this problem:

Remove the files that you do not want tracked by source control from git’s index

You can do this by running “git rm -r --cached file/s”

Commit these removed files

Update .gitignore file

Commit .gitignore file

Let’s walk through the following example to explain exactly what you need to do for each step.

You’ve got a new repo that has just been created. You’re super excited about your new repo and this is the view that you have in git extensions:

When you go to commit you see the following:

From here you decide to stage all the files without adding the files in the bin folder to the .gitignore file.

So you stage all the files and hit commit.

Changes have now been made to your normal.code file and as a result the auto.gen file has been updated. When you go to the commit screen you now see the following:

At this point you realise that you don’t want the auto.gen file or any files from the bin directory to be committed to source control. When you try to add the bin directory to the .gitignore file it doesn’t seem to work, since the staging area looks like the image below:

Point 1 implemented

This is where you will need to implement point 1 (as discussed above).

To do this, open git bash and navigate to the repo.

Once here, run the following command:

Shell

git rm -r --cached bin/*

See image below:

As you can see, the two files in the bin directory were removed.

When we return to git extensions and refresh the staging area we see the following:

As you can see from the above image, the two bin files have been marked as “deleted“. The files have not physically been deleted. They have simply been removed from git’s tracking.

Point 2 implemented

Now we need to commit these removed files so that git will no longer track changes made to them.

After you have committed these files your git extension will look something like this:

Point 3 implemented

You can now update the .gitignore file to add any additional files that you would like to ignore. For the purpose of this example, we do not need to add any more content to the .gitignore file as we have already specified all the bin files in it.

Point 4 implemented

You can now commit the .gitignore file and git will never track any files in the bin folder.

Simple git commands to resolve

If you are unfamiliar with git extensions you can simply execute the following git commands to resolve the problem:
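Assuming, as in the example above, that the unwanted files live in a bin folder, the full sequence is:

```shell
# 1. remove the unwanted files from git's index (the files stay on disk)
git rm -r --cached bin/

# 2. commit the removal so git stops tracking the files
git commit -m "Remove bin files from tracking"

# 3. make sure .gitignore covers the folder, then commit it
echo "bin/" >> .gitignore
git add .gitignore
git commit -m "Ignore bin folder"
```

From this point on, nothing under bin/ will show up in the staging area again.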

Basic steps

In this article we will take a look at creating a basic handlebars file and making use of it in your html page. There are a few basic steps to use handlebars that we need to follow.

Create the handlebars template file

Get (or make up) data that will be used to populate the handlebars template

Make sure to wrap the data in an opts object where you can append the containerSelector (explained more below)

Create a method that will do the injection of the data

This method will also place the handlebars template onto your page

The code used in this article is simply for example purposes.

A full html page will be provided at the bottom for reference. For the sake of the example, all the code will be inside of one html page; however, you would likely want to have these components separated out, as good coding practice dictates. When it comes to splitting the code out into different files, remember to include all the files in the main html page. An example will be shown at the bottom as well.

1. Create handlebars file

We put an ID on the script tag – “handlebarsTemplateExample”. This will be used as our template selector, which will be discussed below.

We are creating HTML in the script tags (this is how handlebars works).

We have this weird notation of “{{” and “}}”. These are variables which handlebars uses to bind information, similar to AngularJS.
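Putting those points together, a minimal version of the template might look like this (the markup inside the script tag is just example content; the heading and information variables match the data used in the next step):

```html
<script id="handlebarsTemplateExample" type="text/x-handlebars-template">
    <h1>{{heading}}</h1>
    <p>{{information}}</p>
</script>
```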

2. Get data for handlebars to inject

For this basic example we will simply make up some data. Ideally you would want to get this data from an ajax call or something similar.

JavaScript

var data = {
    heading: "Yoda post",
    information: "This is an entelect yoda post"
};

3. Wrap data in opts

Since all of this is done in JavaScript it makes it easier for us to manage our code if we wrap all of our arguments into an “options” variable which is usually shortened to “opts”.

Opts is simply the parameters that we are going to be passing to the method. It is easier to say: “myMethod(opts)” if there are plenty of arguments that you want to pass to the method as opposed to: “myMethod(arg1, arg2, arg3….)”.

Making use of opts is also better JavaScript practice, as you can easily determine what each of the parameters is referring to. You can take a look at these two articles for more information on that (since this is a whole other discussion):
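Pulling that together with the data from step 2, the opts object might look like this (the selector values match the IDs used elsewhere in this example):

```javascript
var data = {
    heading: "Yoda post",
    information: "This is an entelect yoda post"
};

var opts = {
    // ID of the handlebars <script> template from step 1
    templateSelector: "#handlebarsTemplateExample",
    // ID of the div the rendered output will be injected into
    containerSelector: "#insertDataHere",
    data: data
};
```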

In the above code block you will see that we have added templateSelector and containerSelector to our opts object.

templateSelector will be used to determine where we get our source from (i.e. the handlebars template)

containerSelector will be used to inject the result of the handlebars template into a div on your page. This means that you would need to have a div (with an ID of “insertDataHere” in this example) on your html page. You can see the full html page below if you are confused.

4. Create method to do handlebars injection

This is the final step to get your basic handlebars up and running. We will now make the basic handlebars injection method.

JavaScript

function createHandlebarsExample(opts) {
    var source = $(opts.templateSelector).html();
    if (typeof source !== "undefined") {
        var template = Handlebars.compile(source);
        var html = template(opts.data);
        $(opts.containerSelector).html(html);
    }
}

The code above is quite simple. Let’s break it down line by line.

JavaScript

var source = $(opts.templateSelector).html();

This takes the html from the templateSelector and stores it in a variable called source. This means that it contains our html where we have defined {{heading}} and {{information}}.

JavaScript

var template = Handlebars.compile(source);

Now that we have the basic handlebars “template” (inside source), let’s create the actual handlebars template by compiling the source. The result is stored in template.

JavaScript

var html = template(opts.data);

This line is where the magic happens. Now that we have a compiled handlebars template, we can send the template our data. The variable “data” must contain “heading” and “information”. If it does not, these items will not be bound to the template. These names match the data injection variables between the “{{” and “}}” in our handlebars file.

JavaScript

$(opts.containerSelector).html(html);

Lastly we put our completed template (which is stored inside html as per the previous line; with its data injected in) inside of our div on the main html page.

You have now created a basic handlebars template and injected it into your html. Please see the below code for the full reference.

Full one page HTML

Remember you can also always download the files and not use the HTTP address for the script/s.

This will not work, because there is one thing that I forgot to mention. The extracted handlebars file needs to be inside of an HTML file. This is because a .js file does not register “<script>” tags.

This means that you would need to load the HTML page into your other HTML page. This is just a simple way of extracting the handlebars file out into another file; it is however not the most pleasant way of doing it. For the simplicity of this tutorial I would recommend that you rather keep the handlebars script in your HTML file and not split this into another file.

If you do wish to split it into another file this is how you would do so:
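One way (using jQuery's load; the file name handlebarsTemplate.html and the container ID are hypothetical names for this illustration):

```javascript
// handlebarsTemplate.html is a hypothetical file containing only the
// <script id="handlebarsTemplateExample" ...> template block
$("#templateContainer").load("handlebarsTemplate.html", function () {
    // the template is only available once the load completes,
    // so run the injection from the callback
    createHandlebarsExample(opts);
});
```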

This may come naturally to some of you, but to others it can be very difficult to tell whether an email is legitimate or not. This is the reason that banks and other companies continually say that you should never click on links (in emails) asking you to reset your password (or do anything else with your sensitive data) with the company in question. You should always navigate to the official web page and then proceed to do whatever it is you need to from there.

Of course this doesn’t stop someone from hacking the actual site, but that’s a discussion on its own.

About two years ago I got this email from “Facebook” in my inbox (or rather my spam box). Now, I’m sure some of you have received emails like this in the past too (and some of you without a Facebook account, I’m sure; I had recently deleted my Facebook account, but here lay a message claiming I had unread messages on my account). I thought it would be an interesting exercise to decompose the email and inspect everything, in order to point out certain things that you should always be on the lookout for when you receive an email and you are suspicious about it.

Let’s take a look at the following image:

As you can see, the email says that it is from FacebookAdministration, but look at the email address between <>; that is where it actually came from. Always make sure that you inspect this first. Usually it will not come from the Facebook (or whichever company they claim to be from) domain. It is very possible that this can be spoofed (using a simple SMTP server), but use this as your first point of entry. If this does not match up to what you think it should be, then the email is definitely not legitimate.

One of the more obvious signs of a phishing email is the nonsense which appears in the subject line. This one read: “Contraction Your 2 unread messages will be deleted in a few days swerve”.

This subject line does not make any sense

The casing in the sentence is wrong

If you use gmail as your email client (as I do) you can see the original email as text. To do so:

Click on the little arrow next to the reply button.

Select “Show Original”.

Right at the bottom of this post I have included the full content of the original email that I received (I have omitted my email address). The following section is a breakdown of the important things to look out for.

The above href contains a URL to an unfamiliar site. It is definitely not a Facebook site. This link (as you can see from the original email below) appears in every single clickable part of the email’s html and should not be trusted.

Another thing that you can look out for is the delivery chain of the email. This will be found in the header of the email as shown below:

Received-SPF: fail (google.com: domain of egal.hm@bongfaschist.de does not designate 116.0.120.83 as permitted sender) client-ip=116.0.120.83;
Authentication-Results: mx.google.com;
       spf=hardfail (google.com: domain of egal.hm@bongfaschist.de does not designate 116.0.120.83 as permitted sender) smtp.mail=egal.hm@bongfaschist.de
From: FacebookAdministration <egal.hm@bongfaschist.de>
Groaning-Biographically-Influentially: 1aee434c
Admissions-Calliope: 6944c3248825f1
Message-ID: <babc-133dc6-825b@bongfaschist.de>
Date: Thu, 7 Aug 2014 06:40:52 +0000
Content-Transfer-Encoding: 7bit
Later-Misusing: E252FBD6B6
To: me
Irreplaceable-Oleander-Columnizing: 194
MIME-Version: 1.0
Subject: Contractions Your 2 unread messages will be deleted in a few days swerve
Undoes-Emerge-Poindexter: recognizes
Content-Type: text/html; charset=UTF-8

The way in which you read this header is a little bit counterintuitive, as you have to read it from the bottom up. I have replaced my original email address with “me” in the above header.

As you can see, the email is directed to “me”, and this is where the email starts its journey. Similar to how you would write a normal letter, you need to provide the location at which it needs to be delivered. The beauty of email is that it will record each domain that touches it. Imagine if the receiver of your plain old-fashioned letter could know exactly who touched the letter on the way to them. If all the people that touched the letter were legitimate, the receiver of your letter could be sure that it could be trusted.

With email, this is always the case.

If we move further up the header to line 14, we can see that the email came from: “FacebookAdministration <egal.hm@bongfaschist.de>”. This matches what was discussed earlier. At this point, this is the original sender as they would like to be viewed from the SMTP server. It is possible, however, that this could appear to be legitimate. We need to inspect the email further.

Let’s take a look at the next two lines of the header above that:

Received-SPF: fail (google.com: domain of egal.hm@bongfaschist.de does not designate 116.0.120.83 as permitted sender) client-ip=116.0.120.83;
Authentication-Results: mx.google.com;
       spf=hardfail (google.com: domain of egal.hm@bongfaschist.de does not designate 116.0.120.83 as permitted sender) smtp.mail=egal.hm@bongfaschist.de

One of the most important things to note here is the SPF result. SPF is the Sender Policy Framework. Basically, it is an email validation system that ensures that the sender of the email is authorised on the domain that they claim to be from. As you can see from the above, the SPF check failed. That means that the person that sent this email from “bongfaschist.de” is not a valid user on that domain, and hence we cannot trust this person.

Received: from www-data by mybb-mail01.jnb1.chs.hetzner.co.za with local (Exim 4.80)
        (envelope-from <newsletters@newsletters.mybroadband.co.za>)
        id 1Yx9QZ-000LBL-B8
        for me; Tue, 26 May 2015 09:37:31 +0200

Here we see the original exit point of the mail. This email was sent from “www-data by mybb-mail01.jnb1.chs.hetzner.co.za”. This may seem a little bit scary; it’s not mybroadband.co.za. However, a quick Google search will tell you that this address is simply the reverse DNS of the hosting site mybroadbandmail. You can see this at this link here: http://whatmyip.co/info/whois/197.242.89.180/k/471857840/website/mybroadbandmail.co.za

<a href="http://hellenicmediaservice.com/wp-content/themes/rockwell_v1.7.1/freshwork/pike.php" style="font-family:tahoma,verdana,arial,sans-serif;font-size:12px;text-decoration:none;color:#3b5998">Your 2 unread messages will be deleted in a few days</a> </div>

<td style="font-family:tahoma,verdana,arial,sans-serif;font-size:11px;color:#999999;padding:10px"> This message was sent to me. If you don't want to receive these emails from Facebook in the future, please <a href="http://hellenicmediaservice.com/wp-content/themes/rockwell_v1.7.1/freshwork/pike.php" style="color:#3b5998;text-decoration:none;">unsubscribe</a>.<br/>Facebook, Inc. Attention: Department 415 P.O Box 10005 Palo Alto CA 94303

When looking for approaches to data migration from one database management system (DBMS) to another, the most commonly suggested approach is to use SQL Server Integration Services (SSIS). This article will not be focusing on SSIS, as there are many tutorials on the web covering how you can achieve your desired goal using SSIS. This article will rather focus on using a code-based approach (i.e. C#) in order to achieve your goals.

I was recently tasked with performing data migration from multiple DBMS’s. To name a few, these were Oracle, Teradata and SQL Server. The destination DBMS was SQL Server. I had initially started using SSIS (as I mentioned, it is the most commonly suggested approach for this task), but soon came to realise that SSIS has some rather unsatisfactory pitfalls which can be avoided if you do the data migration via a code approach. One of the major pitfalls with SSIS is that it is very static in nature, and you need to be able to define all your rules for the data migration up front. This relates to the column definition of tables as well as all the tables that you would like to migrate data from. The problem with this is twofold:

You may not know exactly which tables you want to migrate data for – so you would like to bring all of the tables into the new DBMS so that you can do further analysis on them

There may be a huge number of tables in the source DBMS. Let’s take for example a table count of 100. This means that you need to statically define a source and a destination for 100 tables. This is just the basic reading in and writing of the tables, and excludes any tasks required to do processing on that data (should there be any). So, even if there are no other tasks required, you will need to define 200 tasks (at least) to perform this operation. If for any reason one of the tables is removed from the source or a column definition is changed, you will need to go back into the SSIS package, find the affected tables (amongst your 200 tasks) and refresh the metadata on those tasks.

As you can see from the above, if your environment is dynamic in any nature at all, you will have major headaches trying to deal with this in SSIS. Not to mention the amount of time it will take you to set up a source and destination task for each and every table that exists.

It is for this reason that I decided to go with a code approach in doing the data migration. With code, you are able to define a process which will read the data from each of the source DBMS’s and gather the required information about those tables dynamically. If a table definition changes, it will automatically be updated in the migration. If a table no longer exists in the source DBMS, you will not receive any errors (as you would with SSIS), this table would simply not be carried through in the migration.

I will be taking you through two approaches that I have used to perform this data migration. The first approach is very naive, and was simply to get the data across. The second approach was improved in order to do batch reads and batch inserts from source to destination. In both approaches the only section that differs is how we read the data from the source and how we write it to the destination. There is additional logic which needs to be applied first in order to get the definition of the tables from the source; this logic will be shown first.

Getting required information for the source DBMS

The first thing that you will need is to be able to get the table definitions from the source DBMS. This will probably differ from source to source. As I mentioned, the three that I was using were Oracle, Teradata and SQL Server. Below is a snippet of each DBMS’s query to get the table definitions:

Oracle:

Oracle PL/SQL

SELECT owner, table_name FROM all_tables WHERE owner = '{0}'

Teradata:

SQL

select tablename from dbc.tables where tablekind = 'V' and databasename = '{0}'

SQL Server:

Transact-SQL

USE {0}
GO
SELECT * FROM sys.Tables
GO

You will notice that each query has a {0}. This is used for string formatting purposes. Simply replace {0} with the database name or the owner name (for Oracle, the owner is the same as the database name).

From the result set returned by these queries you will have a full list of all the table names that exist on that source. In order to create the tables in our destination DBMS, we will need to get the column information for each of these tables. Below is a query snippet for each DBMS to get the column information about a table. You will notice that each snippet contains two string format args (0 and 1). These args are the schema (owner) and table name respectively.
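The exact snippets will depend on your DBMS versions, but queries along the following lines (using each DBMS's standard system views) will return the column names:

```sql
-- Oracle ({0} = owner, {1} = table name)
SELECT COLUMN_NAME FROM ALL_TAB_COLUMNS WHERE OWNER = '{0}' AND TABLE_NAME = '{1}'

-- Teradata
SELECT ColumnName FROM DBC.Columns WHERE DatabaseName = '{0}' AND TableName = '{1}'

-- SQL Server
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA = '{0}' AND TABLE_NAME = '{1}'
```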

Now that you have the table names and definitions for each source table you can create a looping function that will create each of these tables in your destination DBMS.

Creating the destination tables

For the purposes of what I was required to do, creating the destination tables with all columns as nvarchar was sufficient. However, as you might notice from running the above queries, it is also possible to get the data types of the source tables and use that information to create a table with the same data types.

SQL Server provides a neat way to do this in code, which I shall detail below. For the other source DBMS’s, if you require the data types, you can attempt this SQL Server approach (but I do not know if it will work, as I have not tested it myself). If it does not work, you should be able to get the data type information from the queries listed above if you select more than is provided in them. For example, you’ll note that in the Oracle example all we are selecting is COLUMN_NAME; however, there are other columns in that view which contain the information that you are looking for.

Naive approach

Now, let’s take a look at the code for the simple table creation, setting all of the data types to nvarchar(4000) (the largest fixed length SQL Server allows; you are welcome to use nvarchar(max) if 4000 does not suit your needs).

You will notice that I am using the generic classes DbConnection, DbDataReader and DbCommand. I am using these simply to illustrate how the code will look. You will need to substitute them with the specific implementation for the DBMS that you are trying to connect to.
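As a rough sketch, the naive creation loop might look something like this. GetColumnNames is a hypothetical helper that runs the column query from the previous section and returns the column names for a table; TableNames, destSchema and DestDbConnection are used in the same way as in the data-types snippet later in this article:

```csharp
// naive version: every destination column is created as nvarchar(4000)
foreach (var table in TableNames)
{
    // GetColumnNames is a hypothetical helper - substitute your own
    var columns = GetColumnNames(table)
        .Select(col => $"[{col}] nvarchar(4000)");

    var createQuery =
        $"CREATE TABLE {destSchema}.{table} ({string.Join(",", columns)})";

    // generic ADO.NET classes; substitute the concrete provider types
    using (var createCmd = DestDbConnection.CreateCommand())
    {
        createCmd.CommandText = createQuery;
        createCmd.ExecuteNonQuery();
    }
}
```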

With data types approach (SQL Server specifically tested)

Now, let’s take a look at how we could get the column data type information for the tables.

The basic code changes as follows (you will notice that it references a method called GetColsForCreating; this method is defined below the current snippet).

C#

foreach (var table in TableNames)
{
    var cols = GetColsForCreating(table);
    var fullStagingTable = $"{destSchema}.{table}";
    string createQuery = $"CREATE TABLE {fullStagingTable} (";
    for (var i = 0; i < cols.Count; i++)
    {
        createQuery += cols[i];
        if (i != cols.Count - 1)
        {
            createQuery += ",";
        }
    }
    createQuery += ")";
    SqlCommand sqlCmd = new SqlCommand();
    sqlCmd.Connection = DestDbConnection;
    sqlCmd.CommandText = createQuery;
    sqlCmd.CommandType = System.Data.CommandType.Text;
    sqlCmd.ExecuteNonQuery();
}

GetColsForCreating method:

C#

protected List<string> GetColsForCreating(string table)
{
    var listOfCols = new List<string>();

    // in this example QueryToGetColumnInformation is as follows:
    // SELECT * FROM {0}.{1} WHERE 1=0
    // this allows us to get an empty result set for the table so that we can
    // use the reader to get the actual data type information
    var query = string.Format(QueryToGetColumnInformation, SourceSchema, table);

    // SourceSchema and SourceDbConnection are assumed to be defined on the
    // containing class, in the same way as DestDbConnection above
    var cmd = new SqlCommand(query, SourceDbConnection);
    using (var reader = cmd.ExecuteReader())
    {
        for (var i = 0; i < reader.FieldCount; i++)
        {
            var colName = reader.GetName(i);
            var dataType = reader.GetDataTypeName(i);

            // variable length types come back without a length; a simple
            // (if blunt) fix is to map them to their (max) variants
            if (dataType == "nvarchar" || dataType == "varchar" || dataType == "varbinary")
            {
                dataType += "(max)";
            }

            listOfCols.Add($"[{colName}] {dataType}");
        }
    }

    return listOfCols;
}

And there you go. You have now successfully created all of the source tables from one DBMS (dynamically) in another DBMS. This approach gives you great flexibility and allows you to create the tables in the destination DBMS at run time (rather than at build time, as would be the case with SSIS).

Importing the data (migration)

Now that we’ve got all of the tables defined in our destination DBMS, we can proceed to perform the actual migration of the data from the source DBMS’s.

As mentioned previously, I have two approaches that can be used.

The first approach is extremely naive and simply reads one row at a time from the source and inserts it into the destination. This approach will work sufficiently well if the tables that you are reading from are not large.
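A sketch of what the naive loop might look like (again using the generic ADO.NET classes; InsertRow is a hypothetical helper that builds and executes a parameterised INSERT for a single row against the destination):

```csharp
// read one row at a time from the source and insert it into the destination
using (var readCmd = SourceDbConnection.CreateCommand())
{
    readCmd.CommandText = $"SELECT * FROM {sourceSchema}.{table}";

    using (var reader = readCmd.ExecuteReader())
    {
        while (reader.Read())
        {
            var values = new object[reader.FieldCount];
            reader.GetValues(values);

            // one round trip to the destination per row - hence "naive"
            InsertRow(destSchema, table, values);
        }
    }
}
```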

The second approach is slightly more optimized and runs the reads and inserts in batches of a pre-defined amount.

I must note that for my example I used a batch size of 50 000. I ran the code again with a batch size of 100 000, but it did not seem to improve the performance of the query that reads the batch. It is possible that the indexes on the tables that I was reading from could be defined better. You will have to play around with this number and decide for yourself what you deem a good batch size.

As you will see, the naive approach can be extremely memory intensive and will hold up your source table with a lock until all the data has been read. This would not be ideal in most cases, but might work fine if the table has fewer than 1000 rows.
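For the batched version, one possible shape (not necessarily the original implementation) for a SQL Server destination is to stream the source reader through SqlBulkCopy, which buffers and writes rows in batches of the configured size. This assumes DestDbConnection is an open SqlConnection:

```csharp
using (var readCmd = SourceDbConnection.CreateCommand())
{
    readCmd.CommandText = $"SELECT * FROM {sourceSchema}.{table}";

    using (var reader = readCmd.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(DestDbConnection))
    {
        bulkCopy.DestinationTableName = $"{destSchema}.{table}";
        bulkCopy.BatchSize = 50000;   // the batch size discussed above
        bulkCopy.BulkCopyTimeout = 0; // disable the timeout for large tables
        bulkCopy.WriteToServer(reader);
    }
}
```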

Once again, you will notice that I have used the generic classes in the above code snippet. These will simply need to be replaced with the classes for whatever source DBMS you are reading from.

Final thoughts

SSIS is a good enough tool for someone who does not have development experience; however, if you are a developer, I would recommend that you always favor the development approach (at least for the problem of data migration).

I would like to say that the examples expressed above are simply approaches that I have recently gone about implementing. They are by no means the best or worst ways to implement a data migration in code, but I believe that they work fairly well. This applies to the second approach, for both creating the source tables and importing the data into the destination tables from the source.

Lastly, if you are working on a data migration task for a source DBMS that has a large number of tables, or you are working with multiple source DBMS’s, my recommendation to you is don’t waste your time investigating how to do this with SSIS. Simply bite the bullet and use a code approach. You will always have more control over the flexibility of your implementation when using code.

If you have any suggestions or approaches that you would like to recommend, leave a comment so that future readers will have more examples to work with.