RES701

Can the below be framed as Design Science?
Yes it can, although it might not be the best way; it all depends on the fundamental goal. Looking at the research question, a better method would probably be a case study or something similar. If the fundamental goal was just to research this or investigate known approaches, then definitely. But with a goal that requires producing a plan, design science becomes the best fit.

Research Question;How do a variety of businesses / corporations protect against whistleblowers or possible data leakage points?

I have seen a few new and, from my perspective, simply amazing technologies online in the past few weeks. From yours they might not be so amazing. It's all about perspective.
Advertising companies use a perspective they want to push. The majority of it is "2D": they show you what they want you to see. An ad on TV or in a magazine is not interactive; it's just a camera pointed at an object or idea they are trying to sell.
But what if it was "3D", or interactive? There have been a few ads of this sort using apps or touch screens, but they are still focused. What happens when you can look at whatever you want to in the ad?

What got me thinking about this was a video I watched where, using Virtual Reality, the user could draw an image in 3D using a variety of tools. Viewed from most angles, the images they came up with didn't look like anything, just a bunch of lines. But when they moved around and found the right angle, the same as the creator's perspective, you could see the image they had made.

Now what happens to advertising when the viewer can look wherever they want? If a company is trying to sell a car, in "2D" advertising the camera just points at the car. But what happens in "3D", or better, in virtual reality? There is nothing stopping the viewer looking anywhere but at the car, so the viewer won't get the perspective that the car company wants to display. Although, if used correctly, they may get an even better experience.
A great example of this is a new feature of Facebook that allows videos to be interactive, meaning that the user can pan 360 degrees in any direction within the video. The new Star Wars Battlefront game used this interactive feature perfectly, I thought. They posted a first-person video of a player flying across an open map, where you could look all around at the surrounding environment. This meant that someone watching the video could experience it their own way; whether it was looking at the stars, the surrounding terrain or the bike on which they were flying across the desert, they could gain their own perspective with only a small influence from the advertiser.

I think as soon as virtual reality becomes mainstream we are going to see a massive shift in advertising. As the user gains more control in the advert, their perspective will change, and so will advertising's tactics.

How do a variety of businesses / corporations protect against whistleblowers or possible data leakage points?

To answer this question I would first need to define what a leakage point is. This will help with further research. I would then start by interviewing a variety of different businesses and corporations, asking them what type of data they keep and what they define as the biggest threats to it. Do they consider whistleblowers and NSA surveillance a problem? What strategies have they implemented to combat these, and what priority do they place on each strategy?
I would also use available resources that give strategies for the above problem as extra material.
Looking at the problem from the other end (the whistleblower's view) will also help in finding why these breaches happen in the first place. It can also give insight into weaknesses in the strategies businesses / corporations had in place, which can then be related to other businesses' and corporations' strategies.

The underlying goal is to find out what strategies businesses and corporations have implemented, and how they evolve as the data gets more important or mission critical. How is data protection evolving with the evolving technology relative to "mission critical data"?

There has been a big change in the way data is collected about our environment. Companies are seeing the benefits of mobile devices, as most people now have a smartphone of some sort. Analysts are moving away from specialist equipment that costs millions of dollars to install and maintain, toward apps or social media as a way of gathering data.

If you look at a major highway in Google Maps you will notice live traffic information. A colour-coding system shows how fast the traffic is moving along that section of road. It also allows Google to accurately predict travel times between two points, or to adjust routes to cut times when navigating.
How do they do this? All Android-based phones (that have been set to allow Google to collect location data) send GPS coordinates of the device's location back to Google. This allows Google to determine how fast the phone has travelled between two points, which is then compared against normal travel times to determine whether traffic is flowing normally or not. Using one phone to update the traffic conditions is not accurate at all, but if you have 100 or more devices travelling through the same two points then a clear picture can be built of what the conditions are like.
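The idea above can be sketched in a few lines. This is only my assumed logic, not Google's actual algorithm: average the speeds of many devices between two checkpoints a known distance apart, require a minimum sample size, and compare against the road's normal speed. The function name, thresholds and colour labels are all my own assumptions.

```python
# Sketch of crowd-sourced traffic estimation (assumed logic, not
# Google's real pipeline). Each device reports how long it took to
# travel between two fixed checkpoints a known distance apart.

def classify_traffic(travel_times_s, distance_m, normal_speed_kmh,
                     min_samples=100):
    """Average many devices' speeds and compare to the normal speed."""
    if len(travel_times_s) < min_samples:
        return "insufficient data"  # one phone alone is not reliable
    speeds = [distance_m / t * 3.6 for t in travel_times_s]  # m/s -> km/h
    avg = sum(speeds) / len(speeds)
    ratio = avg / normal_speed_kmh
    if ratio > 0.8:
        return "flowing"      # green on the map
    elif ratio > 0.4:
        return "slow"         # orange
    return "congested"        # red

# 150 devices each take ~60 s to cover a 1 km stretch (60 km/h average)
times = [60.0] * 150
print(classify_traffic(times, 1000, 100))  # prints "slow"
```

One phone reporting 60 km/h could just be a slow driver; 150 of them in a row is traffic, which is why the `min_samples` guard is there.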

This is a much more affordable and simpler way than installing cameras or detection devices on all roads, motorways and highways and having them monitored. Not only that, mobile devices are already worldwide and the number of devices is growing in more locations. This makes it easier for Google to expand its Maps service.
This Google service has been around since 2007, so it is relatively old, but its reliability and feature set have greatly improved since. It has also become more integrated within the new versions of Android OS.

There are a few new innovations that have joined the smartphone data-gathering philosophy recently. One I have found is Sunshine. This is an Apple iOS-based app that uses data collected from Apple devices, plus input from multiple users, to predict the weather. Data from the barometric sensors in new Apple iPhones is combined with traditional weather predictions (although they are relying less on these) to give increased localised accuracy to the weather forecast for that day. Users can also report what the weather is like at their location, adding to the accuracy. If this app is released for Android-based smartphones too, then the data set would increase. This innovation is paving a way forward for weather forecasters.

There are uses for social media too in environmental data collection. Twitter is now becoming a faster source of information than government organisations. The US Geological Survey (USGS) has noticed this whenever there is an earthquake in almost any location. The USGS has more than 2,000 sensors, mostly based in the US, but what happens when there is an earthquake outside of where the sensors are located? They have turned to Twitter.
It takes less than 30 seconds to post a Tweet, and if there is an earthquake there are likely to be thousands of Tweets in a matter of minutes. The USGS now monitors Twitter, looking for a set of keywords within a set of parameters to detect earthquakes. In tests run against recent seismic events it was found that they can usually be alerted to an event using data from Twitter within 2 minutes. In 2014, the USGS was alerted to the earthquake in Napa, California in 29 seconds using Twitter data, before the shaking had even reached some locations.

New innovations like this, I believe, are going to take over from traditional data collection methods. Not only are they faster, but they are also, most of the time, more accurate, and they can be extremely cost effective.

The other day I was in a technology store trying to kill some time, so I was reading the cases of the software you could buy. There were anti-virus products, Microsoft Office products and a few others I can't remember now. Some of them had a link where you could go and insert your serial number to download the software online, but the products also included a DVD with the software on it to install. This got me thinking: why are companies still shipping DVDs and not converting to USB sticks with the software on them?

To answer the underlying question: why ship the product with a DVD install disk (or USB stick) anyway when you can download it online? Well, some people don't have the internet, or the bandwidth, or they don't want to download it as it takes extra time. Also some people, and I know I do, want to get a physical object out of their purchase, not just a paper card with a website address and an activation code on it.

But why not convert to USB stick installers instead of DVD installers? This could apply to purchasable DVD movies too. Almost every modern computer, TV, home theatre system and even DVD player comes with a USB input slot, while the number of these pieces of technology being released with DVD drives is steadily going down (apart from the DVD player, obviously).
Many modern laptops do not come standard with a drive anymore. The only way to install the software or watch the movie from a DVD is to go out and purchase a separate external DVD drive that connects to the computer via USB. The other alternative is to rip the software/movie off the DVD and copy it to a USB drive to move to the laptop. That kind of defeats the purpose of the DVD, doesn't it?

DVD disks also have a limitation on the writable space that can store data: about 4 to 5GB. USB sticks, on the other hand, have an upper limit of about 500GB, and that is ever growing; hardly any software install will ever be that big.
A recent game I bought came with 5 install disks in a big case. Why not have a single USB stick large enough to fit all the data? Instead of having to sit by the computer and wait for each disk to finish before removing it and inserting the next, with a USB stick you would only have to insert it and set the install going.
Another benefit is that the read speed of a USB stick is a lot faster than a DVD, so an install would happen a lot quicker. For movie DVDs this is less of a concern, although as the resolution of content goes up there will be limitations from the DVD.
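To put rough numbers on the speed difference, here is a back-of-envelope comparison. The speeds are assumptions on my part (roughly 10 MB/s for a typical DVD drive, roughly 100 MB/s for a decent USB 3.0 stick), so treat the exact minutes as illustrative only.

```python
# Back-of-envelope install-time comparison for a full single-layer
# DVD's worth of data (4.7GB) at assumed typical read speeds.

def transfer_minutes(size_gb, speed_mb_s):
    """Time in minutes to read size_gb at speed_mb_s (1GB = 1024MB)."""
    return size_gb * 1024 / speed_mb_s / 60

size = 4.7  # GB, one single-layer DVD
print(f"DVD (~10 MB/s):  {transfer_minutes(size, 10):.0f} min")
print(f"USB (~100 MB/s): {transfer_minutes(size, 100):.0f} min")
```

Under those assumptions the USB stick is an order of magnitude faster, roughly eight minutes versus under one, and the gap only widens once a game spans five disks.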

Software and movie DVDs that ship currently are not rewritable, effectively making them a throw-away item. USB sticks, on the other hand, are rewritable and could be recycled by the purchaser, or sent back when no longer needed, refurbished and sent out again. Packaging would also be smaller, meaning less cost to ship the same amount of product compared to DVDs. Custom-shaped USB sticks can also be created, meaning that 3D advertising can be used, whereas DVDs are limited to a printed label on one surface. There would be a slight increase in cost, as a DVD is cheaper to produce than a USB stick, but the cost of USB sticks has come down so much now that the difference would be minimal. The above two points should therefore balance out the cost.

The below infographic sums up some of what I have said above:

The benefits are there, so why haven't we switched? I do not know. Is it the nostalgia of the DVD? Or are DVDs going to be phased out, with the only way to access items being through the internet, leaving those without it in the cold? Or is it just a case like IPv4, where it works for now, but when push comes to shove and it can't keep up there will be a mad dash to change? Or are we simply not ready for it?
I am still surprised that DVDs are used.
What do you think?

Did the abstract tell you the three things I said it should? If not, what did it tell you?

The abstract for this paper tries to follow the three rules, but I feel it is lacking somewhat on the third rule. It follows the first rule by stating at the start what technology the paper is studying, and I think they have done a good job of explaining NFC in a short sentence. The second rule is addressed further down in the abstract. I think the sentence "In this paper we examined existing NFC applications, prototypes and studies from both academia and industry." needs to be moved up, to below the section that explains what the paper is about. The abstract would then make more sense with rule three, because they would have discovered that there weren't many sources published on the uses, so they found their own as well as creating more questions.

What seems to be the research question(s) they were trying to answer?

They say in their introduction that they want to research and discover the answer to two questions:

What are the benefits of currently developed NFC applications?

Which possible applications can be implemented in the future, and what benefits can we expect from them?

It also appears that they wanted to find out the different NFC operating modes and their benefits and drawbacks. I believe they did this to better understand the technology themselves and to inform their answers to the questions above.

What method(s) did they use to answer the question(s)

It appears that they reviewed about 50 studies or implementations of the technology. From these they pulled out which NFC operating mode was used and analysed the benefits of that mode in that scenario. This was then used to answer the questions the paper set out to answer, but also to create new questions. In the second-to-last paragraph of the introduction they explain their research method. It is mentioned that they communicated with academics studying Near Field Communication to help their research. They also say that this research shaped the final results of the paper.

How credible do you think the paper is? (hint: look at who authors are and where and when it is published also compare what they were asking with what they did)

The authors of the paper are from a university in Istanbul, Turkey. After looking them up and the other publications they have written, I found that one of them has written a book about NFC technology. I also noticed that almost all the papers they had written were about this communication method, and their papers have all been cited numerous times according to Google Scholar. The paper was published at a conference held in 2010, when NFC was still an emerging technology and little was known about it. I do think this paper is credible, and the results relate to a technology that was still little understood at the time. There is one thing that makes me wonder whether this started out as research with no real objective and then turned into a paper: they say that their research shaped their questions, so I wonder if the questions were created afterwards to make a paper out of the studies.

Did you agree, or not, with what they wrote in their conclusion? Why?

Yes, I do agree with their conclusion for a 2010 paper. There are parts of the conclusion that, now that more is known about the technology, may be incorrect. For example, this section may be incorrect, as new research has found that NFC on mobile phones can be exploited in some cases: "Also we think that this mode may promote NFC to become an important technology by enabling to store personal private data into mobile phones. By this way, users will not share any private information with third parties; instead they will store that private information in their mobile phones, and authorize people to access it."
I think the way they classified the technology into three modes was the correct way to approach the subject and has benefited their results.

Briefly describe two things that you learnt from the paper.

The first thing I found out was how each operating mode works. I had very little idea that there were three modes, and only limited knowledge of how the communication actually happens. The second thing was how each mode benefits a certain use case. Selecting the correct operating mode when setting out to develop a communication method using NFC will be crucial to its success. Each has its own benefits and drawbacks, and they will need to be fully explored before making a decision.

Finding Academic Articles

Finding academic articles that don't require paying to gain access to the document, within the category you are looking for, is hard. I found that I had to be very selective about the links I followed. Conference papers were my main sources, as they are mostly available to the public free of charge.
I had two documents, but after reviewing them in more depth I found that they didn't meet the standard I was looking for. After recently writing a new personal blog post I found a new area to explore, with more papers focusing on the same topic. This is where I found my Article 1. Article 2 was found after changing my search phrase and being very selective about the links I clicked on. I found a lot of NFC papers that didn't meet the length requirements.

How you found the article and what keywords you used?;Found it using Google Scholar. The keywords used were “XKEYSCORE analyes”

What kind of article it is?;It is a conference paper from iConference

All the reasons that you think it is an academic article;It fits most of the criteria that define an academic article. It has an author listed, as well as their contact information, an abstract, an introduction, a list of references, a series of questions to be answered and a discussion. It reviews work by others and draws from reliable resources and documents. It is missing a clear conclusion, though.

How many references it has?;
42 total. 29 of those are footnotes on the relevant pages.

How many citations it has?;Google Scholar showed 2.

Am I interested in properly reading the article or not?;
Yes, I am. If you read my blog post Personal Thoughts – Citizenfour and NSA spying, you will see that I recently watched a documentary based around the topics this paper covers. The documentary tells the story of Edward Snowden, who is mentioned in this paper. I have skim-read parts of the paper and it covers sections and questions that I have not yet explored. I am interested in reading it to gain more knowledge of this subject.

How you found the article and what keywords you used?;Found it using Google Scholar. The keywords used were “NFC tags in education”

What kind of article it is?;The source I found was published at the ICEMT 2010 Conference, 2-4 November 2010. So it is a conference paper, I believe.

All the reasons that you think it is an academic article;It fits most of the criteria that define an academic article. It has authors listed, as well as their contact information, an abstract, an introduction, a list of references and keywords, a series of questions to be answered and a conclusion. It is within the word range, just. It works through a series of steps laid out in the introduction to come to the conclusion. It is missing a clear discussion section, though, and it doesn't really review other papers.

How many references it has?;
56, although these are listed in a separate document on a website.

How many citations it has?;Google Scholar showed 47.

Am I interested in properly reading the article or not?;
I came up with an idea that uses NFC tags but I have little knowledge on how they work and how they have been implemented in the past. This paper explains both of these aspects in detail. It also included modes of use and the security aspects of these modes, something that I am yet to explore. For these reasons I believe I will read this article.

Government information-gathering tactics are a subject most of us don't talk about much. I can only think of two main reasons why: first, we don't actually know their tactics; and second, the thought of what they might know, or will find out if we talk, scares us into not talking.
There are only a few brave ones who go against their government, mostly not for personal gain, and share their knowledge of what's happening behind locked doors. These people are known as whistleblowers.

While procrastinating one evening, I was scrolling through recent movie releases and came across a trailer for a documentary titled Citizenfour. It was a short, mysterious trailer that left me wanting more. Intrigued, I found the full version of the documentary and sat back and watched what was an eye opener.

Up until a week or two ago I was someone who didn't talk or put much thought towards government intelligence. I knew from what I had heard that spying was widespread, but that was about it. A whistleblower by the name of Edward Snowden changed this and woke me up to the true extent of the spying.

The documentary starts by following reporters Laura Poitras and Glenn Greenwald, who have been contacted by someone who says he has information about the government. He says the world needs to know. He only calls himself citizenfour.

Edward was a contracted senior technology adviser for the NSA and a former employee of the CIA. After working for the NSA for a number of years he had seen enough and wanted the public to know the true extent of government spying. He flew from his job in Hawaii to Hong Kong in May 2013 to tell his story to the world. This brave man left his girlfriend, family and friends without telling them where he was going, with the possibility of never seeing them again.
This is where the documentary team meets the man who has secretly arranged a meeting with them. A series of meetings followed, Edward released the documents to the reporters, and they started to tell the world. After the news broke, the events that followed happened in rapid succession. The media went into a frenzy and the NSA tried to cover it up and find out who did it. Edward later publicly revealed it was him and went into hiding. He is currently in Moscow, where Russia is giving him asylum from the US government.

One thing that got to me throughout the meetings that were documented was how paranoid Edward was about the NSA watching him. At first I thought he looked stupid, but after hearing the extent of the spying that was happening I can now understand why. A few facts he mentioned that hit me hard were:

In 2011 one NSA facility could simultaneously monitor 1 billion communications at any given point. There were 20 NSA facilities in 2011.

On his work computer he could watch live feeds of over 1000 drones circling over people’s/targets houses all over the world.

The NSA database’s search engine, XKEYSCORE, could map your daily life by just using your credit card and a linked transportation pass.

As New Zealand citizens we usually take a back seat on this sort of topic because we don't think it affects us. Think again. The GCSB (the New Zealand spying agency) has, as shown in this document, joined with the NSA and is using the same technology to monitor us as well. Not only that, the NSA has a backdoor in and can use data that the GCSB has collected in its investigations. One of the documents Edward Snowden released, which the NSA keeps, is a definition of what a New Zealand citizen is and when the NSA is allowed access through the backdoor into the GCSB database.

One thing he mentioned a few times was the database search engine the analysts used to search through the data: XKEYSCORE. After finishing the documentary I went to a website set up with access to the documents he has released.

I found a series of "presentations", I guess you could call them, detailing what it was, with even instructions on how to use it.

Notice that along the bottom of the first two documents NZL is included. That means the document is relevant to the same system the GCSB uses. Another thing I found was a system, said to have been developed by the NSA, that was 10 times bigger than XKEYSCORE. Its name was TEMPORA. At one site this system could process 40 billion pieces of content a day. The documentation for it was released in 2012, so I am guessing those figures would have doubled or even tripled by now.

From this I don't intend to scare anyone into living like Patrick, under a rock, but to spread knowledge among your peers. Read about what is happening behind our backs. If you want to learn more, I would start with the documentary and then the Edward Snowden website.

Discourse analysis (DA) looks at what is meant 'beyond the sentence'. This means not only looking at the social meanings and surroundings of the sentence, but also studying each word (or part of a word) and its syntax or meaning. Looking at any given text through the lens of DA reveals how a message is communicated and how it can construct a social reality or view.

The example from this site is, I believe, the best way of describing it:
When the words "terrorist" or "freedom fighter" are used in a sentence to describe someone, they can bring on both positive and negative views; in most cases terrorist is negative and freedom fighter positive. If we then relate the term "muslim" to the above words, it can construct a different relationship to them.

What kinds of questions might it be useful for?

When questions or statements are introduced by the media, DA can be used to understand the meaning behind what is being said. This means looking at the ideologies that may be introduced by it for different cultures, and the relationships between what is said and what is meant. This can be applied to political texts too.

How might it be used in IT research?

I think it would be best used when the research involves social impacts. Questions or statements involving social media, researched from an Information Technology perspective, would be a good example (China's view on Facebook vs the rest of the world).

What are the strengths of the approach?

It breaks down the meaning of a piece of text and draws out the social relationships that multiple cultures could or would take from it. It can also put words or text into perspective when viewed in the big picture, as DA does.

What are the weaknesses of the approach?

The primary weakness is that although you can look at the social relationships, your own personal bias will affect the result you come out with. It is also a very work-intensive approach. There is a set of steps that need to be worked through, and these steps can, or have to, be applied to the whole text, all the way down to individual sentences or even words.