RIT TCOM333 Student Blog


Monthly Archives: February 2016

In the modern day of digitization, keeping sensitive information safe is a priority for maintaining data integrity. To make sure that data remains secure over time, it is imperative for any business, large or small, to perform a network audit.

Though it might seem obvious, you want to start by listing out all of your assets, which may range from the data you are trying to protect to employee access cards. You want to identify every point where data can be stored, received, or sent using the internet or similar digital networks. This will help you define what is known as a security perimeter: a physical or conceptual barrier that defines the boundaries of the audit you are performing. This will help you know what needs to be secured while excluding things that are not a security threat.
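The asset list and perimeter described above can be sketched as a simple inventory. This is a minimal illustration with made-up asset names; here "inside the perimeter" just means the asset touches the network:

```python
# Hypothetical asset inventory, tagged by whether each item can store,
# send, or receive data over the network (the post's perimeter test).
assets = [
    {"name": "customer database",          "networked": True},
    {"name": "employee ID cards",          "networked": False},
    {"name": "mail server",                "networked": True},
    {"name": "break-room vending machine", "networked": False},
]

# The security perimeter for the audit: only the networked assets.
perimeter = [a["name"] for a in assets if a["networked"]]
```

Everything outside `perimeter` (like the vending machine) is excluded from the audit as a non-threat, which is exactly the scoping the post recommends.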

Now that you know what can be attacked, it is important to know how it can currently be attacked as well. If you secure things blindly, you may leave a large hole in your security that can be exploited easily. This is why you need to create something called a threats list, which is exactly what it sounds like: a list of threats to your data and devices. This might be something as minor as an employee losing an ID card or something as severe as a powerful virus being uploaded and spread across the network being audited.

Now that all of the current threats are out of the way, you have to think about the future. You can do this by checking your own and global security trends to see what hackers are starting to do on the local and global scale. You might also consider asking, believe it or not, your competitors, who face similar issues. If they have been around longer, they might be willing to help you out and show you threats you could be facing in the future.

Calculating harm is another hugely important step in auditing, as you have to determine how large a countermeasure each situation warrants: it might be a bit silly to install a 250,000 dollar firewall on an internal network, but it may be worth installing one on external traffic if you can get it set up properly. Knowing the potential harm also feeds into your response plan; which assets you should dive to protect first in a breach is the central question of this step.
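One common way to put numbers on the harm calculation above (the post does not prescribe a formula, so this is just one option) is to score each threat as likelihood times impact and protect the highest scores first:

```python
# Toy risk scoring: likelihood and impact values here are invented
# for illustration, not measured figures.
threats = [
    {"name": "lost ID card",       "likelihood": 0.30, "impact": 2},
    {"name": "network-wide virus", "likelihood": 0.05, "impact": 10},
    {"name": "phishing email",     "likelihood": 0.50, "impact": 6},
]

for t in threats:
    t["risk"] = t["likelihood"] * t["impact"]

# Response plan ordering: dive to protect the riskiest first.
priority = sorted(threats, key=lambda t: t["risk"], reverse=True)
```

Note how a rare but severe virus can still rank below a common, moderate phishing threat; that trade-off is why the scoring step matters.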

The most basic form of securing your network access is by implementing a Network Access Control system, or NAC. Doing this prevents unauthorized users from accessing your network directly, as well as keeping employees out of areas of the network that they do not need access to in order to do their job.
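The two NAC jobs described above, keeping outsiders off the network and confining employees to the areas they need, can be sketched as an admission check. All device addresses, roles, and segment names below are hypothetical:

```python
# Toy NAC policy: only allow-listed devices may join, and each
# device's role limits which network segments it can reach.
ALLOWED_DEVICES = {
    "aa:bb:cc:00:00:01": "accounting",
    "aa:bb:cc:00:00:02": "engineering",
}
SEGMENT_ACCESS = {
    "accounting":  {"finance-vlan"},
    "engineering": {"dev-vlan", "build-vlan"},
}

def admit(mac: str, segment: str) -> bool:
    """Admit a device to a segment only if it is known AND its role needs it."""
    role = ALLOWED_DEVICES.get(mac)
    return role is not None and segment in SEGMENT_ACCESS.get(role, set())
```

An unknown device fails the first check (unauthorized user kept out); a known device asking for the wrong segment fails the second (employee kept out of areas they don't need).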

After you have your NAC implemented, you need an Intrusion Prevention System, or IPS, which commonly comes in the form of a firewall, either software or hardware. It is recommended that you use a second-generation firewall, as it performs an advanced analysis on all network traffic that passes through and flags unusual cases.
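The "flags unusual cases" behavior can be pictured as a set of rules applied to each connection. This is only a toy sketch; the thresholds and port list are invented and real firewalls use far richer analysis:

```python
# Toy traffic-flagging rules (invented thresholds, not real policy).
def flag(conn):
    """Return the list of reasons a connection looks unusual."""
    reasons = []
    if conn["packets_per_sec"] > 10_000:
        reasons.append("possible flood")
    if conn["dest_port"] in {23, 445}:   # telnet / SMB, commonly abused
        reasons.append("legacy/risky port")
    return reasons
```

Ordinary HTTPS traffic comes back with no reasons and passes through, while a high-rate connection to a legacy port gets flagged for review.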

Identity and Access Management means that you control what users have access to based on automatically or manually presented credentials. This segments the network and may keep damage to a minimum if an employee’s credentials are compromised. It also keeps employees from committing attacks, such as stealing and using credit card information from their customers.
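Role-based permissions are one common way to implement the access control described above. In this hedged sketch (role and action names are made up), a cashier can charge a card during a sale but can never pull stored card data, which is the kind of insider attack the post mentions:

```python
# Toy identity and access management: credentials resolve to a role,
# and the role bounds what the user may do.
ROLES = {
    "cashier": {"process_sale", "charge_card"},
    "auditor": {"view_transaction_log"},
}

def authorized(role: str, action: str) -> bool:
    """An unknown role, or an action outside the role, is denied."""
    return action in ROLES.get(role, set())
```

Because each role only carries the permissions its job requires, compromised credentials can only reach that role's slice of the network, keeping damage to a minimum.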

Creating backups is commonplace at home and should become a common practice on an enterprise network system. While we may think of an outside person obtaining access to a network as the primary threat, more often than not the primary cause of data loss is accidental. To prevent this, you need either on-site or off-site storage, as well as regular backups of your network. Securing access to your backups is also critical: the data on a recent backup may be as valuable as the data currently on your network.
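A regular backup routine can be as simple as copying the data into timestamped snapshots and rotating old ones out. This minimal standard-library sketch illustrates the idea; real enterprise backups would add encryption and off-site replication:

```python
import shutil
from pathlib import Path
from datetime import datetime

def back_up(data_dir: Path, backup_root: Path, keep: int = 3) -> Path:
    """Copy data_dir into a timestamped snapshot, keeping the last `keep`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")
    dest = backup_root / f"snapshot-{stamp}"
    shutil.copytree(data_dir, dest)
    # Rotate: drop the oldest snapshots beyond the retention limit.
    snapshots = sorted(backup_root.glob("snapshot-*"))
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return dest
```

The retention limit matters for the point the post makes: every extra snapshot you keep is another copy of your data whose access must be secured.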

An easy way for a hacker to gain access to a system or network is through phishing or spear phishing attempts via corporate or private email accounts. If you have a private web server, you may consider requiring encryption for sensitive emails, as well as running your own spam filter and employee training program. A filter may get rid of some threats, but a well-trained employee can make one nearly unnecessary.
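The spam-filter idea can be sketched as a red-flag scorer. Real filters are statistical rather than keyword-based, and the phrases and threshold below are invented, but the shape is the same:

```python
# Toy phishing filter: count red-flag phrases and compare to a threshold.
RED_FLAGS = [
    "verify your password",
    "urgent wire transfer",
    "click here immediately",
]

def looks_phishy(body: str, threshold: int = 1) -> bool:
    body = body.lower()
    hits = sum(1 for phrase in RED_FLAGS if phrase in body)
    return hits >= threshold
```

A message like "URGENT wire transfer needed - verify your password" trips two flags and gets quarantined, while a routine report passes; the trained employee is the backstop for whatever a simple rule set misses.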

Finally, preventing physical intrusions is an important and often forgotten aspect of network security. If someone is able to get on site and into an employee's office, your entire network may be compromised. Making sure no one breaks into your office is as important as wiping a lost or stolen company device.

This entire list seems a bit daunting to a first-time business owner or security professional, and several of these steps could be compressed. For example, the threat list and future threat list could be handled in one step, as could installing the NAC and IPS systems, since both relate to how the network is set up. I also do not believe that email should be its own step; while important, it can be combined with IPS and NAC to form a single step. However, I believe the author left these separate to emphasize that each is important in its own right. Beyond that, this lengthy process is unavoidably complex, but well laid out.


In the career field of computer security, one of the jobs incorporated into the field is being a penetration tester. A penetration tester is one who discovers vulnerabilities on a computer network or system that could be exploited by a hacker. Penetration testers (also known as 'pen testers') follow a specific process that leads to a well-secured, efficient network that benefits the people in charge of the system.

One of the first steps involved in pen testing is getting together a team to approach the network from different directions. This way the team can attack the network from both the inside and the outside in order to detect vulnerabilities.

Another important step in the pen testing process is planning. If the pen tester is testing vulnerabilities using social engineering or a phishing scam, it is important to assemble the proper team and plan out the different possible scenarios. With social engineering, planning is key because anything can go wrong, which means a good deal of improvisation is needed to take advantage of certain vulnerabilities.

After much planning and teamwork to detect vulnerabilities in the network and computer system, it is time to analyze the system and think of how to resolve the issues involved. The best way to improve a penetration test is to do it over and over again, especially after patching up the system after doing previous penetration tests.

Andrew McKenzie


Any modernized software company utilizes some form of version control in one way or another. Some (unfortunately) stick with the old “give me a USB with your code changes on it” style of version control, while others use systems like Git, SVN, or Mercurial.

Modern systems like the aforementioned stick to what is now a fairly standardized process. When programmers go to work, the first thing they do is synchronize their computer with the version control system so that the code they're working with is up to date. Then they make some changes and go back into the version control system to upload said changes for others to grab, so they too are up to date. This is all well and good, but this process could very well go a step further.
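The daily sync-edit-commit cycle described above can be modeled abstractly. This toy "server" is just a list of revisions, not a real system like Git or SVN, but it shows the same round trip:

```python
# Toy version control: the server holds a linear history of revisions.
class ToyVCS:
    def __init__(self):
        self.revisions = [{"files": {}}]          # initial empty revision

    def sync(self):
        """Start of day: fetch the latest revision as a working copy."""
        return dict(self.revisions[-1]["files"])

    def commit(self, files):
        """Upload changes so teammates can grab them on their next sync."""
        self.revisions.append({"files": dict(files)})

server = ToyVCS()
working_copy = server.sync()                      # now up to date
working_copy["main.py"] = "print('hello')"        # make some changes
server.commit(working_copy)                       # publish for others
```

The gap the post points at is visible here: between `sync()` and `commit()`, nobody else sees your edits, which is exactly what live collaborative editing would change.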

Most people by now have used Google's suite of office applications: Docs, Sheets, and Slides. In that system, it is possible to edit a file live at the same time as multiple other people and see their changes happen in real time. What I'm proposing is that version control meets that live, collaborative editing environment. Something like this already exists, in fact: it's called codeanywhere. If this sort of system were implemented in more software companies, rather than the currently used Git, SVN, CVS, or Mercurial, it would bring an unprecedented level of collaboration to the software industry. This would fix problems like merging two sets of massive changes to the same file into one working file, and encourage more communication between developers.

-Jeremy Quinn


As a Management Information Systems major, my degree can take me down a variety of paths. One of the popular career choices is project management: the discipline of initiating, planning, executing, monitoring/controlling, and closing a project. Since projects are temporary and unique endeavors, it is important for the processes to flow together nicely. There are many approaches to project management but fundamentally, they all incorporate the same steps. In today's society, the Agile methodology is one of the more popular choices, especially in software development. This process includes a series of repeated development, implementation, and test steps in order to ensure functionality and satisfaction. One of the best-liked features of this methodology is that the process is rather transparent and allows for necessary adjustments.

While this process may seem foolproof, there is always room for error. One major issue that I foresee, if my career path should lead me to project management, is the disconnect between members of differing industries. The jargon utilized during projects such as software development could cause confusion when a member of the commissioning firm reviews the documentation. I believe that the best way to improve the Agile methodology is to improve education and organization. By providing all parties involved with a general education regarding the processes that will take place, communicating wants and needs becomes that much easier. Organization is also a key feature of project management that could use some improvements. There is no generally accepted rule for organization, therefore it is often difficult to transition from project to project.

-Paige Glickman


One of the processes in computer engineering that I have faced quite often is the act of turning a boolean logic expression into a physical circuit (usually on a breadboard). This is often done by taking the original expression and using a variety of logic theorems, such as De Morgan's theorem, to simplify the expression so it can be implemented with the least number of logic gates. The smaller the number of logic gates needed to implement a circuit, the faster and more efficient it can be. Then the circuit is often drawn out in a modeling program such as Quartus II. Symbols are used to represent logic gates and pins such as inputs and outputs. Wires connect the whole thing, and it can then be plugged into ModelSim (a waveform simulation program) to check if it matches the original desired output. If the waveform in ModelSim validates the circuit, it is then time to build the actual circuit. Using the breadboard, tools, chips, and wires from my lab kit, I carefully arrange parts and connect things to complete the circuit. Once connected to a power source, LEDs can be used to confirm the accuracy of the circuit.
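The validation step described above, checking that the simplified expression still matches the original, can be done exhaustively in software with a truth table, which is essentially what the ModelSim waveform comparison confirms. A small sketch using De Morgan's theorem as the example:

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively compare two boolean functions over every input combination."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

# De Morgan's theorem: (not A) and (not B) == not (A or B).
# The right-hand form needs one OR gate and one inverter instead of
# two inverters and an AND gate.
original   = lambda a, b: (not a) and (not b)
simplified = lambda a, b: not (a or b)
```

Because a two-input circuit has only four input combinations, the check is instant, and it catches the human errors that show up when skipping the digital modeling step.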

Overall, this is already a pretty efficient process. It is possible to draw the circuit on paper and skip all of the digital modeling steps; however, it is then much more likely that errors will creep in. One way this process could be improved is with a program that takes the expression as input and outputs directly how to build the circuit. It would save time and effort as well as reduce the risk of human error. Another way to improve the process would be to use more efficient circuit components. This will improve gradually over time as technology allows for faster and better parts.