Category: General IT

APIPA has been included in every version of Windows since the late 1990s, and equivalent link-local addressing is built into all versions of Mac OS X.

APIPA (Automatic Private IP Addressing) is a fallback mechanism that gives DHCP clients self-assigned IP addresses when no DHCP server is available. When that happens, APIPA assigns an address from the link-local range 169.254.0.1 to 169.254.255.254 with a default mask of 255.255.0.0.
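To make the range concrete, here is a minimal Python sketch (standard library only; illustrative, not from the original article) that checks whether an address falls inside the APIPA block:

```python
import ipaddress

# The IPv4 link-local (APIPA) block: 169.254.0.0/16, i.e. mask 255.255.0.0.
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr: str) -> bool:
    """Return True if addr is a self-assigned link-local (APIPA) address."""
    return ipaddress.ip_address(addr) in APIPA_NET

print(is_apipa("169.254.10.20"))   # True: self-assigned, DHCP likely failed
print(is_apipa("192.168.1.10"))    # False: an ordinary private address
```

Seeing a 169.254.x.x address on a machine is therefore a quick diagnostic clue that the DHCP server was unreachable.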

Clients use ARP (Address Resolution Protocol) to verify that their chosen address doesn’t conflict with another host on the network. APIPA is enabled on all interfaces of all DHCP clients in virtually every modern operating system.

A new infographic from Varonis, titled “10 Cyber Security Myths Putting Your Business at Risk,” identifies which claims are myth and which are reality. If you are like most small business owners, you probably aren’t a digital security expert, so a look at this infographic may be the best way to identify weaknesses in your security protocol.

With small businesses increasingly becoming targets of cyber-attacks, it is extremely important for owners to stay abreast of the latest developments in digital security.

On the official Varonis blog, Senior Director of Inbound Marketing Rob Sobers writes, “The proliferation of high-profile hacks in the news cycle often tricks small- and medium-sized businesses into thinking that they won’t be targets of attack.”

But this may not be the case, Sobers warns. Staying in the know makes it much harder for you to fall victim to the relentless attacks by cybercriminals.

Sobers adds, “If you or your employees believe any of the myths below, you could be opening up your business to unknown risk.”

The number one myth listed on the new infographic? ‘A strong password is enough to keep your business safe’. Although a strong password is important, and certainly better than ‘Admin1234’, you need to do more.

Adding two-factor authentication and data monitoring provides another layer of protection. And in many cases, that added layer is enough to drive the average hacker to look for easier targets.
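For a sense of what that second factor actually is under the hood, here is a minimal sketch of the HOTP algorithm (RFC 4226), which the rotating codes in authenticator apps are built on. This is a teaching example using only Python’s standard library, not a production implementation:

```python
import hmac, struct, hashlib

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (the basis of TOTP apps)."""
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, unix_time: float, step: int = 30) -> str:
    """TOTP is just HOTP with the counter derived from the clock."""
    return hotp(secret, int(unix_time // step))

print(hotp(b"12345678901234567890", 0))  # "755224" (RFC 4226 test vector)
```

Because the code depends on a shared secret and the current time, a stolen password alone is not enough to log in.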

Another myth listed on the infographic? “Small and medium-size businesses aren’t targeted by hackers.” This is false: hackers are opportunists who will target anyone they can profit from, and small businesses are not excluded.

The 2018 Verizon Data Breach Investigations Report has revealed 58 percent of data breach victims are small businesses, so the idea the size of your business might exclude you is definitely a myth.

Cybercriminals hack computer systems for a variety of reasons. Once they breach your security, they could use your systems to launch a DDoS attack, use your IP address for other nefarious purposes, and more.

Much like some businesses believe they won’t be attacked because of their size, other businesses wrongly assume that they won’t be attacked because of the industry they’re in. This myth also goes hand-in-hand with the belief that some companies don’t have anything “worth” stealing. The reality is that any sensitive data, from credit card numbers to addresses and personal information, can make a business a target.

What’s more, even if the data being targeted doesn’t have resale value on the dark web, it may be imperative for the business to function. Ransomware, for example, can render data unusable unless you pay for a decryption key. This can make attacks very profitable for cybercriminals, even if the data is deemed “low value.”

Anti-virus software is certainly an important part of keeping your organization safe — but it won’t protect you from everything. The software is just the beginning of a comprehensive cybersecurity plan. To truly protect your organization, you need a total solution that encompasses everything from employee training to insider threat detection and disaster protection.

While outsider threats are certainly a concern and should be monitored extensively, insider threats are just as dangerous and should be watched just as closely. In fact, research suggests that insider threats can account for up to 75 percent of data breaches.

These threats can come from anyone on the inside, from disgruntled employees looking for professional revenge to content employees without proper cybersecurity training, so it’s important to have a system in place to deter and monitor insider threats.

While IT has a big responsibility when it comes to implementing and reviewing policies to keep companies cyber safe, true cybersecurity preparedness falls on the shoulders of every employee, not just those within the information technology department.

For example, according to Verizon, 49 percent of malware is installed over email. If your employees aren’t trained on cybersecurity best practices, like how to spot phishing scams and avoid unsafe links, they could be opening up your company to potential threats.

If your business has employees who travel often, work remotely or use shared workspaces, they may incorrectly assume that a password keeps a Wi-Fi network safe. In reality, Wi-Fi passwords primarily limit the number of users per network; other users with the same password can potentially view the sensitive data being transmitted. These employees should invest in VPNs to keep their data more secure.

A decade or so ago it may have been true that you could tell immediately if your computer was infected with a virus — tell-tale signs included pop-up ads, slow-to-load browsers and, in extreme cases, full-on system crashes.

However, today’s malware is far stealthier and harder to detect. Depending on the strain your computer or network is infected with, it’s quite possible that your compromised machine will continue running smoothly, allowing the virus to do damage for some time before detection.

Employees often assume that their personal devices are exempt from the security protocols the company’s computers are subject to. As a result, Bring Your Own Device (BYOD) policies have opened companies up to cyber risks they may not be aware of. Employees who use their personal devices for work-related activities need to follow the same protocols put in place on all of the network’s computers.

These rules aren’t limited to cell phones and laptops. BYOD policies should cover all devices that access the internet, including wearables and any IoT devices.

Cybersecurity is an ongoing battle, not a task to be checked off and forgotten about. New malware and attack methods consistently put your system and data at risk. To truly keep yourself cyber safe, you have to continuously monitor your systems, conduct internal audits, and review, test, and evaluate contingency plans.

Keeping a business cyber safe is a continuous effort and one that requires every employee’s participation. If anyone at your company has fallen victim to one of the myths above, it may be time to rethink your cybersecurity training and audit your company to assess your risk.

A filter bubble is a state of intellectual isolation that can occur when websites use algorithms to selectively guess which information a user wants to see, and then serve information to the user according to that assumption.

Websites make these assumptions based on information related to the user, such as past click behavior, browsing history, search history, and location. For that reason, websites are more likely to present only information that aligns with the user’s past activity.

A filter bubble, therefore, can cause users to get significantly less contact with contradicting viewpoints, causing the user to become intellectually isolated.

Personalized search results from Google and personalized news stream from Facebook are two perfect examples of this phenomenon.

What are filters and where exactly is the “bubble?”

Language and location are the two most basic filters Google and other sites use to deliver personalized results. If you are searching Google for an electrician and you speak English and live in Ohio, Google knows there’s no need to show you the link to a bilingual electrician in Texas.

There are many other factors that Google and others use to personalize results to you. All of these filters create a bubble around you. The information that filters deem important to you goes into the bubble; the rest stays outside of the bubble and does not show up in search results.

The term filter bubble was coined by internet activist Eli Pariser in his book, “The Filter Bubble: What the Internet Is Hiding from You” (2011).

Pariser relates a case in which a user searches for “BP” on Google and gets investment news regarding British Petroleum as the search result, while another user receives details on the Deepwater Horizon oil spill for the same keyword. These two search results are noticeably different and could affect the searchers’ impression of the news surrounding the British Petroleum company.

According to Pariser, this bubble impact could have adverse effects on social discourse. However, others say the impact is negligible.

How Are Filter Bubbles Created?

Algorithmic websites, like many search engines and social media sites, show users content based on their past behavior. Depending on what you’ve clicked on in the past, the website shows you what it thinks you are most likely to engage with.

Social media companies, like Facebook, want you to keep using their product. So instead of giving you a feed of all the available information, Facebook is selective about what it puts in your feed. People often assume the information they see is unbiased when it is actually skewed toward their existing beliefs.
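As a toy illustration (this is not any real platform’s actual algorithm), simply ranking results by a user’s past clicks is enough to produce a bubble: two users issuing the same query see different orderings. The user histories and topics below are invented for the example:

```python
def rank(results, click_history):
    """Score each result by how often the user clicked its topic before."""
    return sorted(results, key=lambda r: -click_history.count(r["topic"]))

results = [
    {"title": "BP stock rises", "topic": "finance"},
    {"title": "BP oil spill report", "topic": "environment"},
]

investor = ["finance", "finance", "sports"]       # past click topics
activist = ["environment", "environment"]

print(rank(results, investor)[0]["title"])   # the finance story first
print(rank(results, activist)[0]["title"])   # the environment story first
```

Neither user ever asked to be filtered; the divergence falls out of the ranking function alone, which is exactly Pariser’s “BP” example in miniature.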

Mark Zuckerberg himself has emphasized the importance of the news feed in Facebook and the need to customize it from user to user.

We rarely go past page one of our Google search results. The results are highly filtered (which most of us prefer, living in a bubble), meaning other content gets demoted. And the personalization increases as the algorithm gets more training on your interests, so the wall of the bubble grows thicker and thicker.

Why are Filter Bubbles Bad?

After a while of only seeing results they agree with, people begin to believe that they are more correct and then their views are strengthened and solidified. This means that when someone disagrees with them, both of their views are likely to be more polarized. As a result, these people are less likely to agree with each other, or even talk to each other.

Filter bubbles are a kind of “intellectual isolation”. This isolation creates ignorance to other perspectives and opinions.

The negative of personalization and filter bubbles is that you will only see information that you like. Google is not going to challenge or disagree with you. (Its search results and what flows into your “bubble” are all based on algorithms.) It’s important to know, you’re only seeing one side of the story: Your side. When we are only surrounded by information and people we agree with, we miss opportunities to learn and grow.

The other downside associated with the bubble is page ranking. Search engines use this to categorize and order pages based on the number of hits or the popularity of a given website or piece of content. Popularity doesn’t make the information accurate, but we tend to believe that because a page ranks higher in the search than other websites, it must be legitimate. This takes away our ability to dig deeper for relevant information.

How can you burst out of it?

The following steps can help you burst the filter bubble:

Clear your search history.

Turn off targeted ads using ad-blocking software.

Delete your browser cookies.

Disable tracking-cookie features.

Keep your Facebook data private, altogether.

Browse in incognito or anonymous mode.

Private search engines are a great way to avoid filter bubbles.

What is the difference between the Filter Bubble and Personalization?

Personalization is the process; the filter bubble is the result. Personalization makes you see only content in your feed that is supposed to be relevant to you. That creates a filter bubble in which everything else is filtered out.

The HTTP-over-QUIC experimental protocol will be renamed to HTTP/3 and is expected to become the third official version of the HTTP protocol, officials at the Internet Engineering Task Force (IETF) have revealed.

This will become the second Google-developed experimental technology to become an official HTTP protocol upgrade after Google’s SPDY technology became the base of HTTP/2.

HTTP-over-QUIC is a rewrite of the HTTP protocol that uses Google’s QUIC instead of TCP (Transmission Control Protocol) as its base technology.

QUIC stands for “Quick UDP Internet Connections” and is, itself, Google’s attempt at rewriting the TCP protocol as an improved technology that combines HTTP/2, TCP, UDP, and TLS (for encryption), among many other things.

In a mailing list discussion last month, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Groups, made the official request to rename HTTP-over-QUIC to HTTP/3 and to pass its development from the QUIC Working Group to the HTTP Working Group.

In the discussions that followed, stretching over several days, Nottingham’s proposal was accepted by fellow IETF members, who gave their official seal of approval for HTTP-over-QUIC to become HTTP/3, the next major iteration of the HTTP protocol, the technology that underpins today’s World Wide Web.

According to web statistics portal W3Techs, as of November 2018, 31.2 percent of the top 10 million websites support HTTP/2, while only 1.2 percent support QUIC.

What is QUIC?

QUIC (Quick UDP Internet Connections) is a new transport protocol for the internet, developed by Google.

QUIC solves a number of transport-layer and application-layer problems experienced by modern web applications while requiring little or no change from application writers. QUIC is very similar to TCP+TLS+HTTP2 but implemented on top of UDP. Having QUIC as a self-contained protocol allows innovations which aren’t possible with existing protocols as they are hampered by legacy clients and middleboxes.

The first time a QUIC client connects to a server, it must perform a one-round-trip handshake in order to acquire the information needed to complete the handshake. The client sends an inchoate (empty) client hello (CHLO), and the server responds with a rejection (REJ) containing the information the client needs to make forward progress, including the source-address token and the server’s certificates. The next time the client sends a CHLO, it can use the cached credentials from the previous connection to immediately send encrypted requests to the server.
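The round-trip saving can be sketched with a toy simulation (illustrative Python, nothing like the real QUIC wire format): the first connection pays one extra round trip to fetch credentials from the server, while a repeat connection reuses the cache and pays none:

```python
# Hypothetical server credentials for the sketch.
SERVER_CREDS = {"source_address_token": "tok", "cert": "server-cert"}

def server_handle(chlo):
    """Accept a CHLO with valid creds; otherwise send a REJ carrying them."""
    if chlo.get("creds") == SERVER_CREDS:
        return "accepted"
    return ("REJ", SERVER_CREDS)

def connect(cache):
    """Return the number of EXTRA round trips spent on the handshake."""
    round_trips = 0
    reply = server_handle({"creds": cache.get("creds")})
    if reply != "accepted":
        round_trips += 1              # the inchoate CHLO cost one round trip
        cache["creds"] = reply[1]     # cache the token + certs for next time
        reply = server_handle({"creds": cache["creds"]})
    return round_trips

cache = {}
print(connect(cache))  # 1: first contact needs the REJ round trip
print(connect(cache))  # 0: repeat connection goes straight to encrypted data
```

The cached-credentials path is what lets QUIC send application data with zero handshake round trips on repeat visits.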

Congestion Control

QUIC has pluggable congestion control and provides richer information to the congestion control algorithm than TCP. Currently, Google’s implementation of QUIC uses a reimplementation of TCP Cubic and is experimenting with alternative approaches.

One example of richer information is that each packet, both original and retransmitted, carries a new sequence number. This allows a QUIC sender to distinguish ACKs for retransmissions from ACKs for originals, avoiding TCP’s retransmission-ambiguity problem. QUIC ACKs also explicitly carry the delay between the receipt of a packet and the sending of its acknowledgment; together with the monotonically increasing sequence numbers, this allows precise round-trip-time calculation.
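A minimal sketch of why unique per-transmission numbers remove the ambiguity (illustrative Python, not real QUIC code): because the retransmission gets its own number, an ACK names exactly which transmission it is for, so the RTT sample is always measured from the right send time:

```python
send_times = {}

def send(packet_number, now):
    """Record the send time of each transmission under its own number."""
    send_times[packet_number] = now

def on_ack(packet_number, now):
    """Unambiguous RTT sample: the ACK names a unique transmission."""
    return now - send_times[packet_number]

send(1, now=0.0)           # original transmission of some data
send(2, now=1.0)           # retransmission of the SAME data, new number
print(on_ack(2, now=1.1))  # 0.1s, clearly for the retransmission

# Under TCP, both transmissions would carry the same sequence number, and an
# ACK arriving at t=1.1 could mean either a 1.1s or a 0.1s round trip.
```

That ambiguity is exactly what forces TCP (per Karn’s algorithm) to throw away RTT samples for retransmitted segments.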

Finally, QUIC’s ACK frames support up to 256 NACK ranges, so QUIC is more resilient to reordering than TCP (with SACK), as well as able to keep more bytes on the wire when there is reordering or loss. Both client and server have a more accurate picture of which packets the peer has received.

Multiplexing

One of the larger issues with HTTP2 on top of TCP is the issue of head-of-line blocking. The application sees a TCP connection as a stream of bytes. When a TCP packet is lost, no streams on that HTTP2 connection can make forward progress until the packet is retransmitted and received by the far side – not even when the packets with data for these streams have arrived and are waiting in a buffer.

Because QUIC is designed from the ground up for multiplexed operation, lost packets carrying data for an individual stream generally only impact that specific stream. Each stream frame can be immediately dispatched to that stream on arrival, so streams without loss can continue to be reassembled and make forward progress in the application.
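A toy model of the difference (illustrative Python, heavily simplified): packets 1 through 4 are sent but packet 2 is lost. Over a single ordered byte stream, nothing past the gap can be delivered; with per-stream framing, streams that lost nothing still make progress:

```python
# (sequence number, stream, data) for the packets that arrived; 2 was lost.
packets = [(1, "A", "a1"), (3, "B", "b1"), (4, "A", "a2")]

def tcp_deliverable(packets):
    """A TCP-like stream delivers nothing past the first sequence gap."""
    delivered, expected = [], 1
    for seq, stream, data in sorted(packets):
        if seq != expected:
            break                      # head-of-line blocking: stall here
        delivered.append(data)
        expected += 1
    return delivered

def quic_deliverable(packets):
    """QUIC-style framing dispatches each frame to its stream on arrival."""
    streams = {}
    for seq, stream, data in packets:
        streams.setdefault(stream, []).append(data)
    return streams

print(tcp_deliverable(packets))   # ['a1']: everything behind the gap stalls
print(quic_deliverable(packets))  # {'A': ['a1', 'a2'], 'B': ['b1']}
```

In the TCP-like model, stream B’s data sits in a buffer even though it arrived intact; in the QUIC-like model only the stream whose packet was lost has to wait.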

Forward Error Correction

In order to recover from lost packets without waiting for a retransmission, QUIC can complement a group of packets with an FEC packet. Much like RAID-4, the FEC packet contains parity of the packets in the FEC group. If one of the packets in the group is lost, the contents of that packet can be recovered from the FEC packet and the remaining packets in the group. The sender may decide whether to send FEC packets to optimize specific scenarios (e.g., beginning and end of a request).
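The RAID-4-style parity is easy to sketch (illustrative Python): the FEC packet is the XOR of the equal-length packets in its group, so any single lost packet can be rebuilt from the parity and the survivors:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt1", b"pkt2", b"pkt3"]   # the packets in one FEC group
fec = reduce(xor_bytes, group)        # parity packet sent alongside them

# Suppose packet 2 is lost in transit: XOR the parity with the survivors.
recovered = reduce(xor_bytes, [group[0], group[2], fec])
print(recovered)  # b'pkt2'
```

The trade-off is bandwidth: every group carries one extra packet of overhead, which is why a sender might reserve FEC for loss-sensitive moments such as the start and end of a request.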

Connection Migration

QUIC connections are identified by a 64-bit connection ID, randomly generated by the client. In contrast, TCP connections are identified by a 4-tuple of source address, source port, destination address, and destination port. This means that if a client changes IP addresses (for example, by moving out of Wi-Fi range and switching over to cellular) or ports (if a NAT box loses and rebinds the port association), any active TCP connections are no longer valid. When a QUIC client changes IP addresses, it can continue to use the old connection ID from the new IP address without interrupting any in-flight requests.
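A small sketch of the contrast between the two lookup keys (illustrative Python; the addresses and ports are made up): the 4-tuple key breaks the moment the client’s IP changes, while a random connection ID survives the move:

```python
import secrets

connections = {}  # the server's table of live sessions

# QUIC-style key: a 64-bit connection ID chosen randomly by the client.
conn_id = secrets.token_hex(8)
connections[conn_id] = "session-state"

# TCP-style key: (source IP, source port, destination IP, destination port).
tcp_key = ("10.0.0.5", 51000, "203.0.113.7", 443)
connections[tcp_key] = "session-state"

# The client moves from Wi-Fi to cellular and gets a new source IP.
new_tcp_key = ("172.16.9.9", 51000, "203.0.113.7", 443)

print(new_tcp_key in connections)  # False: the TCP connection is orphaned
print(conn_id in connections)      # True: the QUIC session just continues
```

Since the connection ID is independent of the network path, in-flight requests survive the address change instead of being reset and retried.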

For a detailed explanation, read the book: HTTP/3 Explained by Daniel Stenberg

HTTP/3 explained is a free and open booklet describing the HTTP/3 and QUIC protocols.

Are you concerned about your online security? With more data breaches occurring daily, it’s crucial to protect yourself with these simple tips.

This infographic is a comprehensive look at how you can reduce your online visibility to protect your privacy, but still be seen by your family and friends. From browsing the internet to safety on social media platforms, you don’t need to be a technical genius to lessen your online risk.

You don’t have to leave the grid to disappear from hackers and unscrupulous businesses who exploit you and your information for their gain without your knowledge. However, it’s critical to protect your data on each platform you use.

Unfortunately, these big corporations don’t always have our best interests at heart. As we’ve seen from the multiple data breaches, there are times that consumers aren’t told about the hack until it was too late. Repairing your credit and personal information after a data hack is scary. By locking down your data now, you’ll save yourself a bigger headache later.

HTTP/2 (originally named HTTP/2.0) is a major revision of the HTTP network protocol used by the World Wide Web. It was derived from the earlier experimental SPDY protocol, originally developed by Google.

A protocol is a set of rules that govern the data communication mechanisms between clients (for example web browsers used by internet users to request information) and servers (the machines containing the requested information).

Protocols usually consist of three main parts: Header, Payload, and Footer.

The Header placed before the Payload contains information such as source and destination address as well as other details (such as size and type) regarding the Payload.

The Payload is the actual information transmitted using the protocol.

The Footer follows the Payload and works as a control field to route client-server requests to the intended recipients along with the Header to ensure the Payload data is transmitted free of errors.


The system is similar to the postal mail service: the letter (Payload) is inserted into an envelope (Header) with the destination address written on it, then sealed with glue and a postage stamp (Footer) before it is dispatched.
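A generic sketch of the Header/Payload/Footer structure (illustrative Python; this is no particular real protocol) with a checksum footer for error detection, as described above:

```python
import json, zlib

def build_frame(src: str, dst: str, payload: bytes) -> bytes:
    """Header (addresses + payload size), then payload, then CRC32 footer."""
    header = json.dumps({"src": src, "dst": dst, "size": len(payload)}).encode()
    footer = zlib.crc32(payload).to_bytes(4, "big")   # error-detection field
    return header + b"\n" + payload + footer

def parse_frame(frame: bytes):
    """Split the frame back apart and verify the payload against the footer."""
    header_raw, rest = frame.split(b"\n", 1)
    header = json.loads(header_raw)
    payload, footer = rest[:header["size"]], rest[header["size"]:]
    assert zlib.crc32(payload) == int.from_bytes(footer, "big"), "corrupted"
    return header, payload

frame = build_frame("client", "server", b"hello")
print(parse_frame(frame)[1])  # b'hello'
```

If the payload were flipped in transit, the CRC check in `parse_frame` would fail, which is the footer doing its job as a control field.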

What is SPDY?

SPDY (pronounced SPeeDY) is a networking protocol developed by Google with the purpose of speeding up the delivery of web content. It does this by modifying HTTP traffic which in turn reduces web page latency and improves web security.

HTTP, while powerful in its day, cannot keep up with the demands of today’s digital world, which is the reason SPDY was introduced to help meet those demands.

What is HTTP/2?

HTTP/2 is the second major version update to the HTTP protocol since HTTP1.1, which was released more than 15 years ago. The HTTP/2 protocol was developed for an ever-evolving digital world and the need to load ever more resource-intensive web pages.

SPDY was also implemented to help reduce the web page latency users experience with HTTP1.1. HTTP/2 is based on SPDY but contains key improvements, which led to SPDY being deprecated in February 2015.

How does HTTP/2 work?

Whenever you click on a link to visit a site a request is made to the server. The server answers with a status message (header) and a file list for that website. After viewing that list, the browser asks for the files one at a time. The difference between HTTP 1.1 and HTTP/2 lies in what happens next.

Say you want a new LEGO set. First, you go to the store to buy your LEGO. When you get home, you open the box and look at the instructions, which tell you what you have to do: one brick at a time. So for every brick, you have to look at the instructions to see which brick to use next. The same for the next brick, and so on. This back-and-forth keeps happening until you have finished the entire LEGO set. If your set has 3,300 bricks, that’ll take quite a while. This is HTTP1.1.

With HTTP/2, this changes. You go to the store to pick up your box, open it, find the instructions, and ask for all the bricks used in one section of the LEGO set at once. You can keep asking for more bricks without having to consult the manual brick by brick: “These bricks go together, so here they are.” If you want it really quickly, you could even get all the bricks at once and build the set in an instant.

Differences from HTTP1.1

Similar to SPDY, HTTP/2 does not require any changes to how web applications currently work; however, applications can take advantage of its optimization features to increase page load speed.

Differences between the HTTP1.1 and HTTP/2 protocols include the following:

HTTP/2 is binary, instead of textual

It is fully multiplexed, instead of ordered and blocking

It can use one connection for parallelism

It uses header compression to reduce overhead

It allows servers to “push” responses proactively into client caches instead of waiting for a new request for each resource.
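To make “binary, instead of textual” concrete: an HTTP/1.1 request is human-readable text, while every HTTP/2 message is carried in frames that start with a fixed 9-byte binary header (24-bit length, 8-bit type, 8-bit flags, and a 31-bit stream identifier, per RFC 7540 §4.1). A small Python sketch:

```python
import struct

# The textual protocol: readable with the naked eye.
http1_request = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

def h2_frame_header(length: int, ftype: int, flags: int, stream_id: int) -> bytes:
    """Build the fixed 9-byte HTTP/2 frame header from RFC 7540 §4.1."""
    # 24-bit length, then type, flags, and a 31-bit stream ID (top bit reserved).
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", ftype, flags, stream_id & 0x7FFFFFFF)

# A DATA frame (type 0x0) carrying 5 payload bytes on stream 1:
header = h2_frame_header(5, 0x0, 0x0, 1)
print(len(header))    # 9: always exactly nine bytes
print(header.hex())   # '000005000000000001'
```

The fixed binary framing is what makes multiplexing practical: a receiver always knows exactly where one frame ends, which stream it belongs to, and where the next frame begins.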

Is it HTTP/2.0 or HTTP/2?

The Working Group decided to drop the minor version (“.0”) because it has caused a lot of confusion in HTTP/1.x.

In other words, the HTTP version only indicates wire compatibility, not feature sets or “marketing.”

Comparison with HTTP1.x and SPDY

SSL: not required but recommended in HTTP1.x; required in SPDY; not required but recommended in HTTP2.

Encryption speed: slow in HTTP1.x; fast in SPDY; even faster in HTTP2.

Requests: one client-server request per TCP connection in HTTP1.x; multiple client-server requests per TCP connection, occurring on a single host at a time, in SPDY; multi-host multiplexing, occurring on multiple hosts at a single instant, in HTTP2.

Header compression: none in HTTP1.x; introduced in SPDY; improved algorithms in HTTP2 that improve performance as well as security.

Stream prioritization: none in HTTP1.x; introduced in SPDY; improved prioritization mechanisms in HTTP2.

Conclusion

HTTP/2 is without a doubt the direction the web is moving towards in terms of the networking protocol that is able to handle the resource needs of today’s websites. While SPDY was a great step forward in improving HTTP1.1, HTTP/2 has since further improved the HTTP protocol that has served the web for many years.

According to W3Techs, as of November 2018, 31% of the top 10 million websites supported HTTP/2.

Both HTTP and HTTPS are protocols being used for transmitting and receiving information across the Internet.

HTTP is the acronym for Hypertext Transfer Protocol. HTTP has been the web’s standard communication protocol essentially since the World Wide Web was developed.

HTTP: HyperText Transfer Protocol:

Hypertext Transfer Protocol (HTTP) is a system for transmitting and receiving information across the Internet. HTTP is an “application layer protocol,” which means its focus is on how information is presented to the user; it does not itself care how data gets from point A to point B.

It is said to be “stateless,” which means it doesn’t attempt to remember anything about the previous web session. The benefit of being stateless is that there is less data to send, and that means increased speed.

HTTP/1.0 is specified in RFC 1945, officially introduced in 1996.

HTTP/1.1 was originally specified in RFC 2068, released in January 1997, and later revised in RFC 2616 (June 1999).

HTTP/2 is specified in RFC 7540, published in May 2015.

HTTPS: Hyper Text Transfer Protocol Secure:

Hyper Text Transfer Protocol Secure (HTTPS) is the secure version of HTTP, the protocol over which data is sent between your browser and the website that you are connected to. The ‘S’ at the end of HTTPS stands for ‘Secure’. It means all communications between your browser and the website are encrypted. HTTPS is often used to protect highly confidential online transactions like online banking and online shopping order forms.

Web browsers such as Internet Explorer, Firefox and Chrome also display a padlock icon in the address bar to visually indicate that an HTTPS connection is in effect.

Here are some facts about HTTPS:

HTTPS uses port 443 by default to transfer information.

HTTPS URLs begin with “https://”.

HTTPS is not a separate version of HTTP; running HTTP over TLS is defined in RFC 2818.

HTTPS provides three key layers of protection

Encryption. Encrypting the exchanged data to keep it secure.

Data Integrity. Data cannot be modified or corrupted during transfer without being detected.

Authentication. Proves that your users are communicating with the intended website.

There is a belief among many around the web that HTTPS is slower. In practice, this is largely a myth: the overhead of modern TLS is small, and HTTPS-enabled sites can even be faster than their HTTP counterparts, because performance features such as HTTP/2 are only available over encrypted connections.

Difference between HTTP and HTTPS

An HTTP URL begins with “http://” whereas an HTTPS URL begins with “https://”

HTTP uses port 80 for communication by default and HTTPS uses port 443

HTTP is considered unsecured and HTTPS is secure

Both work at the Application Layer, but HTTPS adds a TLS/SSL encryption layer between HTTP and the transport

In HTTP, encryption is absent; encryption is present in HTTPS, as discussed above

HTTP does not require any certificates and HTTPS requires an SSL/TLS certificate
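On the client side, “requires a certificate” shows up as certificate verification. For example, Python’s `ssl` module (shown here as an illustration) verifies the server’s certificate chain and hostname by default before any HTTP data is exchanged:

```python
import ssl

# The default client context enforces both checks that make HTTPS trustworthy.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True: name must match the cert
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: chain must validate

# An http.client.HTTPSConnection (port 443 by default) would use a context
# like this to wrap its TCP socket in TLS, providing the encryption,
# integrity, and authentication layers described above.
```

If either check fails, the TLS handshake is aborted and no request is ever sent, which is what protects users from impostor sites.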

Is HTTP dying?

HTTP isn’t really dying, per se. It’s just being forced to evolve. As we mentioned earlier, the browsers are basically our de facto vehicle for getting around the internet. The vast majority of us could not use the internet without a browser. And that puts the browsers in position to influence the internet as they see fit.

Right now, they’re mandating SSL. The initiative began a few years ago with a soft push: Google announced HTTPS would become a ranking factor for SEO, then the browsers started making new features exclusive to sites with SSL. Gradually, they have incentivized encryption more and more.