Monthly Archives: May 2017

Top 5 Cyber Security Tools

Modern data centres deploy firewalls and managed networking components, yet still remain vulnerable to attackers. Hence, there is a compelling need for tools that accurately assess network vulnerability. This article brings you the top five assessment tools to address these issues, selected for their popularity, functionality, and ease of use.

Vulnerabilities are, unfortunately, an integral part of every software and hardware system. A vulnerability can be a bug in an operating system, a loophole in a commercial product, or the misconfiguration of critical infrastructure components that leaves systems open to attack.

On the bright side, with the number of attacks increasing, there are now plenty of tools to detect and stop malware and cracking attempts. The open source world has many such utilities.

Though there are hundreds of tools, in this article we have selected the top five on the grounds that no other tool can really replace them. The primary selection criteria were the feature set, how widespread the product is within the security community, and simplicity.

Read on to learn about the top five cyber-security tools and how they help stop malware.

Wireshark:

The very first step in vulnerability assessment is to have a clear picture of what is happening on the network. Wireshark runs in promiscuous mode to capture all the traffic on the broadcast domain.

Customized filters can be set to intercept specific traffic; for instance, to capture communication between two IP addresses, or to capture UDP-based DNS queries on the network. Traffic data can be dumped into a capture file, which can be reviewed later. Additional filters can also be set during the review.
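For instance, display filters like the following (the addresses are made up for illustration) cover the two cases just mentioned, and the same expressions can be applied while reviewing a capture file:

```
# Communication between two specific IP addresses:
ip.addr == 192.168.1.10 && ip.addr == 192.168.1.20

# UDP-based DNS queries on the network:
dns && udp.port == 53
```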

Whether the tester is looking for erratic IP addresses, spoofed packets, unnecessary packet drops, or suspicious packet generation from a single IP address, Wireshark gives a broad and clear picture of what is happening on the network.

However, it does not have its own intelligence, and it should be used as a data provider. Thanks to its well-designed GUI, anyone with even basic network knowledge can use it.

Nmap:

This is probably the only tool to have remained popular for almost a decade. This scanner can craft packets and perform scans down to a granular TCP level, such as a SYN scan or ACK scan. It has built-in signature-checking algorithms to guess the OS and version, based on network responses such as the TCP handshake.

Nmap effectively detects remote devices, and in most cases correctly identifies firewalls, routers, and their make and model. Network administrators can use Nmap to check which ports are open, and also whether those ports can be exploited further in simulated attacks. The output is plain text and verbose. This tool can be scripted to automate routine tasks and to grab evidence for an audit report.
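At its simplest, what a port scanner checks can be sketched in a few lines of Python. This is a plain connect scan, equivalent in spirit to `nmap -sT`; Nmap itself adds raw-packet SYN scans, timing control, service detection, and much more:

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return True when a full TCP connection to host:port succeeds,
    which is the essence of a 'connect' scan (nmap -sT)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Probe a few common ports on the local machine.
for port in (22, 80, 443):
    state = "open" if check_port("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

A real scanner would, of course, parallelize these probes and interpret the different failure modes (refused vs. timed out) separately.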

Metasploit:

Once scanning is done using the above tools, it’s time to move to the OS and application level. Metasploit is one of the most powerful open source frameworks, and it can perform detailed scans against a set of IP addresses.

Unlike many other frameworks, it can also be used for anti-forensics. The process can also be run in reverse: when a virus attacks using an unknown vulnerability, Metasploit can be used to test the patch for it.

Although there is a commercial edition, the community edition is free, yet makes no compromises on the feature set.

Aircrack:

The list of network scanners would be incomplete without wireless security scanners. Today’s infrastructure contains wireless devices both in the data centre and on corporate premises to facilitate mobile users. While WPA2 security is believed to be adequate for the 802.11 WLAN standards, misconfiguration and the use of weak passwords leave such networks open to attack.

Aircrack is a suite of software utilities that act as a sniffer, packet crafter, and packet decoder. A targeted wireless network is subjected to packet traffic to capture important details about the underlying encryption. A decryptor is then used to brute-force the captured file and recover passwords. Aircrack is capable of working on most Linux distros, but the version bundled with BackTrack Linux is widely adopted.
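Aircrack does the heavy lifting, but the core of the brute-force step is easy to sketch. WPA2-PSK derives its pairwise master key (PMK) with PBKDF2-HMAC-SHA1 (4096 iterations, with the SSID as salt), so a dictionary attack simply repeats that derivation for every candidate passphrase. The SSID and wordlist below are made up for illustration:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1,
    4096 iterations, SSID as salt, 32-byte output."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    """Try each candidate passphrase until one reproduces the target PMK."""
    for candidate in wordlist:
        if wpa2_pmk(candidate, ssid) == target_pmk:
            return candidate
    return None

# Hypothetical capture: the network "corp-wifi" used the weak key "password1".
captured = wpa2_pmk("password1", "corp-wifi")
print(dictionary_attack(captured, "corp-wifi", ["letmein", "password1", "hunter2"]))
# prints: password1
```

This is exactly why weak passphrases undermine WPA2: the derivation is fixed and public, so only passphrase strength stands between a captured handshake and a recovered key.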

OpenVAS:

The Nessus scanner is a famous commercial utility, from which OpenVAS branched off a few years back to remain open source. Though Metasploit and OpenVAS are very similar, there is still a distinct difference.

OpenVAS is split into two major components: a scanner and a manager. The scanner may reside on the target to be scanned and passes vulnerability information to the manager. The manager collects the inputs from multiple scanners and applies its own intelligence to create a report.

OpenVAS is regarded as a stable and reliable tool for detecting the latest security loopholes, and for providing reports and inputs to fix them. The built-in Greenbone Security Assistant provides a GUI dashboard to list all vulnerabilities and the impacted machines on the network.

What’s new in Java EE 8?

The Java Community Process machinery has started ramping up on Java EE again, a little over a year after Java EE 7 was released. The main aim is to create the next major version of Java Enterprise Edition.

This release is going to build upon the highly successful previous release of the platform, with the addition of much new functionality. Some JSRs that could not be included in the last release, principally because of scheduling constraints, are now targeted for inclusion in Java EE 8. So, let’s dive in and see the key themes for the release.

Some features, such as HTTP/2, action-based MVC, and JSON Binding, are fairly big additions to the platform and are filed as separate JSRs.

Another theme is further simplification of the programming model through CDI alignment across the specifications. Today, declarative security and @Schedule can only be specified on Enterprise JavaBeans; this is targeted to be made more generically available to POJOs. Some features unique to EJB, such as remoting, concurrency, and pooling, would need to be extracted and applied more widely to POJOs. JAX-RS has a few areas where clean-up is required, such as CDI alignment, server-side async, and a simpler hypermedia API. Older EJB 2-style APIs and CORBA IIOP interoperability are also targeted for pruning.

Infrastructure for cloud support: basic support for cloud infrastructure was added in Java EE 7, such as schema generation in JPA 2.1 and resource library contracts in JSF 2.2. Java EE 8 will add flexible configuration of applications, such as multi-tenancy, along with simplified security. REST-based APIs for management and monitoring will allow building customized dashboards that are portable across different application servers.

Java SE 8 alignment: Java SE 8 introduced many new features, such as repeating annotations, lambda expressions, the Date/Time API, type annotations, and CompletableFuture. Component JSRs will be encouraged to take advantage of these features. For instance, @DataSourceDefinitions, @JMSConnectionFactoryDefinitions, @MailSessionDefinitions, and other similar container annotations can be deprecated in favour of repeating annotations. The Expression Language added its own support for lambda expressions because the Java EE 7 and JDK 8 schedules did not align; that support would now need to be replaced with the JDK 8 standard.

BELOW IS A QUICK OVERVIEW OF FEATURES FROM THE COMPONENT JSRS FILED SO FAR:

JSR 371:

Leverage existing Java EE technologies.

Model should leverage CDI and Bean Validation.

View should leverage existing view technologies like JSPs and Facelets.

Defining a new template language is out of scope; existing languages such as FreeMarker, Velocity, and others should be evaluated.

JSR 367:

There is currently no standard way to convert JSON into Java objects and vice versa.

JSON-B will leverage JSON-P and provide a conversion layer above it.

A default mapping algorithm will be defined for converting existing Java classes to JSON. The default mappings can be customized through the use of Java annotations and will be leveraged by the JSON-B runtime to convert the Java objects to and from JSON.
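JSON-B itself is a Java API, but the idea of a default mapping plus annotation-driven overrides can be sketched in Python (used here purely as an illustration; the `field_overrides` parameter is invented for the example and plays the role of annotations such as @JsonbProperty):

```python
import json

class Customer:
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

def to_json(obj, field_overrides=None):
    """Default mapping: serialize the object's fields as-is.
    field_overrides mimics annotation-driven customization,
    renaming fields in the JSON output."""
    overrides = field_overrides or {}
    data = {overrides.get(k, k): v for k, v in vars(obj).items()}
    return json.dumps(data)

c = Customer(7, "Ada")
print(to_json(c))                                 # default mapping
print(to_json(c, {"customer_id": "customerId"}))  # customized mapping
```

In JSON-B proper, the same two-tier design applies: sensible defaults for unannotated classes, with annotations overriding names, order, and formats.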

JSR 375:

The goal is to simplify, standardize, and modernize the security API across the platform. There is a proposal to simplify managing users and groups, along with improvements related to password aliasing, role mapping, and authentication and authorization with CDI support.

JSR 380:

The main aim is to leverage Java 8 language constructs, such as Optional, the Date and Time API, and repeatable annotations.

Microsoft Dynamics AX Over Microsoft Dynamics NAV

The Microsoft Dynamics suite includes ERP solutions for businesses of all sizes and industries. Over the last decade, Microsoft acquired two ERP companies, acquisitions that resulted in four Dynamics ERP products: Dynamics GP, Dynamics SL, Dynamics NAV, and Dynamics AX.

The two products that have the most crossover are Microsoft Dynamics AX and Microsoft Dynamics NAV. When deciding which fits a company best, many people tend to generalize and view AX as being for large enterprises and NAV for small to medium-size businesses. If only selecting software were that simple.

In terms of size alone, Dynamics AX is regarded as able to handle up to 10X more users than Dynamics NAV. However, that doesn’t mean there aren’t smaller businesses using Dynamics AX and larger enterprises running Dynamics NAV. Instead of taking the traditional approach and looking at both products from a size perspective, let’s compare by looking at the bigger picture. Read on to learn the major differences between Microsoft Dynamics AX and Microsoft Dynamics NAV.

COMPLEX BUSINESS PROCESSES

Difficulties come in all shapes and sizes, from accounting compliance to vertical needs. Let’s say your organization has different accounting groups in different organizations within a consolidated corporate structure. This is one classic example of when the robust functionality of AX will overpower NAV. Dynamics AX works best when multiple accounting groups or a shared-service model exists. Dynamics NAV is best suited for organizations with the same accounting group managing the books for multiple entities.

Dynamics AX is built on a platform that handles more complexities. Dynamics AX has more functionality out of the box.

ANALYTICAL CAPABILITIES

Once business-process complexity has been considered, the next consideration is the volume of data being transacted and the resulting reporting requirements. Dynamics AX has richer analytical and reporting functionality than Dynamics NAV, with more modules. The software also offers more advanced business intelligence, workflow, and portal capabilities than Dynamics NAV natively provides.

Dynamics AX is one of the most powerful platforms, able to handle great volumes of data without any degradation in performance. It can also handle multiple sites across multiple locations and has the capability to scale easily to thousands of users.

CLOUD AVAILABILITY

This scenario is a little easier to decode, as Dynamics AX was the first Dynamics ERP product built in the cloud. Dynamics NAV is available on-premises and hosted, but the latest version of Dynamics AX (AX 7, or AX 2016) runs natively in Microsoft Azure. This is the best option for companies that are rapidly expanding and need mobility, versatility, and scalability in their ERP solution. The new release brings the benefits of the cloud that matter most for business.

The cloud version, or AX on Azure, is also becoming the Operations application in the upcoming Enterprise Edition of Dynamics 365, a new cloud product that combines both ERP and CRM capabilities in one solution. Dynamics 365 Enterprise Edition is designed for large companies with over 250 employees. By choosing this route, a user can merge the powerful functionality of Dynamics AX 7 with individual modules for core CRM, in addition to Microsoft’s PowerApps and Flow. This solution is also available for on-premises customers.

What are Hadoop Pain Points?

As we know, Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks. Hadoop is powerful but, like most systems, it has some sharp edges. Read on to learn about Hadoop’s pain points.
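To give a concrete taste of the processing model before turning to the pain points, the canonical Hadoop example is MapReduce word count. The sketch below is a hypothetical, Hadoop Streaming-style pair of map and reduce functions written in Python and run here without any cluster:

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    for word in line.split():
        yield (word.lower(), 1)

def reducer(pairs):
    """Reduce phase: sum the counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big clusters", "big storage"]
pairs = [p for line in lines for p in mapper(line)]
print(reducer(pairs))  # {'big': 3, 'data': 1, 'clusters': 1, 'storage': 1}
```

On a real cluster, Hadoop shards the input across machines, runs many mapper instances in parallel, and shuffles each word's pairs to a reducer; the framework, not the code above, supplies the distribution.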

Hadoop isn’t a database

Hadoop is different enough from an access and storage perspective to throw a lot of people off. Databases abstract away the details of on-disk organization: file formats, serialization, partitioning, and optimization for varied access patterns. Topics such as “data modeling” are treated at the logical layer, with a relational engine assumed underneath. As an example, most people are not aware of how relational database engines perform the various forms of joins.

Hadoop is a distributed system

Deploying, composing, managing, monitoring, and debugging a single-threaded, single-process system can be tough. A multi-threaded or multi-process system is harder. A multi-threaded, multi-process, distributed system is harder still. Hadoop has a ton of moving parts, and while it gets better with each release, it is still a complex system that requires specialized knowledge. That said, this isn’t dissimilar from other systems. The main stumbling block is that most people don’t have much experience with distributed systems.

Hadoop has a huge ecosystem

A huge number of open source and commercial products/projects have sprung up around Hadoop and interoperate with it in some way. Each of these comes with its own complications. More than a single system, Hadoop is an entire world unto itself.

Hadoop is evolving

In the grand scheme of things, Hadoop is a young system. It’s evolving and changing at an extremely rapid pace. Hence, there are a huge number of things to keep up with if you want to know all the details.

Hadoop tooling is still developing

Many existing tools and similar systems are designed to deal with data that resides in relational databases. While the ecosystem is growing at a tremendous rate, not all of the tools you might expect have been fully updated to support HDFS and Hadoop MapReduce. However, many of the commercial vendors in the ETL, EDW, BI, and analytics spaces are well on their way, and some have already arrived.

Hadoop is still a young technology: it’s clear that many organizations need more resources, competence, solutions, and tools to ease implementation difficulties. Each week we see brand-new market entrants, which accelerates the rate of Hadoop adoption. In fact, different verticals are adding their own unique sets of tools that satisfy demands such as integrated security and regulatory-compliance capabilities. Hadoop experimentation is drawing to a close; developers are now moving into a phase of fast adoption, even a little beyond the early-adopter phase, as companies produce best practices and look for standardization and ease of use so that users can successfully obtain insights at a faster pace.

AMAZON COGNITO

Amazon Cognito is an Amazon Web Services product that controls user authentication and access for mobile applications on internet-connected devices. This service helps speed up application development by saving and synchronizing end-user data, allowing an application developer to focus on writing code instead of building and managing the requisite back-end infrastructure. Read on to learn what it is about and its features.

FEATURES OF AMAZON COGNITO

AMAZON COGNITO USER POOLS

A developer can create and maintain a user directory and add sign-up and sign-in to a mobile or web application using Amazon Cognito User Pools. User pools scale up to hundreds of millions of users and provide simple, secure, and low-cost options for the developer.

The developer can also implement enhanced security features, such as email and phone number verification, and multi-factor authentication. In addition, Amazon Cognito User Pools lets the developer customize the workflows through AWS Lambda.

AMAZON COGNITO FEDERATED IDENTITIES

Amazon Cognito Federated Identities enables the developer to create unique identities for their users and authenticate them with federated identity providers. With a federated identity, a developer can obtain temporary, limited-privilege AWS credentials to synchronize data with Amazon Cognito Sync or to securely access other AWS services such as Amazon DynamoDB, Amazon S3, and Amazon API Gateway. Amazon Cognito Federated Identities supports federated identity providers including Amazon, Facebook, Google, Twitter, OpenID Connect providers, and SAML identity providers, as well as unauthenticated identities. It also supports developer-authenticated identities, which allow users to be registered and authenticated through a developer’s own back-end authentication system.

AMAZON COGNITO SYNC

Amazon Cognito Sync is an AWS service that supports offline access and cross-device syncing of application-related user data. Developers can use Amazon Cognito Sync to synchronize user profile data across mobile devices and the web without running their own back end.
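To make the cross-device syncing idea concrete, here is a toy sketch, in Python and purely illustrative (it is not the Cognito API), of merging two device copies of a user's dataset using last-writer-wins on a timestamp, one common conflict-resolution strategy:

```python
def merge_datasets(a, b):
    """Merge two device copies of a user's key-value dataset.
    Each value is a (timestamp, data) tuple; the newer write wins."""
    merged = dict(a)
    for key, (ts, value) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Two devices edited the same profile while offline (made-up data).
phone  = {"theme": (100, "dark"),  "locale": (90, "en")}
tablet = {"theme": (120, "light"), "volume": (95, 7)}
print(merge_datasets(phone, tablet))
```

The tablet's later "theme" write wins, while keys touched on only one device survive unchanged; a real sync service layers dataset versioning and conflict callbacks on top of this core idea.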

CREATING AND MANAGING USER POOLS

Create and maintain a user directory, and add sign-up and sign-in to a mobile app or web application, using user pools. User pools add user registration and sign-in features to apps. Instead of relying on external identity providers such as Facebook, Google, or Twitter, a developer can use user pools to let users register and sign in to an app with an email address, phone number, or user name. Developers can also create custom registration fields and store that metadata in their user directory. Users can verify their email addresses and phone numbers, recover passwords, and enable multi-factor authentication (MFA) with just a few lines of code.

User pools are for mobile and web application developers who want to handle user registration and sign-in directly in their apps. Previously, developers needed to implement their own user directory to create user accounts, store user profiles, and implement password-recovery flows to support user registration and sign-in.

User pools integrate easily with the existing Amazon Cognito functionality for anonymous and social identities. In addition, a user can start as an anonymous user and then either sign in using a social identity or use user pools to register and sign in with an email address, phone number, or username.

What is the use of Bootstrap Carousel plugin?

The Carousel plugin is used to add a slider to your site. It is useful when you want to display a large amount of content within a small space on a web page. A standard carousel includes slide indicators, the slides themselves, and previous/next controls.
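Structurally, a carousel is built from indicators, a slide wrapper, and controls. A minimal sketch following the Bootstrap 3 carousel conventions (the id and image paths are placeholders):

```html
<div id="demo-carousel" class="carousel slide" data-ride="carousel">
  <!-- Indicators: one dot per slide -->
  <ol class="carousel-indicators">
    <li data-target="#demo-carousel" data-slide-to="0" class="active"></li>
    <li data-target="#demo-carousel" data-slide-to="1"></li>
  </ol>
  <!-- Slides -->
  <div class="carousel-inner">
    <div class="item active"><img src="slide1.jpg" alt="First slide"></div>
    <div class="item"><img src="slide2.jpg" alt="Second slide"></div>
  </div>
  <!-- Previous/next controls -->
  <a class="left carousel-control" href="#demo-carousel" data-slide="prev">
    <span class="glyphicon glyphicon-chevron-left"></span>
  </a>
  <a class="right carousel-control" href="#demo-carousel" data-slide="next">
    <span class="glyphicon glyphicon-chevron-right"></span>
  </a>
</div>
```

With `data-ride="carousel"` the plugin starts cycling automatically; the data attributes remove the need for any custom JavaScript.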

What is the difference between jQuery and d3.js?

jQuery is a general-purpose library for DOM traversal, manipulation, event handling, and Ajax. D3.js is focused on data visualization: it binds data to DOM elements and applies data-driven transformations, typically rendering charts with SVG. Both offer chainable selections, but D3’s data join (enter/update/exit) and its scales, axes, and transitions have no jQuery equivalent.

What is the difference between Memcache and Memcached?

Memcache: It is a PHP extension that allows you to work with memcached servers through handy object-oriented (OOP) and procedural interfaces. It is designed to reduce database load in dynamic web applications.

Memcached: It is a PHP extension that uses the libmemcached library to provide an API for communicating with memcached servers. It likewise speeds up dynamic web applications by alleviating database load. It is the newer of the two APIs.