Monthly Archives: January 2015

If you are like me, your Google Analytics account might contain quite a few websites you don’t use anymore, for example websites of old clients or of friends you once helped out. You might want to clean this up.

Now, within Google Analytics you can’t ‘just’ select a few and delete them (I don’t know why not); you have to take a few steps to delete the accounts, but once done you will have a nice clean Google Analytics.

I’ve just cleaned up my own Google Analytics and thought I’d share the process with you so you can see how easy it really is. I originally posted this on my Google+ page, but because people seemed to like it and not everybody knew how it is done, I thought I’d share it here as well.

Steps to remove a website (profile) from Google Analytics

Step 1: Go to the Admin tab in Google Analytics

Step 2: Select the domain and the property (website), then click ‘view settings’ in the 3rd column

Step 3: Scroll down the page and click ‘Delete view’

Step 4: Confirm the deletion

You will receive a confirmation of the deletion by e-mail.
That’s all; the website should no longer show up in your Google Analytics account.

Seagate hard disks of 3 Terabyte volume used to be a very attractive HDD (storage) option when they were launched on the market.
Many people and sysadmins bought them, and some sysadmins and company customers who chose Seagate are suffering badly because of it, so it is worth warning others to stay away from 3TB Seagate hard disks.

Backblaze, the online backup company, is one of the most severely affected companies that chose Seagate as storage devices in their cloud-connected servers. They were running 41,213 hard disks in their data center as of 31 December 2014.

In their disk arrays they have used Western Digital, HGST (the former Hitachi disk business, now part of Western Digital) and, unfortunately, Seagate.

The problematic model is the Seagate Barracuda 7200.14 3TB, the hard disk with the most failures at Backblaze for the whole of 2014: about 40% of all the company’s 3TB hard disks broke, died or had to be replaced because of I/O disk failures and bad sectors!

It is not exactly clear what the reason for such high failure rates is, but Seagate led in failures, followed by Western Digital and HGST (the ex-Hitachi).

Just for comparison, Backblaze reports that the 4 Terabyte hard disks they bought last year failed very rarely, and in general the company is quite happy with its Seagate / WD disks of 4 TB volume.

The Seagate Barracuda 7200.14 3TB disk drives mounted in their servers are the ones that had the most hardware issues, and the company recommends anyone shopping for a new HDD to stay away from this model.

Western Digital’s 3TB HDDs had a 10% failure rate, HGST only 2.6%, while Seagate’s failure rate was approximately 43.1%!

No severe hardware failures are reported for the 4 TB HDDs:
4TB Seagate HDDs had about a 5% failure rate, followed by WD with 3-4% and HGST with only 1.4%.

The statistics clearly show that if you want to buy big storage for your big data / Web / FTP / Dropbox-like (cloud) hosting company, then as of the time of writing (26.01.2015) it is better to equip your big storage array racks with HGST-branded hard drives.
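The percentages above are simply failed drives divided by drives deployed. If you want to redo the arithmetic yourself, a quick awk one-liner does it; note that the drive counts below are illustrative numbers back-computed from the quoted percentages, not Backblaze’s exact per-model figures:

```shell
# failure rate = failed / deployed * 100
# counts are made-up examples that reproduce the percentages quoted above
printf '%s\n' "Seagate-3TB 1163 2700" "WD-3TB 48 480" "HGST-3TB 27 1040" |
awk '{ printf "%-12s %.1f%% annual failure rate\n", $1, ($2 / $3) * 100 }'
```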

I need to prepare a document called an Operational Readiness Test (ORT) for a Windows server which will be going to production soon. In that regard it is necessary to fill in the server load average in the ORT document form. Here is how to get a Windows load average from the command line:

C:\> wmic cpu get loadpercentage
LoadPercentage
1

An alternative way to get Windows system load average data is with a short BAT for loop:
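The loop itself got lost from the original post; below is a sketch of what such a BAT loop could look like. The sample count of 5 and the 1-second interval are my own arbitrary choices, and the `%%` variable form is for use inside a .bat file (at an interactive prompt you’d use a single `%`):

```bat
@echo off
:: poll wmic 5 times, one second apart, then print the average CPU load
setlocal enabledelayedexpansion
set total=0
for /l %%i in (1,1,5) do (
    for /f "skip=1" %%p in ('wmic cpu get loadpercentage') do (
        set /a total+=%%p 2>nul
    )
    timeout /t 1 /nobreak >nul
)
set /a avg=total/5
echo Average load over 5 samples: !avg!%%
```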

That’s all, now we have the Windows load average. Note that this command should work on Windows 7 / 8 / Windows Server 2012. I haven’t tested it on Windows XP and NT 4.0, but I guess it should be working there too.

The wmic command is a very interesting one; I advise you to check out its complete help:
C:\> wmic /?
[global switches]

The following global switches are available:
/NAMESPACE Path for the namespace the alias operate against.
/ROLE Path for the role containing the alias definitions.
/NODE Servers the alias will operate against.
/IMPLEVEL Client impersonation level.
/AUTHLEVEL Client authentication level.
/LOCALE Language id the client should use.
/PRIVILEGES Enable or disable all privileges.
/TRACE Outputs debugging information to stderr.
/RECORD Logs all input commands and output.
/INTERACTIVE Sets or resets the interactive mode.
/FAILFAST Sets or resets the FailFast mode.
/USER User to be used during the session.
/PASSWORD Password to be used for session login.
/OUTPUT Specifies the mode for output redirection.
/APPEND Specifies the mode for output redirection.
/AGGREGATE Sets or resets aggregate mode.
/AUTHORITY Specifies the &lt;authority type&gt; for the connection.
/?[:<BRIEF|FULL>] Usage information.

For more information on a specific global switch, type: switch-name /?
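For example, the /NODE and /USER global switches let you run the same load query against a remote machine; the server name and account below are placeholders, not real hosts:

```
C:\> wmic /node:"SERVER01" /user:"DOMAIN\admin" cpu get loadpercentage
```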

I’ve learned a very useful system administration tip on how to get full processor speed out of a Linux server. By default in newer Linux kernel releases the ondemand CPU frequency governor is enabled, which runs the processor at full clock rate only when it is actually needed. On busy servers this on-demand behaviour is bad practice, because with ondemand enabled the CPU often runs at a lower clock rate than its maximum (to save power) and is only ramped up when necessary; the ramp-up costs time and hurts performance.
To check whether the ondemand governor is keeping your CPU from running at its maximum capacity:
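The terminal output from the original post didn’t survive, but on a stock kernel you can check both the live clock and the active governor via /proc/cpuinfo and the standard cpufreq sysfs interface. This is a sketch; the sysfs files simply won’t exist on kernels built without cpufreq:

```shell
# The rated speed shows in 'model name'; the live clock in 'cpu MHz'
grep -i -E 'model name|cpu mhz' /proc/cpuinfo | head -n 4

# Active frequency governor per core; 'ondemand' means dynamic scaling is on
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -r "$g" ] && cat "$g"
done
true  # keep the exit status clean when no cpufreq files are present
```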

As you can see, even though the CPU is rated for 2.40GHz, only 1.6GHz is being used, meaning you’re wasting computing speed (not to mention that for dedicated servers saving power is not a priority for the sysadmin).

If ondemand is enabled, the CPUs will usually sit below their top frequency like this.
To disable the CPU ondemand function, use the following one-liner:
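One common way to do it (a sketch, not necessarily the original post’s exact command) is to write ‘performance’ into each core’s scaling_governor file through the standard cpufreq sysfs interface. This needs root, and the writability guard below makes it a harmless no-op on machines without cpufreq:

```shell
# Switch every core from 'ondemand' to the 'performance' governor (root required)
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -w "$g" ]; then
        echo performance > "$g"
    fi
done
```

On Debian/Ubuntu systems of that era you would also disable the ondemand init script (e.g. `update-rc.d ondemand disable`) so the governor change survives a reboot.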

Representational State Transfer (REST) has gained widespread acceptance across the Web as a simpler alternative to SOAP and WSDL based Web services. In layman’s terms, REST is an architectural style, or design pattern, used as a set of guidelines for creating web services which allow anything connected to a network (web servers, private intranets, smartphones, fitness bands, banking systems, traffic cameras, televisions etc.) to communicate with one another via a shared common communications protocol known as the HyperText Transfer Protocol (HTTP). The same HTTP verbs (GET, POST, PUT, DELETE etc.) that web browsers use to retrieve and display web pages, audio/video files and images from remote servers, and to post data back to them when filling out and submitting forms, are used by all of the aforementioned devices/services to communicate with one another.

By leveraging and repurposing a lightweight and universal protocol like HTTP, software engineers and system architects are given a set of guidelines to use when designing RESTful web services for both new and existing products and services that contribute to what has become collectively known as the Internet of Things (IoT).

A simple example of designing a web service for managing employee data using an OData-style REST implementation might involve several methods, each corresponding to one of the HTTP verbs. A method like “Employees/GetEmployees” would be mapped to the GET verb (or “Employees/GetEmployee/12345” in the case of retrieving details for a single specific employee), “Employees/AddEmployee” would be mapped to the POST verb, “Employees/UpdateEmployee” to PUT, and “Employees/DeleteEmployee” to the DELETE verb. If the service also exposed an interface to allow remote clients to manage consumer products, the API would follow a similar naming convention, obviously specific to consumer products (i.e. Products/GetProducts, Products/AddProduct etc.). Any remote client that has access and is authorized to use any of these methods would be able to execute them, provided it is capable of sending and receiving data over HTTP.
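Spelled out as raw HTTP, the verb-to-method mapping above comes down to request lines like these (the paths belong to the hypothetical employee service sketched in the text; host headers and bodies omitted):

```http
GET    /Employees/GetEmployees        (list all employees)
GET    /Employees/GetEmployee/12345   (details for one employee)
POST   /Employees/AddEmployee         (create a new employee record)
PUT    /Employees/UpdateEmployee      (modify an existing record)
DELETE /Employees/DeleteEmployee      (remove a record)
```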

Technically speaking, it is an abstraction of the architecture of the World Wide Web (WWW); more precisely, REST is an architectural style consisting of a coordinated set of architectural constraints applied to components, connectors, and data elements within a distributed hypermedia system. REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.

The REST architectural style is also applied to the development of web services. One can characterize web services as “RESTful” if they conform to the constraints described in the architectural constraints section. RESTful web services are assumed to return data in XML and/or JSON format, the latter of which has been gaining more and more support and seems to be the data format of choice for many of the newer REST implementations.