The personal view on the IT world of Johan Louwers, specially focusing on Oracle technology, Linux and UNIX technology, programming languages and all kinds of nice and cool things happening in the IT world.

Tuesday, October 20, 2015

When using Oracle Enterprise Manager to manage your IT footprint, you will most likely also want to make use of the reporting functions within Oracle Enterprise Manager. In the latest releases Oracle is pushing towards Oracle BI rather than the older reporting options. However, many deployments still use the "old" method of reporting, which works fine in most cases.

In some cases you will want to make a change to a report you have created and may run into a message like this: "You have chosen to edit report "xxxx". This report has saved copies. Do you want to edit the report with limited editing capabilities?".

This means that you cannot change the definition of the report while "old" saved copies still exist. To resolve this you first have to remove the copies before you can make your changes. To do so, log in as a user who has the rights to change the report and open the report itself (in view mode, not edit mode). You will see, as shown in the screenshot below, the number of saved copies.

When you click on the number you will be taken to a page like the one below:

You will have to delete all saved copies of this report. When you have done so and enter edit mode again, you will see that you have full editing capabilities and are able to make all required changes.

Thursday, October 08, 2015

NuPIC is an open source project based on a theory of the neocortex called Hierarchical Temporal Memory (HTM). Parts of HTM theory have been implemented, tested, and used in applications, and other parts are still being developed. Today the HTM code in NuPIC can be used to analyze streaming data: it learns the time-based patterns in data, predicts future values, and detects anomalies. HTM is a set of algorithms which model the functionality of the neocortex in the human brain, and HTM theory is seen as a key to building intelligent applications and machines. NuPIC is the core product from Numenta; it is open source and available to anyone who would like to experiment with it, build upon it or contribute to it.

For intelligent applications NuPIC is a great starting point for your development. However, keep in mind that this field of computer science is new, and HTM is fairly new. Or, in the words of Jeff Hawkins: "This stuff is not easy. I can assure you that once you understand it, you will see a beauty in it. But most people take months to deeply understand the CLA. The tasks of creating hierarchies of CLAs and adding in motor capabilities are very difficult. Even just using the CLA in its current form is not trivial due to the learning required."

If you would like to run NuPIC on Oracle Linux, a number of steps may differ a bit from the installation on a MacBook. Also, a number of dependencies must be in place before you can install NuPIC on Oracle Linux: Python 2.7, the Python development headers, pip, wheel, NumPy and a C++ compiler such as GCC or Clang.

Python development headers
Next to Python, which will most likely ship with your Oracle Linux installation, you will have to make sure that you have the Python development headers. You can check whether they are installed by executing the command below. In my case the Python development headers were already installed.
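One way to check this on an RPM-based system such as Oracle Linux is to query the RPM database; the package name python-devel matches the Python 2.7 stack used in this post:

```shell
# Query the RPM database for the Python development headers;
# a "package python-devel is not installed" message means you
# still need to install them.
rpm -q python-devel
```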

In case you do not get a result, you will have to install the Python development headers by executing a yum install command as shown below:

[root@localhost ~]# yum install python-devel

pip
One of the requirements for installing NuPIC is pip. pip is a package management system used to install and manage software packages written in Python; many packages can be found in the Python Package Index (PyPI). If you have installed the Python setuptools, which I describe in this blogpost, pip can be installed using the easy_install command that is part of the setuptools distribution.
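With setuptools in place, the installation of pip is a one-liner; this assumes easy_install is on your PATH and the machine has outbound internet access:

```shell
# Use easy_install (from setuptools) to fetch and install pip from PyPI.
easy_install pip
```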

wheel
wheel is required as a dependency. Wheels are a built-package format for Python: a wheel is a ZIP-format archive with a specially formatted filename and the .whl extension. It is designed to contain all the files for a PEP 376 compatible install in a way that is very close to the on-disk format. Many packages will be properly installed with only the "Unpack" step (simply extracting the archive onto sys.path), and the unpacked archive preserves enough information to "Spread" (copy data and scripts to their final locations) at any later time. You can install wheel with the freshly installed pip by executing the command below. In my case this resulted in some warnings, which you can (and should) resolve, but which do not block the installation.

[root@localhost ~]# pip install wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
You are using pip version 6.1.1, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting wheel
/usr/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
    100% |████████████████████████████████| 65kB 1.8MB/s
Installing collected packages: wheel
Successfully installed wheel-0.26.0
[root@localhost ~]#

NumPy
It will not come as a surprise that NumPy is required on the system. NumPy is the fundamental package for scientific computing with Python. It contains, among other things: a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random number capabilities. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined, which allows NumPy to seamlessly and speedily integrate with a wide variety of databases.

NumPy can be installed by executing the command:

pip install numpy

Compiler
As the core of NuPIC is written in C++ you will need a C++ compiler. The obvious choice is GCC, which most likely is already installed on your system. You can check its availability with the command below, which in my example shows that it is installed.
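A quick way to check is simply to ask the compiler for its version; if the command is not found, GCC is not installed:

```shell
# Print the installed GCC version; NuPIC needs GCC 4.8, so verify
# the first line of the output.
gcc --version
```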

In case it is not installed you can execute a yum install command to install GCC on your Oracle Linux machine. One small but important note: NuPIC requires GCC 4.8.

Installing NuPIC
After you have ensured all dependencies are in place you can install NuPIC. The installation of NuPIC on Oracle Linux is a bit different from the installation on, for example, a Mac. The reason for this is that the nupic.bindings binary distribution is not stored on PyPI along with the OS X distribution. NuPIC uses the wheel binary format, and PyPI does not support hosting Linux wheel files. This forces you to download the wheel file directly from Numenta instead of from PyPI.

If all is OK the "pip install nupic" command should work like a charm. However, in case you run into a compiler error like the one shown below, you may be missing some additional prerequisites.

cc -c /tmp/tmphmvPkY/vers.cpp -o tmp/tmphmvPkY/vers.o --std=c++11
cc: error trying to exec 'cc1plus': execvp: No such file or directory
*WARNING* no libcapnp detected. Will download and build it from source now. If you have C++ Cap'n Proto installed, it may be out of date or is not being detected. Downloading and building libcapnp may take a while.
fetching https://capnproto.org/capnproto-c++-0.5.1.2.tar.gz into /tmp/pip-build-PHQZgs/pycapnp/bundled
configure: error: *** A compiler with support for C++11 language features is required.

To resolve this issue you will need to do an additional install of gcc-c++ by executing:

[root@localhost ~]# yum install gcc-c++

Testing NuPIC
To ensure your installation of NuPIC was successful you can run a test with the unit tests provided in the GitHub repository. Execute py.test against the unit test directory of the repository checkout. This should look like the example below.
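A sketch of the test run, assuming you have cloned the NuPIC repository into a directory called nupic and py.test is on your PATH (the exact test directory name can differ between NuPIC versions):

```shell
# From the root of the NuPIC checkout, run the unit test suite.
cd nupic
py.test tests/unit/
```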

Tuesday, October 06, 2015

When working with Python, and when you want to make your life easier when installing new modules and functions, it is a common best practice to use tools such as pip and/or the Python setuptools. Python setuptools helps you to easily download, build, install, upgrade, and uninstall Python packages. Setting up setuptools on Oracle Linux is basically a single command. Executing the command downloads a Python script and executes it; this script ensures that setuptools is downloaded and installed correctly on your system.

You can download and execute the script manually in two steps, or you can do it in one go, so that only a single command is needed to install setuptools on Oracle Linux. Below is an example of the single command, which involves a wget and piping the result to Python for execution.
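A sketch of the one-liner, assuming the host has wget and outbound HTTPS access; ez_setup.py was the standard setuptools bootstrap script at the time:

```shell
# Download the setuptools bootstrap script and pipe it straight into
# Python; -O - writes the download to stdout instead of a file.
wget https://bootstrap.pypa.io/ez_setup.py -O - | python
```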

In most cases you will not need to generate a MAC address. It will come with your network interface or, in the case of a virtual machine, it will be generated for you by the orchestration tooling. However, in some cases you might need to generate a random MAC address. In my case this was when we experimented with the Oracle VM APIs and at some point we wanted to provide the MAC address to the code that orchestrated the creation and deployment of a new VM.

Generating a new MAC address can be done in multiple ways; the Python script below is just one example. It can be integrated fairly easily into wider Python code, or you can call it from a Bash script.
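A minimal sketch of such a script; it uses the 00:16:3E prefix, which is the OUI commonly used for Xen-based virtual machines such as Oracle VM guests, and randomizes the remaining three octets (the function name is my own choice, not from any Oracle API):

```python
import random

def generate_mac(oui="00:16:3e"):
    """Generate a random MAC address.

    The first three octets (the OUI) default to 00:16:3E, a prefix
    commonly used for Xen/Oracle VM guests; the last three octets
    are randomized.
    """
    tail = [random.randint(0x00, 0xFF) for _ in range(3)]
    return oui + ":" + ":".join("%02x" % octet for octet in tail)

if __name__ == "__main__":
    # Print one freshly generated address, e.g. for use in a Bash script.
    print(generate_mac())
```

Because the last octets are random, collisions are possible in theory; if you manage many VMs you would want to check the generated address against the addresses already in use.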

When developing a new solution, including the application, data-store and infrastructure components, one of the questions to ask is on which layer to build resilience against failure: on which level of the stack will you protect against failure of a component, and on which level will your disaster recovery focus? In essence the answer is quite simple: you should safeguard disaster recovery as high as possible in the stack. The full answer, however, is very complex and includes disaster recovery, high availability and maximum availability components. Building a solution which is resilient against failure is a complex process in which every component needs to be taken into account. However, making sure that you have disaster recovery as high up in the stack as possible will make your life much easier.

As an example, take the image below, which shows an application-centered disaster recovery solution within a virtualized environment based on Oracle VM.

Within this solution applications run in an active-active setup in both site A and site B. Information between the two sites is kept in sync by making use of the Oracle Maximum Availability Architecture (MAA) principles. This means that when a site fails the application will still be able to function, as it runs on the other site. Users should not face any downtime and should not even be aware that one of the two sites has been lost to a disaster.

The application-centered disaster recovery solution is the most resilient against disasters and the loss of a site. However, in some cases it is not feasible to run an architecture as shown above, and you would still like to be able to perform a disaster recovery of the virtual machines running in your deployment. A solution is to use block replication on the storage level and allow your recovery site (site B) to start the VMs in case site A is lost.

Within this model you replicate all storage associated with the VMs from site A to a storage repository in site B. In essence this is an exact copy of the VM; however, on site B the machine is in a stopped state. This is also represented in the diagram below, where you can see more clearly the replication of storage between the two sites. For this solution you can use storage block replication in whatever way your storage appliance supports.

In case of a failure you have to ensure that all machines are stopped on site A; after this you can make the storage on site B readable and writable and start the virtual machines. This may not be the ideal solution compared with disaster recovery at the higher levels of the stack; however, if you are forced to provide disaster recovery on the infrastructure / VM layer instead of the application level, this is a solution that can be used.

Friday, October 02, 2015

When operating a large landscape of Linux machines, in our case a large landscape of Oracle Linux machines, security is one of the vital things to keep in mind. In an ideal world all your Linux deployments would be of exactly the same version and contain exactly the same level of patching; no machine would differ from another, and you would be able to run a yum update command on all machines without ever facing an issue or needing to talk to end customers or other tech teams. However, even though in some situations you are able to maintain such a state, commonly a landscape of servers is not equally patched, and in some cases servers are not patched for a long period of time. This is not necessarily due to bad maintenance by the Linux administrators; commonly it is related to pressure from the business not to change the systems, or not getting approval from a change advisory board.

When it comes down to new or improved functionality that comes with a Linux patch, this may be acceptable. However, missing a security patch can be much more serious. Oracle Enterprise Manager, in combination with Yum, provides a solution to show which patches need to be applied on which system. However, a different solution can also be used specifically to identify which security issues have not been addressed on a specific system.

To get an overview of the security vulnerabilities on your system you can use OpenSCAP. OpenSCAP is based upon SCAP, a line of standards managed by NIST. SCAP was created to provide a standardized approach to maintaining the security of enterprise systems, such as automatically verifying the presence of patches, checking system security configuration settings, and examining systems for signs of compromise.

Oracle provides an OVAL (Open Vulnerability and Assessment Language) XML file which you can use in combination with OpenSCAP to run against your Oracle Linux deployment to get a quick overview of what needs attention on your system and what looks to be correct. The Oracle Linux security guide contains more information on this subject.

After you have installed the needed components using a yum command, you will have to download the Oracle Linux specific components, or in more detail, the Oracle Linux ELSA file in OVAL format. Oracle provides this information in per-year files, where each file contains the security issues found during that year. As an example, if you wanted to run an audit against the ELSA file of 2015 you need to perform the following steps:

1) Download the ELSA information in the OVAL format and extract it from the bz2 file
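A sketch of the download and the subsequent audit run, assuming the oscap binary from the OpenSCAP packages is installed; the file name follows Oracle's published per-year naming, but verify the exact URL against Oracle's documentation:

```shell
# 1) Download the 2015 ELSA information in OVAL format and extract it.
wget https://linux.oracle.com/security/oval/com.oracle.elsa-2015.xml.bz2
bunzip2 com.oracle.elsa-2015.xml.bz2

# 2) Run the audit with oscap, writing the XML results and an HTML
#    report to /tmp for later review.
oscap oval eval --results /tmp/elsa-2015-results.xml \
  --report /tmp/elsa-2015-report.html com.oracle.elsa-2015.xml
```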

This will produce rather large output to the screen, which provides some quick information; however, the more valuable information can be found in both the XML results and the HTML report, which we have sent to /tmp. For reference, below is the shell output of the audit on the 2015 file, which I ran against an Oracle Linux 3.8.13-98.2.2.el7uek.x86_64 implementation:

3) Review the results (and take action)
You will have to review the results, which can be done by looking at the HTML report, or you can run a parser against the XML output for a more automated way of checking the results. In case you run a large number of Oracle Linux machines and want to use the oscap way of checking parts of your security, you most likely want to keep the XML files in a central location, so you do not need to connect all your machines to the public internet, and you most likely want to run this in a scheduled form and interpret the results in an automated manner. The HTML file is usable for human reading; however, the XML file is something you would want to parse in case you have more than a handful of servers.
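As a starting point for such a parser, the sketch below pulls the definitions that evaluated to "true" (meaning the vulnerability the definition describes is present) out of an oscap OVAL results file. The element and namespace names follow the OVAL results schema; the sample document and the definition ids in it are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily trimmed excerpt of an oscap OVAL results file.
SAMPLE = """<?xml version="1.0"?>
<oval_results xmlns="http://oval.mitre.org/XMLSchema/oval-results-5">
  <results>
    <system>
      <definitions>
        <definition definition_id="oval:com.oracle.elsa:def:20150001" result="true"/>
        <definition definition_id="oval:com.oracle.elsa:def:20150002" result="false"/>
        <definition definition_id="oval:com.oracle.elsa:def:20150003" result="true"/>
      </definitions>
    </system>
  </results>
</oval_results>"""

NS = "{http://oval.mitre.org/XMLSchema/oval-results-5}"

def failing_definitions(xml_text):
    """Return the ids of definitions that evaluated to 'true',
    i.e. the security issues that still need attention."""
    root = ET.fromstring(xml_text)
    return [d.get("definition_id")
            for d in root.iter(NS + "definition")
            if d.get("result") == "true"]

print(failing_definitions(SAMPLE))
```

In a scheduled setup you would feed this the results file produced on each server and aggregate the returned ids centrally, rather than reading the HTML reports one by one.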