In my previous posts I showed how to deploy, discover physical arrays, create virtual arrays and pools, and create an authentication provider. Next I will show how to deploy and configure object data services. From the web GUI select Settings, then Data Services Nodes. The Data Services Nodes screen pops up.... Continue reading

Having now set up a physical and virtual array (HERE), we next need to create an authentication provider to validate logins by tenants to ViPR services. From the ViPR web GUI click Security, then Authentication Provider. Note that a user has to be set up to provide access to LDAP. See this... Continue reading

Having deployed ViPR and discovered physical arrays, the next step is to abstract them into virtual arrays and pools for consumption. Below you'll see examples using our Isilon array. From the left side select Virtual Assets and then select Virtual Array. On the Virtual Array screen click Add, give you... Continue reading

After deploying the ViPR controller the next thing to do is add storage resources. This is done through a discovery. Discover Isilon File From the dashboard, on the left is a series of buttons. Under the Dashboard button is Physical Assets. Click it, and on the pop-out select... Continue reading

With the launch of EMC ViPR 2.0 I figured I would update my install in the lab and publish my notes here. The basics are still the same as 1.0 and 1.1, but the interface is slightly different. This is the first in a series to deploy and configure ViPR... Continue reading

This is the last blog of a series that showed how to deploy, configure, and use the Pivotal Cloud Foundry Runtime environment and Developers Console. Runtime Developers Console One of the cool parts of CF is the ability to install services (Hadoop, MySQL, MongoDB) and then deploy applications that attach... Continue reading

MongoDB World, Day 1. I had a chance this week to attend the first-ever MongoDB World, held in NYC June 24th and 25th. This blog will bring together my notes from the two days spent attending the conference and sessions. To start the day we were greeted by a... Continue reading

In the last couple of blog posts I showed how to set up PivotalCF Operations Manager, Elastic Runtime, and the PHD service. After setting up our environment we would then want to log in to the Developers Console to deploy applications. While setting up the Elastic Runtime, we created a wildcard... Continue reading

This blog will show you how to deploy the PivotalCF Hadoop service using Operations Manager. Once the service is deployed you can use the Developers Console to launch on-demand Hadoop clusters using the PivotalCF framework. Log in to the Operations Manager console using a web client. On the left side... Continue reading

In the last two blog posts (here, and here) I showed how to deploy and configure Pivotal Cloud Foundry Operations Manager and the Operations Manager Director for VMware vSphere. This blog will show you how to deploy the Pivotal Cloud Foundry Elastic Runtime environment. Elastic Runtime is the framework that... Continue reading

In my last blog post I showed how to deploy Pivotal Cloud Foundry Operations Manager. Once it’s deployed we have to configure it. Pivotal CF Operations Manager is a web application that you use to deploy and manage a Pivotal CF PaaS. It does its deployments using BOSH. BOSH installs... Continue reading

Cloud Foundry is an open-source PaaS project that gives a user the ability to deploy platforms across multiple “cloud” platforms like OpenStack, VMware, vCHS, and AWS. Pivotal CF is the enterprise version of this that has two main components to enable PaaS: Pivotal CF Elastic Runtime Service – A... Continue reading
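As a sketch of what deploying to a Cloud Foundry runtime looks like, an application is typically described by a manifest.yml; the app name and values below are illustrative assumptions, not from the post:

```yaml
# manifest.yml — illustrative Cloud Foundry application manifest
applications:
- name: my-sample-app   # hypothetical app name
  memory: 512M          # RAM allotted to each instance
  instances: 2          # number of app instances to run
```

With a manifest like this in the app directory, running `cf push` hands the app to the runtime, which stages and runs it.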

This is a continuing series on how to build a data lake. Welcome to part7. Part1 Part2 Part3 Part4 Part5 Part6 Over the past couple of weeks I've been blogging about how to create a data lake. These blogs included the architecture and installation of a Data... Continue reading

OpenStack Cinder and Software-Defined Storage (SDS) A week after the OpenStack Summit in Atlanta kicked off, I’ve had some time to digest all I saw and heard. Having had the chance to present with John Griffith, the PTL of Cinder, was an amazing experience. John recently published a blog... Continue reading

This is a continuing series on how to build a data lake. Welcome to part6 Part1 Part2 Part3 Part4 Part5 GemFire XD is provided as a Pivotal HD installable component, for use with the Pivotal Command Center CLI installer. The CLI installation process installs multiple instances of GemFire XD. You... Continue reading

This is a continuing series on how to build a data lake. Welcome to part5. Part1 Part2 Part3 Part4 In this blog post I’ll show you how to enable Isilon to integrate with Pivotal HAWQ. In these previous posts I explained the architecture and install of our data lake. At... Continue reading

OpenStack Summit Atlanta session I was very fortunate to have the chance to present with Ken Hui and John Griffith in a session titled Laying Cinder Blocks (Volumes): Use Cases and Reference Architectures this week at the OpenStack Summit. The session was standing room only and it was a great... Continue reading

This is a continuing series on how to build a data lake. Welcome to part4. Part1 Part2 Part3 The command center server (PCC) will push all the software and configuration information to our PHD nodes, HAWQ master, and HAWQ segment servers. Create a temp directory and upload the binaries to... Continue reading

This is a continuing blog series on how to build a data lake. Part1 Part2 In part 2 I showed the architecture we are building for a data lake. In this blog I will begin to show how to deploy and integrate it all together. We’ll start with the base,... Continue reading

ViPR HDFS is a POSIX-like Hadoop-compatible file system (HCFS) that enables you to run Hadoop 2.x applications on top of your ViPR storage infrastructure. You can configure your Hadoop distribution to run against the built-in Hadoop file system, against ViPR HDFS, or any combination of HDFS, ViPR HDFS, or... Continue reading

ViPR HDFS is a POSIX-like Hadoop-compatible file system (HCFS) that enables you to run Hadoop 2.0 applications on top of your ViPR storage infrastructure. You can configure your Hadoop distribution to run against the built-in Hadoop file system, against ViPR HDFS, or any combination of HDFS, ViPR HDFS, or... Continue reading

ViPR HDFS is a POSIX-like Hadoop-compatible file system (HCFS) that enables you to run Hadoop 2.0 applications on top of your ViPR storage infrastructure. You can configure your Hadoop distribution to run against the built-in Hadoop file system, against ViPR HDFS, or any combination of HDFS, ViPR HDFS, or... Continue reading
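Pointing a Hadoop distribution at an HCFS is done through core-site.xml. A minimal sketch follows; the viprfs URI (bucket, tenant, and installation names) is a hypothetical placeholder for illustration, not a value from these posts, and the exact property names for a ViPR deployment come from the ViPR documentation:

```xml
<!-- core-site.xml (sketch): the URI below is illustrative -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hypothetical ViPR HDFS bucket URI -->
    <value>viprfs://mybucket.mytenant.myinstallation/</value>
  </property>
</configuration>
```

Setting `fs.defaultFS` to the HCFS scheme makes paths like `hadoop fs -ls /dir` resolve against ViPR HDFS instead of the built-in HDFS.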

Today EMC launched the Hadoop Starter Kit (HSK) ViPR edition. These kits are designed to help deploy a Hadoop environment and use EMC ViPR as a Hadoop-compatible file system for HDFS. There are 3 separate guides that each focus on how to deploy ViPR data services, create an... Continue reading

In my previous post I shared the origins of the Data Lake pilot within the EMC Open Innovations Lab. Based on those criteria we decided we needed to build a new analytics environment that would allow for real-time data processing and the ability to compare it to historical data.... Continue reading

So I’ve seen a lot of blogs recently talking about the Data Lake: what it is and what it means. My favorite has been Steve Todd's, which gives a good high-level overview of what a data lake is. In the EMC Open Innovations Lab (OIL) we are constantly working... Continue reading