AnalyticsWeek (https://analyticsweek.com)
All Things Analytics
What You Still Don’t Know about the General Data Protection Regulation

For starters, the enforcement date is actually today. The May 25th enforcement date is scheduled for European time; many European Union countries are approximately six hours ahead of Eastern Time in the United States. Later on this afternoon the clock will strike midnight throughout Europe, signaling the enforcement of perhaps the most stringent, comprehensive set of regulations to affect IT processes across the world: the General Data Protection Regulation.

By now, most organizations know these regulations transfer the ownership of personal data from the companies that hold it to the actual people to whom it belongs. They realize there are steep fines associated with non-compliance, and a host of changes to the ramifications of data loss or breach.

Still, the underlying changes associated with the GDPR are twofold. Not only does the regulation unify the enforcement of privacy mandates across the entire EU (and for any organization holding personal data of its citizens), but it also updates 1995’s Data Protection Directive.

According to Osterman Research Principal Analyst and Founder Michael Osterman: “When the Data Protection Directive was implemented in 1998, social media wasn’t much of a thing, the internet was fairly new, mobile devices weren’t really very popular yet. With GDPR they’ve updated the law to take into account the much greater variety and amount of private data that’s out there.”

As such, organizations may be surprised to learn who exactly is affected by this regulation, what data it involves, apparent contradictions with national or vertical regulations, how its provisions may be abused, the new roles it requires, and many other factors related to data processing, minimization, breaches, and more.

Of equal value, of course, are the means of effecting regulatory adherence.

According to Hyland Senior Manager of Product Marketing Dennis Chepurnov: “One of the biggest challenges of GDPR is really not so much around specific technological requirements, because it really doesn’t have a whole lot of those. The biggest challenge to organizations is sort of the shock to the system in terms of impacting their data management practices.”

Who’s Impacted
Perhaps the most prominent concern for those data management practices is determining who’s affected by these regulations. As Micro Focus Global Head of Product Marketing for Information Management and Governance Joe Garber indicated, “The most natural place to start is what do I have and where is it. And, importantly, what of that information is responsive to GDPR. That’s where things start to get a little more complicated.” GDPR regulations apply to all EU citizens regardless of where their data are—such as in the U.S. Many know that the data of EU tourists in domestic organizations are subject to those mandates; fewer know those mandates also apply to other citizens when they travel to the EU. “It’s not just about people in the EU, it’s about people who visit,” Osterman said. “If you take a trip to Europe in any of the EU member states, you are now a European data resident and so GDPR now applies to you as a recipient of all this stuff.”

Which Data
Since the GDPR was conceived to preserve the right to privacy, the vast majority of the data under its sanctions is personal data. This umbrella term covers a broad array of data types that are identifiable to a specific person, encompassing everything from aspects of race and ethnicity to conventional names and credit card numbers. However, these regulations also apply to metadata regarding various data elements. “Metadata can be protected by GDPR if it includes geo-location,” Chepurnov mentioned. “If I take a picture of you and there’s geo-location data in that file’s metadata, that’s considered personal data because it places you at a particular location and there’s a whole lot of implications of that.” It’s not difficult to envision other scenarios in which different metadata aspects are considered forms of personal data warranting protection. “It’s full complexity when trying to also look for the metadata,” acknowledged Efrat Kanner-Nissimov, NICE Marketing Director of Multi-Channel Recording. “But, that’s a very clear requirement from the GDPR.”
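To make Chepurnov’s photo example concrete, here is a minimal sketch, assuming the Pillow imaging library is installed and using a hypothetical JPEG file name, that pulls the GPS block out of a photo’s EXIF metadata; if that block is present, the file itself becomes personal data under the GDPR.

```python
from PIL import Image, ExifTags

def extract_gps(path):
    """Return the named GPS EXIF fields from a JPEG, or None if absent."""
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "GPSInfo":
            # Translate numeric GPS sub-tags (latitude, longitude, ...) to names
            return {ExifTags.GPSTAGS.get(k, k): v for k, v in value.items()}
    return None

print(extract_gps("holiday_photo.jpg"))  # hypothetical file name
```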

How Data’s Processed
The GDPR involves not only personal data and metadata—largely meaning where and how they’re stored—but also where and how they’re processed. Processing data can involve anything from cloud deployments for Service-Oriented Architecture to on-premise software use cases. This additional complexity requires organizations to consider how data are channeled throughout their organizations, including outsourcing efforts. The GDPR differentiates the data subject (typically the consumer whose personal data organizations hold) from the data controller (the organization with that data) and the data processor (whichever companies might be processing data). “Under GDPR, the data controller is ultimately responsible for compliance because they set the request in place,” Chepurnov remarked. “They say okay, we’re going to start processing data of these patients or students, for example. Their responsibility is to find a data processor who would also be compliant with GDPR.” Thus, organizations must ensure their processors’ processes comply with the GDPR as well.

Contradictions
Another pointed difficulty associated with the GDPR involves its notion of data minimization, which Kanner-Nissimov clarified as “the need to store and hold only data which is absolutely needed by your organization, and your organization needs to provide a very good reason for why this data is being stored with them.” Without such reasons organizations are expected to delete this data, as they also are for subject requests from consumers—under certain conditions. These aspects of the GDPR could potentially contradict other regulations (mandated by industries or countries) in which organizations are required to keep data that the GDPR might want them to delete. “If you’re a U.S. company and you anticipate that a legal action is coming down the pike, you’re obligated to start preserving all of the content related to that case,” Osterman said. “If there’s a wrongful termination suit and you think an employee’s going to sue, then you have to preserve all of this information. Well, if somebody for whom you’re saving this information says I want you to delete all of my data, then you’re going to be violating U.S. law by doing that. Or, if you keep all the data after they’re asking you to delete it, you’re going to violate EU law.”

Opportunities
Conversely, data minimization also presents clear opportunities for business value. The GDPR can actually spur organizations to capitalize on compliance measures. According to Garber, “By analyzing information that’s responsive to GDPR, you can also identify information or applications you no longer need to keep and get rid of them to save money.” Additional benefits may also be derived from attempts to de-silo the enterprise for compliance purposes, which could lead to “strategic insights to your customers, or identify areas of your business that are under-funded, or drive better productivity,” Garber maintained. Perhaps the most convincing of the additional advantages gained by complying with GDPR and parting with data not integral to business processes pertains to data’s cost. “We did a report with The Enterprise Strategy Group, an analyst firm, where we asked them what does it cost to keep a gigabyte of information throughout its lifecycle,” Garber recollected. “They came back and said it’s $25 a gigabyte, including not just the hardware and the storage, but the administration, the cooling, access, and so forth. If you start to remove terabytes of information, there’s a significant ROI there.”

Abuse
There’s also the potential to abuse some GDPR requirements, especially those empowering data subjects. Subjects can now request disclosure of the data an organization has on them, erasure of that data (in certain situations), and information about who’s overseeing GDPR measures. Organizations have 30 days to comply with requests about what data they hold on a subject before incurring penalties. Thus, consumers could potentially besiege organizations with requests that backlog their retrieval processes, subjecting them to penalties. There’s also the potential for fraud. “Imagine I’m a consumer doing some sort of transaction from my account to another,” Kanner-Nissimov said. “And then I want to exercise my right to be forgotten, and I want all the information about me to be deleted. Then I can approach the bank and say I didn’t ask for the transaction.” Moreover, the regulation also contains specific language enabling subjects to seek individual redress for non-compliance in addition to penalties imposed on organizations. “There’s a private right of action under the GDPR,” Osterman acknowledged. “You can be fined by the EU for violating the terms of the GDPR but also, the individual…[whose privacy was purportedly violated] can file suit.”

New Roles
Full adherence to the GDPR also requires organizations to adopt new roles within their ranks. Whereas it’s fairly common for companies to employ compliance officers, the GDPR specifically denotes that organizations have “a DPO (Data Protection Officer),” Kanner-Nissimov revealed. The responsibilities of the DPO are manifold, and include “all responsibilities relating to the privacy of the data itself,” recounted Kanner-Nissimov. “Adhering to the regulation, privacy by design, data minimization, integrity, confidentiality of the information, all the aspects of mandatory breach notification in 72 hours, that’s all on his plate.” Some organizations are implementing entire GDPR departments.

The notification of relevant parties within 72 hours of a breach is a crucial aspect of the DPO position. Part of this requirement’s significance is its departure from contemporary practice, in which organizations have waited months or even years to inform relevant parties of data breaches. But also, “if we assume private data’s under control, but then somebody had a copy of that data on a shared drive, and that drive or cloud app got hacked and that data got leaked, it’s going to take you time to figure out whose data was on there and who should be notified,” Chepurnov said. If it takes more than 72 hours to notify those parties, organizations have violated the GDPR.

Compliance Measures
In general, the template for complying with this regulation involves determining what data an organization possesses and where they are, creating the appropriate data governance policies to protect them, mapping those policies to appropriate technologies to rapidly extract data upon request, and placing this capability under some centralized IT or compliance personnel control. Different methods for doing so involve:

Archiving: Archiving technologies help organizations know where their data are. “The fundamental problem today is most of this information is siloed,” Osterman said. “If you want to do a complete search across all of your data sources you have to go silo by silo to extract it. That’s why you need a unified archiving approach, like Archive360, that will allow you to basically have a copy of everything.”

Enterprise Search: Organizations can point enterprise search tools at data anywhere in the enterprise to ascertain what’s where with indexes, while automating discovery jobs based on keywords and information patterns (a rough illustration of pattern-based discovery follows this list). The inclusion of document filtering, optical character recognition, and Natural Language Processing technologies enables these capabilities “without making the business halt everything it’s doing,” Chepurnov said.

Recording Solutions: These platforms are for those operating in the contact center space, and facilitate 100 percent retention of omni-channel contact center interactions with end-to-end encryption and analytics for identifying, extracting, and maintaining data in accordance with GDPR regulations “which can be done by the agent, on the back end by IT, or by the compliance department,” Kanner-Nissimov said.
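As a rough, hedged illustration of the pattern-based discovery behind the enterprise search item above (real tools add document filtering, OCR, and NLP on top of this), the sketch below walks a hypothetical shared-drive folder and flags strings that look like e-mail addresses or payment-card numbers. The patterns and the folder path are illustrative only.

```python
import os
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(root):
    """Yield (path, label, match) for every personal-data-looking string found."""
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, errors="ignore").read()
            except OSError:
                continue  # unreadable file; skip
            for label, pattern in PATTERNS.items():
                for match in pattern.findall(text):
                    yield path, label, match

for path, label, match in scan("./shared_drive"):  # hypothetical folder
    print(f"{path}: possible {label} -> {match}")
```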

Enforceability
The enforceability of the GDPR also raises many issues, which seem apparent for some matters but less so for others. In most cases, it seems the input of data subjects is necessary for alerting the EU about violations. For issues of consent, for example, subjects would seemingly have to simply produce an unwanted email to identify a non-compliant organization. Other issues are decidedly more involved, as Osterman’s comment indicates:

“Some people are speculating that Europe is going to try to go after some large American companies first because of the tax issue. If you look at what Apple has done by funneling a lot of money through Ireland; a lot of companies go through the Netherlands and so forth. What they’re doing is completely legal, but it gets around a lot of EU taxes.

There’s been a lot of stories in the UK popular press recently about Amazon not paying income taxes nearly to the same extent in the U.K. as a lot of British companies do, and so there’s some resentment over that. So maybe a secondary or tertiary reason for GDPR may be to punish those companies that aren’t paying their taxes to the extent that European companies are.”

6 things that you should know about VMware vSphere 6.5

vSphere 6.5 offers a resilient, highly available, on-demand infrastructure that is the perfect groundwork for any cloud environment. It provides innovation that will assist digital transformation for the business and make the job of the IT administrator simpler. This means that most of their time will be freed up so that they can carry out more innovations instead of maintaining the status quo. Furthermore, vSphere is the foundation of the hybrid cloud strategy of VMware and is necessary for cross-cloud architectures. Here are essential features of the new and updated vSphere.

vCenter Server appliance

vCenter is the essential back-end tool that controls VMware’s virtual infrastructure. vCenter 6.5 adds a number of upgraded features, including a migration tool that aids in shifting from vSphere 5.5 or 6.0 to vSphere 6.5. The vCenter Server Appliance also bundles VMware Update Manager, eliminating the need for external VM tasks or pesky plugins.

vSphere client

In the past, the front-end client used for accessing vCenter Server was old-fashioned and clunky. The vSphere client has now undergone a much-needed move to HTML5. Aside from the foreseeable performance upgrades, the change also makes the tool cross-browser compatible and more mobile-friendly. Plugins are no longer needed, and the UI has been switched to a more modern aesthetic based on the VMware Clarity UI.

Backup and restore

The backup and restore capability of vSphere 6.5 is an excellent addition that enables clients to back up the vCenter Server or any Platform Services Controller appliance directly from the Application Programming Interface (API) or the Virtual Appliance Management Interface (VAMI). In addition, it can back up both VUM and Auto Deploy embedded within the appliance. The backup mainly consists of files that are streamed to a preferred storage device over the SCP, FTP(S), or HTTP(S) protocols.
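For illustration only, here is a sketch of kicking off an appliance file-based backup over HTTPS with Python’s requests library. The endpoint path, the request wrapper, and the field names are assumptions about the appliance’s REST surface rather than a verified recipe (check your appliance’s API explorer before relying on any of it); the host, credentials, and FTP target are placeholders.

```python
import requests

VCSA = "https://vcsa.example.com"  # hypothetical appliance address

# Authenticate and obtain an API session token (basic auth against the CIS session endpoint).
session = requests.post(f"{VCSA}/rest/com/vmware/cis/session",
                        auth=("administrator@vsphere.local", "password"),
                        verify=False)
token = session.json()["value"]

backup_spec = {
    "piece": {                               # assumed request wrapper
        "location_type": "FTP",              # SCP, FTP(S), and HTTP(S) targets are supported
        "location": "ftp://backup-host/vcsa",
        "location_user": "ftpuser",
        "location_password": "ftppass",
    }
}

# Assumed backup-job endpoint; the job streams the backup files to the FTP target.
job = requests.post(f"{VCSA}/rest/appliance/recovery/backup/job",
                    json=backup_spec,
                    headers={"vmware-api-session-id": token},
                    verify=False)
print(job.status_code, job.json())
```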

Superior automation capabilities

With regard to automation, VMware vSphere 6.5 benefits greatly from the new upgrades. The revamped PowerCLI has been an excellent addition because it is completely module-based, and its APIs are currently in very high demand. This enables IT administrators to fully automate tasks down to the virtual machine level.
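The tool named here is PowerCLI, which is PowerShell based; purely for consistency with the other sketches in this piece, below is a rough Python equivalent using VMware’s pyVmomi SDK that performs the kind of per-VM task the paragraph describes, listing every virtual machine and its power state. Host and credentials are placeholders, and depending on your pyVmomi version you may need SmartConnect with explicit SSL options instead of SmartConnectNoSSL.

```python
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.example.com",
                       user="administrator@vsphere.local",
                       pwd="password")
content = si.RetrieveContent()

# Build a container view that recurses through every folder and returns all VMs.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

view.Destroy()
Disconnect(si)
```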

Secure boot

The secure boot element of vSphere 6.5 extends to virtual machines. This feature is available for both Linux and Windows VMs, and secure boot can be enabled simply by ticking a checkbox in the VM properties. After it is enabled, only properly signed software can boot within the virtual environment.

Improved auditing

vSphere 6.5 offers clients improved audit-quality logging. This aids in accessing more forensic detail about user actions, making it easier to determine what was done, when, and by whom, and whether any investigation of anomalies or security threats is warranted.

VMware’s vSphere developed out of the complexity and necessity of an expanding virtualization market. The earlier server products were not robust enough to deal with the increasing demands of IT departments. As businesses invested in virtualization, they had to consolidate and simplify their physical server farms into virtualized ones, and this triggered the need for virtual infrastructure. With these vSphere 6.5 features in mind, you can unleash its full potential. Make the switch today to the new and innovative VMware vSphere 6.5.

Strategic use of Google Analytics is essential to measure SEO progress for increasing its effectiveness

Search engine optimization is so dynamic a process that if you cannot keep improving it continuously, the returns from your marketing campaign will fall short of expectations. To improve, you must first know how the campaign is currently performing, which points to the possible areas for improvement. To evaluate campaign performance you have to gather data from several campaign metrics, for which some tool is necessary. Google Analytics is a free tool that is widely popular among search engine marketers and can assist in evaluating SEO campaign performance by capturing data from various activities. If you are not using Google Analytics, then you have to depend on keyword rankings to understand SEO performance. Indeed, keyword rankings have a direct relation to traffic flow, but they do not capture the bigger picture of other marketing parameters.

To understand the performance of various aspects of the marketing campaign, like the generation of organic traffic, the quality of traffic, revenue generation from the traffic, page loading times, and so on, you have to look well beyond keywords. This is where tools like Google Analytics become necessary, due to their ability to capture and analyze data from various marketing activities. Only when you paint the complete picture of your online marketing can you assess how effective SEO is for your organization. Google Analytics helps marketers calculate the return on investment, which is the ultimate measure of business success.

A performance measurement tool provides the inspiration to improve, because it shows you what you have to do to give more teeth to the SEO campaign. Every business wants to drive more customers to its website, but each depends on different metrics according to the nature of the company. However, some universal parameters apply to all kinds of companies. How you can use Google Analytics to measure these parameters is the topic of this article.

Tracking organic traffic only

Generating traffic organically is one of the best SEO practices, and every marketer should pay attention to this area. Every business is keen to generate maximum organic traffic, and you too would naturally like to know how much organic traffic reaches your website. If you find a decline in overall traffic going to the site, there is no need to panic unless you also find a decline in the flow of organic traffic. If the proportion of organic traffic is satisfactory, then the reduction is due to a fall in traffic from other sources.

The experts at the SEO Company visionsmash.com/york-seo can guide you in understanding organic traffic flow by using Google Analytics. Look at the Channel Grouping report in Google Analytics where you would find the traffic data reported according to the type of traffic – organic traffic, referral traffic, paid search, email marketing traffic, traffic from social media, etc.

On clicking the organic search data, you get elaborate and detailed information about that traffic. You can see the traffic generated from different keywords, the extent of traffic from different search engines, the top landing pages for search traffic, and more.
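One way to pull the same channel-grouped traffic numbers programmatically is the Analytics Reporting API v4. The sketch below is a minimal example that assumes the google-api-python-client and google-auth libraries are installed; the service-account JSON file and the view ID are placeholders for your own setup.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=creds)

# Sessions for the last 30 days, broken down by default channel grouping
# (Organic Search, Referral, Paid Search, Email, Social, ...).
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "123456789",  # placeholder view ID
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:sessions"}],
        "dimensions": [{"name": "ga:channelGrouping"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"][0])
```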

Determining the quality of traffic

You can use Google Analytics to determine the quality of search traffic, even though many would consider quality too subjective to measure, which is not correct. It is possible to ascertain the quality of traffic provided you know how to do it with Google Analytics. The rate of conversions from traffic is a measure of its quality, and the Assisted Conversions report is what you have to use.

The report helps you compare the conversions from search for a particular month with the previous period. It also captures data related to conversions across multiple visits, when visitors reach the site for the first time through search but the subsequent conversion happens because the visitor later goes to the site directly. The report helps you understand whether search traffic has gone up or down so that marketers can take action. If the overall traffic is good but conversions are low, it indicates that the quality of search traffic is not satisfactory.

Knowing the revenue earned from search traffic

Although SEO can bring an overall improvement in marketing by enhancing visibility, traffic, and conversions, these measures alone do not reveal its direct impact on the business. An easier way is to ascertain the value derived from organic searches, which makes SEO performance more visible and easier to comprehend. Focusing on the business gains derived from SEO helps you understand its effectiveness better.

To know how much revenue is generated from organic searches, you have to compare the cost you would have had to bear to earn the same revenue had you purchased the keywords through Google AdWords. To conduct this exercise, you have to link both your Google AdWords account and Google Analytics with the Search Console.

By comparing the click data reported by Google Analytics and Google AdWords, you can estimate the monetary value of the clicks coming from organic searches.
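The comparison boils down to simple arithmetic: value the organic clicks at what the same clicks would have cost as paid clicks. A tiny worked example, with placeholder numbers standing in for figures pulled from the two reports:

```python
organic_clicks = 12_500   # from the Google Analytics organic-search report
avg_cpc_usd = 1.80        # average cost-per-click for the same keywords in AdWords

estimated_value = organic_clicks * avg_cpc_usd
print(f"Estimated monthly value of organic search traffic: ${estimated_value:,.2f}")
# -> Estimated monthly value of organic search traffic: $22,500.00
```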

Identification of pages that load very slowly

Slow-loading pages can undo all your efforts in SEO, so you have to identify the web pages that harm SEO because of their slow speed. Slow-loading pages make users unhappy and drive them to quit the website. Google Analytics helps to measure page load times, which are now a ranking factor for search engines. By generating a Behavior report, you can gather data about average page speed and the exit percentage for every web page and identify the ones that result in the most exits.
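If you export that Behavior data, a few lines of pandas will surface the worst offenders. This is a minimal sketch assuming a hypothetical CSV export; the column names below are assumptions about your own export and should be adjusted to match it.

```python
import pandas as pd

df = pd.read_csv("behavior_report.csv")  # hypothetical export from Google Analytics

# Rank pages that are both slow to load and frequently the last page viewed.
worst = (df.sort_values(["avg_page_load_time_s", "exit_rate_pct"], ascending=False)
           .head(10)[["page_path", "avg_page_load_time_s", "exit_rate_pct"]])
print(worst.to_string(index=False))
```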

To present the data obtained from Google Analytics convincingly, it is advisable to create your own dashboard using the Dashboard interface. A summarized version of all reports is reflected in the dashboard and presented in a single view that you can easily access, print, and share.

Easy understanding of the value that SEO brings to business gives confidence to marketers about the effectiveness of their efforts.

[ TIPS & TRICKS OF THE WEEK]

Save yourself from the zombie apocalypse of unscalable models
One living and breathing zombie in today’s analytical models is the absence of error bars. Not every model is scalable or holds its ground as data grows. The error bars attached to almost every model should be duly calibrated; as business models rake in more data, the error bars keep them sensible and in check. If error bars are not accounted for, our models become susceptible to failure, leading us to a Halloween we never want to see.
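One common way to put such error bars on a model estimate is a bootstrap confidence interval; the NumPy sketch below uses made-up model scores purely as a stand-in for your own metric.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=0.82, scale=0.05, size=200)   # placeholder model scores

# Resample the scores with replacement many times and collect the mean of each resample.
boot_means = [rng.choice(scores, size=scores.size, replace=True).mean()
              for _ in range(5000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"Estimate {scores.mean():.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```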

[ DATA SCIENCE Q&A]

Q: Give examples of data that have neither a Gaussian nor a log-normal distribution.
A: * Allocation of wealth among individuals
* Values of oil reserves among oil fields (many small ones, a small number of large ones)
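Both examples are classically modeled with heavy-tailed power-law (Pareto) distributions, which are neither Gaussian nor log-normal. A quick NumPy sketch, with arbitrary parameters, shows how extreme the skew is:

```python
import numpy as np

rng = np.random.default_rng(42)
# numpy's pareto() draws from a Lomax distribution; shifting by 1 and scaling
# gives a classical Pareto sample with a minimum "wealth" of 10,000.
wealth = (rng.pareto(a=1.5, size=100_000) + 1) * 10_000

print(f"median: {np.median(wealth):,.0f}")
print(f"mean:   {wealth.mean():,.0f}")                               # far above the median
print(f"top 1% share: {np.sort(wealth)[-1000:].sum() / wealth.sum():.1%}")
```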

@AlexWG on Unwrapping Intelligence in #ArtificialIntelligence #FutureOfData

In this podcast Alex Wissner-Gross from Gemedy sat with Vishal Kumar from AnalyticsWeek to discuss the convoluted world of intelligence in artificial intelligence. He sheds deep insight into what machines perceive as intelligence and how to evaluate the current unwrapping of AI capabilities. This podcast is a must-listen for anyone who wishes to understand what AI is all about.

Alex’s BIO:
Dr. Alexander D. Wissner-Gross is an award-winning scientist, engineer, entrepreneur, investor, and author. He serves as President and Chief Scientist of Gemedy and holds academic appointments at Harvard and MIT. He has received 125 major distinctions, authored 18 publications, been granted 24 issued, pending, and provisional patents, and founded, managed, and advised 4 technology companies that were acquired for a combined value of over $600 million. In 1998 and 1999, respectively, he won the USA Computer Olympiad and the Intel Science Talent Search. In 2003, he became the last person in MIT history to receive a triple major, with bachelors in Physics, Electrical Science and Engineering, and Mathematics, while graduating first in his class from the MIT School of Engineering. In 2007, he completed his Ph.D. in Physics at Harvard, where his research on neuromorphic computing, machine learning, and programmable matter was awarded the Hertz Doctoral Thesis Prize. A thought leader in artificial intelligence, he is a contributing author of the New York Times Science Bestseller, This Idea Must Die, and the Amazon #1 New Release, What to Think About Machines That Think. A popular TED speaker, his talks have been viewed more than 2 million times and translated into 27 languages. His work has been featured in more than 200 press outlets worldwide including The Wall Street Journal, BusinessWeek, CNN, USA Today, and Wired.

About #Podcast:
#FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Wanna Join?
If you or anyone you know wants to join in,
register your interest at http://play.analyticsweek.com/guest/

Nick Howe (@Area9Nick) talks about fabric of learning organization to bring #JobsOfFuture #podcast

In this podcast Nick Howe (@NickJHowe) from @Area9Learning talks about the transforming landscape of learning. He sheds light on some of the challenges in learning and some of the ways learning could match the evolving world and its needs. Nick also shares some tactical steps that businesses could adopt to create a world-class learning organization. This podcast is a must for learning organizations.

Nick’s BIO:
Nick Howe is an award-winning Chief Learning Officer and business leader with a focus on the application of innovative education technologies. He is the Chief Learning Officer at Area9 Lyceum – one of the global leaders in adaptive learning technology – a Strategic Advisor to the Institute of Simulation and Training at the University of Central Florida, and a board advisor to multiple EdTech startups.

For twelve years Nick was the Chief Learning Officer at Hitachi Data Systems where he built and led the corporate university and online communities serving over 50,000 employees, resellers and customers.

With over 25 years’ global sales, sales enablement, delivery, and consulting experience with Hitachi, EDS Corporation, and Bechtel Inc., Nick is passionate about the transformation of customer experiences, partner relationships, and employee performance through learning and collaboration.

About #Podcast:
#JobsOfFuture podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

[ TIPS & TRICKS OF THE WEEK]

Data Analytics Success Starts with Empowerment
Being data driven is not so much a tech challenge as it is an adoption challenge. Adoption has its roots in the cultural DNA of any organization; great data-driven organizations weave the data-driven culture into their corporate DNA. A culture of connection, interaction, sharing, and collaboration is what it takes to be data driven. It is about being empowered more than it is about being educated.

[ DATA SCIENCE Q&A]

Q: Examples of NoSQL architectures?
A: * Key-value: in a key-value NoSQL database, all of the data consists of an indexed key and a value. Examples: Cassandra, DynamoDB
* Column-based: designed for storing data tables as sections of columns of data rather than as rows of data. Examples: HBase, SAP HANA
* Document database: maps a key to a document that contains structured information; the key is used to retrieve the document. Examples: MongoDB, CouchDB
* Graph database: designed for data whose relations are well represented as a graph, with interconnected elements and an undetermined number of relations between them. Example: Neo4j
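As a small illustration of the document-database pattern named above, here is a minimal pymongo sketch, assuming a MongoDB instance is running locally; the key (the user id) retrieves a structured document.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
users = client["demo"]["users"]

# Store (or overwrite) a structured document under the key "u123".
users.replace_one({"_id": "u123"},
                  {"_id": "u123", "name": "Ada", "interests": ["analytics", "ai"]},
                  upsert=True)

# Retrieve the whole document by its key.
print(users.find_one({"_id": "u123"}))
```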

@JustinBorgman on Running a data science startup, one decision at a time #Futureofdata #podcast
Youtube: https://math.im/youtube
iTunes: https://math.im/itunes

In this podcast Justin Borgman talks about his journey of starting a data science startup, doing an exit, and jumping into another one. The session is filled with insights for leaders looking for entrepreneurial wisdom to get onto the data-driven journey.

Justin’s BIO:
Justin has spent the better part of a decade in senior executive roles building new businesses in the data warehousing and analytics space. Prior to co-founding Starburst, Justin was Vice President and General Manager at Teradata (NYSE: TDC), where he was responsible for the company’s portfolio of Hadoop products. Prior to joining Teradata, Justin was co-founder and CEO of Hadapt, the pioneering “SQL-on-Hadoop” company that transformed Hadoop from file system to analytic database accessible to anyone with a BI tool. Hadapt was acquired by Teradata in 2014.

Justin earned a BS in Computer Science from the University of Massachusetts at Amherst and an MBA from the Yale School of Management.

About #Podcast:
#FutureOfData podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Dave Ulrich (@dave_ulrich) talks about role / responsibility of HR in #FutureOfWork #JobsOfFuture #Podcast

In this podcast Dave Ulrich shares his perspective on the future of HR. Dave covers some of the best practices that HR could deploy today to ensure their organizations stay relevant as they head into the future, along with tactical steps HR professionals could take. This is a great podcast for HR executives looking to build a data-driven HR practice that delivers sustained value.

Dave’s BIO:
Ranked as the #1 management guru by Business Week, profiled by Fast Company as one of the world’s top 10 creative people in business, a top 5 coach in Forbes, and recognized on Thinkers50 (Hall of Fame) as one of the world’s leading business thinkers, Dave Ulrich has a passion for ideas with impact. In his writing, teaching, and consulting, he continually seeks new ideas that tackle some of the world’s thorniest and longest standing challenges.

His bestselling books and popular speeches inspire the corporate and academic agenda. Dave has co-authored over 30 books and 200 articles that have shaped three fields:
• Organization.
• Leadership.
• Human Resources.

He has spoken to large audiences in 88 countries; performed workshops for over half of the Fortune 200; and coached successful business leaders. He is co-founder of the RBL Group (www.rbl.net), a consulting firm that increases business results through leadership, organization, and human resources. He gives back to the profession and others, having worked as Editor of Human Resource Management for 10 years, serving as a Trustee and advisor to universities and other professional groups, and serving on the Herman Miller board for 15 years. He is known for continually learning, turning complex ideas into simple solutions, and creating real value for those he works with.

Wendy and Dave serve frequently in their church, have 3 children, 8 grandchildren, and get their greatest glee when their grandkids’ eyes light up at seeing them.

About #Podcast:
#JobsOfFuture podcast is a conversation starter that brings leaders, influencers, and leading practitioners on the show to discuss their journeys in creating the data-driven future.

Underpinning the Internet of Things with GPUs

Processing, analyzing, and capitalizing on big data generated in the Internet of Things is an exacting task for any organization. The most lucrative use cases for the IoT require acting in real time on continuously generated streaming data from sources like industrial equipment sensors, autonomous or connected vehicles, or physical infrastructure for smart cities.

But while most users tend to focus on relatively new architectures for parsing such copious data for low-latency applications, including the cloud and fog computing/edge computing, one factor is frequently overlooked: compute power. “Hardware kind of always precedes software,” Kinetica CTO Nima Negahban explained. “The hardware that’s been available has very limited compute capabilities; because of that you’ve had only basic edge analytics. But I think that’s beginning to change in a big way.”

One of the most substantial factors affecting IoT analytics is the emergence of GPUs (Graphics Processing Units), which are supplementing, and possibly displacing, conventional CPUs (Central Processing Units). Aided by GPUs, there are multiple applications of big data—such as the IoT—that are suddenly much more viable to contemporary organizations.

The ascendancy of GPUs has paralleled that of big data in general, priming both for solidifying the IoT. “As far as an adoption context at the edge, CPUs are still king,” Negahban said. “But as far as innovation and raw processing power, GPUs have the lead.”

Raw Processing Power
The influx of GPUs throughout the data ecosystem is directly responsible for the viability of big data applications such as the IoT and advanced forms of machine learning. There are two fundamental differences between utilizing GPUs and deploying the CPUs that predated them. The first is the sheer amount of processing power of the former. According to Negahban, CPUs “are not made for having a massively parallel computation framework”, particularly in fog computing deployments at the fringe of the IoT. GPUs, however, are ideal for parallel processing. “Because of its very nature, [GPUs] have passivity capabilities where ARM CPUs just don’t have that,” Negahban remarked. The second difference is cost: the more robust computational power of GPUs translates into cost benefits, suddenly making IoT deployments (and big data in general) much more affordable. “It wasn’t long ago where only the biggest of the big companies could afford the kind of infrastructure to process this kind of stuff,” mentioned Razorthink CEO Gary Oliver. “Now, with the advent of GPUs and other ways to do it, it’s something that any company can get their hands on.”
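A rough way to feel the difference that raw parallelism makes from Python is to push the same matrix multiplication to a GPU with CuPy. The sketch below assumes a CUDA-capable GPU and the CuPy package are available, and is only meant to show the shape of the workflow, not a benchmark.

```python
import numpy as np
import cupy as cp

a_cpu = np.random.rand(4000, 4000).astype(np.float32)
b_cpu = np.random.rand(4000, 4000).astype(np.float32)

c_cpu = a_cpu @ b_cpu                      # runs on the CPU cores

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu                      # dispatched across thousands of GPU threads
cp.cuda.Stream.null.synchronize()          # wait for the GPU kernel to finish

# Confirm both paths compute (approximately) the same result.
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), rtol=1e-3))
```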

Innovation
Much of the innovation fueled by GPUs that Negahban referenced involves opportunities for fog computing at the IoT’s extremities. With CPUs, organizations could traditionally only do fairly simple analytics. According to Kinetica VP of Product Marketing Dipti Borkar “with CPUs and the way edge computing is today, you might be doing some very basic rules-based stuff.” Thus, with thermostats for smart homes, common use cases could involve “let’s say if the temperature gets above 80 degrees, let’s do something about it,” Borkar said. However, the additional computational power of GPUs can facilitate more sophisticated aggregate analytics, in which data is compiled across multiple days (or even sources, in some instances) for action of even greater consequence. Borkar noted that the increasing memory and storage of computer chips enhanced by GPUs facilitates fog computing deployments in which “you can do aggregated patterns and predictive stuff. So, it looks like for the last three days the usage has been increasing, maybe this homeowner has forgotten to turn a certain appliance off.”
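A toy version of the aggregated check Borkar describes, in plain Python with made-up readings: instead of a single-reading rule ("above 80 degrees, act"), look at several days together and flag a sustained rise.

```python
daily_kwh = [3.1, 3.0, 3.4, 3.9, 4.6]   # last five days of appliance usage (illustrative)

# Flag when usage has risen each of the last three days.
last_three = daily_kwh[-3:]
rising = all(b > a for a, b in zip(last_three, last_three[1:]))
if rising:
    print("Usage has increased three days in a row - maybe an appliance was left on.")
```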

It’s important to realize that in this use case and others, the ability to rapidly process large volumes of data—onsite, if need be—is an integral aspect of profiting from the IoT. “With so much data coming off of sensors and other electronic forms, there’s just a vast wealth of data coming from organizations now that’s able to be processed,” Oliver ventured. “I think the whole big data technology stacks that can process that data are important.”

Aggregate Analytics
The potential for aggregate analytics is vast. It both involves, yet transcends, the IoT, dramatically affecting security issues, customer service, and other facets of data-driven processes. Aggregate analytics are so cogent in the IoT because they can aggregate data at the fringe of an organization’s network, partly due to the presence of GPUs. “It’s kind of the difference between going from end line analytics that are just looking at the end object they’re examining, to aggregate analytics where you’re looking at a broader swath of data that can really capture behavior better,” Negahban observed. “That’s really what we’re talking about.” The ability to perform aggregate analytics also contributes to centralized IoT deployments. In addition to performing aggregates for devices via fog computing paradigms, GPU-empowered aggregate analytics can also be used to “ship back data to centralized stores and centralized analytics platforms with the provision that what it’s sending back is already pre-aggregated to greatly reduce costs at the central stores,” Negahban said.
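The pre-aggregation idea in that last point can be sketched in a few lines: summarize raw readings at the edge (here, per-sensor, per-hour min/mean/max over made-up tuples) and ship only the compact summary to the central store.

```python
from collections import defaultdict
from statistics import mean

# Illustrative raw readings: (sensor_id, hour, value)
readings = [("s1", 9, 71.2), ("s1", 9, 73.0), ("s1", 10, 75.4),
            ("s2", 9, 40.1), ("s2", 9, 39.8)]

groups = defaultdict(list)
for sensor, hour, value in readings:
    groups[(sensor, hour)].append(value)

# The compact summary is what would be shipped upstream instead of every raw reading.
summary = {key: {"min": min(v), "mean": round(mean(v), 2), "max": max(v), "n": len(v)}
           for key, v in groups.items()}
print(summary)
```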

Decreased Costs
Ultimately, the ascent of GPUs can considerably decrease the costs associated with the IoT. GPUs are spurring innovative practices around fog computing, aggregate analytics, and centralized cloud architectures. The key to their cost saving advantages is their massive parallel processing power. As a result “you’re talking about use cases that then operationalize better and save costs better, rather than just very simple operations that are on a small set of data,” Borkar said.