Please note that this is a recorded webinar. It was recorded during a live presentation.
In this session, we are going to cover the basics of Clustered Indexes & Non-Clustered Indexes.
1. How to create them and some best practices to follow.
2. What is a covering index and how is it useful.
3. What is fragmentation and how to defrag indexes.
4. What is fill factor and how it is useful.
This is going to be a level 100 session targeted towards Developers & DBAs beginning their careers with SQL Server.
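As a quick preview of the four topics, here is a minimal T-SQL sketch. The table and index names (a hypothetical dbo.Orders table) are placeholders for illustration only; real choices depend on your workload.

```sql
-- Clustered index: defines the physical order of the table (one per table).
CREATE CLUSTERED INDEX CIX_Orders_OrderID
    ON dbo.Orders (OrderID);

-- Non-clustered "covering" index: INCLUDE stores extra columns at the leaf
-- level so queries on CustomerID can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);

-- Defragment by rebuilding; FILLFACTOR = 90 leaves 10% free space on each
-- leaf page to absorb future inserts and updates.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 90);
```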
Webinar resources including the presentation, demo files, code snippets and more learning material are available on http://www.dataplatformgeeks.com/ (Join for free and access all the resources)
As a DPG member, you have free access to all our learning resources like videos, Hands-On-Labs & past event resources.
Suggest topics that you wish to learn through our webinars: http://www.dataplatformgeeks.com/dpg-...
Connect with DataPlatformGeeks: http://www.dataplatformgeeks.com/
http://www.twitter.com/SQLServerGeeks
https://www.facebook.com/SQLServerGeeks
Email us: [email protected]
Have technical questions? Join the largest SQL/Data group on FaceBook – https://www.facebook.com/groups/thesq...
LinkedIn Group: https://www.linkedin.com/groups/6753546
DataPlatformGeeks (DPG) Community
Join the fastest growing community of data & analytics professionals
Why Join DPG? http://www.dataplatformgeeks.com/
-Attend all events hosted by DPG, including SQLMaestros Special Events
-Get access to free videos, labs, magazines and a host of learning resources
-Download all events & conference material
-Learn new skills. Sharpen existing skills
-Be part of Asia’s Largest Data/Analytics Community
-Opportunity to be a regional mentor & speaker at our events
-Immense technical & professional development
-http://www.dataplatformgeeks.com/
Do you know about Data Platform Summit (DPS)?
Learn about the largest Data/Analytics Learning Event in Asia. http://www.DPS10.com | [email protected]
A word from our sponsors
SQLMaestros Hands-On-Labs
Want to practice SQL, Azure & BI concepts, step-by-step with exercises, screenshots, instructions & explanations? Get access to 100+ labs covering the entire Microsoft Data Platform stack. Try SQLMaestros Hands-On-Labs – the new way of practical, self-paced learning. Anytime. Anywhere.
http://hols.SQLMaestros.com
Email [email protected]
SQLMaestros Video Courses
http://sqlmaestros.com/sql-server-vid...
SQLMaestros Master Classes & Accelerators
http://sqlmaestros.com/
SQL Health Check
http://sqlmaestros.com/
Advanced SQL Training (On-site)
Want your team to experience Amit Bansal's Advanced SQL Training?
http://www.SQLMaestros.com
Email [email protected]
Corporate Training
Looking for any other high-end technology training for your team?
http://www.PeoplewareIndia.com
Email [email protected]
Connect with the founder of DataPlatformGeeks/SQLServerGeeks
Follow on Twitter: https://twitter.com/A_Bansal
Follow on FaceBook at http://www.facebook.com/amit.r.bansal
Follow on LinkedIN: http://www.linkedin.com/in/amitbansal...
Facebook Page: https://www.facebook.com/AmitRSBansal/

Every column within a table has to be given what is known as a datatype. A data type is a fairly simple concept when you dissect the word. It is literally the type of data. Why do we use types, though? The biggest benefit is so that Oracle knows how to interpret and work with our data. It also makes the database better at rejecting incorrect data.
If we had no concept of a data type, there would be a lot more work involved in forcing data to be of the right format. It would also be harder for us to get the database to treat the data in the correct way.
In addition to this, a database can optimize storage and performance for a column if everything is of the same data type. Because of this, each column can only support one data type.
There are numerous different data types in Oracle and it helps us if we categorize them.
The first categories of data types we should learn about are:
String,
Numeric,
Temporal
Now, there are a few more categories we could make, but these are the main ones. We will worry about the others another day, as I am only introducing the topic.
A string data type is anything within quotes. Most databases use single quotes for string data. Inside of the quotes can be any number of characters. What is a character? Think of any letter, number, or symbol you can type. Some people call these letters, numbers, and symbols alphanumeric.
Numeric data type includes only numbers. These data types are often used for data that you plan on using for mathematical calculations.
Temporal data types are data types that are used for dates and times.
Now, each data type is probably going to have some options you'll need to worry about, but one that comes up with every data type is storage. The reason we need to consider storage is that we may end up with millions of rows in a table, and a difference of a few bytes per row makes a huge difference when we look at the whole picture. When a data type gives you the option of size, you will want to choose a size that can hold what you need, but nothing more.
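A minimal sketch tying the three categories together, in Oracle syntax. The employees table and its columns are hypothetical, invented just for illustration:

```sql
-- A hypothetical table showing one column from each category:
CREATE TABLE employees (
    first_name  VARCHAR2(50),   -- string: up to 50 characters, no more
    salary      NUMBER(8,2),    -- numeric: up to 8 digits, 2 after the decimal
    hire_date   DATE            -- temporal: date and time
);

-- String data goes in single quotes:
INSERT INTO employees (first_name, salary, hire_date)
VALUES ('Caleb', 55000.00, DATE '2024-01-15');
```

Note how VARCHAR2(50) is sized to what the column needs rather than something huge like VARCHAR2(4000).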
In the upcoming videos we are going to discuss the available data types in more detail.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Support me! http://www.patreon.com/calebcurry
Subscribe to my newsletter: http://bit.ly/JoinCCNewsletter
Donate!: http://bit.ly/DonateCTVM2.
~~~~~~~~~~~~~~~Additional Links~~~~~~~~~~~~~~~
More content: http://CalebCurry.com
Facebook: http://www.facebook.com/CalebTheVideoMaker
Google+: https://plus.google.com/+CalebTheVideoMaker2
Twitter: http://twitter.com/calebCurry
Amazing Web Hosting - http://bit.ly/ccbluehost (The best web hosting for a cheap price!)

Create Report in Seconds by Fetching Data from SQL Server using Excel VBA
If your manager needs the report frequently, you end up doing the same task again and again.
Frankly speaking, I faced this kind of situation in my previous company, and I prepared the report using VBA and handed the file over to my manager.
Now, whenever he clicks the button, he gets a report of the LIVE DATA from SQL in Excel.
You can read our blog to go through instructions as well as download working and code files, Click here:
http://yodalearning.com/tutorials/export-data-from-sql-to-excel-spreadsheet-using-vba/
You can enroll in our Excel VBA course: http://courses.yodalearning.com/p/excel-vba-tutorials
CHECK SOME OF THE FREE COURSES WE OFFER
http://courses.yodalearning.com/p/free-office-2016-tips
Keep yourself updated. Follow us now!
http://www.facebook.com/yodalearning
http://www.twitter.com/yodalearning

***HR Analytics with Excel***
Business Scenario: You want to control the content/format of your datasheet so that users can input data faster and more accurately
Excel Function: Data Validation, Auto-correction
Time for Formula:
Data validation 3:20,
Auto-correction 5:00
-~-~~-~~~-~~-~-
Please watch: "How to create a Merit Matrix for Salary Increase with Goal Seek function"
https://www.youtube.com/watch?v=6ZRixDI8Zws
-~-~~-~~~-~~-~-

A Junk Dimension is a dimension table consisting of attributes that do not belong in the fact table or in any of the existing dimension tables. The nature of these attributes is usually text or various flags, e.g. non-generic comments or just simple yes/no or true/false indicators.
They have low cardinality and usually don't come under SCD (Slowly Changing Dimension) handling.
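For illustration only (the table and column names here are hypothetical), a junk dimension collapses several low-cardinality flags into one small table that the fact table references through a single surrogate key:

```sql
CREATE TABLE DimOrderJunk (
    OrderJunkKey  INT PRIMARY KEY,  -- single surrogate key for the fact table
    IsGift        CHAR(1),          -- 'Y' / 'N' flag
    IsExpedited   CHAR(1),          -- 'Y' / 'N' flag
    PaymentNote   VARCHAR(20)       -- e.g. 'PREPAID', 'ON ACCOUNT'
);
-- The fact table then stores one OrderJunkKey column instead of
-- carrying three miscellaneous flag/text columns itself.
```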
If you require any clarifications for this video, Please drop me a comment and I will try to answer asap.

Welcome to an SQLite mini-series! SQLite, as the name suggests, is a lite version of an SQL database. SQLite3 comes as a part of the Python 3 standard library.
Databases typically offer a superior method of high-volume data input and output compared to a typical file such as a text file. SQLite is a "light" version that works based on SQL syntax. SQL is a programming language in itself, but is a very popular database language. Many websites use MySQL, for example.
SQLite truly shines because it is extremely lightweight. Setting up an SQLite database is nearly instant: there is no server to set up, no users to define, and no permissions to concern yourself with. For this reason, it is often used as a development and prototyping database, but it can be, and is, used in production. The main issue with SQLite is that it winds up being much like any other flat file, so high-volume input/output, especially with simultaneous queries, can be problematic and slow. You may then ask, what really is the difference between a typical file and SQLite? First, SQLite will let you structure your data as a database, which can easily be queried, so you get that functionality both when adding new content and when calling upon it later. Each table would likely need its own file if you were doing plain files, whereas SQLite is all in one. SQLite is also going to be buffering your data. A flat file requires a full load before you can start querying the full dataset; SQLite files don't work that way. Finally, edits do not require the entire file to be re-saved; only that part of the file is rewritten. This improves performance significantly. Alright, let's dive into some SQLite.
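A tiny sketch of those points using Python's built-in sqlite3 module (the stock table and its rows are made up for the example): no server setup, structured queries, and no full-file load to answer a question.

```python
import sqlite3

# An in-memory database: no server, no users, no permissions to set up.
conn = sqlite3.connect(":memory:")
c = conn.cursor()

# Structure the data as a table instead of a flat file.
c.execute("CREATE TABLE stock (symbol TEXT, price REAL)")
c.executemany("INSERT INTO stock VALUES (?, ?)",
              [("ABC", 10.5), ("XYZ", 42.0), ("ABC", 11.0)])
conn.commit()

# Query just the slice you need -- no full-file load required.
c.execute("SELECT AVG(price) FROM stock WHERE symbol = ?", ("ABC",))
print(c.fetchone()[0])  # 10.75
conn.close()
```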
https://pythonprogramming.net/sql-database-python-part-1-inserting-database/
Playlist: https://www.youtube.com/playlist?list=PLQVvvaa0QuDezJh0sC5CqXLKZTSKU1YNo
https://pythonprogramming.net
https://twitter.com/sentdex
https://www.facebook.com/pythonprogramming.net/
https://plus.google.com/+sentdex

Download File: http://people.highline.edu/mgirvin/excelisfun.htm
In Power Query, see how to deal with the Refresh Error Message:
We couldn’t refresh the connection. Here is the error message we got:
[DataSource.NotFound] File or Folder: We couldn’t find the folder.

This video covers the key new data transformations available in Oracle Big Data Discovery version 1.1. This video is intended for data analysts or anyone wanting to learn how to perform the key new data transformations available in Big Data Discovery version 1.1.

Download file from “Highline BI 348 Class” section: https://people.highline.edu/mgirvin/excelisfun.htm
Learn about PowerPivot: Import Big Data, Build Data Model, Create Reports:
1) (00:04) Info about files for project
2) (00:20) Intro to Video and look at end result reports
3) (01:56) Overview of steps in building a Data Model in PowerPivot
4) (03:03) Look at source data files including text files with 5 million rows of records
5) (04:46) From Text Files, use Power Query to import 5 million rows of transactional sales data into a Fact Table in the Data Model in PowerPivot
6) (09:48) From Excel file, use Power Query to import Product Retail and Standard Cost Dimension Table into the Data Model in PowerPivot
7) (10:52) Build Calendar Dimension Table in Excel and then use Power Query to import into the Data Model in PowerPivot
8) (13:44) From the web, get ISO 3166-1 Country Code Data to build a Country Code Dimension Table and then use Power Query to import it into the Data Model in PowerPivot
9) (17:50) Build One-to-Many Relationships between Fact Table and Dimension Tables
10) (20:16) Build DAX Calculated Column for Net Revenue for each transaction. See the DAX functions: RELATED (Like VLOOKUP in Excel) and ROUND.
11) (22:30) Build DAX Measure (Calculated Field) for Total Net Revenue (Overall Total Net Revenue for entire Fact table). See the DAX function: SUM.
12) (24:16) Build PivotTable to see that the relationships are working and that we can pull fields from Dimension Tables and Fact Tables.
13) (24:16) Build PivotTable Report to show Net Revenue for each Country.
14) (25:33) “Trouble Shooting” Part of Example: Tracking Down Error between web site data and company data, including finding error and updating Country Code Excel Table and refreshing the linked table in the Data Model so that the PivotTable report updates and has correct results.
15) (30:35) Hide Fields from Field Lists using “Hide From Client Tools”
16) (31:59) Build PivotTable to see that with a Data Model PivotTable you can NOT group Dates
17) (33:20) Create DAX Calculated Column for Month Number and Month Name. See the DAX functions: MONTH and FORMAT (Like TEXT in Excel).
18) (35:20) Build PivotTable to see that Month Name does NOT sort correctly in a Data Model PivotTable.
19) (35:52) Learn how to Sort Month Name column by Month Number so that Month Name sorts correctly in a data Model PivotTable
20) (36:36) Build relationship between Calendar Table and Fact Table
21) (36:36) Build PivotTable Report to show Net Revenue by Month
22) (37:33) Create DAX Calculated Column for Year. See the DAX function: YEAR.
23) (38:02) Build PivotTable Report to show Net Revenue by Month & Year
24) (39:10) Build DAX Calculated Column for COGS for each transaction. See the DAX functions: RELATED (Like VLOOKUP in Excel) and ROUND.
25) (40:55) Build DAX Measure (Calculated Field) for Total COGS (Overall Total COGS for entire Fact table). See the DAX function: SUM.
26) (41:29) Build DAX Measure (Calculated Field) for Gross Profit using Measures (Calculated Fields) in our DAX formula.
27) (42:21) Build PivotTable Report to show Net Revenue, COGS and Gross Profit for each Year and Month.
28) (44:04) Build PivotTable Report to show Percentage Change for Net Revenue over Same Period Last Year.
29) (45:00) Build DAX Measure (Calculated Field) for Percentage Change over Same Period Last Year using the DAX functions: CALCULATE, SAMEPERIODLASTYEAR and IF.
30) (52:22) Build PivotTable Report to show Percentage of Grand Total for Each Product. Concept behind the formula.
31) (54:11) Build DAX Measure (Calculated Field) for Percentage of Grand Total using the DAX functions: CALCULATE and ALL.
32) (56:45) Refresh Reports when source data changes. In our example we bring 7 million rows into the Excel PowerPivot Data Model.
33) (57:48) Update Calendar table
34) (59:34) Summary and Conclusion.
Download Excel File Note: After clicking on the link, use Ctrl + F (Find) and search for “Highline BI 348 Class” or for the file name as seen at the beginning of the video.
More about CALCULATE function: https://www.youtube.com/watch?v=kMMohkVk8Ds
PowerPivot Playlist: https://www.youtube.com/playlist?list=PLrRPvpgDmw0nGCx21PRFbsJpUIH06LKs-

Learn more about querying in the official documentation: https://goo.gl/iLDAvS
Welcome to the third video in the Firebase Database for SQL Developers series!
Querying may be less powerful in NoSQL databases than in SQL databases, but there's still a lot you can do with the Firebase Database.
Watch more videos from this series: https://goo.gl/ZDcO0a
Subscribe to the Firebase Channel: https://goo.gl/9giPHG

This is a tutorial which goes over how to create a PHP search that filters results from a database table. Sorry for the mistakes made in this video; the video following this one goes over how to take this and make it instant with jQuery.
Tutor Facebook: http://www.facebook.com/JoeTheTutor
Dribbble: www.dribbble.com/sleekode
www.helpingdevelop.com

SQL CREATE VIEW Statement. In SQL, a view is a virtual table based on the result-set of an SQL statement. A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database.
Why do we use views in SQL?
SQL - Using Views. A view is nothing more than a SQL statement that is stored in the database with an associated name. ... A view can contain all rows of a table or select rows from a table. A view can be created from one or many tables which depends on the written SQL query to create a view.
What do you mean by view in SQL?
A database view is a searchable object in a database that is defined by a query. Though a view doesn't store data, some refer to views as “virtual tables,” and you can query a view like you can a table. A view can combine data from two or more tables using joins, and can also contain just a subset of information.
What are views used for?
Views can be used as security mechanisms by letting users access data through the view, without granting the users permissions to directly access the underlying base tables of the view. Views can be used to provide a backward compatible interface to emulate a table that used to exist but whose schema has changed.
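A minimal sketch of those ideas in SQL. The Customers and Orders tables, the reporting_user account, and the column names are hypothetical, invented for the example:

```sql
-- A view that joins two tables and exposes only a subset of columns.
CREATE VIEW CustomerOrders AS
SELECT c.CustomerName, o.OrderDate, o.Amount
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID;

-- Query the view exactly as you would a real table:
SELECT CustomerName, OrderDate
FROM CustomerOrders
WHERE Amount > 100;

-- Security mechanism: grant access to the view without granting
-- permissions on the underlying base tables.
GRANT SELECT ON CustomerOrders TO reporting_user;
```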

Check out http://www.pgconf.us/2015/event/97/ for the full talk details.
In this talk I'm going to touch on different techniques and tools for managing changes inside PostgreSQL databases, including schema deployments, data changes and usage of stored procedures as a versioned API layer. Based on my experience of no-downtime deployments at Zalando SE, one of the biggest European fashion retailers, I'm going to highlight different approaches to database deployments and introduce some open-source tools used and developed by the Database Team at Zalando.
About the Speaker
Alexey started with PostgreSQL in 2003 as a C programmer. He learned how to debug the backend code before learning SQL, but caught up on the latter pretty quickly. He is now using both of those skills and years of experience as a database consultant to help his employer manage a growing number of PostgreSQL database clusters and deal with hundreds of database changes during weekly deployments. A native of Simferopol, Ukraine, he lives in Berlin and works as a Database Engineer at Zalando SE. Over the years, he has contributed code and documentation to the PostgreSQL project.

This video is part of LearnItFirst's SQL Server 2012: A Comprehensive Introduction course. More information on this video and course is available here:
http://www.learnitfirst.com/Course170
In this video, we walk through the basics of the MDX query language. It is a very logical language; however, its syntax is somewhat large. If you enjoy writing Transact-SQL, you will really enjoy the MDX language. The AdventureWorks2012 multidimensional models need to be installed on your SSAS Multidimensional mode instance from the CodePlex web site.
Highlights from this video:
- The basics of an MDX query
- What is the basic format of the MDX query language?
- Is it necessary to have a WHERE clause in an MDX query?
- How to signal the end of a statement in the MDX query language
- Using the Internet Order Count
and much more...
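To give a feel for the basic format discussed above, here is a minimal MDX sketch. The cube and member names are assumed from the AdventureWorks sample database mentioned in the description:

```sql
-- A basic MDX query: measures on COLUMNS, an attribute hierarchy on ROWS,
-- an optional WHERE clause as the slicer; a semicolon signals the end
-- of the statement.
SELECT
    { [Measures].[Internet Order Count] } ON COLUMNS,
    { [Date].[Calendar Year].MEMBERS }    ON ROWS
FROM [Adventure Works]
WHERE ( [Customer].[Country].&[United States] );
```

The WHERE clause is not required; without it, the query simply returns the measure across all countries.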

For more videos on technology, visit http://www.Techytube.com
By [email protected]
SQL Server is a powerful database platform, which means it can also be complex to understand and work with. However, the major users of the data in the database are still non-technical business users. A key problem most business users face when it comes to working with SQL Server is the dependency on an IT professional to query and return the data in a CSV file or Excel sheet. The ability of MS Excel to connect to SQL Server via ODBC drivers allows users to work with SQL tables within the familiar Excel user interface. This approach allows users to be productive with SQL Server without having to know Transact-SQL. The feature set provided with MS Excel and newer versions of SQL Server allows business users to do much more than just query the database. This includes the ability to query and view data in pivot format for large data sets using Power Pivot, to perform ad hoc calculations on underlying data and create models specific to their business case using the powerful DAX language, and to embed reports in SharePoint services using Power Pivot.
Working with Excel is one of the most powerful ways that end users can work with MS SQL Server to deliver results faster and improve productivity.

Here is a webinar on Excel Advanced LOOK UP and its application in Data Analytics.
Who can benefit?
• One who has basic knowledge of MS-EXCEL
• Analytics professionals who want to handle large volume of data using V-LOOK-UP
• Any analytics aspirant or professional who works with large amount of data and wants to extract data from the same sheet, or from different sheet of the same work book or from different workbooks.
Topics to be covered in this webinar:
• Quick overview of V-LOOK-UP
• 2 Column LOOK UP
• Multiple sheets LOOK-UP
• Multiple extractions using V-LOOK-UP
STAY UPDATED:
To get alerts about our future webinars and analytics events, please 'Like' our Facebook page www.facebook.com/ivyproschool or drop an email to [email protected] You can view our past webinars by subscribing to our YouTube channel: www.youtube.com/ivyproschool.

In this series of videos, Tahir Hussain Babar examines using SAP Lumira Desktop and SAP Lumira Cloud. In this video, we walk through exploring your data in the Cloud with SAP Lumira.
Uploading files:
Bob wants to store objects in SAP Lumira Cloud and shows how to do so in 2 steps.
1. First Bob goes to cloud.saplumira.com and logs into his free personal SAP Lumira Cloud account.
2. Bob clicks on upload, then browse, and then chooses the file that he wants to upload before clicking on open and then ok.
A user can upload Storyboards, Datasets, SAP Lumira files, CSVs, Local Spreadsheets, Powerpoints, SAP Crystal Reports or Design Studio files to SAP Lumira Cloud and can filter their objects using the narrow by feature on the toolbar.
Creating Datasets:
Bob wants to create a dataset in SAP Lumira and shows how to in 5 steps.
1. Bob clicks on the create dataset button and then selects the browse button.
2. After selecting a CSV file (can also use Excel files) Bob clicks ok.
3. Next, Bob clicks on the data acquisition options and chooses his sheet and whether he wants to set the first row as the column headers.
4. Then, Bob clicks on the wrench on the left to enrich his data. A data set must have at least one measure and one attribute. Bob has three measures and no attributes so he clicks on one of his measures and chooses to convert it to an attribute. Users can also change their measures from a sum to a min, max, or count and can change the name of the measures and attributes.
5. Bob then names his dataset and clicks on the acquire button.
Exploring Data:
Bob details how to explore datasets and create visualizations in SAP Lumira Cloud.
Bob clicks on the wheel to the right of a data set and selects explore.
Bob receives a prompt that he has too many data points to visualize. To resolve this, Bob drags his current attribute in the X Axis & Values box out and clicks on the plus sign to replace it with a new attribute.
Bob shows how users can create visualizations by dragging and dropping measures and attributes from the left onto the canvas in the middle and then choosing different visualizations options from the list on the right. For the geographic chart a user must use latitude and longitude.
Bob then shows how users can modify their visualizations by using the analyze tab. A user can add a trellis and change the type, data points, mode, color, size, animation and/or shading of different attributes and measures in their visualization.
Bob clicks on the save button at the top right and names his visualization to save it.
Clicking on the share button creates a unique URL for the visualization that users can email to their colleagues.

In the Expert Analytics interface of SAP Predictive Analytics, you can change the data type of columns in the Prepare room or in the Predict room. In this tutorial, we will use both methods to convert a date column that is formatted as a string into a proper date format for analysis.

We've been hands-on with Red Dead Redemption 2! In this special episode we tell you everything you played and saw in our extensive time with the game.
Subscribe to GR+ here: http://goo.gl/cnjsn1
Red Dead Redemption 2 o'clock is the world's most in-depth - and best connected - Red Dead 2 show.

** Flat 20% Off (Use Code: YOUTUBE20) Hadoop Training: https://www.edureka.co/hadoop **
This Edureka "Hadoop tutorial For Beginners" ( Hadoop Blog series: https://goo.gl/LFesy8 ) will help you understand the problems with traditional systems while processing Big Data and how Hadoop solves them. This tutorial will give you a comprehensive idea about HDFS and YARN along with their architecture, explained in a very simple manner using examples and a practical demonstration. At the end, you will get to know how to analyze an Olympic data set using Hadoop and gain useful insights.
Below are the topics covered in this tutorial:
1. Big Data Growth Drivers
2. What is Big Data?
3. Hadoop Introduction
4. Hadoop Master/Slave Architecture
5. Hadoop Core Components
6. HDFS Data Blocks
7. HDFS Read/Write Mechanism
8. What is MapReduce
9. MapReduce Program
10. MapReduce Job Workflow
11. Hadoop Ecosystem
12. Hadoop Use Case: Analyzing Olympic Dataset
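As a taste of item 9 above (a MapReduce program), the classic word count can be sketched as a mapper/reducer pair. This is plain Python in the style of Hadoop Streaming, not Edureka's demo code; the input lines are made up:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    return [(word.lower(), 1) for word in line.split()]

def reducer(pairs):
    # Reduce phase: sum the counts for each word across all mapper output.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

lines = ["big data big insights", "big wins"]
pairs = [kv for line in lines for kv in mapper(line)]
print(reducer(pairs))  # {'big': 3, 'data': 1, 'insights': 1, 'wins': 1}
```

In a real Hadoop job, the framework handles the shuffle between the two phases and runs many mappers and reducers in parallel across the cluster.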
Subscribe to our channel to get video updates. Hit the subscribe button above.
Check our complete Hadoop playlist here: https://goo.gl/ExJdZs
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
#Hadoop #Hadooptutorial #HadoopTutorialForBeginners #HadoopArchitecture #LearnHadoop #HadoopTraining #HadoopCertification
How it Works?
1. This is a 5 Week Instructor led Online Course, 40 hours of assignment and 30 hours of project work
2. We have a 24x7 One-on-One LIVE Technical Support to help you with any problems you might face or any clarifications you may require during the course.
3. At the end of the training you will have to undergo a 2-hour LIVE Practical Exam based on which we will provide you a Grade and a Verifiable Certificate!
- - - - - - - - - - - - - -
About the Course
Edureka’s Big Data and Hadoop online training is designed to help you become a top Hadoop developer. During this course, our expert Hadoop instructors will help you:
1. Master the concepts of HDFS and MapReduce framework
2. Understand Hadoop 2.x Architecture
3. Setup Hadoop Cluster and write Complex MapReduce programs
4. Learn data loading techniques using Sqoop and Flume
5. Perform data analytics using Pig, Hive and YARN
6. Implement HBase and MapReduce integration
7. Implement Advanced Usage and Indexing
8. Schedule jobs using Oozie
9. Implement best practices for Hadoop development
10. Work on a real life Project on Big Data Analytics
11. Understand Spark and its Ecosystem
12. Learn how to work in RDD in Spark
- - - - - - - - - - - - - -
Who should go for this course?
If you belong to any of the following groups, knowledge of Big Data and Hadoop is crucial for you if you want to progress in your career:
1. Analytics professionals
2. BI /ETL/DW professionals
3. Project managers
4. Testing professionals
5. Mainframe professionals
6. Software developers and architects
7. Recent graduates passionate about building a successful career in Big Data
- - - - - - - - - - - - - -
Why Learn Hadoop?
Big Data! A Worldwide Problem?
According to Wikipedia, "Big data is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications." In simpler terms, Big Data is a term given to large volumes of data that organizations store and process. However, it is becoming very difficult for companies to store, retrieve and process the ever-increasing data. If any company gets a handle on managing its data well, nothing can stop it from becoming the next BIG success!
The problem lies in the use of traditional systems to store enormous data. Though these systems were a success a few years ago, with the increasing amount and complexity of data they are fast becoming obsolete. The good news is that Hadoop has become an integral part of storing, handling, evaluating and retrieving hundreds of terabytes, and even petabytes, of data.
- - - - - - - - - - - - - -
Opportunities for Hadoopers!
Opportunities for Hadoopers are infinite - from a Hadoop Developer, to a Hadoop Tester or a Hadoop Architect, and so on. If cracking and managing BIG Data is your passion in life, then think no more and Join Edureka's Hadoop Online course and carve a niche for yourself!
Please write back to us at [email protected] or call us at +91 88808 62004 for more information.
Customer Review:
Michael Harkins, System Architect, Hortonworks says: “The courses are top rate. The best part is live instruction, with playback. But my favorite feature is viewing a previous class. Also, they are always there to answer questions, and prompt when you open an issue if you are having any trouble. Added bonus ~ you get lifetime access to the course you took!!! ~ This is the killer education app... I've taken two courses, and I'm taking two more.”

[ You can find a visual transcript of this video on my blog: https://www.timroes.de/2016/10/23/kibana5-introduction/ ]
In this video we'll cover all the basics you need to get started with Kibana 5 and kickstart into visualizing and analyzing your data.
More Kibana tutorials can be found on https://www.timroes.de
All Kibana tutorials are available in the Kibana Tutorials playlist: https://www.youtube.com/playlist?list=PLWOeloPQaz1C91x7ioqFO8SnaV7xNFvjo
The mentioned post about the detailed explanation of queries can be found at: https://www.timroes.de/2016/05/29/elasticsearch-kibana-queries-in-depth-tutorial/
Some tutorials about timelion:
- Short introduction: https://www.youtube.com/watch?v=-sgZdW5k7eQ
- More detailed talk from DevoxxFR: https://www.youtube.com/watch?v=L5LvP_Cj0A0

SSIS Video Scenario:
How to Export Data to Multiple Excel Sheets from Single SQL Server Table in SSIS Package
which divided the data into multiple sheets depending upon the distinct values in one of the columns.
In this post, we are going to split the rows without using any column value. Think about a situation where you have 3.5 million records and you would like to write 500,000 to each Excel sheet.
The package should be able to take the number of rows per sheet as a variable value so we can change it anytime we like.
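The arithmetic behind that split is simple; the SSIS Script Task does the equivalent in its own code. This hypothetical Python sketch just shows how the sheet count and row ranges fall out of the variable:

```python
import math

def sheet_ranges(total_rows, rows_per_sheet):
    # Return (sheet_name, first_row, last_row) for each Excel sheet.
    sheets = math.ceil(total_rows / rows_per_sheet)
    return [(f"Sheet{i + 1}",
             i * rows_per_sheet + 1,
             min((i + 1) * rows_per_sheet, total_rows))
            for i in range(sheets)]

# 3.5 million rows at 500,000 per sheet -> 7 sheets.
for name, first, last in sheet_ranges(3_500_000, 500_000):
    print(f"{name}: rows {first}-{last}")
```

Changing the rows-per-sheet variable is all it takes to get a different number of sheets.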
link for the script used in the video
http://www.techbrothersit.com/2016/03/how-to-split-large-table-data-into.html
Check out our Dynamic Excel Source and Destination videos in SSIS Video Tutorial playlist on below topics
How to Load Data from Excel Files when Number of Columns can decrease or order is changed in Excel Sheet
How to Load Only Matching Column Data to SQL Server Table from Multiple Excel Files (Single Sheet per file) Dynamically in SSIS Package
How to Load Excel File Names with Sheet Names, Row Count, Last Modified Date, File Size in SQL Server Table
How to Load Multiple Excel Files with Multiple Sheets to Single SQL Server Table by using SSIS Package
How to Load Matching Sheets from Excel to Table and Log Not Matching Sheets Information in SQL Server Table
How to create Table for each sheet in Excel Files and load data to it dynamically in SSIS Package
How to Create Table per Excel File and Load all Sheets Data Dynamically in SSIS Package by using Script Task
How to create CSV file per Excel File and Load All Sheets from Excel File to it in SSIS Package
How to Create CSV File for Each Excel Sheet from Excel Files in SSIS Package
How to Load Excel File Name and Sheet Name with Data to SQL Server in SSIS Package
How to Import data from Multiple Excel Sheets with a pattern of sheet names from Multiple Excel File in SSIS Package
How to import Data from Excel Files for specific Sheet Name to SQL Server Table in SSIS Package
Load Data To Tables according to Excel Sheet Names from Excel Files dynamically in SSIS Package
How to Load Excel Files with Single/ Multiple Sheets to SQL Server Tables according to Excel File Name Dynamically
How to Read Excel Sheet Data after Skipping Rows in SSIS Package by using Script Task
How to read data from Excel Sheet and Load to Multiple Tables by using Script Task in SSIS Package
How to create Excel File Dynamically from SQL server Table/View by using Script Task in SSIS Package
How to create Excel File Dynamically for Stored Procedure Results in SSIS Package by using Script Task
How to Export SQL Server Tables from Database to Excel File Dynamically in SSIS Package by using Script Task
How to Convert CSV/Text Files to Excel Files in SSIS Package by using Script Task
How to Load All CSV Files to Excel Sheets (Sheet Per CSV) in Single Excel File in SSIS Package
How to Load All CSV Files to Single Excel Sheet with File Names in an Excel File Dynamically in SSIS Package
How to Create Sample Excel file with Sheet from each table with Top 1000 Rows per sheet in SSIS Package
How to Export Data to Multiple Excel Sheets from Single SQL Server Table in SSIS Package
How to split large table data into multiple Excel Sheets on Single Excel File by using SSIS Package
How to Export All tables of a database to Excel Files with Date-time in SSIS Package
How to read Cell Value from Excel by using Script Task in SSIS Package

In this video snippet, I'm going to show you how to install the free R statistics package and calculate some basic statistics.
This clip is referenced in my blog post entitled, How To Quickly Install The Free Statistics Package R.
This clip was hijacked from my OraPub Online Institute seminar,
Using Skewed Performance Data To Your Advantage.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm#skew
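The clip computes basic statistics in R; as a rough stand-in, the same kind of summary can be sketched with Python's standard library `statistics` module (the sample values below are made up for illustration):

```python
import statistics

# A small skewed sample, similar in spirit to the skewed performance
# data the seminar works with (values are illustrative only).
samples = [12.1, 9.8, 11.4, 10.2, 35.0]

mean = statistics.mean(samples)      # pulled upward by the outlier
median = statistics.median(samples)  # robust to the outlier
stdev = statistics.stdev(samples)    # sample standard deviation

print(f"mean={mean:.2f} median={median:.2f} stdev={stdev:.2f}")
```

Note how the mean (15.7) sits well above the median (11.4) here, which is exactly the kind of gap skewed data produces.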

QMF version 11 provides a new global variable that can save your session settings when you exit a QMF session and initialize them when you re-enter a QMF session. DSQEC_USERGLV_SAV is used to set up this behavior. Also, DSQEC_SESSGLV_SAV can be used to save values entered on QMF command prompt panels from session to session.
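As a hedged sketch of how such a variable is set, a QMF `SET GLOBAL` command along these lines would enable the behavior (the value `1` is an assumption; consult the QMF 11 reference for the exact accepted values, and note that QMF command syntax uses an opening parenthesis with no closing one):

```
-- Save user global variable values when the QMF session ends,
-- and restore them when a new session starts.
SET GLOBAL (DSQEC_USERGLV_SAV=1
```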

As the world’s most popular spreadsheet, Excel has continued to evolve and grow with its users. We have added new features, adapted them based on your feedback and continued to make the functionality in Excel better, faster & more efficient. Join us to learn more about the innovations coming in 2018: Machine Learning-Powered Insights, New Data Types, New Visualizations, Custom Functions and More! From business analysts to line of business specialists, if you use Excel this is a session you don’t want to miss.

Learn about Cal Poly's new MS in Business Analytics launching fall 2016. Leaders from the Orfalea College of Business, Google and Oracle discuss trends in data analysis and why this degree is in demand by major employers. More info: http://www.cob.calpoly.edu/gradbusiness/degree-programs/ms-business-analytics/

This month's Patch Tuesday follows the pattern of past Januarys, issuing a low number of patches and giving IT a reprieve to start the year.
We have just 4 new bulletins this month covering 6 CVEs. None of the bulletins are rated critical and only one bulletin was known to have active exploits prior to today's release. After you deploy the few needed patches this month, consider taking the extra time to get caught up on what you may have missed over the holidays.
First on your list of priorities should be MS14-002, which patches the elevation of privilege vulnerability discovered in late November with active exploits against Windows XP and Windows Server 2003 users. This bulletin addresses one CVE and has only been seen used in conjunction with a vulnerability in Adobe Reader and Acrobat that was patched in May as part of Adobe Security Bulletin APSB13-15. This was typically exploited by an attacker sending your user a spear phishing email with a bad Adobe link. Once clicked, the attacker could gain administrator access to the machine. Keeping your Adobe applications fully patched will mitigate this vulnerability, but it's important to apply MS14-002 as defense in depth. MS14-002 replaces Security Advisory 2914486. This bulletin is only given a severity of "Important" for a variety of reasons, including the fact that Microsoft will end support for XP in April. If you're still using XP, this will be an important patch to deploy, and hopefully you are working on your platform migration plan.
Second on your list of priorities should be MS14-001. It's a vulnerability in Office that could allow remote code execution, and it covers 3 CVEs. While there are no known active attacks, the vulnerability has an exploitability index of 1, meaning exploit code is likely within the next 30 days. This bulletin is applicable to all currently supported versions of Microsoft Word on Windows. Microsoft Office for Mac is unaffected.
MS14-003 addresses a vulnerability in Windows kernel-mode drivers that could lead to an elevation of privilege. It covers one CVE for Windows 7 and Server 2008 R2 and has no known active attacks. Microsoft gives it a deployment priority of 2, and it should be third on your list this month.
Finally, MS14-004 addresses a vulnerability in Microsoft Dynamics that could allow a denial of service. It also covers one CVE, and there are no known active attacks. The exploitability index is 3. This is a server-side vulnerability, and note that the updated service will not restart automatically, so if this applies to you, it is best practice to manually restart the impacted service after applying the update.
In addition to the 4 patches, Microsoft has also re-released MS13-081. There were stability issues in some cases with third-party USB drivers, so we recommend reinstalling the update to ensure you have the latest version of that patch. They have also released Security Advisory 2755801 for updates to vulnerabilities in Adobe Flash Player for IE.

Generating Searchable Public-Key Ciphertexts with Hidden Structures for Fast Keyword Search
To get this project in ONLINE or through TRAINING Sessions, Contact: JP INFOTECH, Old No.31, New No.86, 1st Floor, 1st Avenue, Ashok Pillar, Chennai -83.
Landmark: Next to Kotak Mahendra Bank.
Pondicherry Office: JP INFOTECH, #45, Kamaraj Salai, Thattanchavady, Puducherry -9.
Landmark: Next to VVP Nagar Arch.
Mobile: (0) 9952649690, Email: [email protected], web: www.jpinfotech.org
Blog: www.jpinfotech.blogspot.com
Existing semantically secure public-key searchable encryption schemes take search time linear in the total number of ciphertexts. This makes retrieval from large-scale databases prohibitive. To alleviate this problem, this paper proposes Searchable Public-Key Ciphertexts with Hidden Structures (SPCHS) for keyword search that is as fast as possible without sacrificing the semantic security of the encrypted keywords. In SPCHS, all keyword-searchable ciphertexts are structured by hidden relations, and with the search trapdoor corresponding to a keyword, the minimum information about the relations is disclosed to a search algorithm as guidance to find all matching ciphertexts efficiently. We construct an SPCHS scheme from scratch in which the ciphertexts have a hidden star-like structure. We prove our scheme semantically secure in the Random Oracle (RO) model. The search complexity of our scheme depends on the actual number of ciphertexts containing the queried keyword, rather than on the total number of ciphertexts. Finally, we present a generic SPCHS construction from anonymous identity-based encryption and collision-free full-identity malleable Identity-Based Key Encapsulation Mechanism (IBKEM) with anonymity. We illustrate two collision-free full-identity malleable IBKEM instances, which are semantically secure and anonymous in the RO and standard models, respectively. The latter instance enables us to construct an SPCHS scheme with semantic security in the standard model.
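The hidden-chain idea behind SPCHS can be sketched with a toy (deliberately non-cryptographic) Python model: ciphertexts for the same keyword form a hidden chain, the trapdoor reveals only the chain head, and search cost is proportional to the number of matches rather than the total number of ciphertexts. All class and method names here are made up for illustration; this is not the paper's actual scheme, and nothing below is actually encrypted.

```python
import hashlib
import os

def h(data: bytes) -> str:
    """Hash used only to generate opaque storage locations."""
    return hashlib.sha256(data).hexdigest()

class ToySPCHS:
    """Toy illustration of star-like hidden structures (no real security)."""

    def __init__(self):
        self.store = {}    # opaque pointer -> (payload, pointer to next match)
        self._heads = {}   # per-keyword chain head, known only to the encryptor

    def encrypt(self, keyword: str, payload: str) -> None:
        ptr = h(os.urandom(16))               # fresh, unlinkable location
        prev_head = self._heads.get(keyword)  # old head becomes "next" link
        self.store[ptr] = (payload, prev_head)
        self._heads[keyword] = ptr            # new ciphertext is the new head

    def trapdoor(self, keyword: str):
        # Discloses only the minimum information: the chain head.
        return self._heads.get(keyword)

    def search(self, trapdoor):
        # Walks one chain: cost is O(number of matches), not O(all ciphertexts).
        results, ptr = [], trapdoor
        while ptr is not None:
            payload, ptr = self.store[ptr]
            results.append(payload)
        return results

s = ToySPCHS()
s.encrypt("urgent", "doc1")
s.encrypt("spam", "doc2")
s.encrypt("urgent", "doc3")
matches = s.search(s.trapdoor("urgent"))
```

In the real scheme the pointers and payloads are ciphertexts and the trapdoor is derived cryptographically, but the structural point is the same: search follows one keyword's chain and never touches unrelated ciphertexts.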

Hello! This is our first Sponge Plugin Tutorial! We are planning on doing plenty more Sponge development tutorials in the coming weeks, so stay tuned!
Sorry the audio for HassanS6000 was too loud in this episode, we'll have it fixed in the ones yet to come.
Negafinity Social Media:
Website: http://www.negafinity.com/index.html
Twitter: https://twitter.com/Negafinity
Facebook: https://www.facebook.com/negafinity
GitHub: https://github.com/NEGAFINITY
Links:
JDK 8 Downloads: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Setting up JAVA_HOME for Windows: https://confluence.atlassian.com/doc/setting-the-java_home-variable-in-windows-8895.html
Setting up JAVA_HOME for Mac OS X: https://www.mkyong.com/java/how-to-set-java_home-environment-variable-on-mac-os-x/
Eclipse Website: http://www.eclipse.org/home/index.php
Gradle Website: https://gradle.org/
Sponge Docs: https://docs.spongepowered.org/
Oracle Documentation on Annotations: https://docs.oracle.com/javase/tutorial/java/annotations/
Sponge Website: https://www.spongepowered.org/
We used the Spongie mascot in our thumbnail under the rights given in the documentation here: https://docs.spongepowered.org/master/en/about/assets.html
Thanks for watching! Don't forget to drop a like, and subscribe. Feel free to ask questions in the comments!

Listen to the full episode here:
http://softwareengineeringdaily.com/2017/05/30/ios-and-podcasts-with-rob-walch/
Apple controls the iOS ecosystem. As an accident of history, Apple also controls the podcasting ecosystem. Unlike most ecosystems within Apple’s dominion, podcasts remain open. A podcaster merely has to record an mp3, distribute it via RSS feed, and submit that RSS feed to the iTunes podcast portal.
Podcasting has thrived in recent years, but very few technology companies have managed to take advantage of that growth. Libsyn is the most popular place to host a podcast. Libsyn is a combination of a CDN, a hosting service, analytics, and a place for a podcaster to get an RSS feed. There have been many clones of Libsyn over the years, but the company remains the industry standard.
For people who are confused: iTunes does not host any audio files. It is just an index of feeds. A podcaster needs to host audio files somewhere in order to give iTunes access.
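The pipeline described above (host the mp3 somewhere, describe it in an RSS feed, submit the feed URL to the directory) can be sketched with Python's standard library; the feed shape below is a minimal RSS 2.0 item with an enclosure, and all titles and URLs are placeholders:

```python
import xml.etree.ElementTree as ET

def build_feed(title, episodes):
    """Build a minimal RSS 2.0 feed string from a list of episode dicts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        # The <enclosure> element is what points podcast clients at the
        # hosted mp3 -- the directory itself never stores the audio.
        ET.SubElement(item, "enclosure", url=ep["url"],
                      type="audio/mpeg", length=str(ep["bytes"]))
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("My Podcast", [
    {"title": "Episode 1", "url": "https://example.com/ep1.mp3", "bytes": 12345},
])
```

A real podcast feed also needs channel metadata (description, language, artwork) before a directory will accept it, but the enclosure-per-item structure is the core of the format.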
Today’s guest Rob Walch joins the show to talk about podcasts–including his podcast Today in iOS. I had a great time meeting Rob at the Microsoft Build conference. Special thanks to Bharat Bhat for organizing the podcast booths at Microsoft Build.
Transcript
Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily. Please click here to view this show’s transcript.
Sponsors
Dice.com will help you accelerate your tech career. Whether you’re actively looking for a job or need insights to grow in your role, Dice has the resources you need. Dice’s mobile app is the fastest and easiest way to get ahead. Search thousands of jobs from top companies. Discover your market value based on your unique skill set. Manage your tech career and download the Dice Careers app on Android or iOS today. And to check out the Dice website and support Software Engineering Daily, go to dice.com/sedaily.
Oracle Dyn provides DNS that is as dynamic and intelligent as your applications. Dyn DNS gets your users to the right cloud service, CDN, or data center, using intelligent response to steer traffic based on business policies, as well as real-time internet conditions, like the security and performance of the network path. Get started with a free 30-day trial for your application by going to dyn.com/sedaily. After the free trial, Dyn’s developer plans start at just $7 a month for world-class DNS. Rethink DNS. Go to dyn.com/sedaily to learn more and get your free trial of Dyn DNS.
Ready to build your own stunning website? Go to Wix-DOT-com and start for free! With Wix, you can choose from hundreds of beautiful, designer-made templates. Simply drag and drop to customize anything and everything. Add your text, images, videos and more. Wix makes it easy to get your stunning website looking exactly the way you want. Plus, your site is mobile optimized, so you’ll look amazing on any device. Whatever you need a website for, Wix has you covered. So, showcase your talents. Start that dev blog, detailing your latest projects. Grow your network with Wix apps made to work seamlessly with your site. Or, simply explore and share new ideas. You decide. Over one-hundred-million people choose Wix to create their website – what are you waiting for? Make yours happen today. It’s easy and free. And when you’re ready to upgrade, use the promo code SEDaily for a special SE Daily listener discount. Terms and conditions apply. For more details, go to Wix.com/wix-lp/SEdaily. Create your stunning website today with Wix.com, that’s W-I-X-DOT-com.

The Tron Roadmap.
Follow us on Twitter , Facebook , Steemit , and join our Telegram channel for the latest blockchain and cryptocurrency news.
With an already existing user base of over 180 million, the opportunities for this blockchain and cryptocurrency seem enormous. Also, it will likely not have to bootstrap its user base, a deviation from the trajectory of most apps and platforms of this nature.
An outsider continues to steal the crypto spotlight.
Investors started telling CoinDesk in late December that Telegram was looking at doing some kind of ICO.
All that on top of promising super fast payments and micropayments using mobile devices, with negligible transaction fees.
With these announcements, fake sites quickly popped up claiming to be the place to buy grams. Confirming that one was fake in a tweet proved to be the closest Durov has come to a public confirmation of the crowdsale.

By mid-month, Bloomberg reported that Telegram might raise its fundraising round even higher.
Early February.
They came up with a lockup scheme that releases tokens after four waiting periods, the longest lasting 18 months.
Late February.
Finally, Telegram has apparently offered investors some kind of refund provision if it fails to deliver the TON platform by the end of October 2019, Business Insider reported.
The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.
The filing names Ton Issuer Inc. and Telegram Group Inc. along with the two individuals, Pavel Durov and Nikolai Durov, as related persons.
Apart from building on the extensive userbase Telegram has amassed, and serving as a medium of exchange with a native cryptocurrency called GRAM, the TON platform also aims to include smart contracts and decentralized services such as TON Storage and TON Proxy.
Leverage and Margin Explained.