
Telecoms, Media & Technology is part of the Knowledge and Networking Division of Informa PLC

This site is operated by a business or businesses owned by Informa PLC and all copyright resides with them. Informa PLC's registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. Number 3099067.

Quest Software | Data Platform Track Sponsor

Profile

Spotlight Cloud, from Quest, is a SaaS solution with no on-premises performance repository, a full browser front end and native apps for your mobile devices. We’ve taken 20 years of experience in providing SQL Server monitoring and performance diagnostics and made it easier than ever for our customers to both resolve and prevent issues. Get up and running in just a few minutes with zero hassle while leveraging the compute power of the cloud to handle longer data history with deeper analytics.

In this session, we’ll look at powerful but simple-to-use data types in R such as data frames. The session will also give attendees a chance to upgrade their data analysis skills by looking at R data transformation with a powerful set of tools that keeps things simple: the tidyverse. Then, we will integrate our R work into Power BI and create beautiful visualizations of our data with R and Power BI. Finally, we will share our work by publishing our Power BI project, with our R code, to the Power BI service, and we will look at refreshing our dataset so that our new dashboard stays up to date.
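The group_by/summarise pipeline at the heart of the tidyverse can be sketched language-neutrally. Here is the same grouped-aggregation idea in plain Python; the session itself uses R, and the sales data and column names below are made up for illustration:

```python
from collections import defaultdict

# A tidyverse-style group_by() + summarise() over a tiny "data frame"
# modeled as a list of dicts; rows and columns are hypothetical.
rows = [
    {"region": "North", "sales": 10},
    {"region": "North", "sales": 30},
    {"region": "South", "sales": 20},
]

grouped = defaultdict(list)          # group_by(region)
for row in rows:
    grouped[row["region"]].append(row["sales"])

# summarise(mean_sales = mean(sales))
summary = {region: sum(v) / len(v) for region, v in grouped.items()}
```

In R, the equivalent would be a `group_by(region) %>% summarise(mean(sales))` pipeline over a data frame.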

A year after moving to DynamoDB, the rumblings began: "Maybe we need a new database." Why would a growing startup switch from a leading NoSQL database to DynamoDB, only to be contemplating a new database a year later? Is such a move even necessary? In this session, we'll cover our DynamoDB journey: why we chose DynamoDB, how we moved to it, and the problems we encountered and how we overcame them. We’ll also look at where DynamoDB is headed and where we are headed. By the end of this session, you'll know why you may want to consider DynamoDB; what challenges you'll likely encounter with scaling, design, local development, backups and failover; and why you may want to look beyond DynamoDB.
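One of the design challenges alluded to above is DynamoDB's single-table key modeling. As a minimal sketch (the entity names and key layout here are hypothetical illustrations, not from the session), related items share a partition key so a single Query retrieves them together:

```python
# Hypothetical single-table design: a customer and its orders share a
# partition key, so one Query fetches the whole aggregate.

def customer_key(customer_id: str) -> dict:
    """The customer profile item."""
    return {"pk": f"CUSTOMER#{customer_id}", "sk": "PROFILE"}

def order_key(customer_id: str, order_id: str) -> dict:
    """Orders live under the customer's partition key."""
    return {"pk": f"CUSTOMER#{customer_id}", "sk": f"ORDER#{order_id}"}

def query_orders(items: list, customer_id: str) -> list:
    """Simulate Query(pk = :pk AND begins_with(sk, 'ORDER#'))."""
    pk = f"CUSTOMER#{customer_id}"
    return [i for i in items if i["pk"] == pk and i["sk"].startswith("ORDER#")]
```

This key-prefix pattern is what makes single-table access efficient, and also what makes later schema changes painful; both sides of that trade-off are part of the journey the session describes.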

PolyBase is Microsoft's newest way of integrating SQL Server with external systems such as Hadoop and Azure Blob Storage. In this session, we will connect SQL Server to an on-premises Hadoop cluster as well as Azure Blob Storage, writing T-SQL queries to retrieve remote data. We will then use DMVs and other resources to show what the PolyBase engine is doing behind the scenes. Finally, we will look at interesting use cases for PolyBase.

Microsoft's official PowerShell module for SQL Server offers faster ways to manage your entire data-loving world. In this session, we’ll show you the best new features in the new SQL Server module and why every data professional will find it useful. Prior to SQL Server 2016, using SQL PowerShell was like installing training wheels on a Ducati. The SQL Tools team changed all that by working with the community to prioritize improvements to SQL PowerShell, and they've started releasing regular updates. The SQL Server team has already delivered new cmdlets to help you manage SQL Agent jobs and SQL error logs and to add and remove logins. A favorite new feature is the ability to query multiple sources (SQL, CSV files, etc.) and combine the results easily. By leveraging .NET DataTables, you can insert those results into SQL Server in a streamlined way. By the end of this session, you'll have a taste of all the new capabilities in SQL PowerShell.

Regular expressions can help you perform incredible tasks with very little effort. Need to create 1,700 logins from an email request? Did developers give you a single script with 300 stored procedures filled with table variables instead of temp tables? Need to move 500 databases to a different drive? Such tasks typically take a long time to code, but regular expressions cut that work from several days to just a few minutes or even from hours to seconds. In this session, you’ll learn how to use regular expressions to significantly decrease the time it takes to do tasks. Stop writing code manually and let regular expressions do it for you.
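As a hedged illustration of the scripted rewriting described above, here is a small Python sketch that generates login-creation statements from pasted e-mail text and swaps a table variable for a temp table. The sample inputs, the generated T-SQL, and the simplified patterns are all hypothetical; a production script would need more careful matching:

```python
import re

# Generate CREATE LOGIN statements from a pasted e-mail request.
# The addresses are made up and the generated T-SQL is schematic.
emails = "ann@corp.com; bob@corp.com\ncara@corp.com"
logins = [
    f"CREATE LOGIN [{m.group(1)}] FROM WINDOWS;"
    for m in re.finditer(r"([\w.]+)@[\w.]+", emails)
]

# Swap a table variable (@work) for a temp table (#work) in a snippet.
# A real rewrite would first collect every declared variable name.
script = "DECLARE @work TABLE (id int);\nINSERT INTO @work VALUES (1);"
fixed = re.sub(r"DECLARE\s+@(\w+)\s+TABLE", r"CREATE TABLE #\1", script)
fixed = re.sub(r"@work\b", "#work", fixed)
```

The same find-and-replace patterns work directly in editors that support regular expressions, which is typically how this technique is applied interactively.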

Operational and performance DBAs must walk a tightrope to deliver the best database performance possible within the constraints of departmental budgets. This means finding the right trade-off between storage performance and cost, an exercise that is getting more challenging as the public cloud offers more and more specialized options for storage. In this session, we will discuss each of the storage options available on Amazon Web Services, trade-offs between these options and real-world situations where each is appropriate.

Do you want to stand up more SQL Servers without investing in more hardware? Then Relational Database Service (RDS) in Amazon Web Services (AWS) may be the right choice for you. In this session, we will create a server using the AWS Management Console, explore how to automate installation through CloudFormation, learn how to administer the service and walk through advanced features. Whether you are familiar with AWS or brand new to it, this presentation will give you a good overview to help you decide if RDS is the right service for you.

Your OLTP database application sustains a heavy, mixed workload with a lot of read and write transactions at the same time reports are delivered from the database to a client application. Performance was fine for a long time, but it is no longer meeting your needs now that it must scale to much higher workloads. What should you do? In this real-world case study, you’ll learn about a series of technologies that provide unprecedented scalability, including data compression, In-Memory OLTP and clustered, partitioned columnstore indexes. We will walk you through a chronology of the application and database architecture, its changes over time and the degree of performance improvement achieved with each new SQL Server feature applied. This session will teach you all about planning and implementing advanced SQL Server performance features and how each one impacts system performance for applications with hundreds or thousands of concurrent users.

Could your seemingly normal SQL Server Integration Services (SSIS) package be hiding a disaster, waiting to detonate at the most inconvenient time? Integration Services is an incredibly flexible product, and that flexibility can lead to good—and occasionally bad—design patterns. Small and seemingly trivial design decisions can lead to big issues down the road, including leaky data flows, data quality issues, paralyzing performance problems and other explosive behaviors. In this session, we will explore some of the most common SSIS design patterns that are potentially more harmful than they first appear. From package configuration to control flow constraints, and data flow transformations to logging, we'll demonstrate what can go wrong and show some alternative designs that can prevent these types of problems from developing into bigger issues.

When creating a new database, how do you ensure it will shine when things get tough? How many times have you seen a database created with all the defaults grow out of control in size, data volume or activity? If you want to ensure your database and the objects it contains can scale and perform well as they grow, you need to do the proper homework before ever creating the database. In this session, we will walk through the various factors that affect performance and scalability under real-life conditions and help you understand how to properly configure databases up front to avoid issues down the road. Scalability is all about having a proper foundation to build on.

In this session, Steph Locke, who has spent the past five years operationalizing data science models, will show you a robust framework that'll work with R and Python models using your existing SQL Server implementation. This isn't the only way to do things, but this solution will fit easily into a SQL Server 2016+ database, helping you get models into production quickly and safely. Using SQL Server 2016+ with R/ML Services installed, attendees will use temporal tables and other new features to deploy models, log the results and then monitor using Power BI.

Is it possible that SQL Server now runs on Linux? In this session, we’ll cover the details of deploying and running SQL Server on Linux, including Red Hat, SUSE and Ubuntu. We’ll go over the internals of the architecture and the unique aspects of using SQL Server on Linux. Also covered will be how SQL Server has embraced Docker containers, providing new scenarios not seen before with virtual machines, including support for Kubernetes and CI/CD pipelines. The session will be full of demos showing how to make the move to using SQL Server on Linux and Docker containers. If you are a MacBook user and want to run SQL Server with no virtualization or Windows installation, don't miss this session. This session will discuss the new features of SQL Server 2019 on Linux and containers.

For years, SQL Server Reporting Services and Power BI lived in their own separate worlds: SSRS as an on-premises solution and Power BI in the cloud. However, with the release of Power BI Report Server, SSRS and Power BI can live happily together as an on-premises solution. In this presentation, we'll cover the basics of Power BI Report Server. We'll discuss the essential moving parts and will walk through what your organization will need to get started using this new union of Power BI and SSRS.

There are many ways to detect and capture changes to the data in your business system to populate your data warehouse. In this session, we will compare and contrast several methods for loading slowly changing dimensions in your ETL solutions, including SSIS design patterns, T-SQL code, change data capture and temporal tables. Attendees will gain a full understanding of the pros and cons of each of these solutions and become confident in choosing and implementing them in their own ETL solutions.
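The Type 2 slowly changing dimension logic underlying all of these approaches can be sketched in a few lines. The following plain-Python illustration expires the current row when a tracked attribute changes and inserts a new version; the column names are hypothetical, and the session's actual patterns use SSIS and T-SQL:

```python
from datetime import date

def apply_scd2(dim: list, source: dict, today: date) -> None:
    """Type 2 SCD: close the current row on change, append a new version."""
    current = next(
        (r for r in dim
         if r["customer_id"] == source["customer_id"] and r["valid_to"] is None),
        None,
    )
    if current and current["city"] == source["city"]:
        return  # no change: keep the current row open
    if current:
        current["valid_to"] = today  # expire the old version
    dim.append({**source, "valid_from": today, "valid_to": None})
```

The T-SQL equivalents (MERGE statements, change data capture, or temporal tables) implement this same expire-and-insert pattern, which is what the session compares across technologies.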

The Query Store now allows us to track query plans as they change over time, giving us a slew of new possibilities when it comes to tuning our queries. Even just the ability to compare a previous plan to a new plan is a huge step toward understanding what may be happening in our instance. We can even tell the optimizer which plan we want it to use. These were all either extremely difficult—and, in some cases, impossible—to do before. This session will explain what the Query Store is all about and give you the insight to get started using this new and wonderful feature set.

Tuning disk subsystems for optimal SQL Server performance is typically the domain of very experienced enterprise DBAs. It normally requires years of experience with the hard disk subsystem to learn exactly what configurations perform best, provide the greatest fault tolerance and allow for the most scalability. Configuring hardware can be very intimidating, especially when the application needs to scale. This session will teach you the best practices, tips and techniques to help you avoid costly mistakes, and they will serve as the foundation for the long-term success of your SQL Server environment.

When Microsoft announced Stretch Databases, we all loved the idea—that is, until pricing was revealed. But perhaps, by combining various SQL Server features, we could “stretch” databases to the cloud ourselves. By using files and filegroups and partitioning along with a locally mounted Azure or AWS fileshare, you can build your own stretch database without necessarily breaking the bank. In this session, you’ll learn the steps to build a quasi-stretch database to store stale data in cheap cloud storage yet still have it easily accessible when needed.

You have just been informed that you are moving to AWS. Now what? In this presentation, we’ll cover the ups and downs of using AWS Elastic Compute Cloud (EC2) instances for your SQL Server. We will show you how to create an EC2 instance and install SQL Server on it; go over networking, security and routing services; and demonstrate how to build a simple, highly available, two-node availability group. By the end of the session, you should be on the right track to start using AWS in your environment.

Are you missing out on the promise and excitement of Azure Machine Learning because your customers are unwilling or unable to commit to the cloud? Wouldn't it be great if you could harness the same capabilities for your on-premises data? With Python integrated in SQL Server 2017, you can. In this session, we will provide real-world examples of problems solved on-premises using both supervised and unsupervised machine learning techniques. Attendees will learn how to use Machine Learning Services (In-Database) integration to tackle complex data prediction scenarios.

Big data is not just a buzzword. Today, companies from startups to enterprises have access to massive amounts of data generated by users and connected devices. The variety, volume and velocity of data requires specialized tools and massively scalable compute, storage and analytics capabilities to generate valuable insights. Amazon Web Services (AWS) offers a broad range of services for virtually any kind of big data application, such as data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, IoT processing, and machine learning. In this session, we will look at how to choose the right tool for the job at every step of a data analytics workflow, from collecting and storing data to processing and consuming it at scale, while getting the performance, reliability and security you need and staying cost-efficient.

It wasn’t that long ago when virtualization changed the way we work with SQL and infrastructure. Now it’s all about containers. Containers can make the DBA’s job of creating and deploying servers much easier, freeing them up for other tasks such as performance tuning and security. In this session, we will go over the concepts of containers and their general capabilities. Then we will build, connect and configure a SQL Server in a container for use as a private lab.

This session will showcase how easily you can work with XML and JSON using R packages such as xml2, jsonlite and jq. We'll also cover how you can work with data stored in SQL Server, whether from a stand-alone R instance or in a stored procedure inside SQL Server.
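The session works in R with xml2, jsonlite and jq; as a language-neutral sketch of the same parsing tasks, here is the equivalent using Python's standard library. The sample documents are made up for illustration:

```python
import json
import xml.etree.ElementTree as ET

# Parse a JSON document into native structures (jsonlite's role in R).
doc = '{"server": "sql01", "databases": ["master", "sales"]}'
parsed = json.loads(doc)

# Parse XML and pull out attribute values (xml2's role in R).
xml_doc = "<servers><server name='sql01'/><server name='sql02'/></servers>"
names = [s.get("name") for s in ET.fromstring(xml_doc).findall("server")]
```

Whatever the language, the pattern is the same: parse the document into native data structures, then query or reshape those structures rather than the raw text.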

Accessing and Analyzing Current and Historical Weather Forecasts using AWS Athena

Speaker: Marty Sullivan, DevOps/Cloud Engineer, Cornell University

Do you want to learn how to access a massive data lake of geospatial data in Amazon S3? Amazon Athena allows a data analyst to write familiar SQL queries to access data stored in flat files directly in an Amazon S3 bucket. This session will cover data partitioning, bucketing, and basic and columnar file formats. Any datasets, from a few gigabytes to a few petabytes, can easily be queried with Amazon Athena. Attendees will take away the knowledge needed to start creating their own data lake using the same techniques discussed in this session with any data of their choosing.
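Athena's partition pruning relies on Hive-style key=value prefixes in S3. Here is a minimal sketch of building such prefixes when writing data; the bucket and dataset names are hypothetical:

```python
from datetime import date

def partition_prefix(bucket: str, dataset: str, day: date) -> str:
    """Build an S3 prefix Athena can prune with WHERE year=... AND month=..."""
    return (f"s3://{bucket}/{dataset}/"
            f"year={day.year}/month={day.month:02d}/day={day.day:02d}/")
```

Files written under these prefixes let a query such as `WHERE year = 2019 AND month = 3` scan only the matching partitions instead of the whole bucket, which is what keeps Athena fast and cheap on multi-terabyte datasets.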

SQL Server 2019, now in public preview, builds on the modern data platform of SQL Server 2017 to bring new capabilities and open up new worlds through a unified data platform. Come learn about the new features of SQL Server 2019, including Big Data Clusters with Spark and HDFS, machine learning, SQL Server extensions with Java, Intelligent Query Processing, Always On Availability Groups on Kubernetes, and a host of database engine enhancements for the data professional and developer.