NEW QUESTION 1You are conducting feature engineering to prepare data for further analysis. The data includes seasonal patterns in inventory requirements. You need to select the appropriate method to conduct feature engineering on the data. Which method should you use?

NEW QUESTION 2You plan to use a Deep Learning Virtual Machine (DLVM) to train deep learning models using Compute Unified Device Architecture (CUDA) computations. You need to configure the DLVM to support CUDA. What should you implement?

NEW QUESTION 3You are creating a new experiment in Azure Machine Learning Studio. You have a small dataset that has missing values in many columns. The data does not require the application of predictors for each column. You plan to use the Clean Missing Data module to handle the missing data. You need to select a data cleaning method. Which method should you use?
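Since the dataset is small and does not require per-column predictors, the Clean Missing Data module's simple substitution modes (such as replacing with the column mean) are the relevant family of methods. A minimal sketch of mean substitution in plain Python, using hypothetical data, purely to illustrate the idea rather than the module's actual implementation:

```python
from statistics import mean

def replace_missing_with_mean(column):
    """Replace None entries with the mean of the observed values.

    Illustrates the idea behind the module's "Replace with mean"
    cleaning mode; not the module's actual implementation.
    """
    observed = [v for v in column if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in column]

# Hypothetical column with two missing values; mean of 25, 31, 40 is 32.
ages = [25, None, 31, None, 40]
print(replace_missing_with_mean(ages))  # → [25, 32, 31, 32, 40]
```

Substitution methods like this are cheap and need no model per column, which is why they suit small datasets where predictor-based imputation (e.g. MICE) is unnecessary.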

NEW QUESTION 4You are a data scientist using Azure Machine Learning Studio. You need to normalize values to produce an output column grouped into bins to predict a target column.Solution: Apply an Equal Width with Custom Start and Stop binning mode.Does the solution meet the goal?

NEW QUESTION 5You are using Azure Machine Learning Studio to perform feature engineering on a dataset. You need to normalize values to produce a feature column grouped into bins.Solution: Apply an Entropy Minimum Description Length (MDL) binning mode.Does the solution meet the goal?

NEW QUESTION 6You are a data scientist building a deep convolutional neural network (CNN) for image classification. The CNN model you built shows signs of overfitting. You need to reduce overfitting and converge the model to an optimal fit. Which two actions should you perform? (Each correct answer presents a complete solution. Choose two.)

NEW QUESTION 7Drag and DropYou are building an intelligent solution using machine learning models. The environment must support the following requirements:– Data scientists must build notebooks in a cloud environment.– Data scientists must use automatic feature engineering and model building in machine learning pipelines.– Notebooks must be deployed to retrain using Spark instances with dynamic worker allocation.– Notebooks must be exportable to be version controlled locally.You need to create the environment. Which four actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)

Answer:Explanation:Step 1: Create an Azure HDInsight cluster to include the Apache Spark MLlib library.Step 2: Install Microsoft Machine Learning for Apache Spark (MMLSpark) on your Azure HDInsight cluster. MMLSpark provides a number of deep learning and data science tools for Apache Spark, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK) and OpenCV, enabling you to quickly create powerful, highly scalable predictive and analytical models for large image and text datasets.Step 3: Create and execute the Zeppelin notebooks on the cluster.Step 4: When the cluster is ready, export the Zeppelin notebooks to a local environment. Notebooks must be exportable to be version controlled locally.
https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-zeppelin-notebook
https://azuremlbuild.blob.core.windows.net/pysparkapi/intro.html

NEW QUESTION 8HotSpotYou create an experiment in Azure Machine Learning Studio. You add a training dataset that contains 10,000 rows. The first 9,000 rows represent class 0 (90 percent). The remaining 1,000 rows represent class 1 (10 percent). The training set is imbalanced between the two classes. You must increase the number of training examples for class 1 to 4,000 by using 5 data rows. You add the Synthetic Minority Oversampling Technique (SMOTE) module to the experiment. You need to configure the module. Which values should you use? (To answer, select the appropriate options in the dialog box in the answer area.)

Answer:Explanation:Box 1: 300. If you type 300 (%), the module adds three times the original number of minority cases: 3,000 synthetic cases are generated from the original 1,000, bringing class 1 to 4,000.Box 2: 5. We should use 5 data rows. Use the Number of nearest neighbors option to determine the size of the feature space that the SMOTE algorithm uses when building new cases. A nearest neighbor is a row of data (a case) that is very similar to some target case. The distance between any two cases is measured by combining the weighted vectors of all features. By increasing the number of nearest neighbors, you get features from more cases. By keeping the number of nearest neighbors low, you use features that are more like those in the original sample.https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/smote
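The percentage arithmetic from the explanation above can be checked directly: the SMOTE percentage controls how many synthetic minority cases are added relative to the original minority count, so 300 % of 1,000 adds 3,000 cases. A quick Python sketch of that calculation (not the SMOTE algorithm itself):

```python
def smote_new_total(minority_count, smote_percentage):
    """Total minority cases after SMOTE.

    The SMOTE percentage controls how many synthetic cases are added
    relative to the original minority count; only the row-count
    arithmetic is modeled here, not the synthesis of new cases.
    """
    synthetic = minority_count * smote_percentage // 100
    return minority_count + synthetic

# 1,000 minority rows + 300 % synthetic cases = 4,000 rows, as required.
print(smote_new_total(1000, 300))  # → 4000
```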

NEW QUESTION 9You are creating a machine learning model. You have a dataset that contains null rows. You need to use the Clean Missing Data module in Azure Machine Learning Studio to identify and resolve the null and missing data in the dataset. Which parameter should you use?

NEW QUESTION 10You use the Two-Class Neural Network module in Azure Machine Learning Studio to build a binary classification model. You use the Tune Model Hyperparameters module to tune accuracy for the model. You need to select the hyperparameters that should be tuned using the Tune Model Hyperparameters module. Which two hyperparameters should you use? (Each correct answer presents part of the solution. Choose two.)

A. Number of hidden nodes.
B. Learning Rate.
C. The type of the normalizer.
D. Number of learning iterations.
E. Hidden layer specification.
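Tune Model Hyperparameters essentially performs a parameter sweep, evaluating the model for each combination of candidate hyperparameter values and keeping the best. A minimal sketch of such a grid sweep over a learning rate and a hidden-node count, scored against a hypothetical validation function (`toy_score` is a stand-in for real validation accuracy, not anything from the module):

```python
from itertools import product

def sweep(score_fn, learning_rates, hidden_node_counts):
    """Exhaustive grid sweep: return the (learning_rate, hidden_nodes)
    pair that maximizes the supplied validation-score function."""
    grid = product(learning_rates, hidden_node_counts)
    return max(grid, key=lambda pair: score_fn(*pair))

# Hypothetical smooth score surface peaking at lr=0.1, 100 hidden nodes.
def toy_score(lr, nodes):
    return -((lr - 0.1) ** 2) - ((nodes - 100) ** 2) / 1e4

best = sweep(toy_score, [0.01, 0.1, 0.5], [50, 100, 200])
print(best)  # → (0.1, 100)
```

Note the sweep only makes sense for continuous or count-valued hyperparameters such as these; structural settings like the normalizer type or the hidden layer specification are fixed choices rather than tunable ranges.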

Case Study 1 – Sporting EventsYou are a data scientist in a company that provides data science for professional sporting events. Models will use global and local market data to meet the following business goals:……

NEW QUESTION 101You need to resolve the local machine learning pipeline performance issue. What should you do?

NEW QUESTION 103You need to implement a scaling strategy for the local penalty detection data. Which normalization type should you use?

A. Streaming
B. Weight
C. Batch
D. Cosine

Answer: CExplanation:Post batch normalization statistics (PBN) is the Microsoft Cognitive Toolkit (CNTK) approach to evaluating the population mean and variance of batch normalization for use at inference time (see the original batch normalization paper). In CNTK, custom networks are defined using the BrainScriptNetworkBuilder and described in the CNTK network description language, "BrainScript".https://docs.microsoft.com/en-us/cognitive-toolkit/post-batch-normalization-statistics
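During training, batch normalization standardizes each activation using the mean and variance of the current mini-batch (PBN then substitutes population statistics at inference). A minimal pure-Python sketch of the training-time standardization step, for scalar activations only:

```python
from statistics import fmean, pvariance

def batch_normalize(batch, eps=1e-5):
    """Standardize a batch of scalar activations using its own mean and
    (population) variance, as batch normalization does during training.

    The learned scale/shift parameters (gamma, beta) are omitted for
    brevity; eps guards against division by zero.
    """
    mu = fmean(batch)
    var = pvariance(batch, mu)
    return [(x - mu) / (var + eps) ** 0.5 for x in batch]

normalized = batch_normalize([2.0, 4.0, 6.0, 8.0])
# After normalization the batch has mean 0 and (near-)unit variance.
print(round(fmean(normalized), 6))  # → 0.0
```

At inference, PBN replaces `mu` and `var` with statistics accumulated over the training population, so single examples can be normalized consistently.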

NEW QUESTION 104……

Case Study 2 – FabrikamYou are a data scientist for Fabrikam Residences, a company specializing in quality private and commercial property in the United States. Fabrikam Residences is considering expanding into Europe and has asked you to investigate prices for private residences in major European cities. You use Azure Machine Learning Studio to measure the median value of properties. You produce a regression model to predict property prices by using the Linear Regression and Bayesian Linear Regression modules.……

NEW QUESTION 107You need to select a feature extraction method. Which method should you use?

NEW QUESTION 108Drag and DropYou need to correct the model fit issue. Which three actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)

Answer:

NEW QUESTION 109HotSpotYou need to configure the Feature Based Feature Selection module based on the experiment requirements and datasets. How should you configure the module properties? (To answer, select the appropriate options in the dialog box in the answer area.)