
In a series of papers scheduled to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Amazon researchers propose complementary AI algorithms that could form the foundation of an assistant that helps customers shop for clothes. One lets people fine-tune search queries by describing variations on a product image, while another suggests products that go with items a customer has already selected. Meanwhile, a third synthesizes an image of a model wearing clothes from different product pages to demonstrate how items work together as an outfit.

Amazon already leverages AI to power Style by Alexa, a feature of the Amazon Shopping app that suggests, compares, and rates apparel using algorithms and human curation. With style recommendations and programs like Prime Wardrobe, which allows users to try on clothes and return what they don’t want to buy, the retailer is vying for a larger slice of sales in a declining apparel market while surfacing products that customers might not normally choose. It’s a win for businesses on its face — excepting cases where the recommended accessories are Amazon’s own, of course.

Virtual try-on network

Researchers at Lab126, the Amazon hardware lab which spawned products like Fire TV, Kindle Fire, and Echo, developed an image-based virtual try-on system called Outfit-VITON that’s designed to help visualize how clothing items in reference photos might look on an image of a person. It can be trained on a single picture using a generative adversarial network (GAN), Amazon says, a type of model with a component called a discriminator that learns to distinguish generated items from real images.

“Online apparel shopping offers the convenience of shopping from the comfort of one’s home, a large selection of items to choose from, and access to the very latest products. However, online shopping does not enable physical try-on, thereby limiting customer understanding of how a garment will actually look on them,” the researchers wrote. “This critical limitation encouraged the development of virtual fitting rooms, where images of a customer wearing selected garments are generated synthetically to help compare and choose the most desired look.”

Outfit-VITON comprises several parts, beginning with a shape generation model. Its inputs are a query image, which serves as the template for the final image, and any number of reference images, which depict the clothes that will be transferred onto the person in the query image.

In preprocessing, established techniques segment the input images and compute the query person’s body model, representing their pose and shape. The segments selected for inclusion in the final image pass to the shape generation model, which combines them with the body model and updates the query image’s shape representation. This shape representation moves to a second model — the appearance generation model — that encodes information about texture and color, producing a representation that’s combined with the shape representation to create a photo of the person wearing the garments.

Outfit-VITON’s third model fine-tunes the variables of the appearance generation model to preserve features like logos or distinctive patterns without compromising the silhouette, resulting in what Amazon claims is “more natural” outputs than those of previous systems. “Our approach generates a geometrically correct segmentation map that alters the shape of the selected reference garments to conform to the target person,” the researchers explained. “The algorithm accurately synthesizes fine garment features such as textures, logos, and embroidery using an online optimization scheme that iteratively fine-tunes the synthesized image.”
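The staged pipeline described above (omitting the third, fine-tuning model) might be sketched as follows, with every model stubbed out as a toy function; all names, shapes, and operations here are illustrative stand-ins, not Amazon's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(image):
    # Stand-in for the segmentation step: one pseudo-mask per garment region.
    return {"top": image * 0.5, "bottom": image * 0.3}

def body_model(query_image):
    # Stand-in for computing the query person's pose/shape representation.
    return query_image.mean(axis=-1)

def shape_generator(body, garment_segments):
    # Combines the body model with the selected garment segments into an
    # updated shape representation for the query image.
    combined = body.copy()
    for seg in garment_segments:
        combined = combined + seg.mean(axis=-1)
    return combined

def appearance_generator(shape_rep, garment_segments):
    # Encodes texture/color information and merges it with the shape
    # representation to produce the final try-on image.
    texture = sum(garment_segments) / len(garment_segments)
    return texture * shape_rep[..., None]

# A query person and two reference garments, as toy 8x8 RGB images.
query = rng.random((8, 8, 3))
references = [rng.random((8, 8, 3)) for _ in range(2)]

body = body_model(query)
selected = [segment(ref)["top"] for ref in references]
shape_rep = shape_generator(body, selected)
try_on = appearance_generator(shape_rep, selected)
print(try_on.shape)  # same spatial size as the query image
```

The key design point the sketch preserves is the separation of concerns: shape is resolved first, so the appearance stage only has to paint texture and color onto a silhouette that already conforms to the target person.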

Visiolinguistic product discovery

One of the other papers tackles the challenge of using text to refine an image that matches a customer-provided query. The Amazon engineers’ approach fuses textual descriptions and image features into representations at different levels of granularity, so that a customer can say something as abstract as “Something more formal” or as precise as “Change the neck style,” and it preserves some image features while following customers’ instructions to change others.

The system consists of models trained on triples of inputs: a source image, a textual revision, and a target image that matches the revision. The inputs pass through three different sub-models in parallel, and at distinct points in the pipeline, the representation of the source image is fused with the representation of the text before it's correlated with the representation of the target image. Because the lower levels of the model tend to represent lower-level input features (e.g., textures and colors) and the higher levels higher-level features (e.g., sleeve length or tightness of fit), this hierarchical matching trains the system to handle textual modifications at different levels of granularity, according to Amazon.

Each fusion of linguistic and visual representations is performed by a separate two-component model. One uses a joint attention mechanism to identify visual features that should be the same in the source and target images, while the other identifies features that should change. In tests, the researchers say that it helped to find valid matches to textual modifications 58% more frequently than its best-performing predecessor.
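The keep/change decomposition at the heart of each fusion step can be illustrated with a toy numerical sketch; the gating and update functions below are simple stand-ins for the learned attention components, not the paper's actual model:

```python
import numpy as np

def fuse(image_feat, text_feat):
    # One component decides which source-image features to keep: a strong
    # text signal pushes the gate toward 0, handing the feature over to
    # the "change" term; a weak signal keeps the source feature as-is.
    keep_gate = 1.0 / (1.0 + np.exp(text_feat))  # toy "keep" attention
    change = np.tanh(image_feat + text_feat)     # toy "change" term
    return keep_gate * image_feat + (1.0 - keep_gate) * change

source_feat = np.array([0.2, -0.4, 0.9])  # image features at one level
text_feat = np.array([0.0, 2.0, -2.0])    # encoded textual revision
fused = fuse(source_feat, text_feat)      # same shape as the inputs
```

In the real system, a fusion like this happens at several levels of the network, and training pulls each fused representation toward the representation of the matching target image.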

“Image search is a fundamental task in computer vision. In this work, we investigate the task of image search with text feedback, which entitles user to interact with the system by selecting a reference image and providing additional text to refine or modify the retrieval results,” the coauthors wrote. “Unlike the prior works that mostly focus on one type of text feedback, we consider the more general form of text, which can be either attribute-like description, or natural language expression.”

Complementary-item retrieval

The last paper investigates a technique for large-scale fashion data retrieval, where a system predicts an outfit item’s compatibility with other clothing, wardrobe, and accessory items. It takes as inputs any number of garment images together with a numerical representation called a vector indicating the category of each, along with a category vector of the customer’s sought-after item, allowing a customer to select things like shirts and jackets and receive recommendations for shoes.

“Customers frequently shop for clothing items that fit well with what has been selected or purchased before,” the researchers wrote. “Being able to recommend compatible items at the right moment would improve their shopping experience … Our system is designed for large-scale retrieval and outperforms the state-of-the-art on compatibility prediction, fill-in-the-blank, and outfit complementary item retrieval.”

Images pass through a model that produces a vector representation of each, and each representation passes through a set of masks that de-emphasize some representation features and amplify others. (The masks are learned during training, and the resulting representations encode product information like color and style that’s relevant only to a subset of complementary items, such as shoes, handbags, and hats.) Another model takes as input the category for each image and the category of the target item and outputs values for prioritizing the masks, which are called subspace representations.

The whole system is trained using an evaluation criterion that accounts for the outfit. Each training sample includes an outfit as well as items that go well with that outfit and a group of items that don’t, such that post-training, the system produces vector representations of every item in a catalog. Finding the best complement for a particular outfit then becomes a matter of looking up the corresponding vectors.
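The retrieval side of that description can be sketched in a few lines of numpy, assuming the masks and item embeddings have already been learned; all dimensions, weights, and the dot-product similarity measure here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N_MASKS = 16, 4

# Learned during training in the real system; random stand-ins here.
masks = rng.random((N_MASKS, DIM))         # subspace masks
catalog = rng.standard_normal((100, DIM))  # one embedding per catalog item

def subspace_embedding(item_vec, mask_weights):
    # Blend the masks by their predicted priorities for this
    # (outfit category, target category) pair, then apply the blend.
    blended_mask = mask_weights @ masks    # shape (DIM,)
    return item_vec * blended_mask

def retrieve(outfit_vecs, mask_weights, k=3):
    # Average the outfit's masked embeddings into a single query vector
    # and return the k most similar catalog items.
    query = np.mean(
        [subspace_embedding(v, mask_weights) for v in outfit_vecs], axis=0
    )
    scores = catalog @ query
    return np.argsort(scores)[::-1][:k]

outfit = [rng.standard_normal(DIM) for _ in range(2)]  # e.g. shirt + jacket
weights = np.array([0.7, 0.1, 0.1, 0.1])  # from the category model
top_items = retrieve(outfit, weights)
```

Because every catalog item's embedding can be precomputed, finding the best complement for an outfit reduces to the vector lookup the researchers describe.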

In tests that use two standard measures on garment complementarity, the system outperformed its three top predecessors with 56.19% fill-in-the-blank accuracy (and 87% compatibility area under the curve) while enabling more efficient item retrieval, and while achieving state-of-the-art results on data sets crawled from multiple online shopping websites (including Amazon and Like.com).

In a paper published on the preprint server Arxiv.org, researchers affiliated with Microsoft and Arizona State University propose an approach to detecting fake news that leverages a technique called weak social supervision. They say that by enabling the training of fake news-detecting AI even in scenarios where labeled examples aren’t available, weak social supervision opens the door to exploring how aspects of user interactions indicate news might be misleading.

According to the Pew Research Center, approximately 68% of U.S. adults got their news from social media in 2018 — which is worrisome considering, for instance, that misinformation about the pandemic continues to go viral. Companies from Facebook and Twitter to Google are pursuing automated detection solutions, but fake news remains a moving target owing to its topical and stylistic diversity.

Building on a study published in April, the coauthors of this latest work suggest that weak supervision — where noisy or imprecise sources provide data labeling signals — could improve fake news detection accuracy without requiring fine-tuning. To this end, they built a framework dubbed Tri-relationship for Fake News (TiFN) that models social media users and their connections as an “interaction network” to detect fake news.

Interaction networks describe the relationships among entities like publishers, news pieces, and users; given an interaction network, TiFN’s goal is to embed different types of entities, following from the observation that people tend to interact with like-minded friends. In making its predictions, the framework also accounts for the fact that connected users are more likely to share similar interests in news pieces; that publishers with a high degree of political bias are more likely to publish fake news; and that users with low credibility are more likely to spread fake news.
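A toy version of that intuition combines publisher bias and sharer credibility into a single weak score; the entities, values, and 50/50 weighting below are illustrative assumptions, not TiFN's actual formulation:

```python
# Toy interaction network: publishers publish news pieces, users share them.
publisher_bias = {"pub_a": 0.9, "pub_b": 0.1}         # 1.0 = extremely biased
user_credibility = {"u1": 0.2, "u2": 0.8, "u3": 0.3}  # 1.0 = highly credible

news = {
    "story_x": {"publisher": "pub_a", "shared_by": ["u1", "u3"]},
    "story_y": {"publisher": "pub_b", "shared_by": ["u2"]},
}

def weak_fake_score(item):
    # Biased publishers and low-credibility sharers both raise the score.
    bias = publisher_bias[item["publisher"]]
    low_cred = sum(1 - user_credibility[u] for u in item["shared_by"])
    low_cred /= len(item["shared_by"])
    return 0.5 * bias + 0.5 * low_cred

scores = {name: weak_fake_score(item) for name, item in news.items()}
# story_x (biased publisher, low-credibility sharers) scores higher.
```

Signals like these stand in for hand labels: they are noisy for any single item, but across many items they give the detector something to learn from.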

To test whether TiFN's weak social supervision could help to detect fake news effectively, the team validated it against a Politifact data set containing 120 true news pieces and 120 verifiably fake pieces shared among 23,865 users. Versus baseline detectors that consider only news content and some social interactions, they report that TiFN achieved between 75% and 87% accuracy, even with a limited amount of weak social supervision (within 12 hours after the news was published).

In another experiment involving a separate custom framework called Defend, the researchers sought to use as a weak supervision signal news sentences and user comments explaining why a piece of news is fake. Tested on a second Politifact data set consisting of 145 true news and 270 fake news pieces with 89,999 comments from 68,523 users on Twitter, they say that Defend achieved 90% accuracy.

“[W]ith the help of weak social supervision from publisher-bias and user-credibility, the detection performance is better than those without utilizing weak social supervision. We [also] observe that when we eliminate news content component, user comment component, or the co-attention for news contents and user comments, the performances are reduced. [This] indicates capturing the semantic relations between the weak social supervision from user comments and news contents is important,” wrote the researchers. “[W]e can see within a certain range, more weak social supervision leads to a larger performance increase, which shows the benefit of using weak social supervision.”

Today, following the launch of a Facebook program that’s matching developers with organizations to build solutions that inform people about COVID-19, Facebook revealed the steps it’s taking to help Messenger users stay connected. Via a new hub — the Messenger Coronavirus Community Hub — the network is providing tips and resources to help users stay in touch with friends, family, colleagues, and community while preventing the spread of misinformation.

“Messenger helps you feel together with the people you care about, even when you can’t be together,” wrote Stan Chudnovsky in a blog post. “Around the world, we’ve seen significant increases in people using Messenger for group calls to stay in touch with their loved ones. Globally, 70% more people are participating in group video calls and time spent on group video calls has doubled. Whether it’s a one-on-one conversation with a friend or a video call with your extended family, Messenger can help keep you connected to your support system and help get us through these challenging times.”

Facebook says the hub will recommend activities like scheduling a virtual play date, connecting with kids’ teachers or other parents for school updates, and organizing group video chats or text groups. Additionally, it will highlight ways to identify false or misleading information about COVID-19, and it will suggest how to avoid online scams related to COVID-19 treatments or fundraising.

Separately, to limit the spread of misinformation about COVID-19, Facebook is exploring options like testing stricter limits for how many chats Messenger users can forward a message to at one time. The company has also banned ads for hand sanitizer, medical masks, and COVID-19 test kits in recent weeks.

“As this global public health crisis evolves, our mission to connect people around the world could not be more important,” said Chudnovsky. “We hope the hub can serve as a resource for people to help maintain their communities and social connections even when they can’t be together in-person.”

The Messenger hub complements the coronavirus information center Facebook rolled out earlier this month, which collates sources like the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) and sits at the top of the news feed. It includes curated posts from politicians, journalists, and celebrities intended to spread useful health information.

Elsewhere, Facebook launched a WhatsApp information hub in partnership with the WHO, UNICEF, and UNDP to offer actionable guidance, advice, and resources to inform users about COVID-19. WhatsApp said that it’s working with the WHO and UNICEF to provide messaging hotlines for people to use directly.

Beyond its educational efforts, Facebook recently said it would roll out a $100 million grant program to assist 30,000 eligible small businesses in over 30 countries. It also teamed up with the WHO, Microsoft, TikTok, and other health experts and tech companies to launch #BuildforCOVID19, a global hackathon that aims to find software solutions for challenges related to the novel coronavirus, and it donated $1 million to support the International Fact-Checking Network, a unit of the Poynter Institute dedicated to bringing together fact-checkers worldwide.

The Microsoft Dynamics 365 family keeps growing. Maybe it is because of that growth, or maybe it is because of the product's general complexity, but the fact is that all sorts of questions about Dynamics 365 keep coming up, from both users and sysadmins.

Here are 5 of the most frequently asked questions on Dynamics 365, with my view on them and some articles and videos for you to explore further.

1 – How can I reduce costs?

In general, Microsoft Dynamics 365 costs can be split into:

Software licenses

Hardware

Cloud storage, portals and other add-ons

Experts and other hires for Implementation and Customization

Training and onboarding

If you opt for a SaaS version of Dynamics 365, I would say cloud storage will be a significant percentage of your costs. Although Dynamics 365 storage is, in fact, less expensive than its competitors', storage is the best area to cut costs and actually get a better solution in the end. Do read on.

2 – How can I increase Dynamics 365 security?

As long as you initially set up the proper security roles in Dynamics 365, Microsoft Dynamics security should not be a big issue. But you need to be aware of security issues that affect some integrations, namely with Microsoft SharePoint. Did you know Dynamics privileges do not automatically convert into SharePoint permissions? Here is all the info you need on that.

3 – How can I increase team collaboration?

“No man is an island,” they say. The truth is that you can get very interesting results if you get a whole team collaborating rather than having each individual work with Dynamics on their own. Want to know more?

4 – How can I integrate Dynamics 365 with other software?

If “No man is an island”, it is also true that no software is an island. Today’s world is a connected and integrated world.

It is good that it is so, as you certainly can get interesting features and synergies by integrating Dynamics 365 with other software (Microsoft or not). If this is something you would like to explore, check out

By Ana Neto of Connecting Software, a producer of integration and synchronization solutions. Connecting Software is a 15-year-old company with 40 employees spread across 4 countries. We specialize in making software work together. Our goal is that everyone in your company can feel the software they use is their ally.

This article is part of a VB special issue. Read the series here: Power in AI.

My name is John.

My name is John Connor, and I live at 19,828 Almond Avenue, Los Angeles.

My name is John Connor, I live at 19,828 Almond Avenue, Los Angeles, and my California police record J-66455705 lists my supposedly expunged juvenile convictions for vandalism, trespassing, and shoplifting.

Which of these levels of personal detail do you feel comfortable sharing with your smartphone? And should every app on that device have the same level of knowledge about your personal details?

Welcome to the concept of siloed sharing. If you want to keep relying on your favorite device to store and automatically sort through your data, it’s time to start considering whether you want to trust device-, app-, and cloud-level AI services to share access to all of your information, or whether there should be highly differential access levels with silo-class safeguards in place.

The siloing concept

Your phone already contains far more information about you than you realize. Depending on who makes the phone’s operating system and chips, that information might be spread across storage silos — separate folders and/or “secure enclaves” — that aren’t easily accessible to the network, the operating system, or other apps. So if you take an ephemeral photo for Snapchat, you might not have to worry that the same image will be sucked up by Facebook without your express permission.

Or all of your data could be sitting in one big pocket- or cloud-sized pool, ready for inspection. If you’ve passively tapped “I agree” buttons or given developers bits of personal data, you can be certain that there’s plenty of information about you on multiple cloud servers across the world. Depending on the photos, business documents, and calendar information you store online, Amazon, Facebook, Google, and other technology companies may already have more info about your personal life than a Russian kompromat dossier.

A silo isn’t there to stop you from using these services. It’s designed to help you keep each service’s data from being widely accessible to others — a particular challenge when tech companies, even “privacy-focused” ones such as Apple, keep growing data-dependent AI services to become bigger parts of their devices and operating systems.

Until recently, there was a practical limit to massive data gathering and mining operations: Someone had to actually sift through the data, typically at such considerable time and expense that only governments and large corporations could allocate the resources. These days, affordable AI chips handle the sifting, most often with at least implicit user consent, for every cloud and personal computer promising “smart” services. As you bring more AI-assisted doorbells, cameras, and speakers into your home, the breadth and depth of sifted, shareable knowledge about you keeps growing by the day.

It’s easy to become comfortable with the conveniences AI solutions bring to the table, assuming you can trust their makers not to misuse or share the data. But as AI becomes responsible for processing more and more of your personal information, who knows how that data will be used, shared, or sold to others? Will Google leverage your health data to help someone sell you insurance? Or could Apple deny you access to a credit card based on private, possibly inaccurate financial data?

Of course, the tech giants will say no. But absent hard legal or user-imposed limits on what can be done with private data, the prospect of making (or saving) money by using AI-mined personal information is already too tempting for some companies to ignore.

Sounding the AI alarm

Artificial intelligence’s potential dangers landed fully in the public consciousness when James Cameron’s 1984 film The Terminator debuted, imagining that in 1997 a “Skynet” military computer would achieve self-awareness and steer armies of killer robots to purge the world of a singular threat: humanity. Cameron seemed genuinely concerned about AI’s emerging threats, but as time passed, society and Terminator sequels realized that an AI-controlled world wasn’t happening anytime soon. In the subsequent films, Skynet’s self-awareness date was pushed off to 2004, and later 2014, when the danger was reimagined as an evil iCloud that connected everyone’s iPhones and iPads.

Putting aside specific dates, The Terminator helped spread the idea that human-made robots won’t necessarily be friendly and giving computers access to personal information could come back to haunt us all. Cameron originally posited that Sarah Connor’s name alone would be enough information for a killer robot to find her at home using a physical phone book. By the 1991 sequel, a next-generation robot located John Connor using a police car’s online database. Today, while the latest Terminator film is in theaters, your cell phone is constantly sharing your current location with a vast network infrastructure, whether you realize it or not.

If you’re a bad actor, this means a bounty hunter can bring you in on an outstanding warrant. If you’re really bad, that’s enough accuracy for a government to pinpoint you for a drone strike. Even if you’re a good citizen, you can be located pretty much anytime, anywhere, as long as you’re “on the grid.” And unlike John Connor in Terminator 3, most of us have no meaningful way of getting off that grid.

Above: The Windows Apps team at Microsoft created a Terminator Vision HUD for HoloLens. It might not seem cute in the future.

Image Credit: Microsoft

Location tracking may be the least of your concerns. Anyone in the U.S. with a federally mandated Real ID driver’s license already has a photo, address, social security number, and other personal details flowing through one or more departments of motor vehicles, to say nothing of systems police can access on an as-needed basis. U.S. residents who have applied for professional licenses most likely have fingerprints, prior addresses, and possibly prior employers circulating in some semi-private databases, too. Top that off with cross-referenced public records, and it’s likely that your court appearances, convictions, and home purchases are all part of someone’s file on you.

Add more recent innovations — such as facial recognition cameras and DNA testing — to the mix, and you’ll have the perfect cocktail for paranoia. Armed with all of that data, computer systems with modern AI could instantly identify your face whenever you appear in public while also understanding your strengths and weaknesses on a genetic level.

The further case for siloing data

As 2019 draws to a close, the idea that a futuristic computer would need to locate you using a phone book seems amusingly quaint. Fully self-aware AIs aren’t yet here, but partially autonomous AI systems are closer to science fact than science fiction.

There’s probably no undoing any of what’s already been shared with companies; the data points about you are already out there, and heavily backed up. To the extent that databases of personal information on millions of people might once have lived largely on gigantic servers in remote locations, they now fit on flash drives and can be shared over the internet in seconds. Hackers trade them for sport.

Absent national or international laws to protect personal data from being aggregated and warehoused — Europe’s GDPR is a noteworthy exception, with state-level alternatives such as California’s CCPA — our solutions may wind up being personal, practical, and technological. Those of us living through this shift will need to start clamping down on the data we share and teach successive generations, starting with our kids, to be more cautious than we were.

Based on what’s been happening with social networks over the past decade, that’s clearly going to be difficult. Apart from its behind-the-scenes data mining, Facebook hosts innocuous-looking “fun” surveys designed to get people to cough up bits of information about themselves, historically while gathering additional information about a user’s profile, friends, and photos. We’ve been trained to become so numb to these information-gathering activities that there’s a temptation to just stop caring and keep sharing.

To wit, phones now automatically upload personal photos en masse to cloud servers, where they're sorted by date and location at a minimum, and perhaps by facial, object, and text recognition as well. We may not even know that we're sharing some of the information; AI may glean details from an image's background and make inferences missed by the original photographer.

Cloud-based, AI-sorted storage has a huge upside: convenience. But if we’re going to keep relying on someone else’s computers and AI for assistance with our personal files, we need rules that limit their initial and continued access to our data. Although it might be acceptable for your photo app to know that you were next to a restaurant when a fire broke out, you might not want that detail — or your photos at the restaurant — automatically shared with investigators. Or perhaps you do. That’s why sharing silos are so important.

We’re already at the point where these “assistants” are becoming fully integrated into the operating systems we rely on every day. Apart from occasional complaints about Siri’s internet connectivity, assistants draw upon data from the cloud and our devices so quickly that we generally don’t even realize it’s happening.

This raises three questions. The first is how much your most trusted Android, iOS, macOS, or Windows device actually “knows” about you, with or without your permission. A critical second question is how much of that data is being shared with specific apps. And a related third question is how much app-specific data is being shared back to the device’s operating system and/or the cloud.

Users deserve transparent answers to all of these questions. We should also be able to cut the data at any time, anywhere it’s being stored, without delay or permission from a gatekeeper.

For instance, you might have heard about Siri’s early ability to reply to the joke query, “Where do I bury a body?” That’s the sort of question (almost) no one would ask, jokingly or otherwise, if they thought their digital assistant might contact the police. What have you asked your digital assistant — anything potentially embarrassing, incriminating, or capable of being misconstrued? Now imagine that there’s a database out there with all of your requests, and perhaps some accidentally submitted recordings or transcripts, as well.

In a world where smartphones are our primary computers, linked both to the cloud and to your laptop, desktop, tablet, or wearable devices, there must be impenetrable data-sharing firewalls both at the edges of devices and within them. And users need to have clear, meaningful control over which apps have access to specific types, and perhaps levels, of personal data.

There should be multiple silos at both the cloud and device OS levels, paired with individual silos for each app. Users shouldn’t just have the ability to know that “what’s on your iPhone stays on your iPhone” — they should be able to know that what’s in each app stays in that app, and enjoy continuous access (with add/edit/delete rights) to each personal data repository on a device, and in the cloud.
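In code, the per-app silo idea reduces to an explicit grant table that the OS enforces and the user can edit at any time. This is a minimal sketch with made-up app names and data classes:

```python
# Each app sees only the data classes it has been explicitly granted;
# the app names and data classes below are made up for illustration.
SILOS = {
    "photos_app": {"photos", "location"},
    "health_app": {"heart_rate", "weight"},
    "social_app": {"photos"},
}

def read(app, data_class):
    # The OS-level gatekeeper: no grant, no data.
    if data_class not in SILOS.get(app, set()):
        raise PermissionError(f"{app} has no grant for {data_class}")
    return f"<{data_class} records>"

def revoke(app, data_class):
    # Users keep add/edit/delete rights over every silo at any time.
    SILOS[app].discard(data_class)

print(read("photos_app", "photos"))  # allowed
revoke("social_app", "photos")
# read("social_app", "photos")       # would now raise PermissionError
```

The point is not the trivial lookup but where it lives: enforcement at the OS and cloud boundary, with the grant table visible and editable by the user rather than buried in a vendor's terms of service.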

AI is power, but AI is powerless without data

That’s not to say that every person or even most people will dive into these databases. But we should have the ability, at will, to control which personal details are being shared. Trusting your phone to know your weight or heart rate shouldn’t mean granting the same granular data access to your doctor or life insurance provider — unless you wish to do so.

As operating systems grow to subsume more previously non-core functions, such as monitoring health and sorting photos, internal silos within operating systems may be necessary to keep specific types of data (or summaries of that data) from being shared. There are also times when people will want to use certain device features and services anonymously. That should be an option.

AI wields the power to sift through untold quantities of data and make smart decisions with minimal to no human involvement, ideally for the general benefit of people and humanity as a whole. AI will transform more facets of our society than we realize, and it is already impacting plenty of things, for better and for worse.

Data is the fuel for AI’s power. As beneficial as AI can be, planning now to limit its access to personal data using silos is the best way to stop or reduce problems down the road.


By 2020, Microsoft will require Dynamics 365 users to migrate to the Unified Interface. The Unified Interface is a far superior user interface that is modern, unified across all devices (desktop, mobile, and tablet), and makes the system easier to use.

Did you know there are two user interfaces for Dynamics 365?

Classic Web Interface

Unified Interface

Several legacy web capabilities are being removed from the product altogether; examples include Process dialogs and Task Flows.

There are many benefits to the Unified Interface; overall, it is a far superior interface in just about every way.

Impact on Dynamics 365 users: every Dynamics 365 system using the legacy web interface needs to be reviewed to assess any compatibility issues well in advance of migrating to the Unified Interface.

We offer a review service to assess your system and identify compatibility and usability issues. You receive a report on any issues found, the impact of each issue, and the approximate cost to address it.

About the Author: David Buggy is a veteran of the CRM industry with 18 years of experience helping businesses transform by leveraging Customer Relationship Management technology. He has over 16 years of experience with Microsoft Dynamics CRM/365 and has helped hundreds of businesses plan, implement, and support CRM initiatives. To reach David, call 844.8.STRAVA (844.878.7282). To learn more about Strava Technology Group, visit www.stravatechgroup.com.

Personal assistants like Apple’s Siri accomplish tasks through natural language commands. However, their underlying components often rely on supervised machine learning algorithms requiring large amounts of hand-annotated training data. In an attempt to reduce the time and effort taken to collect this data, researchers at Apple developed a framework that leverages user engagement signals to automatically create data-augmenting labels. They report that when incorporated using strategies like multi-task learning and validation with an external knowledge base, the annotated data significantly improve accuracy in a production deep learning system.

“We believe this is the first use of user engagement signals to help generate training data for a sequence labeling task on a large scale, and can be applied in practical settings to speed up new feature deployment when little human-annotated data is available,” wrote the researchers in a preprint paper. “Moreover … user engagement signals can help us to identify where the digital assistant needs improvement by learning from its own mistakes.”

The researchers used a range of heuristics to identify behaviors indicating either positive or negative engagement. A few included tapping on content to engage with it further (a positive response), listening to a song for a long duration (another positive response), or interrupting content provided by an intelligent assistant and manually selecting different content (a negative response). Those signals were selectively harvested in a “privacy-preserving manner” to automatically produce ground truth annotations, and they were subsequently combined with coarse-grained labels provided by human annotators.
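A minimal sketch of how such heuristics might be encoded is shown below. The signal names and the 30-second listening threshold are illustrative assumptions; the paper does not publish its exact rules.

```python
# Illustrative weak-labeling heuristics for engagement signals.
# Event types and thresholds are hypothetical, not from the paper.

def weak_label(event):
    """Map a logged interaction to a weak engagement label, or None."""
    if event["type"] == "tap_content":
        return "positive"  # tapping to engage further suggests a good result
    if event["type"] == "play" and event.get("seconds_played", 0) >= 30:
        return "positive"  # a long listen suggests the right item was chosen
    if event["type"] == "interrupt_and_reselect":
        return "negative"  # the user overrode the assistant's choice
    return None  # ambiguous signals are discarded

events = [
    {"type": "play", "seconds_played": 180},
    {"type": "interrupt_and_reselect"},
    {"type": "play", "seconds_played": 3},
]
labels = [weak_label(e) for e in events]
print(labels)  # ['positive', 'negative', None]
```

Labels produced this way are noisy, which is why the paper combines them with coarse-grained human annotations rather than using them alone.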

In order to incorporate the coarse-grained labels and the inferred fine-grained labels into an AI model, the paper’s coauthors devised a multi-task learning framework that treats coarse-grained and fine-grained entity labeling as two tasks. Additionally, they incorporated an external knowledge base validator consisting of entities and their relations. Given the prediction “something” as a music title and “the Beatles” as a music artist for the query “Play something by the Beatles,” the validator would perform a lookup for the top label alternatives and send them to a component that’d re-rank the predictions and return the best alternative.
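The validate-then-re-rank step might look roughly like the following sketch, where the knowledge base contents, field names, and scores are invented for illustration:

```python
# Hypothetical knowledge-base validator: checks whether a predicted
# (title, artist) pair exists, then re-ranks the model's hypotheses.

KNOWLEDGE_BASE = {("something", "the beatles"), ("one", "metallica")}

def validate(hypothesis):
    """Return True if the (title, artist) pair exists in the KB."""
    title, artist = hypothesis["title"], hypothesis["artist"]
    return (title.lower(), artist.lower()) in KNOWLEDGE_BASE

def rerank(hypotheses):
    """Prefer the highest-scoring hypothesis that passes validation."""
    valid = [h for h in hypotheses if validate(h)]
    pool = valid if valid else hypotheses  # fall back if none validate
    return max(pool, key=lambda h: h["score"])

hypotheses = [
    {"title": "anything", "artist": "The Beatles", "score": 0.9},
    {"title": "Something", "artist": "The Beatles", "score": 0.8},
]
best = rerank(hypotheses)
print(best["title"])  # Something
```

The key design choice is that the validator never invents a new answer; it only promotes an existing hypothesis that is consistent with the knowledge base.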

The researchers compiled two separate test sets to evaluate the tasks performed by the multi-task model, randomly sampling from the production system and hand-annotating with ground truth labels. They say that across 21 model runs, adding 260,000 training examples “consistently” reduced the coarse-grained entity error rate on a prediction task compared with the baseline for all amounts of human-annotated data. Moreover, they report that adding weakly supervised fine-grained data had a larger impact when there was a relatively small amount of human-annotated data (5,000 examples). Lastly, they report that for examples where any of the top model hypotheses passed the knowledge base validator, the fine-grained entity error rate dropped by around 50%.

In another experiment, the team sought to determine whether more granular representations of the user’s intent would increase the likelihood of the system selecting the correct action. They sampled roughly 5,000 “play music” commands containing references to multiple bands, artists, and songs and sent them through a system incorporating their framework, after which they asked annotators to grade the response returned by the system as “satisfactory” or “unsatisfactory.” The results produced by the enhanced system achieved a relative task error rate reduction of 24.64%, the researchers report.

They leave to future work exploring using individual users’ engagement behaviors to improve personalization.

“We observe that our model improves user-facing results especially for requests that contain difficult or unusual language patterns,” wrote the coauthors. “For example, the enhanced system correctly handles queries such as ‘Can you play Malibu from Miley Cyrus new album’ and ‘Play Humble through my music Kendrick Lamar.’ Also, the enhanced model identifies entities that users are more likely to refer to in cases of genuine linguistic ambiguity. For example, in ‘Play one by Metallica,’ ‘one’ could either be a non-entity token (meaning play any song by Metallica), or it could refer specifically to the song called ‘One’ by Metallica. Since most users listen to the song ‘One’ by Metallica whenever they say ‘Play one by Metallica,’ our model trained on engagement-annotated data will learn to predict ‘one’ as the music title, thus better capturing trends and preferences in our user population.”

The work comes on the heels of a paper describing Apple’s Overton, an AI development tool whose models have processed ‘billions’ of queries. Separately, the Cupertino company recently studied whether users preferred conversing with “chattier” AI assistants.

Whether you are new to Dynamics 365 or have been using it for a while, tailored CRM training sessions can help your teams get the most from the sales and customer service apps.

If your system is new, your people will need to understand how to tap into its many features in order to improve working practices.

If you have been using Dynamics 365 for a while, it can still be helpful to revisit CRM training: you may have new people in the team, or your processes and requirements may have changed over time.

CRM training from Caltech

We provide training on all aspects of Microsoft Dynamics 365 CRM Sales and Customer Service applications, working closely with clients to tailor sessions according to how they use (or would like to use) their system. Benefits of CRM training from Caltech include:

1. Helping you get the most out of your investment in Dynamics 365

2. Speeding up the learning process

3. Building confidence with Dynamics 365

Arranging your CRM training

The first step in arranging CRM training is a discussion with one of our CRM consultants. We will define your objectives so that we can develop the right approach for your organisation, covering the aspects and applications of Dynamics 365 which are most important to you. We will then prepare a tailored Dynamics 365 training agenda and cover all the areas in the right amount of depth for the people who are going to attend.

We also offer Dynamics 365 Familiarisation Training tailored to each client but typically covering topics such as the web interface, the Outlook interface, records and relationships, working with data and records, security considerations, sharing records and an introduction to reports, charts and dashboards.

For more information about our CRM training services, please get in touch.

Making your customers a priority should be the main concern of all businesses, since businesses succeed or fail based on how customers feel about them. With all of the current advances in technology, from creating a social media presence to online reviews of businesses, knowing where your customers come from and how best to reach them is of the utmost importance. Today most users turn to devices to shop, communicate with others, and even conduct business, whether a tablet or, most commonly, a smartphone.

Reaching your customers where they are opens a range of opportunities for businesses. Once you know where your customers go to buy from your business, or at least browse, it is time to start catering your business toward them. Read below for three ways to improve your relationship with mobile users.

Improvement #1: Mobile Responsive Site

Creating a mobile-friendly site is the first step many businesses take before designing their own app. A mobile site is a version of the website that adapts itself to the device on which it is being accessed. A mobile responsive layout often looks slightly different depending on the device being used. Having a website designed to be mobile friendly shows your customers that you know many users prefer their phones to laptops or desktop computers. Designing a site for mobile users also shows that you want to make the site easy to navigate.

Improvement #2: Mobile Apps

The next step up from a mobile responsive site is developing an app. An app is not only easier for customers to use than a mobile site; it is also a more up-to-date and on-trend option. Knowing that your customers took the time to download your app shows that they have a vested interest in purchasing from you and visiting the app again, creating more opportunities for a transaction. Much of the customer's interaction will happen through the app, so developing it with customers in mind will help give them a satisfying experience whenever they use it.

Improvement #3: Consistent Social Media Usage

Another forum that many customers use is social media, which is most often accessed through mobile devices and is therefore another opportunity to reach your customer base. Whatever the platform, statistics show that social media is among the most common things mobile users do on their phones. Integrating social media into your marketing plan places you right in the heart of where your customers are.

Additional Improvement: Mobile CRM

Another great improvement a business can make to build relationships with mobile users is mobile CRM software. Workwise OnContact CRM software allows you to access your software from a laptop, a tablet, and of course your smartphone. This gives employees access to the software wherever they are, as long as they have an internet connection and their device. Customer relationship management software lets businesses know their customer base better, with information about shopping history and ways to contact customers. Having this information on the go allows for immediate assistance and updates, improving the customer service experience.

Paying attention to the latest trends, many of which, like mobile usage, are now becoming staples, allows your business to make the changes it needs not only to stay afloat but to thrive in its industry. Consider how your relationship with your customers will improve when you put a mobile-minded thought process into play. Meeting your customers where they are shows that you are interested in them and want to make their experience with you a meaningful one.

In customer-centric organizations, Dynamics 365 plays a key role in planning a robust strategy. All the verticals of an organization use Dynamics 365 for different purposes and report to their team managers accordingly. The manager is responsible for keeping their teams informed, motivated and driven to maximize the output.

As members of a team may be in different locations, it is not feasible for managers to meet every team member individually or to transfer knowledge personally. Writing emails for every little detail is an option, but not an optimized solution. Since Dynamics 365 CRM allows team members different levels of access as per their roles, an optimal solution would let managers pass on instructions to their team right within the CRM.

This is where Alerts come into the picture. They are a great way to guide team members even when they are in distant locations, and to track whether all members have received the Notification. Let's delve in and find out why creating alerts should be a must-have in your business strategy:

Keeping your team informed: For Sales, Service, and Marketing teams, updates arrive constantly, and the manager needs to be proactive to keep the team informed. For example, a prospect may want to revisit an offer, requiring a Quote to be reactivated; a Lead that was in touch with the manager may need to be qualified; or a Service Request from a premium client may be raised that needs attention at the earliest. In each case, the manager can create Alerts (Notifications) for their team in the CRM and guide them in the appropriate direction, so the team doesn't need to switch to email for every nitty-gritty detail.

Quick Interaction: With the help of Alerts, managers can quickly notify their teams of what needs to be done, based on the target audience in Dynamics CRM. The target audience can be a single user, a group of users, or the entire organization.

Create Announcements: Managers and Administrators can create Alerts for all the users in an organization when information needs to be passed on at a global level. For instance, if there is a hurricane alert in the city, an announcement alerts all users to stay safe and take preventive measures.

Conditional Alerts: Alerts can be defined only for records that satisfy a certain condition, which helps the manager create messages only for target records and avoid redundancy.

Display Alerts in multiple ways: An Alert can be associated with multiple messages, as required for each audience. These Alert Notifications can be displayed in Dynamics 365 as well as sent to users by email, which also ensures a backup copy of the Alert messages is saved.

Define the duration of Alerts and Notifications: It is crucial that Alerts be scheduled at the right time so that they yield maximum impact. A Sales or Service Alert needs to be attended to immediately to keep reputation and ROI high.

Guide New Team Members: New team members can be sent notifications so that they don't miss any Sales or Service follow-up and learn the work process as quickly as possible.

What evident changes can a business see after using Alerts?

Less Confusion: Since Alerts in Dynamics 365 guide the team in a clear direction, members are able to perform better and deliver maximum output for their teams.

Task Management: With Alerts4Dynamics, tasks can be assigned to a particular user or set of users by defining the target audience, saving much of the time otherwise spent managing meetings.

Information Management: With the Alert log in the CRM, managers can always track what information they have passed on to their teams and what remains, and plan accordingly.

Priority-based tasks: Alerts can be assigned a level of Information, Warning, or Critical. This helps the team plan the precedence of tasks and attend to the most crucial ones first.

Evident Spike in ROI: In practice, Alerts keep users informed and give them more confidence in their performance. With quicker responses and better clarity about their jobs, they can contribute to increased ROI for their organization.

Thus you have seen why creating and managing Alerts can be a smart investment and should be a must in your strategy. Its short-term and long-term benefits are substantial, and it removes much of the haphazard approach to managing business communication.

Alerts4Dynamics is an application for Dynamics 365 CRM to create, schedule, and manage Alerts and Notifications. Managers and Administrators can create Announcements and Alerts for individual records, or for records satisfying a certain condition, and track which users read or dismiss these Alerts. Managers can notify their target audience about Alerts with pop-ups, form notifications, and emails.