Imagine a way shoppers could virtually try on clothing, accessories, eyewear, and even make-up to ensure what they are getting is right for them, saving the time, money, and effort wasted on returning products they don’t like or letting them sit idle.

With Augmented Reality (AR) incorporated into a chatbot, there is a new way to shop: switching between outfits and makeup products virtually is becoming a reality. The development of AR in products and apps is revolutionizing the way we shop by helping consumers ‘try on’ various outfits and products before ever stepping foot into the store.

Tallan has developed such bots to run several highly effective campaigns for large fashion industry clients like Revlon. Tallan’s chatbot technology solutions enabled these fashion industry leaders to quickly deploy and manage high-quality bots, streamlining the consumer purchasing process while increasing sales and decreasing costs for these clients. In theory, it will be possible to enter any shop in the world, browse, and try on clothing and products to see how they look. It will also give the notion of online shopping a completely new meaning: even shops that are exclusively online will be able to let customers ‘try on’ clothes and make informed decisions.

The forecast is for 900 million AR-enabled smartphones by the end of 2018, according to consulting firm Digi-Capital. Research from Digital Bridge shows that 69% of consumers now expect retailers to launch AR apps within the next six months. Further insights from Google show 34% of users say they would use AR while shopping. There have been more than 20 million downloads of L’Oreal’s Makeup Genius app, for instance, which uses AR to let users virtually try on beauty products on their phones. Other brands, including Sephora, Charlotte Tilbury, and Rimmel, have followed suit.

Augmented reality has revolutionized the way fashion marketing meets the public consumer. AR has massive potential to bridge the gap between online retail and in-store experiences, letting customers see what clothing actually looks like on a human body without having to be physically present in the shop. The article, Five ways fashion brands are using AI for personalization, gives real-life case studies on how fashion brands are already doing this.

Please feel free to take a look at the white paper on Tallan’s Augmented Reality Bot services found here or learn more about Tallan’s Chatbot capabilities here!

Design Systems: What Are They and How Can Products Benefit from Using Them

What is a design system?

A design system is a library of standard, extensible components that create a consistent visual language, paired with defined behaviors for each component. Components are individual elements that stem from the atomic design methodology. They can be used as building blocks to assemble a user interface across multiple applications, devices, screen sizes, and mediums.

Material Design is an example of how components are paired with design specifications, defined expected behaviors, and guiding principles on usage (see Figures 1–3). From there, a design system uses these standard components to build patterns such as inputs, buttons, navigation, error states, etc.

Figure 1

Figure 2

Figure 3

Why do design systems matter?

Design systems create a unified experience across platforms, devices, and enterprise suites of applications. They create a strong, extensible base through a modular approach using consistent components and defined behavior. The design system creates consistency and standards, which matter for both usability and brand standards.

Consistent design patterns become recognizable when used repeatedly. In contrast, without a design system in place, constantly introducing unique user interface elements increases cognitive load in parallel. When users encounter new ways to take the same action, they are forced to interpret UI elements rather than focus on the task they are trying to complete. By reusing the same components consistently, the UI becomes easier to understand and use, which also decreases user training time for applications.

Standard behaviors for these elements are also important for usability. If elements across pages look visually unified but have slight nuances in their behavior, the unexpected interactions can cause confusion. Defining not only the visual language but also documentation around component behavior, interactions, and guiding principles of usage is what sets a design system apart from a style guide or UI pattern library. See Figure 4 below for an example of how Material Design provides usage guidelines.

Figure 4

Additionally, design systems keep brand standards intact. Colors, typography, horizontal and vertical spacing and rhythm, and numerous other design components can be iterated on and mutated over time. When these various iterations of design exist together, the inconsistencies can make a website or application appear disheveled and cast the brand image in a negative light.

How do design systems help product development?

Design systems create an optimized environment for product development. They allow for quicker prototyping by using existing components, and faster development by building the components and their respective behaviors once and then reusing them. Combined, these factors foster faster product iterations while also reducing design and development debt.

Rapid prototyping while still maintaining consistency is possible when using a design system. Once a base system has been created, the components can feed a global symbol library in Sketch and be shared via a design system manager. As the design system evolves, updates can be pushed to the manager, and components in prototypes will be updated as long as the link to the global symbol is maintained. When a team of designers who may work on entirely different applications for the same brand work off the same global design system library, consistency across applications is ensured.

Design systems also reduce design and development debt. Instead of designing for a specific feature or an immediate use case only, a design system creates a language meant to be reused across applications. Once the system is built and coded, its elements can be reused to reduce technical overhead.
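To make this concrete, here is a minimal sketch of the coded layer of such a system: a single module of design tokens that every application imports. All names and values below are illustrative assumptions, not taken from any particular system.

// designTokens.js – one shared source of truth for brand values.
// Names and values are illustrative assumptions.
const tokens = {
  color: {
    primary: '#0057b8',
    error: '#c62828',
    textBody: '#212121'
  },
  font: {
    family: 'Roboto, sans-serif',
    sizeBase: '16px'
  },
  spacing: {
    sm: '8px',
    md: '16px',
    lg: '24px'
  }
};

module.exports = tokens;

Because every button, input, and error state pulls from this one module, a brand change becomes a one-line edit instead of a hunt across applications.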

The reusability principle of design systems also makes them easily scalable. Since a design system is a collection of reusable components, these components can be extended to any team or application, scaling design and development efforts through reuse. Scalability is not limited to visual design or development efforts, either: the standards and documentation around usage and behavior guide the intended user experience and remain intact when scaled.

Design systems are a powerful tool for helping businesses scale on a solid foundation, reducing future design and development costs while supporting a consistent, positive user experience across their products.

If you are interested in learning more about design systems or are in need of design expertise, check out our User Experience Page or Contact Us today!

Data binding between controller and directive in AngularJS can be a tricky subject for the uninitiated (and often, even for the initiated). AngularJS is great at providing the magic that makes data flow easily between components on the front end – except when it’s not. This post is an examination of one of the cases where not everything is straightforward.

The Problem:

Passing a callback function into a directive with isolate scope is simple – just a matter of creating the binding in the scope definition (callback: ‘&’). However, there is no built-in equivalent for exposing a directive function to the parent directive or controller. That is, if we have the (truncated) directive definition below, we’re going to have to do some work before calling “foo” from the parent directive or controller.
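The original directive snippet isn’t reproduced here, but a minimal sketch of the kind of definition being described might look like the following; all names are illustrative.

angular.module('app').directive('ourDirective', function () {
  return {
    restrict: 'E',
    scope: {
      callback: '&' // parent-to-directive callback: works out of the box
    },
    link: function (scope) {
      // The function we want the parent to be able to call. Nothing in
      // the isolate scope definition exposes it outward.
      scope.foo = function () {
        console.log('foo called inside the directive');
      };
    }
  };
});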

Option 1: Events

Typically, communication between controllers and directives that do not share scope is handled with events. A $rootScope.$broadcast() from the parent with a corresponding scope.$on() listener in our directive would certainly get the job done and is a viable option (especially when we need to carry out other, related operations in other directives at the same time). However, event firing and catching carries a lot of performance overhead and isn’t the best option when we need to make an isolated call to a function defined in a child directive.
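As a rough sketch of the event-based approach (the event name is illustrative):

// In the parent controller (with $rootScope injected):
$rootScope.$broadcast('ourDirective.foo');

// In the directive's link function:
scope.$on('ourDirective.foo', function () {
  scope.foo();
});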

Option 2: Directive control objects

The alternative to firing and listening for events is to bind a control object in your directive. The control object is defined as a typical 2-way binding and passed in from the parent as usual. The first step is to define the control object in the parent (a controller, in this example):
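The original code isn’t shown here, but a sketch of the complete pattern, with names matching the call below, could look like this:

angular.module('app')
  .controller('ParentController', function () {
    var vm = this;
    // Start with an empty object; the directive fills in its functions.
    vm.directiveControls = { ourDirective: {} };
  })
  .directive('ourDirective', function () {
    return {
      restrict: 'E',
      scope: { control: '=' }, // the typical 2-way binding
      link: function (scope) {
        // Attach the directive's function to the shared control object.
        scope.control.foo = function () {
          console.log('foo called from the parent');
        };
      }
    };
  });

// In the template:
// <our-directive control="vm.directiveControls.ourDirective"></our-directive>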

//this now successfully calls foo() in the directive
vm.directiveControls.ourDirective.foo();

AngularJS puts a huge amount at our disposal as developers, and it can sometimes be daunting to pick the right tool for the job. I hope that this post has added to your toolbox and helped organize it (and your code) a bit better as well.

SharePoint Hybrid Environments offer flexibility for businesses that are not ready, able or willing to move all of their existing content to SharePoint Online. Whether it is because of current customizations, third-party solutions and integrated legacy applications that aren’t available or supported, or it is due to regulatory and compliance restrictions, hybrid environments may be the answer for an organization weighing the benefits of moving to the Cloud.

Hybrid Environments let businesses migrate the content that they want to move and can move to SharePoint Online while keeping the rest on-premises, tying the two together. For the end user, the experience is unified and seamless. Hybrid Environments can be useful in many scenarios, including Gradual Migration, Regulatory Restrictions, Unified Search, Business Connectivity Services (BCS), and Business-to-Business (B2B) Extranets.

Gradual Migration

For organizations that have highly customized SharePoint environments, an all-in approach to migrating to SharePoint Online may not be feasible. Businesses often utilize custom solutions, third-party products, and legacy applications in their on-premises environments that are unsupported or unavailable in SharePoint Online tenants. For these instances, implementing a hybrid approach can help soften the impact of the migration and drive better adoption among the user base.

An incremental migration using the Hybrid model allows businesses to keep their customized solutions in their on-premises environments while moving the rest to SharePoint Online. Over time, developers and IT can upgrade, recreate or replace these unsupported solutions and gradually phase in the new ones. This process helps mitigate the up-front costs and risks while ensuring a smooth transition for the user base.

Regulatory Restrictions

In some organizations, a complete move to SharePoint Online is simply not possible, often because regulatory or compliance restrictions prevent the business from storing some content in a multi-tenant environment.

Fortunately, with the use of a Hybrid Environment, businesses can comply with Regulation Standards by storing sensitive data in their SharePoint on-premises farms and saving the rest to their SharePoint Online tenant. By configuring Hybrid Search, users can seamlessly search against both SharePoint On-Premises and SharePoint Online content.

Cloud Hybrid Search

Cloud Hybrid Search is the newest (at the time of this writing) of the hybrid search models and is the model Microsoft recommends. In this model, SharePoint Online maintains the search index for both the On-Premises and Online environments. This approach gives the user a single, unified search experience and allows searching against both sources as if they were one. Search results are perceived to come from one source because content from both SharePoint Online and On-Premises is returned inline.

This model does not require a reverse proxy because users will always be searching against the SharePoint Online index. However, to enable the Document Preview functionality for documents located in the SharePoint On-Premises environment, a reverse proxy is required.

Hybrid Federated Search

Hybrid Federated Search is the older hybrid search model, which uses two separate indexes. SharePoint Online maintains an index of its content, while SharePoint On-Premises retains its own. When users search, they must select the source to search against so that the query is sent to the proper environment, i.e., online vs. on-premises.

Hybrid Federated Search requires a reverse proxy to handle unsolicited search queries to the SharePoint On-Premises Environment. In this scenario, an unsolicited search query is a request that was not initiated by the SharePoint On-Premises environment, e.g., when a user searches against on-premises content from SharePoint Online.

Business Connectivity Services (BCS) for SharePoint Online

Business Connectivity Services (BCS) helps businesses create an integration point between SharePoint and external data sources, such as Azure SQL Database and WCF web services. BCS enables users to interact with external data from within SharePoint by leveraging the familiar SharePoint List interface. A benefit of this approach is that developers need only build the CRUD operations, while SharePoint handles the user interaction and look and feel. To the end user, the data appears as if it is a list created within SharePoint, but behind the scenes, SharePoint sends the operation requests to and from the external data source.

Organizations can benefit from using BCS for various reasons, such as:

Decorating User Profiles with additional information from systems other than Active Directory

Ensuring that the external content is searchable from within SharePoint

The main benefit of BCS is that no duplication or syncing of data is required between systems; the data is merely exposed and surfaced through SharePoint.

BCS for SharePoint Online requires a reverse proxy to relay requests from a SharePoint Online tenant to the SharePoint On-Premises environment.

Business-to-Business (B2B) Extranet Portal

Organizations that want to set up extranets for their partners, but find the additional infrastructure, configuration, and maintenance cost-prohibitive, may discover that SharePoint Hybrid Environments are the perfect fit. Extranets require much care and consideration; from security and auditing to infrastructure, support, and maintenance, the effort and cost needed to set up and keep an extranet running can be high.

Enter SharePoint Hybrid. The IT group can host the Intranet internally while setting up SharePoint Online sites for the organization’s external partners. With a couple of clicks, IT can configure SharePoint Online to allow sharing with external user accounts. In a matter of minutes, business users can set up and configure partner sites to specification (see SharePoint Site Design and Site Scripts).

With SharePoint Hybrid Search configured, the business’ internal users can quickly search across both the Intranet and Extranet, further driving user productivity.

Summary

Whether you want to move to SharePoint Online gradually, have regulatory restrictions, want to implement custom processing rules, need to integrate external data, or want to collaborate with external partners, SharePoint Hybrid Environments can be a great, cost-effective solution. See how Tallan can help you get there by contacting us today.

Introduction

Recently I was given the opportunity to spend some time playing with Facebook’s awesome AR (Augmented Reality) Studio. I worked through Facebook’s quick start tutorial and created a mustache effect in no time. I was immediately hooked; it was all so easy to use. AR Studio does the heavy lifting for the face tracking, so all you really need is a texture to make your first filter. After the first and second basic filter, though, I wanted to make something more dynamic: I wanted to change the filter on tap, and suddenly my progress screeched to a halt. All of a sudden the documentation and instructions lacked what I needed. I looked through the tutorials and found out how to add scripts, but not how to change materials or bind events to specific objects. After hours of experimentation I was able to solve it, so here is my guide to creating a filter effect that updates a material with touch events.

Facebook Quick Start Guide Recap

If you haven’t completed this, or don’t understand the tasks below (minus the mustache texture), then you should check out Facebook’s quick start guide first:

Create new project

Insert new face mesh

Add the mustache texture

Add a new material

Set the material shader type to face paint with the mustache texture

Set the face mesh to use the new material

Basic effect complete!

Alright, you should have something that looks like this:

Let’s Ramp it up

Since the goal here is to use tap events to change materials, we’re going to need some more materials with textures. I created a rough Pennywise (Stephen King’s It) texture and a lipstick texture to use for the demo. Running with that, we need to:

Create a new project or add to the basic tutorial project

Add two new textures, in my case pennywise and lipstick

Add two new materials, both with face paint shaders, each using one of the new textures

Test the new materials by changing the material on the face mesh, making sure each individual effect looks good

Alright, so now my effects are as follows (plus the original mustache effect):

Okay, hopefully you’ve made it this far, because this is where I started to run into issues.

Adding buttons to trigger the various effects

Let’s add some buttons so the user can select an effect from the choices, and maybe also turn the effects off.

Also worth noting: the goal here is not necessarily a refined look; the goal is just a functional demo.

Add four rectangles to the scene; by default they will all be stacked in the center of the display. I aligned each one to a corner for ease.

Create four new materials, one for each of the new “buttons” (the rectangles)

Add four textures to the project to use as a skin for the buttons, anything will do. I used some screen clippings of icons.

Below is my version with the face mesh turned off and the buttons skinned

Let’s write that script so they can start doing some work!

Adding the script to put it all together

Our script will need to be able to target each of our buttons, the face mesh, and our materials.

Step 1.

In order to use touch gestures in your effect, they need to be enabled, which is done through the Project menu, then Properties, as seen below.

Step 2.

Below is a screen shot of my incomplete script. I wanted to go over a few key details about the script to make sure you could follow along.

The files in the project all follow a hierarchy from the Scene root, but in reality all of our items are located under Focal Distance, so I declare that directory as my “base”. (Also be mindful of filenames, as lookups are case-sensitive; files will not be found unless the names match exactly.) The added lines show the files on the left being referenced by the script.

I am only printing to the console on press. My goal was for this to be a simple testing step and also a way to show the diagnostic tool in action and how simple it is to print values to the console.

This is the preliminary script for testing the touch events and seeing the diagnostics in action. Note that if you copy this, you must make sure your references all have the correct names, or else you will see errors in the console at runtime.

Step 3.

The final step: connecting the events

Looking at the final version of my script below, you can see that it doesn’t take a whole lot more code to make the dynamic changes. All we needed to do was pull in the library for the Materials, set the material on the face mesh, and lastly toggle overall effect visibility on tap.
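The script itself appears in the post only as screenshots, so here is a rough, hedged reconstruction of what such a script can look like with the AR Studio scripting modules of that era. Every object, material, and function name below is an assumption; as noted earlier, the names must match your project exactly.

const Scene = require('Scene');
const Materials = require('Materials');
const TouchGestures = require('TouchGestures');
const Diagnostics = require('Diagnostics');

// Our items live under Focal Distance; lookups are case-sensitive.
const faceMesh = Scene.root.find('faceMesh0');
const mustacheButton = Scene.root.find('mustacheButton');
const pennywiseButton = Scene.root.find('pennywiseButton');
const lipstickButton = Scene.root.find('lipstickButton');
const offButton = Scene.root.find('offButton');

function applyMaterial(materialName) {
  Diagnostics.log('applying ' + materialName); // the earlier console test
  faceMesh.material = Materials.get(materialName);
  faceMesh.hidden = false;
}

TouchGestures.onTap(mustacheButton).subscribe(function () {
  applyMaterial('mustacheMaterial');
});
TouchGestures.onTap(pennywiseButton).subscribe(function () {
  applyMaterial('pennywiseMaterial');
});
TouchGestures.onTap(lipstickButton).subscribe(function () {
  applyMaterial('lipstickMaterial');
});
TouchGestures.onTap(offButton).subscribe(function () {
  faceMesh.hidden = true; // toggle the whole effect off
});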

Overview

X12 Studio provides validation capabilities that can be used to validate claims, enrollments, and other HIPAA X12 EDI file types. X12 Studio can test those files by reading them, validating them, and producing an X12 999 Acknowledgement (ACK). The 5010 999 ACK replaced the previous version, the 4010 997 ACK. The 999 ACK informs the submitter whether the submitted EDI validates against the receiver’s implementation guide; the results describe the quality of the functional group’s syntax. This validation is sometimes referred to as WEDI SNIP levels 1 and 2.

Steps to Generate the 999 ACK

Launch the application and open an existing 837I, 837P, 834, or any of the HIPAA X12 EDI files:

Click on the Generate ACK 999 icon in the top menu:

View the Output tab at the bottom of the application. This will show the file system location of the generated 999 ACK. Simply click on the file location in the Output window and X12 Studio will open the 999 ACK response.

It’s also worth noting that this output file location can be configured within the ‘Configure’ menu:

Here is the generated 999 ACK output for the X12 837I (Institutional) EDI file. In this scenario, the 837I was accepted as denoted in the AK9 ‘A’ (for accepted):
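The screenshot aside, the body of an accepting 999 is short; a simplified, illustrative fragment (control numbers invented) looks along these lines:

AK1*HC*1~
AK2*837*000000001~
IK5*A~
AK9*A*1*1*1~

IK5 accepts the individual transaction set and AK9 accepts the functional group; in the rejection test below, both would carry ‘R’ instead.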

Now, let’s use X12 Studio to edit the file so that we can test for an invalid 837I. Change the NM108 subscriber name qualifier value from ‘MI’ to an invalid enumeration of ‘XX’.

X12 Studio will instantly validate this change and display a validation error in the bottom of the screen within the Validation tab:

Generate the 999 ACK again and verify that the 999 produced a rejection in the AK9, denoted by ‘R’ (for rejected).

Tallan’s T-Connect X12 Studio Toolbox provides a set of features helpful for streamlining HIPAA processes, including the 999 ACK Generator. The X12 Studio Toolbox packages some of the features found in our enterprise solution, the T-Connect EDI Management Suite, in an easy-to-use interface. We also support higher SNIP level edits; feel free to contact us directly via the Contact Us Form.

“Hey Google, how can my business be at the top of all of my clients’ search results?” Between Siri, Google, Alexa, and Cortana, searching for anything and everything has never been easier. As technology becomes more and more hands-off, businesses need to be more hands-on in ensuring they can keep up with the times.

By the end of 2019, at least half of all searches are forecast to be voice searches – this includes not only personal queries such as “What’s the weather” but also more business-related questions like “Where is the closest place for me to replace my tire today”. According to market research, 76% of people who search for something nearby on their smartphones visit a related business that day, and 28% of those searches result in a purchase. Smart speakers add to the voice search picture: nearly one in five U.S. adults today has access to a smart speaker, according to recent research from Voicebot.ai. That means adoption of these voice-powered devices has grown to 47.3 million U.S. adults in two years – or 20 percent of the U.S. adult population. The data confirms that smart speakers are being incorporated into consumers’ everyday lives: 63% report using them daily and 77% at least weekly.

A recent survey revealed that 62% of marketers have no specific plans for optimizing their business’s website for voice search – yet nearly 20% of all searches today are performed by voice. Another survey, by ClickZ and Marin Software, reveals that only 7% of marketers prioritized 2017 investments in “AI” (voice search, digital assistants, and chatbots). Perhaps more surprising, only 4% of marketers named “smart hubs” such as Amazon Echo and Google Home a top priority for 2017.

The key to optimizing your website to hit the top of your potential clients’ search results is to have your site think the way people speak. Instead of typing in what you think the most important keywords are, users are now starting a conversation with their search engine (possibly with some follow-up questions as well). These are known as semantic searches: longer queries, conversational in nature, that a search engine must parse to understand what the user wants. Semantic searches handled by natural language processing engines give the search engine a sense of “personality” that the consumer can relate to.

Goals need to be defined in a way that relates to users’ problems. Voice searches are significantly more detail-focused and are typically questions, whereas text searches are blunter statements. You need to fine-tune your website to answer the pain points your consumers are facing and achieve some form of language-market fit. Make sure your website can address possible conversations between you and the consumer – prepare for follow-up questions where you can, and try to bridge any gaps between content as best as possible. As a business, you need to come up with quick, concise answers to detailed questions.

Although major search engines can parse images for more information, it is always a safer bet to place critical details in the actual HTML of your page. Things like addresses and phone numbers should always be explicitly in the schema markup of the page and should not be inferred by the image parser.
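For example, here is a hedged sketch of schema.org JSON-LD markup for a local business, with placeholder values, that makes the address and phone number explicitly machine-readable:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Tire Shop",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Hartford",
    "addressRegion": "CT",
    "postalCode": "06103"
  }
}
</script>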

Put simply, a schema is a collection of metadata – data that is not seen by the end user but is a critical component of how a website characterizes itself. Think of it as information that tells a search engine what a site actually contains and how it should be treated. A well-defined schema will then allow you to display your “rich snippet”, the catch phrase that pulls users to your site. See the example below:

Google Voice Search Results

Smart speakers and personal assistants are expanding well past the current scope of consumers’ personal lives and are reaching more into the business and enterprise worlds. Having your business embrace voice searching early will pave the way for more elegant use cases for voice functionality later on. If you can make your consumers comfortable with your business from a basic search experience, imagine what you can do once they are actually using your business.

Tallan is no stranger to voice search and knows from experience that queries go well beyond the standard, day-to-day questions any user would ask their device. We have implemented voice search functionality across a wide range of enterprise-level businesses – from the mortgage world to the hospitality business; you would be surprised at just how many searches are done by voice right now. Contact us today to find out how we can help you and your business shine in the search results.

When the Health Insurance Exchange (HIX) network went online in late 2013, the industry was challenged with reconciling new subscribers to their related premium payments and subsidies. Health plans painstakingly assembled Excel eligibility extracts to invoice their Federally Facilitated Marketplace (FFM) or State-based Exchange (SBE) for tax credit reimbursement. However, by the start of 2016, the FFM became the system of record for eligibility, and issuers were required to accept a new variant of EDI representing premium payments.

Traditionally, the 820-Payment Order / Remittance Advice transaction (005010X218) has been transmitted from payroll agencies and government healthcare organizations to insurers in order to provide summary or subscriber-level information regarding premium payments. With the advent of ACA exchanges, enough variations surfaced in this traditional handshake that a new version of the 820 was required. The HIX 820 (005010X306) removes structures unnecessary for exchange reporting, and adds tracking segments for the unique aspects of these plans, such as Advance Premium Tax Credits (APTC) and Cost Sharing Reductions (CSR).

As a starting point, it’s easier to see what has been removed in the HIX 820. A traditional 820 transaction contains a header section identifying the receiver, sender and any intermediaries. The detail section contains two parallel loops. The 2000A loop (line 9 below) is used to provide summary remittance advice, while the 2000B loop (line 16) can be used to report on individual payments.

A standard 820 transaction

The HIX 820 removes the 1000C Intermediary N1 set of loops, and collapses the 2000A and B detail loops into a single structure, making the 005010X306 variant quite streamlined, as far as HIPAA EDI goes.

Some segments have been added to handle HIX business cases, however. The first category of changes relates to REF segments added to track Qualified Health Plan (QHP) Identifiers, Group and Policy IDs. These REF segments have proliferated within the HIX 820 since the transaction set designers needed to allow for both the health exchanges and the plan issuers to potentially create their own identifiers for members and policies. Additionally, these segments may exist on a header or detail level, leaving us to account for the following qualified REF segments:

REF*38 – Exchange Assigned QHP ID

REF*TV – Issuer Assigned QHP ID

REF*18 – Exchange Assigned Group ID

REF*1L – Issuer Assigned Group ID

REF*POL – Exchange Assigned Policy ID

REF*AZ – Issuer Assigned Policy ID

The other REF segments added to the HIX 820 track subscribers, dependents, and CMS reimbursements. Under the ACA, there are two ways in which HIX plan costs may be reduced. Both cost reductions depend on where an individual’s or family’s adjusted gross income (AGI) falls in relation to the Federal Poverty Level (FPL). The low end of both ranges starts at 100% of the FPL, or 138% for states that expanded Medicaid coverage.

Advance Premium Tax Credits – APTCs reduce the cost of premiums for plans of any metal tier. These tax credits scale based on AGI from 100/138 – 400% of the FPL. These credits may be immediately applied to health coverage or received as refunds at the end of the tax year.

Cost Sharing Reductions – CSRs only apply to Silver plans and individuals or families at 100/138 – 250% of the FPL. These subsidies reduce the cost of copays and other out-of-pocket expenses. The CSR funding mechanism was successfully challenged in court by House Republicans, although the verdict was put on hold following an appeal by the Obama administration. In October of 2017, the Trump administration dropped the appeal, resulting in CMS discontinuing these reimbursements to health plans. Insurers of Silver plans are still required to provide CSRs to eligible subscribers, however. Many health plans anticipated the administration’s direction and raised premiums on Silver plans by up to 20% as a result. Ironically, since Premium Tax Credits are based on the cost of Silver plans, federal reimbursement has increased in many states in 2018 for APTCs.

These REF segments can be used to identify taxpayers and similarly allow for the assignment of values by the exchange or health plan:

REF*4A – Exchange Assigned APTC Contributor

REF*23 – Issuer Assigned APTC Contributor

SNIP 3 Balancing – Balancing the HIX 820 is considerably simpler than balancing the standard HIPAA 820. The HIX 820 does not contain ADX segments indicating payment adjustments, so balancing is limited to summing RMR04 remittance amounts for comparison against the header-level total amount paid (BPR02). This can be expressed as:

1000-BPR02 = SUM(of each 2300-RMR04)
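As a quick sketch of that check in code, assuming the two sides have already been parsed out of the file:

// Returns true when the header total (BPR02) equals the sum of the
// detail remittance amounts (each 2300-RMR04).
function isBalanced(bpr02, rmr04Amounts) {
  var detailTotal = rmr04Amounts.reduce(function (sum, amount) {
    return sum + amount;
  }, 0);
  // A small tolerance sidesteps floating-point noise on cent values.
  return Math.abs(detailTotal - bpr02) < 0.005;
}

isBalanced(1500.00, [750.00, 750.00]); // true – the file balances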

T-Connect Implementation

The T-Connect EDI Management Suite provides a platform that health plans and exchanges can use to manage the translation and reconciliation of HIX 820 transactions with 834 enrollments. We’re always happy to schedule a call to walk through EDI processing requirements. Feel free to reach out to us for a free demo or consultation.

Exporting X12 837 claim files into standardized CMS1500 or UB-04 forms is simple with T-Connect X12 Studio Toolbox’s PDF Claim Form Generator. CMS1500 is the standardized form for X12 837P (Professional) EDI files, while the CMS1450, aka UB-04, provides the form for the 837I (Institutional). Both forms are published by the Centers for Medicare & Medicaid Services (CMS). The PDF claim files can be used to view, archive, or manage EDI claims in a human-readable form. Our Claim Form Generator is a very useful tool for EDI analysts, overlaying 837 EDI data onto industry-standardized forms.

Steps to Generate the PDF CMS1500 or UB-04 Forms

Launch the application and open an existing 837I or 837P X12 EDI file:

T-Connect X12 Toolbox Application

Click on the Generate PDF icon in the top menu:

Generate PDF icon

View the Output tab at the bottom of the application. This shows where the PDF CMS1500 / UB-04 form is saved. Simply click on the file location in the Output window and X12 Studio will open the PDF claim file.

Output Tab

It’s also worth noting that this output file location can be configured within the ‘Configure’ menu:

‘Configure’ Menu

Output Directory

Here is the output for the X12 837I (Institutional) EDI file, generated as a PDF CMS-1450, aka the UB-04:

Tallan’s T-Connect X12 Studio Toolbox provides a set of features helpful for streamlining HIPAA processes, including the PDF Claim Form Generator. Tallan can also add customized formats to fit your healthcare claim processing needs. Additionally, we can help automate mass generation of claim forms and provide integration solutions for your specific processing scenarios. Feel free to contact us directly via the Contact Us Form.

Let’s say the Finance Department of a clothing retailer has some great reports that let them see all the sales across the United States; so great, in fact, that they want to share them with all Regional Managers so they can communicate about the hot spots in their regions. The problem is that the Regional Managers aren’t permitted to see data outside their region, and giving them access to these reports would allow them to filter to any region they wanted. We could create separate datasets and reports filtered to each manager’s region, but that would be time-consuming and a nightmare to maintain. Luckily, Power BI provides the ability to implement Row-Level Security (RLS).

So, what is RLS? Simply put, it controls a user’s access to each individual row of the dataset. In Power BI this is accomplished by filtering the data in DAX based on the user. In the example below, we can see Billy requesting a list of all the states; Power BI then matches Billy to his roles and filters the states down to the ones in his roles.

Billy opens a Report with States on it

In the report, we might have a page that shows sales data by location, and we want our Regional Manager to only see the sales data in his region.

Roles in Power BI

In Power BI, RLS is defined in Power BI Desktop through roles; roles define DAX filters for whichever tables they are intended to secure. This means that we need to be careful with the structure of our Data Model to ensure that we are filtering on the highest-level entity possible. In the example below, we see a Dataset similar to the scenario above and a report that it drives.

Access to Geography is restricted by selected Regions, and these Geographies can, in turn, restrict the Sales

Report with no Roles applied

Creating Roles

Now that we have the appropriate structure set up for the data to be filterable by region, we need to set up our roles. In Power BI Desktop this is done by going to Modeling > Manage Roles; we can then create new roles in this view on the left and define their table filters on the right.

Here we can add the two Roles from the example above
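Under the hood, each role’s table filter is simply a DAX expression that must evaluate to true for the rows a member may see. Assuming the Geography table carries a Region column (an illustrative name), the two roles might be defined as:

Mountain role, filter on Geography: [Region] = "Mountain"

New England role, filter on Geography: [Region] = "New England"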

Now that we have a couple of Roles set up, we can use Power BI to test them out using the View as Roles button next to the Manage Roles button.

Here we can see the two Roles we added earlier and select those we would like to test

Report as someone with the Mountain Role would see it (Note the bar at the top denoting the Roles that we are viewing the Report as)

Report as someone with the Mountain and New England Role would see it

Dynamic RLS

In addition to locking down access to tables with hard-coded filters, we can also leverage security that is built into the data. Let’s say that we are sourcing this data from an application that handles security in its database by granting users access to specific states. In this case, we can bring in this security information and use these relationships to filter data based on who is viewing the report. Below, we can see these security tables brought into Power BI; the User table (greyed out because we are hiding it in the final report) filters the State User table to a combination of the logged-in user and the states they are permitted to see. This table further filters the State table, which then filters the Geography table.

Note the relationship between State User and State; many users will likely be able to see the same state, so this is a one-to-many relationship, and in this instance the State Users are the many. Normally in Power BI it is the one side of a relationship that filters down the many side (the same way that State filters Geography), but for our purposes we want State User to be able to filter the states. To achieve this, we must make the relationship filter in both directions and apply the security filter in both directions.

Here we see our Guest User has access to CT, NY, and MA

Now that the Data Model is set up, we need to create a role that can filter by the logged-in user’s name. To do this, we can create a role that filters the User table using the DAX function USERPRINCIPALNAME(). Not to be confused with USERNAME(), which may return Domain\Login (depending on your AD), USERPRINCIPALNAME() returns the user’s email address.
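So the role’s filter on the (hidden) User table can be a single expression; assuming the table stores the login email in an Email column (an illustrative name):

[Email] = USERPRINCIPALNAME()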

From then on, anyone with this role applied will have the User table filtered by their email address and, subsequently, the rest of the tables down the line. To test this, we have to define who we are logging in as, to see the report as they would.

We set our UPN in the Other user field (as you can see this is case-insensitive)

Now we only see the three States our Guest has access to

Assigning Roles

Now that we have all of our roles created, we need to assign them to the appropriate users. We publish the PBIX to a Workspace in the Power BI Service, then go to Datasets > [Our Dataset with RLS] > … > Security. Here we can assign users to roles; select the role you want to add to and start typing, and Power BI will filter by email or name within your organization. Note that a non-admin viewing a report that uses RLS, but who has no role assigned, will not be able to load any visuals.

Here we have given the PowerBI Guest user the Mountain Role

When PowerBI Guest views the App all that is shown is the Mountain Region data

These Security Roles will be applied to anyone viewing the report in a published Power BI App, and changes to the security will not require the app to be published again. These roles will also affect anyone who is a member of the Workspace, provided that the Workspace is set up to allow members to only view content and not edit it.

Stay tuned for Row-Level Security in Power BI, Part 2: RLS in Embedded Reports, in which we will go over handling RLS for reports embedded in your internal and customer-facing applications.