Excellent presentations from the Hortonworks team on “NiFi on HDF” solution architecture and best practices. It’s a powerful solution for processing and distributing any data, in real time and in large quantities, with resiliency. It’s no wonder the US NSA originally developed this capability to consume data in real time, manipulate it, and then send it on its way. However, recognizing the commercial applications (benevolent wisdom?), the NSA released the product as open-source software via its technology transfer program.

As a tangent, among other things, I’m currently exploring the capabilities of “Microsoft Flow”, which has recently been promoted to GA from its “Preview” release. One resonating question came to mind during the presentations last night:

The NiFi / HDF solution manages data flows in real-time. The Microsoft Flow architecture seems to fall short in this capacity. Is it on the product road map for Flow? Is it a capability Microsoft wants to have?

There is a fair bit of architecture/infrastructure on the Hortonworks HDF side that enables the solution as a whole to ingest, process, and push data in real time. I’m not sure Microsoft Flow is currently engineered on the back end to handle that throughput.

The current Microsoft Flow UI may need to be updated to handle this ‘slightly altered’ paradigm of real-time content consumption and distribution.

The comparison between Microsoft Flow and NiFi on HDF may be a huge stretch.

What is Cloud Serverless Computing?

Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour. Despite the name, it does not actually involve running code without servers. Serverless computing is so named because the business or person that owns the system does not have to purchase, rent, or provision servers or virtual machines for the back-end code to run.

Based on your application use case(s), a cloud serverless computing architecture may reduce ongoing costs for application usage and provide scalability on demand, without the overhead (cost and effort) of managing cloud server instances.

Note: “Cloud Serverless Computing” is used interchangeably with “Functions as a Service” (FaaS), which makes sense from a developer’s standpoint: they are coding functions (or methods), and that’s the level of abstraction.
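To make the FaaS abstraction concrete, here is a minimal sketch of a provider-invoked function in the style of an AWS Lambda handler. The event shape and names are illustrative assumptions, not any one provider’s contract:

```python
# A minimal FaaS-style handler sketch (AWS Lambda-style conventions).
# The provider starts/stops the compute and invokes this function per
# request; billing is per invocation/compute time, not per VM-hour.
import json

def handler(event, context=None):
    """Respond to a single request; no server provisioning involved."""
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The function holds no server state; the platform can scale it from zero to many instances on demand, which is where the cost savings come from.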

Create automated workflows between apps and services to get notifications, synchronize files, collect data, and more. Although not the traditional serverless computing implementation, it’s the quickest way to deliver application services without having to procure application servers. Depending on your microservice (connector + template) definitions, you may not need to write a single line of code; everything could be done through the Flow console.

Microsoft Flow: Automate Business Process

Connectors are “enablers” to connect to [data] sources in order to extract or insert data, typically one Connector per service, such as Twitter.

Templates utilize Connectors and enable workflow designers to build business process workflows. The resulting workflows execute their activities either when an event trigger fires, or ad hoc via manual execution through the portal or the Microsoft Flow mobile apps.

Automate business processes by turning repetitive tasks into multi-step workflows.
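As a rough mental model (my illustration, not Microsoft’s actual implementation), the connector/template relationship above can be sketched like this, with hypothetical names:

```python
# Sketch of the Flow concepts: one Connector per service (e.g. Twitter),
# and a Template that wires a trigger event to a sequence of actions.

class Connector:
    """One connector per service, exposing that service's actions."""
    def __init__(self, service):
        self.service = service

    def action(self, name, payload):
        # A real connector would call the service's API; here we
        # just describe the call that would be made.
        return f"{self.service}:{name}({payload})"

def run_template(trigger_event, steps):
    """A template: on a trigger event, run each (connector, action) step."""
    return [conn.action(act, trigger_event) for conn, act in steps]

twitter = Connector("Twitter")
mail = Connector("Mail")
results = run_template("new tweet about #HDF",
                       [(mail, "send_email"), (twitter, "retweet")])
```

The point of the abstraction is that the workflow designer only arranges connectors and steps; the per-service API plumbing lives inside each connector.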

Microsoft Flow Templates

Microsoft Flow Pricing

As listed below, there are three tiers, including a free tier for personal use or for exploring the platform for your business. The paid Flow plans seem ridiculously inexpensive given what business workflow designers receive for 5 USD or 15 USD per month. Microsoft Flow has abstracted workflow building so that almost anyone can build application workflows, or automate manual business workflows, leveraging almost any of the popular applications on the market.

It doesn’t seem like 3rd-party [data] Connector and Template creators receive any direct monetary value from the Microsoft Flow platform, although workflow designers and business owners may be swayed to purchase 3rd-party product licenses in order to use their core technology.

Properly designed microservices have a single responsibility and can scale independently. With traditional applications being broken up into hundreds of microservices, traditional platform technologies can lead to a significant increase in management and infrastructure costs. Google Cloud Platform’s serverless products mitigate these challenges and help you create cost-effective microservices.

AWS provides a set of fully managed services that you can use to build and run serverless applications. You use these services to build serverless applications that don’t require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more. You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you, allowing you to focus on product innovation and get faster time-to-market. It’s important to note that Amazon was the first contender in this space with a 2014 product launch.

Execute code on demand in a highly scalable serverless environment. Create and run event-driven apps that scale on demand.

Focus on essential event-driven logic, not on maintaining servers

Integrate with a catalog of services

Pay for actual usage rather than projected peaks

The OpenWhisk serverless architecture accelerates development as a set of small, distinct, and independent actions. By abstracting away infrastructure, OpenWhisk frees members of small teams to rapidly work on different pieces of code simultaneously, keeping the overall focus on creating user experiences customers want.
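As an illustration of how small and distinct these actions are: an OpenWhisk Python action is, by convention, just a `main` function that accepts a dict of parameters and returns a dict. This toy action follows that convention:

```python
# Sketch of an Apache OpenWhisk Python action: one small, independent
# unit of logic. OpenWhisk invokes main() with a dict of parameters
# and expects a dict back; the platform handles all the infrastructure.

def main(args):
    name = args.get("name", "stranger")
    return {"greeting": f"Hello, {name}!"}
```

Because each action is this self-contained, different team members can own and deploy different actions in parallel, which is the development-speed argument in the paragraph above.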

What’s Next?

Adopting serverless computing is a decision that should be made based on the usage profile of your application. For the right use case, serverless computing is an excellent choice that is ready for prime time and can provide significant cost savings.

Protecting the Data Warehouse with Artificial Intelligence

Teleran is a middleware company whose software monitors and governs OLAP activity between the data warehouse and business intelligence tools like Business Objects and Cognos. Teleran’s suite of tools encompasses a comprehensive analytical and monitoring solution called iSight. In addition, Teleran has a product, iGuard, that leverages artificial intelligence and machine learning to impose real-time query and data access controls. The architecture also allows Teleran’s agent to run on a host separate from the database, for additional security and to avoid consuming resources on the database host.

Data Access Policies

Connection Policies

Blocks connections to the database

White list or black list by:

DB User Logins

OS User Logins

Applications (BI, Query Apps)

IP addresses

Rule Templates Contain Customizable Messages

Each of the “Policy Templates” has the ability to send the user querying the database a customized message based on the defined policy. The message back to the user from Teleran should be seamless to the application user’s experience.

iGuard Rules Messaging

Machine Learning: Curbing Inappropriate, or Long Running Queries

iGuard has the ability to analyze all of the historical SQL passed through to the Data Warehouse, and suggest new, customized policies to cancel queries with certain SQL characteristics. The Teleran administrator sets parameters such as rows or bytes returned, and then runs the induction process. New rules will be suggested which exceed these defined parameters. The induction engine is “smart” enough to look at the repository of queries holistically and not make determinations based on a single query.
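A highly simplified sketch of the induction idea (my illustration, not Teleran’s actual algorithm): scan the query history for results exceeding the administrator’s parameters, and only suggest a policy when a pattern recurs rather than for a one-off query:

```python
# Toy induction sketch: suggest cancel policies from historical query
# stats. Thresholds are the administrator-defined parameters; a policy
# is suggested only when the offending pattern appears more than once.

def suggest_policies(history, max_rows, max_bytes):
    """history: list of dicts with 'sql', 'rows', 'bytes' keys."""
    offenders = [q for q in history
                 if q["rows"] > max_rows or q["bytes"] > max_bytes]
    # Look at the repository holistically: count recurring patterns
    # (here, crudely keyed by the SQL verb) instead of single queries.
    seen = {}
    for q in offenders:
        key = q["sql"].split()[0].upper()
        seen[key] = seen.get(key, 0) + 1
    return [f"Suggest cancel policy for {k} queries exceeding "
            f"{max_rows} rows / {max_bytes} bytes"
            for k, count in seen.items() if count > 1]
```

The real engine presumably learns far richer SQL characteristics, but the shape is the same: thresholds in, recurring-pattern rules out, with a human administrator approving the suggestions.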

A relatively new medium of support for businesses, from small shops to global conglomerates, is becoming available based on the exciting yet embryonic chatbot / Digital Agent services. Amazon and Microsoft, among others, are diving into this transforming space. The coat of paint is still wet on Amazon Lex and Microsoft Cortana Skills. The MSFT Cortana Skills Kit is not yet available to all developers, but has been opened to a select set of partners, enabling them to expand Cortana’s core knowledge set. Microsoft’s Bot Framework is in a “Preview” phase. However, the possibilities are extensive; for example, both of these companies could offer another tier of support if they turned their own knowledge repositories into agents on their respective Digital Agent / chatbot platforms.

Approach from Inception to Deployment

The curation and creation of knowledge content may start with the definition of ‘Goals/Intents’ and their correlated human utterances, which trigger the goal’s Question and Answer (Q&A) dialog format: the classic use case. The answer may include text, images, and video.

Taking goals/intents and utterances ‘to the next level’ involves creating and implementing Process Workflows (PW). A workflow may contain many possibilities for the user to reach their goal from a single triggered utterance. Workflows look very similar to what you might see in a Visio diagram, with multiple logical paths. Instead of presenting users with the answer based upon a single human utterance, the question, the workflow navigates the users through a narrative to:

disambiguate the initial human utterance, and get a better understanding of the specific user goal/intention. The user’s question to the Digital Agent may have a degree of ambiguity, and workflows enable the AI Digital Agent to determine the goal through an interactive dialog/inspection. The larger the volume of knowledge, and the closer the goals/intents are to one another, the more the implementation requires disambiguation.

hold an interactive conversation/dialog with the AI Digital Agent, to walk through a process step by step, including text, images, and video inline with the conversation. The AI chat agent may pause the ‘directions’ while waiting for the human counterpart to proceed.
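A toy sketch of the disambiguation step described above, using naive keyword overlap in place of a real NLU model (intent names and keywords are invented; platforms like Lex or the Bot Framework use trained models instead):

```python
# Toy goal/intent matching with a disambiguation dialog. When an
# utterance matches more than one intent, the agent asks a follow-up
# question instead of guessing.

INTENTS = {
    "reset_password": {"reset", "password", "forgot"},
    "reset_router":   {"reset", "router", "reboot"},
}

def match_intents(utterance):
    """Return every intent whose keywords overlap the utterance."""
    words = set(utterance.lower().split())
    return [name for name, kw in INTENTS.items() if words & kw]

def respond(utterance):
    hits = match_intents(utterance)
    if len(hits) == 1:
        return f"goal: {hits[0]}"
    # Ambiguous: start the interactive dialog to narrow down the goal.
    return "Which did you mean: " + " or ".join(sorted(hits)) + "?"
```

Note how the need for the dialog grows exactly as the article says: the closer `reset_password` and `reset_router` sit to each other, the more often a bare “reset it” lands in the ambiguous branch.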

Future Opportunities:

Could Amazon provide billing and implementation / technical support for AWS services through a customized version of its own Amazon Lex service? All the code used to provide this Digital Agent / chatbot may be ‘open source’ for those looking to implement similar [enterprise] services.

Digital Agent may allow the user to share their screen, OCR the current section of code from an IDE, and perform a code review on the functions / methods.

Microsoft has an ‘Online Chat’ capability for MSDN. I’m not sure how extensive the capability is, and whether it’s a true 1:1 chat, which they claim is a 24/7 service. Microsoft has libraries of content from Microsoft Docs, MSDN, and TechNet. If the MSFT Bot Framework can ingest their own articles, users may be able to trigger these goals/intents from utterances, similar to searching for knowledge base articles today.

Abstraction, abstraction, abstraction. These AI chatbot / Digital Agent platforms must gravitate toward wizards for building and deployment, and stay away from coding, elevating this technology to be configurable by a business user. The solutions have significant possibilities for small companies, and this technology needs to reach their hands. It seems that Amazon Lex is well on its way to achieving wizard-driven creation and distribution, but has a ways to go. I’m not sure if the back-end process execution, e.g. AWS Lambda, will be abstracted any time soon.

Google may attempt to leapfrog their Digital Assistant competition by taking advantage of their ability to search against all Google products. The more personal data a Digital Assistant may access, the greater the potential for increased value per conversation.

As a first step, Google’s “Personal” Search tab in their Search UI has access to Google Calendar, Photos, and your Gmail data. No doubt other Google products are coming soon.

Big benefits come not just from letting the consumer search their personal Google data, but from providing that consolidated view to the AI Assistant. Does the Google [Digital] Assistant already have access to Google Keep data, for example? Is providing Google’s “Personal” search results a dependency for broadening the Digital Assistant’s access and usage? If so, these interactions are most likely based on a reactive model, rather than proactive dialogs, i.e. the Assistant initiating the conversation with the human.

“What you need, before you ask. Stay a step ahead with Now cards about traffic for your commute, news, birthdays, scores and more.”

I’m not sure how proactive the Google AI is built to be, but most likely it’s barely scratching the surface of what’s possible.

Modeling Personal, AI + Human Interactions

Start from N accessible data sources, search for actionable data points, correlate those data points with others, and then escalate to the human as a dynamic or predefined Assistant Consumer Workflow (ACW). The proactive AI Digital Assistant initiates human contact to engage in commerce without otherwise being triggered by the consumer.

Actionable data point correlations can trigger multiple goals in parallel. However, the execution of goal-based rules would need to be managed. The consumer doesn’t want to be bombarded with AI Assistant suggestions, but at the same time ‘choice’ opportunities may be appropriate, as the Google [mobile] App has implemented with ‘Cards’ of bite-size data, consumable from the UI at the user’s discretion.

As an ongoing ‘background’ AI / ML process, Digital Assistant ‘server side’ agent may derive correlations between one or more data source records to get a deeper perspective of the person’s life, and potentially be proactive about providing input to the consumer decision making process.

The AI Assistant may search the user’s photo archive on the server side. Any photo metadata could be garnered from the search, including date/time stamps, abstracted to include the ‘season’ of the year, and other synonym tags.

Photos from around ‘August’ may be earmarked for Assistant use

Photos may be geo tagged, e.g. Lake Champlain, which is known for its fishing.

All objects in the image may be stored as image metadata. Using image object recognition against all photos in the consumer’s repository, goal/rule execution may occur against pictures from last August; the Assistant may identify the “fishing buddies” posing with a huge bass.
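A hypothetical sketch of that rule execution, assuming a simplified photo-metadata record (this is not a real Google Photos schema; fields and values are invented):

```python
# Hypothetical Assistant goal run against photo metadata: find photos
# from 'August', geo-tagged, whose recognized objects include a fish.
from datetime import date

photos = [  # illustrative records only
    {"taken": date(2016, 8, 14), "geo": "Lake Champlain",
     "objects": {"person", "bass fish", "boat"}},
    {"taken": date(2016, 12, 25), "geo": "Home",
     "objects": {"person", "tree"}},
]

def fishing_trip_candidates(photo_list, month=8):
    """Select photos matching the goal's month and object criteria."""
    return [p for p in photo_list
            if p["taken"].month == month and "bass fish" in p["objects"]]
```

The matching photos are exactly what the Assistant would surface as ‘highlights’ when nudging the user toward booking the next trip.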

In addition to the Assistant suggesting that the user book the trip, Google’s Assistant may bring up ‘highlighted’ photos from the last fishing trip to ‘encourage’ the person to take the trip.

In this type of interaction, the Assistant can proactively ‘coerce’ and influence the human decision-making process. Building these interactive models of communication, and the ‘management’ process to govern the AI Assistant, is within reach.

Predefined Assistant Consumer / User Workflows (ACW) may be created by third parties, such as travel agencies, or by industry groups, such as food: ‘low-hanging fruit’ that is easy to implement, e.g. “time to get more milk”. Then again, food may not be the best place to start, i.e. Amazon Dash.

Google seems to be rolling out a new feature in search results that adds a “Personal” tab to show content from [personal] private sources, like your Gmail account and Google Photos library. The addition of the tab was first reported by Search Engine Roundtable, which spotted the change earlier today.

I’ve been very vocal about a Google federated search, specifically across the user’s data sources, such as Gmail, Calendar, and Keep. Although it doesn’t seem that Google has implemented federated search across all of a user’s Google data sources yet, they’ve picked a few data sources and started up the mountain.

It seems Google is rolling out this capability iteratively and, as with Agile/Scrum, getting user feedback and delivering in slices.

The Search Engine Roundtable news didn’t seem to indicate Google has publicly announced this effort; perhaps they’re waiting for more substance, and more stick time.

As initially reported by Search Engine Roundtable, the Gmail results appear in a single-column text output with links to the content, in this case email.

Google Personal Search Results – Gmail

The “Personal” search output appears in this sequence:

Agenda (Calendar)

Photos

Gmail

Each of the three app data sources displayed on the “Personal” search tab enables the user to drill down into the records displayed, e.g. a specific email.

Google Personal Search Results – Calendar

Group Permissions – Searching

Providing users the ability to search across varied Google repositories (shared calendars, photos, etc.) will enable both business teams and families (e.g. Apple’s family iCloud share) to collaborate and share more seamlessly. At present, Cloud Search, part of G Suite by Google Cloud, offers search across team/org digital assets:

Use the power of Google to search across your company’s content in G Suite. From Gmail and Drive to Docs, Sheets, Slides, Calendar, and more, Google Cloud Search answers your questions and delivers relevant suggestions to help you throughout the day.

As a business Digital Agent subscriber, Microsoft Bing search results will contain the business’ AI Digital Assistant created using Visio. The ‘Chat’ link will invoke the business’ custom Digital Agent. The agent has the ability to answer business questions, or lead the user through “complex” workflows. For example, the user may ask if a particular store has an item in stock, and then place the order from the search results, with a ‘small’ transaction fee charged to the business. The Digital Assistant may be hosted with MSFT / Bing or on an external server. Applying the Digital Assistant to search results pushes the transaction to the surface of the stack.

Bing Digital Chat Agent

Leveraging their existing technologies, Microsoft will leap into the custom AI digital assistant business using Visio to design business process workflows, and Bing for promotion placement, and visibility. Microsoft can charge the business for the Digital Agent implementation and/or usage licensing.

The SDK for Visio that empowers the business user to build business process workflows with ease may have a low to no cost monthly licensing as a part of MSFT’s cloud pricing model.

Microsoft may charge the business a “per chat interaction” fee model, either per chat, or bundles with discounts based on volume.

In addition, any revenue generated from the AI Digital Assistant, may be subject to transactional fees by Microsoft.

Why not use Microsoft’s Cortana, or Google’s AI Assistant? Using a ‘white label’ version of an AI Assistant enables the user to interact with an agent of the search listed business, and that agent has business specific knowledge. The ‘white label’ AI digital agent is also empowered to perform any automation processes integrated into the user defined, business workflows. Examples include:

basic knowledge such as store hours of operation

more complex assistance, such as walking a [prospective] client through a process such as “How to Sweat Copper Pipes”. Many “how to” articles and videos already exist on the Internet through blogs or YouTube. The AI digital assistant, as “curator of knowledge”, may ‘recommend’ existing content, or provide its own.

Proprietary information can be disclosed in a narrative using the AI digital agent, e.g. My order number is 123456B. What is the status of my order?

Actions, such as employee referrals, e.g. “I spoke with Kate Smith in the store, and she was a huge help finding what I needed. I would like to recommend her.” Another example: “I would like to re-order my ‘favorite’ shampoo with my details on file.” Frequent patrons may reorder a ‘named’ shopping cart.

Escalation to a human agent is also a feature. When the business process workflow dictates, the user may escalate to a human in ‘real-time’, e.g. to a person’s smartphone.

Note: As of yet, Microsoft representatives have made no comment relating to this article.

Microsoft Outlook has had an AI email rules engine for years and years, from a simple wizard to an advanced rule-construction user interface. Oh, the things you can do. Based on a wide array of ‘out of the box’ identifiers and highly customizable conditions, MS Outlook may take action on the client side of the email transaction or on the server side. What types of actions? All kinds, ranging from ‘out of the box’ to a high degree of customization. And yes, Outlook (in conjunction with MS Exchange) may be identified as a digital asset management (DAM) tool.

Email comes into an inbox; based on “from”, “subject”, the contents of the email, and a long list of other attributes, MS Outlook [optionally with MS Exchange] may, for example, push the email and any attached content to a server folder, perhaps to Amazon AWS S3, or something as simple as an MS Exchange folder.

Then, optionally a ‘backend’ workflow may be triggered, for example, with the use of Microsoft Flow. Where you go from there has almost infinite potential.
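A schematic sketch of that pipeline: a rule matches the incoming email, and a backend action handles each attachment. In a real pipeline the action might be an S3 upload via boto3 or a Flow trigger; here it is a stand-in function, and all field names are illustrative:

```python
# Toy email-rule pipeline: match emails against a rule, then hand each
# attachment of matching emails to a pluggable backend action.

def matches_rule(email, rule):
    """Crude stand-in for Outlook's 'from'/'subject' rule conditions."""
    return (rule["from"] in email["from"]
            and rule["subject_contains"] in email["subject"])

def process_inbox(emails, rule, action):
    """Apply the backend action to attachments of every matching email."""
    stored = []
    for email in emails:
        if matches_rule(email, rule):
            for attachment in email.get("attachments", []):
                stored.append(action(attachment))
    return stored

rule = {"from": "invoices@example.com", "subject_contains": "Invoice"}
```

Swapping the `action` callable is where the “almost infinite potential” lives: the same rule match could upload to S3, file into an Exchange folder, or kick off a Flow.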

Analogously, Google Gmail’s new Inbox UI uses categorization based on ‘some set’ of rules, which is not new to the industry, but now Google has the ability too. For example, “Group By” through Google’s new Inbox could be a huge timesaver. Enabling the user to perform actions across predefined email categories, such as deleting all “promotional” emails, could be extremely successful. However, I’ve not yet seen the AI rules that identify particular emails as “promotional” versus “financial”. Google is implying these ‘out of the box’ email categories, and the ways users interact and take action, are extremely similar per category.

Google may continue to follow in the footsteps of Microsoft, possibly adding the initiation of workflows based on predetermined criteria. Maybe Google will expose its AI (Email) Rules Engine for users to customize their workflows, just as Microsoft did so many years ago.

Although Microsoft’s Outlook (and Exchange) may have been seen as a Digital Asset Management (DAM) tool in the past, the size limit on the user’s email Inbox folder could have been identified as one of the few inhibitors. The workaround, of course, is using service accounts with vastly higher folder quotas/sizes.

The AI personal assistant with the “most usage” spanning connectivity across all smart devices, will be the anchor upon which users will gravitate to control their ‘automated’ lives. An Amazon commercial just aired which depicted a dad with his daughter, and the daughter was crying about her boyfriend who happened to be in the front yard yelling for her. The dad says to Amazon’s Alexa, sprinklers on, and yes, the boyfriend got soaked.

What is so special about top spot for the AI Personal Assistant? Controlling the ‘funnel’ upon which all information is accessed, and actions are taken means the intelligent ability to:

Serve up content / information, which could then be mixed in with advertisements, or ‘intelligent suggestions’ based on historical data, i.e. machine learning.

Proactive, suggestive actions may lead to sales of goods and services. e.g. AI Personal Assistant flags potential ‘buys’ from eBay based on user profiles.

Three main sources of AI Personal Assistant value add:

A portal to the “outside” world. E.g. if I need information, I wouldn’t “surf the web”; I would ask Cortana to go “research” XYZ. In the business intelligence / data warehousing space, a business analyst may need to run a few queries to get the information they want; in the same token, Microsoft Cortana may come back to you several times to ask “for your guidance”.

An abstraction layer between the user and their apps. The user need not ‘lift a finger’ in any app outside the Personal Assistant, with noted exceptions like playing a game for you.

User profiles derived from the first two points, i.e. data collection on everything from spending habits to other day-to-day rituals.

Proactive and chatty assistants may win “Assistant of Choice” on all platforms. Being proactive means collecting data more often than when it’s just you asking questions ad hoc. Proactive AI Personal Assistants that are geo-aware may make “timely, appropriate interruptions” (notifications) based on time and location. E.g. “Don’t forget milk,” says Siri, as you’re passing the grocery store. Around the time I leave work, Google Maps tells me if I have traffic and my ETA.

It’s possible for the [non-native] AI Personal Assistant to become the ‘abstract’ layer on top of ANY mobile OS (iOS, Android), and is the funnel by which all actions / requests are triggered.

Microsoft Cortana has an iOS app and widget, which is wrapped around the OS. Tighter integration may be possible but not allowed by iOS, the iPhone, and Apple. Note: Google’s Allo does not provide an iOS widget at the time of this writing.

A potential antitrust violation by mobile smartphone maker Apple: iOS should allow for the ‘substitution’ of a competitive AI Personal Assistant to be triggered in the same manner as the native Siri, the “press and hold home button” capability that launches the default packaged iOS assistant.

Reminiscent of the Microsoft IE Browser / OS antitrust violations in the past.

Holding the iPhone Home button brings up Siri. There should be an OS setting to swap out which Assistant is to be used with the mobile OS as the default. Today, the iPhone / iPad iOS only supports “Siri” under the Settings menu.

ANY AI Personal Assistant should be allowed to replace the default OS personal assistant, from Amazon’s Alexa or Microsoft’s Cortana to any startup company with the expertise and resources needed to build and deploy a personal assistant solution. Has Apple taken steps to tightly couple Siri with its iOS?

AI Personal Assistant “Wish” List:

Interactive, voice-menu-driven dialog. The AI Personal Assistant should know what [mobile] apps are installed, as well as their actionable, hierarchical taxonomy of features/functions. The Assistant should, for example, ask which application the user wants to use, and if not known by the user, the Assistant should verbally/visually list the apps. After the user selects an app, the Assistant should then provide a list of function choices for that application, e.g. “Press 1 for ‘Play Song’”.

The interactive voice menu should also provide a level of abstraction when available; e.g. the user need not select the app, and may just say “Create Reminder”. There may be several applications on the smartphone that do the same thing, such as note taking and reminders. In the OS Settings, under a new ‘AI Personal Assistant’ menu, installed system applications compatible with this “AI Personal Assistant” service layer should be listed, grouped into sets of categories defined by the mobile OS.

Capability to interact with IoT using user defined workflows. Hardware and software may exist in the Cloud.

Ever tighter integration with native as well as 3rd party apps, e.g. Google Allo and Google Keep.
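The menu-driven dialog from the first wish-list item could be sketched as follows, with a hypothetical app/function taxonomy (names are invented for illustration):

```python
# Sketch of the proposed interactive, voice-menu-driven dialog: the
# Assistant enumerates installed apps, then each app's functions,
# numbered the way a voice menu would read them aloud.

APPS = {  # hypothetical installed-app taxonomy
    "Music":     ["Play Song", "Pause", "Next Track"],
    "Reminders": ["Create Reminder", "List Reminders"],
}

def list_menu(selection=None):
    """No selection: list apps. With an app selected: list its
    functions ('Press 1 for ...')."""
    if selection is None:
        return [f"Press {i} for {app}"
                for i, app in enumerate(sorted(APPS), start=1)]
    return [f"Press {i} for \"{fn}\""
            for i, fn in enumerate(APPS[selection], start=1)]
```

A real implementation would need the OS to expose this taxonomy to the Assistant layer, which is exactly the OS Settings integration the wish-list item calls for.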

Apple could already be making these changes as a natural course of their product evolution. Even if the ‘big boys’ don’t want to stir up a hornet’s nest, all you need is VC money and a few good programmers to pick a fight with Apple.

It looks like Microsoft has created a generic, product-independent workflow platform.

Microsoft has software solutions like MS Outlook, with an [email] rules engine built into Outlook. SharePoint has a workflow solution within the SharePoint platform, typically governing the content flowing through its system.

Microsoft Flow is a different animal. It seems like Microsoft has built a ‘generic’ rules engine for processing almost any event. The Flow product:

You start using the product from one of two areas: a) “My Flows”, where I may view existing and create new [work]flows; b) “Activity”, which shows “Notifications” and “Failures”.

Select “My Flows”, and the user may “Create [a workflow] from Blank”, or “Browse Templates”. MSFT’s existing set of templates was created by Microsoft and also by third parties, implying a marketplace.

Select “Create from Blank”, and the user has a single drop-down list of events, a culmination of events across Internet products. The implication is that any product and event could be “made compatible” with MSFT Flow.

The drop-down list of events has the format “Product – Event”. As the list of products and events grows, we should see at least two separate drop-down lists: one for products, and a sub-list for the product-specific events.

Several Example Events Include:

“Dropbox – When a file is created”

“Facebook – When there is a new post to my timeline”

“Project Online – When a new task is created”

“RSS – When a feed item is published”

“Salesforce – When an object is created”

The list of products, as well as their events, may need a business analyst to rationalize the use cases.

Once an event is selected, event-specific details may be required, e.g. Twitter account details, or a OneDrive “watch” folder.

Next, a condition may be added to this [work]flow, and may be specific to the event type, e.g. OneDrive file-type property [contains] XYZ value. There is also an “advanced mode” using a conditional scripting language.

There is “IF YES” and “IF NO” logic, which then allows the user to select one [or more] actions to perform

Several Action Examples Include:

“Excel – Insert Rows”

“FTP – Create File”

“Google Drive – List files in folder”

“Mail – Send email”

“Push Notification – Send a push notification”

Again, it seems like an eclectic bunch of Products, Actions, and Events strung together to have a system to POC.
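The trigger → condition → action shape described above can be sketched generically (my illustration of the model, not Flow’s internal implementation):

```python
# Generic sketch of the Flow model: Event trigger -> Condition
# (IF YES / IF NO) -> one or more Actions.

def run_flow(event, condition, if_yes, if_no=None):
    """Evaluate the condition on the event payload and run the
    selected branch's actions in order."""
    branch = if_yes if condition(event) else (if_no or [])
    return [action(event) for action in branch]

# Example: "OneDrive - When a file is created" with a file-type condition.
event = {"product": "OneDrive", "file": "report.xlsx"}
ran = run_flow(
    event,
    condition=lambda e: e["file"].endswith(".xlsx"),
    if_yes=[lambda e: f"Mail - Send email about {e['file']}"],
    if_no=[lambda e: "Push Notification - Send a push notification"],
)
```

Every “Product – Event” trigger and “Product – Action” pair in the lists above slots into this same shape, which is what makes the platform generic.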

The Templates list is a predefined set of workflows that may be of interest to anyone who does not want to start from scratch. The UI provides several ways to filter, list, and search through templates.

The product is applicable to everyday life, from individual home users and small businesses to the enterprise. At this stage the product seems Beta at best, or more accurately, just past clickable prototype. I ran into several errors trying to go through basic use cases, i.e. adding rules.

Despite the “Preview” launch, Microsoft has shown us the power of [work]flow processing regardless of the service platform provider, e.g. Box, Dropbox, Facebook, GitHub, Instagram, Salesforce, Twitter, Google, MailChimp, …

Microsoft may be the glue combining service providers who expose their services to MSFT Flow functionality.

For example: a file is created in a Box folder, an IBM Watson Cognitive API action (Text to Speech) runs, and the product of the action is placed in the same Box folder.

Action: using Microsoft Edge (powered by MSN), in the “My news feed” tab, enable an action to publish “Cards”, such as app notifications.

Challenges / Opportunities / Unknowns

3rd-party companies’ existing, published [cloud; web service] APIs may not even need modification to integrate with Microsoft Flow; however, business approval may be required to use an API in this manner.

It is unclear whether Flow Templates need to be created by the product owner, e.g. Telestream, or whether a knowledgeable third party may create them, following the Android, iOS, and/or MSFT mobile apps model.

It is also unclear whether the MSFT Flow app will be licensed individually in the cloud, within the Office 365 cloud suite, or offered in Home and/or Business editions.