Track the location of your customers and correlate it with other factors (such as time of day, weather conditions, current events and social media trends), then offer tailored promotions in real time while the user is engaged with your website or application.

Measure, analyze and correlate customer behavior so you can gain insight into failed process steps, understand and optimize the buyer’s journey, and increase customer engagement and conversion rates on both desktop and mobile.

See how people interact with your website, new promotion page or application—see what they view, what they read, what they click on and what they run from. Use these insights not only to improve your website or application content placement but also to shape overall marketing strategies, customer service and product development.

Sense the reception of your new product or service by using text analytics and comment-form analysis to discover social media “sentiment”. By understanding the difference between positive and negative comments, you can proactively gauge and predict how customers are using social networks to influence others and express their shopping interests and experiences.
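
The positive/negative split described above can be sketched with a simple lexicon tally. This is an illustration only, assuming invented word lists and sample comments; production sentiment systems use trained models rather than keyword matching.

```python
# Minimal lexicon-based sentiment tally (illustrative only; the word
# lists are hypothetical, and real systems use trained models).
POSITIVE = {"love", "great", "easy", "fast", "recommend"}
NEGATIVE = {"hate", "slow", "broken", "refund", "disappointed"}

def sentiment(comment):
    """Classify one comment as positive, negative, or neutral."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

comments = [
    "I love this product, great quality!",
    "Shipping was slow and the item arrived broken.",
    "It arrived on Tuesday.",
]
summary = {}
for c in comments:
    label = sentiment(c)
    summary[label] = summary.get(label, 0) + 1
```

Aggregating the per-comment labels into `summary` gives the kind of sentiment breakdown a dashboard would chart over time.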

By tracking click streams, location data, updates in preferences and other behavior in real time, you’ll be able to recognize when customers are nearing a purchase decision and can then nudge the transaction to completion by bundling preferred products, offering discounts or reward program benefits.
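
One way to sketch the "nearing a purchase decision" signal is a weighted score over clickstream events. The event names, weights and threshold below are invented for illustration, not any product's API.

```python
# Illustrative purchase-intent score over clickstream events; the event
# names, weights and threshold are hypothetical.
INTENT_WEIGHTS = {
    "view_product": 1,
    "view_cart": 2,
    "add_to_cart": 3,
    "apply_coupon": 4,
}

def intent_score(events):
    """Sum the weights of the intent-bearing events in a session."""
    return sum(INTENT_WEIGHTS.get(e, 0) for e in events)

def choose_nudge(events, threshold=5):
    """Offer a promotion only when the session looks close to a purchase."""
    return "free_shipping_offer" if intent_score(events) >= threshold else None

session = ["view_product", "view_product", "add_to_cart", "view_cart"]
```

A session that has reached the cart scores high enough to trigger the nudge, while a single product view does not.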

A major pizza chain correlates user location data with weather information to change the menu a user sees when ordering on the website, offering more salads to customers in California and heartier meat pizzas in Maine. Since aggregated ordering and website behavioral data can be seen graphically in real time, marketing can change and experiment with live promotions on the fly. Users at, near or engaged in watching special events (like the Super Bowl) can be targeted with special promotions.
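
The location-plus-weather logic in the pizza example can be sketched as a regional default with a weather override. The regions, temperature cut-offs and menu names here are invented for illustration.

```python
# Hypothetical regional menu defaults; weather overrides on extreme days.
REGION_DEFAULTS = {"CA": "salads_and_light_fare", "ME": "hearty_meat_pizzas"}

def pick_menu(region, temp_f):
    """Choose a menu emphasis from region and current temperature."""
    if temp_f >= 85:          # hot day anywhere: lighter fare
        return "salads_and_light_fare"
    if temp_f <= 25:          # freezing day anywhere: heartier fare
        return "hearty_meat_pizzas"
    return REGION_DEFAULTS.get(region, "standard_menu")
```

On a mild day the regional default wins; on an extreme day the weather signal takes over regardless of region.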

A pharmaceutical company analyzes social media conversations to graphically display how patients are experiencing its new drug. If the data reveals unexpected side effects in certain patients, for example, the drug company immediately updates its website content, social media feeds or even the product itself to reflect this new insight.

A large call center monitors conversations to detect levels of customer anger or dissatisfaction on their dashboard and uses the information to modify their scripts or practices to better serve the needs of their customers.

An interactive tutorial company relies on their data dashboard to see that many users are pausing and/or rewinding a tutorial at a certain place, indicating a sticking point in user understanding. They can then modify the tutorial to make the point clearer. The company can also see which tutorials are not consumed in one sitting, which spurs them to make those tutorials more interesting and thereby increase sales.
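
Finding those sticking points amounts to bucketing pause/rewind events by timestamp and surfacing the buckets many viewers hit. The event format, bucket size and threshold below are assumptions for the sketch.

```python
from collections import Counter

# Sketch: count pause/rewind events in 10-second buckets of a tutorial
# and return buckets that attract enough events to look like a
# sticking point. Event tuples and thresholds are hypothetical.
def sticking_points(events, bucket_secs=10, min_events=3):
    """events: (user_id, action, seconds) tuples; returns hot buckets."""
    hits = Counter()
    for user, action, t in events:
        if action in ("pause", "rewind"):
            hits[(t // bucket_secs) * bucket_secs] += 1
    return sorted(b for b, n in hits.items() if n >= min_events)

events = [
    ("u1", "pause", 62), ("u2", "rewind", 65), ("u3", "pause", 68),
    ("u4", "play", 70), ("u5", "pause", 130),
]
```

Here three users interrupt playback around the one-minute mark, so that bucket surfaces as the place to clarify.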

A gaming company clearly sees graphical data about how long users engage with their game in a given session or over a lifetime. They also see what elements users are most likely to click on to buy points or otherwise monetize the game and then add or subtract features to create a better user experience and create more sales. By knowing how, when and from where visitors are being channeled to the game, they see which advertising strategies drive the most visitors to the game and modify their promotional budget accordingly.

Business Intelligence Analytics

Know the Company You Keep

Drastically improve decision and process management by being able to visualize data cross-platform throughout your enterprise and even integrate with your business partners’ systems or applications for complete end-to-end insight into performance. Integrating analytics with business rules, event processing, goals and operations in a single visual dashboard allows your enterprise to pick out important events from a pile of data “noise” and assume a predictive, proactive stance in real time across diverse systems.

Integrate data previously lurking in data “silos”—walled-off functional units such as R&D, engineering, manufacturing, or service operations—and bring it into a single aggregated interface that allows for a collaborative and comprehensive use of information from multiple systems across your enterprise.

Quickly detect anomalies and find patterns across mountains of historical or unstructured data, and rapidly create and share graphs and charts through a single intuitive dashboard that empowers users with prebuilt functions and visualizations so you can truly understand and act on your data. Increase operational efficiency by monitoring everything from product inventories to sick days to shipping times across facilities, discovering which processes and teams work most efficiently to boost performance.
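
A z-score check is one of the simplest forms of the anomaly detection described above. This is a sketch of the idea, not any vendor's algorithm, and the shipping-time numbers are invented.

```python
import statistics

# Flag values whose distance from the mean exceeds a z-score threshold.
def anomalies(values, z_threshold=2.5):
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)   # population standard deviation
    if stdev == 0:                      # constant series: nothing to flag
        return []
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# Hypothetical per-shipment transit times in hours; one obvious outlier.
shipping_hours = [24, 26, 25, 23, 24, 25, 26, 24, 72]
```

The 72-hour shipment stands well outside the normal range and is flagged, while routine day-to-day variation is not.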

Gain real-time, end-to-end visibility into complex business processes such as financial trade settlement, new telecommunications service activations, as well as health and insurance claims processing. Find business process bottlenecks across systems and identify opportunities to optimize company processes.

Increase your data security and compliance with a single interface that shows when a database was modified, from where and by whom. Monitor and audit read-and-write access to sensitive data.

Truly collaborate with your enterprise partners by integrating their data into your dashboard to proactively adjust to changes in supply chains or monitor information from external suppliers and customers to assess, troubleshoot or co-create products on the fly.

An international automobile manufacturer shares data from its suppliers around the world—who make thousands of component parts. Using an integrated data platform, the company and its suppliers collaborate to create, access and monitor product development all the way through the design and implementation phases in real time. The car company realizes a dramatic reduction in design flaws and failures, thereby significantly driving down manufacturing costs.

A well-known fast-food chain equips its stores with devices that gather operational data that tracks customer interactions, in-store traffic and ordering patterns. Operational managers then model the impact of variations in menus, restaurant designs, and training, thereby increasing both productivity and sales.

A healthcare software provider views and monitors data transactions between health providers and insurers. This allows them to optimize patient eligibility verification, which means claims are processed and paid much more quickly. By monitoring transaction errors and partner response rates, they quickly identify failed transactions (such as improperly entered form data) and take remedial action.

A bank analyzes data to differentiate between customers based on their credit risk, credit usage and other parameters. They then offer new credit products to customers who meet certain criteria based on the collected data.

A large oil company needs to correlate financial data with a mish-mash of process-information data showing how much oil and gas the company finds and collects. By mapping and correlating its data through a single visual interface, it consolidates 27 separate sources into one 450GB data warehouse, allowing for more transparent and actionable insights between finance and operations.

Machine/Industrial Data and the Internet of Things

Hear Your Herd

As you’re seeing, we can help you unify, organize and extract real-time insights from massive amounts of machine data from virtually any source (websites, applications, social media and cloud platforms, logs, clickstreams, app servers, any endpoint device, hypervisors, mainframes, traditional databases and open-source data stores).

A more specific type of machine data—industrial data—comes from sensors embedded in products or industrial machines that generate a huge variety of data. That data can then be aggregated into a single visual dashboard, allowing companies to gain incredible insight into machine efficiency, safety and diagnostics, as well as how customers are using a product and the state of its internal environment.

When virtually any electronic thing can create and maintain a data-driven website about itself—and thereby communicate with other devices of its type—and your dashboard can aggregate this data into actionable events, we can see that the Internet of connected computers is merely the beginning.

Support proactive business operations by utilizing a graphical interface of data from devices, sensors, production lines and industrial systems (including SCADA, DDC and 150 other machine-control protocols) in real time that will identify system anomalies, outliers, downtime, device availability, utilization and other issues in device deployment.

View human-friendly data from sensors embedded in connected consumer products, from children’s toys to refrigerators, to determine how these products are actually used in the real world, create interactivity in real time or provide proactive maintenance to avoid failures.

Protect industrial systems and other critical assets from cybersecurity threats, intellectual property theft and loss of control/view by using built-in and observable security audits, Intrusion Detection Systems (IDS), Security Incident and Event Management (SIEM), as well as antivirus and firewall protections.

An oil company uses instruments that constantly deliver data on wellhead conditions, mechanical systems and pipelines. This data is analyzed by clusters of computers and is then fed to real-time operations centers that adjust oil flows to optimize production and minimize down-times. As a result, the company cuts operational and staffing costs by 10 to 25 percent, while increasing production by 5 percent.

An energy management company monitors and analyzes tens of thousands of sensors and data inputs from HVAC (heating, ventilation and air conditioning) systems in more than 100 buildings on a university campus. Using native field device integration capabilities and a common interface for data from every nook of the campus, the company creates historical energy usage reports and deploys a strategy of load shedding and shifting that takes advantage of favorable electric rates, thereby boosting energy efficiency. The project is projected to save about $2.5 million annually.

A large cattle farm uses sensors placed in cows’ ears to record their location, vital signs, hormone levels and other behavioral inputs, and then correlates these with milk production and operational costs. Agricultural managers interface with this data to predict problems and increase overall efficiency and milk production.

A healthcare monitoring company uses an in-home device to collect and securely transfer patients’ heart rates and other vital signs to a central dashboard where the data can be monitored and acted upon as it happens, thereby helping patients stay at home during the period of their chronic illness.

An appliance manufacturer designs a new line of refrigerators that connect to the Internet and send self-diagnostic data to a processing center where it can be analyzed for performance issues. Remedial action can be taken by either controlling the fridge remotely or by emailing the consumer and dispatching repair personnel. Coming soon: near field chip technology embedded in everyday food items (like milk) could indicate low supply levels to the internet-connected refrigerator, which in turn, could automatically order new supplies and have them delivered to the house.

Networking and IT Infrastructure

A Server is a Terrible Thing to Waste

Experience 20/20 operational visibility across your network, datacenter and cloud infrastructures from a single graphical dashboard and resolve any server problems or security issues that do arise in minutes, not hours. Being able to easily see what server is down or has performance deviations and why without having to search through servers, systems or virtual machines dramatically decreases downtime while vastly increasing user satisfaction and retention.

Visually confirm how a network infrastructure has been set up in a central, unified, data-driven view of critical IT services. Map the dependencies and configurations within your infrastructure to ensure that the delivery of services meets your business objectives.

Receive a comprehensive view of performance issues (such as idle versus near-capacity systems), timeouts, capacity bottlenecks, service availability or degradation and usage. Link application or user related issues to the underlying infrastructure that supports them and even correlate event data and system metrics across technology layers.

Easily debug faults and failures, backdoor attacks, time bombs, user role/registry changes or other suspicious activity that indicate that the network or a server may be compromised or is the object of a remote attack, whether they are in traditional datacenters or in distributed cloud infrastructures.

Reduce the number of tools and skills needed to maintain and monitor complex infrastructures, empowering administrators to pinpoint the root cause of service degradations faster within the underlying OS, hypervisor, storage, network and server infrastructure. Sophisticated analytics powered by machine learning, in concert with 100% data persistence across applications and systems, enables the dashboard to drill deep into the data for rapid resolution.

By correlating real-time streaming data with terabytes of historical data, detecting patterns and predicting events, problems can be prevented before they impact critical services or users.

Sophisticated resource allocation and change management allows easy scalability from a single server to multiple datacenters, thereby reducing escalations, capacity constraints and asset inventory throughout all levels of usage.

A content sharing and collaboration platform company needs fast upload speeds and continuous site availability, not to mention rapid issue resolution. By using a data-driven graphical dashboard to monitor upload success, response times, exceptions and the number and type of users, and by taking advantage of capacity modeling metrics, they were able to better scale their operations, improve response times and increase customer satisfaction.

A cloud sharing company—focused on paid applications in training and sales enablement—gains deep insight into the thousands of virtual machines (VMs) supported by the company’s infrastructure. They then plan their capacity strategy based on trends they identify by knowing what resources their customers are using. Gathering logs and performance metrics from every component—including networks, firewalls, storage, operating systems and custom applications—also allows them to create attack signatures that trigger automatic blocks or generate alerts to the Network Operations Center.

A national online company that offers information to consumers about what car to buy and at what price wonders how advertising during the Super Bowl might affect their online infrastructure. They begin using a data-driven, graphical interface and gain detailed performance information showing the impact of individual components in the infrastructure and their average statistical performance. This operational insight allows them to actually remove under-performing systems and still sustain a Super Bowl escalation of users. This vastly reduces time spent on physical server administration, overall systems administration and the need to purchase more servers, resulting in savings of $160,000.

A UK online grocery store has a large and complex production and development mix, including more than 400 servers across 10 environments. It needs a way for multiple developers to securely access logs across many geographic locations and several platforms, while making sense of errors. By aggregating and visualizing the data, they are able to see where ‘404’ errors are occurring and why—before those errors have an adverse effect on sales. In this way, revenue lost to abandoned shopping carts is dramatically reduced.
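
Surfacing where those '404' errors occur can be sketched as a simple aggregation over access logs. The sample log lines and the regular expression below assume the common combined log format and are illustrative, not the store's actual data.

```python
import re
from collections import Counter

# Match the request path and HTTP status out of a combined-format log line.
LOG_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) \S+" (?P<status>\d{3})')

def top_404s(log_lines, n=3):
    """Count 404 responses per URL path and return the worst offenders."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group("status") == "404":
            counts[m.group("path")] += 1
    return counts.most_common(n)

# Invented sample lines in combined log format.
logs = [
    '1.2.3.4 - - [01/Jan/2024] "GET /checkout HTTP/1.1" 404 512',
    '1.2.3.5 - - [01/Jan/2024] "GET /checkout HTTP/1.1" 404 512',
    '1.2.3.6 - - [01/Jan/2024] "GET /home HTTP/1.1" 200 1024',
    '1.2.3.7 - - [01/Jan/2024] "GET /basket HTTP/1.1" 404 512',
]
```

Ranking by count puts the failing checkout path at the top, which is exactly the page where a broken link costs revenue.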

Application Delivery and Metrics

Build Rome in A Day. Or Two.

Building, testing, developing and maintaining a large-scale, interconnected and highly distributed application is a monumental task. We can help you visualize both the development of your application and the DevOps tools you’re using to build it no matter which technology or environment you’re using. We can also help you maintain your empire with end-to-end visibility into customer usage, financial transactions and other key performance indicators for continuous and agile delivery based on how your users actually interact with your application.

Developers can search and visualize QA and staging data from multiple production environments in one place and quickly trace and precisely pinpoint bugs and other code-level issues within the tools, applications, and systems that dev teams use every day. Get operational insights across any framework, stack or language including PHP, Ruby, JavaScript, Node.js, as well as backend database technologies, message buses and other legacy technologies.

Easily ingest, correlate and analyze all sorts of data—from clickstreams, web and system logs, databases, SOA architectures and API endpoints—in a pre-production environment to track each code check-in, automate testing and manage continuous integration and deployment in real time. This enables more collaborative feedback between otherwise disparate sets of developers and improves the quality of the applications being delivered.

Collect granular data and visualize all types of powerful application metrics, such as transaction response times, network performance, user activity, application performance, fraud attempts, purchases, trouble reports and account changes, across all devices, all without having to build special-purpose software. With this data you can enhance application insights, establish baselines, trend application performance and usage, and correlate with other machine data for end-to-end visibility.
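
Establishing a baseline for one of these metrics can be sketched as taking a rough 95th percentile of recent history and flagging samples above it. The response-time numbers are invented, and the percentile uses a simple nearest-rank approximation.

```python
# Approximate p95 by nearest rank on the sorted series (a rough
# baseline, not a precise interpolated percentile).
def p95(values):
    ordered = sorted(values)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Hypothetical response times in milliseconds, including one spike.
history_ms = [120, 110, 130, 125, 115, 118, 122, 128, 119, 600]
baseline_ms = p95(history_ms)

def above_baseline(sample_ms, baseline):
    """Flag a new sample that exceeds the established baseline."""
    return sample_ms > baseline
```

New samples are then compared against `baseline_ms`, so a sudden slowdown stands out against normal variation.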

Gain actionable insights from a single dashboard that lets you see how new features are adopted and helps you drive innovation with continuous integration and automatic application updating that constantly improves customer engagement, retention rates, conversion and monetization.

Developers at a multinational, open-source enterprise software company began using a data-driven visual dashboard and were able to reduce error rates by two orders of magnitude in just a few weeks. By correlating data across various environments and systems, developers were able to improve code quality throughout the product development process.

An online securities trading firm experiences tremendous growth by acquisition—going from a few thousand users to over 50,000. Using a centralized, visual dashboard, it performs extensive load testing and advanced real-time data analysis at large scale on its newly consolidated application. The massive load test is designed to break the platform—and it can be seen breaking in real time. The immense amount of debugging, transactional and monitoring data is analyzed on the fly, generating deep insights into how to successfully scale the application.

A provider of energy intelligence software collects millions of data points on customer energy production and consumer usage. With real-time operational visibility into the data flowing through their platform’s public and private cloud components, administrators perform user and workload analytics over huge historical data sets, setting up alerts for over 200 operational events to ensure that application data is processed with high, error-free throughput and minimal latency.

A leading provider of cloud-based software for managing physicians’ practices sees, within the first twenty minutes of deploying data visualization, that they have major problems in several key areas within their application. Instead of laboring over schematics to locate what is potentially failing, they are able to easily access the appropriate data to determine whether an entire workflow or just a particular endpoint is unhealthy. Armed with this intelligence, they quickly resolve the issues, and when customers log in to the application the following day, they have a much more pleasant experience.

Security and Compliance

Stand On Guard.

Forge beyond simple monitoring for traditional threats by using advanced, analytics-enabled security that gives you panoramic visibility across web infrastructure, throughout your enterprise and within the cloud. Continuous compliance and security monitoring, rapid event response and the capability to defend against advanced threats—both known and unknown—vastly reduce the threat of external attacks, malicious insiders and expensive fraud.

Find the compromised servers associated with malware and advanced, persistent infections and determine the scope and impact of compromised systems by conducting breach and compromise assessments utilizing the kill chain methodology. Quickly detect activities, indicators, events and artifacts correlated with infected hosts and malware to create searches and alerts for the newly discovered breaches using machine learning and data science instead of writing complicated correlation rules.

Former employees, contractors or partners who may still have access to your network can view, misuse or destroy sensitive data while evading detection by traditional security solutions. Through a single visual dashboard you’ll detect actions and patterns indicative of data leaving a network or endpoint, and receive alerts when behavior drifts off the baseline established for a peer group. Determine the intent, scope and severity of a user’s actions by rapidly searching through months of historical event data.

Automate compliance for governmental, industry or enterprise regulations with the centralized collection, continuous monitoring and retention of security events with dashboards and reports that show your compliance state. Quickly search through months’ worth of security event data to expedite security investigations and satisfy auditors.

Use real-time correlation searches, anomaly detection and behavior analytics to expose account takeovers and other malicious threats correlated to devices, applications and users to identify fraud as it occurs and then prevent it. Use consolidated dashboard reports to analyze, manage and measure fraud risk for internal users, single transactions, or across the enterprise using aggregate fraud scoring.
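
The aggregate fraud scoring mentioned above can be sketched by weighting per-transaction risk signals and flagging accounts whose total crosses a threshold. The signal names, weights and threshold are hypothetical.

```python
# Hypothetical risk-signal weights for the sketch.
SIGNAL_WEIGHTS = {
    "new_device": 2,
    "geo_mismatch": 3,
    "velocity_spike": 4,
    "password_reset": 1,
}

def fraud_score(signals):
    """Score one transaction from its list of risk signals."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def flag_accounts(transactions, threshold=6):
    """transactions: {account: [signal lists, one per transaction]}."""
    return sorted(
        acct for acct, txns in transactions.items()
        if sum(fraud_score(sig) for sig in txns) >= threshold
    )

txns = {
    "acct1": [["new_device", "geo_mismatch"], ["velocity_spike"]],
    "acct2": [["password_reset"]],
}
```

Summing across an account's transactions is what makes the score "aggregate": individually unremarkable signals add up to a flag.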

A large pension fund management firm in the Netherlands manages the privacy data of half a million pensioners and routinely processes large financial transactions. By switching to data-driven security, it streamlines its compliance reporting, enables real-time financial transaction monitoring and provides proactive incident investigation capabilities, bringing down the time required to prepare compliance reports from 24 hours to only 4. It can now verify the status or accuracy of a transaction in real time, as opposed to having to wait a day or more.

As regulators begin to focus on information security threats and cybersecurity, an emerging Canadian financial services company must adhere to various regulatory controls and audits and provide transparency. By bringing in data from every critical device throughout the organization—including all switches, firewalls, IPS devices, endpoints and over 100 servers (Linux, Microsoft Windows)—and by using pre-built correlations, dashboards, reports and real-time alerting, the firm solves advanced security issues and proves transparency to regulators from a “single pane of glass”.

A consulting company that provides enterprise applications to energy, commercial, healthcare, higher education and public sectors runs over 700 Oracle database instances, which makes it difficult to implement and use traditional SIEM solutions. By using a data-driven, centralized dashboard, the company is able to proactively monitor large amounts of user activity in real time to identify abnormal behavior patterns. This gives them the capabilities to identify unknown threats and perform comprehensive security investigations as well as the more conventional SIEM functionality that monitors for traditional known threats identified from security data sources, saving the firm $200,000+ in SIEM consulting and custom development services.

A large data center company that provides internet exchanges to enable business interconnection is overwhelmed by more than 30 billion raw security events generated every month. By aggregating and correlating the data into a centralized dashboard, the security team reduces raw security events to about 12,000 correlated events, and then further reduces those to 20 actionable alerts. Now, the security administrators can quickly cross-reference data between systems, enabling them to investigate and respond to incidents 30% faster than before.
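
The reduction in that case follows a common pattern: group raw events by source and rule, then alert only when a group's count crosses a threshold. The field names, sample data and threshold below are invented for the sketch.

```python
from collections import defaultdict

# Group raw events by (source, rule) and emit an alert per group that
# exceeds the count threshold; everything else is absorbed as noise.
def correlate(raw_events, threshold=100):
    groups = defaultdict(int)
    for event in raw_events:
        groups[(event["src"], event["rule"])] += 1
    return [
        {"src": src, "rule": rule, "count": n}
        for (src, rule), n in sorted(groups.items())
        if n >= threshold
    ]

# Hypothetical raw feed: one noisy brute-force source, one quiet scanner.
raw = (
    [{"src": "10.0.0.5", "rule": "ssh_brute_force"}] * 150
    + [{"src": "10.0.0.9", "rule": "port_scan"}] * 12
)
```

162 raw events collapse into a single actionable alert, which is the same many-orders-of-magnitude reduction the case study describes.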

All About Us

Check out the bios of our world-famous technology superheroes and the company they keep.