The aim of INVISQUE is to improve sense-making and knowledge discovery in large and complex datasets by making use of visual and spatial cues.
The original design concept was conceived by Professor William Wong at the Interaction Design Center, Middlesex University, and the early design prototypes were implemented by Steel London. Since then, the INVISQUE prototype has been advanced and used in research on user interfaces to support investigative processes in domains as diverse as defence, security, entertainment, and support for low-literacy users.

The original project was funded by the JISC Innovation Research Programme (Grant No.) to improve search and discovery in library electronic resource discovery systems, and thereby the usage and utility of the JISC-funded datasets used by the various research and academic communities.

INVISQUE adopts an interaction and visualisation design approach that is based on the following:

Interaction techniques to support cognitive momentum - enabling rapid, continuous, iterative querying and searching while keeping the context of the search visible, and minimizing 'WWILF-ing', the 'What Was I Looking For?' problem
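The idea of preserving search context can be sketched in code: each query leaves its result cluster on a shared canvas instead of replacing the previous results, so the analyst's trail of questions stays visible. This is a minimal, hypothetical sketch; the class and method names are illustrative and not part of the INVISQUE implementation.

```python
# Hypothetical sketch: keep every query's result cluster visible on the
# canvas so iterative searching does not erase earlier context.
# All names here are illustrative, not the INVISQUE API.

class SearchCanvas:
    def __init__(self):
        self.clusters = []  # past queries stay on the canvas with their results

    def search(self, query: str, results: list) -> dict:
        """Run a query, adding its result cluster without clearing earlier ones."""
        cluster = {"query": query, "results": results}
        self.clusters.append(cluster)
        return cluster

    def context(self) -> list:
        """The visible trail of queries issued so far, oldest first."""
        return [c["query"] for c in self.clusters]
```

Because earlier clusters persist, the analyst can glance at `context()` rather than reconstruct from memory what they were looking for.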

The Reasoning Workspace: A Visual Sensemaking Approach

We will extend this work (Wong et al., 2009; Stelmaszewka et al., 2010) to develop a visuo-spatially oriented design to support the information-analysis sense-making process (Task 5.2). We will apply techniques from Cognitive Work Analysis (Vicente, 1999) and Ecological Interface Design (Burns et al., 2004) to identify the key information relationships of the analysts' cognitive work domain (Task 5.1) and to develop tools, visualization, and interaction techniques for creating, assembling, and organizing such relationships for use in evidential reasoning. This 'reasoning workspace' will conceptually comprise and connect three areas: a 'data space', an 'analysis space', and a 'hypothesis space'. We hypothesize that such a design will support the analysts' sense-making activities, especially when they need to combine results from different data sets that may have been analyzed using different tools, and then organized and presented in ways that afford the analysts a capability for rapid extraction of explanations. We anticipate that the design will also ease visual examination of, and reasoning about, the collective set of results: checking for inconsistencies, completeness, plausibility, and other sense-making activities (Klein et al., 2006, 2007), and looking for causality, correlation, or mutability (Klein et al., 2009).
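The three connected spaces described above can be sketched as a simple data model: items flow from a data space into an analysis space, and analysed items are linked to hypotheses as supporting or contradicting evidence. This is a minimal sketch under our own assumptions; every class, field, and method name here is hypothetical and does not describe the actual INVISQUE design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the 'reasoning workspace': a data space, an
# analysis space, and a hypothesis space, conceptually connected.
# All names are illustrative, not part of INVISQUE.

@dataclass
class Evidence:
    source: str   # which data set (or tool) the item came from
    content: str  # the extracted item itself

@dataclass
class Hypothesis:
    statement: str
    supporting: list = field(default_factory=list)     # Evidence for
    contradicting: list = field(default_factory=list)  # Evidence against

class ReasoningWorkspace:
    def __init__(self):
        self.data_space = []        # raw items, possibly from multiple data sets
        self.analysis_space = []    # items promoted for closer examination
        self.hypothesis_space = []  # candidate explanations under evaluation

    def promote(self, item: Evidence):
        """Move an item from the data space into the analysis space."""
        self.analysis_space.append(item)

    def link(self, hypothesis: Hypothesis, item: Evidence, supports: bool):
        """Attach analysed evidence to a hypothesis as support or contradiction."""
        (hypothesis.supporting if supports else hypothesis.contradicting).append(item)

    def consistency(self, hypothesis: Hypothesis) -> float:
        """Crude plausibility check: share of linked evidence that supports."""
        total = len(hypothesis.supporting) + len(hypothesis.contradicting)
        return len(hypothesis.supporting) / total if total else 0.0
```

Keeping the three spaces as separate but linked collections is one way to let results from different data sets be combined in the analysis space while hypotheses accumulate evidence for inspection of consistency and completeness.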

The research also aims to extend the WIMP-based direct-manipulation interaction technique currently used in all GUI interfaces to direct manipulation of information and of operations on that information. For example, we propose to develop techniques for filtering information so that Boolean operations such as 'AND' can be carried out simply by dragging two sets of information together to reveal the values in their intersection. We refer to this as 'tactile reasoning'. We hypothesize that tactile reasoning, interactive visualization, and the freedom to use the display workspace as desired, combined with underlying smart technologies such as entity extraction, can be used to support visual thinking (Arnheim, 1969; McKim, 1980) by reducing the effort of finding, filtering, and compiling evidence for generating conclusions (e.g. Maglio et al., 1999). The design of the interaction and visualization techniques will draw on many important principles and concepts, such as the perceptual cycle (Neisser, 1976), affordances and ecological perception (Gibson, 1979; Norman, 1999), focus+context and distortion displays (Leung and Apperley, 1994), Gestalt and the Proximity-Compatibility Principle (Wickens and Carswell, 1995), and object displays, configurality, and emergent features (see Ware, 2004; Bennett and Flach, 2011, for a good review).
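The 'tactile reasoning' example above - a Boolean AND performed by dragging two result sets together - reduces, at the data level, to a set intersection. The sketch below shows that underlying operation; the function and variable names are hypothetical and only illustrate the idea, not the INVISQUE implementation.

```python
# Hypothetical sketch of 'tactile reasoning': dragging two result clusters
# together performs a Boolean AND (set intersection) on their contents.
# Names are illustrative, not the INVISQUE API.

def drag_together(cluster_a: set, cluster_b: set) -> set:
    """Overlapping two clusters reveals the values common to both (AND)."""
    return cluster_a & cluster_b

# Example: results of two separate queries
papers_on_visualisation = {"doc1", "doc3", "doc7"}
papers_on_sensemaking = {"doc3", "doc5", "doc7"}

both = drag_together(papers_on_visualisation, papers_on_sensemaking)
# -> {"doc3", "doc7"}: the documents matching both queries
```

Other Boolean operations map onto the same gesture vocabulary in an obvious way, e.g. set union for 'OR' and set difference for 'NOT', which is part of what makes the spatial metaphor attractive.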