Phil 10.3.17

Indexing theory, in brief, states that the content of media reports reflects the degree to which elites – politicians and leaders in government in particular – are in agreement or disagreement. The greater the level of agreement or consensus among elites, the less news there is to report in terms of elite conflict. This is not to say that a consensus among elites is not newsworthy; rather, indexing theory conveys how media reporting is a function of the multiple voices that exist when there is elite debate.

This is kind of what I was going to do when I started on my PhD studies. Funny how things change. Did get a poster out of it though.

Sent Aaron a note that maybe there should be an ASRCresearch domain (.org?) or ASRCFederal.com/research. He’ll start floating that.

Since I think that Trust is a social construct that lets groups deal with incomplete information, and that Awareness is a measure of the completeness of that information, I started to search for collective game theory trust. The main term that comes up is collective action.

There is also a thing called decision space:

A Class of Solutions for Group Decision Problems
“Given a set of utility functions defined on a decision space for a group of individuals, we introduce the concept of utopia point for the group as well as the group regret of a feasible decision. The group regret of a decision varies with a class of distance functions. The class of solutions under study are those which minimize the group regret according to the class of distance functions. We investigate its properties from the viewpoint of decision rationale. The bounds and monotonicity of the solutions together with their computation are also explored.”
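As a quick note to self, the utopia-point/group-regret idea in that abstract can be sketched in a few lines. This is my own toy version under assumptions the abstract doesn't spell out – a finite decision space, one utility vector per individual, and an L-p distance to the utopia point as the regret measure – not the paper's actual formulation:

```python
import numpy as np

def group_regret_choice(utilities, p=2):
    """utilities: (n_individuals, n_decisions) array of utility values.
    Returns (best decision index, regret of every decision) under the L-p norm."""
    utilities = np.asarray(utilities, dtype=float)
    # Utopia point: each individual's best achievable utility over all decisions.
    utopia = utilities.max(axis=1)
    # Shortfall of each decision from the utopia point, per individual.
    shortfall = utopia[:, None] - utilities
    # Group regret: L-p distance from each decision's utility profile to utopia.
    regret = np.linalg.norm(shortfall, ord=p, axis=0)
    return int(np.argmin(regret)), regret

# Three individuals, four feasible decisions.
U = [[3, 1, 2, 0],
     [0, 2, 3, 1],
     [1, 3, 0, 2]]
best, regrets = group_regret_choice(U, p=2)  # decision 1 minimizes group regret here
```

Varying `p` is the "class of distance functions" part: p=1 sums individual regrets (utilitarian), large p approaches minimizing the worst-off individual's regret.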



1 thought on “Phil 10.3.17”

Phil, regarding the NASA risk-scenario data I’ve been trying to track down for you, could you provide a bit more information, such as the resources where it was mentioned? I was able to track down a description of the psychological tests conducted to screen Mercury astronauts here – http://www.dtic.mil/dtic/tr/fulltext/u2/234749.pdf (starts on page 83) – but no description of a test matching those conditions is listed. If you could point me to the resource where you saw it mentioned (perhaps somewhere in Moscovici’s work?), I might be able to pull out better search terms or researcher names to track that down for you.
I am worried, though, that if the risk-scenario data originates from psychological screening done on potential first astronauts, selection bias may have a skewing effect on their behavior. The first astronauts were selected from a very narrow population: men within certain age ranges and with particular intellectual qualifications and occupational backgrounds. I expect that they would skew toward explorer behavior far more than the average population.

While researching risk scenario data, I came across this paper that – while not directly related – I thought might have implications for your game design or reporting work. http://www.dtic.mil/get-tr-doc/pdf?AD=ADA392955
“The goal of this SBIR program was to provide authorable, dialog-enabled agents for tutoring and performance support systems. Users interact with agents who carry out strategies and goals and can engage in mixed-initiative dialog via a natural language understanding and generation system. Non-programmers can author new domains and scenarios and create new dialog agents. The dialog system is authorable by non-computational linguists. The system has two types of agents, Mentor agents and Conversational agents. The Mentor agent is a simulated subject matter expert (SME) that provides troubleshooting and problem solving advice. Mentor engages in a dialogue with trainees, helping them solve problems by taking them through logical courses of action and asking and answering domain-specific questions. Conversational agents are used for role-playing scenarios. The only real difference between the two agents is that Conversational agents do not have specific problem solving strategies. Both Mentors and Conversational agents have domain specific knowledge and access to a common sense knowledge base. This report describes the capabilities and limitations of results of this Phase II effort.”

Separately, I’ve tracked down the work I mentioned to you that relates to a semantic spatial model for interacting with large sets of documents. You might find Dr. Bradel’s research useful, as it has implications for making your own research easier, information foraging, and design of information retrieval systems.
Multi-Model Semantic Interaction for Scalable Text Analytics
https://search.proquest.com/openview/3bbfd6126bfa6f21056380f32183a14b/1?pq-origsite=gscholar&cbl=18750&diss=y
Abstract:
“Learning from text data often involves a loop of tasks that iterate between foraging for information and synthesizing it in incremental hypotheses. Past research has shown the advantages of using spatial workspaces as a means for synthesizing information through externalizing hypotheses and creating spatial schemas. However, spatializing the entirety of datasets becomes prohibitive as the number of documents available to the analysts grows, particularly when only a small subset are relevant to the tasks at hand. To address this issue, we developed the multi-model semantic interaction (MSI) technique, which leverages user interactions to aid in the display layout (as was seen in previous semantic interaction work), forage for new, relevant documents as implied by the interactions, and then place them in context of the user’s existing spatial layout. This results in the ability for the user to conduct both implicit queries and traditional explicit searches. A comparative user study of StarSPIRE discovered that while adding implicit querying did not impact the quality of the foraging, it enabled users to 1) synthesize more information than users with only explicit querying, 2) externalize more hypotheses, 3) complete more synthesis-related semantic interactions. Also, 18% of relevant documents were found by implicitly generated queries when given the option. StarSPIRE has also been integrated with web-based search engines, allowing users to work across vastly different levels of data scale to complete exploratory data analysis tasks (e.g. literature review, investigative journalism).

The core contribution of this work is multi-model semantic interaction (MSI) for usable big data analytics. This work has expanded the understanding of how user interactions can be interpreted and mapped to underlying models to steer multiple algorithms simultaneously and at varying levels of data scale. This is represented in an extendable multi-model semantic interaction pipeline. The lessons learned from this dissertation work can be applied to other visual analytics systems, promoting direct manipulation of the data in context of the visualization rather than tweaking algorithmic parameters and creating usable and intuitive interfaces for big data analytics.”
Another article on spatial manipulation of big data focusing more on user interactions:
https://pdfs.semanticscholar.org/0268/e1ef36d49dbf13d7016e3da867334d3068a6.pdf
“To tackle the onset of big data, visual analytics (VA) seeks to marry the human intuition of visualization with mathematical models’ analytical horsepower. A critical question is, how will humans interact with and steer these complex mathematical models? Initially, users applied direct manipulation to such models in the same way they applied it to simpler visualizations in the premodel era—by using control panels to directly manipulate model parameters. However, opportunities are arising for direct manipulation of the model outputs, where the users’ thought processes take place, rather than the inputs. Here we present this new agenda for direct manipulation for VA.”