So far in all cases:
1. The potential clients (evaluators) have an identified [critical] problem potentially addressed by the SIRA Technology.
2. The clients are domain experts capable of evaluating the results in terms of accuracy in their domain of knowledge.
3. The quality/usefulness of the results remains the most important aspect of testing.
4. Trigent resources work with the client to resolve issues affecting the quality of the results.
5. The evaluation has some level of stakeholder interest.

Once the SIRA Technology is shown to be adequate, next steps will be evaluated.

If you wish to participate in evaluating the SIRA Technology, you will need to:
1. Identify an important problem to address
2. Identify how the SIRA Technology contributes to the solution to your problem
3. Commit domain experts to ongoing involvement
4. Verify stakeholder interest

In addition to introductory presentations and demos, we are available to present to stakeholders, experts, and management. Please contact us to discuss your potential participation.

Web content is made for humans to read and look at (“eye candy”, advertisements, links to other content, etc.). This makes it challenging for automated readers to identify topical information and context boundaries.

For selected sites (like Wikipedia and the US Patent Office), we have readers with individual “Reading Plans” that focus on the information of interest. For all other links, SIRA reads the complete web page and all links for relevant content. This costs time and can lead to finding less relevant content. One way to focus a read of all the links on a page is to create a separate web page with just the links of interest. Then you can specify that page as the base page for SIRA reading.
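One way to build such a links-only base page is to generate it from a short list of the links of interest. Below is a minimal sketch in Python; the titles and URLs are placeholder assumptions, to be replaced with the links you actually want SIRA to read.

```python
# Sketch: generate a minimal links-only page to use as a SIRA base page.
# The titles and URLs below are placeholders (assumptions), not real targets.
links = [
    ("Topic overview", "https://example.com/overview"),
    ("Technical details", "https://example.com/details"),
]

# Build one <li> per link, then wrap them in a bare-bones HTML page.
items = "\n".join(f'<li><a href="{url}">{title}</a></li>' for title, url in links)
page = f"<html><body><ul>\n{items}\n</ul></body></html>"

# Save the page; its URL (once hosted) becomes the base page for SIRA reading.
with open("reading_list.html", "w") as f:
    f.write(page)
```

Because the generated page contains only the links of interest, SIRA spends no time reading unrelated navigation, advertisements, or boilerplate.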

Short, clear sentences work much better. For example, in “The student of the professor on the balcony is happy.”, is the student or the professor on the balcony?

Be sure you want all the combinatorics implied by compound sentences; they can generate a large number of simple semantic assertions. For example, “Joe, Jane, and Bob bought a car, played the saxophone, did cartwheels, and went home.” becomes 12 simple sentences:
1. Jane bought a car.
2. Jane did cartwheels.
3. Jane played the saxophone.
4. Jane went home.
5. Joe bought a car.
6. Joe did cartwheels.
7. Joe played the saxophone.
8. Joe went home.
9. Bob bought a car.
10. Bob did cartwheels.
11. Bob played the saxophone.
12. Bob went home.

Now try “Joe and Jane bought and sold a car and truck.”, which becomes 8 simple sentences:
1. Jane bought a car.
2. Jane bought a truck.
3. Jane sold a car.
4. Jane sold a truck.
5. Joe bought a car.
6. Joe bought a truck.
7. Joe sold a car.
8. Joe sold a truck.
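The expansions above are Cartesian products of the sentence parts, so the assertion counts can be checked mechanically. Here is a minimal sketch (not SIRA’s actual reader, just an illustration of the combinatorics) using Python’s `itertools.product`:

```python
from itertools import product

# First example: 3 subjects x 4 predicates = 12 simple assertions.
subjects = ["Joe", "Jane", "Bob"]
predicates = ["bought a car", "played the saxophone", "did cartwheels", "went home"]
assertions = [f"{s} {p}." for s, p in product(subjects, predicates)]
print(len(assertions))  # 12 simple sentences

# Second example: 2 subjects x 2 verbs x 2 objects = 8 simple assertions.
subjects2 = ["Joe", "Jane"]
verbs = ["bought", "sold"]
objects2 = ["a car", "a truck"]
assertions2 = [f"{s} {v} {o}." for s, v, o in product(subjects2, verbs, objects2)]
print(len(assertions2))  # 8 simple sentences
```

Each additional conjunct multiplies the count, which is why a single compound sentence can quietly expand into dozens of assertions.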

Fewer words are not always clearer. For example, “Joe will fly to the store and shop.” semantically becomes:
1. Joe will fly to the store. (fly to a place)
2. Joe will fly to the shop. (fly to a place)
3. Joe will fly to shop. (fly and then do some shopping)
4. Joe will shop. (not necessarily connected with flying to the store)

In many cases, automated domain-aware reading and writing are not enough for a satisfactory Research Assistant. A good Research Assistant often needs to apply additional domain knowledge to translate (expand) a “research request” into semantically equivalent and findable reading objectives (e.g. what do you mean by “similar”, and what constitutes evidence that something is “similar”?).

Suppose you have a Statement in your Investigation like, “Candidate must have good communication skills.” You may want to translate that to, “#?# has good communication skills.” This allows for multiple unknown ways of saying “candidate”. What about the multiple ways to indicate “good communication skills”?

Evidence of having “good communication skills” could include: “gave presentation”, “brokered agreement”, “published paper”, or anything that implies the presence of something within the domain (topic) of the research. You may want to expand your investigation to include the statements that imply each investigation statement.
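The expansion described above can be sketched as a simple list transformation: keep the original investigation statement and append the evidence statements that imply it. The evidence phrasings below are illustrative assumptions, not SIRA’s actual Implication/Evidence Patterns.

```python
# Sketch: expand one investigation statement into statements that imply it.
# "#?#" is the unknown-thing marker described in the text above.
statement = "#?# has good communication skills."

# Hypothetical evidence statements whose presence implies the statement.
evidence_patterns = [
    "#?# gave a presentation.",
    "#?# brokered an agreement.",
    "#?# published a paper.",
]

# The expanded investigation searches for the statement and its evidence.
investigation = [statement] + evidence_patterns
```

Each evidence statement widens the net: a document never saying “communication skills” can still match through “gave a presentation” or “published a paper”.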

Note: We have built additional tools [not yet released] to support the automation of domain-aware expansion of investigation statements based on “Implication/Evidence Patterns”. Please contact us for more information.

Given something (e.g. a situation, event, person, place, thing, etc.), how do you find similar “somethings” when searching a large Natural Language corpus (e.g. the Internet)? Here is one way to do it using PriArt.

Problem: “Find [all] X that is similar to Y” in a large Natural Language corpus.

Solution using PriArt:
1. Find or create a Natural Language description of “Y” in terms of the characteristics of “Y” that represent the “similarity” of interest.
2. Create a new PriArt Investigation based on the description of “Y” with all occurrences of “Y” replaced with “#?#” (the “unknown thing” marker). Note: Be sure to substitute pronoun references to “Y” as well (e.g. “it”, “they”, “those”, etc.).
3. Helpful Hint: When using a search engine pre-filter, you can add statements to your investigation to better target the documents selected for reading (e.g. “#?# is a car.”).
4. Run the Investigation over the target corpus (the “Prior Art” report is recommended).
5. Helpful Hint: See the postings to learn how to obtain better results.
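Step 2 above can be sketched as a substitution over the description of “Y”: replace the name of “Y” and its pronoun references with the unknown-thing marker. The description and term list below are made-up examples.

```python
import re

# Sketch of step 2: replace "Y" and its pronoun references with "#?#".
# The description and the terms list are illustrative assumptions.
description = "The Model T was affordable. It used an assembly line."
terms = ["The Model T", "It"]  # the name of Y plus pronouns that refer to it

# Build one alternation pattern so every term is replaced in a single pass.
pattern = re.compile("|".join(re.escape(t) for t in terms))
investigation_text = pattern.sub("#?#", description)
print(investigation_text)  # -> "#?# was affordable. #?# used an assembly line."
```

Note that pronoun resolution is the manual part: you must decide which “it”, “they”, or “those” actually refer to “Y” before adding them to the term list.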