The ability of an agent to localize itself within an environment is crucial for many real-world applications. In unknown environments, Simultaneous Localization and Mapping (SLAM) lets an agent incrementally build a map while concurrently localizing within it. We present a new differentiable architecture, Neural Graph Optimizer, that progresses towards a complete neural network solution for SLAM: a system composed of a local pose estimation model, a novel pose selection module, and a novel graph optimization process. The entire architecture is trained in an end-to-end fashion, enabling the network to automatically learn domain-specific features relevant to visual odometry and to avoid the involved process of feature engineering. We demonstrate the effectiveness of our system on a simulated 2D maze and the 3D ViZDoom environment.

Slides

Location

Orientation

Velocity

IR context -> Sociocultural context

Writing Fika. Make a few printouts of the abstract

It kinda happened. W

Write up LMN4A2P thoughts. Took the following and put them in a LMN4A2P roadmap document in Google Docs:

Storing corpora (raw text, BoW, TF-IDF, matrix)

Uploading from file

Uploading from link/crawl

Corpora labeling and exploring

Index with ElasticSearch

Production of word vectors or ‘effigy documents’ (see the sketch after this list)

Effigy search using Google CSE for public documents that are similar

General

Site-specific

Semantic (Academic, etc)

Search page

Lists (reweightable) of terms and documents

Cluster-based map (pan/zoom/search)
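A minimal sketch of the effigy-document idea from the list above, assuming plain-text documents and scikit-learn; the corpus and the make_effigy_terms helper are invented for illustration:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["first document text ...", "second document text ..."]  # stand-in corpus

vectorizer = TfidfVectorizer(stop_words='english')
tfidf = vectorizer.fit_transform(docs)        # documents x terms sparse matrix
terms = vectorizer.get_feature_names_out()    # older sklearn: get_feature_names()

def make_effigy_terms(doc_index, k=8):
    """Top-k TF-IDF terms for one document -- a query-sized 'effigy' of it."""
    row = tfidf[doc_index].toarray().ravel()
    top = np.argsort(row)[::-1][:k]
    return [terms[i] for i in top if row[i] > 0]

print(make_effigy_terms(0))  # terms that could be handed to a Google CSE query
```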

I’m as enthusiastic about the future of AI as (almost) anyone, but I would estimate I’ve created 1000X more value from careful manual analysis of a few high quality data sets than I have from all the fancy ML models I’ve trained combined. (Thread by Sean Taylor on Twitter, 8:33 Feb 19, 2018)

Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers.
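For my own reference, a minimal usage sketch, assuming a CSV already in Prophet’s expected ds/y format (the filename is made up; the package imported as fbprophet at the time, newer releases use prophet):

```python
import pandas as pd
from fbprophet import Prophet  # newer releases: from prophet import Prophet

df = pd.read_csv('daily_metric.csv')  # hypothetical file with 'ds' (date) and 'y' (value) columns

m = Prophet()                         # additive model: trend + seasonalities + holidays
m.fit(df)

future = m.make_future_dataframe(periods=90)  # extend 90 days past the history
forecast = m.predict(future)                  # yhat with uncertainty bounds per day
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())
```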

Continuing Alignment in Social Interactions here. This looks much better. Found this book on game theory for groups. It took me a while to determine if ‘joint action’ meant the coordinated ‘joint’ action of two people, or the coordinated action of two people’s joints. It’s neuroscience, after all.

Continuing on the Research Browser white paper. Based on the HHS requests, I added default performance logging

In a lot of meetings as an Aaron proxy

Good progress on the white paper. Just about finished the background section. Need to add pix of the current prototype

Was going to continue The Law of Group Polarization, but got sucked into the following. On a related note, I peeked at the group sensemaking paper from CSCW and realized that they are dealing with group polarization issues.

Soooooooooo, I went back to check the links that the google search “link:http://dotearth.blogs.nytimes.com” brings up. In looking at the pages (mostly other blog-like sites), the link to dotearth is almost always in the blogroll list that’s off to the side on many of these sites. For example look at the lower right on climatecentral.org, and you’ll see the link.

Note that it’s a good thing I’m limiting the results to 10! The second thing to notice is every one of these links is SEO garbage. This one is my favorite. Now, this is ordered according to rank (however that’s calculated) and maybe there are better ways to order the results, but this does make me nervous about using backlinks without some checking. Maybe cosine similarity?

So the last thing, if we want to spend some money, is to use the Common Crawl for backlinks. Not sure if it would make any difference, but there would be more insight. As an example, there’s WikiReverse, which did exactly that.
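To make the ‘maybe cosine similarity?’ idea above concrete, a hedged sketch: score each backlinking page’s text against dotearth’s own text and drop the topically unrelated ones (likely the SEO garbage). Assumes the page texts are already fetched; the 0.15 threshold is a guess.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_backlinks(target_text, backlink_texts, threshold=0.15):
    """Keep only backlinking pages whose text is topically similar to the target."""
    vec = TfidfVectorizer(stop_words='english')
    m = vec.fit_transform([target_text] + backlink_texts)
    sims = cosine_similarity(m[0], m[1:]).ravel()  # target vs. each backlinker
    return [(i, s) for i, s in enumerate(sims) if s >= threshold]
```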

Here’s a rough list of reasons why UGC stored in a graph might be the best way to handle the BestPracticesService.

Self generating, self correcting information using incentivized contributions (every time a page you contributed to is used, you get money/medals/other…)

Graph database, maybe document elements rather than documents

BPS has its own network, but it connects to doctors and possibly patients (anonymized?) and their symptoms.

Would support results-driven medicine from a variety of interesting dimensions. For example, we could calculate the best ‘route’ from symptoms to treatment using A* (see the sketch after this list). Conversely, we could see how far from optimal some providers are.

Because it’s UGC, there can be a robust mechanism for keeping information current (think Wikipedia) as well as handling disputes
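A toy sketch of the symptoms-to-treatment routing idea, using networkx. The nodes and weights are invented, and with no domain heuristic A* reduces to Dijkstra, which still returns the optimal route:

```python
import networkx as nx

# Invented toy graph: edge weights stand in for cost/risk of each step
G = nx.Graph()
G.add_edge('chest pain', 'ECG', weight=1)
G.add_edge('ECG', 'stress test', weight=2)
G.add_edge('ECG', 'angiography', weight=5)
G.add_edge('stress test', 'statins', weight=1)
G.add_edge('angiography', 'stent', weight=3)
G.add_edge('statins', 'treated', weight=1)
G.add_edge('stent', 'treated', weight=1)

optimal = nx.astar_path(G, 'chest pain', 'treated',
                        heuristic=lambda u, v: 0, weight='weight')
print(optimal)  # ['chest pain', 'ECG', 'stress test', 'statins', 'treated']

# A provider's actual route could be costed the same way and compared to the optimum
print(nx.path_weight(G, optimal, weight='weight'))  # 5
```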

Since it’s pledge week on WAMU, I was listening to KQED this morning, starting around 4:45 am. Somewhere around 5:30(?) they ran an environment section that talked about computer-generated hypotheses. Trying to run that down with no luck.

End-user–based framework approaches use different methods to allow for the differences between individual end-users for adaptive, interactive, or personalized assessment and ranking of UGC. They utilize computational methods to personalize the ranking and assessment process or give an individual end-user the opportunity to interact with the system, explore content, personally define the expected value, and rank content in accordance with individual user requirements. These approaches can also be categorized in two main groups: human-centered approaches, also referred to as interactive and adaptive approaches, and machine-centered approaches, also referred to as personalized approaches. The main difference between interactive and adaptive systems compared to personalized systems is that they do not explicitly or implicitly use users’ previous common actions and activities to assess and rank the content. However, they give users opportunities to interact with the system and explore the content space to find content suited to their requirements.

Looks like section 3.1 is the prior research part for the Pertinence Slider Concept.

Evaluating the algorithm reveals that enrichment of text (by calling out to search engines) outperforms other approaches by using simple syntactic conversion

This seems to work, although the dependency on a Google black box is kind of scary. It really makes me wonder what the links created by a search of each sentence (where the subject is contained in the sentence?) would look like, and what we could learn… I took the On The Media retweet of a Google Trends tweet [“Basta” just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate](https://twitter.com/GoogleTrends/status/707756376072843268) and fed that into Google, which returned:

4 results (0.51 seconds)

Hillary Clinton said 'basta' and America went nuts | Sun ...
national.suntimes.com/.../7/.../hillary-clinton-basta-cnn-univision-debate/
9 hours ago - America couldn't get enough of a line Hillary Clinton dropped during Wednesday night's CNN/Univision debate after she ... "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate.

Hillary is Asked If Trump is 'Racist' at Debate, But It Gets ...
https://www.ijreview.com/.../556789-hillary-was-asked-if-trump-was-raci...
"Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during #DemDebate. — GoogleTrends (@GoogleTrends) March 10, 2016.

Election 2016 | Reuters.com
live.reuters.com/Event/Election_2016?Page=93
Reuters
Happening during tonight's #DemDebate, below are the first three tracks: ... "Basta" just spiked 2,550% on Google search as @hillaryclinton said #basta during # ...

Good discussion with Wayne yesterday about getting lost in a car with a passenger.

A trapper situated in an environment who may not know where he is but is not lost is analogous to people exchanging information where the context is well understood, but new information is being created in that context. Think of sports enthusiasts or researchers. More discussion will happen about the actions in the game than the stadium it was played in. Similarly, the focus of a research paper is the results, as opposed to where the authors appear in the document. Events can transpire to change that discussion (the power failure at the 2013 Super Bowl, for example), but even then most of the discussion involves how the blackout affected gameplay.

Trustworthy does not mean infallible. GPS gets things wrong, but we still depend on it. It has very high system trust. Interestingly, a Google Search of ‘GPS Conspiracy’ returns no hits about how GPS is being manipulated, while ‘Google Search Conspiracy’ returns quite a few appropriate hits.

GPS can also be considered a potential analogy to how our information gathering behaviors will evolve. Where current search engines index and rank existing content, a GPS synthesises a dynamic route based on an ever-increasing set of constraints (road type, tolls, traffic, weather, etc). Similarly, computational content generation (of which computational journalism is just one of the early trailblazers) will also generate content that is appropriate for the current situation (“in 500 feet, turn right”). Imagine a system that can take a goal (“I want to go to the moon”) and create an assistant that constantly evaluates the information landscape to create a near-optimal path to that goal, with turn-by-turn directions.

Studying how to create Trustworthy Anonymous Citizen Journalism is important then for:

Recognising individuals for who they are rather than who they say they are

Synthesizing trustworthy (quality?) content from the patterns of information as much as the content (Sweden = boring commute, Egypt = one lost, 2016 Republican Primaries = lost and frustrated direction asking, etc). The dog that doesn’t bark is important.

Determining the kinds of user interfaces that elicit useful, trustworthy information from citizen reporters, and the interfaces and processes that organize, synthesise, curate and rank the content for the news consumer.

Providing a framework and perspective to provide insight into how computational content generation potentially reshapes Information Retrieval as it transitions to Information Goal Setting and Navigation.

Ok, Figure 3 is terrible. Blue and slightly darker blue in an area chart? Sheesh.

Here’s a nice nugget though regarding detecting fake reviews using machine learning: For assessing spam product reviews, three types of features are used [Jindal and Liu 2008]: (1) review-centric features, which include rating- and text-based features; (2) reviewer-centric features, which include author-based features; and (3) product-centric features. The highest accuracy is achieved by using all features. However, it performs as efficiently without using rating-based features. Rating-based features are not effective factors for distinguishing spam and nonspam because ratings (feedback) can also be spammed [Jindal and Liu 2008]. With regard to deceptive product reviews, deceptive and truthful reviews vary concerning the complexity of vocabulary, personal and impersonal use of language, trademarks, and personal feelings. Nevertheless, linguistic features of a text are simply not enough to distinguish between false and truthful reviews. (Comparison of deceptive and truthful travel reviews.) Here’s a later paper that cites the previous. Looks like some progress has been made: Using Supervised Learning to Classify Authentic and Fake Online Reviews

And here’s a good nugget on calculating credibility. Correlating with expert sources has been very important: Examining approaches for assessing credibility or reliability more closely indicates that most of the available approaches use supervised learning and are mainly based on external sources of ground truth [Castillo et al. 2011; Canini et al. 2011]—features such as author activities and history (e.g., a bio of an author), author network and structure, propagation (e.g., a resharing tree of a post and who shares), and topic-based features affect source credibility [Castillo et al. 2011; Morris et al. 2012]. Castillo et al. [2011] and Morris et al. [2012] show that text- and content-based features are themselves not enough for this task. In addition, Castillo et al. [2011] indicate that authors’ features are by themselves inadequate. Moreover, conducting a study on explicit and implicit credibility judgments, Canini et al. [2011] find that the expertise factor has a strong impact on judging credibility, whereas social status has less impact. Based on these findings, it is suggested that to better convey credibility, improving the way in which social search results are displayed is required [Canini et al. 2011]. Morris et al. [2012] also suggest that information regarding credentials related to the author should be readily accessible (“accessible at a glance”) due to the fact that it is time consuming for a user to search for them. Such information includes factors related to consistency (e.g., the number of posts on a topic), ratings by other users (or resharing or number of mentions), and information related to an author’s personal characteristics (bio, location, number of connections).

On centrality in finding representative posts, from Beyond Trending Topics: Real-World Event Identification on Twitter: The problem is approached in two concrete steps: first, by identifying each event and its associated tweets using a clustering technique that clusters together topically similar posts, and second, by selecting, for each event cluster, the posts that best represent the event. Centrality-based techniques are used to identify relevant posts with high textual quality that are useful for people looking for information about the event. Quality refers to the textual quality of the messages—how well the text can be understood by any person. Of three centrality-based approaches (Centroid, LexRank [Radev 2004], and Degree), Centroid is found to be the preferred way to select tweets given a cluster of messages related to an event [Becker et al. 2012]. Furthermore, Becker et al. [2011a] investigate approaches for analyzing the stream of tweets to distinguish between relevant posts about real-world events and nonevent messages. First, they identify each event and its related tweets by using a clustering technique that clusters together topically similar tweets. Then, they compute a set of features for each cluster to help determine which clusters correspond to events and use these features to train a classifier to distinguish between event and nonevent clusters.
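A small sketch of the Centroid selection step (my own rendering, not Becker et al.’s code): vectorize the cluster, average to get the centroid, and return the closest tweet.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def centroid_tweet(tweets):
    """Pick the tweet nearest the TF-IDF centroid of an event cluster."""
    m = TfidfVectorizer().fit_transform(tweets)
    centroid = np.asarray(m.mean(axis=0))         # 1 x terms mean vector
    sims = cosine_similarity(m, centroid).ravel()
    return tweets[int(np.argmax(sims))]

print(centroid_tweet(["power out at the stadium",
                      "stadium power failure delays game",
                      "whoa lights just went out"]))
```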

The new ground truth framework looks good. Saving outbound and inbound links is also worth doing.

Beware of low-percentage patterns. Finding the 1% answer is very hard for machine learning, while finding the 49% answer is much easier.

SVMs are probably a good way to start since they are resistant to overfitting

Multiple passes may be required to filter the data to get a meaningful result. Patterns like the .edu/.gov ratio may be very helpful
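A sketch combining the notes above, with invented features and toy labels rather than real data: an SVM over hand-built features like the .edu/.gov ratio, with class_weight='balanced' as one guard against the low-percentage-pattern problem.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Invented per-page features: [.edu/.gov link ratio, outbound links, avg sentence length]
X = [[0.40, 12, 21.0], [0.02, 180, 9.5],
     [0.33, 8, 18.2], [0.01, 240, 8.1]]
y = [1, 0, 1, 0]  # toy labels: 1 = credible, 0 = not

clf = SVC(kernel='rbf', class_weight='balanced')  # 'balanced' reweights rare classes
print(cross_val_score(clf, X, y, cv=2))           # tiny toy CV, illustration only
```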

The subReddit Change My View is an interesting UGC site that should provide good examples of information networks on both sides of a controversial point, and a measure of success. It would certainly be interesting to do a link analysis.

Relevant Text – In addition, I think we need a text area into which the user can paste text from the webpage that contains the match in context, or something that exemplifies the source characterisation

Notes – To cover anything that’s not covered above

So now Gregg is handling Crawl Service file generation?

Discussion with Katy and Jeremy about the list above?

Pondering how to adjust the ratingObject: everything is a string except content characterization, which can have multiple values. I could do a bitfield or a separate table. Leaning towards the bitfield.
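A quick sketch of what the bitfield option could look like; Python’s Flag here is just to show the packing (the actual ratingObject is Java), and the enum names are placeholders, not the real ontology:

```python
from enum import Flag, auto

class ContentCharacterization(Flag):
    FACTUAL = auto()       # 1
    OPINION = auto()       # 2
    PROMOTIONAL = auto()   # 4
    SATIRE = auto()        # 8

cc = ContentCharacterization.FACTUAL | ContentCharacterization.OPINION
stored = cc.value                          # 3 -- fits in one integer DB column
restored = ContentCharacterization(stored)
print(ContentCharacterization.OPINION in restored)  # True
```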

R. Preston McAfee currently works as chief economist at Microsoft. Previously, he was an economist at Google. Before that he was a Vice President and Research Fellow at Yahoo! Research where he led the Microeconomics and Social Systems group. Also has the ugliest home page I’ve seen since the late ’90s.

The proportional mechanism therefore improves upon the baseline mechanism by disincentivizing q = 0, i.e., it eliminates the worst reviews. Ideally, we would like to be able to drive the equilibrium qualities to 1 in the limit as the number of viewers, M, diverges to infinity; however, as we saw above, this cannot be achieved with the proportional mechanism.

This reflects my intuition. The lower the quality of the rating, the worse the proportional rating system is, and the lower the bar for quality for the contributor. The three places I can think of offhand that have high-quality UGC (Idea Channel, StackOverflow, and Wikipedia) all have people rating the data (contextually!!!) rather than a simple up/downvote.

Idea Channel – The main content creators read the comments and incorporate the best in the subsequent episode.

StackOverflow – Has become a place to show off knowledge, there are community mechanisms of enforcement, and the number of answers is low enough that it’s possible to look over all of them.

Others that might be worth thinking about:

Quora? – Seems to be an odd mix of questions. Some just seem lazy (how do I become successful) or very open ended (What kind of guy is Barack Obama?). The quality of the writing is usually good, but I don’t wind up using it much. So why is that?

Reddit – So ugly that I really don’t like using it. Is there a System Quality/Attractiveness as well as System Trust?

Slashdot – Good headline service, but low information in the comments. Occasionally something insightful, but often it seems like rehearsed talking points.

So the better the raters, the better the quality. How can the System evaluate rater quality? Link analysis? Pertinence selection? And if we know a rater is low-quality, can we use that as a measure in its own right?

Trying to test the redundant web page filter, but the URLs for most identical pages are actually slightly different:
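One hedged way to attack that: normalize URLs before comparing, stripping fragments and common tracking parameters so near-identical URLs compare equal (the parameter blocklist below is a guess, not exhaustive):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING = {'utm_source', 'utm_medium', 'utm_campaign',
            'utm_term', 'utm_content', 'fbclid'}

def normalize(url):
    """Canonical form: lowercase host, no trailing slash, no tracking params or fragment."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING]
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path.rstrip('/'), urlencode(sorted(query)), ''))

assert normalize('http://Example.com/a/?utm_source=x') == normalize('http://example.com/a')
```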

Thinking more about the economics of contributing trustworthy information. Recently, I’ve discovered the PBS Idea Channel, which is a show that explores pop culture with a philosophical bent (LA Times review here). For example, Deadpool is explored from a phenomenology perspective. But what’s really interesting, and seems unique to me, is the relationship of the show with its commenters. For each show, there is a follow-on show where the most interesting comments are discussed by the host, Mike Rugnetta. And the comments are surprisingly cogent and good. I think that this is because Rugnetta is acting like the anchor of an interactive news program where the commenters are the reporters. He sets up the topic, gets the ball rolling, and then incorporates the best comments (stories) to wrap up the story. Interestingly, in a recent comment section on aesthetics (which I can’t find now?), he brings up a comment about science and philosophy, invites the commenter into a deeper discussion, and also discusses the potential of an episode about that.

To get a flavor, here’s one of the longer comments (with 25 replies on its own) from the Deadpool show:

I could actually buy that DeadPool’s ability to understand the medium he’s in if it weren’t for one thing he does very often: references to our world. If his fourth wall breaks were limited to interacting with the panels, making quips and nods about the idea of “readers”, and joking about general comic book (or video game or movie) tropes, then I’d be on board with the idea that he is hyper-aware due to his constant physical torment and knowledge of his own perceptions. however, he somehow has knowledge of things that do not seem to exist in the world he inhabits, such as memes, pop culture references, and things like “Leeroy Jenkins”. His hypersensitivity can explain his knowledge of the medium he’s in (an integral part of the reality he inhabits), but I don’t see a way that it could explain him knowing about things that, as far as I’m aware, do not exist in his reality.

Compare that to the comments for the MIT OpenCourseWare intro to 6.034, which I ‘took’ and found well presented and deeply interesting, though not as flashy. Here’s a rough equivalent (with 21 replies):

wow ..it’s such an overwhelming feeling for a guy like me ..who had no chance in hell of ever getting into MIT or any other ivy’s to be able to listen and learn from this lectures online and that too free. :’)

To me, it seems like the Deadpool post is deeply involved with the subject matter of the episode, while the MIT comment is more typical of a YouTube comment in that it is more about the commenter and less about the content. This does imply that rewarding good commenting with inclusion in the show’s content can improve the quality and relevance of the comments.

To continue the ‘News Anchor’ thought from above, it might be possible to structure a news entity of some kind where different areas (sports, entertainment, local/regional, etc) could have their own anchors that produce interactive content with their commenters. Multimedia uploads from commenters and better navigation should probably also be supported, but this sounds more to me like a 21st-century news product than many other things that I’ve seen. It’s certainly the opposite of the Sweden paper.

This is starting to look like what I was trying to find. Nash Equilibrium. Huh. The model predicts, as observed in practice, that if exposure is independent of quality, there will be a flood of low quality contributions in equilibrium. An ideal mechanism in this context would elicit both high quality and high participation in equilibrium.

Need to add ‘change password’ option. Done. And now that I know my way around JPA, I like it a lot

Added role-based enabling of menu choices

The code base could really use a cleanup. We have the classic research->production problem…

Adding match/nomatch and blacklist queries. Note that blacklist needs to be by search engine

Finished match

Finished nomatch

Working on Blacklist

Create a loop that changes all the QueryObjects so that qo.getUnquotedName() is used, and persist them.

I think that this is more an issue of information economics. The incentives in social publication are honor, glory, and followers. Maybe some money from ad revenue sharing (though this is changing?). Traditional news media offers a more direct model, where the product (news) is sold to readers and/or advertisers so that the news-making product can be made.

Connectivism states that there is now an emphasis on learning how to find information as opposed to knowing the information (since information obsolescence happens more rapidly, the value of the information itself is lower than the knowledge of how to find current information).

Traditional news media tends to aggregate information to produce stories because that makes learning entertaining and worth the price paid (cash, or time watching commercials). However, if the friction of finding free alternatives to the story’s underlying information is low, then the value of the story becomes lower, since now all you’re paying for is a pleasing presentation.

Blogs and other free sources make this more difficult for the consumer, since what appears credible may not be, but may nonetheless be confused with an actual information source. Or, looking at confirmation bias, a free, pleasing story may have higher value for a consumer than a (non-free) well-researched story that disputes the reader’s beliefs.

There is also an emotional cost to checking rumors that you agree with: going to Snopes to find out that the politician you hate didn’t actually do that stupid thing you just saw in your feed. So the traditional few-channel media is being subsumed by networks that we construct to support our biases?

Banged away at the white paper. Done! Off to Key West for a long weekend!

Add an ‘I goofed’ button to the GUI (or maybe a ‘back’ button that lets you change the rating?)

Add more info that pops up for the medical provider.

Add an analytics app that looks for ratings that disagree, either as outliers (watch out for that reviewer) or as general disagreement (are we having problems with terms, fuzzy matching, or what?)

Add a second app that tags the ontology onto the ‘Flaggable Match’

Write up a guidance manual for edge conditions. Comes up when you click ‘help’

When a URL comes up that has already been reviewed more than N times and the reviews substantially match (a majority? That implies odd numbers of reviews) for the same provider, don’t run that result item; just add a copy of the rating object with the name ‘computed’.
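A sketch of that rule; the rating-object fields and the save hook are hypothetical stand-ins:

```python
from collections import Counter

N = 3  # minimum prior reviews before auto-filling

def maybe_autofill(url, provider, prior_ratings, save):
    """prior_ratings: rating values already recorded for this (url, provider) pair.
    Returns True if a 'computed' rating was saved and manual review can be skipped."""
    if len(prior_ratings) <= N:
        return False                              # not enough history; review normally
    value, count = Counter(prior_ratings).most_common(1)[0]
    if count <= len(prior_ratings) / 2:
        return False                              # no majority; review normally
    save({'url': url, 'provider': provider,
          'rating': value, 'reviewer': 'computed'})
    return True
```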

Need to see if I can get this on Monday: Rethinking Journalism: Trust and Participation in a Transformed News Landscape. Got the Kindle book.

Need to add a menubar to the GUI app that has a ‘data’ and a ‘queries’ tab. Data runs the data generation code. Queries has a list of questions that clears the output and then sends the results to the text area.

Still need to move the db to a server. Just realized that it could be a MySQL db on Dreamhost too. Having trouble with that. It might be the Eclipse jar? Here’s the hibernate jar location in maven:

Figured out how to use code families. Not obvious at all from the documentation (too many types of families!), but obvious once you see it. Just select one or more codes in the code manager, right-click in the ‘family’ pane, and select ‘New from Selected Items’.