Is there a way to link to a category? My idea is to have a lot of categories, but only link to the most important ones from the homepage. I couldn’t figure out any way to do that.

Not sure what you are after. Perhaps you mean to link to the page listing all pages in category ‘foo’. The URL for that is /list/foo.
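
So, on the homepage, you’d just write ordinary links to the listings for the categories you care about. In the wiki’s Markdown, something like (category name assumed):

    Browse all pages in the [foo category](/list/foo).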

Macros for itex: Is there any way to define per-page or global macros?

No, though this is a much-discussed question.

Errors for itex: So far, it seems that if a TeX expression doesn’t compile, you just get the source rendered, with no indication of where the error is. With a long equation, that leaves me hunting through the whole thing for the missing bracket or parenthesis. Is there something I’m missing?

I agree that itex’s error-reporting is pretty useless. Depending on the type of error (a missing brace, say), LaTeX’s is often not much better. Here, at least, you know which equation to look at for the error, as each equation is parsed separately, and errors can’t spill over as they sometimes do in LaTeX.

Linking and/or embedding local files: I am running Instiki locally, but I am syncing the whole thing online, so I can use it from more than one computer. I often use Xournal (on a tablet PC) to take notes/do calculations. While Instiki is fine for high-level results or summaries, for long/messy calculations it’s a lot faster to just hand-write them in Xournal. Ideally, I want to be able to link to a Xournal file from Instiki and have some quick way of viewing or editing it. Right now, it seems that the only way is to use a file:/// URL, but that requires syncing two things separately, and making sure the URLs make sense on every computer I am using.

You can upload files to Instiki and link to them from within a page. That probably doesn’t help you very much from the point of view of syncing between different computers (as each Instiki installation will have its own set of uploaded files).
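
For reference, the wiki markup for linking to an uploaded file is something like the following (the filename is just an example); if no file of that name has been uploaded yet, following the link offers an upload form:

    [[calculations.xoj:file]]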

Editing SVG graphics: This is something that I’m pretty sure is a bug: unless there is blank space before and after the svg tags, the “Edit SVG graphic” button doesn’t show up.

I think it doesn’t like “<svg” as the first characters on a page. But I have not had any trouble if the graphic is in the middle of the text.

How can I adjust the way math is rendered? I am using Firefox 11 under Linux. All math is much smaller than the surrounding text (e.g., a rendered 0 is only as tall as a lower-case letter). I imagine there is some CSS option to change the font size for math, but I couldn’t figure out what it is.

That’s strange. I am using Firefox on a Mac, and see no such inconsistency. Do you have the STIX fonts installed?
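
If the fonts are in place and the math is still too small, a user-stylesheet rule along these lines might compensate. This is an untested sketch: the @namespace line is there because Instiki serves XHTML, and 120% is an arbitrary value to adjust to taste:

    @namespace m url(http://www.w3.org/1998/Math/MathML);
    m|math { font-size: 120%; }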

I have some programming experience, though I haven’t used Ruby before. I am willing to try to add some of these features myself, if you can give me some pointers as to where to start.

Contributions are always welcome. The source repository is available both through bzr and on GitHub.

Is it obvious why the spiders aren’t just hitting the cache (in which case, they should not slow down the system at all)?

Are they asking for all revisions of some page (or some such), which would entail a large percentage of cache misses?

I ask, just because it seems to me that, if they are operating correctly, spiders shouldn’t lead to an undue slowdown. Maybe I’ve been remiss about

<meta name="robots" content="noindex,nofollow" />

directives.
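
Alongside the meta directives, a robots.txt along these lines would keep well-behaved spiders away from the expensive dynamic views (the paths here are hypothetical, not the nLab’s actual routes):

    User-agent: *
    Disallow: /history/
    Disallow: /revision/
    Crawl-delay: 10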

In any case, is it clear that your 3-queue scheme is better than having one queue with a larger number of worker processes? (I.e., do these spiders insist on making multiple simultaneous connections, or do they access the nLab serially?)

If the page name doesn’t change, surely then you don’t have to expire any of the pages that refer to it?

You do, for a newly-created page… but not, I agree, for a revision of an existing page. I was, somewhat crudely, not distinguishing between those cases. It occurs to me that I can use an after_create hook to tell them apart.
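
Schematically, something like this (a sketch only; the callback wiring is standard Rails, but the sweeper class and method names are stand-ins, not Instiki’s actual code):

    class Page < ActiveRecord::Base
      # Newly-created page: links to it elsewhere may now resolve,
      # so expire every page that references this name.
      after_create :expire_referring_pages

      # Revision of an existing page: referring pages are unaffected,
      # so only this page (and the index pages) need expiring.
      after_update :expire_this_page

      private

      def expire_referring_pages
        PageSweeper.expire_references_to(name)
      end

      def expire_this_page
        PageSweeper.expire(name)
      end
    end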

It seems to be the list and recently_revised ones that get expired several times in a row.

That would be a consequence of the two expiry rules:

1. When a page is saved, expire all pages that reference that page.
2. When you expire a page, also expire the corresponding “index pages” (list, recently-revised, atom feeds).

The first rule is further complicated by the facility for renaming pages. That means we need to expire all the pages that refer to the old page name and all the pages that refer to the new one.

I guess that could be optimized for the case where the page doesn’t change names, as we don’t have to expire the same pages twice. I think the current procedure was motivated by complaints (from y’all) that, in some circumstances, pages were not being expired when they should have been.
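
The deduplication would look something like this (a sketch; the helper names are made up):

    def expire_referrers(old_name, new_name)
      referrers = pages_referring_to(old_name)
      # Array#| is a set union, so shared referrers are expired only once,
      # and the second lookup is skipped when the name is unchanged.
      referrers |= pages_referring_to(new_name) unless new_name == old_name
      referrers.each { |page| expire_cached_page(page) }
    end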

Because of that last, my guess is that the pre-upload version is still in the cache, but that the page shown when the file is uploaded isn’t served from the cache.

You’re probably correct. The rule is that pages with Flash messages on them (like the one that tells you that the file was successfully uploaded) are not cached. So you get to see the correct page once; but if the incorrect one wasn’t deleted from the cache, that’s what you’ll see the second time.
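
The rule amounts to something like this at the controller level (a sketch, not Instiki’s actual code):

    class ApplicationController < ActionController::Base
      after_filter :cache_unless_flash

      private

      # Write the rendered page to the page-cache only when it carries
      # no Flash message; flashed pages are always rendered fresh.
      def cache_unless_flash
        cache_page if flash.empty?
      end
    end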

I upgraded Heterotic Beast to Rails 3.1.0. Despite all my prior testing, the process didn’t go as smoothly as I would have liked, and this forum was pretty disrupted for most of Friday.

Should be back to normal now. But leave a comment here if something’s still broken for you.

The main new feature is the asset pipeline, which supposedly speeds up the delivery of static files (CSS, JavaScript, and images). Unfortunately, the result seems buggy.

The reference

"#{asset_path('something.png')}"

sometimes turns into (the correct)

"/forum/assets/something-5c4374aa4b1911ebbabb73883b3cd5c0.png"

and sometimes it turns into (the incorrect)

"/assets/something-5c4374aa4b1911ebbabb73883b3cd5c0.png"

I’m using some Apache-fu to redirect the latter, but that shouldn’t be necessary.
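
For concreteness, a rewrite along these lines would do that redirect (a sketch; the /forum prefix is just this installation’s mount point):

    RewriteEngine On
    RewriteRule ^/assets/(.*)$ /forum/assets/$1 [PT,L]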

I had to switch to Sass (from .css.erb) to get URLs for background images to include the fingerprint. I.e., within a .css.erb file, the above generates

"/assets/something.png"

Other minor bugs include:

The acts_as_state_machine gem uses some deprecated methods, which generate a warning in the User model. There’s a Rails 3.1 fork which fixes the problem. But it’s unclear when, if ever, that will be released as a gem.

An element with content in it would not have triggered the bug. Only empty elements (which get converted to short-tag syntax, <a id="anchor"/>, in the output) triggered this bug. Since you probably don’t want empty a or code elements (they are perfectly correct in XHTML, but wreak havoc when the same document is parsed as HTML), you probably didn’t want the problematic (if you prefer that to useless) empty elements in the first place.

Deleting the only post in a topic also deletes the topic. Possibly, the redirect (which normally goes to topic#show but, in this case, should go to forum#show, because the topic no longer exists) is incorrect.
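
If that’s right, the fix is roughly the following, in the posts controller (a sketch; Heterotic Beast’s actual code and route helpers may differ):

    def destroy
      topic = @post.topic
      @post.destroy  # destroying a topic's only post destroys the topic, too
      if Topic.exists?(topic.id)
        redirect_to topic_path(topic)
      else
        # The topic is gone, so fall back to its parent forum.
        redirect_to forum_path(topic.forum_id)
      end
    end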

Thinking about Recently Revised and All Pages: you suggested (somewhere) taking them out of the sweeper as a way of stopping them being regenerated every time a page is edited (I don’t know if this was one of your “If you’re going to do something crazy, here’s a way of limiting how crazy you’re going to be” suggestions, or if you thought this was actually a good idea).

The former. You’re trading off workload on the server for stale data. Since computers are supposed to serve humans, rather than the other way around, the question is: does this improve the user experience?

Say you implement the above suggestion. On the one hand, the user always (or almost always, depending on implementation) receives the cached page, i.e. gets a quick response. On the other hand, the data is invariably stale.

Leaving these alone, the user is guaranteed to receive fresh data, but there could be a significant delay if the page has to be regenerated. What percentage of requests for these pages hit the cache?

A better solution is to pull in the will_paginate gem and paginate the data returned. That makes the request O(1) again, instead of O(N). So the user gets both a quick response and up-to-date data.
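
Schematically (a sketch using will_paginate’s Rails-3-style API; the model and action names are assumptions):

    # GET /recently_revised?page=2
    def recently_revised
      @pages = Page.order('updated_at DESC').
                    paginate(:page => params[:page], :per_page => 50)
    end

Each request then touches only :per_page records, independent of the total number of pages in the wiki.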

As to moving away from Maruku to some peg-markdown-based clone, note that this will benefit Heterotic Beast as well.

The task I’m asking of you guys is not “programming,” per se. It involves writing a formal PEG grammar for Maruku’s extended Markdown syntax (starting with the existing PEG grammar already in peg-markdown).
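
To give the flavour of what that involves: a rule in peg-markdown’s leg grammar pairs a parsing expression with a C semantic action. The rule below is purely hypothetical (stock peg-markdown has no MATH element type); it is only meant to suggest how an inline itex span might look:

    # hypothetical rule for inline itex math, in leg syntax
    InlineMath = '$' < ( !'$' . )+ > '$'
                 { $$ = mk_str(yytext); $$->key = MATH; }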

On the other hand, if there were someone handy in C, that would be most appreciated too, because I am crappy at C. Hooking in itex2MML would take mere minutes for a competent C programmer.

I imagine that such a fork of peg-markdown (as it’s written in C) would be useful in your other projects as well.

I did a lot of profiling this weekend, and produced a few tweaks to Maruku’s parsing, which sped it up a little.

Unfortunately, the main discovery was that (with that test page as input) 3/4 of Maruku’s time is spent in the #to_html output method; only 1/4 is spent in parsing the original input. Thus my efforts, which maybe improved the parsing speed by 5%, contributed at best a 1% speedup to the total Instiki processing time, i.e., something you would never notice.
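
A measurement along these lines, using the ruby-prof gem, shows that split (a sketch; the input filename is a placeholder):

    require 'rubygems'
    require 'ruby-prof'
    require 'maruku'

    input  = File.read('test_page.txt')
    result = RubyProf.profile do
      doc = Maruku.new(input)  # parsing: roughly 1/4 of Maruku's time
      doc.to_html              # output:  roughly 3/4 of Maruku's time
    end
    RubyProf::FlatPrinter.new(result).print(STDOUT)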

I hope that one of you guys finds formal grammars sufficiently “categorical” to be worthy of a small bit of their attention.