But how do you actually do that (i.e., what does an algorithm
for minimizing regular expressions in PSPACE look like)?

There are many results showing that it's hard in different contexts (such as when
an equivalent DFA is available), but I found none showing how it's actually done.
Probably it's so easy that I'm the only one not getting it. :-)

This question appears to be off-topic. The users who voted to close gave this specific reason:

"Your question does not appear to be a research-level question in theoretical computer science. For more information about the scope, please see help center. Your question might be suitable for Computer Science which has a broader scope." – Kristoffer Arnsfelt Hansen, D.W., Kaveh, Mohammad Al-Turkistany, Sasho Nikolov

1 Answer

Finding the answer to your question is not overly difficult if one is used to proving PSPACE upper bounds. But I think one cannot find an answer to your question in the literature, so here it is:

Given a regular expression r of alphabetic width n, i.e., with n alphabetic letters, you can enumerate all regular expressions of alphabetic width 1, 2, 3, …, one by one (see the paper by Gruber et al. 2012), and test for each such candidate expression c whether L(c) = L(r). The first expression for which the test succeeds is clearly of minimum alphabetic width.
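As a toy illustration of this enumerate-and-test loop, here is a Python sketch. The choices below are all simplifications of mine, not part of the answer's algorithm: expressions are represented as Python `re` pattern strings, the alphabet is fixed to {a, b}, star is applied at most once per subexpression so that each width class stays finite, and equivalence is checked only on words up to a fixed length (a stand-in for the exhaustive test described later in the answer).

```python
import re
from itertools import product

ALPHABET = "ab"

def exprs_of_width(w, memo={}):
    """All regexes (as re-syntax strings) with exactly w letter occurrences.
    Star is applied at most once per subexpression to keep each class finite
    (a simplification; real enumeration needs a canonical form)."""
    if w in memo:
        return memo[w]
    if w == 1:
        base = set(ALPHABET)
    else:
        base = set()
        for i in range(1, w):
            for l, r in product(exprs_of_width(i), exprs_of_width(w - i)):
                base.add(f"(?:{l}{r})")   # concatenation
                base.add(f"(?:{l}|{r})")  # union
    base |= {f"(?:{e})*" for e in base}   # Kleene star
    memo[w] = base
    return base

def equivalent(c, r, max_len=6):
    """Brute-force comparison on all words of length <= max_len.
    Only sound up to the bound; a toy stand-in for a real equivalence test."""
    pc, pr = re.compile(c), re.compile(r)
    return all(
        bool(pc.fullmatch(word)) == bool(pr.fullmatch(word))
        for n in range(max_len + 1)
        for word in map("".join, product(ALPHABET, repeat=n))
    )

def minimize(r):
    """Enumerate candidates of alphabetic width 1, 2, 3, ... and return the
    first one equivalent to r; its width is minimal by construction."""
    w = 1
    while True:
        for c in sorted(exprs_of_width(w)):
            if equivalent(c, r):
                return c
        w += 1
```

For instance, `minimize("(?:a|a)")` finds the width-1 expression `a`. The outer loop is exactly the enumeration by increasing alphabetic width; everything interesting is hidden in the equivalence test.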

Apart from the test for language equivalence, this can be implemented using "only" linear space (and using exponential time).

The regular expression equivalence test can be done by converting r and c each to a nondeterministic finite automaton and checking whether these automata are equivalent. The latter can be done in PSPACE as follows (this algorithm is mentioned in the paper by Bonchi and Pous 2013): for each word $w$ of length at most $2^m$, where $m$ is the total number of states of the two automata, test whether $w \in L(r)$ if and only if $w \in L(c)$. If all of these (exponentially many) tests succeed, then $L(r) = L(c)$. Since each candidate word can be generated symbol by symbol using a counter, only polynomial space is needed.
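A direct transcription of this word-by-word test might look as follows. The NFA representation is an assumption of mine: a transition dict mapping `(state, symbol)` to a set of states, plus start states, accepting states, and the state count. Note that this version enumerates the words explicitly, so it runs in exponential time and is practical only for tiny automata; a careful PSPACE implementation would generate each word symbol by symbol with a counter instead of storing it.

```python
from itertools import product

def nfa_accepts(delta, starts, finals, word):
    """Standard NFA simulation: track the set of currently reachable states."""
    current = set(starts)
    for sym in word:
        current = set().union(*(delta.get((q, sym), set()) for q in current))
    return bool(current & finals)

def nfas_equivalent(nfa1, nfa2, alphabet):
    """Compare two NFAs on all words of length below 2**(n1 + n2).

    If the languages differ, the product of the two determinized automata
    (at most 2**n1 * 2**n2 states) accepts a distinguishing word shorter
    than that bound, so agreement on all such words implies equivalence."""
    d1, s1, f1, n1 = nfa1
    d2, s2, f2, n2 = nfa2
    for length in range(2 ** (n1 + n2)):
        for word in product(alphabet, repeat=length):
            if nfa_accepts(d1, s1, f1, word) != nfa_accepts(d2, s2, f2, word):
                return False
    return True
```

As a usage example, two different two-state NFAs for "words over {a, b} ending in a", say `({(0,'a'): {0, 1}, (0,'b'): {0}}, {0}, {1}, 2)` and `({(0,'a'): {1}, (0,'b'): {0}, (1,'a'): {1}, (1,'b'): {0}}, {0}, {1}, 2)`, are reported equivalent.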

Notes

The paper by Gruber et al. (2012) also covers the computation of the number of regular languages whose minimal regular expression is of size n, for small values of n. This actually uses an implementation of regular expression minimization. There, the test for language equivalence is not done in polynomial space as described above, but by computing the minimal DFA, which requires exponential space in the worst case. But there are more practical ways of testing the equivalence of NFAs (although all practical approaches seem to use exponential space as well); see
http://languageinclusion.org/
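For completeness, a determinization-based test of the kind this note mentions can be sketched as follows (again with a hand-rolled NFA representation of my own: a transition dict plus start and accepting state sets). It runs the subset construction on both automata on the fly and searches the product of the two resulting DFAs for a reachable pair of subset states that disagree on acceptance, using exponential space in the worst case, exactly as the note says.

```python
from collections import deque

def nfas_equivalent_via_subsets(nfa1, nfa2, alphabet):
    """Equivalence test by on-the-fly subset construction.

    The languages differ iff some reachable product state (a pair of
    subsets) disagrees on acceptance; the BFS may visit exponentially
    many subset pairs in the worst case."""
    (d1, s1, f1), (d2, s2, f2) = nfa1, nfa2

    def step(delta, subset, sym):
        # Deterministic successor of a subset state under one symbol.
        return frozenset().union(*(delta.get((q, sym), ()) for q in subset))

    start = (frozenset(s1), frozenset(s2))
    seen = {start}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if bool(a & f1) != bool(b & f2):
            return False  # a distinguishing word reaches this pair
        for sym in alphabet:
            nxt = (step(d1, a, sym), step(d2, b, sym))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

Unlike the word-enumeration test, this explores at most one BFS node per pair of subset states, so it is usually much faster in practice, at the price of the exponential-space `seen` set.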

"Okay, except for the equivalence test, this is certainly straightforward. I actually thought in that direction but somehow didn't manage to put it together. Thanks! (Btw., there is an icy wind blowing here in terms of downvotes...)"
– lukas.coenig May 31 '15 at 6:00