Hi Joe,
[...]
> Thanks. I revised that function so that it looks like this:
>
> (defun semantic-default-simple-setup ()
> "Set up a buffer for semantic parsing of a SIMPLE language."
> (setq semantic--parse-table t
> document-comment-start "/*"
> document-comment-end " */"
> ))
>
> But upon running
>
> M-: (semantic-parse-region (point-min) (point-max))
>
> or
>
> M-: (semantic-parse-region (point-min) (point-max) 'expr)
>
> where `expr' is my start symbol, the following error is triggered.
>
> Debugger entered--Lisp error: (wrong-type-argument listp t)
[...]
This is because you set your `semantic--parse-table' to `t' instead
of a list (for the bovinator) or a vector (for wisent).
As you didn't install a specific parser in your setup function, the
default bovinator is used, and it fails with a "wrong-type-argument"
error because the parser table value `t' does not satisfy the
expected predicate `listp'.
Your setup function above should instead look like this:
(defun semantic-default-simple-setup ()
"Set up a buffer for semantic parsing of a SIMPLE language."
(simple-wy--install-parser)
(setq document-comment-start "/*"
document-comment-end " */"
))
where `simple-wy--install-parser' is a function generated from your
simple.wy grammar; it sets up the wisent parser for you, based on
what is generated in the simple-wy.el library.
You will probably also need to add (require 'simple-wy) to your
semantic-simple library.
Good luck!
David

> In fact, you're missing a lexer definition, based on lexical rule
> analyzers auto-generated from the %type statements in your grammar.

For some reason I thought it would be totally auto-generated - as
in, no need to write anything extra. Oops!

> Also there is no need to override `semantic-parse-region' in
> simple-mode, because you use the default!
>
> I hacked your grammar & simple-mode support code a little, and the
> parser now works well with this example.simple file:
Thanks a bunch - it is indeed working just fine for me now.
If it seems to you (as it does to me) that these files might help
other people who want to see a bare-bones setup, you might include
them in the semantic distribution.
I have a few questions at this point. The wisent-calc.wy doesn't
produce the same kind of tags as you suggested here:
;; For use with Semantic, must return valid semantic tags!
expr
: ;; empty
| symbol
(TAG "expr" 'expr :value $1)
| symbol PLUS symbol
(TAG "expr" 'expr :value (concat $1 $2 $3))
;
Rather, many of the examples in the documentation for wisent
produce things like `(cons $1 $2)' or `(+ $1 $3)'.
Since I want to "grow" this parser into an interpreter for a simple
programming language (Tiger), it seems like it would be nice to get
some usable LISP code generated at this phase. What do you suggest
for this?
Suppose that, following the pattern established in wisent-calc.wy, I
define
expr
: ;; empty
| symbol
(string-to-number $1)
| symbol PLUS symbol
(+ $1 $3)
;
Then I feed in the example input
10 + 11
with bovinate and get `nil'. (I might naively expect to get
something like `21' or `(+ 10 11)' instead.)
Even the kluge
expr
: ;; empty
| symbol
(concat $1)
| symbol PLUS symbol
(concat "(+ " $1 " " $3 " )")
;
doesn't work, though this seems to be very much like the code that
appears in wisent-java.wy.
Of course, I can write
expr
: ;; empty
| symbol
(TAG "expr" 'expr :value $1)
| symbol PLUS symbol
(TAG "expr" 'expr :value (concat "(+ " $1 " " $3 ")"))
;
and *this* will work with bovinate, but then I'd be curious to know
what the preferred method is for operating on the output of
bovinate.
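(For what it's worth, here is my current guess at how to post-process
the parser output, using what I understand of semantic's tag API.
This is just a sketch - `my-collect-expr-values' is a name I made up -
so corrections are welcome:

(defun my-collect-expr-values ()
  "Collect the :value attribute of every expr tag in this buffer."
  ;; `semantic-fetch-tags' reparses if needed and returns the tag list.
  (let (values)
    (dolist (tag (semantic-fetch-tags))
      (when (eq (semantic-tag-class tag) 'expr)
        (push (semantic-tag-get-attribute tag :value) values)))
    (nreverse values)))

Is something along these lines what's intended?)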
My next question is this - even with document-comment-start and
document-comment-end set up as shown here --
(defun semantic-default-simple-setup ()
"Set up a buffer for semantic parsing of a SIMPLE language."
;; Install the parser
(simple-wy--install-parser)
;; Setup the lexer
(setq semantic-lex-analyzer 'simple-lexer
;; Do a full depth lexical analysis.
semantic-lex-depth nil)
;; Other useful things.
(setq document-comment-start "/*"
document-comment-end " */"))
and the apparent instruction to ignore comments as given here
(define-lex simple-lexer
"Simple lexical analyzer."
semantic-lex-ignore-whitespace
semantic-lex-ignore-newline
semantic-lex-ignore-comments
;;;; Auto-generated analyzers.
simple-wy--<symbol>-regexp-analyzer
simple-wy--<punctuation>-string-analyzer
;;;;
semantic-lex-default-action)
Parsing the following input
/* this is a comment */
10 + 11
1000 + 1
produces this as output:
(("expr" expr
(:value "this")
nil #<overlay from 4 to 8 in example.simple>)
("expr" expr
(:value "is")
nil #<overlay from 9 to 11 in example.simple>)
("expr" expr
(:value "a")
nil #<overlay from 12 to 13 in example.simple>)
("expr" expr
(:value "comment")
nil #<overlay from 14 to 21 in example.simple>)
("expr" expr
(:value "(+ 10 11)")
nil #<overlay from 26 to 33 in example.simple>)
("expr" expr
(:value "(+ 1000 1)")
nil #<overlay from 35 to 43 in example.simple>))
How do I ensure that things between /* and */ are ignored? (Is that
space character in the definition of document-comment-end relevant?)
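(An untested guess: `semantic-lex-ignore-comments' relies on Emacs'
syntax-driven `forward-comment', so perhaps simple-mode's syntax table
needs the C-style comment flags, and `document-comment-start' is only
documentation metadata. Something like the following, where
`simple-mode-syntax-table' stands for whatever syntax table
simple-mode actually uses:

;; "/" is the first char of "/*" and the last char of "*/";
;; "*" is the second char of "/*" and the first char of "*/".
(modify-syntax-entry ?/ ". 14" simple-mode-syntax-table)
(modify-syntax-entry ?* ". 23" simple-mode-syntax-table)

Does that sound right?)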

Joe,
After David's answer, do you recall what led to your confusion on
how wisent worked? Is there something obvious we could do to prevent
this confusion for the next person?
Thanks
Eric
>>> David PONCE <david.ponce@...> seems to think that:
>Hi Joe,
>
>[...]
>> I have set it up as you suggested, but this time upon running
>> (semantic-parse-region (point-min) (point-max)) I just get `nil'.
>>
>> I'd appreciate it if you could take a look. I seem to have gotten
>> down to the bare bones here, and I think I've followed all your
>> instructions, and it still doesn't seem to be working as expected.
>>
>> Everything I'm working with, including the auto-generated parser
>> is below. I hope I'm just making a simple error.
>
>In fact, you're missing a lexer definition, based on lexical rule
>analyzers auto-generated from the %type statements in your grammar.
>
>Also there is no need to override `semantic-parse-region' in
>simple-mode, because you use the default!
>
>I hacked your grammar & simple-mode support code a little, and the
>parser now works well with this example.simple file:
[ ... ]
--
Eric Ludlam: zappo@..., eric@...
Home: http://www.ludlam.net Siege: http://www.siege-engine.com
Emacs: http://cedet.sourceforge.net GNU: http://www.gnu.org

> After David's answer, do you recall what led to your confusion on
> how wisent worked? Is there something obvious we could do to prevent
> this confusion for the next person?
Well, as I recently noted, there is at least one thing I'm still
confused about: what is the correct way to produce output from the
parsing system?
David suggested writing my wisent input in the following style:
;; For use with Semantic, must return valid semantic tags!
expr
: ;; empty
| symbol
(TAG "expr" 'expr :value $1)
| symbol PLUS symbol
(TAG "expr" 'expr :value (concat $1 $2 $3))
;
But in wisent-java.wy I see things like
type_parameters
: LT type_parameter_list_1
(progn $2)
;
type_parameter_list
: type_parameter_list COMMA type_parameter
(cons $3 $1)
| type_parameter
(list $1)
;
Similarly, but more dramatically, wisent-calc.wy actually does the
computations that the user types in!
So my question is - what is the correct protocol to use for
producing output from the parser? Recall that my goal is to use
wisent to transform code for another programming language (Tiger)
into Elisp. (It might be fun to make the language work with
semantic, but that is not a high priority goal for me at this time
-- not for this particular language anyway.)
Other than this question, I think my first point of confusion was
that I thought that if the user (me) didn't supply a lexer, then
some default lexer would be used. That apparently was just
incorrect. Another question would be, why is wisent better to use
as a parser than bovine? (Maybe the full answer is more technical
than I really need to know, but I'm still interested to know
something about this question.)
I think my confusion would be massively reduced, if not prevented
altogether, by documentation that contains a step-by-step guide
(tutorial, maybe) for such things as writing a working wisent
parsing system and getting this system to work with semantic.
I would definitely be willing to help write this documentation if
you like -- but since I'm still riding the learning curve, I'm not
sure if that would be the best thing to do. Still, I'd be happy
to contribute my code (once it is working), and that might be helpful
to others.
To sum up: more thorough documentation almost always reduces
confusion!

> After David's answer, do you recall what led to your confusion
> on how wisent worked? Is there something obvious we could do to
> prevent this confusion for the next person?
More advice:
This is taken from the GNU coding standards:
The only good way to use documentation strings in writing a good
manual is to use them as a source of information for writing good
text.
Bearing this in mind, you might take another look at
File: semantic-appdev.info, Node: Tag Query
File: semantic-appdev.info, Node: Breadth Search
File: grammar-fw.info, Node: Specialized Implementation
etc. from the point of view of a novice user. Such sections, and
indeed all of your documentation, could be made much more friendly
if the following principle, also from the GNU coding standards, was
adhered to throughout:
At every level, from the sentences in a paragraph to the grouping
of topics into separate manuals, the right way to structure
documentation is according to the concepts and questions that a
user will have in mind when reading it.
I have a lot of questions that I'm not sure are actually answered in
the documentation at all - at least, not anywhere I could find easily.
I would be happy to help identify more of these user-level questions
for you - and I hope my previous postings have highlighted a couple
of the key ones.
I'm now stuck on what seem to be a couple of relatively
minor problems:
1. How to deal with comments?
2. Why does the rule
| LPAREN WHILE RPAREN
(TAG "expr" 'expr :value (concat "(" "foo" ")"))
not work? (This is drawn from a slightly more complex
parser-generator file than the one I posted earlier... the new file
is below.) This should work with the simple-mode.el file you
already have.
;;; simple.wy -- LALR grammar for (simplified) Tiger
%package simple-wy
;; Not really necessary, as it is the default start symbol
%start expr
%type <punctuation>
%token <punctuation> PLUS "+"
%type <symbol>
%token <symbol> symbol "[A-Za-z][_A-Za-z0-9]*"
%type <string>
%token <string> STRING_LITERAL
%type <number>
%token <number> NUMBER_LITERAL
%type <keyword>
%keyword ARRAY "array"
%keyword break "break"
%keyword DO "do"
%keyword ELSE "else"
%keyword END "end"
%keyword FOR "for"
%keyword FUNCTION "function"
%keyword IF "if"
%keyword IN "in"
%keyword LET "let"
%keyword NIL "nil"
%keyword OF "of"
%keyword THEN "then"
%keyword TO "to"
%keyword TYPE "type"
%keyword VAR "var"
%keyword WHILE "while"
%token <open-paren> LPAREN "("
%token <open-paren> LBRACE "{"
%token <open-paren> LBRACK "["
%token <close-paren> RPAREN ")"
%token <close-paren> RBRACE "}"
%token <close-paren> RBRACK "]"
%%
;; For use with Semantic, must return valid semantic tags!
expr
: ;; empty
| STRING_LITERAL
(TAG "string" 'expr :value $1)
| NUMBER_LITERAL
(TAG "number" 'expr :value $1)
| symbol
(TAG "expr" 'expr :value $1)
| symbol PLUS symbol
(TAG "expr" 'expr :value (concat "(+ " $1 " " $3 ")"))
| NIL
(TAG "expr" 'expr :value $1)
| LPAREN WHILE RPAREN
(TAG "expr" 'expr :value (concat "(" "foo" ")"))
;
%%
(define-lex simple-lexer
"Simple lexical analyzer."
semantic-lex-ignore-whitespace
semantic-lex-ignore-newline
semantic-lex-ignore-comments
semantic-lex-open-paren
semantic-lex-close-paren
;;;; Auto-generated analyzers.
simple-wy--<punctuation>-string-analyzer
simple-wy--<block>-block-analyzer
simple-wy--<symbol>-regexp-analyzer
simple-wy--<string>-sexp-analyzer
simple-wy--<number>-regexp-analyzer
simple-wy--<keyword>-keyword-analyzer
;;;;
semantic-lex-default-action)
;;; simple.wy ends here

Hi,
>>> Joe Corneli <jcorneli@...> seems to think that:
> After David's answer, do you recall what led to your confusion
> on how wisent worked? Is there something obvious we could do to
> prevent this confusion for the next person?
>
>More advice:
>
>This is taken from the GNU coding standards:
>
> The only good way to use documentation strings in writing a good
> manual is to use them as a source of information for writing good
> text.
>
>Bearing this in mind, you might take another look at
>
> File: semantic-appdev.info, Node: Tag Query
> File: semantic-appdev.info, Node: Breadth Search
> File: grammar-fw.info, Node: Specialized Implementation
>
>etc. from the point of view of a novice user. Such sections, and
>indeed all of your documentation, could be made much more friendly
>if the following principle, also from the GNU coding standards, was
>adhered to throughout:
>
> At every level, from the sentences in a paragraph to the grouping
> of topics into separate manuals, the right way to structure
> documentation is according to the concepts and questions that a
> user will have in mind when reading it.
>
>I have a lot of questions that I'm not sure are actually answered in
>the documentation at all. Not that I could find easily anyway.
>
>I would be happy to help identify more of these user-level questions
>for you - and I hope my previous postings have highlighted a couple
>of the key ones.
[ ... ]
I think you will find everyone here contributing to the doc is an
engineer these days, though Richard Kim did help a great deal.
For me, at least, documentation is an engineering challenge. How
does it work, how to avoid duplication of effort, etc. Building a
reference manual automatically from sources seems like the way to go
for me. I then spend documentation authoring time (when I have some)
trying to improve the doc strings of my lisp functions.
The manuals are largely just catalogs of things. Complicated bits
tend to go into auto-generated code which David has been working on,
or a skeleton (-skel.*) which can be copied as a starting point for a
new language.
Over the years it has become apparent to me that I am not a good
documentation writer. If anyone out there wishes to help resolve
these issues via the contribution of text and structure, please let
me know.
Eric

> Over the years it has become apparent to me that I am not a good
> documentation writer. If anyone out there wishes to help resolve
> these issues via the contribution of text and structure, please
> let me know.
Your email responses to my questions have generally been quite
helpful. Maybe by looking in the mailing list archives you'll see
other answers to users' questions that could go into the
documentation. I think that, if possible, it's good to take a look
at what questions people are actually asking and make these the
central focus of the documentation.

>>> Joe Corneli <jcorneli@...> seems to think that:
>
> Over the years it has become apparent to me that I am not a good
> documentation writer. If anyone out there wishes to help resolve
> these issues via the contribution of text and structure, please
> let me know.
>
>Your email responses to my questions have generally been quite
>helpful. Maybe by looking in the mailing list archives you'll see
>other answers to users' questions that could go into the
>documentation. I think that, if possible, it's good to take a look
>at what questions people are actually asking and make these the
>central focus of the documentation.
If you look in the .texi file, you will find some of those email
messages in comment blocks, waiting to be better described there.
This was graciously done for us by Richard Kim.
Eric

Here are a couple more simple questions. I noticed that when I (1)
change the grammar file, (2) run `semantic-grammar-create-package'
(C-c C-c) and load the resulting generated parser support file, (3)
bovinate the example source file that I'm working with, the changes
to the grammar are not always reflected in the *Parser Output*
buffer. When this happens, (4) I have to kill the source code
buffer and (5) open it again, then (6) re-bovinate. Is this the
expected behavior? Is there a shortcut I can use to make sure that
the current parser is being used when I bovinate?
Second question: after repeating (1)-(3) and/or (1)-(6), my whole
system (X window manager and Emacs) becomes much more sluggish than
usual. Has anyone seen anything like this happen before? Where might
the slow-down be coming from?

>>> Joe Corneli <jcorneli@...> seems to think that:
>Here are a couple more simple questions. I noticed that when I (1)
>change the grammar file, (2) run `semantic-grammar-create-package'
>(C-c C-c) and load the resulting generated parser support file, (3)
>bovinate the example source file that I'm working with, the changes
>to the grammar are not always reflected in the *Parser Output*
>buffer. When this happens, (4) I have to kill the source code
>buffer and (5) open it again, then (6) re-bovinate. Is this the
>expected behavior? Is there a shortcut I can use to make sure that
>the current parser is being used when I bovinate?
My expectation (that is, what it did the last time I was using it
and fixing bugs in it) is the following:
1) I edit the grammar
2) I hit C-c C-c to compile the grammar.
3) The grammar generator looks at the %languagemode declaration,
finds all buffers matching those major modes and reinitializes
them with the major-mode function.
4) Wait a moment, and the auto-parse mechanism will refresh the tags
table on those buffers.
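If the auto-parse doesn't kick in, you can also force a refresh by
hand. An off-the-top-of-my-head sketch (untested; `my-force-reparse'
is just an illustrative name):

(defun my-force-reparse ()
  "Discard the cached tags and reparse the current buffer."
  (interactive)
  ;; Invalidate the old tag table...
  (semantic-clear-toplevel-cache)
  ;; ...and run the (newly loaded) parser again.
  (semantic-fetch-tags))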
>Second question: after repeating (1)-(3) and/or (1)-(6), my whole
>system (X window manager and Emacs) becomes much more sluggish than
>usual. Has anyone seen anything like this happen before? Where might
>the slow-down be coming from?
[ ... ]
I have not seen this problem, though I almost always work on a dual
processor system. (Either two processors, or Emacs on one machine
displaying to a second machine.)
When Emacs becomes unresponsive, I often start turning off all
the different semantic modes until I'm done debugging whatever
feature started breaking things. I don't recall language
modification doing that to me though.
Eric
