Date: Fri, 17 Jun 2011 09:44:58 -0400 (Eastern Daylight Time)

Hi Robert,

On Fri, 17 Jun 2011, Robert wrote:

> Hi,
>
> I've been using a combined tex lexer that highlighted both ConTeXt and
> LaTeX environments. Now with folding added, and highlighting of parts,
> sections, etc., this becomes confusing, so I created separate lexers
> that require the tex lexer explicitly:
> local tex = require('tex')
> ...
> _rules = {
>   { 'whitespace', tex.ws },
>   { 'comment', tex.comment },
>   { 'environment', environment }, -- different environment
>
> Does this make sense? Is there a way to use embedded lexers for this purpose?

Look at the Rails, CUDA, and GLSL lexers. They reuse the Ruby and CPP
lexers but add small changes. I would recommend this method since it does
not depend on internal lexer patterns being exposed or on their names
staying the same, and any changes/additions/fixes to the original lexer
are automatically reflected in the new lexer.
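From memory, those derived lexers look something like the sketch below. This is not verbatim from any of them; in particular, `l.load` and the `_lexer` field are how I recall child lexers deferring to a parent, and the `environment` pattern is a made-up ConTeXt-flavored example, so check the actual rails.lua/cuda.lua in your Scintillua distribution for the exact conventions:

```lua
-- context.lua -- hypothetical lexer building on the tex lexer,
-- modeled loosely on how rails.lua builds on the ruby lexer
local l = lexer
local token = l.token

module(...)

-- load the existing tex lexer and defer to it for everything
-- not handled here, instead of requiring its internal patterns
local tex = l.load('tex')
_lexer = tex

-- hypothetical ConTeXt-specific rule: \start.../\stop... commands
local environment = token('environment',
  ('\\start' * l.word) + ('\\stop' * l.word))
_rules = { { 'environment', environment } }
```

The point is that the child lexer only names what it adds or overrides; everything else comes from the loaded tex lexer, so fixes there carry over for free.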

> The difference from the previous versions is the highlighting of all
> \begin-\end blocks and \chapters, \sections, etc.
>
> About folding, in plain TeX and Context I have
> \begintt ... \endtt or \starttyping ... \endtyping blocks. It is not
> possible to handle these with the simple folding, correct?
> They can only match text, so I have to write a function?

I don't understand why not. If you add tokens that capture those blocks
you can add them to _foldsymbols, no?
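For example, assuming your lexer emits those commands as a token named 'environment' (as in your _rules), something along these lines should work; this is a sketch, so adjust the candidate pattern and token name to whatever your lexer actually uses:

```lua
-- fold on \begintt/\endtt and \starttyping/\endtyping pairs,
-- assuming those commands are lexed as 'environment' tokens
_foldsymbols = {
  _patterns = { '\\%a+' },  -- candidate fold points: backslash commands
  ['environment'] = {
    ['\\begintt'] = 1, ['\\endtt'] = -1,
    ['\\starttyping'] = 1, ['\\endtyping'] = -1,
  },
}
```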

Once you finish these lexers I would like to add/update the Scintillua
ones since I don't know anything about *TeX.

mitchell

>
> Robert
>