Re: [textadept] TeX lexers

From: mitchell <>
Date: Fri, 17 Jun 2011 09:44:58 -0400 (Eastern Daylight Time)

Hi Robert,

On Fri, 17 Jun 2011, Robert wrote:

> Hi,
> I've used a combined TeX lexer that highlighted both ConTeXt and LaTeX
> environments. Now, with folding added and with parts, sections, etc.
> being highlighted, this becomes confusing, so I created separate
> lexers [1-3]. To avoid duplication I reused the TeX definitions by
> loading the tex lexer explicitly:
> local tex = require('tex')
> ...
> _rules = {
> { 'whitespace', },
> { 'comment', tex.comment },
> { 'environment', environment }, -- different environment
> Does this make sense? Is there a way to use embedded lexers for this purpose?

Look at the Rails, CUDA, and GLSL lexers. They reuse the Ruby and CPP
lexers but add small changes. I would recommend this method: it does not
depend on the original lexer exposing internal patterns or on their
names staying the same, and any changes, additions, or fixes to the
original lexer are automatically reflected in the new one.
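For example, here is a minimal sketch of a ConTeXt lexer built on top of
tex.lua in the style of those lexers. It is illustrative only: the module
layout is from memory of the API, and the 'environment' token and its
pattern are assumptions, not rules copied from any shipping lexer.

```lua
-- context.lua -- sketch of a lexer that reuses tex.lua, in the style
-- of the rails/cuda/glsl lexers. Illustrative only: the 'environment'
-- token and its pattern are assumptions, not the real tex.lua rules.
local l = lexer
local token, word_match = l.token, l.word_match

module(...)

-- Load the parent lexer and inherit its rules and styles, so any
-- fixes or additions to tex.lua carry over automatically.
local tex = l.load('tex')
_rules = tex._rules
_tokenstyles = tex._tokenstyles

-- Replace only the rules that differ, e.g. a ConTeXt-flavored
-- 'environment' token (word list illustrative).
local environment = token('environment',
  '\\' * word_match({'starttyping', 'stoptyping'}))
for i, rule in ipairs(_rules) do
  if rule[1] == 'environment' then _rules[i] = {'environment', environment} end
end
```

A LaTeX variant could do the same, overriding only its own environment
rule; everything else then stays in sync with tex.lua.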

> The differences from the previous versions are the highlighting of all
> \begin-\end blocks and of \chapters, \sections, etc.
> About folding: in plain TeX and ConTeXt I have
> \begintt ... \endtt or \starttyping ... \stoptyping blocks. It is not
> possible to handle these with the simple folding, correct?
> Fold symbols can only match text, so I have to write a function?

I don't understand why not. If you add tokens that capture those blocks,
you can add them to _foldsymbols, no?
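For example, if those control sequences are captured by some token (I'll
call it 'command' here, but use whatever token name your lexer actually
assigns), a sketch along these lines should fold them:

```lua
-- Fold \begintt...\endtt and \starttyping...\stoptyping blocks.
-- Sketch only: the 'command' token name is an assumption; replace it
-- with the token that actually matches these control sequences.
_foldsymbols = {
  _patterns = { '\\%a+' },  -- candidate fold words to scan for per line
  ['command'] = {
    ['\\begintt'] = 1,      ['\\endtt'] = -1,       -- plain TeX
    ['\\starttyping'] = 1,  ['\\stoptyping'] = -1,  -- ConTeXt
  },
}
```

A value of 1 opens a fold point at that word and -1 closes it, so no
custom fold function should be needed for simple paired blocks.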

Once you finish these lexers I would like to add/update the Scintillua
ones since I don't know anything about *TeX.


> Robert
> [1]
> [2]
> [3]
