Re: [code][textadept] Advanced use of Textadept Lexers

From: Mitchell <m.att.foicica.com>
Date: Thu, 27 Feb 2014 16:52:31 -0500 (Eastern Standard Time)

Hi Richard,

On Thu, 27 Feb 2014, Richard Philips wrote:

> Hello Mitchell,
>
> let us forget - for the time being - about coloring the code: it is too
> much related to style and I realise that it just confounds things.
>
> I give another application of the idea:
>
> A few months back I wrote a spelling checker for textadept. One of
> its features is scanning the text and red underlining the bad words.
>
> I would like to use the same feature in code as well but it should be
> restricted to strings and comments.
>
> I think this problem is not so different from how textadept styles code:
> textadept associates styling information with tokens. In much the same
> manner I would like to associate actions with tokens.
>
> Transformations are one usage but others are feasible as well. In the
> spelling checker example, the `M._tokenstyles`-like structure I would use
> is loaded with functions that either do nothing or spellcheck the token.
>
>
> Mitchell,
>
> Again I want to stress that I am not looking for the solution to a specific
> problem I have: for both the coloring problem and the spelling problem I
> came up with adequate solutions I am quite happy about.
>
> It is just that I noticed that both these problems could be solved in a
> more elegant way by using one of the main strengths of textadept: its
> lexing capabilities.

I think one can leverage Textadept's lexers in two ways to accomplish
similar tasks:

1. You can determine the range of positions the view shows. This lets
you read `buffer.style_at[]` at each of those positions to look for
comments and strings. For each such range you can run your spell check or
other filter. It's true you'd have to "hard-code" the filter to look for
comment or string style numbers/style names, but I think the approach
would be quite effective.
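As a rough sketch of that first approach, here is a plain-Lua helper for the range-collection step. It assumes you have already walked `buffer.style_at[]` over the visible positions and recorded one style name per position; the function and variable names (`collect_ranges`, `targets`) are my own for illustration:

```lua
-- Hypothetical helper: given `styles`, an array of style names (one per
-- buffer position, e.g. gathered by reading buffer.style_at over the
-- visible range), return the contiguous {start, stop} ranges whose style
-- is in the `targets` set.
local function collect_ranges(styles, targets)
  local ranges, start = {}, nil
  for i = 1, #styles + 1 do
    -- styles[#styles + 1] is nil, which closes any range still open.
    local match = targets[styles[i]]
    if match and not start then
      start = i
    elseif not match and start then
      ranges[#ranges + 1] = {start, i - 1}
      start = nil
    end
  end
  return ranges
end

-- Example with mock data: positions 3-5 are a comment, 8-9 a string.
local styles = {'default', 'default', 'comment', 'comment', 'comment',
                'default', 'default', 'string', 'string'}
local ranges = collect_ranges(styles, {comment = true, string = true})
-- ranges is {{3, 5}, {8, 9}}; spell check the text of each range.
```

You would then fetch each range's text with `buffer:text_range()` and hand it to the spell checker.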

2. The latest two betas integrate the lexer Lua state with Textadept's.
This allows you to use a lexer to lex any range of text[1]. As mentioned
above, you can extract the view's visible text and feed it to a lexer,
obtaining the resulting tokens. You can then apply a filter like spell
checking while iterating over those tokens. With this approach, you are
not limited to Textadept's built-in lexers. You can modify
`lexer.LEXERPATH` to point to any directory you want and then call
`lexer.load()` followed by `lexer.lex()` to obtain your lexer's custom
tokens. (Don't forget to restore `lexer.LEXERPATH` when you're done.)
There are more caveats/details in this approach that I won't go into in
this brief overview. Let me know if you are interested in more or if you
have specific targeted questions -- there's a lot to say here!
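To give a feel for the second approach, here is a plain-Lua sketch of a `_tokenstyles`-like table of actions applied to a token list. I'm abbreviating the token-table shape to a flat list of alternating token names and end positions (one past each token's last character, 1-based) -- check the lexer docs for the exact format -- and the names `actions`, `apply_actions`, and the mock token data are mine for illustration:

```lua
-- Collect spell-checked snippets here so the effect is observable.
local checked = {}
local function spellcheck(s) checked[#checked + 1] = s end

-- A `_tokenstyles`-like table, but mapping token names to actions
-- instead of styles. Tokens without an entry are simply skipped.
local actions = {comment = spellcheck, string = spellcheck}

-- Walk a flat token table {name1, end_pos1, name2, end_pos2, ...} and
-- run the matching action (if any) on each token's text.
local function apply_actions(text, tokens, actions)
  local start = 1
  for i = 1, #tokens, 2 do
    local name, stop = tokens[i], tokens[i + 1]
    local action = actions[name]
    if action then action(text:sub(start, stop - 1)) end
    start = stop
  end
end

-- Mock tokens, as a lexer might produce for: x = 1 -- note
local text = 'x = 1 -- note'
local tokens = {'identifier', 2, 'whitespace', 3, 'operator', 4,
                'whitespace', 5, 'number', 6, 'whitespace', 7,
                'comment', 14}
apply_actions(text, tokens, actions)
-- checked[1] is now '-- note'
```

In the real thing, `text` would be the view's visible text and `tokens` the result of `lexer.lex()`; the `actions` table is exactly the "functions that either do nothing or spellcheck the token" idea from your message.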

Cheers,
Mitchell

[1]: http://foicica.com/lists/code/201401/1514.html (see the footnote)

Received on Thu 27 Feb 2014 - 16:52:31 EST

This archive was generated by hypermail 2.2.0 : Fri 28 Feb 2014 - 06:36:13 EST