Re: [code][textadept] new version of the Elastic Tabstops module

From: Peter Rolf <>
Date: Tue, 6 Mar 2018 14:26:56 +0100

On 2018-03-05 at 14:17, the following was written:
> On Monday, March 05, 2018 07:54:36 AM Peter Rolf wrote:
>> * the maximal cache level (3) is now the default; tests showed that
>> the total memory consumption is negligible (around 120 KB for all files
>> in my ETS session); the width cache eliminates any repeated
>> calculations and, over time, saves a ton of otherwise wasted CPU cycles
>> (power consumption)
> How many files, how big, and total size of all open files?

Well, if you are interested, you can load one of your ETS-based sessions
and calculate the maximal cache size for it (menu 'Initialize all
buffers'; the result is shown in the statusbar). Just make sure to reset
the cache beforehand, or better, start Textadept directly with that session.

> I'm assuming that the memory consumption for the cache is (somehow)
> proportional to the total size of the open files?

It simply depends on the content of the text. The width cache only
stores the content of *non-empty* tabstop cells. If you use tabs
for line indentation only (empty tabstop cells), the cache is not
involved. This also means that a lot (if not the majority) of lines
are probably ignored by Elastic Tabstops entirely, because they don't
contain any relevant cells.
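To illustrate the idea, here is a minimal sketch in Python (not the
module's actual Lua code); `measure_width` is a hypothetical stand-in for
the editor's expensive text-measurement call:

```python
def measure_width(text):
    # Placeholder for the (expensive) rendering call; assume 8 px per char.
    return 8 * len(text)

width_cache = {}

def cell_width(cell_text):
    """Return the pixel width of a tabstop cell, caching non-empty cells."""
    if cell_text == "":
        return 0  # empty cells (pure indentation) bypass the cache entirely
    if cell_text not in width_cache:
        width_cache[cell_text] = measure_width(cell_text)
    return width_cache[cell_text]

# Repeated tokens hit the cache and cost a single measurement each:
for cell in ["local", "x", "=", "local", "x"]:
    cell_width(cell)
print(len(width_cache))  # 3 distinct entries for 5 cells
```

Repeated cell contents thus pay for one measurement only, which is where
the saved CPU cycles come from.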

Or think of a source code text, where you only have a limited number
of keywords (30-40), operators (20-30), and whitespace characters (1).
They can be used dozens of times in the source, but only have one [*]
entry in the cache. And if you have opened a bunch of, let's say, 'lua'
files, all those 'common' entries are reused for every buffer.

[*] Not exactly true for CHARACTER class entries, since all used
combinations are stored too.

In reality the number of cache-relevant tabstop cells in a source code
text is quite limited.
If you use a huge table with many different 'number' or 'string' values,
however, that would have a big impact on the cache size. Theoretically,
the size of the cache could then even exceed the file size.
But I guess you couldn't work with such a big table in the first place
if you didn't use a cache.
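A rough back-of-the-envelope comparison (illustrative Python, with an
assumed per-entry overhead of 16 bytes that is not the module's real
figure) shows both regimes: source-like text with few, heavily reused
tokens, and a table-like text where nearly every cell is distinct:

```python
OVERHEAD = 16  # assumed bytes of bookkeeping per cache entry (illustrative)

def cache_bytes(cells):
    # Only distinct, non-empty cell contents end up in the cache.
    distinct = {c for c in cells if c}
    return sum(len(c) + OVERHEAD for c in distinct)

def file_bytes(cells):
    # Each cell plus its separating tab character.
    return sum(len(c) + 1 for c in cells)

source_like = ["local", "=", "x"] * 1000      # few tokens, reused often
table_like = [str(n) for n in range(3000)]    # every cell distinct

print(cache_bytes(source_like), file_bytes(source_like))  # tiny cache
print(cache_bytes(table_like), file_bytes(table_like))    # cache > file
```

With these assumptions, the source-like text needs only a few dozen cache
bytes for 10 KB of content, while the table of distinct numbers produces a
cache larger than the text itself, matching the point above.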

As I see it now (judging only from my own experience), most users will
benefit from the maximal cache level without suffering any
disadvantages. Feel free to test the module and report back with
suggestions for changes.

Regards, Peter

Received on Tue 06 Mar 2018 - 08:26:56 EST

This archive was generated by hypermail 2.2.0 : Wed 07 Mar 2018 - 06:30:36 EST