Fix bin/publish: copy docs.dist from project root

Fix bin/publish: use correct .env path for rspade_system
Fix bin/publish script: prevent grep exit code 1 from terminating script

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
root
2025-10-21 02:08:33 +00:00
commit f6fac6c4bc
79758 changed files with 10547827 additions and 0 deletions

node_modules/typo-js/README.md generated vendored Executable file

@@ -0,0 +1,66 @@
Typo.js is a JavaScript/TypeScript spellchecker that uses Hunspell-style dictionaries.
Usage
=====
To use Typo, simply load it like so:
```javascript
var Typo = require("typo-js");
var dictionary = new Typo(lang_code);
```
Typo includes by default a dictionary for the `en_US` lang_code.
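If you ship your own Hunspell dictionaries, the constructor resolves the `.aff`/`.dic` file paths from a `dictionaryPath` setting (see the Node.js branch of the constructor in the bundled `typo.js` in this commit). A minimal sketch of that resolution logic; `resolveDictionaryFiles` is a hypothetical helper name, not part of the library:

```javascript
// Sketch of how typo.js builds dictionary file paths when only a lang_code
// is given (Node.js code path). An explicit dictionaryPath wins; otherwise
// the dictionaries/ folder next to typo.js is used.
function resolveDictionaryFiles(dictionary, settings) {
  settings = settings || {};
  var path = settings.dictionaryPath || "./dictionaries";
  return {
    aff: path + "/" + dictionary + "/" + dictionary + ".aff",
    dic: path + "/" + dictionary + "/" + dictionary + ".dic"
  };
}

var files = resolveDictionaryFiles("en_US", { dictionaryPath: "/opt/dicts" });
// files.aff === "/opt/dicts/en_US/en_US.aff"
// files.dic === "/opt/dicts/en_US/en_US.dic"
```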
To check if a word is spelled correctly, do this:
```javascript
var is_spelled_correctly = dictionary.check("mispelled");
```
To get suggested corrections for a misspelled word, do this:
```javascript
var array_of_suggestions = dictionary.suggest("mispeling");
// array_of_suggestions == ["misspelling", "dispelling", "misdealing", "misfiling", "misruling"]
```
Typo.js has full support for the following Hunspell affix flags:
* `PFX`
* `SFX`
* `REP`
* `FLAG`
* `COMPOUNDMIN`
* `COMPOUNDRULE`
* `ONLYINCOMPOUND`
* `KEEPCASE`
* `NOSUGGEST`
* `NEEDAFFIX`
It also supports the Typo-specific flag `PRIORITYSUGGEST`, which lets you specify that certain words should be given priority in the suggestions list when correcting a misspelled word. If you add the following to your `.aff` file (ideally on the line after `NOSUGGEST`):
```
PRIORITYSUGGEST @
```
and then add the `@` flag to your new words in your `.dic` file, like
```
skibidi/@
rizz/@
```
then those words will be prioritized above other suggestions if they already appear in the suggestions list.
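The effect on the suggestions list can be sketched in plain JavaScript. This is a simplified illustration of the behavior described above, not the library's actual implementation, and `prioritizeSuggestions` is a hypothetical name:

```javascript
// Words carrying the priority flag are moved to the front of the
// suggestions list, keeping their relative order; everything else follows.
function prioritizeSuggestions(suggestions, priorityWords) {
  var priority = suggestions.filter(function (w) {
    return priorityWords.indexOf(w) !== -1;
  });
  var rest = suggestions.filter(function (w) {
    return priorityWords.indexOf(w) === -1;
  });
  return priority.concat(rest);
}

prioritizeSuggestions(["dispelling", "misspelling"], ["misspelling"]);
// → ["misspelling", "dispelling"]
```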
Development
===========
The full TypeScript source code and unit test suites are available in the official Typo.js repository at https://github.com/cfinke/Typo.js
To modify Typo.js, make your changes to `ts/typo.ts` and then run `build.sh` to generate the JavaScript file `typo/typo.js`.
Licensing
=========
Typo.js is free software, licensed under the Modified BSD License.

node_modules/typo-js/dictionaries/en_US/README.md generated vendored Executable file

@@ -0,0 +1,347 @@
en_US Hunspell Dictionary
Version 2020.12.07
Mon Dec 7 20:14:35 2020 -0500 [5ef55f9]
http://wordlist.sourceforge.net
README file for English Hunspell dictionaries derived from SCOWL.
These dictionaries are created using the speller/make-hunspell-dict
script in SCOWL.
The following dictionaries are available:
en_US (American)
en_CA (Canadian)
en_GB-ise (British with "ise" spelling)
en_GB-ize (British with "ize" spelling)
en_AU (Australian)
en_US-large
en_CA-large
en_GB-large (with both "ise" and "ize" spelling)
en_AU-large
The normal (non-large) dictionaries correspond to SCOWL size 60 and,
to encourage consistent spelling, generally only include one spelling
variant for a word. The large dictionaries correspond to SCOWL size
70 and may include multiple spellings for a word when both variants are
considered almost equal. The larger dictionaries, however, (1) have not
been as carefully checked for errors as the normal dictionaries and
thus may contain misspelled or invalid words; and (2) contain
uncommon, yet valid, words that might cause problems as they are
likely to be misspellings of more common words (for example, "ort" and
"calender").
To get an idea of the difference in size, here are 25 random words
only found in the large dictionary for American English:
Bermejo Freyr's Guenevere Hatshepsut Nottinghamshire arrestment
crassitudes crural dogwatches errorless fetial flaxseeds godroon
incretion jalapeño's kelpie kishkes neuroglias pietisms pullulation
stemwinder stenoses syce thalassic zees
The en_US, en_CA and en_AU are the official dictionaries for Hunspell.
The en_GB and large dictionaries are made available on an experimental
basis. If you find them useful please send me a quick email at
kevina@gnu.org.
If none of these dictionaries suit you (for example, maybe you want
the normal dictionary that also includes common variants) additional
dictionaries can be generated at http://app.aspell.net/create or by
modifying speller/make-hunspell-dict in SCOWL. Please do let me know
if you end up publishing a customized dictionary.
If a word is not found in the dictionary, or a word is there that you think
shouldn't be, you can look the word up at http://app.aspell.net/lookup
to help determine why that is.
General comments on these lists can be sent directly to me at
kevina@gnu.org or to the wordlist-devel mailing lists
(https://lists.sourceforge.net/lists/listinfo/wordlist-devel). If you
have specific issues with any of these dictionaries please file a bug
report at https://github.com/kevina/wordlist/issues.
IMPORTANT CHANGES INTRODUCED IN 2016.11.20:
New Australian dictionaries thanks to the work of Benjamin Titze
(btitze@protonmail.ch).
IMPORTANT CHANGES INTRODUCED IN 2016.04.24:
The dictionaries are now in UTF-8 format instead of ISO-8859-1. This
was required to handle smart quotes correctly.
IMPORTANT CHANGES INTRODUCED IN 2016.01.19:
"SET UTF8" was changes to "SET UTF-8" in the affix file as some
versions of Hunspell do not recognize "UTF8".
ADDITIONAL NOTES:
The NOSUGGEST flag was added to certain taboo words. While I made an
honest attempt to flag the strongest taboo words with the NOSUGGEST
flag, I MAKE NO GUARANTEE THAT I FLAGGED EVERY POSSIBLE TABOO WORD.
The list was originally derived from Németh László, however I removed
some words which, while being considered taboo by some dictionaries,
are not really considered swear words in today's society.
COPYRIGHT, SOURCES, and CREDITS:
The English dictionaries come directly from SCOWL
and is thus under the same copyright of SCOWL. The affix file is
a heavily modified version of the original english.aff file which was
released as part of Geoff Kuenning's Ispell and as such is covered by
his BSD license. Part of SCOWL is also based on Ispell thus the
Ispell copyright is included with the SCOWL copyright.
The collective work is Copyright 2000-2018 by Kevin Atkinson as well
as any of the copyrights mentioned below:
Copyright 2000-2018 by Kevin Atkinson
Permission to use, copy, modify, distribute and sell these word
lists, the associated scripts, the output created from the scripts,
and its documentation for any purpose is hereby granted without fee,
provided that the above copyright notice appears in all copies and
that both that copyright notice and this permission notice appear in
supporting documentation. Kevin Atkinson makes no representations
about the suitability of this array for any purpose. It is provided
"as is" without express or implied warranty.
Alan Beale <biljir@pobox.com> also deserves special credit as he has,
in addition to providing the 12Dicts package and being a major
contributor to the ENABLE word list, given me an incredible amount of
feedback and created a number of special lists (those found in the
Supplement) in order to help improve the overall quality of SCOWL.
The 10 level includes the 1000 most common English words (according to
the Moby (TM) Words II [MWords] package), a subset of the 1000 most
common words on the Internet (again, according to Moby Words II), and
frequency class 16 from Brian Kelk's "UK English Wordlist
with Frequency Classification".
The MWords package was explicitly placed in the public domain:
The Moby lexicon project is complete and has
been placed into the public domain. Use, sell,
rework, excerpt and use in any way on any platform.
Placing this material on internal or public servers is
also encouraged. The compiler is not aware of any
export restrictions so freely distribute world-wide.
You can verify the public domain status by contacting
Grady Ward
3449 Martha Ct.
Arcata, CA 95521-4884
grady@netcom.com
grady@northcoast.com
The "UK English Wordlist With Frequency Classification" is also in the
Public Domain:
Date: Sat, 08 Jul 2000 20:27:21 +0100
From: Brian Kelk <Brian.Kelk@cl.cam.ac.uk>
> I was wondering what the copyright status of your "UK English
> Wordlist With Frequency Classification" word list as it seems to
> be lacking any copyright notice.
There were many many sources in total, but any text marked
"copyright" was avoided. Locally-written documentation was one
source. An earlier version of the list resided in a filespace called
PUBLIC on the University mainframe, because it was considered public
domain.
Date: Tue, 11 Jul 2000 19:31:34 +0100
> So are you saying your word list is also in the public domain?
That is the intention.
The 20 level includes frequency classes 7-15 from Brian's word list.
The 35 level includes frequency classes 2-6 and words appearing in at
least 11 of 12 dictionaries as indicated in the 12Dicts package. All
words from the 12Dicts package have had likely inflections added via
my inflection database.
The 12Dicts package and Supplement is in the Public Domain.
The WordNet database, which was used in the creation of the
Inflections database, is under the following copyright:
This software and database is being provided to you, the LICENSEE,
by Princeton University under the following license. By obtaining,
using and/or copying this software and database, you agree that you
have read, understood, and will comply with these terms and
conditions.:
Permission to use, copy, modify and distribute this software and
database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with
the following copyright notice and statements, including the
disclaimer, and that the same appear on ALL copies of the software,
database and documentation, including modifications that you make
for internal use or for distribution.
WordNet 1.6 Copyright 1997 by Princeton University. All rights
reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT-
ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE
LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY
THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in
advertising or publicity pertaining to distribution of the software
and/or database. Title to copyright in this software, database and
any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.
The 40 level includes words from Alan's 3esl list found in version 4.0
of his 12dicts package. Like his other stuff the 3esl list is also in the
public domain.
The 50 level includes Brian's frequency class 1, words appearing
in at least 5 of 12 of the dictionaries as indicated in the 12Dicts
package, and uppercase words in at least 4 of the previous 12
dictionaries. A decent number of proper names is also included: The
top 1000 male, female, and Last names from the 1990 Census report; a
list of names sent to me by Alan Beale; and a few names that I added
myself. Finally a small list of abbreviations not commonly found in
other word lists is included.
The name files from the Census report are a government document which I
don't think can be copyrighted.
The file special-jargon.50 uses common.lst and word.lst from the
"Unofficial Jargon File Word Lists" which is derived from "The Jargon
File". All of which is in the Public Domain. This file also contains
a few extra UNIX terms which are found in the file "unix-terms" in the
special/ directory.
The 55 level includes words from Alan's 2of4brif list found in version
4.0 of his 12dicts package. Like his other stuff the 2of4brif is also
in the public domain.
The 60 level includes all words appearing in at least 2 of the 12
dictionaries as indicated by the 12Dicts package.
The 70 level includes Brian's frequency class 0 and the 74,550 common
dictionary words from the MWords package. The common dictionary words,
like those from the 12Dicts package, have had all likely inflections
added. The 70 level also includes the 5desk list from version 4.0 of
the 12Dicts package which is in the public domain.
The 80 level includes the ENABLE word list, all the lists in the
ENABLE supplement package (except for ABLE), the "UK Advanced Cryptics
Dictionary" (UKACD), the list of signature words from the YAWL package,
and the 10,196 places list from the MWords package.
The ENABLE package, maintained by M\Cooper <thegrendel@theriver.com>,
is in the Public Domain:
The ENABLE master word list, WORD.LST, is herewith formally released
into the Public Domain. Anyone is free to use it or distribute it in
any manner they see fit. No fee or registration is required for its
use nor are "contributions" solicited (if you feel you absolutely
must contribute something for your own peace of mind, the authors of
the ENABLE list ask that you make a donation on their behalf to your
favorite charity). This word list is our gift to the Scrabble
community, as an alternate to "official" word lists. Game designers
may feel free to incorporate the WORD.LST into their games. Please
mention the source and credit us as originators of the list. Note
that if you, as a game designer, use the WORD.LST in your product,
you may still copyright and protect your product, but you may *not*
legally copyright or in any way restrict redistribution of the
WORD.LST portion of your product. This *may* under law restrict your
rights to restrict your users' rights, but that is only fair.
UKACD, by J Ross Beresford <ross@bryson.demon.co.uk>, is under the
following copyright:
Copyright (c) J Ross Beresford 1993-1999. All Rights Reserved.
The following restriction is placed on the use of this publication:
if The UK Advanced Cryptics Dictionary is used in a software package
or redistributed in any form, the copyright notice must be
prominently displayed and the text of this document must be included
verbatim.
There are no other restrictions: I would like to see the list
distributed as widely as possible.
The 95 level includes the 354,984 single words, 256,772 compound
words, 4,946 female names and the 3,897 male names, and 21,986 names
from the MWords package, ABLE.LST from the ENABLE Supplement, and some
additional words found in my part-of-speech database that were not
found anywhere else.
Accent information was taken from UKACD.
The VarCon package was used to create the American, British, Canadian,
and Australian word lists. It is under the following copyright:
Copyright 2000-2016 by Kevin Atkinson
Permission to use, copy, modify, distribute and sell this array, the
associated software, and its documentation for any purpose is hereby
granted without fee, provided that the above copyright notice appears
in all copies and that both that copyright notice and this permission
notice appear in supporting documentation. Kevin Atkinson makes no
representations about the suitability of this array for any
purpose. It is provided "as is" without express or implied warranty.
Copyright 2016 by Benjamin Titze
Permission to use, copy, modify, distribute and sell this array, the
associated software, and its documentation for any purpose is hereby
granted without fee, provided that the above copyright notice appears
in all copies and that both that copyright notice and this permission
notice appear in supporting documentation. Benjamin Titze makes no
representations about the suitability of this array for any
purpose. It is provided "as is" without express or implied warranty.
Since the original word lists come from the Ispell distribution:
Copyright 1993, Geoff Kuenning, Granada Hills, CA
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. All modifications to the source code must be clearly marked as
such. Binary redistributions based on modified source code
must be clearly marked as modified versions in the documentation
and/or other materials provided with the distribution.
(clause 4 removed with permission from Geoff Kuenning)
5. The name of Geoff Kuenning may not be used to endorse or promote
products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY GEOFF KUENNING AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL GEOFF KUENNING OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Build Date: Mon Dec 7 20:19:27 EST 2020
Wordlist Command: mk-list --accents=strip en_US 60

node_modules/typo-js/dictionaries/en_US/en_US.aff generated vendored Executable file

@@ -0,0 +1,205 @@
SET UTF-8
TRY esianrtolcdugmphbyfvkwzESIANRTOLCDUGMPHBYFVKWZ'
ICONV 1
ICONV '
NOSUGGEST !
# ordinal numbers
COMPOUNDMIN 1
# only in compounds: 1th, 2th, 3th
ONLYINCOMPOUND c
# compound rules:
# 1. [0-9]*1[0-9]th (10th, 11th, 12th, 56714th, etc.)
# 2. [0-9]*[02-9](1st|2nd|3rd|[4-9]th) (21st, 22nd, 123rd, 1234th, etc.)
COMPOUNDRULE 2
COMPOUNDRULE n*1t
COMPOUNDRULE n*mp
WORDCHARS 0123456789
PFX A Y 1
PFX A 0 re .
PFX I Y 1
PFX I 0 in .
PFX U Y 1
PFX U 0 un .
PFX C Y 1
PFX C 0 de .
PFX E Y 1
PFX E 0 dis .
PFX F Y 1
PFX F 0 con .
PFX K Y 1
PFX K 0 pro .
SFX V N 2
SFX V e ive e
SFX V 0 ive [^e]
SFX N Y 3
SFX N e ion e
SFX N y ication y
SFX N 0 en [^ey]
SFX X Y 3
SFX X e ions e
SFX X y ications y
SFX X 0 ens [^ey]
SFX H N 2
SFX H y ieth y
SFX H 0 th [^y]
SFX Y Y 1
SFX Y 0 ly .
SFX G Y 2
SFX G e ing e
SFX G 0 ing [^e]
SFX J Y 2
SFX J e ings e
SFX J 0 ings [^e]
SFX D Y 4
SFX D 0 d e
SFX D y ied [^aeiou]y
SFX D 0 ed [^ey]
SFX D 0 ed [aeiou]y
SFX T N 4
SFX T 0 st e
SFX T y iest [^aeiou]y
SFX T 0 est [aeiou]y
SFX T 0 est [^ey]
SFX R Y 4
SFX R 0 r e
SFX R y ier [^aeiou]y
SFX R 0 er [aeiou]y
SFX R 0 er [^ey]
SFX Z Y 4
SFX Z 0 rs e
SFX Z y iers [^aeiou]y
SFX Z 0 ers [aeiou]y
SFX Z 0 ers [^ey]
SFX S Y 4
SFX S y ies [^aeiou]y
SFX S 0 s [aeiou]y
SFX S 0 es [sxzh]
SFX S 0 s [^sxzhy]
SFX P Y 3
SFX P y iness [^aeiou]y
SFX P 0 ness [aeiou]y
SFX P 0 ness [^y]
SFX M Y 1
SFX M 0 's .
SFX B Y 3
SFX B 0 able [^aeiou]
SFX B 0 able ee
SFX B e able [^aeiou]e
SFX L Y 1
SFX L 0 ment .
REP 90
REP a ei
REP ei a
REP a ey
REP ey a
REP ai ie
REP ie ai
REP alot a_lot
REP are air
REP are ear
REP are eir
REP air are
REP air ere
REP ere air
REP ere ear
REP ere eir
REP ear are
REP ear air
REP ear ere
REP eir are
REP eir ere
REP ch te
REP te ch
REP ch ti
REP ti ch
REP ch tu
REP tu ch
REP ch s
REP s ch
REP ch k
REP k ch
REP f ph
REP ph f
REP gh f
REP f gh
REP i igh
REP igh i
REP i uy
REP uy i
REP i ee
REP ee i
REP j di
REP di j
REP j gg
REP gg j
REP j ge
REP ge j
REP s ti
REP ti s
REP s ci
REP ci s
REP k cc
REP cc k
REP k qu
REP qu k
REP kw qu
REP o eau
REP eau o
REP o ew
REP ew o
REP oo ew
REP ew oo
REP ew ui
REP ui ew
REP oo ui
REP ui oo
REP ew u
REP u ew
REP oo u
REP u oo
REP u oe
REP oe u
REP u ieu
REP ieu u
REP ue ew
REP ew ue
REP uff ough
REP oo ieu
REP ieu oo
REP ier ear
REP ear ier
REP ear air
REP air ear
REP w qu
REP qu w
REP z ss
REP ss z
REP shun tion
REP shun sion
REP shun cion
REP size cise
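The two `COMPOUNDRULE` entries near the top of this file (`n*1t` and `n*mp`) expand, per the comments in the file itself, to patterns roughly equivalent to the following regular expressions, once the flagged ordinal fragments from en_US.dic are substituted in. A sketch to make the ordinal-number matching concrete:

```javascript
// Effective patterns for the two compound rules, per the .aff comments:
// rule 1 matches "teen" ordinals (10th, 11th, 12th, 56714th, ...),
// rule 2 matches the rest (21st, 22nd, 123rd, 1234th, ...).
var teens  = /^[0-9]*1[0-9]th$/;
var others = /^[0-9]*[02-9](1st|2nd|3rd|[4-9]th)$/;

teens.test("11th");   // true
others.test("21st");  // true
others.test("21th");  // false
```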

49569
node_modules/typo-js/dictionaries/en_US/en_US.dic generated vendored Executable file

File diff suppressed because it is too large

node_modules/typo-js/package.json generated vendored Executable file

@@ -0,0 +1,31 @@
{
"name": "typo-js",
"version": "1.3.1",
"description": "A Hunspell-style spellchecker.",
"main": "typo.js",
"repository": {
"type": "git",
"url": "git://github.com/cfinke/Typo.js.git"
},
"keywords": [
"spellcheck",
"spellchecker",
"hunspell",
"typo",
"speling"
],
"author": "Christopher Finke <cfinke@gmail.com> (http://www.chrisfinke.com/)",
"license": "BSD-3-Clause",
"bugs": {
"url": "https://github.com/cfinke/Typo.js/issues"
},
"homepage": "https://github.com/cfinke/Typo.js#readme",
"tonicExample": "var Typo = require('typo-js'); var dictionary = new Typo('en_US'); dictionary.check('mispelled');",
"browser": {
"fs": false
},
"devDependencies": {
"@types/chrome": "^0.0.268",
"@types/node": "^20.12.12"
}
}

node_modules/typo-js/typo.js generated vendored Executable file

@@ -0,0 +1,855 @@
/* globals chrome: false */
/* globals __dirname: false */
/* globals require: false */
/* globals Buffer: false */
/* globals module: false */
/**
* Typo is a JavaScript implementation of a spellchecker using hunspell-style
* dictionaries.
*/
var Typo;
(function () {
"use strict";
/**
* Typo constructor.
*
* @param {string} [dictionary] The locale code of the dictionary being used. e.g.,
* "en_US". This is only used to auto-load dictionaries.
* @param {string} [affData] The data from the dictionary's .aff file. If omitted
* and Typo.js is being used in a Chrome extension, the .aff
* file will be loaded automatically from
* lib/typo/dictionaries/[dictionary]/[dictionary].aff
* In other environments, it will be loaded from
* [settings.dictionaryPath]/dictionaries/[dictionary]/[dictionary].aff
* @param {string} [wordsData] The data from the dictionary's .dic file. If omitted
* and Typo.js is being used in a Chrome extension, the .dic
* file will be loaded automatically from
* lib/typo/dictionaries/[dictionary]/[dictionary].dic
* In other environments, it will be loaded from
* [settings.dictionaryPath]/dictionaries/[dictionary]/[dictionary].dic
* @param {Object} [settings] Constructor settings. Available properties are:
* {string} [dictionaryPath]: path to load dictionary from in non-chrome
* environment.
* {Object} [flags]: flag information.
* {boolean} [asyncLoad]: If true, affData and wordsData will be loaded
* asynchronously.
* {Function} [loadedCallback]: Called when both affData and wordsData
* have been loaded. Only used if asyncLoad is set to true. The parameter
* is the instantiated Typo object.
*
* @returns {Typo} A Typo object.
*/
Typo = function (dictionary, affData, wordsData, settings) {
settings = settings || {};
this.dictionary = null;
this.rules = {};
this.dictionaryTable = {};
this.compoundRules = [];
this.compoundRuleCodes = {};
this.replacementTable = [];
this.flags = settings.flags || {};
this.memoized = {};
this.loaded = false;
var self = this;
var path;
// Loop-control variables.
var i, j, _len, _jlen;
if (dictionary) {
self.dictionary = dictionary;
// If the data is preloaded, just set up the Typo object.
if (affData && wordsData) {
setup();
}
// Loading data for browser extensions.
else if (typeof window !== 'undefined' && ((window.chrome && window.chrome.runtime) || (window.browser && window.browser.runtime))) {
var runtime = window.chrome && window.chrome.runtime ? window.chrome.runtime : window.browser.runtime;
if (settings.dictionaryPath) {
path = settings.dictionaryPath;
}
else {
path = "typo/dictionaries";
}
if (!affData)
readDataFile(runtime.getURL(path + "/" + dictionary + "/" + dictionary + ".aff"), setAffData);
if (!wordsData)
readDataFile(runtime.getURL(path + "/" + dictionary + "/" + dictionary + ".dic"), setWordsData);
}
// Loading data for Node.js or other environments.
else {
if (settings.dictionaryPath) {
path = settings.dictionaryPath;
}
else if (typeof __dirname !== 'undefined') {
path = __dirname + '/dictionaries';
}
else {
path = './dictionaries';
}
if (!affData)
readDataFile(path + "/" + dictionary + "/" + dictionary + ".aff", setAffData);
if (!wordsData)
readDataFile(path + "/" + dictionary + "/" + dictionary + ".dic", setWordsData);
}
}
function readDataFile(url, setFunc) {
var response = self._readFile(url, null, settings === null || settings === void 0 ? void 0 : settings.asyncLoad);
if (settings === null || settings === void 0 ? void 0 : settings.asyncLoad) {
response.then(function (data) {
setFunc(data);
});
}
else {
setFunc(response);
}
}
function setAffData(data) {
affData = data;
if (wordsData) {
setup();
}
}
function setWordsData(data) {
wordsData = data;
if (affData) {
setup();
}
}
function setup() {
self.rules = self._parseAFF(affData);
// Save the rule codes that are used in compound rules.
self.compoundRuleCodes = {};
for (i = 0, _len = self.compoundRules.length; i < _len; i++) {
var rule = self.compoundRules[i];
for (j = 0, _jlen = rule.length; j < _jlen; j++) {
self.compoundRuleCodes[rule[j]] = [];
}
}
// If we add this ONLYINCOMPOUND flag to self.compoundRuleCodes, then _parseDIC
// will do the work of saving the list of words that are compound-only.
if ("ONLYINCOMPOUND" in self.flags) {
self.compoundRuleCodes[self.flags.ONLYINCOMPOUND] = [];
}
self.dictionaryTable = self._parseDIC(wordsData);
// Get rid of any codes from the compound rule codes that are never used
// (or that were special regex characters). Not especially necessary...
for (i in self.compoundRuleCodes) {
if (self.compoundRuleCodes[i].length === 0) {
delete self.compoundRuleCodes[i];
}
}
// Build the full regular expressions for each compound rule.
// I have a feeling (but no confirmation yet) that this method of
// testing for compound words is probably slow.
for (i = 0, _len = self.compoundRules.length; i < _len; i++) {
var ruleText = self.compoundRules[i];
var expressionText = "";
for (j = 0, _jlen = ruleText.length; j < _jlen; j++) {
var character = ruleText[j];
if (character in self.compoundRuleCodes) {
expressionText += "(" + self.compoundRuleCodes[character].join("|") + ")";
}
else {
expressionText += character;
}
}
self.compoundRules[i] = new RegExp('^' + expressionText + '$', "i");
}
self.loaded = true;
if ((settings === null || settings === void 0 ? void 0 : settings.asyncLoad) && (settings === null || settings === void 0 ? void 0 : settings.loadedCallback)) {
settings.loadedCallback(self);
}
}
return this;
};
Typo.prototype = {
/**
* Loads a Typo instance from a hash of all of the Typo properties.
*
* @param {object} obj A hash of Typo properties, probably gotten from a JSON.parse(JSON.stringify(typo_instance)).
*/
load: function (obj) {
for (var i in obj) {
if (obj.hasOwnProperty(i)) {
this[i] = obj[i];
}
}
return this;
},
/**
* Read the contents of a file.
*
* @param {string} path The path (relative) to the file.
* @param {string} [charset="utf8"] The expected charset of the file
* @param {boolean} async If true, the file will be read asynchronously. For node.js this does nothing, all
* files are read synchronously.
* @returns {string} The file data if async is false, otherwise a promise object. If running node.js, the data is
* always returned.
*/
_readFile: function (path, charset, async) {
var _a;
charset = charset || "utf8";
if (typeof XMLHttpRequest !== 'undefined') {
var req_1 = new XMLHttpRequest();
req_1.open("GET", path, !!async);
(_a = req_1.overrideMimeType) === null || _a === void 0 ? void 0 : _a.call(req_1, "text/plain; charset=" + charset);
if (!!async) {
var promise = new Promise(function (resolve, reject) {
req_1.onload = function () {
if (req_1.status === 200) {
resolve(req_1.responseText);
}
else {
reject(req_1.statusText);
}
};
req_1.onerror = function () {
reject(req_1.statusText);
};
});
req_1.send(null);
return promise;
}
else {
req_1.send(null);
return req_1.responseText;
}
}
else if (typeof require !== 'undefined') {
// Node.js
var fs = require("fs");
try {
if (fs.existsSync(path)) {
return fs.readFileSync(path, charset);
}
else {
console.log("Path " + path + " does not exist.");
}
}
catch (e) {
console.log(e);
}
return '';
}
return '';
},
/**
* Parse the rules out from a .aff file.
*
* @param {string} data The contents of the affix file.
* @returns object The rules from the file.
*/
_parseAFF: function (data) {
var rules = {};
var line, subline, numEntries, lineParts;
var i, j, _len, _jlen;
var lines = data.split(/\r?\n/);
for (i = 0, _len = lines.length; i < _len; i++) {
// Remove comment lines
line = this._removeAffixComments(lines[i]);
line = line.trim();
if (!line) {
continue;
}
var definitionParts = line.split(/\s+/);
var ruleType = definitionParts[0];
if (ruleType === "PFX" || ruleType === "SFX") {
var ruleCode = definitionParts[1];
var combineable = definitionParts[2];
numEntries = parseInt(definitionParts[3], 10);
var entries = [];
for (j = i + 1, _jlen = i + 1 + numEntries; j < _jlen; j++) {
subline = lines[j];
lineParts = subline.split(/\s+/);
var charactersToRemove = lineParts[2];
var additionParts = lineParts[3].split("/");
var charactersToAdd = additionParts[0];
if (charactersToAdd === "0")
charactersToAdd = "";
var continuationClasses = this.parseRuleCodes(additionParts[1]);
var regexToMatch = lineParts[4];
var entry = {
add: charactersToAdd
};
if (continuationClasses.length > 0)
entry.continuationClasses = continuationClasses;
if (regexToMatch !== ".") {
if (ruleType === "SFX") {
entry.match = new RegExp(regexToMatch + "$");
}
else {
entry.match = new RegExp("^" + regexToMatch);
}
}
if (charactersToRemove != "0") {
if (ruleType === "SFX") {
entry.remove = new RegExp(charactersToRemove + "$");
}
else {
entry.remove = charactersToRemove;
}
}
entries.push(entry);
}
rules[ruleCode] = { "type": ruleType, "combineable": (combineable === "Y"), "entries": entries };
i += numEntries;
}
else if (ruleType === "COMPOUNDRULE") {
numEntries = parseInt(definitionParts[1], 10);
for (j = i + 1, _jlen = i + 1 + numEntries; j < _jlen; j++) {
line = lines[j];
lineParts = line.split(/\s+/);
this.compoundRules.push(lineParts[1]);
}
i += numEntries;
}
else if (ruleType === "REP") {
lineParts = line.split(/\s+/);
if (lineParts.length === 3) {
this.replacementTable.push([lineParts[1], lineParts[2]]);
}
}
else {
// ONLYINCOMPOUND
// COMPOUNDMIN
// FLAG
// KEEPCASE
// NEEDAFFIX
this.flags[ruleType] = definitionParts[1];
}
}
return rules;
},
/**
* Removes comments.
*
* @param {string} data A line from an affix file.
* @return {string} The cleaned-up line.
*/
_removeAffixComments: function (line) {
// This used to remove any string starting with '#' up to the end of the line,
// but some COMPOUNDRULE definitions include '#' as part of the rule.
// So, only remove lines that begin with a comment, optionally preceded by whitespace.
if (line.match(/^\s*#/)) {
return '';
}
return line;
},
/**
* Parses the words out from the .dic file.
*
* @param {string} data The data from the dictionary file.
* @returns HashMap The lookup table containing all of the words and
* word forms from the dictionary.
*/
_parseDIC: function (data) {
data = this._removeDicComments(data);
var lines = data.split(/\r?\n/);
var dictionaryTable = {};
function addWord(word, rules) {
// Some dictionaries will list the same word multiple times with different rule sets.
if (!dictionaryTable.hasOwnProperty(word)) {
dictionaryTable[word] = null;
}
if (rules.length > 0) {
if (dictionaryTable[word] === null) {
dictionaryTable[word] = [];
}
dictionaryTable[word].push(rules);
}
}
// The first line is the number of words in the dictionary.
for (var i = 1, _len = lines.length; i < _len; i++) {
var line = lines[i];
if (!line) {
// Ignore empty lines.
continue;
}
// The line format is one of:
// word
// word/flags
// word/flags xx:abc yy:def
// word xx:abc yy:def
// We don't use the morphological flags (xx:abc, yy:def) and we don't want them included
// in the extracted flags.
var just_word_and_flags = line.replace(/\s.*$/, '');
// just_word_and_flags is definitely one of:
// word
// word/flags
var parts = just_word_and_flags.split('/', 2);
var word = parts[0];
// Now for each affix rule, generate that form of the word.
if (parts.length > 1) {
var ruleCodesArray = this.parseRuleCodes(parts[1]);
// Save the ruleCodes for compound word situations.
if (!("NEEDAFFIX" in this.flags) || ruleCodesArray.indexOf(this.flags.NEEDAFFIX) === -1) {
addWord(word, ruleCodesArray);
}
for (var j = 0, _jlen = ruleCodesArray.length; j < _jlen; j++) {
var code = ruleCodesArray[j];
var rule = this.rules[code];
if (rule) {
var newWords = this._applyRule(word, rule);
for (var ii = 0, _iilen = newWords.length; ii < _iilen; ii++) {
var newWord = newWords[ii];
addWord(newWord, []);
if (rule.combineable) {
for (var k = j + 1; k < _jlen; k++) {
var combineCode = ruleCodesArray[k];
var combineRule = this.rules[combineCode];
if (combineRule) {
if (combineRule.combineable && (rule.type != combineRule.type)) {
var otherNewWords = this._applyRule(newWord, combineRule);
for (var iii = 0, _iiilen = otherNewWords.length; iii < _iiilen; iii++) {
var otherNewWord = otherNewWords[iii];
addWord(otherNewWord, []);
}
}
}
}
}
}
}
if (code in this.compoundRuleCodes) {
this.compoundRuleCodes[code].push(word);
}
}
}
else {
addWord(word.trim(), []);
}
}
return dictionaryTable;
},
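/*
The word/flags splitting in _parseDIC above can be sketched in isolation.
This is an illustrative helper, not part of the library's API: it drops the
morphological fields ("xx:abc") after the first whitespace and separates the
word from its raw affix-flag string.

```javascript
// Split a .dic entry into its word and its raw affix-flag string.
function splitDicLine(line) {
    // Morphological fields follow the first whitespace and are not used.
    var justWordAndFlags = line.replace(/\s.*$/, "");
    var parts = justWordAndFlags.split("/", 2);
    return { word: parts[0], flags: parts.length > 1 ? parts[1] : "" };
}
```
*/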
/**
* Removes comment lines and then cleans up blank lines and trailing whitespace.
*
* @param {string} data The data from a .dic file.
* @return {string} The cleaned-up data.
*/
_removeDicComments: function (data) {
// I can't find any official documentation on it, but at least the de_DE
// dictionary uses tab-indented lines as comments.
// Remove comments
data = data.replace(/^\t.*$/mg, "");
return data;
},
parseRuleCodes: function (textCodes) {
if (!textCodes) {
return [];
}
else if (!("FLAG" in this.flags)) {
// The flag symbols are single characters
return textCodes.split("");
}
else if (this.flags.FLAG === "long") {
// The flag symbols are two characters long.
var flags = [];
for (var i = 0, _len = textCodes.length; i < _len; i += 2) {
flags.push(textCodes.substr(i, 2));
}
return flags;
}
else if (this.flags.FLAG === "num") {
// The flag symbols are a CSV list of numbers.
return textCodes.split(",");
}
else if (this.flags.FLAG === "UTF-8") {
// The flags are single UTF-8 characters.
// @see https://github.com/cfinke/Typo.js/issues/57
return Array.from(textCodes);
}
else {
// It's possible that this fallback case will not work for all FLAG values,
// but I think it's more likely to work than not returning anything at all.
return textCodes.split("");
}
},
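/*
The four FLAG encodings handled above can be sketched standalone. This is an
illustrative re-implementation for reading, not the library's exported API;
flagMode stands in for the value of the FLAG directive in the .aff file:

```javascript
function splitFlags(textCodes, flagMode) {
    if (!textCodes) {
        return [];
    }
    if (flagMode === "long") {
        // Two characters per flag.
        var flags = [];
        for (var i = 0; i < textCodes.length; i += 2) {
            flags.push(textCodes.substr(i, 2));
        }
        return flags;
    }
    if (flagMode === "num") {
        // Comma-separated list of numbers.
        return textCodes.split(",");
    }
    // Default and "UTF-8": one character per flag (Array.from handles astral code points).
    return Array.from(textCodes);
}
```
*/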
/**
* Applies an affix rule to a word.
*
* @param {string} word The base word.
* @param {Object} rule The affix rule.
* @returns {string[]} The new words generated by the rule.
*/
_applyRule: function (word, rule) {
var entries = rule.entries;
var newWords = [];
for (var i = 0, _len = entries.length; i < _len; i++) {
var entry = entries[i];
if (!entry.match || word.match(entry.match)) {
var newWord = word;
if (entry.remove) {
newWord = newWord.replace(entry.remove, "");
}
if (rule.type === "SFX") {
newWord = newWord + entry.add;
}
else {
newWord = entry.add + newWord;
}
newWords.push(newWord);
if ("continuationClasses" in entry) {
for (var j = 0, _jlen = entry.continuationClasses.length; j < _jlen; j++) {
var continuationRule = this.rules[entry.continuationClasses[j]];
if (continuationRule) {
newWords = newWords.concat(this._applyRule(newWord, continuationRule));
}
/*
else {
// This shouldn't happen, but it does, at least in the de_DE dictionary.
// I think the author mistakenly supplied lower-case rule codes instead
// of upper-case.
}
*/
}
}
}
}
return newWords;
},
/**
* Checks whether a word or a capitalization variant exists in the current dictionary.
* The word is trimmed and several variations of capitalizations are checked.
* If you want to check a word without any changes made to it, call checkExact()
*
* @see http://blog.stevenlevithan.com/archives/faster-trim-javascript re:trimming function
*
* @param {string} aWord The word to check.
* @returns {boolean}
*/
check: function (aWord) {
if (!this.loaded) {
throw "Dictionary not loaded.";
}
if (!aWord) {
return false;
}
// Remove leading and trailing whitespace
var trimmedWord = aWord.replace(/^\s\s*/, '').replace(/\s\s*$/, '');
if (this.checkExact(trimmedWord)) {
return true;
}
// The exact word is not in the dictionary.
if (trimmedWord.toUpperCase() === trimmedWord) {
// The word was supplied in all uppercase.
// Check for a capitalized form of the word.
var capitalizedWord = trimmedWord[0] + trimmedWord.substring(1).toLowerCase();
if (this.hasFlag(capitalizedWord, "KEEPCASE")) {
// Capitalization variants are not allowed for this word.
return false;
}
if (this.checkExact(capitalizedWord)) {
// The all-caps word is a capitalized word spelled correctly.
return true;
}
if (this.checkExact(trimmedWord.toLowerCase())) {
// The all-caps word is a lowercase word spelled correctly.
return true;
}
}
var uncapitalizedWord = trimmedWord[0].toLowerCase() + trimmedWord.substring(1);
if (uncapitalizedWord !== trimmedWord) {
if (this.hasFlag(uncapitalizedWord, "KEEPCASE")) {
// Capitalization variants are not allowed for this word.
return false;
}
// Check for an uncapitalized form
if (this.checkExact(uncapitalizedWord)) {
// The word is spelled correctly but with the first letter capitalized.
return true;
}
}
return false;
},
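/*
The capitalization fallbacks in check() can be summarized standalone: an
all-uppercase input also matches its capitalized and lowercase forms, and a
capitalized input also matches its lowercase form. This helper is an
illustrative sketch of the variants check() tries (it ignores KEEPCASE):

```javascript
function caseVariants(word) {
    var variants = [word];
    if (word.toUpperCase() === word) {
        // "KITTEN" -> "Kitten", "kitten"
        variants.push(word[0] + word.substring(1).toLowerCase());
        variants.push(word.toLowerCase());
    } else if (word[0].toLowerCase() + word.substring(1) !== word) {
        // "Kitten" -> "kitten"
        variants.push(word[0].toLowerCase() + word.substring(1));
    }
    return variants;
}
```
*/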
/**
* Checks whether a word exists in the current dictionary.
*
* @param {string} word The word to check.
* @returns {boolean}
*/
checkExact: function (word) {
if (!this.loaded) {
throw "Dictionary not loaded.";
}
var ruleCodes = this.dictionaryTable[word];
var i, _len;
if (typeof ruleCodes === 'undefined') {
// Check if this might be a compound word.
if ("COMPOUNDMIN" in this.flags && word.length >= this.flags.COMPOUNDMIN) {
for (i = 0, _len = this.compoundRules.length; i < _len; i++) {
if (word.match(this.compoundRules[i])) {
return true;
}
}
}
}
else if (ruleCodes === null) {
// a null (but not undefined) value for an entry in the dictionary table
// means that the word is in the dictionary but has no flags.
return true;
}
else if (typeof ruleCodes === 'object') { // Guard: this.dictionaryTable['hasOwnProperty'] would be a function, not a flag array.
for (i = 0, _len = ruleCodes.length; i < _len; i++) {
if (!this.hasFlag(word, "ONLYINCOMPOUND", ruleCodes[i])) {
return true;
}
}
}
return false;
},
/**
* Looks up whether a given word is flagged with a given flag.
*
* @param {string} word The word in question.
* @param {string} flag The flag in question.
* @param {string[]} [wordFlags] The word's flags, if already computed; looked up from the dictionary table when omitted.
* @return {boolean}
*/
hasFlag: function (word, flag, wordFlags) {
if (!this.loaded) {
throw "Dictionary not loaded.";
}
if (flag in this.flags) {
if (typeof wordFlags === 'undefined') {
wordFlags = Array.prototype.concat.apply([], this.dictionaryTable[word]);
}
if (wordFlags && wordFlags.indexOf(this.flags[flag]) !== -1) {
return true;
}
}
return false;
},
alphabet: "",
/**
 * Returns a list of suggestions for a misspelled word.
 *
 * @see http://www.norvig.com/spell-correct.html for the basis of this suggestor.
 * This suggestor is primitive, but it works.
 *
 * @param {string} word The misspelling.
 * @param {number} [limit=5] The maximum number of suggestions to return.
 * @returns {string[]} The array of suggestions.
 */
suggest: function (word, limit) {
if (!this.loaded) {
throw "Dictionary not loaded.";
}
limit = limit || 5;
if (this.memoized.hasOwnProperty(word)) {
var memoizedLimit = this.memoized[word]['limit'];
// Only return the cached list if it's big enough or if there weren't enough suggestions
// to fill a smaller limit.
if (limit <= memoizedLimit || this.memoized[word]['suggestions'].length < memoizedLimit) {
return this.memoized[word]['suggestions'].slice(0, limit);
}
}
if (this.check(word))
return [];
// Check the replacement table.
for (var i = 0, _len = this.replacementTable.length; i < _len; i++) {
var replacementEntry = this.replacementTable[i];
if (word.indexOf(replacementEntry[0]) !== -1) {
var correctedWord = word.replace(replacementEntry[0], replacementEntry[1]);
if (this.check(correctedWord)) {
return [correctedWord];
}
}
}
if (!this.alphabet) {
// Use the English alphabet as the default. Problematic, but backwards-compatible.
this.alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
// Any characters defined in the affix file as substitutions can go in the alphabet too.
// Note that dictionaries do not include the entire alphabet in the TRY flag when it's there.
// For example, Q is not in the default English TRY list; that's why having the default
// alphabet above is useful.
if ('TRY' in this.flags) {
this.alphabet += this.flags['TRY'];
}
// Plus any additional characters specifically defined as being allowed in words.
if ('WORDCHARS' in this.flags) {
this.alphabet += this.flags['WORDCHARS'];
}
// Remove any duplicates.
var alphaArray = this.alphabet.split("");
alphaArray.sort();
var alphaHash = {};
for (var i = 0; i < alphaArray.length; i++) {
alphaHash[alphaArray[i]] = true;
}
this.alphabet = '';
for (var i in alphaHash) {
this.alphabet += i;
}
}
var self = this;
/**
 * Returns a hash keyed by every string that can be made by applying a single edit to any of the words in `words`.
 * The value of each entry is the number of distinct ways that string can be produced.
 *
 * @arg HashMap words A hash keyed by words (all with the value `true` to make lookups very quick).
 * @arg boolean known_only Whether this function should ignore strings that are not in the dictionary.
 */
function edits1(words, known_only) {
var rv = {};
var i, j, _iilen, _len, _jlen, _edit;
var alphabetLength = self.alphabet.length;
for (var word_1 in words) {
for (i = 0, _len = word_1.length + 1; i < _len; i++) {
var s = [word_1.substring(0, i), word_1.substring(i)];
// Remove a letter.
if (s[1]) {
_edit = s[0] + s[1].substring(1);
if (!known_only || self.check(_edit)) {
if (!(_edit in rv)) {
rv[_edit] = 1;
}
else {
rv[_edit] += 1;
}
}
}
// Transpose letters
// Eliminate transpositions of identical letters
if (s[1].length > 1 && s[1][1] !== s[1][0]) {
_edit = s[0] + s[1][1] + s[1][0] + s[1].substring(2);
if (!known_only || self.check(_edit)) {
if (!(_edit in rv)) {
rv[_edit] = 1;
}
else {
rv[_edit] += 1;
}
}
}
if (s[1]) {
// Replace a letter with another letter.
var lettercase = (s[1].substring(0, 1).toUpperCase() === s[1].substring(0, 1)) ? 'uppercase' : 'lowercase';
for (j = 0; j < alphabetLength; j++) {
var replacementLetter = self.alphabet[j];
// Set the case of the replacement letter to the same as the letter being replaced.
if ('uppercase' === lettercase) {
replacementLetter = replacementLetter.toUpperCase();
}
// Eliminate replacement of a letter by itself
if (replacementLetter != s[1].substring(0, 1)) {
_edit = s[0] + replacementLetter + s[1].substring(1);
if (!known_only || self.check(_edit)) {
if (!(_edit in rv)) {
rv[_edit] = 1;
}
else {
rv[_edit] += 1;
}
}
}
}
}
if (s[1]) {
// Add a letter between each letter.
for (j = 0; j < alphabetLength; j++) {
// If the letters on each side are capitalized, capitalize the replacement.
var lettercase = (s[0].slice(-1).toUpperCase() === s[0].slice(-1) && s[1].substring(0, 1).toUpperCase() === s[1].substring(0, 1)) ? 'uppercase' : 'lowercase';
var replacementLetter = self.alphabet[j];
if ('uppercase' === lettercase) {
replacementLetter = replacementLetter.toUpperCase();
}
_edit = s[0] + replacementLetter + s[1];
if (!known_only || self.check(_edit)) {
if (!(_edit in rv)) {
rv[_edit] = 1;
}
else {
rv[_edit] += 1;
}
}
}
}
}
}
return rv;
}
function correct(word) {
// Get the edit-distance-1 and edit-distance-2 forms of this word.
var initial = {};
initial[word] = true;
var ed1 = edits1(initial);
var ed2 = edits1(ed1, true);
// Sort the edits based on how many different ways they were created.
var weighted_corrections = ed2;
for (var ed1word in ed1) {
if (!self.check(ed1word)) {
continue;
}
if (ed1word in weighted_corrections) {
weighted_corrections[ed1word] += ed1[ed1word];
}
else {
weighted_corrections[ed1word] = ed1[ed1word];
}
}
var i, _len;
var sorted_corrections = [];
for (i in weighted_corrections) {
if (weighted_corrections.hasOwnProperty(i)) {
if (self.hasFlag(i, "PRIORITYSUGGEST")) {
// We've defined a new affix rule called PRIORITYSUGGEST, indicating that
// if this word is in the suggestions list for a misspelled word, it should
// be given priority over other suggestions.
//
// Add a large number to its weight to push it to the top of the list.
// If multiple priority suggestions are in the list, they'll still be ranked
// against each other, but they'll all be above non-priority suggestions.
weighted_corrections[i] += 1000;
}
sorted_corrections.push([i, weighted_corrections[i]]);
}
}
function sorter(a, b) {
var a_val = a[1];
var b_val = b[1];
if (a_val < b_val) {
return -1;
}
else if (a_val > b_val) {
return 1;
}
// @todo If a and b are equally weighted, add our own weight based on something like the key locations on this language's default keyboard.
return b[0].localeCompare(a[0]);
}
sorted_corrections.sort(sorter).reverse();
var rv = [];
var capitalization_scheme = "lowercase";
if (word.toUpperCase() === word) {
capitalization_scheme = "uppercase";
}
else if (word.substr(0, 1).toUpperCase() + word.substr(1).toLowerCase() === word) {
capitalization_scheme = "capitalized";
}
var working_limit = limit;
for (i = 0; i < Math.min(working_limit, sorted_corrections.length); i++) {
if ("uppercase" === capitalization_scheme) {
sorted_corrections[i][0] = sorted_corrections[i][0].toUpperCase();
}
else if ("capitalized" === capitalization_scheme) {
sorted_corrections[i][0] = sorted_corrections[i][0].substr(0, 1).toUpperCase() + sorted_corrections[i][0].substr(1);
}
if (!self.hasFlag(sorted_corrections[i][0], "NOSUGGEST") && rv.indexOf(sorted_corrections[i][0]) === -1) {
rv.push(sorted_corrections[i][0]);
}
else {
// If one of the corrections is not eligible as a suggestion, make sure we still return the right number of suggestions.
working_limit++;
}
}
return rv;
}
this.memoized[word] = {
'suggestions': correct(word),
'limit': limit
};
return this.memoized[word]['suggestions'];
}
};
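/*
The suggestion engine above follows Norvig's spell-corrector: generate every
string one edit away (deletion, transposition, replacement, insertion), repeat
once more for edit distance two, and rank dictionary words by how many ways
they were produced. A minimal standalone sketch of the deletion and
transposition passes (illustrative only, not the library's implementation):

```javascript
function deletionsAndTranspositions(word) {
    var out = [];
    for (var i = 0; i <= word.length; i++) {
        var head = word.substring(0, i);
        var tail = word.substring(i);
        // Remove one letter.
        if (tail) {
            out.push(head + tail.substring(1));
        }
        // Swap two adjacent, distinct letters.
        if (tail.length > 1 && tail[0] !== tail[1]) {
            out.push(head + tail[1] + tail[0] + tail.substring(2));
        }
    }
    return out;
}
```
*/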
})();
// Support for use as a node.js module.
if (typeof module !== 'undefined') {
module.exports = Typo;
}