Website Structure
This commit is contained in:
parent
62812f2090
commit
71f0676a62
22365 changed files with 4265753 additions and 791 deletions
22  Frontend-Learner/node_modules/parse-statements/LICENSE  (generated, vendored, new file)
@@ -0,0 +1,22 @@
(The MIT License)

Copyright (c) 2023-2025 SIA Joom

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
375  Frontend-Learner/node_modules/parse-statements/README.md  (generated, vendored, new file)
@@ -0,0 +1,375 @@
# parse-statements ✂️

[![NPM version][npm-image]][npm-url]
[![dependencies: none][dependencies-none-image]][dependencies-none-url]
[![minzipped size][size-image]][size-url]
[![code style: prettier][prettier-image]][prettier-url]
[![Conventional Commits][conventional-commits-image]][conventional-commits-url]
[![License MIT][license-image]][license-url]

A fast and easy parser of statements in source code in any language.

`parse-statements` ✂️ lets you parse statements consisting of a sequence of tokens
with arbitrary text between them. Statements cannot overlap.

In addition to statements, language comments can be described; these can also appear
inside statements (between its neighboring tokens).

Tokens are described (found) by strings, from which regexps with the `gmu` flags are
generated (so any backslash in these strings must be escaped, that is, doubled).
For each parsed statement, the optional `onParse` callback is called with the context,
the source code (a string), and an array of the statement's parsed tokens
(each carrying an array of the comments between that token and the next one, if any).

If the statement's sequence of tokens is not completed, an `onError` callback with
the same signature is called instead of `onParse`, receiving the incomplete sequence
of parsed tokens.

Similar optional callbacks can be set for comments.

Callbacks for statements (only for statements, not for comments) can return a number
instead of `undefined`; that number is then used as the index in the source code
from which the parser will look for the next statement.

In effect, this index is interpreted as the end of the statement. By default,
the end of a statement coincides with the end of its last token,
but sometimes we need to go beyond the boundaries of the found tokens
(or, conversely, shorten the statement, that is, reduce its end index).

With such manual parsing, if we increase the statement's end index,
we must remember to parse by hand any comments that may appear
in that part of the statement, because the parser itself will not do this.
It continues from the new end of the statement as usual.
## Basic example

Below is a simplified example ([see on TypeScript Playground](https://www.typescriptlang.org/play?#code=JYWwDg9gTgLgBAbwMZQKYEMaoArqgZ1QDEBXAOyRmAjIF84AzKCEOAcjD0IFp8ZNUIVGRj42AbgBQk0JFhwYATzCpEAeTIBhFkJEBRKMygAaOBu0hdMXAVSmNNwvSYt2nW735YrYqZKQ0fAoAFswA7gZGcAC8cAAUQvj46ADmqABccHxQwGQpAJSZAG4QwAAmMQB8iJJwIeFwZKhhcJHQCahJqaj5UrR+Sipw2iKoAB7wsQBKGGU0ADaKADwItXCohtD4meQA1mQQYWQA2gC6UnXjcqKZx1fQN1kwOXmmAHQfAZbCj9m5KWdTmcLnBZA9tnBjmDYJk-q84B83l8fLDnv9AcC1iASPMqPNcqgLCini8Aec1vh-vNUPimkSfhC4WS+pU-AEyEE0jB6SIAMoQEhQJCqWJxCkCoUZEn-YxrTjAKCZNDoOZkRaQhDCMqZMgkEAAIw2tFMCD4eBgOr1hqgtFOssK0ryVSyEuFb3w+OFcXlUGOAAZTm8taYfccAIyBs2wXrSdlBGg8mBtRVmLQ6H7JpYjLATaqigD67JzMFM+FddkQUZgtHyztWdRgoUOybiZcFbo9wC9VfyMf6scC8AT6ZEjil5hH1i4qCzNGLefipqpNIJifwxpd7YrmrIZQ3pv4sBrdYpy9phMn+DeYBI+GCrfL7s9qDiwae5t7fTZg7gNGTmQcadZ1GXMYkXDYjHXUty3eD4YAgXZhHwWtomqet1k2Ahr1ve820lJ8uxfeDEI5f1TgAQndQ8SwUBCkOOYikLeak8kbOBuDgCNKK1OAAGo4AAZj9T9JH7fwfz-MZrjHACyDHYDi1MASFzFOpNSk8FjXFLdZUuDTYF5ajdPWfSYAAGWAPg9F3Yz7lgayyntE86jjeA7JgQzzTXMD3M82AkUvAB+N4QHQMA4m9dAFRQ6ouUTfktwfHS4B9ESXJ-dyLL4bzYkyyyk13ALvhEfBgtC8LIui504snBLJSSyUQyiqARLWdyrxvO84mONY6jwjtn1fUy-JgIMbJM64HKoj9jLqREhuuEbvIAH2WyFTnyWaEQ+BaHiy7lLzgVb1s2tYNq-AcOSHMgAElwAeGTU3k7Nxho5SwNUxBoVELS+pgtZvpG4zvqIZgQGB+77JsyQYpqdKrtBSGPOonLEcW6iip8Mqwoi1LqtQA7io88sGuFJropjeGghBsHUZplhMYZbGKrx1C4Bqom6u7GCUua1q6m+jqcO63rN3wzsvUBjG32+qaey2+apa8w7jrOU66jmnb6ZAFa1rV4zztE78EfcQg7uufA9FM-AwJQDAsDHUgKCoGgFNeyo4nQ5EGVuUX0I138yH-QPE2TLa6hoR7hyJsdw9okiIWONgAB1k4AelTtO2FMNgABJU7IQK2DtUXfo1-2Nb-TDMmjqww9FiO5OnTJPexXFgHPNcN367ctX3KtjzZiuA7bvFV0vbCup7givTfHtKYDuAy4DxiOVuFP09TgAqbP2G3zPi62suS8b4O4kLOdXtMfMe9MRJkjSWHG3CFt7+6dX328H3IT9hu4CQdAt0KDzBIGUC8RMITPBIHYP+VcjBx0js3QO5sHrTjgOgG2gFbBx1XonNgAA9b6qd9S7w3vqFwIBiGkPzsnQuh8-53gFPMMovIMBCmCAAIVQAwaA4DiRQJgQHZecNF4AKAUgEBYC1yZAEQgoOmE5FRyDqZMc6DMFN2wX-XB698HuSoTnVOtBqEFyLifRejCcQsLYUgTh3DeHSIUFAaBR8DayhrMbamSNLbW0yC9CYYF0IQS2LcMx7UQkQwtuErEOIx50kvFEvqZ5x4QKiWJU2qAUGwG8RbT6gsrYW2MgAAxkEjRAPCIDODBuwN4adykSBKdcOA+o8CMCqWwGpzSoBsGkGnNOcAAAqwRLKghtugf+k5pDfTgGnLeLohDjKJnALefTOmtNcJ0vwbB0DzDLGjB4azWAbO6Q0-ZqyKHVLTs0gAXscqZYRgBsSCVAKQUzekLKsKCMgez5D3MbAKeAFC-DuTKRACp0zZnbN2d7EQ9BllSGBTM95PxPkTQeLCvpCBOm0EkIU+0HiIDUmYhAFIcQ8nWxjEAA))
of parsing `import` and `export` statements in ECMAScript
(a complete example can be found [here](https://github.com/joomcode/parse-imports-exports/blob/main/src/index.ts)):
```ts
import {createParseFunction} from 'parse-statements';

import type {OnCommentError, OnCommentParse, OnParse} from 'parse-statements';

const throwError = (message: string): void => {
  throw new Error(message);
};

type Context = Readonly<{
  errors: unknown[];
  exports: [exports: string, ...comments: string[]][];
  imports: [import: string, ...comments: string[]][];
  multilineComments: string[];
  singlelineComments: string[];
}>;

const getCommentSource = (
  source: string,
  pair: readonly [{end: number}, {start: number}],
): string => source.slice(pair[0].end, pair[1].start);

const onCommentError: OnCommentError<Context> = (_context, source, {start}) => {
  throwError(source.slice(start));
};

const onCommentParse: OnCommentParse<Context> = ({singlelineComments}, source, {end}, {start}) => {
  singlelineComments.push(source.slice(end, start));
};

const onError: OnParse<Context> = ({errors}, source, ...tokens) => {
  errors.push(source.slice(tokens[0]!.start, tokens[tokens.length - 1]!.end + 30));
};

const onExportParse: OnParse<Context, 3> = (
  {exports},
  source,
  exportStart,
  exportListEnd,
  exportEnd,
) => {
  const exportStartComments = exportStart.comments?.map((pair) => getCommentSource(source, pair));
  const exportListComments = exportListEnd.comments?.map((pair) => getCommentSource(source, pair));

  exports.push([
    source.slice(exportStart.end, exportEnd.start),
    ...(exportStartComments || []),
    ...(exportListComments || []),
  ]);
};

const onImportParse: OnParse<Context, 3> = (
  {imports},
  source,
  importStart,
  importFrom,
  importEnd,
) => {
  const importStartComments = importStart.comments?.map((pair) => getCommentSource(source, pair));
  const importFromComments = importFrom.comments?.map((pair) => getCommentSource(source, pair));

  imports.push([
    source.slice(importStart.end, importEnd.start),
    ...(importStartComments || []),
    ...(importFromComments || []),
  ]);
};

const parseImportsExports = createParseFunction<Context>({
  comments: [
    {
      onError: onCommentError,
      onParse: onCommentParse,
      tokens: ['\\/\\/', '$\\n?'],
    },
    {
      onError: onCommentError,
      onParse: ({multilineComments}, source, {end}, {start}) => {
        multilineComments.push(source.slice(end, start));
      },
      tokens: ['\\/\\*', '\\*\\/'],
    },
  ],
  onError: (_context, _source, message) => throwError(message),
  statements: [
    {
      canIncludeComments: true,
      onError,
      onParse: onImportParse as OnParse,
      tokens: ['^import\\b', '\\bfrom\\b', '$\\n?'],
      shouldSearchBeforeComments: true,
    },
    {
      canIncludeComments: true,
      onError,
      onParse: onExportParse as OnParse,
      tokens: ['^export\\b', '\\}', '$\\n?'],
      shouldSearchBeforeComments: true,
    },
  ],
});

const importsExports: Context = {
  errors: [],
  exports: [],
  imports: [],
  multilineComments: [],
  singlelineComments: [],
};

parseImportsExports(
  importsExports,
  `
import {foo} from './foo';
import bar from './bar'

// This is a comment

import /* some comment */ bar from bar;

'also import from bar;'

import bar from './baz'

import with error;
import // comment in import without from;

export {foo} /* also comment} */;
export /* comment in export} */ {bar}
`,
);

console.log(importsExports);
```
## Install

Requires [node](https://nodejs.org/en/) version 10 or higher:

```sh
npm install parse-statements
```

`parse-statements` ✂️ works in any environment that supports ES2018
(because the package uses [RegExp Named Capture Groups](https://github.com/tc39/proposal-regexp-named-groups)).
## API

`parse-statements` ✂️ exports one runtime value, the `createParseFunction` function:

```ts
import {createParseFunction} from 'parse-statements';

type Context = ...; // some type

const parse = createParseFunction<Context>(options);

const context: Context = ...;

parse(context, 'some source code (as string)');
```

The `options` object defines comments, statements, and a global error callback handler
(all of these fields are optional):

```ts
import type {Comment, OnGlobalError, Options, ParsedToken, Statement} from 'parse-statements';

const options: Options<Context> = {
  comments, // an optional array of comments
  onError, // an optional callback handler for global parsing errors
  statements, // an optional array of statements
};

const comments: readonly Comment<Context>[] = [
  {
    onError(
      context: Context,
      source: string,
      parsedToken: {start: number; end: number; match: RegExpExecArray; token: string},
    ) {
      // An optional callback handler, called if the closing comment token
      // was not found after the opening one.
      // Parsing continues from the point immediately after the opening token.
    },
    onParse(
      context: Context,
      source: string,
      openParsedToken: {start: number; end: number; match: RegExpExecArray; token: string},
      closeParsedToken: {start: number; end: number; match: RegExpExecArray; token: string},
    ) {
      // An optional comment callback handler for putting something into the context.
      // It is called when parsing of the comment is completed,
      // that is, when its closing token has been parsed.
      // The handler receives the opening and closing parsed tokens.
      // Parsing continues from the point immediately after the closing token.
    },
    // Opening and closing tokens of the comment
    // (converted to regexps using the `RegExp` constructor).
    tokens: ['open raw token', 'close raw token'],
  },
];

const onError: OnGlobalError<Context> = (
  context: Context,
  source: string,
  message: string,
  index: number,
) => {
  // An optional callback handler, called on global parsing errors.
};

const statements: readonly Statement<Context>[] = [
  {
    // If `true`, comments are parsed inside the statement (between its parts).
    canIncludeComments: true,
    onError(
      context: Context,
      source: string,
      firstParsedToken: ParsedToken & {comments?: [ParsedToken, ParsedToken][]},
      secondParsedToken: ParsedToken & {comments?: [ParsedToken, ParsedToken][]},
      // ...,
      lastParsedToken: ParsedToken,
    ) {
      // An optional callback handler, called if parsing the statement failed, that is,
      // parsing started with the first statement token, but some later token was not found.
      // The handler receives all statement tokens parsed so far.
      // If there were comments between a token and the next one, they are attached
      // to the parsed token object as a separate `comments` property
      // (so the last parsed token cannot have comments).
      // Parsing continues from the point immediately after the last parsed token.
    },
    onParse(
      context: Context,
      source: string,
      firstParsedToken: ParsedToken & {comments?: [ParsedToken, ParsedToken][]},
      secondParsedToken: ParsedToken & {comments?: [ParsedToken, ParsedToken][]},
      // ...,
      lastParsedToken: ParsedToken,
    ) {
      // An optional statement callback handler for putting something into the context.
      // It is called when parsing of the statement is completed,
      // that is, when its last token has been parsed.
      // The handler receives all parsed statement tokens.
      // If there were comments between a token and the next one, they are attached
      // to the parsed token object as a separate `comments` property
      // (so the last parsed token cannot have comments).
      // Parsing continues from the point immediately after the last statement token.
    },
    // Non-empty array of raw statement tokens
    // (converted to regexps using the `RegExp` constructor).
    // A statement can have any positive number of tokens.
    tokens: ['first raw token', 'second raw token'],
    // If `true`, the statement's first token is searched for before the comment tokens,
    // otherwise after them. This can affect parsing because if several different tokens
    // (first tokens of statements or opening comment tokens) are found
    // at the same position in the source, only the first one is selected and parsed.
    shouldSearchBeforeComments: true,
  },
];
```
`parse-statements` ✂️ also exports all types included in the API:

```ts
export type {
  /**
   * Description of a comment as callback handlers and open and close tokens.
   */
  Comment,
  /**
   * Pair of the comment's open and close tokens (raw or parsed).
   */
  CommentPair,
  /**
   * `onError` callback handler for an error during comment parsing.
   */
  OnCommentError,
  /**
   * `onParse` callback handler of a comment.
   */
  OnCommentParse,
  /**
   * Global `onError` callback handler for a parsing error.
   */
  OnGlobalError,
  /**
   * `onParse` callback handler of a statement with a concrete length (number of tokens).
   */
  OnParse,
  /**
   * Options of the `createParseFunction` function.
   */
  Options,
  /**
   * Parse function.
   */
  Parse,
  /**
   * The result of parsing a token.
   */
  ParsedToken,
  /**
   * Description of a statement as callback handlers and a sequence of tokens.
   */
  Statement,
};
```
## License

[MIT][license-url]

[conventional-commits-image]: https://img.shields.io/badge/Conventional_Commits-1.0.0-yellow.svg 'The Conventional Commits specification'
[conventional-commits-url]: https://www.conventionalcommits.org/en/v1.0.0/
[dependencies-none-image]: https://img.shields.io/badge/dependencies-none-success.svg 'No dependencies'
[dependencies-none-url]: https://github.com/joomcode/parse-statements/blob/main/package.json
[license-image]: https://img.shields.io/badge/license-MIT-blue.svg 'The MIT License'
[license-url]: LICENSE
[npm-image]: https://img.shields.io/npm/v/parse-statements.svg 'parse-statements'
[npm-url]: https://www.npmjs.com/package/parse-statements
[prettier-image]: https://img.shields.io/badge/code_style-prettier-ff69b4.svg 'Prettier code formatter'
[prettier-url]: https://prettier.io/
[size-image]: https://img.shields.io/bundlephobia/minzip/parse-statements 'parse-statements'
[size-url]: https://bundlephobia.com/package/parse-statements
67  Frontend-Learner/node_modules/parse-statements/getPreparedOptions.cjs  (generated, vendored, new file)
@@ -0,0 +1,67 @@
'use strict';
exports.getPreparedOptions = undefined;
/**
 * Get internal prepared options from public options.
 */
const getPreparedOptions = exports.getPreparedOptions = ({ comments = [], onError, statements = [] }) => {
    const commentsKeys = [];
    const firstTokens = [];
    const firstTokensAfterComments = [];
    let keyIndex = 1;
    const openTokens = [];
    const preparedComments = { __proto__: null };
    const preparedStatements = { __proto__: null };
    const statementsKeys = [];
    for (const { onError, onParse, tokens: [open, close] } of comments) {
        const closeRegExp = createRegExp(['', close]);
        const key = `parseStatementsPackageComment${keyIndex++}`;
        commentsKeys.push(key);
        openTokens.push([key, open]);
        preparedComments[key] = { closeRegExp, onError, onParse };
    }
    for (const { canIncludeComments, onError, onParse, tokens: [firstToken, ...restTokens], shouldSearchBeforeComments } of statements) {
        const statementKey = `parseStatementsPackageStatement${keyIndex++}`;
        const tokens = [];
        (shouldSearchBeforeComments ? firstTokens : firstTokensAfterComments).push([
            statementKey,
            firstToken,
        ]);
        statementsKeys.push(statementKey);
        for (const nextToken of restTokens) {
            const nextTokenKey = `parseStatementsPackageStatementPart${keyIndex++}`;
            const regexpTokens = [[nextTokenKey, nextToken]];
            if (canIncludeComments) {
                regexpTokens[shouldSearchBeforeComments ? 'push' : 'unshift'](...openTokens);
            }
            const nextTokenRegExp = createRegExp(...regexpTokens);
            tokens.push({ nextTokenKey, nextTokenRegExp });
        }
        preparedStatements[statementKey] = { onError, onParse, tokens };
    }
    const nextStatementRegExp = createRegExp(...firstTokens, ...openTokens, ...firstTokensAfterComments);
    return {
        commentsKeys,
        nextStatementRegExp,
        onError,
        preparedComments,
        preparedStatements,
        statementsKeys,
    };
};
/**
 * Creates a regexp from tokens.
 */
const createRegExp = (...tokens) => {
    if (!tokens[0]) {
        return emptyRegExp;
    }
    let source = tokens[0][1];
    if (tokens[0][0] !== '') {
        source = tokens.map(([key, token]) => `(?<${key}>${token})`).join('|');
    }
    return new RegExp(source, 'gmu');
};
/**
 * Empty regexp that matches only the empty string.
 */
const emptyRegExp = /^$/g;
65  Frontend-Learner/node_modules/parse-statements/getPreparedOptions.js  (generated, vendored, new file)
@@ -0,0 +1,65 @@
/**
 * Get internal prepared options from public options.
 */
export const getPreparedOptions = ({ comments = [], onError, statements = [] }) => {
    const commentsKeys = [];
    const firstTokens = [];
    const firstTokensAfterComments = [];
    let keyIndex = 1;
    const openTokens = [];
    const preparedComments = { __proto__: null };
    const preparedStatements = { __proto__: null };
    const statementsKeys = [];
    for (const { onError, onParse, tokens: [open, close] } of comments) {
        const closeRegExp = createRegExp(['', close]);
        const key = `parseStatementsPackageComment${keyIndex++}`;
        commentsKeys.push(key);
        openTokens.push([key, open]);
        preparedComments[key] = { closeRegExp, onError, onParse };
    }
    for (const { canIncludeComments, onError, onParse, tokens: [firstToken, ...restTokens], shouldSearchBeforeComments } of statements) {
        const statementKey = `parseStatementsPackageStatement${keyIndex++}`;
        const tokens = [];
        (shouldSearchBeforeComments ? firstTokens : firstTokensAfterComments).push([
            statementKey,
            firstToken,
        ]);
        statementsKeys.push(statementKey);
        for (const nextToken of restTokens) {
            const nextTokenKey = `parseStatementsPackageStatementPart${keyIndex++}`;
            const regexpTokens = [[nextTokenKey, nextToken]];
            if (canIncludeComments) {
                regexpTokens[shouldSearchBeforeComments ? 'push' : 'unshift'](...openTokens);
            }
            const nextTokenRegExp = createRegExp(...regexpTokens);
            tokens.push({ nextTokenKey, nextTokenRegExp });
        }
        preparedStatements[statementKey] = { onError, onParse, tokens };
    }
    const nextStatementRegExp = createRegExp(...firstTokens, ...openTokens, ...firstTokensAfterComments);
    return {
        commentsKeys,
        nextStatementRegExp,
        onError,
        preparedComments,
        preparedStatements,
        statementsKeys,
    };
};
/**
 * Creates a regexp from tokens.
 */
const createRegExp = (...tokens) => {
    if (!tokens[0]) {
        return emptyRegExp;
    }
    let source = tokens[0][1];
    if (tokens[0][0] !== '') {
        source = tokens.map(([key, token]) => `(?<${key}>${token})`).join('|');
    }
    return new RegExp(source, 'gmu');
};
/**
 * Empty regexp that matches only the empty string.
 */
const emptyRegExp = /^$/g;
193  Frontend-Learner/node_modules/parse-statements/index.cjs  (generated, vendored, new file)
@@ -0,0 +1,193 @@
'use strict';
|
||||
exports.createParseFunction = undefined;
|
||||
const { getPreparedOptions } = require('./getPreparedOptions.cjs');
|
||||
/**
|
||||
* Creates parse function by comments and statements.
|
||||
*/
|
||||
const createParseFunction = exports.createParseFunction = (options) => {
|
||||
var { commentsKeys, nextStatementRegExp, onError: onGlobalError, preparedComments, preparedStatements, statementsKeys, } = getPreparedOptions(options);
|
||||
const parse = (context, source) => {
|
||||
var _a, _b, _c, _d, _e, _f;
|
||||
var index = 0;
|
||||
var parsedComments;
|
||||
var previousIndex;
|
||||
findNextStatement: while (index < source.length) {
|
||||
if (index === previousIndex) {
|
||||
index += 1;
|
||||
continue findNextStatement;
|
||||
}
|
||||
previousIndex = index;
|
||||
nextStatementRegExp.lastIndex = index;
|
||||
const nextStatementMatch = nextStatementRegExp.exec(source);
|
||||
if (nextStatementMatch === null) {
|
||||
return;
|
||||
}
|
||||
for (const key of statementsKeys) {
|
||||
const token = (_a = nextStatementMatch.groups) === null || _a === void 0 ? void 0 : _a[key];
|
||||
if (token === undefined) {
|
||||
continue;
|
||||
}
|
||||
const parsedTokens = [];
|
||||
const { onError, onParse, tokens } = preparedStatements[key];
|
||||
index = nextStatementRegExp.lastIndex;
|
||||
let lastParsedToken = {
|
||||
start: nextStatementMatch.index,
|
||||
end: index,
|
||||
match: nextStatementMatch,
|
||||
token,
|
||||
};
|
||||
parsedTokens.push(lastParsedToken);
|
||||
for (const { nextTokenRegExp, nextTokenKey } of tokens) {
|
||||
let previousTokensIndex;
|
||||
let tokensIndex = index;
|
||||
findNextToken: while (tokensIndex < source.length) {
|
||||
if (tokensIndex === previousTokensIndex) {
|
||||
tokensIndex += 1;
|
||||
continue findNextToken;
|
||||
}
|
||||
previousTokensIndex = tokensIndex;
|
||||
nextTokenRegExp.lastIndex = tokensIndex;
|
||||
const nextTokenMatch = nextTokenRegExp.exec(source);
|
||||
if (nextTokenMatch === null) {
|
||||
if (parsedComments === undefined) {
|
||||
parsedComments = {};
|
||||
for (const commentPair of lastParsedToken.comments || emptyComments) {
|
||||
parsedComments[commentPair[0].start] = commentPair;
|
||||
}
|
||||
}
|
||||
delete lastParsedToken.comments;
|
||||
const maybeIndex = onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
|
||||
if (maybeIndex !== undefined) {
|
||||
index = maybeIndex;
|
||||
}
|
||||
continue findNextStatement;
|
||||
}
|
||||
const nextToken = (_b = nextTokenMatch.groups) === null || _b === void 0 ? void 0 : _b[nextTokenKey];
|
||||
if (nextToken !== undefined) {
|
||||
index = nextTokenRegExp.lastIndex;
|
||||
lastParsedToken = {
|
||||
start: nextTokenMatch.index,
|
||||
end: index,
|
||||
match: nextTokenMatch,
|
||||
token: nextToken,
|
||||
};
|
||||
parsedTokens.push(lastParsedToken);
|
||||
break findNextToken;
|
||||
}
|
||||
for (const commentKey of commentsKeys) {
|
||||
const commentToken = (_c = nextTokenMatch.groups) === null || _c === void 0 ? void 0 : _c[commentKey];
|
||||
if (commentToken === undefined) {
|
||||
continue;
|
||||
}
|
||||
if (parsedComments !== undefined) {
|
||||
const commentPair = parsedComments[nextTokenMatch.index];
|
||||
if (commentPair === undefined) {
|
||||
onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find already parsed comment in statement ${token} with token ${commentToken}`, nextTokenMatch.index);
|
||||
}
|
||||
else {
|
||||
tokensIndex = commentPair[1].end;
|
||||
(_d = lastParsedToken.comments) !== null && _d !== void 0 ? _d : (lastParsedToken.comments = []);
|
||||
lastParsedToken.comments.push(commentPair);
|
||||
continue findNextToken;
|
||||
}
|
||||
}
|
||||
const { closeRegExp, onError: onCommentError, onParse: onCommentParse, } = preparedComments[commentKey];
|
||||
tokensIndex = nextTokenRegExp.lastIndex;
|
||||
const openToken = {
|
||||
start: nextTokenMatch.index,
|
||||
end: tokensIndex,
|
||||
match: nextTokenMatch,
|
||||
token: commentToken,
|
||||
};
|
||||
closeRegExp.lastIndex = tokensIndex;
|
||||
const closeMatch = closeRegExp.exec(source);
|
||||
if (closeMatch === null) {
|
||||
onCommentError === null || onCommentError === void 0 ? void 0 : onCommentError(context, source, openToken);
|
||||
onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
|
||||
return;
|
||||
}
|
||||
tokensIndex = closeRegExp.lastIndex;
|
||||
const closeToken = {
|
||||
start: closeMatch.index,
|
||||
end: tokensIndex,
|
||||
match: closeMatch,
|
||||
token: closeMatch[0],
|
||||
};
|
||||
(_e = lastParsedToken.comments) !== null && _e !== void 0 ? _e : (lastParsedToken.comments = []);
|
||||
lastParsedToken.comments.push([openToken, closeToken]);
|
||||
onCommentParse === null || onCommentParse === void 0 ? void 0 : onCommentParse(context, source, openToken, closeToken);
|
||||
continue findNextToken;
|
||||
}
|
||||
onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find next part of statement ${token} or comments by regexp ${nextTokenRegExp}`, tokensIndex);
|
||||
tokensIndex = nextTokenRegExp.lastIndex;
|
||||
}
|
||||
if (tokensIndex >= source.length) {
|
||||
if (parsedComments === undefined) {
|
||||
parsedComments = {};
|
||||
for (const commentPair of lastParsedToken.comments || emptyComments) {
|
||||
parsedComments[commentPair[0].start] = commentPair;
|
||||
}
|
||||
}
|
||||
delete lastParsedToken.comments;
|
||||
const maybeIndex = onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
|
||||
                        if (maybeIndex !== undefined) {
                            index = maybeIndex;
                        }
                        continue findNextStatement;
                    }
                }
                const maybeIndex = onParse === null || onParse === void 0 ? void 0 : onParse(context, source, ...parsedTokens);
                if (maybeIndex !== undefined) {
                    index = maybeIndex;
                }
                continue findNextStatement;
            }
            for (const key of commentsKeys) {
                const token = (_f = nextStatementMatch.groups) === null || _f === void 0 ? void 0 : _f[key];
                if (token === undefined) {
                    continue;
                }
                if (parsedComments !== undefined) {
                    const commentPair = parsedComments[nextStatementMatch.index];
                    if (commentPair === undefined) {
                        onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find already parsed comment with token ${token}`, nextStatementMatch.index);
                    }
                    else {
                        index = commentPair[1].end;
                        continue findNextStatement;
                    }
                }
                const { closeRegExp, onError, onParse } = preparedComments[key];
                index = nextStatementRegExp.lastIndex;
                const openToken = {
                    start: nextStatementMatch.index,
                    end: index,
                    match: nextStatementMatch,
                    token,
                };
                closeRegExp.lastIndex = index;
                const closeMatch = closeRegExp.exec(source);
                if (closeMatch === null) {
                    onError === null || onError === void 0 ? void 0 : onError(context, source, openToken);
                    return;
                }
                index = closeRegExp.lastIndex;
                const closeToken = {
                    start: closeMatch.index,
                    end: index,
                    match: closeMatch,
                    token: closeMatch[0],
                };
                onParse === null || onParse === void 0 ? void 0 : onParse(context, source, openToken, closeToken);
                continue findNextStatement;
            }
            onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find statements or comments by regexp ${nextStatementRegExp}`, index);
            index = nextStatementRegExp.lastIndex;
        }
    };
    return parse;
};
/**
 * Empty comments array to skip the `for-of` loop.
 */
const emptyComments = [];
6  Frontend-Learner/node_modules/parse-statements/index.d.ts generated vendored Normal file
@@ -0,0 +1,6 @@
import type { Options, Parse } from './types';
/**
 * Creates a parse function from comments and statements.
 */
export declare const createParseFunction: <Context>(options: Options<Context>) => Parse<Context>;
export type { Comment, CommentPair, OnCommentError, OnCommentParse, OnGlobalError, OnParse, Options, Parse, ParsedToken, Statement, } from './types';
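The parser behind `createParseFunction` dispatches on which named capturing group matched in a single combined regexp (see `nextStatementMatch.groups?.[key]` in `index.js` below). A minimal standalone sketch of that dispatch technique, not using this package's API; the keys and patterns are hypothetical examples:

```javascript
// Combine one pattern per key into a single global regexp with named
// capturing groups, then dispatch on whichever group actually matched.
const patterns = {
  lineComment: '\\/\\/',
  importKeyword: '\\bimport\\b',
};

const combined = new RegExp(
  Object.entries(patterns)
    .map(([key, regExpSource]) => `(?<${key}>${regExpSource})`)
    .join('|'),
  'g',
);

const source = 'import x; // comment\nimport y;';
const found = [];

let match;
while ((match = combined.exec(source)) !== null) {
  // Exactly one named group is defined per match; its key tells us
  // which kind of token was found at match.index.
  for (const key of Object.keys(patterns)) {
    if (match.groups?.[key] !== undefined) {
      found.push([key, match.index]);
      break;
    }
  }
}
// found: [['importKeyword', 0], ['lineComment', 10], ['importKeyword', 21]]
```

This is why the statement and comment keys must be valid regexp group names: each alternative in the combined pattern is wrapped in its own `(?<key>…)` group.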
191  Frontend-Learner/node_modules/parse-statements/index.js generated vendored Normal file
@@ -0,0 +1,191 @@
import { getPreparedOptions } from './getPreparedOptions.js';
/**
 * Creates a parse function from comments and statements.
 */
export const createParseFunction = (options) => {
    var { commentsKeys, nextStatementRegExp, onError: onGlobalError, preparedComments, preparedStatements, statementsKeys, } = getPreparedOptions(options);
    const parse = (context, source) => {
        var _a, _b, _c, _d, _e, _f;
        var index = 0;
        var parsedComments;
        var previousIndex;
        findNextStatement: while (index < source.length) {
            if (index === previousIndex) {
                index += 1;
                continue findNextStatement;
            }
            previousIndex = index;
            nextStatementRegExp.lastIndex = index;
            const nextStatementMatch = nextStatementRegExp.exec(source);
            if (nextStatementMatch === null) {
                return;
            }
            for (const key of statementsKeys) {
                const token = (_a = nextStatementMatch.groups) === null || _a === void 0 ? void 0 : _a[key];
                if (token === undefined) {
                    continue;
                }
                const parsedTokens = [];
                const { onError, onParse, tokens } = preparedStatements[key];
                index = nextStatementRegExp.lastIndex;
                let lastParsedToken = {
                    start: nextStatementMatch.index,
                    end: index,
                    match: nextStatementMatch,
                    token,
                };
                parsedTokens.push(lastParsedToken);
                for (const { nextTokenRegExp, nextTokenKey } of tokens) {
                    let previousTokensIndex;
                    let tokensIndex = index;
                    findNextToken: while (tokensIndex < source.length) {
                        if (tokensIndex === previousTokensIndex) {
                            tokensIndex += 1;
                            continue findNextToken;
                        }
                        previousTokensIndex = tokensIndex;
                        nextTokenRegExp.lastIndex = tokensIndex;
                        const nextTokenMatch = nextTokenRegExp.exec(source);
                        if (nextTokenMatch === null) {
                            if (parsedComments === undefined) {
                                parsedComments = {};
                                for (const commentPair of lastParsedToken.comments || emptyComments) {
                                    parsedComments[commentPair[0].start] = commentPair;
                                }
                            }
                            delete lastParsedToken.comments;
                            const maybeIndex = onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
                            if (maybeIndex !== undefined) {
                                index = maybeIndex;
                            }
                            continue findNextStatement;
                        }
                        const nextToken = (_b = nextTokenMatch.groups) === null || _b === void 0 ? void 0 : _b[nextTokenKey];
                        if (nextToken !== undefined) {
                            index = nextTokenRegExp.lastIndex;
                            lastParsedToken = {
                                start: nextTokenMatch.index,
                                end: index,
                                match: nextTokenMatch,
                                token: nextToken,
                            };
                            parsedTokens.push(lastParsedToken);
                            break findNextToken;
                        }
                        for (const commentKey of commentsKeys) {
                            const commentToken = (_c = nextTokenMatch.groups) === null || _c === void 0 ? void 0 : _c[commentKey];
                            if (commentToken === undefined) {
                                continue;
                            }
                            if (parsedComments !== undefined) {
                                const commentPair = parsedComments[nextTokenMatch.index];
                                if (commentPair === undefined) {
                                    onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find already parsed comment in statement ${token} with token ${commentToken}`, nextTokenMatch.index);
                                }
                                else {
                                    tokensIndex = commentPair[1].end;
                                    (_d = lastParsedToken.comments) !== null && _d !== void 0 ? _d : (lastParsedToken.comments = []);
                                    lastParsedToken.comments.push(commentPair);
                                    continue findNextToken;
                                }
                            }
                            const { closeRegExp, onError: onCommentError, onParse: onCommentParse, } = preparedComments[commentKey];
                            tokensIndex = nextTokenRegExp.lastIndex;
                            const openToken = {
                                start: nextTokenMatch.index,
                                end: tokensIndex,
                                match: nextTokenMatch,
                                token: commentToken,
                            };
                            closeRegExp.lastIndex = tokensIndex;
                            const closeMatch = closeRegExp.exec(source);
                            if (closeMatch === null) {
                                onCommentError === null || onCommentError === void 0 ? void 0 : onCommentError(context, source, openToken);
                                onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
                                return;
                            }
                            tokensIndex = closeRegExp.lastIndex;
                            const closeToken = {
                                start: closeMatch.index,
                                end: tokensIndex,
                                match: closeMatch,
                                token: closeMatch[0],
                            };
                            (_e = lastParsedToken.comments) !== null && _e !== void 0 ? _e : (lastParsedToken.comments = []);
                            lastParsedToken.comments.push([openToken, closeToken]);
                            onCommentParse === null || onCommentParse === void 0 ? void 0 : onCommentParse(context, source, openToken, closeToken);
                            continue findNextToken;
                        }
                        onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find next part of statement ${token} or comments by regexp ${nextTokenRegExp}`, tokensIndex);
                        tokensIndex = nextTokenRegExp.lastIndex;
                    }
                    if (tokensIndex >= source.length) {
                        if (parsedComments === undefined) {
                            parsedComments = {};
                            for (const commentPair of lastParsedToken.comments || emptyComments) {
                                parsedComments[commentPair[0].start] = commentPair;
                            }
                        }
                        delete lastParsedToken.comments;
                        const maybeIndex = onError === null || onError === void 0 ? void 0 : onError(context, source, ...parsedTokens);
                        if (maybeIndex !== undefined) {
                            index = maybeIndex;
                        }
                        continue findNextStatement;
                    }
                }
                const maybeIndex = onParse === null || onParse === void 0 ? void 0 : onParse(context, source, ...parsedTokens);
                if (maybeIndex !== undefined) {
                    index = maybeIndex;
                }
                continue findNextStatement;
            }
            for (const key of commentsKeys) {
                const token = (_f = nextStatementMatch.groups) === null || _f === void 0 ? void 0 : _f[key];
                if (token === undefined) {
                    continue;
                }
                if (parsedComments !== undefined) {
                    const commentPair = parsedComments[nextStatementMatch.index];
                    if (commentPair === undefined) {
                        onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find already parsed comment with token ${token}`, nextStatementMatch.index);
                    }
                    else {
                        index = commentPair[1].end;
                        continue findNextStatement;
                    }
                }
                const { closeRegExp, onError, onParse } = preparedComments[key];
                index = nextStatementRegExp.lastIndex;
                const openToken = {
                    start: nextStatementMatch.index,
                    end: index,
                    match: nextStatementMatch,
                    token,
                };
                closeRegExp.lastIndex = index;
                const closeMatch = closeRegExp.exec(source);
                if (closeMatch === null) {
                    onError === null || onError === void 0 ? void 0 : onError(context, source, openToken);
                    return;
                }
                index = closeRegExp.lastIndex;
                const closeToken = {
                    start: closeMatch.index,
                    end: index,
                    match: closeMatch,
                    token: closeMatch[0],
                };
                onParse === null || onParse === void 0 ? void 0 : onParse(context, source, openToken, closeToken);
                continue findNextStatement;
            }
            onGlobalError === null || onGlobalError === void 0 ? void 0 : onGlobalError(context, source, `Cannot find statements or comments by regexp ${nextStatementRegExp}`, index);
            index = nextStatementRegExp.lastIndex;
        }
    };
    return parse;
};
/**
 * Empty comments array to skip the `for-of` loop.
 */
const emptyComments = [];
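A detail worth noting in the listing above: before every `exec` call, the code assigns the regexp's `lastIndex` (for example `closeRegExp.lastIndex = index;`), because a regexp with the `g` flag resumes matching from `lastIndex`. A small standalone illustration of that resume behavior:

```javascript
// A global regexp resumes from lastIndex; assigning it manually lets a
// parser re-scan from an arbitrary position, as the listing above does
// before each exec call.
const closeRegExp = /\*\//g;
const source = '/* a */ code /* b */';

closeRegExp.lastIndex = 8; // skip past the first comment's `*/` at index 5
const closeMatch = closeRegExp.exec(source);
// closeMatch.index is 18: the second `*/`, not the first
```

This is also why the parser stores `nextStatementRegExp.lastIndex` back into `index` after a match: the updated `lastIndex` marks where scanning should continue.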
50  Frontend-Learner/node_modules/parse-statements/package.json generated vendored Normal file
@@ -0,0 +1,50 @@
{
  "name": "parse-statements",
  "version": "1.0.11",
  "description": "Fast and easy parser of statements in source code in any language ✂️",
  "author": "uid11",
  "bugs": "https://github.com/joomcode/parse-statements/issues",
  "devDependencies": {
    "prettier": ">=3.4",
    "typescript": ">=5.7"
  },
  "exports": {
    ".": {
      "import": "./index.js",
      "require": "./index.cjs",
      "types": "./index.d.ts"
    }
  },
  "files": [
    "getPreparedOptions.cjs",
    "getPreparedOptions.js",
    "index.cjs",
    "index.js",
    "index.d.ts",
    "types.d.ts"
  ],
  "homepage": "https://github.com/joomcode/parse-statements#readme",
  "keywords": [
    "parse",
    "parser",
    "source",
    "statement"
  ],
  "license": "MIT",
  "packageManager": "npm@10",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/joomcode/parse-statements.git"
  },
  "scripts": {
    "prettier": "prettier --cache --cache-strategy=content --write .",
    "prebuild": "rm -f *.cjs *.js *.d.ts",
    "build": "tsc",
    "postbuild": "node ./convertEsmToCjs.js",
    "test": "export _START=$(date +%s%3N) && npm run prettier && npm run build && node ./index.spec.js",
    "prepublishOnly": "npm test"
  },
  "sideEffects": false,
  "type": "module",
  "types": "./index.d.ts"
}
206  Frontend-Learner/node_modules/parse-statements/types.d.ts generated vendored Normal file
@@ -0,0 +1,206 @@
/**
 * Description of comment as the callback handlers and open and close tokens.
 */
export type Comment<Context> = Readonly<{
    /**
     * An optional callback handler for comment parsing errors.
     */
    onError?: OnCommentError<Context>;
    /**
     * An optional callback handler of comment.
     */
    onParse?: OnCommentParse<Context>;
    /**
     * Pair of the comment open and close raw tokens.
     */
    tokens: CommentPair<string>;
}>;
/**
 * Pair of the comment open and close tokens (raw or parsed).
 */
export type CommentPair<Token = ParsedToken> = readonly [open: Token, close: Token];
/**
 * Key of regexp (name of named capturing groups).
 */
export type Key = string;
/**
 * Returns a copy of the object type with mutable properties.
 * `Mutable<{readonly foo: string}>` = `{foo: string}`.
 */
export type Mutable<Type> = {
    -readonly [Key in keyof Type]: Type[Key];
};
/**
 * `onError` callback handler for error on comment parsing.
 */
export type OnCommentError<Context> = Callback<Context, [open: ParsedToken]>;
/**
 * `onParse` callback handler of comment.
 */
export type OnCommentParse<Context> = Callback<Context, CommentPair>;
/**
 * Global `onError` callback handler for error on parsing.
 */
export type OnGlobalError<Context> = Callback<Context, [message: string, index: number]>;
/**
 * `onParse` callback handler of statement with concrete length (number of tokens).
 */
export type OnParse<Context = any, Length extends keyof AllLength | 0 = 0> = Callback<Context, Length extends keyof AllLength ? [...AllLength[Length], ParsedToken] : ParsedTokens, void | number>;
/**
 * Options of `createParseFunction` function.
 */
export type Options<Context> = Readonly<{
    /**
     * An optional array of comments as token pairs with optional callbacks.
     */
    comments?: readonly Comment<Context>[];
    /**
     * An optional callback for global parsing errors.
     */
    onError?: OnGlobalError<Context>;
    /**
     * An optional array of statements as a non-empty array of tokens with optional callbacks.
     */
    statements?: readonly Statement<Context>[];
}>;
/**
 * Parse function.
 */
export type Parse<Context> = Callback<Context, []>;
/**
 * The result of parsing the token.
 */
export type ParsedToken = Readonly<{
    /**
     * Index of token start (in source code).
     */
    start: number;
    /**
     * Index of token end (in source code).
     */
    end: number;
    /**
     * The result of calling the `exec` method in which this token was found.
     */
    match: RegExpExecArray;
    /**
     * The found token as a substring of the source code.
     */
    token: string;
}>;
/**
 * The result of parsing the statement.
 */
export type ParsedTokens = [...ParsedTokenWithComments[], ParsedToken];
/**
 * The result of parsing a statement token with parsed comment tokens
 * in the code between this token and the next token of statement.
 */
export type ParsedTokenWithComments = ParsedToken & {
    readonly comments?: readonly CommentPair[];
};
/**
 * Internal prepared description of comment.
 */
export type PreparedComment<Context> = Readonly<{
    closeRegExp: RegExp;
    onError: Comment<Context>['onError'];
    onParse: Comment<Context>['onParse'];
}>;
/**
 * Internal prepared options of parse function.
 */
export type PreparedOptions<Context> = Readonly<{
    commentsKeys: readonly Key[];
    nextStatementRegExp: RegExp;
    onError: Options<Context>['onError'];
    preparedComments: Readonly<Record<Key, PreparedComment<Context>>>;
    preparedStatements: Readonly<Record<Key, PreparedStatement<Context>>>;
    statementsKeys: readonly Key[];
}>;
/**
 * Internal prepared description of statement.
 */
export type PreparedStatement<Context> = Readonly<{
    onError: Statement<Context>['onError'];
    onParse: Statement<Context>['onParse'];
    tokens: readonly PreparedToken[];
}>;
/**
 * Internal prepared description of token.
 */
export type PreparedToken = Readonly<{
    nextTokenKey: Key;
    nextTokenRegExp: RegExp;
}>;
/**
 * Description of statement as the callback handlers and a sequence of tokens.
 */
export type Statement<Context> = Readonly<{
    /**
     * If `true`, then we parse comments inside the statement (between its parts).
     */
    canIncludeComments: boolean;
    /**
     * An optional callback handler for statement parsing errors.
     */
    onError?: OnParse<Context>;
    /**
     * An optional callback handler of statement.
     */
    onParse?: OnParse<Context>;
    /**
     * Non-empty array of statement raw tokens.
     */
    tokens: readonly [string, ...string[]];
    /**
     * If `true`, then the statement's first token is searched before the comment tokens, otherwise after.
     */
    shouldSearchBeforeComments: boolean;
}>;
/**
 * Pair of the token and its regexp key.
 */
export type TokenWithKey = readonly [key: Key, token: string];
/**
 * Supported number of tokens in statements.
 */
type AllLength<P = ParsedTokenWithComments> = {
    1: [];
    2: [P];
    3: [P, P];
    4: [P, P, P];
    5: [P, P, P, P];
    6: [P, P, P, P, P];
    7: [P, P, P, P, P, P];
    8: [P, P, P, P, P, P, P];
    9: [P, P, P, P, P, P, P, P];
    10: [P, P, P, P, P, P, P, P, P];
    11: [P, P, P, P, P, P, P, P, P, P];
    12: [P, P, P, P, P, P, P, P, P, P, P];
    13: [P, P, P, P, P, P, P, P, P, P, P, P];
    14: [P, P, P, P, P, P, P, P, P, P, P, P, P];
    15: [P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    16: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    17: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    18: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    19: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    20: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    21: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    22: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    23: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    24: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    25: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    26: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    27: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    28: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    29: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    30: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    31: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
    32: [P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P, P];
};
/**
 * A callback handler called on successful parsing of a statement or on an error during parsing.
 */
type Callback<Context, Arguments extends readonly unknown[], Return = void> = (this: void, context: Context, source: string, ...args: Arguments) => Return;
export {};