Please check the errata for any errors or issues reported since publication.
This document is also available in these non-normative formats: XML.
Copyright © 2000 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and document use rules apply.
XML is a versatile markup language, capable of labeling the information content of diverse data sources, including structured and semi-structured documents, relational databases, and object repositories. A query language that uses the structure of XML intelligently can express queries across all these kinds of data, whether physically stored in XML or viewed as XML via middleware. This specification describes a query language called XQuery, which is designed to be broadly applicable across many types of XML data sources.
A list of changes made since XQuery 3.1 can be found in J Change Log.
This is a draft prepared by the QT4CG (officially registered in W3C as the XSLT Extensions Community Group). Comments are invited.
The publications of this community group are dedicated to our co-chair, Michael Sperberg-McQueen (1954–2024).
This section describes how an XQuery 4.0 text is tokenized prior to parsing.
All keywords are case sensitive. Keywords are not reserved—that is, any lexical QName may duplicate a keyword except as noted in A.4 Reserved Function Names.
Tokenizing an input string is governed by the following rules:
[Definition: An ordinary production rule is a production rule in A.1 EBNF that is not annotated ws:explicit.]
[Definition: A literal terminal is a token appearing as a string in quotation marks on the right-hand side of an ordinary production rule.]
Note:
Strings that appear in other production rules do not qualify. For example, "]]>" is not a literal terminal, because it appears only in the rule CDataSection, which is not an ordinary production rule; similarly BracedURILiteral does not qualify because it appears only in URIQualifiedName, and "0x" does not qualify because it appears only in HexIntegerLiteral.
The literal terminals in XQuery 4.0 are the symbols "!", "!=", "#", "$", "%", "(", ")", "*", "+", ",", "...", "/", "//", ":", "::", ":=", ";", "<", "<<", "<=", "=", "=!>", "=>", "=?>", ">", ">=", ">>", "?", "??", "?[", "@", "[", "]", "{", "|", "||", "}", "×", "÷", "-", "->", together with the keywords: allowing, ancestor, ancestor-or-self, and, array, as, ascending, at, attribute, base-uri, boundary-space, by, case, cast, castable, catch, child, collation, comment, construction, context, copy-namespaces, count, decimal-format, decimal-separator, declare, default, descendant, descendant-or-self, descending, digit, div, document, document-node, element, else, empty, empty-sequence, encoding, end, enum, eq, every, except, exponent-separator, external, false, fixed, fn, following, following-or-self, following-sibling, following-sibling-or-self, for, function, ge, greatest, group, grouping-separator, gt, idiv, if, import, in, infinity, inherit, instance, intersect, is, item, items, key, keys, lax, le, least, let, lt, map, member, minus-sign, mod, module, namespace, namespace-node, NaN, ne, next, no-inherit, no-preserve, node, of, only, option, or, order, ordered, ordering, otherwise, pairs, parent, pattern-separator, per-mille, percent, preceding, preceding-or-self, preceding-sibling-or-self, preserve, previous, processing-instruction, record, return, satisfies, schema, schema-attribute, schema-element, self, sliding, some, stable, start, strict, strip, switch, text, then, to, treat, true, try, tumbling, type, typeswitch, union, unordered, validate, value, values, variable, version, when, where, while, window, xquery, zero-digit.
[Definition: A variable terminal is an instance of a production rule that is not itself an ordinary production rule but that is named (directly) on the right-hand side of an ordinary production rule.]
The variable terminals in XQuery 4.0 are: BinaryIntegerLiteral, CDataSection, DecimalLiteral, DirCommentConstructor, DirElemConstructor, DirPIConstructor, DoubleLiteral, HexIntegerLiteral, IntegerLiteral, NCName, Pragma, QName, StringConstructor, StringLiteral, StringTemplate, URIQualifiedName, Wildcard.
[Definition: A complex terminal is a variable terminal whose production rule references, directly or indirectly, an ordinary production rule.]
The complex terminals in XQuery 4.0 are: DirElemConstructor, Pragma, StringConstructor, StringTemplate.
Note:
The significance of complex terminals is that at one level, a complex terminal is treated as a single token, but internally it may contain arbitrary expressions that must be parsed using the full EBNF grammar.
Tokenization is the process of splitting the supplied input string into a sequence of terminals, where each terminal is either a literal terminal or a variable terminal (which may itself be a complex terminal). Tokenization is done by repeating the following steps:
Starting at the current position, skip any whitespace and comments.
If the current position is not the end of the input, then return the longest literal terminal or variable terminal that can be matched starting at the current position, regardless of whether this terminal is valid at this point in the grammar. If no such terminal can be identified starting at the current position, or if the terminal that is identified is not a valid continuation of the grammar rules, then a syntax error is reported.
Note:
Here are some examples showing the effect of the longest token rule:
The expression map{a:b} is a syntax error. Although there is a tokenization of this string that satisfies the grammar (by treating a and b as separate expressions), this tokenization does not satisfy the longest token rule, which requires that a:b is interpreted as a single QName.
The expression 10 div3 is a syntax error. The longest token rule requires that this be interpreted as two tokens ("10" and "div3") even though it would be a valid expression if treated as three tokens ("10", "div", and "3").
The expression $x-$y is a syntax error. This is interpreted as four tokens ("$", "x-", "$", and "y").
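The longest-token rule can be sketched in a few lines of Python. This is an illustrative toy, not a conforming tokenizer: it covers only a handful of terminal classes (ASCII-only names, integer literals, and a few symbols), but it reproduces the three outcomes above.

```python
import re

# Illustrative subset of the lexical grammar. Real XQuery names admit a far
# larger character repertoire; this sketch is ASCII-only by assumption.
TOKEN_PATTERNS = [
    re.compile(r"[A-Za-z_][A-Za-z0-9_.-]*:[A-Za-z_][A-Za-z0-9_.-]*"),  # QName
    re.compile(r"[A-Za-z_][A-Za-z0-9_.-]*"),   # NCName (may contain "-")
    re.compile(r"[0-9]+"),                      # IntegerLiteral
    re.compile(r"\{|\}|\$|-"),                  # a few literal terminals
]

def tokenize(text):
    """Return the longest match at each position, ignoring grammatical context."""
    pos, tokens = 0, []
    while pos < len(text):
        if text[pos].isspace():          # skip whitespace between terminals
            pos += 1
            continue
        best = ""
        for pattern in TOKEN_PATTERNS:   # longest-token rule: keep the longest
            m = pattern.match(text, pos)
            if m and len(m.group()) > len(best):
                best = m.group()
        if not best:
            raise SyntaxError(f"no token at position {pos}")
        tokens.append(best)
        pos += len(best)
    return tokens
```

On the examples above, the sketch splits 10 div3 into the two tokens 10 and div3, treats a:b inside map{a:b} as a single QName, and splits $x-$y into four tokens; whether those token sequences satisfy the grammar is a separate, later question.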
Note:
The lexical production rules for variable terminals have been designed so that there is minimal need for backtracking. For example, if the next terminal starts with "0x", then it can only be either a HexIntegerLiteral or an error; if it starts with "`" (and not with "```") then it can only be a StringTemplate or an error. Direct element constructors in XQuery, however, need special treatment, described below.
This convention, together with the rules for whitespace separation of tokens (see A.3.2 Terminal Delimitation) means that the longest-token rule does not normally result in any need for backtracking. For example, suppose that a variable terminal has been identified as a StringTemplate by examining its first few characters. If the construct turns out not to be a valid StringTemplate, an error can be reported without first considering whether there is some shorter token that might be returned instead.
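This prefix-dispatch idea can be sketched as a function that commits to a terminal class from the leading characters alone. Only the two prefixes mentioned in the note above are handled; everything else is out of scope for the sketch.

```python
def terminal_class(text):
    """Commit to a variable-terminal class from the leading characters.

    Once committed, a failure to complete the terminal is a syntax error;
    no shorter alternative token is reconsidered. Only the prefixes
    discussed in the text are covered here.
    """
    if text.startswith("0x"):
        return "HexIntegerLiteral"   # a HexIntegerLiteral, or an error
    if text.startswith("`") and not text.startswith("```"):
        return "StringTemplate"      # a StringTemplate, or an error
    return None                      # other prefixes: not handled in this sketch
```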
Tokenization requires special care when the current character is U+003C (LESS-THAN SIGN, <):
If the following character is U+003D (EQUALS SIGN, =) then the token can be identified unambiguously as the operator <=.
If the following character is U+003C (LESS-THAN SIGN, <) then the token can be identified unambiguously as the operator <<.
If the following character is U+0021 (EXCLAMATION MARK, !) then the token can be identified unambiguously as being a DirCommentConstructor (a CDataSection, which also starts with <!, can appear only within a direct element constructor, not as a free-standing token).
If the following character is U+003F (QUESTION MARK, ?), then the token is identified as a DirPIConstructor if and only if a match for the relevant production ("<?" PITarget (S DirPIContents)? "?>") is found. If there is no such match, then the string "<?" is identified as a less-than operator followed by a lookup operator.
If the following character is a NameStartChar then the token is identified as a DirElemConstructor if and only if a match for the leading part of a DirElemConstructor is found: specifically if a substring starting at the U+003C (LESS-THAN SIGN, <) character matches one of the following regular expressions:
^<\i\c*\s*> (as in <element>...)
^<\i\c*\s*/> (as in <element/>)
^<\i\c*\s+\i\c*\s*= (as in <element att=...)
If the content matches one of these regular expressions but further analysis shows that the subsequent content does not satisfy the DirElemConstructor production, then a static error is reported.
If the content does not match any of these regular expressions then the token is identified as the less-than operator <.
If the following character is any other character then the token can be identified unambiguously as the less-than operator <.
This analysis is done without regard to the syntactic context of the U+003C (LESS-THAN SIGN, <) character. However, a tokenizer may avoid looking for a DirPIConstructor or DirElemConstructor if it knows that such a constructor cannot appear in the current syntactic context.
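The three regular expressions above use the XML Schema escapes \i (name start character) and \c (name character). A rough Python approximation might look as follows; the ASCII-only character classes are a simplifying assumption, since real XML names allow a much larger repertoire.

```python
import re

# ASCII-only stand-ins for the XSD escapes \i (NameStartChar) and \c (NameChar).
I = r"[A-Za-z_]"        # simplified \i
C = r"[A-Za-z0-9_.:-]"  # simplified \c

# The three leading patterns that commit the tokenizer to a DirElemConstructor.
DIR_ELEM_START = [
    re.compile(rf"^<{I}{C}*\s*>"),            # as in <element>...
    re.compile(rf"^<{I}{C}*\s*/>"),           # as in <element/>
    re.compile(rf"^<{I}{C}*\s+{I}{C}*\s*="),  # as in <element att=...
]

def starts_dir_elem(text):
    """True if text begins like a direct element constructor."""
    return any(p.match(text) for p in DIR_ELEM_START)
```

If none of the patterns matches, the "<" is the less-than operator; if one matches but the rest of the input does not complete a DirElemConstructor, a static error results rather than a retry with a shorter token.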
Note:
The rules here are described much more precisely than in XQuery 3.1, and the results in edge cases might be incompatible with some XQuery 3.1 processors.
Note:
Potential confusion can be avoided simply by adding whitespace after any less-than operator.
Tokenization unambiguously identifies the boundaries of the terminals in the input, and this can be achieved without backtracking or lookahead. However, tokenization does not unambiguously classify each terminal. For example, it might identify the string "div" as a terminal, but it does not resolve whether this is the operator symbol div, or an NCName or QName used as a node test or as a variable or function name. Classification of terminals generally requires information about the grammatical context, and in some cases requires lookahead.
Note:
Operationally, classification of terminals may be done either in the tokenizer or the parser, or in some combination of the two. For example, according to the EBNF, the expression "parent::x" is made up of three tokens, "parent", "::", and "x". The name "parent" can be classified as an axis name as soon as the following token "::" is recognized, and this might be done either in the tokenizer or in the parser. (Note that whitespace and comments are allowed both before and after "::".)
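One hypothetical way to realize this kind of context-dependent classification (not mandated by the specification) is a one-token-lookahead pass over the terminal sequence:

```python
def classify(tokens):
    """Classify name tokens by one-token lookahead; a sketch only.

    A name followed by "::" is an axis name; otherwise it is left as a
    plain name. The axis list here is abbreviated for illustration.
    """
    AXES = {"parent", "child", "ancestor", "descendant", "self"}
    result = []
    for i, tok in enumerate(tokens):
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if tok in AXES and nxt == "::":
            result.append(("axis", tok))
        elif tok == "::":
            result.append(("separator", tok))
        else:
            result.append(("name", tok))
    return result
```

On the tokens of "parent::x", the name "parent" is classified as an axis; standing alone, the same terminal would remain an ordinary name.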
In the case of a complex terminal, identifying the end of the complex terminal typically involves invoking the parser to process any embedded expressions. Tokenization, as described here, is therefore a recursive process. But other implementations are possible.
Note:
Previous versions of this specification included the statement: When tokenizing, the longest possible match that is consistent with the EBNF is used.
Different processors are known to have interpreted this in different ways. One interpretation, for example, was that the expression 10 div-3 should be split into four tokens (10, div, -, 3) on the grounds that any other tokenization would give a result that was inconsistent with the EBNF grammar. Other processors report a syntax error on this example.
This rule has therefore been rewritten in version 4.0. Tokenization is now entirely insensitive to the grammatical context; div-3 is recognized as a single token even though this results in a syntax error. For some implementations this may mean that expressions that were accepted in earlier releases are no longer accepted in 4.0.
A more subtle example is: (. <?b ) cast as xs:integer?> 0) in which <?b ) cast as xs:integer?> is recognized as a single token (a direct processing instruction constructor) even though such a token cannot validly appear in this grammatical context.
The operator symbols <, <=, >, >=, <<, >>, =>, ->, =!>, and =?> have alternative representations using the characters U+FF1C (FULLWIDTH LESS-THAN SIGN, ＜) and U+FF1E (FULLWIDTH GREATER-THAN SIGN, ＞) in place of U+003C (LESS-THAN SIGN, <) and U+003E (GREATER-THAN SIGN, >). The alternative tokens are respectively ＜, ＜=, ＞, ＞=, ＜＜, ＞＞, =＞, -＞, =!＞, and =?＞. In order to avoid visual confusion these alternatives are not shown explicitly in the grammar.
This option is provided to improve the readability of XPath expressions embedded in XML-based host languages such as XSLT; it enables these operators to be depicted using characters that do not require escaping as XML entities or character references.
This rule does not apply to the < and > symbols used to delimit node constructor expressions, which (because they mimic XML syntax) must use U+003C (LESS-THAN SIGN, <) and U+003E (GREATER-THAN SIGN, >) respectively.
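A processor could support these alternative representations by mapping the full-width characters onto their ASCII counterparts early in tokenization. The sketch below shows the character mapping alone; as an assumption for brevity, it ignores the exception noted above for node constructor delimiters, which a real processor must respect.

```python
# U+FF1C FULLWIDTH LESS-THAN SIGN and U+FF1E FULLWIDTH GREATER-THAN SIGN map
# onto their ASCII counterparts when they form operator tokens.
FULLWIDTH = {"\uFF1C": "<", "\uFF1E": ">"}

def normalize_operators(text):
    """Replace full-width angle brackets with ASCII ones. Sketch only:
    this must not be applied inside direct constructor syntax."""
    return "".join(FULLWIDTH.get(ch, ch) for ch in text)
```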