dtlib: use IntEnum for token IDs

The way _init_tokens() manipulates globals() defeats static analysis
of the file when it tries to infer a type for the 'tok_id' variable in
assignments like 'tok_id = _T_INCLUDE'.
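
For reference, the kind of pattern being replaced looked roughly like
this (the token names and values here are illustrative, not the exact
dtlib token list):

    def _init_tokens():
        # Assigning module globals dynamically hides these names from
        # static analysis.
        for i, name in enumerate(["INCLUDE", "LINE", "STRING"], 1):
            globals()["_T_" + name] = i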

To make it easier on the analyser, define the token types as an
enum.IntEnum named _T. This means we can write e.g. '_T.INCLUDE'
instead of '_T_INCLUDE', avoiding line length increases in the lexing
code.
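
A minimal sketch of the new shape (the member list is abbreviated, and
the values are assumed to line up with the regex group numbers used by
the lexer):

    import enum

    class _T(enum.IntEnum):
        INCLUDE = 1
        LINE = 2
        STRING = 3
        # ...remaining token types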

While we're here, use '==' and '!=' instead of 'is' and 'is not' when
comparing a tok_id obtained from re.Match.lastindex against a _T.FOO
value.

This is now necessary, since an int returned by lastindex and a _T
member are never the same object. The old comparisons only worked
because CPython caches the small integers from -5 to 256, which is an
implementation detail rather than a language guarantee. Since the ints
come from re.Match.lastindex instead of from a list holding the exact
_T_FOO objects, strictly speaking this code should not have been using
'is' in the first place.
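
As an illustration (a simplified sketch rather than the exact lexer
code; it assumes _token_re's group numbers correspond to the _T
values):

    match = _token_re.match(line)
    tok_id = match.lastindex      # plain int: the matched group number
    if tok_id == _T.INCLUDE:      # IntEnum members compare equal to ints
        ...
    # 'tok_id is _T.INCLUDE' would now always be False, because the int
    # from lastindex is never the same object as the _T member.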

Also explicitly initialize the global _token_re, to make it more
visible to static analysis.
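
Concretely, something along these lines at module scope instead of an
assignment through globals() inside _init_tokens() (the pattern shown
is a stand-in for the real one built from the token table):

    import re

    _token_re = re.compile(r'(/include/)|("[^"]*")')  # illustrative pattern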

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>