
Commit 4f53bba
fix compatibility with unreleased changes to stdlib tokenizer
python/cpython@c4ef489 (not yet in any released version, but it has been backported to every Python branch in git, even 2.7!) changed the behaviour of the stdlib's tokenize module so that it emits a synthetic NEWLINE token even when the input does not end with a newline. This was causing patsy to emit a spurious "mixed-line-endings" warning. Luckily the synthetic token is easy to test for: its token text is the empty string "".
1 parent be8b755 commit 4f53bba
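
The stdlib behaviour described above can be checked directly. A minimal sketch (assuming a Python build that includes the backported change, e.g. any recent 3.x): tokenizing a string with no trailing newline now yields a NEWLINE token whose text is "".

```python
import io
import tokenize

def token_list(code):
    """Tokenize a string of Python source into (type, string) pairs."""
    readline = io.BytesIO(code.encode("utf-8")).readline
    return [(tok.type, tok.string) for tok in tokenize.tokenize(readline)]

# "a + b" has no trailing newline, but the tokenizer still emits a
# NEWLINE token -- and its text is the empty string.
toks = token_list("a + b")
newline_texts = [s for (t, s) in toks if t == tokenize.NEWLINE]
print(newline_texts)  # -> ['']
```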

File tree

1 file changed: patsy/tokens.py (7 additions, 2 deletions)
@@ -32,7 +32,10 @@ def python_tokenize(code):
         if pytype == tokenize.ENDMARKER:
             break
         origin = Origin(code, start, end)
-        assert pytype not in (tokenize.NL, tokenize.NEWLINE)
+        assert pytype != tokenize.NL
+        if pytype == tokenize.NEWLINE:
+            assert string == ""
+            continue
         if pytype == tokenize.ERRORTOKEN:
             raise PatsyError("error tokenizing input "
                              "(maybe an unclosed string?)",
@@ -98,7 +101,9 @@ def pretty_untokenize(typed_tokens):
     brackets = []
     for token_type, token in typed_tokens:
         assert token_type not in (tokenize.INDENT, tokenize.DEDENT,
-                                  tokenize.NEWLINE, tokenize.NL)
+                                  tokenize.NL)
+        if token_type == tokenize.NEWLINE:
+            continue
         if token_type == tokenize.ENDMARKER:
             continue
         if token_type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
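
In isolation, the pattern the patch applies is: skip synthetic NEWLINE tokens (asserting they carry no text) while still rejecting NL, which should never appear in the single logical lines being tokenized. A standalone sketch of a filter in that style (not patsy's actual function, which also tracks origins and errors):

```python
import io
import tokenize

def filtered_tokens(code):
    """Yield (token_type, string) for one logical line of Python source,
    skipping the synthetic NEWLINE that newer tokenizers emit when the
    input lacks a trailing newline."""
    readline = io.StringIO(code).readline
    for tok_type, string, start, end, line in tokenize.generate_tokens(readline):
        if tok_type == tokenize.ENDMARKER:
            break
        # NL (a non-logical line break) should never occur in this input.
        assert tok_type != tokenize.NL
        if tok_type == tokenize.NEWLINE:
            # The synthetic end-of-input NEWLINE has empty token text.
            assert string == ""
            continue
        yield tok_type, string

print([s for (_, s) in filtered_tokens("a + b")])  # -> ['a', '+', 'b']
```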
