 msgstr ""
 "Project-Id-Version: Python 3.13\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2024-09-03 11:11+0800\n"
+"POT-Creation-Date: 2025-01-20 00:13+0000\n"
 "PO-Revision-Date: 2018-05-23 16:13+0000\n"
 "Last-Translator: Adrian Liaw <adrianliaw2000@gmail.com>\n"
 "Language-Team: Chinese - TAIWAN (https://github.com/python/python-docs-zh-"
@@ -140,84 +140,83 @@ msgstr ""
 
 #: ../../library/tokenize.rst:94
 msgid ""
-"The reconstructed script is returned as a single string. The result is "
-"guaranteed to tokenize back to match the input so that the conversion is "
-"lossless and round-trips are assured. The guarantee applies only to the "
-"token type and token string as the spacing between tokens (column positions) "
-"may change."
+"The result is guaranteed to tokenize back to match the input so that the "
+"conversion is lossless and round-trips are assured. The guarantee applies "
+"only to the token type and token string as the spacing between tokens "
+"(column positions) may change."
 msgstr ""
 
-#: ../../library/tokenize.rst:100
+#: ../../library/tokenize.rst:99
 msgid ""
 "It returns bytes, encoded using the :data:`~token.ENCODING` token, which is "
 "the first token sequence output by :func:`.tokenize`. If there is no "
 "encoding token in the input, it returns a str instead."
 msgstr ""
 
-#: ../../library/tokenize.rst:105
+#: ../../library/tokenize.rst:104
 msgid ""
 ":func:`.tokenize` needs to detect the encoding of source files it tokenizes. "
 "The function it uses to do this is available:"
 msgstr ""
 
-#: ../../library/tokenize.rst:110
+#: ../../library/tokenize.rst:109
 msgid ""
 "The :func:`detect_encoding` function is used to detect the encoding that "
 "should be used to decode a Python source file. It requires one argument, "
 "readline, in the same way as the :func:`.tokenize` generator."
 msgstr ""
 
-#: ../../library/tokenize.rst:114
+#: ../../library/tokenize.rst:113
 msgid ""
 "It will call readline a maximum of twice, and return the encoding used (as a "
 "string) and a list of any lines (not decoded from bytes) it has read in."
 msgstr ""
 
-#: ../../library/tokenize.rst:118
+#: ../../library/tokenize.rst:117
 msgid ""
 "It detects the encoding from the presence of a UTF-8 BOM or an encoding "
 "cookie as specified in :pep:`263`. If both a BOM and a cookie are present, "
 "but disagree, a :exc:`SyntaxError` will be raised. Note that if the BOM is "
 "found, ``'utf-8-sig'`` will be returned as an encoding."
 msgstr ""
 
-#: ../../library/tokenize.rst:123
+#: ../../library/tokenize.rst:122
 msgid ""
 "If no encoding is specified, then the default of ``'utf-8'`` will be "
 "returned."
 msgstr ""
 
-#: ../../library/tokenize.rst:126
+#: ../../library/tokenize.rst:125
 msgid ""
 "Use :func:`.open` to open Python source files: it uses :func:"
 "`detect_encoding` to detect the file encoding."
 msgstr ""
 
-#: ../../library/tokenize.rst:132
+#: ../../library/tokenize.rst:131
 msgid ""
 "Open a file in read only mode using the encoding detected by :func:"
 "`detect_encoding`."
 msgstr ""
 
-#: ../../library/tokenize.rst:139
+#: ../../library/tokenize.rst:138
 msgid ""
 "Raised when either a docstring or expression that may be split over several "
 "lines is not completed anywhere in the file, for example::"
 msgstr ""
 
-#: ../../library/tokenize.rst:142
+#: ../../library/tokenize.rst:141
 msgid ""
 "\"\"\" Beginning of\n"
 "docstring"
 msgstr ""
 "\"\"\" Beginning of\n"
 "docstring"
 
-#: ../../library/tokenize.rst:145
+#: ../../library/tokenize.rst:144
 msgid "or::"
 msgstr "或是: ::"
 
-#: ../../library/tokenize.rst:147
+#: ../../library/tokenize.rst:146
 msgid ""
 "[1,\n"
 " 2,\n"
@@ -227,49 +226,49 @@ msgstr ""
 " 2,\n"
 " 3"
 
-#: ../../library/tokenize.rst:154
+#: ../../library/tokenize.rst:153
 msgid "Command-Line Usage"
 msgstr ""
 
-#: ../../library/tokenize.rst:158
+#: ../../library/tokenize.rst:157
 msgid ""
 "The :mod:`tokenize` module can be executed as a script from the command "
 "line. It is as simple as:"
 msgstr ""
 
-#: ../../library/tokenize.rst:161
+#: ../../library/tokenize.rst:160
 msgid "python -m tokenize [-e] [filename.py]"
 msgstr "python -m tokenize [-e] [filename.py]"
 
-#: ../../library/tokenize.rst:165
+#: ../../library/tokenize.rst:164
 msgid "The following options are accepted:"
 msgstr ""
 
-#: ../../library/tokenize.rst:171
+#: ../../library/tokenize.rst:170
 msgid "show this help message and exit"
 msgstr ""
 
-#: ../../library/tokenize.rst:175
+#: ../../library/tokenize.rst:174
 msgid "display token names using the exact type"
 msgstr ""
 
-#: ../../library/tokenize.rst:177
+#: ../../library/tokenize.rst:176
 msgid ""
 "If :file:`filename.py` is specified its contents are tokenized to stdout. "
 "Otherwise, tokenization is performed on stdin."
 msgstr ""
 
-#: ../../library/tokenize.rst:181
+#: ../../library/tokenize.rst:180
 msgid "Examples"
 msgstr "範例"
 
-#: ../../library/tokenize.rst:183
+#: ../../library/tokenize.rst:182
 msgid ""
 "Example of a script rewriter that transforms float literals into Decimal "
 "objects::"
 msgstr ""
 
-#: ../../library/tokenize.rst:186
+#: ../../library/tokenize.rst:185
 msgid ""
 "from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP\n"
 "from io import BytesIO\n"
@@ -312,11 +311,11 @@ msgid ""
 "    return untokenize(result).decode('utf-8')"
 msgstr ""
 
-#: ../../library/tokenize.rst:225
+#: ../../library/tokenize.rst:224
 msgid "Example of tokenizing from the command line. The script::"
 msgstr ""
 
-#: ../../library/tokenize.rst:227
+#: ../../library/tokenize.rst:226
 msgid ""
 "def say_hello():\n"
 "    print(\"Hello, World!\")\n"
@@ -328,15 +327,15 @@ msgstr ""
 "\n"
 "say_hello()"
 
-#: ../../library/tokenize.rst:232
+#: ../../library/tokenize.rst:231
 msgid ""
 "will be tokenized to the following output where the first column is the "
 "range of the line/column coordinates where the token is found, the second "
 "column is the name of the token, and the final column is the value of the "
 "token (if any)"
 msgstr ""
 
-#: ../../library/tokenize.rst:236
+#: ../../library/tokenize.rst:235
 msgid ""
 "$ python -m tokenize hello.py\n"
 "0,0-0,0: ENCODING 'utf-8'\n"
@@ -382,12 +381,12 @@ msgstr ""
 "4,11-4,12: NEWLINE '\\n'\n"
 "5,0-5,0: ENDMARKER ''"
 
-#: ../../library/tokenize.rst:260
+#: ../../library/tokenize.rst:259
 msgid ""
 "The exact token type names can be displayed using the :option:`-e` option:"
 msgstr ""
 
-#: ../../library/tokenize.rst:262
+#: ../../library/tokenize.rst:261
 msgid ""
 "$ python -m tokenize -e hello.py\n"
 "0,0-0,0: ENCODING 'utf-8'\n"
@@ -433,13 +432,13 @@ msgstr ""
 "4,11-4,12: NEWLINE '\\n'\n"
 "5,0-5,0: ENDMARKER ''"
 
-#: ../../library/tokenize.rst:286
+#: ../../library/tokenize.rst:285
 msgid ""
 "Example of tokenizing a file programmatically, reading unicode strings "
 "instead of bytes with :func:`generate_tokens`::"
 msgstr ""
 
-#: ../../library/tokenize.rst:289
+#: ../../library/tokenize.rst:288
 msgid ""
 "import tokenize\n"
 "\n"
@@ -455,11 +454,11 @@ msgstr ""
 "    for token in tokens:\n"
 "        print(token)"
 
-#: ../../library/tokenize.rst:296
+#: ../../library/tokenize.rst:295
 msgid "Or reading bytes directly with :func:`.tokenize`::"
 msgstr ""
 
-#: ../../library/tokenize.rst:298
+#: ../../library/tokenize.rst:297
 msgid ""
 "import tokenize\n"
 "\n"