.github/workflows/python.yml (6 changes: 3 additions & 3 deletions)
@@ -69,10 +69,10 @@ jobs:
- conda-python-3.12-no-numpy
include:
- name: conda-python-docs
-cache: conda-python-3.10
+cache: conda-python-3.11
image: conda-python-docs
-title: AMD64 Conda Python 3.10 Sphinx & Numpydoc
-python: "3.10"
+title: AMD64 Conda Python 3.11 Sphinx & Numpydoc
+python: "3.11"
- name: conda-python-3.11-nopandas
cache: conda-python-3.11
image: conda-python
docs/source/python/data.rst (2 changes: 1 addition & 1 deletion)
@@ -684,7 +684,7 @@ When using :class:`~.DictionaryArray` with pandas, the analogue is
6 NaN
7 baz
dtype: category
-Categories (3, object): ['foo', 'bar', 'baz']
+Categories (3, str): ['foo', 'bar', 'baz']

.. _data.record_batch:

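The data.rst hunk only updates the expected repr: under pandas' string dtype the categories print as ``str`` instead of ``object``. Below is a minimal sketch of the kind of round trip that section documents, with illustrative values rather than the exact ones in the file:

    import pyarrow as pa

    # Dictionary-encode a string column; the null index becomes a missing
    # value on the pandas side. Input values are made up for illustration.
    indices = pa.array([0, 1, 2, 0, 1, 2, None, 2], type=pa.int8())
    dictionary = pa.array(["foo", "bar", "baz"])
    dict_arr = pa.DictionaryArray.from_arrays(indices, dictionary)

    series = dict_arr.to_pandas()
    print(series.dtype)           # category
    print(series.cat.categories)  # repr shows dtype='str' on pandas 3.x, 'object' before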
docs/source/python/ipc.rst (12 changes: 6 additions & 6 deletions)
@@ -160,12 +160,12 @@ DataFrame output:
>>> with pa.ipc.open_file(buf) as reader:
... df = reader.read_pandas()
>>> df[:5]
-   f0    f1     f2
-0   1   foo   True
-1   2   bar   None
-2   3   baz  False
-3   4  None   True
-4   1   foo   True
+   f0   f1     f2
+0   1  foo   True
+1   2  bar   None
+2   3  baz  False
+3   4  NaN   True
+4   1  foo   True

Efficiently Writing and Reading Arrow Data
------------------------------------------
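For reference, a self-contained sketch of the round trip this doctest exercises. The batch contents below are made up; only the display of the missing string (None vs NaN) depends on the pandas version, which is what the hunk updates.

    import pyarrow as pa

    # Write one record batch with the IPC file format, then read it back
    # into pandas the same way the doctest does.
    batch = pa.record_batch(
        [
            pa.array([1, 2, 3, 4, 1]),
            pa.array(["foo", "bar", "baz", None, "foo"]),
            pa.array([True, None, False, True, True]),
        ],
        names=["f0", "f1", "f2"],
    )

    sink = pa.BufferOutputStream()
    with pa.ipc.new_file(sink, batch.schema) as writer:
        writer.write_batch(batch)
    buf = sink.getvalue()

    with pa.ipc.open_file(buf) as reader:
        df = reader.read_pandas()
    print(df[:5])  # missing strings display as NaN under pandas' str dtype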
docs/source/python/pandas.rst (12 changes: 6 additions & 6 deletions)
@@ -170,7 +170,7 @@ number of possible values.

>>> df = pd.DataFrame({"cat": pd.Categorical(["a", "b", "c", "a", "b", "c"])})
>>> df.cat.dtype.categories
-Index(['a', 'b', 'c'], dtype='object')
+Index(['a', 'b', 'c'], dtype='str')
>>> df
cat
0 a
@@ -182,7 +182,7 @@ number of possible values.
>>> table = pa.Table.from_pandas(df)
>>> table
pyarrow.Table
-cat: dictionary<values=string, indices=int8, ordered=0>
+cat: dictionary<values=large_string, indices=int8, ordered=0>
----
cat: [ -- dictionary:
["a","b","c"] -- indices:
@@ -196,7 +196,7 @@ same categories of the Pandas DataFrame.
>>> column = table[0]
>>> chunk = column.chunk(0)
>>> chunk.dictionary
-<pyarrow.lib.StringArray object at ...>
+<pyarrow.lib.LargeStringArray object at ...>
[
"a",
"b",
@@ -224,7 +224,7 @@ use the ``datetime64[ns]`` type in Pandas and are converted to an Arrow

>>> df = pd.DataFrame({"datetime": pd.date_range("2020-01-01T00:00:00Z", freq="h", periods=3)})
>>> df.dtypes
-datetime    datetime64[ns, UTC]
+datetime    datetime64[us, UTC]
dtype: object
>>> df
datetime
@@ -234,9 +234,9 @@ use the ``datetime64[ns]`` type in Pandas and are converted to an Arrow
>>> table = pa.Table.from_pandas(df)
>>> table
pyarrow.Table
-datetime: timestamp[ns, tz=UTC]
+datetime: timestamp[us, tz=UTC]
----
-datetime: [[2020-01-01 00:00:00.000000000Z,...,2020-01-01 02:00:00.000000000Z]]
+datetime: [[2020-01-01 00:00:00.000000Z,2020-01-01 01:00:00.000000Z,2020-01-01 02:00:00.000000Z]]

In this example the Pandas Timestamp is time zone aware
(``UTC`` in this case), and this information is used to create the Arrow
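Taken together, the pandas.rst hunks track what ``Table.from_pandas`` reports with pandas 3.x defaults: categorical string values arrive as ``large_string`` dictionaries, and tz-aware timestamps may carry microsecond rather than nanosecond resolution. A rough sketch, not tied to the exact doctest inputs:

    import pandas as pd
    import pyarrow as pa

    df = pd.DataFrame(
        {
            "cat": pd.Categorical(["a", "b", "c", "a", "b", "c"]),
            "datetime": pd.date_range("2020-01-01T00:00:00Z", freq="h", periods=6),
        }
    )

    table = pa.Table.from_pandas(df)
    # Expect something like (exact output depends on the pandas/pyarrow versions):
    #   cat: dictionary<values=large_string, indices=int8, ordered=0>
    #   datetime: timestamp[us, tz=UTC]
    print(table.schema)

    # The Arrow dictionary keeps the same categories as the pandas Categorical.
    print(table["cat"].chunk(0).dictionary)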
docs/source/python/parquet.rst (6 changes: 3 additions & 3 deletions)
@@ -238,9 +238,9 @@ concatenate them into a single table. You can read individual row groups with
>>> parquet_file.read_row_group(0)
pyarrow.Table
one: double
-two: string
+two: large_string
three: bool
-__index_level_0__: string
+__index_level_0__: large_string
----
one: [[-1,null,2.5]]
two: [["foo","bar","baz"]]
@@ -352,7 +352,7 @@ and improved performance for columns with many repeated string values.
one: double
two: dictionary<values=string, indices=int32, ordered=0>
three: bool
-__index_level_0__: string
+__index_level_0__: large_string
----
one: [[-1,null,2.5]]
two: [ -- dictionary:
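A hedged sketch of the two reads these parquet.rst hunks update: reading a single row group, and dictionary-encoding a string column on read. The file name and the DataFrame contents are illustrative; ``__index_level_0__`` shows up because the non-default pandas index is preserved as a column.

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    # A small table whose string index is stored as __index_level_0__.
    df = pd.DataFrame(
        {"one": [-1.0, None, 2.5],
         "two": ["foo", "bar", "baz"],
         "three": [True, False, True]},
        index=list("abc"),
    )
    pq.write_table(pa.Table.from_pandas(df), "example.parquet")

    parquet_file = pq.ParquetFile("example.parquet")
    print(parquet_file.num_row_groups)
    # String columns written from pandas' str dtype come back as large_string,
    # which is what the expected output above now shows.
    print(parquet_file.read_row_group(0))

    # Dictionary-encode 'two' on read, useful for many repeated string values.
    print(pq.read_table("example.parquet", read_dictionary=["two"]))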