The two workhorse functions for reading text files (a.k.a. flat files) are
read_csv() and read_table(). They both use the same parsing code to
intelligently convert tabular data into a DataFrame object. See the
cookbook for some advanced strategies.

Delimiter to use. If sep is None,
will try to automatically determine this. Separators longer than 1 character
and different from '\s+' will be interpreted as regular expressions, will
force use of the python parsing engine and will ignore quotes in the data.
Regex example: '\\r\\t'.

delimiter :str, default None

Alternative argument name for sep.

delim_whitespace :boolean, default False

Specifies whether or not whitespace (e.g. ' ' or '\t')
will be used as the delimiter. Equivalent to setting sep='\s+'.
If this option is set to True, nothing should be passed in for the
delimiter parameter.

header :int or list of ints, default 'infer'

Row number(s) to use as the column names, and the start of the data. Default
behavior is as if header=0 if no names passed, otherwise as if
header=None. Explicitly pass header=0 to be able to replace existing
names. The header can be a list of ints that specify row locations for a
multi-index on the columns e.g. [0,1,3]. Intervening rows that are not
specified will be skipped (e.g. 2 in this example is skipped). Note that
this parameter ignores commented lines and empty lines if
skip_blank_lines=True, so header=0 denotes the first line of data
rather than the first line of the file.

names :array-like, default None

List of column names to use. If file contains no header row, then you should
explicitly pass header=None.

index_col :int or sequence or False, default None

Column to use as the row labels of the DataFrame. If a sequence is given, a
MultiIndex is used. If you have a malformed file with delimiters at the end of
each line, you might consider index_col=False to force pandas to not use
the first column as the index (row names).

usecols :array-like, default None

Return a subset of the columns. All elements in this array must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in names or
inferred from the document header row(s). For example, a valid usecols
parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter
results in much faster parsing time and lower memory usage.
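For instance, assuming a hypothetical data.csv with columns foo, bar and baz, a sketch of both forms:

pd.read_csv('data.csv', usecols=['foo', 'baz'])   # select by name
pd.read_csv('data.csv', usecols=[0, 2])           # select by position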

squeeze :boolean, default False

If the parsed data only contains one column then return a Series.

prefix :str, default None

Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, ...

mangle_dupe_cols :boolean, default True

Duplicate columns will be specified as ‘X.0’...’X.N’, rather than ‘X’...’X’.

Additional strings to recognize as NA/NaN. If dict passed, specific per-column
NA values. By default the following values are interpreted as NaN:
'-1.#IND','1.#QNAN','1.#IND','-1.#QNAN','#N/AN/A','#N/A','N/A','NA','#NA','NULL','NaN','-NaN','nan','-nan',''.

keep_default_na :boolean, default True

If na_values are specified and keep_default_na is False the default NaN
values are overridden, otherwise they’re appended to.

na_filter :boolean, default True

Detect missing value markers (empty strings and the value of na_values). In
data without any NAs, passing na_filter=False can improve the performance
of reading a large file.

verbose :boolean, default False

Indicate number of NA values placed in non-numeric columns.

skip_blank_lines :boolean, default True

If True, skip over blank lines rather than interpreting as NaN values.

If [[1,3]] -> combine columns 1 and 3 and parse as a single date
column.

If {'foo':[1,3]} -> parse columns 1, 3 as date and call result ‘foo’.
A fast-path exists for iso8601-formatted dates.

infer_datetime_format :boolean, default False

If True and parse_dates is enabled for a column, attempt to infer the
datetime format to speed up the processing.

keep_date_col :boolean, default False

If True and parse_dates specifies combining multiple columns then keep the
original columns.

date_parser :function, default None

Function to use for converting a sequence of string columns to an array of
datetime instances. The default uses dateutil.parser.parser to do the
conversion. Pandas will try to call date_parser in three different ways,
advancing to the next if an exception occurs: 1) Pass one or more arrays (as
defined by parse_dates) as arguments; 2) concatenate (row-wise) the string
values from the columns defined by parse_dates into a single array and pass
that; and 3) call date_parser once for each row using one or more strings
(corresponding to the columns defined by parse_dates) as arguments.

compression :{'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'

For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip,
bz2, zip, or xz if filepath_or_buffer is a string ending in ‘.gz’, ‘.bz2’,
‘.zip’, or ‘.xz’, respectively, and no decompression otherwise. If using ‘zip’,
the ZIP file must contain only one data file to be read in.
Set to None for no decompression.

New in version 0.18.1: support for ‘zip’ and ‘xz’ compression.

thousands :str, default None

Thousands separator.

decimal :str, default '.'

Character to recognize as decimal point. E.g. use ',' for European data.

lineterminator :str (length 1), default None

Character to break file into lines. Only valid with C parser.

quotechar :str (length 1)

The character used to denote the start and end of a quoted item. Quoted items
can include the delimiter and it will be ignored.

quoting :int or csv.QUOTE_* instance, default None

Control field quoting behavior per csv.QUOTE_* constants. Use one of
QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3). Default (None) results in QUOTE_MINIMAL
behavior.

escapechar :str (length 1), default None

One-character string used to escape delimiter when quoting is QUOTE_NONE.

comment :str, default None

Indicates remainder of line should not be parsed. If found at the beginning of
a line, the line will be ignored altogether. This parameter must be a single
character. Like empty lines (as long as skip_blank_lines=True), fully
commented lines are ignored by the parameter header but not by skiprows.
For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with
header=0 will result in ‘a,b,c’ being treated as the header.

error_bad_lines :boolean, default True

Lines with too many fields (e.g. a csv line with too many commas) will by
default cause an exception to be raised, and no DataFrame will be returned. If
False, then these “bad lines” will be dropped from the DataFrame that is
returned (only valid with C parser). See bad lines
below.

warn_bad_lines :boolean, default True

If error_bad_lines is False, and warn_bad_lines is True, a warning for
each “bad line” will be output (only valid with C parser).

Consider a typical CSV file containing, in this case, some time series data:

In [1]: print(open('foo.csv').read())
date,A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5

The default for read_csv is to create a DataFrame with simple numbered rows:
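For instance, reading the foo.csv shown above gives a frame with a default integer index (the output shown is a sketch based on that file's contents):

pd.read_csv('foo.csv')
#        date  A  B  C
# 0  20090101  a  1  2
# 1  20090102  b  3  4
# 2  20090103  c  4  5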

The parsers make every attempt to “do the right thing” and not be very
fragile. Type inference is a pretty big deal. So if a column can be coerced to
integer dtype without altering the contents, it will do so. Any non-numeric
columns will come through as object dtype as with the rest of pandas objects.

If the comment parameter is specified, then completely commented lines will
be ignored. By default, completely blank lines will be ignored as well. Both of
these are API changes introduced in version 0.15.

There are some exception cases when a file has been prepared with delimiters at
the end of each data line, confusing the parser. To explicitly disable the
index column inference and discard the last column, pass index_col=False:
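A short sketch (the data string is illustrative, with a trailing delimiter on each data row):

from io import StringIO
import pandas as pd

data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
pd.read_csv(StringIO(data))                    # the first column becomes the index
pd.read_csv(StringIO(data), index_col=False)   # keep a default integer index instead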

To better facilitate working with datetime data, read_csv() and
read_table() use the keyword arguments parse_dates and date_parser
to allow users to specify a variety of columns and date/time formats to turn the
input text data into datetime objects.

It is often the case that we may want to store date and time data separately,
or store various date fields separately. The parse_dates keyword can be
used to specify a combination of columns to parse the dates and/or times from.

You can specify a list of column lists to parse_dates, the resulting date
columns will be prepended to the output (so as to not affect the existing column
order) and the new column names will be the concatenation of the component
column names:
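A minimal sketch, assuming an illustrative CSV with separate date and time columns:

from io import StringIO
import pandas as pd

data = 'id,date,time,value\n1,20090101,10:00:00,1.5\n2,20090102,11:00:00,2.5'
df = pd.read_csv(StringIO(data), parse_dates=[['date', 'time']])
# a combined, parsed 'date_time' column is prepended to the output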

Note that if you wish to combine multiple columns into a single date column, a
nested list must be used. In other words, parse_dates=[1,2] indicates that
the second and third columns should each be parsed as separate date columns
while parse_dates=[[1,2]] means the two columns should be parsed into a
single column.

It is important to remember that if multiple text columns are to be parsed into
a single date column, then a new column is prepended to the data. The index_col
specification is based off of this new set of columns rather than the original
data columns:

read_csv has a fast_path for parsing datetime strings in iso8601 format,
e.g. "2000-01-01T00:01:02+00:00" and similar variations. If you can arrange
for your data to store datetimes in this format, load times will be
significantly faster, ~20x has been observed.

Note

When passing a dict as the parse_dates argument, the order of
the columns prepended is not guaranteed, because dict objects do not impose
an ordering on their keys. On Python 2.7+ you may use collections.OrderedDict
instead of a regular dict if this matters to you. Because of this, when using a
dict for ‘parse_dates’ in conjunction with the index_col argument, it’s best to
specify index_col as a column label rather than as an index on the resulting frame.

Pandas will try to call the date_parser function in three different ways. If
an exception is raised, the next one is tried:

date_parser is first called with one or more arrays as arguments,
as defined using parse_dates (e.g., date_parser(['2013','2013'],['1','2']))

If #1 fails, date_parser is called with all the columns
concatenated row-wise into a single array (e.g., date_parser(['20131','20132']))

If #2 fails, date_parser is called once for every row with one or more
string arguments from the columns indicated with parse_dates
(e.g., date_parser('2013','1') for the first row, date_parser('2013','2')
for the second, etc.)

Note that performance-wise, you should try these methods of parsing dates in order:

Try to infer the format using infer_datetime_format=True (see section below)

If you know the format, use pd.to_datetime():
date_parser=lambda x: pd.to_datetime(x, format=...)

If you have a really non-standard format, use a custom date_parser function.
For optimal performance, this should be vectorized, i.e., it should accept arrays
as arguments.

You can explore the date parsing functionality in date_converters.py and
add your own. We would love to turn this module into a community supported set
of date/time parsers. To get you started, date_converters.py contains
functions to parse dual date and time columns, year/month/day columns,
and year/month/day/hour/minute/second columns. It also contains a
generic_parser function so you can curry it with a function that deals with
a single date rather than the entire array.

If you have parse_dates enabled for some or all of your columns, and your
datetime strings are all formatted the same way, you may get a large speed
up by setting infer_datetime_format=True. If set, pandas will attempt
to guess the format of your datetime strings, and then use a faster means
of parsing the strings. 5-10x parsing speeds have been observed. pandas
will fallback to the usual parsing if either the format cannot be guessed
or the format that was guessed cannot properly parse the entire column
of strings. So in general, infer_datetime_format should not have any
negative consequences if enabled.

Here are some examples of datetime strings that can be guessed (All
representing December 30th, 2011 at 00:00:00)

“20111230”

“2011/12/30”

“20111230 00:00:00”

“12/30/2011 00:00:00”

“30/Dec/2011 00:00:00”

“30/December/2011 00:00:00”

infer_datetime_format is sensitive to dayfirst. With
dayfirst=True, it will guess “01/12/2011” to be December 1st. With
dayfirst=False (default) it will guess “01/12/2011” to be January 12th.

The parameter float_precision can be specified in order to use
a specific floating-point converter during parsing with the C engine.
The options are the ordinary converter, the high-precision converter, and
the round-trip converter (which is guaranteed to round-trip values after
writing to a file). For example:
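A sketch of the three converters on an illustrative high-precision value:

from io import StringIO
import pandas as pd

val = '0.3066101993807095471566981359501369297504425048828125'
data = 'a,b,c\n1,2,{0}'.format(val)

# ordinary converter
abs(pd.read_csv(StringIO(data), engine='c', float_precision=None)['c'][0] - float(val))
# high-precision converter
abs(pd.read_csv(StringIO(data), engine='c', float_precision='high')['c'][0] - float(val))
# round-trip converter
abs(pd.read_csv(StringIO(data), engine='c', float_precision='round_trip')['c'][0] - float(val))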

To control which values are parsed as missing values (which are signified by NaN), specify a
string in na_values. If you specify a list of strings, then all values in
it are considered to be missing values. If you specify a number (a float, like 5.0, or an integer like 5),
the corresponding equivalent values will also imply a missing value (in this case effectively
[5.0, 5] are recognized as NaN).

To completely override the default values that are recognized as missing, specify keep_default_na=False.
The default NaN recognized values are ['-1.#IND','1.#QNAN','1.#IND','-1.#QNAN','#N/A','N/A','NA','#NA','NULL','NaN','-NaN','nan','-nan']. Although a 0-length string
'' is not included in the default NaN values list, it is still treated
as a missing value.

read_csv(path, na_values=[5])

In the above, the default values, plus 5 and 5.0 when interpreted as numbers, are recognized as NaN.

read_csv(path, keep_default_na=False, na_values=[""])

Above, only an empty field will be NaN.

read_csv(path, keep_default_na=False, na_values=["NA", "0"])

Above, only NA and 0 as strings are NaN.

read_csv(path, na_values=["Nope"])

The default values, in addition to the string "Nope", are recognized as NaN.

The common values True, False, TRUE, and FALSE are all
recognized as boolean. Sometime you would want to recognize some other values
as being boolean. To do this use the true_values and false_values
options:
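For example, with illustrative data containing Yes/No markers:

from io import StringIO
import pandas as pd

data = 'a,b,c\n1,Yes,2\n3,No,4'
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
# column b is read back as boolean True/False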

Quotes (and other escape characters) in embedded fields can be handled in any
number of ways. One way is to use backslashes; to properly parse this data, you
should pass the escapechar option:

In [106]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'

In [107]: print(data)
a,b
"hello, \"Bob\", nice to see you",5

In [108]: pd.read_csv(StringIO(data), escapechar='\\')
Out[108]: 
                               a  b
0  hello, "Bob", nice to see you  5

While read_csv reads delimited data, the read_fwf() function works
with data files that have known and fixed column widths. The function parameters
to read_fwf are largely the same as read_csv with two extra parameters:

colspecs: A list of pairs (tuples) giving the extents of the
fixed-width fields of each line as half-open intervals (i.e., [from, to[ ).
String value ‘infer’ can be used to instruct the parser to try detecting
the column specifications from the first 100 rows of the data. Default
behaviour, if not specified, is to infer.

widths: A list of field widths which can be used instead of ‘colspecs’
if the intervals are contiguous.

The parser will take care of extra white spaces around the columns
so it’s ok to have extra separation between the columns in the file.

New in version 0.13.0.

By default, read_fwf will try to infer the file’s colspecs by using the
first 100 rows of the file. It can do it only in cases when the columns are
aligned and correctly separated by the provided delimiter (default delimiter
is whitespace).

In [122]: print(open('data/mindex_ex.csv').read())
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5
1977,"C",1.7,.8
1978,"A",.2,.06
1978,"B",.7,.2
1978,"C",.8,.3
1978,"D",.9,.5
1978,"E",1.4,.9
1979,"C",.2,.15
1979,"D",.14,.05
1979,"E",.5,.15
1979,"F",1.2,.5
1979,"G",3.4,1.9
1979,"H",5.4,2.7
1979,"I",6.4,1.2

The index_col argument to read_csv and read_table can take a list of
column numbers to turn multiple columns into a MultiIndex for the index of the
returned object:
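For example, using the mindex_ex.csv file shown above:

df = pd.read_csv('data/mindex_ex.csv', index_col=[0, 1])
# year and indiv form a two-level MultiIndex on the rows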

By specifying list of row locations for the header argument, you
can read in a MultiIndex for the columns. Specifying non-consecutive
rows will skip the intervening rows. In order to have the pre-0.13 behavior
of tupleizing columns, specify tupleize_cols=True.

Under the hood pandas uses a fast and efficient parser implemented in C as well
as a python implementation which is currently more feature-complete. Where
possible pandas uses the C parser (specified as engine='c'), but may fall
back to python if C-unsupported options are specified. Currently, C-unsupported
options include:

sep other than a single character (e.g. regex separators)

skip_footer

sep=None with delim_whitespace=False

Specifying any of the above options will produce a ParserWarning unless the
python engine is selected explicitly using engine='python'.

The Series and DataFrame objects have an instance method to_csv which
allows storing the contents of the object as a comma-separated-values file. The
function takes a number of arguments. Only the first is required.

path_or_buf: A string path to the file to write or a StringIO

sep : Field delimiter for the output file (default ”,”)

na_rep: A string representation of a missing value (default ‘’)

float_format: Format string for floating point numbers

cols: Columns to write (default None)

header: Whether to write out the column names (default True)

index: whether to write row (index) names (default True)

index_label: Column label(s) for index column(s) if desired. If None
(default), and header and index are True, then the index names are
used. (A sequence should be given if the DataFrame uses MultiIndex).

mode : Python write mode, default ‘w’

encoding: a string representing the encoding to use if the contents are
non-ASCII, for python versions prior to 3

line_terminator: Character sequence denoting line end (default ‘\n’)

quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL)

quotechar: Character used to quote fields (default ‘”’)

doublequote: Control quoting of quotechar in fields (default True)

escapechar: Character used to escape sep and quotechar when
appropriate (default None)

chunksize: Number of rows to write at a time

tupleize_cols: If False (default), write as a list of tuples, otherwise
write in an expanded line format suitable for read_csv

The Series object also has a to_string method, but with only the buf,
na_rep, float_format arguments. There is also a length argument
which, if set to True, will additionally output the length of the Series.

double_precision : The number of decimal places to use when encoding floating point values, default 10.

force_ascii : force encoded string to be ASCII, default True.

date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.

default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.

Note NaN's, NaT's and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.

Record oriented serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
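A small sketch of record-oriented output (the frame here is illustrative):

df = pd.DataFrame({'A': [1, 2], 'B': ['x', 'y']})
df.to_json(orient='records')
# '[{"A":1,"B":"x"},{"A":2,"B":"y"}]'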

Any orient option that encodes to a JSON object will not preserve the ordering of
index and column labels during round-trip serialization. If you wish to preserve
label ordering use the split option as it uses ordered containers.

Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a DataFrame if typ is not supplied or
is None. To explicitly force Series parsing, pass typ=series

filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be
a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file://localhost/path/to/table.json

typ : type of object to recover (series or frame), default ‘frame’

orient :

Series :
  default is index
  allowed values are {split, records, index}

DataFrame :
  default is columns
  allowed values are {split, records, index, columns, values}

The format of the JSON string

split : dict like {index -> [index], columns -> [columns], data -> [values]}

records : list like [{column -> value}, ... , {column -> value}]

index : dict like {index -> {column -> value}}

columns : dict like {column -> {index -> value}}

values : just the values array

dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data

convert_axes : boolean, try to convert the axes to the proper dtypes, default is True

convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True

numpy : direct decoding to numpy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True

precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality

date_unit : string, the timestamp unit to detect if converting dates. Default
None. By default the timestamp precision will be detected, if this is not desired
then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.

The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.

If a non-default orient was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see Orient Options for an
overview.

The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and all of the data
into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype. convert_axes should only
be set to False if you need to preserve string-like numbers (e.g. ‘1’, ‘2’) in an axes.

Note

Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:

it ends with '_at'

it ends with '_time'

it begins with 'timestamp'

it is 'modified'

it is 'date'

Warning

When reading JSON data, automatic coercing into dtypes has some quirks:

an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization

a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.

bool columns will be converted to integer on reconstruction

Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.

If numpy=True is passed to read_json an attempt will be made to sniff
an appropriate dtype during deserialization and to subsequently decode directly
to numpy arrays, bypassing the need for intermediate Python objects.

This can provide speedups if you are deserialising a large amount of numeric
data:

In [199]: timeit read_json(jsonfloats)
100 loops, best of 3: 10.7 ms per loop

In [200]: timeit read_json(jsonfloats, numpy=True)
100 loops, best of 3: 6.34 ms per loop

The speedup is less noticeable for smaller datasets:

In [201]: jsonfloats = dffloats.head(100).to_json()

In [202]: timeit read_json(jsonfloats)
100 loops, best of 3: 4.97 ms per loop

In [203]: timeit read_json(jsonfloats, numpy=True)
100 loops, best of 3: 3.96 ms per loop

Warning

Direct numpy decoding makes a number of assumptions and may fail or produce
unexpected output if these assumptions are not satisfied:

data is numeric.

data is uniform. The dtype is sniffed from the first value decoded.
A ValueError may be raised, or incorrect output may be produced
if this condition is not satisfied.

labels are ordered. Labels are only read from the first container, it is assumed
that each subsequent row / column has been encoded in the same order. This should be satisfied if the
data was encoded using to_json but may not be the case if the JSON
is from another source.

The following examples are not run by the IPython evaluator due to the fact
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn’t run, please do not
hesitate to report it over on pandas GitHub issues page.

Read a URL and match a table that contains specific text

match = 'Metcalf Bank'
df_list = read_html(url, match=match)

Specify a header row (by default <th> elements are used to form the column
index); if specified, the header row is taken from the data minus the parsed
header elements (<th> elements).

dfs = read_html(url, header=0)

Specify an index column

dfs = read_html(url, index_col=0)

Specify a number of rows to skip

dfs = read_html(url, skiprows=0)

Specify a number of rows to skip using a list (xrange (Python 2 only) works
as well)

dfs = read_html(url, skiprows=range(2))

Specify an HTML attribute

dfs1 = read_html(url, attrs={'id': 'table'})
dfs2 = read_html(url, attrs={'class': 'sortable'})
print(np.array_equal(dfs1[0], dfs2[0]))  # Should be True

Use some combination of the above

dfs = read_html(url, match='Metcalf Bank', index_col=0)

Read in pandas to_html output (with some loss of floating point precision)

The lxml backend will raise an error on a failed parse if that is the only
parser you provide (if you only have a single parser you can provide just a
string, but it is considered good practice to pass a list with one string if,
for example, the function expects a sequence of strings)

dfs = read_html(url, 'Metcalf Bank', index_col=0, flavor=['lxml'])

or

dfs = read_html(url, 'Metcalf Bank', index_col=0, flavor='lxml')

However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'], then the parse will most likely succeed. Note that as soon as a parse
succeeds, the function will return.

Finally, the escape argument allows you to control whether the
"<", ">" and "&" characters are escaped in the resulting HTML (by default it is
True). So to get the HTML without escaped characters pass escape=False.

The read_excel() method can read Excel 2003 (.xls) and
Excel 2007+ (.xlsx) files using the xlrd Python
module. The to_excel() instance method is used for
saving a DataFrame to Excel. Generally the semantics are
similar to working with csv data. See the cookbook for some
advanced strategies

To facilitate working with multiple sheets from the same file, the ExcelFile
class can be used to wrap the file and can be passed into read_excel.
There will be a performance benefit for reading multiple sheets as the file is
read into memory only once.

The sheet_names property will generate
a list of the sheet names in the file.

The primary use-case for an ExcelFile is parsing multiple sheets with
different parameters

data = {}
# For when Sheet1's format differs from Sheet2
with pd.ExcelFile('path_to_file.xls') as xls:
    data['Sheet1'] = pd.read_excel(xls, 'Sheet1', index_col=None, na_values=['NA'])
    data['Sheet2'] = pd.read_excel(xls, 'Sheet2', index_col=1)

Note that if the same parsing parameters are used for all sheets, a list
of sheet names can simply be passed to read_excel with no loss in performance.

# using the ExcelFile class
data = {}
with pd.ExcelFile('path_to_file.xls') as xls:
    data['Sheet1'] = read_excel(xls, 'Sheet1', index_col=None, na_values=['NA'])
    data['Sheet2'] = read_excel(xls, 'Sheet2', index_col=None, na_values=['NA'])

# equivalent using the read_excel function
data = read_excel('path_to_file.xls', ['Sheet1', 'Sheet2'], index_col=None, na_values=['NA'])

read_excel can read a MultiIndex index, by passing a list of columns to index_col
and a MultiIndex column by passing a list of rows to header. If either the index
or columns have serialized level names those will be read in as well by specifying
the rows/columns that make up the levels.

It is often the case that users will insert columns to do temporary computations
in Excel and you may not want to read in those columns. read_excel takes
a parse_cols keyword to allow you to specify a subset of columns to parse.

If parse_cols is an integer, then it is assumed to indicate the last column
to be parsed.

read_excel('path_to_file.xls', 'Sheet1', parse_cols=2)

If parse_cols is a list of integers, then it is assumed to be the file column
indices to be parsed.

It is possible to transform the contents of Excel cells via the converters
option. For instance, to convert a column to boolean:

read_excel('path_to_file.xls', 'Sheet1', converters={'MyBools': bool})

This option handles missing values and treats exceptions in the converters
as missing data. Transformations are applied cell by cell rather than to the
column as a whole, so the array dtype is not guaranteed. For instance, a
column of integers with missing values cannot be transformed to an array
with integer dtype, because NaN is strictly a float. You can manually mask
missing data to recover integer dtype:
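One possible sketch (the column name MyInts is hypothetical):

def cfun(x):
    # map empty cells to a sentinel instead of NaN so the column stays integer
    return int(x) if x else -1

read_excel('path_to_file.xls', 'Sheet1', converters={'MyInts': cfun})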

To write a DataFrame object to a sheet of an Excel file, you can use the
to_excel instance method. The arguments are largely the same as to_csv
described above, the first argument being the name of the excel file, and the
optional second argument the name of the sheet to which the DataFrame should be
written. For example:

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')

Files with a .xls extension will be written using xlwt and those with a
.xlsx extension will be written using xlsxwriter (if available) or
openpyxl.

The DataFrame will be written in a way that tries to mimic the REPL output. One
difference from 0.12.0 is that the index_label will be placed in the second
row instead of the first. You can get the previous behaviour by setting the
merge_cells option in to_excel() to False:
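For instance (the file name and index_label are illustrative):

df.to_excel('path_to_file.xlsx', index_label='label', merge_cells=False)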

Wringing a little more performance out of read_excel
Internally, Excel stores all numeric data as floats. Because this can
produce unexpected behavior when reading in data, pandas defaults to trying
to convert integral floats to integers (1.0 --> 1) when no information is lost.
You can pass convert_float=False to disable this behavior, which
may give a slight performance improvement.

Pandas supports writing Excel files to buffer-like objects such as StringIO or
BytesIO using ExcelWriter.

New in version 0.17.

Added support for Openpyxl >= 2.2

# Safe import for either Python 2.x or 3.x
try:
    from io import BytesIO
except ImportError:
    from cStringIO import StringIO as BytesIO

bio = BytesIO()

# By setting the 'engine' in the ExcelWriter constructor.
writer = ExcelWriter(bio, engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1')

# Save the workbook
writer.save()

# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()

Note

engine is optional but recommended. Setting the engine determines
the version of workbook produced. Setting engine='xlwt' will produce an
Excel 2003-format workbook (xls). Using either 'openpyxl' or
'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If
omitted, an Excel 2007-formatted workbook is produced.

By default, pandas uses the XlsxWriter for .xlsx and openpyxl
for .xlsm files and xlwt for .xls files. If you have multiple
engines installed, you can set the default engine through setting the
config options io.excel.xlsx.writer and
io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx
files if Xlsxwriter is not available.

To specify which writer you want to use, you can pass an engine keyword
argument to to_excel and to ExcelWriter. The built-in engines are:

openpyxl: This includes stable support for Openpyxl from 1.6.1. However,
it is advised to use version 2.2 and higher, especially when working with
styles.

xlsxwriter

xlwt

# By setting the 'engine' in the DataFrame and Panel 'to_excel()' methods.
df.to_excel('path_to_file.xlsx', sheet_name='Sheet1', engine='xlsxwriter')

# By setting the 'engine' in the ExcelWriter constructor.
writer = ExcelWriter('path_to_file.xlsx', engine='xlsxwriter')

# Or via pandas configuration.
from pandas import options
options.io.excel.xlsx.writer = 'xlsxwriter'

df.to_excel('path_to_file.xlsx', sheet_name='Sheet1')

A handy way to grab data is to use the read_clipboard method, which takes
the contents of the clipboard buffer and passes them to the read_table
method. For instance, you can copy the following
text to the clipboard (CTRL-C on many operating systems):

  A B C
x 1 4 p
y 2 5 q
z 3 6 r

And then import the data directly to a DataFrame by calling:

clipdf = pd.read_clipboard()

In [237]: clipdf
Out[237]: 
   A  B  C
x  1  4  p
y  2  5  q
z  3  6  r

The to_clipboard method can be used to write the contents of a DataFrame to
the clipboard. Following which you can paste the clipboard contents into other
applications (CTRL-V on many operating systems). Here we illustrate writing a
DataFrame into clipboard and reading it back.

Several internal refactorings, 0.13 (Series Refactoring), and 0.15 (Index Refactoring),
preserve compatibility with pickles created prior to these versions. However, these must
be read with pd.read_pickle, rather than the default python pickle.load.
See this question
for a detailed explanation.

Note

These methods were previously pd.save and pd.load, prior to 0.12.0, and are now deprecated.

Starting in 0.13.0, pandas is supporting the msgpack format for
object serialization. This is a lightweight portable binary format, similar
to binary JSON, that is highly space efficient, and provides good performance
both on the writing (serialization), and reading (deserialization).

Warning

This is a very new feature of pandas. We intend to provide certain
optimizations in the io of the msgpack data. Since this is marked
as an EXPERIMENTAL LIBRARY, the storage format may not be stable until a future release.

Unlike other io methods, to_msgpack is available on both a per-object basis,
df.to_msgpack() and using the top-level pd.to_msgpack(...) where you
can pack arbitrary collections of python lists, dicts, scalars, while intermixing
pandas objects.
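A rough sketch of both forms (the file name foo.msg is illustrative):

df = pd.DataFrame({'A': [1, 2, 3]})
df.to_msgpack('foo.msg')                                   # per-object method
pd.read_msgpack('foo.msg')
pd.to_msgpack('foo.msg', df, 'a string', {'k': [1, 2]})    # top-level function, mixed objects
pd.read_msgpack('foo.msg')                                 # returns a list of the packed objects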

There is a PyTables indexing bug which may appear when querying stores using an index. If you see a subset of results being returned, upgrade to PyTables >= 3.2. Stores created previously will need to be rewritten using the updated version.

Warning

As of version 0.17.0, HDFStore will not drop rows that have all missing values by default. Previously, if all values (except the index) were missing, HDFStore would not write those rows to disk.

The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called
the fixed format. These types of stores are not appendable once written (though you can simply
remove them and rewrite). Nor are they queryable; they must be
retrieved in their entirety. They also do not support dataframes with non-unique column names.
The fixed format stores offer very fast writing and slightly faster reading than table stores.
This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.

Warning

A fixed format will raise a TypeError if you try to retrieve using a where.

HDFStore supports another PyTables format on disk, the table
format. Conceptually a table is shaped very much like a DataFrame,
with rows and columns. A table may be appended to in the same or
other sessions. In addition, delete & query type operations are
supported. This format is specified by format='table' or format='t'
to append or put or to_hdf.

New in version 0.13.

This format can be set as an option as well, pd.set_option('io.hdf.default_format', 'table'), to
enable put/append/to_hdf to store in the table format by default.

Keys to a store can be specified as a string. These can be in a
hierarchical path-name like format (e.g. foo/bar/bah), which will
generate a hierarchy of sub-stores (or Groups in PyTables
parlance). Keys can be specified without the leading ‘/’ and are ALWAYS
absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove
everything in the sub-store and BELOW, so be careful.

Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.

In [8]: store.foo.bar.bah
AttributeError: 'HDFStore' object has no attribute 'foo'

# you can directly access the actual PyTables node but using the root node
In [9]: store.root.foo.bar.bah
Out[9]: 
/foo/bar/bah (Group) ''
  children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]

Storing mixed-dtype data is supported. Strings are stored as a
fixed-width using the maximum size of the appended column. Subsequent attempts
at appending longer strings will raise a ValueError.

Passing min_itemsize={'values': size} as a parameter to append
will set a larger minimum for the string columns. Storing floats, strings, ints, bools, and datetime64 is currently supported. For string
columns, passing nan_rep='nan' to append will change the default
nan representation on disk (which converts to/from np.nan); this
defaults to nan.

The query capabilities have changed substantially starting in 0.13.0.
Queries from prior versions are accepted (with a DeprecationWarning printed)
if they are not string-like.

select and delete operations have an optional criterion that can
be specified to select/delete only a subset of the data. This allows one
to have a very large on-disk table and retrieve only a portion of the
data.

A query is specified using the Term class under the hood, as a boolean expression.

index and columns are supported indexers of a DataFrame

major_axis, minor_axis, and items are supported indexers of
the Panel

if data_columns are specified, these can be used as additional indexers

Valid comparison operators are:

=, ==, !=, >, >=, <, <=

Valid boolean expressions are combined with:

| : or

& : and

( and ) : for grouping

These rules are similar to how boolean expressions are used in pandas for indexing.

Note

= will be automatically expanded to the comparison operator ==

~ is the not operator, but can only be used in very limited
circumstances

If a list/tuple of expressions is passed they will be combined via &

The following are valid expressions:

'index >= date'

"columns = ['A', 'D']"

"columns in ['A', 'D']"

'columns = A'

'columns == A'

"~(columns = ['A', 'B'])"

'index > df.index[3] & string = "bar"'

'(index > df.index[3] & index <= df.index[6]) | string = "bar"'

"ts >= Timestamp('2012-02-01')"

"major_axis >= 20130101"

The indexers are on the left-hand side of the sub-expression:

columns, major_axis, ts

The right-hand side of the sub-expression (after a comparison operator) can be:

functions that will be evaluated, e.g. Timestamp('2012-02-01')

strings, e.g. "bar"

date-like, e.g. 20130101, or "20130101"

lists, e.g. "['A','B']"

variables that are defined in the local names space, e.g. date

Note

Passing a string to a query by interpolating it into the query
expression is not recommended. Simply assign the string of interest to a
variable and use that variable in an expression. For example, do this

string="HolyMoly'"store.select('df','index == string')

instead of this

string="HolyMoly'"store.select('df','index == %s'%string)

The latter will not work and will raise a SyntaxError. Note that
there's a single quote followed by a double quote in the string
variable.

Beginning in 0.13.0, you can store and query using the timedelta64[ns] type. Terms can be
specified in the format: <float>(<unit>), where float may be signed (and fractional), and unit can be
D, s, ms, us, ns for the timedelta. Here’s an example:
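A minimal sketch (store is an open HDFStore; the frame itself is illustrative):

from datetime import timedelta
import pandas as pd

store = pd.HDFStore('store.h5')
dftd = pd.DataFrame({'A': pd.Timestamp('20130101'),
                     'B': [pd.Timestamp('20130101') + timedelta(days=i, seconds=10)
                           for i in range(10)]})
dftd['C'] = dftd['A'] - dftd['B']
store.append('dftd', dftd, data_columns=True)   # C becomes a queryable timedelta64[ns] column
store.select('dftd', "C < '-3.5D'")             # terms use <float><unit>, here days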

You can create/modify an index for a table with create_table_index
after data is already in the table (after an append/put
operation). Creating a table index is highly encouraged. This will
speed your queries a great deal when you use a select with the
indexed dimension as the where.

Note

Indexes are automagically created (starting 0.10.1) on the indexables
and any data columns you specify. This behavior can be turned off by passing
index=False to append.

You can designate (and index) certain columns that you want to be able
to perform queries on (other than the indexable columns, which you can
always query). For instance, say you want to perform this common
operation, on-disk, and return just the frame that matches this
query. You can specify data_columns=True to force all columns to
be data_columns.
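A sketch, assuming a DataFrame df with numeric columns B and C and an open store:

store.append('df_dc', df, data_columns=['B', 'C'])   # B and C become queryable data columns
store.select('df_dc', 'B > 0 & C < 1')
store.append('df_dc_all', df, data_columns=True)     # make every column a data column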

There is some performance degradation by making lots of columns into
data columns, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
create a new table!)

You can also use the iterator with read_hdf which will open, then
automatically close the store when finished iterating.

for df in read_hdf('store.h5', 'df', chunksize=3):
    print(df)

Note, that the chunksize keyword applies to the source rows. So if you
are doing a query, then the chunksize will subdivide the total rows in the table
and the query applied, returning an iterator on potentially unequal sized chunks.

Here is a recipe for generating a query and using it to create equal sized return
chunks.

To retrieve a single indexable or data column, use the
method select_column. This will, for example, enable you to get the index
very quickly. These return a Series of the result, indexed by the row number.
These do not currently accept the where selector.
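For instance, against the hypothetical df_dc table from the sketch above:

store.select_column('df_dc', 'index')   # the index, returned as a Series indexed by row number
store.select_column('df_dc', 'B')       # a data column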

Sometimes you want to get the coordinates (a.k.a the index locations) of your query. This returns an
Int64Index of the resulting locations. These coordinates can also be passed to subsequent
where operations.

Sometime your query can involve creating a list of rows to select. Usually this mask would
be a resulting index from an indexing operation. This example selects the months of
a datetimeindex which are 5.

New in 0.10.1 are the methods append_to_multiple and
select_as_multiple, that can perform appending/selecting from
multiple tables at once. The idea is to have one table (call it the
selector table) that you index most/all of the columns, and perform your
queries. The other table(s) are data tables with an index matching the
selector table’s index. You can then perform a very fast query
on the selector table, yet get lots of data back. This method is similar to
having a very wide table, but enables more efficient queries.

The append_to_multiple method splits a given single DataFrame
into multiple tables according to d, a dictionary that maps the
table names to a list of ‘columns’ you want in that table. If None
is used in place of a list, that table will have the remaining
unspecified columns of the given DataFrame. The argument selector
defines which table is the selector table (which you can make queries from).
The argument dropna will drop rows from the input DataFrame to ensure
tables are synchronized. This means that if a row for one of the tables
being written to is entirely np.NaN, that row will be dropped from all tables.

If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES.
Remember that entirely np.NaN rows are not written to the HDFStore, so if
you choose to call dropna=False, some tables may have more rows than others,
and therefore select_as_multiple may not work or it may return unexpected
results.
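A rough sketch of the pair of methods (the table names and the frame df_mt are illustrative; store is an open HDFStore):

d = {'df1_mt': ['A', 'B'], 'df2_mt': None}   # df2_mt receives all remaining columns
store.append_to_multiple(d, df_mt, selector='df1_mt')
store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A > 0', 'B > 0'], selector='df1_mt')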

You can delete from a table selectively by specifying a where. In
deleting rows, it is important to understand that PyTables deletes
rows by erasing the rows, then moving the following data. Thus
deleting can potentially be a very expensive operation depending on the
orientation of your data. This is especially true in higher dimensional
objects (Panel and Panel4D). To get optimal performance, it’s
worthwhile to have the dimension you are deleting be the first of the
indexables.

Data is ordered (on the disk) in terms of the indexables. Here’s a
simple use case. You store panel-type data, with dates in the
major_axis and ids in the minor_axis. The data is then
interleaved like this:

date_1
- id_1
- id_2
- .
- id_n

date_2
- id_1
- .
- id_n

It should be clear that a delete operation on the major_axis will be
fairly quick, as one chunk is removed, then the following data moved. On
the other hand a delete operation on the minor_axis will be very
expensive. In this case it would almost certainly be faster to rewrite
the table using a where that selects all but the missing data.

PyTables allows the stored data to be compressed. This applies to
all kinds of stores, not just tables.

Pass complevel=int for a compression level (1-9, with 0 being no
compression, and the default)

Pass complib=lib where lib is any of zlib, bzip2, lzo, blosc for
whichever compression library you prefer.

HDFStore will use the file based compression scheme if no overriding
complib or complevel options are provided. blosc offers very
fast compression, and is my most used. Note that lzo and bzip2
may not be installed (by Python) by default.

PyTables offers better write performance when tables are compressed after
they are written, as opposed to turning on compression at the very
beginning. You can use the supplied PyTables utility
ptrepack. In addition, ptrepack can change compression levels
after the fact.

HDFStore is not-threadsafe for writing. The underlying
PyTables only supports concurrent reads (via threading or
processes). If you need reading and writing at the same time, you
need to serialize these operations in a single thread in a single
process. You will corrupt your data otherwise. See (GH2397) for more information.

If you use locks to manage write access between multiple processes, you
may want to use fsync() before releasing write locks. For
convenience you can use store.flush(fsync=True) to do this for you.

Once a table is created its items (Panel) / columns (DataFrame)
are fixed; only exactly the same columns can be appended

Be aware that timezones (e.g., pytz.timezone('US/Eastern'))
are not necessarily equal across timezone versions. So if data is
localized to a specific timezone in the HDFStore using one version
of a timezone library and that data is updated with another version, the data
will be converted to UTC since these timezones are not considered
equal. Either use the same version of timezone library or use tz_convert with
the updated timezone definition.

Warning

PyTables will show a NaturalNameWarning if a column name
cannot be used as an attribute selector.
Natural identifiers contain only letters, numbers, and underscores,
and may not begin with a number.
Other identifiers cannot be used in a where clause
and are generally a bad idea.

Writing data to a HDFStore that contains a category dtype was implemented
in 0.15.2. Queries work the same as if it was an object array. However, the category dtyped data is
stored in a more efficient manner.

The format of the Categorical is readable by prior versions of pandas (< 0.15.2), but will retrieve
the data as an integer based column (e.g. the codes). However, the categories can be retrieved,
but require the user to select them manually using the explicit meta path.

The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns.
A string column itemsize is calculated as the maximum of the
length of data (for that column) that is passed to the HDFStore, in the first append. Subsequent appends
may introduce a string for a column larger than the column can hold, in which case an Exception will be raised (otherwise you
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.

Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column.
min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to
allow all indexables or data_columns to have this min_itemsize.

Starting in 0.11.0, passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.

Note

If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed

String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan.
You could inadvertently turn an actual nan value into a missing value.

In R this file can be read into a data.frame object using the rhdf5
library. The following example function reads the corresponding column names
and data values from the values and assembles them into a data.frame:

The R function lists the entire HDF5 file’s contents and assembles the
data.frame object from all matching nodes, so use this only as a
starting point if you have stored multiple DataFrame objects to a
single HDF5 file.

0.10.1 of HDFStore can read tables created in a prior version of pandas,
however query terms using the
prior (undocumented) methodology are unsupported. HDFStore will
issue a warning if you try to use a legacy-format file. You must
read in the entire file and write it out using the new format, using the
method copy to take advantage of the updates. The group attribute
pandas_version contains the version information. copy takes a
number of options, please see the docstring.

The tables format comes with a writing performance penalty as compared to
fixed stores. The benefit is the ability to append/delete and
query (potentially very large amounts of data). Write times are
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.

You can pass chunksize=<int> to append, specifying the
write chunksize (default is 50000). This will significantly lower
your memory usage on writing.

You can pass expectedrows=<int> to the first append,
to set the TOTAL number of expected rows that PyTables will
expect. This will optimize read/write performance.

Duplicate rows can be written to tables, but are filtered out in
selection (with the last items being selected; thus a table is
unique on major, minor pairs)

A PerformanceWarning will be raised if you are attempting to
store types that will be pickled by PyTables (rather than stored as
endemic types). See
Here
for more information and some solutions.

These, by default, index the three axes items,major_axis,minor_axis. On an AppendableTable it is possible to setup with the
first append a different indexing scheme, depending on how you want to
store your data. Pass the axes keyword with a list of dimensions
(currently must be exactly 1 less than the total dimensions of the
object). This cannot be changed after table creation.

The pandas.io.sql module provides a collection of query wrappers to both
facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction
is provided by SQLAlchemy if installed. In addition you will need a driver library for
your database. Examples of such drivers are psycopg2
for PostgreSQL or pymysql for MySQL.
For SQLite this is
included in Python’s standard library by default.
You can find an overview of supported drivers for each SQL dialect in the
SQLAlchemy docs.

New in version 0.14.0.

If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
for mysql for backwards compatibility, but this is deprecated and will be
removed in a future version).
This mode requires a Python database adapter which respect the Python
DB-API.

The function read_sql() is a convenience wrapper around
read_sql_table() and read_sql_query() (and for
backward compatibility) and will delegate to the specific function depending on
the provided input (database table name or sql query).
Table names do not need to be quoted if they have special characters.

In the following example, we use the SQLite SQL database
engine. You can use a temporary SQLite database where data are stored in
“memory”.

To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.
For more information on create_engine() and the URI formatting, see the examples
below and the SQLAlchemy documentation
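For instance, an in-memory SQLite engine (which lives only for the session):

from sqlalchemy import create_engine

engine = create_engine('sqlite:///:memory:')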

Assuming the following data is in a DataFrame data, we can insert it into
the database using to_sql().

id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True

In [437]: data.to_sql('data', engine)

With some databases, writing large DataFrames can result in errors due to
packet size limitations being exceeded. This can be avoided by setting the
chunksize parameter when calling to_sql. For example, the following
writes data to the database in batches of 1000 rows at a time:
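A minimal sketch (the destination table name is illustrative):

data.to_sql('data_chunked', engine, chunksize=1000)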

to_sql() will try to map your data to an appropriate
SQL data type based on the dtype of the data. When you have columns of dtype
object, pandas will try to infer the data type.

You can always override the default type by specifying the desired SQL type of
any of the columns by using the dtype argument. This argument needs a
dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
fallback mode).
For example, specifying to use the sqlalchemy String type instead of the
default Text type for string columns:
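For instance (Col_1 refers to the example data above):

from sqlalchemy.types import String

data.to_sql('data_dtype', engine, dtype={'Col_1': String})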

Due to the limited support for timedelta’s in the different database
flavors, columns with type timedelta64 will be written as integer
values as nanoseconds to the database and a warning will be raised.

Note

Columns of category dtype will be converted to the dense representation
as you would get with np.asarray(categorical) (e.g. for string categories
this gives an array of strings).
Because of this, reading the database table back in does not generate
a categorical.

Reading from and writing to different schemas is supported through the schema
keyword in the read_sql_table() and to_sql()
functions. Note however that this depends on the database flavor (sqlite does not
have schemas). For example:
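A sketch (the schema name is illustrative and must already exist in the database):

df.to_sql('table', engine, schema='other_schema')
pd.read_sql_table('table', engine, schema='other_schema')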

You can query using raw SQL in the read_sql_query() function.
In this case you must use the SQL variant appropriate for your database.
When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs,
which are database-agnostic.
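For instance, against the data table written above:

pd.read_sql_query('SELECT * FROM data', engine)
pd.read_sql_query('SELECT id, Col_1, Col_2 FROM data WHERE id = 42;', engine)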

You can also run a plain query without creating a dataframe with
execute(). This is useful for queries that don’t return values,
such as INSERT. This is functionally equivalent to calling execute on the
SQLAlchemy engine or db connection object. Again, you must use the SQL syntax
variant appropriate for your database.
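A sketch (the table name and values are illustrative):

from pandas.io import sql

sql.execute('SELECT * FROM table_name', engine)
sql.execute('INSERT INTO table_name VALUES(?, ?, ?)', engine, params=[(1, 12.2, True)])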

To connect with SQLAlchemy you use the create_engine() function to create an engine
object from database URI. You only need to create the engine once per database you are
connecting to.

from sqlalchemy import create_engine

engine = create_engine('postgresql://scott:tiger@localhost:5432/mydatabase')

engine = create_engine('mysql+mysqldb://scott:tiger@localhost/foo')

engine = create_engine('oracle://scott:tiger@127.0.0.1:1521/sidname')

engine = create_engine('mssql+pyodbc://mydsn')

# sqlite://<nohostname>/<path>
# where <path> is relative:
engine = create_engine('sqlite:///foo.db')

# or absolute, starting with a slash:
engine = create_engine('sqlite:////absolute/path/to/foo.db')

The pandas.io.gbq module provides a wrapper for Google’s BigQuery
analytics web service to simplify retrieving results from BigQuery tables
using SQL-like queries. Result sets are parsed into a pandas
DataFrame with a shape and data types derived from the source table.
Additionally, DataFrames can be inserted into new BigQuery tables or appended
to existing tables.

Authentication to the Google BigQuery service is via OAuth2.0.
It is possible to authenticate with either user account credentials or service account credentials.

Authenticating with user account credentials is as simple as following the prompts in a browser window
which will be automatically opened for you. You will be authenticated to the specified
BigQuery account using the product name pandas GBQ. This is only possible on localhost.
The remote authentication using user account credentials is not currently supported in Pandas.
Additional information on the authentication mechanism can be found
here.

Authentication with service account credentials is possible via the ‘private_key’ parameter. This method
is particularly useful when working on remote servers (e.g. a Jupyter/IPython notebook on a remote host).
Additional information on service accounts can be found
here.

The destination table and destination dataset will automatically be created if they do not already exist.

The if_exists argument can be used to dictate whether to 'fail', 'replace'
or 'append' if the destination table already exists. The default value is 'fail'.

For example, assume that if_exists is set to 'fail'. The following snippet will raise
a TableCreationError if the destination table already exists.

df.to_gbq('my_dataset.my_table', projectid, if_exists='fail')

Note

If the if_exists argument is set to 'append', the destination dataframe will
be written to the table using the defined table schema and column types. The
dataframe must match the destination table in column order, structure, and
data types.
If the if_exists argument is set to 'replace', and the existing table has a
different schema, a delay of 2 minutes will be forced to ensure that the new schema
has propagated in the Google environment. See
Google BigQuery issue 191.

Writing large DataFrames can result in errors due to size limitations being exceeded.
This can be avoided by setting the chunksize argument when calling to_gbq().
For example, the following writes df to a BigQuery table in batches of 10000 rows at a time:

df.to_gbq('my_dataset.my_table', projectid, chunksize=10000)

You can also see the progress of your post via the verbose flag which defaults to True.
For example:

While BigQuery uses SQL-like syntax, it has some important differences from traditional
databases both in functionality, API limitations (size and quantity of queries or uploads),
and how Google charges for use of the service. You should refer to Google BigQuery documentation
often as the service seems to be changing and evolving. BigQuery is best for analyzing large
sets of data quickly, but it is not a direct replacement for a transactional database.

If you delete and re-create a BigQuery table with the same name, but different table schema,
you must wait 2 minutes before streaming data into the table. As a workaround, consider creating
the new table with a different name. Refer to
Google BigQuery issue 191.

Stata data files have limited data type support; only strings with
244 or fewer characters, int8, int16, int32, float32
and float64 can be stored in .dta files. Additionally,
Stata reserves certain values to represent missing data. Exporting a
non-missing value that is outside of the permitted range in Stata for
a particular data type will retype the variable to the next larger
size. For example, int8 values are restricted to lie between -127
and 100 in Stata, and so variables with values above 100 will trigger
a conversion to int16. nan values in floating points data
types are stored as the basic missing data type (. in Stata).

Note

It is not possible to export missing data values for integer data types.

The Stata writer gracefully handles other data types including int64,
bool, uint8, uint16, uint32 by casting to
the smallest supported type that can represent the data. For example, data
with a type of uint8 will be cast to int8 if all values are less than
100 (the upper bound for non-missing int8 data in Stata), or, if values are
outside of this range, the variable is cast to int16.

Warning

Conversion from int64 to float64 may result in a loss of precision
if int64 values are larger than 2**53.

Warning

StataWriter and
to_stata() only support fixed width
strings containing up to 244 characters, a limitation imposed by the version
115 dta file format. Attempting to write Stata dta files with strings
longer than 244 characters raises a ValueError.

The parameter convert_categoricals indicates whether value labels should be
read and used to create a Categorical variable from them. Value labels can
also be retrieved by the function value_labels, which requires read()
to be called before use.

The parameter convert_missing indicates whether missing value
representations in Stata should be preserved. If False (the default),
missing values are represented as np.nan. If True, missing values are
represented using StataMissingValue objects, and columns containing missing
values will have object data type.

Setting preserve_dtypes=False will upcast to the standard pandas data types:
int64 for all integer types and float64 for floating point data. By default,
the Stata data types are preserved when importing.

Categorical data can be exported to Stata data files as value labeled data.
The exported data consists of the underlying category codes as integer data values
and the categories as value labels. Stata does not have an explicit equivalent
to a Categorical and information about whether the variable is ordered
is lost when exporting.

Warning

Stata only supports string value labels, and so str is called on the
categories when exporting data. Exporting Categorical variables with
non-string categories produces a warning, and can result in a loss of
information if the str representations of the categories are not unique.

When importing categorical data, the values of the variables in the Stata
data file are not preserved since Categorical variables always
use integer data types between -1 and n-1 where n is the number
of categories. If the original values in the Stata data file are required,
these can be imported by setting convert_categoricals=False, which will
import original data (but not the variable labels). The original values can
be matched to the imported categorical data since there is a simple mapping
between the original Stata data values and the category codes of imported
Categorical variables: missing values are assigned code -1, and the
smallest original value is assigned 0, the second smallest is assigned
1 and so on until the largest original value is assigned the code n-1.

Note

Stata supports partially labeled series. These series have value labels for
some but not all data values. Importing a partially labeled series will produce
a Categorical with string categories for the values that are labeled and
numeric categories for values with no label.

SAS files only contain two value types: ASCII text and floating point
values (usually 8 bytes but sometimes truncated). For xport files,
there is no automatic type conversion to integers, dates, or
categoricals. For SAS7BDAT files, the format codes may allow date
variables to be automatically converted to dates. By default the
whole file is read and returned as a DataFrame.

Specify a chunksize or use iterator=True to obtain reader
objects (XportReader or SAS7BDATReader) for incrementally
reading the file. The reader objects also have attributes that
contain additional information about the file and its variables.
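A sketch of chunked reading (the file name is illustrative):

rdr = pd.read_sas('sas_data.sas7bdat', chunksize=100000)
for chunk in rdr:
    print(chunk.shape)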

pandas itself only supports IO with a limited set of file formats that map
cleanly to its tabular data model. For reading and writing other file formats
into and from pandas, we recommend these packages from the broader community.