Fast queries now do not log above DEBUG level. (GH#204)
With BigQuery’s release of clustering, querying smaller samples of data is now faster and cheaper.
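
The logging change can be sketched with the standard library: messages emitted at DEBUG are filtered out when a logger's effective level is INFO, so fast queries stay quiet by default. The logger name and messages below are hypothetical, not pandas-gbq's actual output.

```python
import logging

# Minimal sketch, assuming a hypothetical logger name: DEBUG-level
# messages are suppressed once the effective level is INFO.
logger = logging.getLogger("pandas_gbq.example")
logger.setLevel(logging.INFO)
logger.propagate = False

records = []
handler = logging.Handler()
handler.emit = records.append  # capture records instead of writing them
logger.addHandler(handler)

logger.debug("query done, 0 bytes processed")  # suppressed at INFO level
logger.info("load job started")                # passes the level check

assert [r.getMessage() for r in records] == ["load job started"]
```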

Don’t load credentials from disk if reauth is True. (GH#212)
This fixes a bug where pandas-gbq could not refresh credentials if the
cached credentials were invalid, revoked, or expired, even when
reauth=True.
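
A minimal sketch of the fixed control flow, where `load_cached` and `run_flow` are hypothetical stand-ins for pandas-gbq's internal credential helpers: when reauth=True, cached credentials on disk are never consulted, so a fresh authorization flow always runs.

```python
# Hypothetical sketch of the fixed logic; load_cached and run_flow
# are placeholders, not pandas-gbq's real internals.
def get_credentials(reauth, load_cached, run_flow):
    if not reauth:
        credentials = load_cached()  # may be invalid, revoked, or expired
        if credentials is not None:
            return credentials
    # reauth=True (or no usable cache): always run a fresh auth flow
    return run_flow()

assert get_credentials(True, lambda: "cached", lambda: "fresh") == "fresh"
assert get_credentials(False, lambda: "cached", lambda: "fresh") == "cached"
assert get_credentials(False, lambda: None, lambda: "fresh") == "fresh"
```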

Use the google-cloud-bigquery library for API calls. The google-cloud-bigquery package is a new dependency, and dependencies on google-api-python-client and httplib2 are removed. See the installation guide for more details. (GH#93)

Structs and arrays are now named properly (GH#23) and BigQuery functions like array_agg no longer run into errors during type conversion (GH#22).

to_gbq() now uses a load job instead of the streaming API. The StreamingInsertError class has been removed, as it is no longer used by to_gbq(). (GH#7, GH#75)

The DataFrame passed to `.to_gbq(..., if_exists='append')` no longer needs to contain all the fields in the BigQuery schema; a subset is sufficient. (GH#24)
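
As a pure-Python illustration (not the library's actual check), appending now only requires the DataFrame's columns to be a subset of the remote table's schema fields:

```python
# Illustrative only: the column names and schema below are hypothetical.
remote_fields = {"id", "name", "created_at"}  # fields in the BigQuery table

def can_append(df_columns, remote_fields):
    # Appending is valid when every DataFrame column exists in the schema;
    # schema fields missing from the DataFrame are now allowed.
    return set(df_columns) <= remote_fields

assert can_append(["id", "name"], remote_fields)       # subset: OK
assert not can_append(["id", "color"], remote_fields)  # unknown field
```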

Use the google-auth library for authentication because oauth2client is deprecated. (GH#39)

read_gbq() now has an auth_local_webserver boolean argument to control whether to use the web server or console flow when getting user credentials. Replaces the `--noauth_local_webserver` command-line argument. (GH#35)

read_gbq() now displays the BigQuery Job ID and standard price in verbose output. (GH#70 and GH#71)

Fixed a bug when appending to a BigQuery table whose fields have modes (NULLABLE, REQUIRED, REPEATED) specified. These modes were compared against the remote schema, and writing a table via to_gbq() would previously raise an error. (GH#13)
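
A hypothetical sketch of the kind of comparison this fix implies (not pandas-gbq's actual code): schemas are matched on field name and type only, so a mode attribute on the remote schema no longer causes a spurious mismatch.

```python
# Hypothetical helper; field dicts mimic the BigQuery schema format.
def schemas_match(local_fields, remote_fields):
    # Normalize to (name, type) pairs, deliberately ignoring "mode".
    normalize = lambda fields: sorted((f["name"], f["type"]) for f in fields)
    return normalize(local_fields) == normalize(remote_fields)

local = [{"name": "id", "type": "INTEGER"}]
remote = [{"name": "id", "type": "INTEGER", "mode": "REQUIRED"}]
assert schemas_match(local, remote)  # mode no longer breaks the match
```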

read_gbq() now stores INTEGER columns as dtype=object if they contain NULL values; otherwise they are stored as int64. This prevents precision loss for integers greater than 2**53. Furthermore, FLOAT columns with values above 10**4 are no longer cast to int64, which also caused precision loss. (pandas-GH#14064, pandas-GH#14305)
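
The integer precision issue can be seen with plain Python floats: float64 (the usual fallback dtype for columns containing NULLs) has a 53-bit significand, so integers above 2**53 lose precision when cast to float, which is why object dtype is used instead.

```python
# float64 cannot represent every integer above 2**53 exactly.
big = 2**53 + 1
assert float(big) == float(2**53)  # the +1 vanishes in float64
assert big - 2**53 == 1            # but Python ints stay exact
```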