These methods create references to datasets, not the datasets themselves. You can have
a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to
create a dataset from a reference:
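
(A minimal sketch, assuming the usual imports; "my-project-id" and "my_dataset" are
placeholders, and in newer releases of the package Dataset.Create also accepts a
*DatasetMetadata argument.)

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "my-project-id")
if err != nil {
    // TODO: handle error.
}
// Dataset returns a reference; the dataset itself need not exist yet.
ds := client.Dataset("my_dataset")
if err := ds.Create(ctx); err != nil {
    // TODO: handle error.
}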

You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type
to specify the schema and insert ID by hand, or just supply the struct or struct pointer
directly and the schema will be inferred:
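
(A sketch building on the ctx and client from the earlier example; the Item type and
the "my_dataset" and "items" identifiers are hypothetical.)

type Item struct {
    Name  string
    Size  float64
    Count int
}

u := client.Dataset("my_dataset").Table("items").Uploader()
// Supplying structs (or struct pointers) directly lets the schema be inferred.
items := []*Item{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
}
if err := u.Put(ctx, items); err != nil {
    // TODO: handle error.
}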

If you've been following so far, extracting data from a BigQuery table
into a Google Cloud Storage object will feel familiar. First create an
Extractor, then optionally configure it, and lastly call its Run method.
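
For example (a sketch; the bucket, object and table names are placeholders):

gcsRef := bigquery.NewGCSReference("gs://my-bucket/extracted.csv")
extractor := client.Dataset("my_dataset").Table("my_table").ExtractorTo(gcsRef)
extractor.DisableHeader = true // optional configuration via ExtractConfig
job, err := extractor.Run(ctx)
if err != nil {
    // TODO: handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: handle error.
}
if status.Err() != nil {
    // TODO: handle the job's failure.
}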

type CopyConfig struct {
    // JobID is the ID to use for the job. If empty, a random job ID will be generated.
    JobID string

    // If AddJobIDSuffix is true, then a random string will be appended to JobID.
    AddJobIDSuffix bool

    // Srcs are the tables from which data will be copied.
    Srcs []*Table

    // Dst is the table into which the data will be copied.
    Dst *Table

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteAppend.
    WriteDisposition TableWriteDisposition
}

Update modifies specific Dataset metadata fields.
To perform a read-modify-write that protects against intervening writes,
set the etag argument to the DatasetMetadata.ETag field from the read.
Pass the empty string for etag for a "blind write" that will always succeed.

This example illustrates how to perform a read-modify-write sequence on dataset
metadata. Passing the metadata's ETag to the Update call ensures that the call
will fail if the metadata was changed since the read.
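
A sketch of that sequence (dataset and field values are placeholders):

ds := client.Dataset("my_dataset")
md, err := ds.Metadata(ctx)
if err != nil {
    // TODO: handle error.
}
// Passing md.ETag makes the update fail if the metadata changed after the read.
md2, err := ds.Update(ctx,
    bigquery.DatasetMetadataToUpdate{Name: "new " + md.Name},
    md.ETag)
if err != nil {
    // TODO: handle error.
}
_ = md2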

type DatasetIterator struct {
    // ListHidden causes hidden datasets to be listed when set to true.
    // Set before the first call to Next.
    ListHidden bool

    // Filter restricts the datasets returned by label. The filter syntax is described in
    // https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
    // Set before the first call to Next.
    Filter string

    // The project ID of the listed datasets.
    // Set before the first call to Next.
    ProjectID string
    // contains filtered or unexported fields
}

type DatasetMetadata struct {
    // These fields can be set when creating a dataset.
    Name                   string            // The user-friendly name for this dataset.
    Description            string            // The user-friendly description of this dataset.
    Location               string            // The geo location of the dataset.
    DefaultTableExpiration time.Duration     // The default expiration time for new tables.
    Labels                 map[string]string // User-provided labels.

    // These fields are read-only.
    CreationTime     time.Time
    LastModifiedTime time.Time // When the dataset or any of its tables were modified.
    FullID           string    // The full dataset ID in the form projectID:datasetID.

    // ETag is the ETag obtained when reading metadata. Pass it to Dataset.Update to
    // ensure that the metadata hasn't changed since it was read.
    ETag string
}

type DatasetMetadataToUpdate struct {
    Description optional.String // The user-friendly description of this dataset.
    Name        optional.String // The user-friendly name for this dataset.

    // DefaultTableExpiration is the default expiration time for new tables.
    // If set to time.Duration(0), new tables never expire.
    DefaultTableExpiration optional.Duration
    // contains filtered or unexported fields
}

DatasetMetadataToUpdate is used when updating a dataset's metadata.
Only non-nil fields will be updated.

type ExplainQueryStage struct {
    // Relative amount of the total time the average shard spent on CPU-bound tasks.
    ComputeRatioAvg float64

    // Relative amount of the total time the slowest shard spent on CPU-bound tasks.
    ComputeRatioMax float64

    // Unique ID for stage within plan.
    ID int64

    // Human-readable name for stage.
    Name string

    // Relative amount of the total time the average shard spent reading input.
    ReadRatioAvg float64

    // Relative amount of the total time the slowest shard spent reading input.
    ReadRatioMax float64

    // Number of records read into the stage.
    RecordsRead int64

    // Number of records written by the stage.
    RecordsWritten int64

    // Current status for the stage.
    Status string

    // List of operations within the stage in dependency order (approximately
    // chronological).
    Steps []*ExplainQueryStep

    // Relative amount of the total time the average shard spent waiting to be scheduled.
    WaitRatioAvg float64

    // Relative amount of the total time the slowest shard spent waiting to be scheduled.
    WaitRatioMax float64

    // Relative amount of the total time the average shard spent on writing output.
    WriteRatioAvg float64

    // Relative amount of the total time the slowest shard spent on writing output.
    WriteRatioMax float64
}

type ExtractConfig struct {
    // JobID is the ID to use for the job. If empty, a random job ID will be generated.
    JobID string

    // If AddJobIDSuffix is true, then a random string will be appended to JobID.
    AddJobIDSuffix bool

    // Src is the table from which data will be extracted.
    Src *Table

    // Dst is the destination into which the data will be extracted.
    Dst *GCSReference

    // DisableHeader disables the printing of a header row in exported data.
    DisableHeader bool
}

type ExtractStatistics struct {
    // The number of files per destination URI or URI pattern specified in the
    // extract configuration. These values will be in the same order as the
    // URIs specified in the 'destinationUris' field.
    DestinationURIFileCounts []int64
}

type FieldSchema struct {
    // The field name.
    // Must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_),
    // and must start with a letter or underscore.
    // The maximum length is 128 characters.
    Name string

    // A description of the field. The maximum length is 16,384 characters.
    Description string

    // Whether the field may contain multiple values.
    Repeated bool

    // Whether the field is required. Ignored if Repeated is true.
    Required bool

    // The field data type. If Type is Record, then this field contains a nested schema,
    // which is described by Schema.
    Type FieldType

    // Describes the nested schema if Type is set to Record.
    Schema Schema
}

type FileConfig struct {
    // SourceFormat is the format of the GCS data to be read.
    // Allowed values are: CSV, Avro, JSON, DatastoreBackup. The default is CSV.
    SourceFormat DataFormat

    // FieldDelimiter is the separator for fields in a CSV file, used when
    // reading or exporting data. The default is ",".
    FieldDelimiter string

    // The number of rows at the top of a CSV file that BigQuery will skip when
    // reading data.
    SkipLeadingRows int64

    // AllowJaggedRows causes missing trailing optional columns to be tolerated
    // when reading CSV data. Missing values are treated as nulls.
    AllowJaggedRows bool

    // AllowQuotedNewlines sets whether quoted data sections containing
    // newlines are allowed when reading CSV data.
    AllowQuotedNewlines bool

    // Indicates if we should automatically infer the options and
    // schema for CSV and JSON sources.
    AutoDetect bool

    // Encoding is the character encoding of data to be read.
    Encoding Encoding

    // MaxBadRecords is the maximum number of bad records that will be ignored
    // when reading data.
    MaxBadRecords int64

    // IgnoreUnknownValues causes values not matching the schema to be
    // tolerated. Unknown values are ignored. For CSV this ignores extra values
    // at the end of a line. For JSON this ignores named values that do not
    // match any column name. If this field is not set, records containing
    // unknown values are treated as bad records. The MaxBadRecords field can
    // be used to customize how bad records are handled.
    IgnoreUnknownValues bool

    // Schema describes the data. It is required when reading CSV or JSON data,
    // unless the data is being loaded into a table that already exists.
    Schema Schema

    // Quote is the value used to quote data sections in a CSV file. The
    // default quotation character is the double quote ("), which is used if
    // both Quote and ForceZeroQuote are unset.
    // To specify that no character should be interpreted as a quotation
    // character, set ForceZeroQuote to true.
    // Only used when reading data.
    Quote          string
    ForceZeroQuote bool
}

FileConfig contains configuration options that pertain to files, typically
text files that require interpretation to be used as a BigQuery table. A
file may live in Google Cloud Storage (see GCSReference), or it may be
loaded into a table via Table.LoaderFromReader.

type GCSReference struct {
    FileConfig

    // DestinationFormat is the format to use when writing exported files.
    // Allowed values are: CSV, Avro, JSON. The default is CSV.
    // CSV is not supported for tables with nested or repeated fields.
    DestinationFormat DataFormat

    // Compression specifies the type of compression to apply when writing data
    // to Google Cloud Storage, or using this GCSReference as an ExternalData
    // source with CSV or JSON SourceFormat. Default is None.
    Compression Compression
    // contains filtered or unexported fields
}

GCSReference is a reference to one or more Google Cloud Storage objects, which together constitute
an input or output to a BigQuery operation.

NewGCSReference constructs a reference to one or more Google Cloud Storage objects, which together constitute a data source or destination.
In the simple case, a single URI in the form gs://bucket/object may refer to a single GCS object.
Data may also be split into multiple files if multiple URIs or URIs containing wildcards are provided.
Each URI may contain one '*' wildcard character, which (if present) must come after the bucket name.
For more information about the treatment of wildcards and multiple URIs,
see https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple
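
For instance (URIs are placeholders):

one := bigquery.NewGCSReference("gs://my-bucket/data.csv")
// Multiple objects, selected with a wildcard after the bucket name.
many := bigquery.NewGCSReference("gs://my-bucket/shard-*.csv")
_, _ = one, many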

Cancel requests that a job be cancelled. This method returns without waiting for
cancellation to take effect. To check whether the job has terminated, use Job.Status.
Cancelled jobs may still incur costs.

Wait blocks until the job or the context is done. It returns the final status
of the job.
If an error occurs while retrieving the status, Wait returns that error. However,
Wait returns nil if the status was retrieved successfully, even if
status.Err() != nil, so callers must check both errors. See the example below.
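
A sketch of the two checks, assuming q is a previously configured *bigquery.Query:

job, err := q.Run(ctx)
if err != nil {
    // TODO: handle error starting the job.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: handle error retrieving the job status.
}
if err := status.Err(); err != nil {
    // TODO: handle the job completing unsuccessfully.
}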

type JobIterator struct {
    ProjectID string // Project ID of the jobs to list. Default is the client's project.
    AllUsers  bool   // Whether to list jobs owned by all users in the project, or just the current caller.
    State     State  // List only jobs in the given state. Defaults to all states.
    // contains filtered or unexported fields
}

type JobStatus struct {
    State State

    // All errors encountered during the running of the job.
    // Not all Errors are fatal, so errors here do not necessarily mean that the job has completed or was unsuccessful.
    Errors []*Error

    // Statistics about the job.
    Statistics *JobStatistics
    // contains filtered or unexported fields
}

JobStatus contains the current State of a job, and errors encountered while processing that job.

type LoadConfig struct {
    // JobID is the ID to use for the job. If empty, a random job ID will be generated.
    JobID string

    // If AddJobIDSuffix is true, then a random string will be appended to JobID.
    AddJobIDSuffix bool

    // Src is the source from which data will be loaded.
    Src LoadSource

    // Dst is the table into which the data will be loaded.
    Dst *Table

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteAppend.
    WriteDisposition TableWriteDisposition
}

type LoadStatistics struct {
    // The number of bytes of source data in a load job.
    InputFileBytes int64

    // The number of source files in a load job.
    InputFiles int64

    // Size of the loaded data in bytes. Note that while a load job is in the
    // running state, this value may change.
    OutputBytes int64

    // The number of rows imported in a load job. Note that while an import job is
    // in the running state, this value may change.
    OutputRows int64
}

type QueryConfig struct {
    // JobID is the ID to use for the job. If empty, a random job ID will be generated.
    JobID string

    // If AddJobIDSuffix is true, then a random string will be appended to JobID.
    AddJobIDSuffix bool

    // Dst is the table into which the results of the query will be written.
    // If this field is nil, a temporary table will be created.
    Dst *Table

    // The query to execute. See https://cloud.google.com/bigquery/query-reference for details.
    Q string

    // DefaultProjectID and DefaultDatasetID specify the dataset to use for unqualified table names in the query.
    // If DefaultProjectID is set, DefaultDatasetID must also be set.
    DefaultProjectID string
    DefaultDatasetID string

    // TableDefinitions describes data sources outside of BigQuery.
    // The map keys may be used as table names in the query string.
    TableDefinitions map[string]ExternalData

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteEmpty.
    WriteDisposition TableWriteDisposition

    // DisableQueryCache prevents results being fetched from the query cache.
    // If this field is false, results are fetched from the cache if they are available.
    // The query cache is a best-effort cache that is flushed whenever tables in the query are modified.
    // Cached results are only available when TableID is unspecified in the query's destination Table.
    // For more information, see https://cloud.google.com/bigquery/querying-data#querycaching
    DisableQueryCache bool

    // DisableFlattenedResults prevents results being flattened.
    // If this field is false, results from nested and repeated fields are flattened.
    // DisableFlattenedResults implies AllowLargeResults.
    // For more information, see https://cloud.google.com/bigquery/docs/data#nested
    DisableFlattenedResults bool

    // AllowLargeResults allows the query to produce arbitrarily large result tables.
    // The destination must be a table.
    // When using this option, queries will take longer to execute, even if the result set is small.
    // For additional limitations, see https://cloud.google.com/bigquery/querying-data#largequeryresults
    AllowLargeResults bool

    // Priority specifies the priority with which to schedule the query.
    // The default priority is InteractivePriority.
    // For more information, see https://cloud.google.com/bigquery/querying-data#batchqueries
    Priority QueryPriority

    // MaxBillingTier sets the maximum billing tier for a Query.
    // Queries that have resource usage beyond this tier will fail (without
    // incurring a charge). If this field is zero, the project default will be used.
    MaxBillingTier int

    // MaxBytesBilled limits the number of bytes billed for this job.
    // Queries that would exceed this limit will fail (without incurring a charge).
    // If this field is less than 1, the project default will be used.
    MaxBytesBilled int64

    // UseStandardSQL causes the query to use standard SQL.
    // The default is false (using legacy SQL).
    UseStandardSQL bool

    // UseLegacySQL causes the query to use legacy SQL.
    UseLegacySQL bool

    // Parameters is a list of query parameters. The presence of parameters
    // implies the use of standard SQL.
    // If the query uses positional syntax ("?"), then no parameter may have a name.
    // If the query uses named syntax ("@p"), then all parameters must have names.
    // It is illegal to mix positional and named syntax.
    Parameters []QueryParameter
}

type QueryParameter struct {
    // Name is used for named parameter mode.
    // It must match the name in the query case-insensitively.
    Name string

    // Value is the value of the parameter.
    // The following Go types are supported, with their corresponding
    // BigQuery types:
    //   int, int8, int16, int32, int64, uint8, uint16, uint32: INT64
    //     Note that uint, uint64 and uintptr are not supported, because
    //     they may contain values that cannot fit into a 64-bit signed integer.
    //   float32, float64: FLOAT64
    //   bool: BOOL
    //   string: STRING
    //   []byte: BYTES
    //   time.Time: TIMESTAMP
    // Arrays and slices of the above.
    // Structs of the above. Only the exported fields are used.
    Value interface{}
}

type QueryStatistics struct {
    // Billing tier for the job.
    BillingTier int64

    // Whether the query result was fetched from the query cache.
    CacheHit bool

    // The type of query statement, if valid.
    StatementType string

    // Total bytes billed for the job.
    TotalBytesBilled int64

    // Total bytes processed for the job.
    TotalBytesProcessed int64

    // Describes execution plan for the query.
    QueryPlan []*ExplainQueryStage

    // The number of rows affected by a DML statement. Present only for DML
    // statements INSERT, UPDATE or DELETE.
    NumDMLAffectedRows int64

    // ReferencedTables: [Output-only, Experimental] Referenced tables for
    // the job. Queries that reference more than 50 tables will not have a
    // complete list.
    ReferencedTables []*Table

    // The schema of the results. Present only for successful dry run of
    // non-legacy SQL queries.
    Schema Schema

    // Standard SQL: list of undeclared query parameter names detected during a
    // dry run validation.
    UndeclaredQueryParameterNames []string
}

type RowInsertionError struct {
    InsertID string // The InsertID associated with the affected row.
    RowIndex int    // The 0-based index of the affected row in the batch of rows being inserted.
    Errors   MultiError
}

RowInsertionError contains all errors that occurred when attempting to insert a row.
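
If Uploader.Put reports per-row failures as a PutMultiError (a slice of
RowInsertionError, as in recent versions of this package), the individual errors can
be examined like this (a sketch; u and items come from the earlier upload example):

if err := u.Put(ctx, items); err != nil {
    if pme, ok := err.(bigquery.PutMultiError); ok {
        for _, rie := range pme {
            fmt.Printf("row %d (insert ID %q): %v\n", rie.RowIndex, rie.InsertID, rie.Errors)
        }
    } else {
        // TODO: handle other errors.
    }
}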

Next loads the next row into dst. Its return value is iterator.Done if there
are no more results. Once Next returns iterator.Done, all subsequent calls
will return iterator.Done.

dst may implement ValueLoader, or may be a *[]Value, *map[string]Value, or struct pointer.

If dst is a *[]Value, it will be set to a new []Value whose i'th element
will be populated with the i'th column of the row.

If dst is a *map[string]Value, a new map will be created if dst is nil. Then
for each schema column name, the map key of that name will be set to the column's
value. STRUCT types (RECORD types or nested schemas) become nested maps.

If dst is pointer to a struct, each column in the schema will be matched
with an exported field of the struct that has the same name, ignoring case.
Unmatched schema columns and struct fields will be ignored.

Each BigQuery column type corresponds to one or more Go types; a matching struct
field must be of the correct type. The correspondences mirror those listed under
QueryParameter.Value: STRING with string, BOOL with bool, INT64 with int, int8,
int16, int32, int64, uint8, uint16 and uint32, FLOAT64 with float32 and float64,
BYTES with []byte, and TIMESTAMP with time.Time.

A repeated field corresponds to a slice or array of the element type. A STRUCT
type (RECORD or nested schema) corresponds to a nested struct or struct pointer.
All calls to Next on the same iterator must use the same struct type.

It is an error to attempt to read a BigQuery NULL value into a struct field.
If your table contains NULLs, use a *[]Value or *map[string]Value.
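
A typical read loop (a sketch; assumes the google.golang.org/api/iterator package is
imported, and reuses the ctx and client from earlier):

it, err := client.Query("SELECT name, num FROM my_dataset.my_table").Read(ctx)
if err != nil {
    // TODO: handle error.
}
for {
    var row []bigquery.Value
    err := it.Next(&row)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: handle error.
    }
    fmt.Println(row)
}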

InferSchema tries to derive a BigQuery schema from the supplied struct value.
NOTE: All fields in the returned Schema are configured to be required,
unless the corresponding field in the supplied struct is a slice or array.

InferSchema returns an error if the struct (including nested structs) contains
any exported fields that are pointers or one of the following types:
uint, uint64, uintptr, map, interface, complex64, complex128, func, chan.
Future versions may handle these cases without error.
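
For example (Product is a hypothetical struct type):

type Product struct {
    Name string
    Size float64
    Tags []string // slice and array fields are inferred as repeated, not required
}

schema, err := bigquery.InferSchema(Product{})
if err != nil {
    // TODO: handle error (e.g. an unsupported field type).
}
for _, fs := range schema {
    fmt.Println(fs.Name, fs.Type, fs.Repeated, fs.Required)
}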

type StreamingBuffer struct {
    // A lower-bound estimate of the number of bytes currently in the streaming
    // buffer.
    EstimatedBytes uint64

    // A lower-bound estimate of the number of rows currently in the streaming
    // buffer.
    EstimatedRows uint64

    // The time of the oldest entry in the streaming buffer.
    OldestEntryTime time.Time
}

type StructSaver struct {
    // Schema determines what fields of the struct are uploaded. It should
    // match the table's schema.
    Schema Schema

    // If non-empty, BigQuery will use InsertID to de-duplicate insertions
    // of this row on a best-effort basis.
    InsertID string

    // Struct should be a struct or a pointer to a struct.
    Struct interface{}
}

StructSaver implements ValueSaver for a struct.
The struct is converted to a map of values by using the values of struct
fields corresponding to schema fields. Additional and missing
fields are ignored, as are nested struct pointers that are nil.
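
A sketch of its use, building on the earlier Item type and Uploader u:

schema, err := bigquery.InferSchema(Item{})
if err != nil {
    // TODO: handle error.
}
saver := &bigquery.StructSaver{
    Schema:   schema,
    InsertID: "item-n3", // hypothetical de-duplication ID
    Struct:   &Item{Name: "n3", Size: 7, Count: 1},
}
if err := u.Put(ctx, saver); err != nil {
    // TODO: handle error.
}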

type Table struct {
    // ProjectID, DatasetID and TableID may be omitted if the Table is the destination for a query.
    // In this case the result will be stored in an ephemeral table.
    ProjectID string
    DatasetID string

    // TableID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_).
    // The maximum length is 1,024 characters.
    TableID string
    // contains filtered or unexported fields
}

CopierFrom returns a Copier which can be used to copy data into a
BigQuery table from one or more BigQuery tables.
The returned Copier may optionally be further configured before its Run method is called.
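
For example (table names are placeholders):

dst := client.Dataset("my_dataset").Table("combined")
src1 := client.Dataset("my_dataset").Table("part1")
src2 := client.Dataset("my_dataset").Table("part2")
copier := dst.CopierFrom(src1, src2)
copier.WriteDisposition = bigquery.WriteTruncate // optional configuration via CopyConfig
job, err := copier.Run(ctx)
if err != nil {
    // TODO: handle error.
}
_ = job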

ExtractorTo returns an Extractor which can be used to extract data from a
BigQuery table into Google Cloud Storage.
The returned Extractor may optionally be further configured before its Run method is called.

LoaderFrom returns a Loader which can be used to load data into a BigQuery table.
The returned Loader may optionally be further configured before its Run method is called.
See GCSReference and ReaderSource for additional configuration options that
affect loading.
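
A sketch of loading from Google Cloud Storage (URIs and table names are placeholders):

gcsRef := bigquery.NewGCSReference("gs://my-bucket/data.csv")
gcsRef.SkipLeadingRows = 1 // FileConfig options apply to the source files
loader := client.Dataset("my_dataset").Table("my_table").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever // optional configuration via LoadConfig
job, err := loader.Run(ctx)
if err != nil {
    // TODO: handle error.
}
_ = job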

This example illustrates how to perform a read-modify-write sequence on table
metadata. Passing the metadata's ETag to the Update call ensures that the call
will fail if the metadata was changed since the read.
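
A sketch of that sequence:

t := client.Dataset("my_dataset").Table("my_table")
md, err := t.Metadata(ctx)
if err != nil {
    // TODO: handle error.
}
// Passing md.ETag makes the update fail if the metadata changed after the read.
md2, err := t.Update(ctx,
    bigquery.TableMetadataToUpdate{Description: "my favorite table"},
    md.ETag)
if err != nil {
    // TODO: handle error.
}
_ = md2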

const (
    // CreateIfNeeded will create the table if it does not already exist.
    // Tables are created atomically on successful completion of a job.
    CreateIfNeeded TableCreateDisposition = "CREATE_IF_NEEDED"

    // CreateNever ensures the table must already exist and will not be
    // automatically created.
    CreateNever TableCreateDisposition = "CREATE_NEVER"
)

type TableMetadata struct {
    Description string // The user-friendly description of this table.
    Name        string // The user-friendly name for this table.
    Schema      Schema
    View        string
    ID          string // An opaque ID uniquely identifying the table.
    Type        TableType

    // The time when this table expires. If not set, the table will persist
    // indefinitely. Expired tables will be deleted and their storage reclaimed.
    ExpirationTime time.Time

    CreationTime     time.Time
    LastModifiedTime time.Time

    // The size of the table in bytes.
    // This does not include data that is being buffered during a streaming insert.
    NumBytes int64

    // The number of rows of data in this table.
    // This does not include data that is being buffered during a streaming insert.
    NumRows uint64

    // The time-based partitioning settings for this table.
    TimePartitioning *TimePartitioning

    // Contains information regarding this table's streaming buffer, if one is
    // present. This field will be nil if the table is not being streamed to or if
    // there is no data in the streaming buffer.
    StreamingBuffer *StreamingBuffer

    // ETag is the ETag obtained when reading metadata. Pass it to Table.Update to
    // ensure that the metadata hasn't changed since it was read.
    ETag string
}

type TableMetadataToUpdate struct {
    // Description is the user-friendly description of this table.
    Description optional.String

    // Name is the user-friendly name for this table.
    Name optional.String

    // Schema is the table's schema.
    // When updating a schema, you can add columns but not remove them.
    Schema Schema

    // ExpirationTime is the time when this table expires.
    ExpirationTime time.Time
}

TableMetadataToUpdate is used when updating a table's metadata.
Only non-nil fields will be updated.

type Uploader struct {
    // SkipInvalidRows causes rows containing invalid data to be silently
    // ignored. The default value is false, which causes the entire request to
    // fail if there is an attempt to insert an invalid row.
    SkipInvalidRows bool

    // IgnoreUnknownValues causes values not matching the schema to be ignored.
    // The default value is false, which causes records containing such values
    // to be treated as invalid records.
    IgnoreUnknownValues bool

    // A TableTemplateSuffix allows Uploaders to create tables automatically.
    //
    // Experimental: this option is experimental and may be modified or removed in future versions,
    // regardless of any other documented package stability guarantees.
    //
    // When you specify a suffix, the table you upload data to
    // will be used as a template for creating a new table, with the same schema,
    // called <table> + <suffix>.
    //
    // More information is available at
    // https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables
    TableTemplateSuffix string
    // contains filtered or unexported fields
}

An Uploader does streaming inserts into a BigQuery table.
It is safe for concurrent use.
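
A sketch of configuring one (rows stands for any value accepted by Put):

u := client.Dataset("my_dataset").Table("events").Uploader()
u.SkipInvalidRows = true     // drop invalid rows instead of failing the whole request
u.IgnoreUnknownValues = true // ignore values that don't match the schema
// rows may be a ValueSaver, a struct, a struct pointer, or a slice of any of these.
if err := u.Put(ctx, rows); err != nil {
    // TODO: handle error.
}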

type ValueSaver interface {
    // Save returns a row to be inserted into a BigQuery table, represented
    // as a map from field name to Value.
    // If insertID is non-empty, BigQuery will use it to de-duplicate
    // insertions of this row on a best-effort basis.
    Save() (row map[string]Value, insertID string, err error)
}
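
A minimal sketch of a type that implements ValueSaver (scoreRow is hypothetical):

type scoreRow struct {
    Name  string
    Score int
}

// Save implements bigquery.ValueSaver.
func (r *scoreRow) Save() (map[string]bigquery.Value, string, error) {
    row := map[string]bigquery.Value{
        "name":  r.Name,
        "score": r.Score,
    }
    // Use the name as a best-effort de-duplication key.
    return row, r.Name, nil
}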