<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#batchCreate">batchCreate(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates multiple new sessions.</p>
<p class="toc_element">
<code><a href="#beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
<p class="toc_element">
<code><a href="#commit">commit(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
<p class="toc_element">
<code><a href="#create">create(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session. A session can be used to perform</p>
<p class="toc_element">
<code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it. This will</p>
<p class="toc_element">
<code><a href="#executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes a batch of SQL DML statements. This method allows many statements</p>
<p class="toc_element">
<code><a href="#executeSql">executeSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL statement, returning all results in a single reply. This</p>
<p class="toc_element">
<code><a href="#executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result</p>
<p class="toc_element">
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
<code><a href="#list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all sessions in a given database.</p>
<p class="toc_element">
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
<code><a href="#partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a query</p>
<p class="toc_element">
<code><a href="#partitionRead">partitionRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a read</p>
<p class="toc_element">
<code><a href="#read">read(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
<p class="toc_element">
<code><a href="#rollback">rollback(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
<p class="toc_element">
<code><a href="#streamingRead">streamingRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="batchCreate">batchCreate(database, body=None, x__xgafv=None)</code>
<pre>Creates multiple new sessions.
This API can be used to initialize a session cache on the clients.
See https://goo.gl/TgSFN2 for best practices on session cache management.
Args:
database: string, Required. The database in which the new sessions are created. (required)
body: object, The request body.
The object takes the form of:
{ # The request for BatchCreateSessions.
&quot;sessionTemplate&quot;: { # A session in the Cloud Spanner API. # Parameters to be applied to each created session.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
},
&quot;sessionCount&quot;: 42, # Required. The number of sessions to be created in this batch call.
# The API may return fewer than the requested number of sessions. If a
# specific number of sessions are desired, the client can make additional
# calls to BatchCreateSessions (adjusting
# session_count as necessary).
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for BatchCreateSessions.
&quot;session&quot;: [ # The freshly created sessions.
{ # A session in the Cloud Spanner API.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
},
],
}</pre>
</div>
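<p>For illustration only, a minimal sketch of calling this method through the generated Python client. The project, instance, and database names are placeholders; credentials are assumed to come from Application Default Credentials.</p>
<pre>from googleapiclient.discovery import build

# Build the discovery-based Spanner client. All resource names below
# are placeholders, not real resources.
service = build(&#x27;spanner&#x27;, &#x27;v1&#x27;)
database = &#x27;projects/my-project/instances/my-instance/databases/my-db&#x27;

response = service.projects().instances().databases().sessions().batchCreate(
    database=database,
    body={&#x27;sessionCount&#x27;: 10},  # The API may return fewer than requested.
).execute()
sessions = response.get(&#x27;session&#x27;, [])</pre>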
<div class="method">
<code class="details" id="beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</code>
<pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.
Args:
session: string, Required. The session in which the transaction runs. (required)
body: object, The request body.
The object takes the form of:
{ # The request for BeginTransaction.
&quot;options&quot;: { # # Transactions # Required. Options for the new transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A transaction.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
}</pre>
</div>
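<p>A minimal sketch, assuming the same placeholder resource names as above, of beginning a strong read-only transaction and capturing the transaction id for later Read, ExecuteSql, or Commit calls:</p>
<pre>from googleapiclient.discovery import build

service = build(&#x27;spanner&#x27;, &#x27;v1&#x27;)
session = (&#x27;projects/my-project/instances/my-instance&#x27;
           &#x27;/databases/my-db/sessions/my-session&#x27;)  # Placeholder name.

transaction = service.projects().instances().databases().sessions().beginTransaction(
    session=session,
    body={&#x27;options&#x27;: {&#x27;readOnly&#x27;: {&#x27;strong&#x27;: True,
                                   &#x27;returnReadTimestamp&#x27;: True}}},
).execute()
transaction_id = transaction[&#x27;id&#x27;]              # Use in subsequent calls.
read_timestamp = transaction.get(&#x27;readTimestamp&#x27;)  # Present because requested.</pre>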
<div class="method">
<code class="details" id="commit">commit(session, body=None, x__xgafv=None)</code>
<pre>Commits a transaction. The request includes the mutations to be
applied to rows in the database.
`Commit` might return an `ABORTED` error. This can occur at any time;
commonly, the cause is conflicts with concurrent
transactions. However, it can also happen for a variety of other
reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
the transaction from the beginning, re-using the same session.
On very rare occasions, `Commit` might return `UNKNOWN`. This can happen,
for example, if the client job experiences a 1+ hour networking failure.
At that point, Cloud Spanner has lost track of the transaction outcome and
we recommend that you perform another read from the database to see the
state of things as they are now.
Args:
session: string, Required. The session in which the transaction to be committed is running. (required)
body: object, The request body.
The object takes the form of:
{ # The request for Commit.
&quot;singleUseTransaction&quot;: { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
# commit of a previously-started transaction, commit with a
# temporary transaction is non-idempotent. That is, if the
# `CommitRequest` is sent to Cloud Spanner more than once (for
# instance, due to retries in the application, or in the
# transport library), it is possible that the mutations are
# executed more than once. If this is undesirable, use
# BeginTransaction and
# Commit instead.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;transactionId&quot;: &quot;A String&quot;, # Commit a previously-started transaction.
&quot;mutations&quot;: [ # The mutations to be executed when this transaction commits. All
# mutations are applied atomically, in the order they appear in
# this list.
{ # A modification to one or more Cloud Spanner rows. Mutations can be
# applied to a Cloud Spanner database by sending them in a
# Commit call.
&quot;delete&quot;: { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
# rows were present.
&quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete. The
# primary keys must be specified in the order in which they appear in the
# `PRIMARY KEY()` clause of the table&#x27;s equivalent DDL statement (the DDL
# statement used to create the table).
# Delete is idempotent. The transaction will succeed even if some or all
# rows do not exist.
# the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
#
# If the same key is specified multiple times in the set (for example
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
# behaves as if the key were only specified once.
&quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
# key range specifications.
{ # KeyRange represents a range of rows in a table or index.
#
# A range has a start key and an end key. These keys can be open or
# closed, indicating if the range includes rows with that key.
#
# Keys are represented by lists, where the ith value in the list
# corresponds to the ith component of the table or index primary key.
# Individual values are encoded as described
# here.
#
# For example, consider the following table definition:
#
# CREATE TABLE UserEvents (
# UserName STRING(MAX),
# EventDate STRING(10)
# ) PRIMARY KEY(UserName, EventDate);
#
# The following keys name rows in this table:
#
# &quot;Bob&quot;, &quot;2014-09-23&quot;
#
# Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
# columns, each `UserEvents` key has two elements; the first is the
# `UserName`, and the second is the `EventDate`.
#
# Key ranges with multiple components are interpreted
# lexicographically by component using the table or index key&#x27;s declared
# sort order. For example, the following range returns all events for
# user `&quot;Bob&quot;` that occurred in the year 2015:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
#
# Start and end keys can omit trailing key components. This affects the
# inclusion and exclusion of rows that exactly match the provided key
# components: if the key is closed, then rows that exactly match the
# provided components are included; if the key is open, then rows
# that exactly match are not included.
#
# For example, the following range includes all events for `&quot;Bob&quot;` that
# occurred during and after the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# The next example retrieves all events for `&quot;Bob&quot;`:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# To retrieve events before the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
#
# The following range includes all rows in the table:
#
# &quot;start_closed&quot;: []
# &quot;end_closed&quot;: []
#
# This range returns all users whose `UserName` begins with any
# character from A to C:
#
# &quot;start_closed&quot;: [&quot;A&quot;]
# &quot;end_open&quot;: [&quot;D&quot;]
#
# This range returns all users whose `UserName` begins with B:
#
# &quot;start_closed&quot;: [&quot;B&quot;]
# &quot;end_open&quot;: [&quot;C&quot;]
#
# Key ranges honor column sort order. For example, suppose a table is
# defined as follows:
#
# CREATE TABLE DescendingSortedTable (
# Key INT64,
# ...
# ) PRIMARY KEY(Key DESC);
#
# The following range retrieves all rows with key values between 1
# and 100 inclusive:
#
# &quot;start_closed&quot;: [&quot;100&quot;]
# &quot;end_closed&quot;: [&quot;1&quot;]
#
# Note that 100 is passed as the start, and 1 is passed as the end,
# because `Key` is a descending column in the schema.
&quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
# first `len(end_closed)` key columns exactly match `end_closed`.
&quot;&quot;,
],
&quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
# first `len(start_closed)` key columns exactly match `start_closed`.
&quot;&quot;,
],
&quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
# `len(start_open)` key columns exactly match `start_open`.
&quot;&quot;,
],
&quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
# `len(end_open)` key columns exactly match `end_open`.
&quot;&quot;,
],
},
],
&quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
# many elements as there are columns in the primary or index key
# with which this `KeySet` is used. Individual key values are
# encoded as described here.
[
&quot;&quot;,
],
],
&quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
# `KeySet` matches all keys in the table or index. Note that any keys
# specified in `keys` or `ranges` are only yielded once.
},
&quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be deleted.
},
&quot;replace&quot;: { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
# deleted, and the column values provided are inserted
# instead. Unlike insert_or_update, this means any values not
# explicitly written become `NULL`.
#
# In an interleaved table, if you create the child table with the
# `ON DELETE CASCADE` annotation, then replacing a parent row
# also deletes the child rows. Otherwise, you must delete the
# child rows before you replace the parent row.
# replace operations.
&quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
&quot;columns&quot;: [ # The names of the columns in table to be written.
#
# The list of columns must contain enough columns to allow
# Cloud Spanner to derive values for all primary key columns in the
# row(s) to be modified.
&quot;A String&quot;,
],
&quot;values&quot;: [ # The values to be written. `values` can contain more than one
# list of values. If it does, then multiple rows are written, one
# for each entry in `values`. Each list in `values` must have
# exactly as many entries as there are entries in columns
# above. Sending multiple lists is equivalent to sending multiple
# `Mutation`s, each containing one `values` entry and repeating
# table and columns. Individual values in each list are
# encoded as described here.
[
&quot;&quot;,
],
],
},
&quot;insert&quot;: { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
# the write or transaction fails with error `ALREADY_EXISTS`.
# replace operations.
&quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
&quot;columns&quot;: [ # The names of the columns in table to be written.
#
# The list of columns must contain enough columns to allow
# Cloud Spanner to derive values for all primary key columns in the
# row(s) to be modified.
&quot;A String&quot;,
],
&quot;values&quot;: [ # The values to be written. `values` can contain more than one
# list of values. If it does, then multiple rows are written, one
# for each entry in `values`. Each list in `values` must have
# exactly as many entries as there are entries in columns
# above. Sending multiple lists is equivalent to sending multiple
# `Mutation`s, each containing one `values` entry and repeating
# table and columns. Individual values in each list are
# encoded as described here.
[
&quot;&quot;,
],
],
},
&quot;update&quot;: { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
# already exist, the transaction fails with error `NOT_FOUND`.
# replace operations.
&quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
&quot;columns&quot;: [ # The names of the columns in table to be written.
#
# The list of columns must contain enough columns to allow
# Cloud Spanner to derive values for all primary key columns in the
# row(s) to be modified.
&quot;A String&quot;,
],
&quot;values&quot;: [ # The values to be written. `values` can contain more than one
# list of values. If it does, then multiple rows are written, one
# for each entry in `values`. Each list in `values` must have
# exactly as many entries as there are entries in columns
# above. Sending multiple lists is equivalent to sending multiple
# `Mutation`s, each containing one `values` entry and repeating
# table and columns. Individual values in each list are
# encoded as described here.
[
&quot;&quot;,
],
],
},
&quot;insertOrUpdate&quot;: { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
# its column values are overwritten with the ones provided. Any
# column values not explicitly written are preserved.
#
# When using insert_or_update, just as when using insert, all `NOT
# NULL` columns in the table must be given a value. This holds true
# even when the row already exists and will therefore actually be updated.
# replace operations.
&quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
&quot;columns&quot;: [ # The names of the columns in table to be written.
#
# The list of columns must contain enough columns to allow
# Cloud Spanner to derive values for all primary key columns in the
# row(s) to be modified.
&quot;A String&quot;,
],
&quot;values&quot;: [ # The values to be written. `values` can contain more than one
# list of values. If it does, then multiple rows are written, one
# for each entry in `values`. Each list in `values` must have
# exactly as many entries as there are entries in columns
# above. Sending multiple lists is equivalent to sending multiple
# `Mutation`s, each containing one `values` entry and repeating
# table and columns. Individual values in each list are
# encoded as described here.
[
&quot;&quot;,
],
],
},
},
],
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for Commit.
&quot;commitTimestamp&quot;: &quot;A String&quot;, # The Cloud Spanner timestamp at which the transaction committed.
}</pre>
</div>
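<p>A hedged sketch of committing a single-use read-write transaction with one insert mutation. The Singers table, its columns, and all resource names are hypothetical, chosen only to illustrate the request shape:</p>
<pre>from googleapiclient.discovery import build

service = build(&#x27;spanner&#x27;, &#x27;v1&#x27;)
session = (&#x27;projects/my-project/instances/my-instance&#x27;
           &#x27;/databases/my-db/sessions/my-session&#x27;)  # Placeholder name.

response = service.projects().instances().databases().sessions().commit(
    session=session,
    body={
        # Non-idempotent if retried; prefer BeginTransaction + transactionId
        # when retries in the application or transport are possible.
        &#x27;singleUseTransaction&#x27;: {&#x27;readWrite&#x27;: {}},
        &#x27;mutations&#x27;: [{
            &#x27;insert&#x27;: {
                &#x27;table&#x27;: &#x27;Singers&#x27;,  # Hypothetical table.
                &#x27;columns&#x27;: [&#x27;SingerId&#x27;, &#x27;FirstName&#x27;],
                &#x27;values&#x27;: [[&#x27;1&#x27;, &#x27;Marc&#x27;]],  # INT64 values are encoded as strings.
            },
        }],
    },
).execute()
print(response[&#x27;commitTimestamp&#x27;])</pre>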
<div class="method">
<code class="details" id="create">create(database, body=None, x__xgafv=None)</code>
<pre>Creates a new session. A session can be used to perform
transactions that read and/or modify data in a Cloud Spanner database.
Sessions are meant to be reused for many consecutive
transactions.
Sessions can only execute one transaction at a time. To execute
multiple concurrent read-write/write-only transactions, create
multiple sessions. Note that standalone reads and queries use a
transaction internally, and count toward the one transaction
limit.
Active sessions use additional server resources, so it is a good idea to
delete idle and unneeded sessions.
Aside from explicit deletes, Cloud Spanner may delete sessions for which no
operations are sent for more than an hour. If a session is deleted,
requests to it return `NOT_FOUND`.
Idle sessions can be kept alive by sending a trivial SQL query
periodically, e.g., `&quot;SELECT 1&quot;`.
Args:
database: string, Required. The database in which the new session is created. (required)
body: object, The request body.
The object takes the form of:
{ # The request for CreateSession.
&quot;session&quot;: { # A session in the Cloud Spanner API. # Required. The session to create.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A session in the Cloud Spanner API.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
}</pre>
</div>
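<p>A brief sketch, again with placeholder names, of creating a single labeled session:</p>
<pre>from googleapiclient.discovery import build

service = build(&#x27;spanner&#x27;, &#x27;v1&#x27;)
database = &#x27;projects/my-project/instances/my-instance/databases/my-db&#x27;

session = service.projects().instances().databases().sessions().create(
    database=database,
    body={&#x27;session&#x27;: {&#x27;labels&#x27;: {&#x27;env&#x27;: &#x27;dev&#x27;}}},
).execute()
session_name = session[&#x27;name&#x27;]  # System-assigned; pass to later calls.</pre>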
<div class="method">
<code class="details" id="delete">delete(name, x__xgafv=None)</code>
<pre>Ends a session, releasing server resources associated with it. This will
asynchronously trigger cancellation of any operations that are running with
this session.
Args:
name: string, Required. The name of the session to delete. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A generic empty message that you can re-use to avoid defining duplicated
# empty messages in your APIs. A typical example is to use it as the request
# or the response type of an API method. For instance:
#
# service Foo {
# rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
# }
#
# The JSON representation for `Empty` is an empty JSON object `{}`.
}</pre>
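<p>For instance, a minimal sketch of deleting a session (the session name below is an illustrative placeholder):</p>
<pre>
from googleapiclient.discovery import build

service = build('spanner', 'v1')

# Ends the session and releases its server resources; the response is
# an empty object.
service.projects().instances().databases().sessions().delete(
    name=('projects/my-project/instances/my-instance'
          '/databases/my-db/sessions/my-session')).execute()
</pre>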
</div>
<div class="method">
<code class="details" id="executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</code>
<pre>Executes a batch of SQL DML statements. This method allows many statements
to be run with lower latency than submitting them sequentially with
ExecuteSql.
Statements are executed in sequential order. A request can succeed even if
a statement fails. The ExecuteBatchDmlResponse.status field in the
response provides information about the statement that failed. Clients must
inspect this field to determine whether an error occurred.
Execution stops after the first failed statement; the remaining statements
are not executed.
Args:
session: string, Required. The session in which the DML statements should be performed. (required)
body: object, The request body.
The object takes the form of:
{ # The request for ExecuteBatchDml.
&quot;statements&quot;: [ # Required. The list of statements to execute in this batch. Statements are executed
# serially, such that the effects of statement `i` are visible to statement
# `i+1`. Each statement must be a DML statement. Execution stops at the
# first failed statement; the remaining statements are not executed.
#
# Callers must provide at least one statement.
{ # A single DML statement.
&quot;sql&quot;: &quot;A String&quot;, # Required. The DML string.
&quot;params&quot;: { # Parameter names and values that bind to placeholders in the DML string.
#
# A parameter placeholder consists of the `@` character followed by the
# parameter name (for example, `@firstName`). Parameter names can contain
# letters, numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The
# same parameter name can be used more than once, for example:
#
# `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
#
# It is an error to execute a SQL statement with unbound parameters.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
&quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
&quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
&quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
&quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
# provides type information for the struct&#x27;s fields.
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
},
],
&quot;seqno&quot;: &quot;A String&quot;, # Required. A per-transaction sequence number used to identify this request. This field
# makes each request idempotent such that if the request is received multiple
# times, at most one will succeed.
#
# The sequence number must be monotonically increasing within the
# transaction. If a request arrives for the first time with an out-of-order
# sequence number, the transaction may be aborted. Replays of previously
# handled requests will yield the same response as the first execution.
&quot;transaction&quot;: { # This message is used to select the transaction in which a # Required. The transaction to use. Must be a read-write transaction.
#
# To protect against replays, single-use transactions are not supported. The
# caller must either supply an existing transaction ID or begin a new
# transaction.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for ExecuteBatchDml. Contains a list
# of ResultSet messages, one for each DML statement that has successfully
# executed, in the same order as the statements in the request. If a statement
# fails, the status in the response body identifies the cause of the failure.
#
# To check for DML statements that failed, use the following approach:
#
# 1. Check the status in the response message. The google.rpc.Code enum
# value `OK` indicates that all statements were executed successfully.
# 2. If the status was not `OK`, check the number of result sets in the
# response. If the response contains `N` ResultSet messages, then
# statement `N+1` in the request failed.
#
# Example 1:
#
# * Request: 5 DML statements, all executed successfully.
# * Response: 5 ResultSet messages, with the status `OK`.
#
# Example 2:
#
# * Request: 5 DML statements. The third statement has a syntax error.
# * Response: 2 ResultSet messages, and a syntax error (`INVALID_ARGUMENT`)
# status. The number of ResultSet messages indicates that the third
# statement failed, and the fourth and fifth statements were not executed.
&quot;resultSets&quot;: [ # One ResultSet for each statement in the request that ran successfully,
# in the same order as the statements in the request. Each ResultSet does
# not contain any rows. The ResultSetStats in each ResultSet contain
# the number of rows modified by the statement.
#
# Only the first ResultSet in the response contains valid
# ResultSetMetadata.
{ # Results from Read or
# ExecuteSql.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed with ExecuteSqlRequest.query_mode set to
# ExecuteSqlRequest.QueryMode.PLAN.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
&quot;&quot;,
],
],
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
],
&quot;status&quot;: { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, the status is `OK`.
# Otherwise, the error status of the first failed statement.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
# three pieces of data: error code, error message, and error details.
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
# message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
},
}</pre>
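<p>For instance, a minimal sketch of running a two-statement batch in a new read-write transaction and checking for a failed statement, using the google-api-python-client (the session name, table, and values are illustrative placeholders; the transaction begun here would still need to be committed via commit):</p>
<pre>
from googleapiclient.discovery import build

service = build('spanner', 'v1')
session_name = ('projects/my-project/instances/my-instance'
                '/databases/my-db/sessions/my-session')

sessions = service.projects().instances().databases().sessions()
response = sessions.executeBatchDml(
    session=session_name,
    body={
        # Begin a new read-write transaction; single-use transactions
        # are not supported for batch DML.
        'transaction': {'begin': {'readWrite': {}}},
        'seqno': '1',
        'statements': [
            {'sql': 'UPDATE Users SET UserName = @name WHERE UserId = @id',
             'params': {'name': 'alice', 'id': '1'},
             'paramTypes': {'id': {'code': 'INT64'}}},
            {'sql': 'DELETE FROM Users WHERE UserId = @id',
             'params': {'id': '2'},
             'paramTypes': {'id': {'code': 'INT64'}}},
        ],
    },
).execute()

# Per the response description above: if the status is not OK, the
# number of ResultSet messages identifies the statement that failed.
status = response.get('status', {})
if status.get('code', 0) != 0:
    failed = len(response.get('resultSets', []))  # 0-based index
    print('statement %d failed: %s' % (failed, status))
</pre>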
</div>
<div class="method">
<code class="details" id="executeSql">executeSql(session, body=None, x__xgafv=None)</code>
<pre>Executes an SQL statement, returning all results in a single reply. This
method cannot be used to return a result set larger than 10 MiB;
if the query yields more data than that, the query fails with
a `FAILED_PRECONDITION` error.
Operations inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.
Larger result sets can be fetched in streaming fashion by calling
ExecuteStreamingSql instead.
Args:
session: string, Required. The session in which the SQL query should be performed. (required)
body: object, The request body.
The object takes the form of:
{ # The request for ExecuteSql and
# ExecuteStreamingSql.
&quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
# execution, `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new SQL statement execution to resume where the last one left
# off. The rest of the request parameters must exactly match the
# request that yielded this token.
&quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
&quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
#
# This parameter allows individual queries to pick different query
# optimizer versions.
#
# Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
# latest supported query optimizer version. If not specified, Cloud Spanner
# uses the optimizer version set at the database level. Any other
# positive integer (from the list of supported optimizer versions)
# overrides the default optimizer version for query execution.
# The list of supported optimizer versions can be queried from
# SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
# with an invalid optimizer version will fail with a syntax error
# (`INVALID_ARGUMENT`) status.
# See
# https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer
# for more information on managing the query optimizer.
#
# The `optimizer_version` statement hint has precedence over this setting.
},
&quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
# previously created using PartitionQuery(). There must be an exact
# match for the values of fields common to this message and the
# PartitionQueryRequest message used to create this partition_token.
&quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
# ResultSetStats. If partition_token is set, query_mode can only
# be set to QueryMode.NORMAL.
&quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
#
# For queries, if none is provided, the default is a temporary read-only
# transaction with strong concurrency.
#
# Standard DML statements require a read-write transaction. To protect
# against replays, single-use transactions are not supported. The caller
# must either supply an existing transaction ID or begin a new transaction.
#
# Partitioned DML requires an existing Partitioned DML transaction ID.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
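#
# As a hedged illustration, the three bounds correspond to
# TransactionOptions values of the following shapes (field details
# below; the staleness values are arbitrary examples):
#
#     {&quot;readOnly&quot;: {&quot;strong&quot;: True}}            # strong
#     {&quot;readOnly&quot;: {&quot;maxStaleness&quot;: &quot;10s&quot;}}     # bounded staleness
#     {&quot;readOnly&quot;: {&quot;exactStaleness&quot;: &quot;15s&quot;}}   # exact staleness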
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results (see the hedged sketch after
# this list). For instance, it is potentially dangerous to run a
# statement such as `UPDATE table SET column = column + 1` as it could
# be run multiple times against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
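#
# As a hedged sketch of the idempotency caveat above, using this client
# library (`service` and `session_name` are assumed placeholders):
#
#     sessions = service.projects().instances().databases().sessions()
#     txn = sessions.beginTransaction(
#         session=session_name,
#         body={&quot;options&quot;: {&quot;partitionedDml&quot;: {}}},
#     ).execute()
#     sessions.executeSql(
#         session=session_name,
#         body={
#             # Idempotent: re-running this on a row leaves the same state.
#             &quot;sql&quot;: &quot;UPDATE T SET flag = TRUE WHERE flag = FALSE&quot;,
#             &quot;transaction&quot;: {&quot;id&quot;: txn[&quot;id&quot;]},
#             &quot;seqno&quot;: &quot;1&quot;,   # required for DML
#         },
#     ).execute()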
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
# transaction type has no options.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
# transaction type has no options.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
&quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
# makes each request idempotent such that if the request is received multiple
# times, at most one will succeed.
#
# The sequence number must be monotonically increasing within the
# transaction. If a request arrives for the first time with an out-of-order
# sequence number, the transaction may be aborted. Replays of previously
# handled requests will yield the same response as the first execution.
#
# Required for DML statements. Ignored for queries.
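#
# For example (a hedged sketch; `txn_id`, `dml_1`, and `dml_2` are
# assumed placeholders for a transaction ID and two DML strings):
#
#     body_1 = {&quot;sql&quot;: dml_1, &quot;transaction&quot;: {&quot;id&quot;: txn_id}, &quot;seqno&quot;: &quot;1&quot;}
#     body_2 = {&quot;sql&quot;: dml_2, &quot;transaction&quot;: {&quot;id&quot;: txn_id}, &quot;seqno&quot;: &quot;2&quot;}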
&quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
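#
# For instance, a hedged sketch distinguishing BYTES from STRING, where
# `blob_param` is an assumed placeholder name:
#
#     &quot;params&quot;: {&quot;blob_param&quot;: &quot;aGVsbG8=&quot;},             # base64 bytes
#     &quot;paramTypes&quot;: {&quot;blob_param&quot;: {&quot;code&quot;: &quot;BYTES&quot;}},  # not STRING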
&quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
&quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
&quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
&quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
# provides type information for the struct&#x27;s fields.
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
&quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
#
# A parameter placeholder consists of the `@` character followed by the
# parameter name (for example, `@firstName`). Parameter names can contain
# letters, numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
#
# `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
#
# It is an error to execute a SQL statement with unbound parameters.
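#
# A hedged sketch binding the `@msg_id` placeholder from the example
# above (INT64 values are passed as JSON strings in this API):
#
#     &quot;sql&quot;: &quot;SELECT id FROM Messages WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;,
#     &quot;params&quot;: {&quot;msg_id&quot;: &quot;1000&quot;},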
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Results from Read or
# ExecuteSql.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed using the
# ExecuteSqlRequest.QueryMode.PLAN query mode.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, the number of executions, the number of
# rows/time per execution, etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
&quot;&quot;,
],
],
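#
# A hedged sketch pairing row values with the field names declared in
# metadata.row_type, where `result` is the assumed decoded response:
#
#     fields = result[&quot;metadata&quot;][&quot;rowType&quot;][&quot;fields&quot;]
#     for row in result.get(&quot;rows&quot;, []):
#         record = {f[&quot;name&quot;]: value for f, value in zip(fields, row)}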
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
}</pre>
</div>
<div class="method">
<code class="details" id="executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</code>
<pre>Like ExecuteSql, except returns the result
set as a stream. Unlike ExecuteSql, there
is no limit on the size of the returned result set. However, no
individual row in the result set can exceed 100 MiB, and no
column value can exceed 10 MiB.
Args:
session: string, Required. The session in which the SQL query should be performed. (required)
body: object, The request body.
The object takes the form of:
{ # The request for ExecuteSql and
# ExecuteStreamingSql.
&quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
# execution, `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new SQL statement execution to resume where the last one left
# off. The rest of the request parameters must exactly match the
# request that yielded this token.
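#
# A hedged sketch of resuming, where `last_partial` is the assumed final
# PartialResultSet received before the interruption:
#
#     body[&quot;resumeToken&quot;] = last_partial[&quot;resumeToken&quot;]
#     # ...then re-issue the request with all other fields unchanged.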
&quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
&quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
#
# This parameter allows individual queries to pick different query
# optimizer versions.
#
# Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
# latest supported query optimizer version. If not specified, Cloud Spanner
# uses the optimizer version set in the database options. Any other
# positive integer (from the list of supported optimizer versions)
# overrides the default optimizer version for query execution.
# The list of supported optimizer versions can be queried from
# SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
# with an invalid optimizer version will fail with a syntax error
# (`INVALID_ARGUMENT`) status.
# See
# https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer
# for more information on managing the query optimizer.
#
# The `optimizer_version` statement hint has precedence over this setting.
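#
# For example, a hedged sketch pinning this query to a specific version:
#
#     body[&quot;queryOptions&quot;] = {&quot;optimizerVersion&quot;: &quot;2&quot;}   # or &quot;latest&quot;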
},
&quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
# previously created using PartitionQuery(). There must be an exact
# match for the values of fields common to this message and the
# PartitionQueryRequest message used to create this partition_token.
&quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
# ResultSetStats. If partition_token is set, query_mode can only
# be set to QueryMode.NORMAL.
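#
# For example, to request execution statistics alongside the results
# (a hedged sketch; `PROFILE` also returns the query plan):
#
#     body[&quot;queryMode&quot;] = &quot;PROFILE&quot;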
&quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
#
# For queries, if none is provided, the default is a temporary read-only
# transaction with strong concurrency.
#
# Standard DML statements require a read-write transaction. To protect
# against replays, single-use transactions are not supported. The caller
# must either supply an existing transaction ID or begin a new transaction.
#
# Partitioned DML requires an existing Partitioned DML transaction ID.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
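#
# As a hedged illustration, the three mutually exclusive selector forms
# (`txn_id` is an assumed placeholder):
#
#     {&quot;singleUse&quot;: {&quot;readOnly&quot;: {&quot;strong&quot;: True}}}   # temporary transaction
#     {&quot;begin&quot;: {&quot;readWrite&quot;: {}}}                     # begin and return an ID
#     {&quot;id&quot;: txn_id}                                   # existing transaction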
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
# transaction type has no options.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
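#
# A hedged Python sketch of such a retry loop (wall-time budget rather than
# a retry count; `run_transaction_once` and `AbortedError` are hypothetical
# placeholders for application code and the client&#x27;s ABORTED mapping):
#
#     import time
#
#     deadline = time.time() + 60            # total wall-time budget, seconds
#     while True:
#         try:
#             run_transaction_once(session)  # retry in the same session
#             break
#         except AbortedError:
#             if time.time() &gt;= deadline:
#                 raise                      # budget exhausted; give up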
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
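# For illustration, hedged examples of each bound expressed as `readOnly`
# options (staleness values use the protobuf JSON duration form, e.g. &quot;10s&quot;):
#
#     { &quot;strong&quot;: true }            # strong (the default)
#     { &quot;maxStaleness&quot;: &quot;10s&quot; }     # bounded staleness
#     { &quot;exactStaleness&quot;: &quot;15s&quot; }   # exact staleness
#     { &quot;readTimestamp&quot;: &quot;2014-10-02T15:01:23.045123456Z&quot; }  # exact timestamp
#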
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
&quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
# makes each request idempotent such that if the request is received multiple
# times, at most one will succeed.
#
# The sequence number must be monotonically increasing within the
# transaction. If a request arrives for the first time with an out-of-order
# sequence number, the transaction may be aborted. Replays of previously
# handled requests will yield the same response as the first execution.
#
# Required for DML statements. Ignored for queries.
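#
# For example (a sketch), a client that executes two DML statements in the
# same read-write transaction would send &quot;seqno&quot;: &quot;1&quot; with the first
# ExecuteSql request and &quot;seqno&quot;: &quot;2&quot; with the second; a retry of the
# first request would re-send &quot;seqno&quot;: &quot;1&quot;.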
&quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
&quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
&quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
&quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
&quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
# provides type information for the struct&#x27;s fields.
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
&quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
#
# A parameter placeholder consists of the `@` character followed by the
# parameter name (for example, `@firstName`). Parameter names can contain
# letters, numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
#
# `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
#
# It is an error to execute a SQL statement with unbound parameters.
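#
# For example (a sketch), binding the `@msg_id` placeholder shown above,
# with its type pinned via `param_types`:
#
#     &quot;params&quot;: { &quot;msg_id&quot;: &quot;1234&quot; },
#     &quot;paramTypes&quot;: { &quot;msg_id&quot;: { &quot;code&quot;: &quot;INT64&quot; } }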
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Partial results from a streaming read or SQL query. Streaming reads and
# SQL queries better tolerate large result sets, large rows, and large
# values, but are a little trickier to consume.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
# streaming result set. These can be requested by setting
# ExecuteSqlRequest.query_mode and are sent
# only once with the last response in the stream.
# This field will also be present in the last response for DML
# statements.
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
# as TCP connection loss. If this occurs, the stream of results can
# be resumed by re-sending the original request and including
# `resume_token`. Note that executing any other transaction in the
# same session invalidates the token.
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
# Only present in the first response.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
&quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
# be split into many `PartialResultSet` messages to accommodate
# large rows and/or large values. Every N complete values defines a
# row, where N is equal to the number of entries in
# metadata.row_type.fields.
#
# Most values are encoded based on type as described
# here.
#
# It is possible that the last value in values is &quot;chunked&quot;,
# meaning that the rest of the value is sent in subsequent
# `PartialResultSet`(s). This is denoted by the chunked_value
# field. Two or more chunked values can be merged to form a
# complete value as follows:
#
# * `bool/number/null`: cannot be chunked
# * `string`: concatenate the strings
# * `list`: concatenate the lists. If the last element in a list is a
# `string`, `list`, or `object`, merge it with the first element in
# the next list by applying these rules recursively.
# * `object`: concatenate the (field name, field value) pairs. If a
# field name is duplicated, then apply these rules recursively
# to merge the field values.
#
# Some examples of merging:
#
# # Strings are concatenated.
# &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
#
# # Lists of non-strings are concatenated.
# [2, 3], [4] =&gt; [2, 3, 4]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are strings.
# [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are lists. Recursively, the last and first elements
# # of the inner lists are merged because they are strings.
# [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
#
# # Non-overlapping object fields are combined.
# {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
#
# # Overlapping object fields are merged.
# {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
#
# # Examples of merging objects containing lists of strings.
# {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
#
# For a more complete example, suppose a streaming SQL query is
# yielding a result set whose rows contain a single string
# field. The following `PartialResultSet`s might be yielded:
#
# {
# &quot;metadata&quot;: { ... }
# &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
# &quot;chunked_value&quot;: true
# &quot;resume_token&quot;: &quot;Af65...&quot;
# }
# {
# &quot;values&quot;: [&quot;orl&quot;]
# &quot;chunked_value&quot;: true
# &quot;resume_token&quot;: &quot;Bqp2...&quot;
# }
# {
# &quot;values&quot;: [&quot;d&quot;]
# &quot;resume_token&quot;: &quot;Zx1B...&quot;
# }
#
# This sequence of `PartialResultSet`s encodes two rows, one
# containing the field value `&quot;Hello&quot;`, and a second containing the
# field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
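#
# A minimal Python sketch of the string/list merge rules above (a
# hypothetical helper, not part of this API; object merging is omitted
# for brevity):
#
#     def merge(a, b):
#         if isinstance(a, str) and isinstance(b, str):
#             return a + b                  # strings are concatenated
#         if isinstance(a, list) and isinstance(b, list):
#             if a and b and isinstance(a[-1], (str, list)):
#                 # merge last element of `a` with first element of `b`
#                 return a[:-1] + [merge(a[-1], b[0])] + b[1:]
#             return a + b                  # plain list concatenation
#         raise TypeError(&#x27;bool/number/null values cannot be chunked&#x27;)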
&quot;&quot;,
],
&quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
# be combined with more values from subsequent `PartialResultSet`s
# to obtain a complete field value.
}</pre>
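<p>For illustration, a hedged sketch that reassembles rows from a sequence of
<code>PartialResultSet</code> responses (<code>partials</code> is an assumed
iterable of such responses; only string values are merged when
<code>chunkedValue</code> is set):</p>
<pre>
def rows_from_partials(partials):
    fields, buf, pending = None, [], None
    for p in partials:
        if fields is None and &#x27;metadata&#x27; in p:
            fields = p[&#x27;metadata&#x27;][&#x27;rowType&#x27;][&#x27;fields&#x27;]
        vals = list(p.get(&#x27;values&#x27;, []))
        if pending is not None and vals:
            vals[0] = pending + vals[0]   # complete the chunked string value
            pending = None
        if p.get(&#x27;chunkedValue&#x27;) and vals:
            pending = vals.pop()          # hold the incomplete last value
        buf.extend(vals)
        # every len(fields) complete values form one row
        while fields and len(buf) &gt;= len(fields):
            row, buf = buf[:len(fields)], buf[len(fields):]
            yield row
</pre>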
</div>
<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.
Args:
name: string, Required. The name of the session to retrieve. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A session in the Cloud Spanner API.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session was last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
}</pre>
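<p>For example, a minimal sketch (assuming <code>service</code> was built with
googleapiclient.discovery.build(&#x27;spanner&#x27;, &#x27;v1&#x27;); the session path is
hypothetical):</p>
<pre>
name = &#x27;projects/my-project/instances/my-instance/databases/my-db/sessions/my-session&#x27;
session = service.projects().instances().databases().sessions().get(
    name=name).execute()
print(session[&#x27;name&#x27;], session.get(&#x27;approximateLastUseTime&#x27;))
</pre>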
</div>
<div class="method">
<code class="details" id="list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
<pre>Lists all sessions in a given database.
Args:
database: string, Required. The database in which to list sessions. (required)
filter: string, An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:
* `labels.key` where key is the name of a label
Some examples of using filters are:
* `labels.env:*` --&gt; The session has the label &quot;env&quot;.
* `labels.env:dev` --&gt; The session has the label &quot;env&quot; and the value of
the label contains the string &quot;dev&quot;.
pageToken: string, If non-empty, `page_token` should contain a
next_page_token from a previous
ListSessionsResponse.
pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
to the server&#x27;s maximum allowed page size.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for ListSessions.
&quot;nextPageToken&quot;: &quot;A String&quot;, # `next_page_token` can be sent in a subsequent
# ListSessions call to fetch more of the matching
# sessions.
&quot;sessions&quot;: [ # The list of requested sessions.
{ # A session in the Cloud Spanner API.
&quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
&quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
&quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session was last used. It is
# typically earlier than the actual last use time.
&quot;labels&quot;: { # The labels for the session.
#
# * Label keys must be between 1 and 63 characters long and must conform to
# the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
# * Label values must be between 0 and 63 characters long and must conform
# to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
# * No more than 64 labels can be associated with a given session.
#
# See https://goo.gl/xmQnxf for more information on and examples of labels.
&quot;a_key&quot;: &quot;A String&quot;,
},
},
],
}</pre>
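<p>For example, a hedged sketch that lists sessions carrying the label
&quot;env&quot; (the database path is hypothetical; <code>service</code> built as
above):</p>
<pre>
database = &#x27;projects/my-project/instances/my-instance/databases/my-db&#x27;
resp = service.projects().instances().databases().sessions().list(
    database=database, filter=&#x27;labels.env:*&#x27;, pageSize=100).execute()
for s in resp.get(&#x27;sessions&#x27;, []):
    print(s[&#x27;name&#x27;], s.get(&#x27;labels&#x27;, {}))
</pre>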
</div>
<div class="method">
<code class="details" id="list_next">list_next(previous_request, previous_response)</code>
<pre>Retrieves the next page of results.
Args:
previous_request: The request for the previous page. (required)
previous_response: The response from the request for the previous page. (required)
Returns:
A request object that you can call &#x27;execute()&#x27; on to request the next
page. Returns None if there are no more items in the collection.
</pre>
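<p>A typical pagination loop built on this helper (a sketch;
<code>sessions_api</code> stands in for
service.projects().instances().databases().sessions()):</p>
<pre>
request = sessions_api.list(database=database)
while request is not None:
    response = request.execute()
    for s in response.get(&#x27;sessions&#x27;, []):
        print(s[&#x27;name&#x27;])
    request = sessions_api.list_next(request, response)
</pre>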
</div>
<div class="method">
<code class="details" id="partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</code>
<pre>Creates a set of partition tokens that can be used to execute a query
operation in parallel. Each of the returned partition tokens can be used
by ExecuteStreamingSql to specify a subset
of the query result to read. The same session and read-only transaction
must be used by the PartitionQueryRequest used to create the
partition tokens and the ExecuteSqlRequests that use the partition tokens.
Partition tokens become invalid when the session used to create them
is deleted, is idle for too long, begins a new transaction, or becomes too
old. When any of these happens, it is not possible to resume the query, and
the whole operation must be restarted from the beginning.
Args:
session: string, Required. The session used to create the partitions. (required)
body: object, The request body.
The object takes the form of:
{ # The request for PartitionQuery
&quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
#
# A parameter placeholder consists of the `@` character followed by the
# parameter name (for example, `@firstName`). Parameter names can contain
# letters, numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
#
# `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
#
# It is an error to execute a SQL statement with unbound parameters.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;sql&quot;: &quot;A String&quot;, # Required. The query request to generate partitions for. The request will fail if
# the query is not root partitionable. The query plan of a root
# partitionable query has a single distributed union operator. A distributed
# union operator conceptually divides one or more tables into multiple
# splits, remotely evaluates a subquery independently on each split, and
# then unions all results.
#
# This must not contain DML commands, such as INSERT, UPDATE, or
# DELETE. Use ExecuteStreamingSql with a
# PartitionedDml transaction for large, partition-friendly DML operations.
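#
# For example (a sketch; the table is hypothetical), a root-partitionable
# query over a single table:
#
#     &quot;sql&quot;: &quot;SELECT SingerId, FirstName FROM Singers&quot;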
&quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL query parameters. See the
# definition of Type for more information
# about SQL types.
&quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
&quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
&quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
&quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
# provides type information for the struct&#x27;s fields.
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
&quot;transaction&quot;: { # This message is used to select the transaction in which a # Read-only snapshot transactions are supported; read/write and single-use
# transactions are not.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
# workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
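#
# Schematically, in terms of the methods on this resource, that
# sequence is: beginTransaction, then zero or more executeSql or read
# calls, then commit (or rollback).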
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
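#
# As an illustrative sketch (the helper function, the exception type,
# and the 60-second budget below are assumptions, not part of this
# API):
#
#     import time
#
#     deadline = time.monotonic() + 60  # bound wall time, not attempts
#     while True:
#         try:
#             run_transaction(session)   # hypothetical application code
#             break
#         except TransactionAborted:     # hypothetical ABORTED error type
#             if time.monotonic() &gt;= deadline:
#                 raise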
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
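#
# For example, a minimal ExecuteSql request body for such a keep-alive
# query (the request shape is documented under executeSql) would be:
#
#     {&quot;sql&quot;: &quot;SELECT 1&quot;}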
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
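#
# For illustration, expressed as TransactionOptions.ReadOnly fields
# (the values are examples, not defaults):
#
#     {&quot;strong&quot;: True}  # strong
#     {&quot;maxStaleness&quot;: &quot;10s&quot;}  # bounded staleness
#     {&quot;readTimestamp&quot;: &quot;2014-10-02T15:01:23.045123456Z&quot;}  # exact staleness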
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an
# OLTP workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
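#
# For example (the table and column names are illustrative), an
# idempotent statement of that kind could be:
#
#     DELETE FROM Events WHERE EventDate &lt; &#x27;2010-01-01&#x27;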
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
&quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
# PartitionReadRequest.
&quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
# PartitionRead requests.
#
# The desired maximum number of partitions to return. For example, this may
# be set to the number of workers available. The default for this option
# is currently 10,000. The maximum value is currently 200,000. This is only
# a hint. The actual number of partitions returned may be smaller or larger
# than this requested maximum count.
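#
# For example, a job with 100 workers might set this to &quot;100&quot;
# (as shown throughout this page, int64 values are encoded as JSON
# strings).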
&quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
# PartitionRead requests.
#
# The desired data size for each partition generated. The default for this
# option is currently 1 GiB. This is only a hint. The actual size of each
# partition may be smaller or larger than this requested size.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for PartitionQuery
# or PartitionRead
&quot;partitions&quot;: [ # Partitions created by this request.
{ # Information returned for each partition returned in a
# PartitionResponse.
&quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
# ExecuteStreamingSql requests to restrict the results to those identified by
# this partition token.
},
],
&quot;transaction&quot;: { # A transaction. # Transaction created by this request.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
}</pre>
</div>
<div class="method">
<code class="details" id="partitionRead">partitionRead(session, body=None, x__xgafv=None)</code>
<pre>Creates a set of partition tokens that can be used to execute a read
operation in parallel. Each of the returned partition tokens can be used
by StreamingRead to specify a subset of the read
result. The same session and read-only transaction must be used by
the PartitionReadRequest used to create the partition tokens and the
ReadRequests that use the partition tokens. There are no ordering
guarantees on rows returned among the returned partition tokens, or even
within each individual StreamingRead call issued with a partition_token.
Partition tokens become invalid when the session used to create them
is deleted, is idle for too long, begins a new transaction, or becomes too
old. When any of these happen, it is not possible to resume the read, and
the whole operation must be restarted from the beginning.
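
As a sketch (assuming application default credentials; `session_name`
and `transaction_id` are illustrative values naming an existing
session and a previously begun read-only transaction):

  from googleapiclient.discovery import build

  service = build(&quot;spanner&quot;, &quot;v1&quot;)
  sessions = service.projects().instances().databases().sessions()
  body = {
      &quot;table&quot;: &quot;UserEvents&quot;,
      &quot;columns&quot;: [&quot;UserName&quot;, &quot;EventDate&quot;],
      &quot;keySet&quot;: {&quot;all&quot;: True},
      &quot;transaction&quot;: {&quot;id&quot;: transaction_id},
  }
  response = sessions.partitionRead(session=session_name, body=body).execute()
  tokens = [p[&quot;partitionToken&quot;] for p in response[&quot;partitions&quot;]]
  # Each token can then be passed as the partition_token of a
  # StreamingRead request that uses the same session and transaction.
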
Args:
session: string, Required. The session used to create the partitions. (required)
body: object, The request body.
The object takes the form of:
{ # The request for PartitionRead
&quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
# used instead of the table primary key when interpreting key_set
# and sorting result rows. See key_set for further information.
&quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
&quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
# primary keys of the rows in table to be yielded, unless index
# is present. If index is present, then key_set instead names
# index keys in index.
#
# It is not an error for the `key_set` to name rows that do not
# exist in the database. Read yields nothing for nonexistent rows.
# the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
#
# If the same key is specified multiple times in the set (for example
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
# behaves as if the key were only specified once.
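#
# For example (values drawn from the KeyRange discussion below), a
# `KeySet` naming one specific key and one range might be:
#
#     {&quot;keys&quot;: [[&quot;Bob&quot;, &quot;2014-09-23&quot;]],
#      &quot;ranges&quot;: [{&quot;startClosed&quot;: [&quot;Bob&quot;], &quot;endClosed&quot;: [&quot;Bob&quot;]}]}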
&quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
# key range specifications.
{ # KeyRange represents a range of rows in a table or index.
#
# A range has a start key and an end key. These keys can be open or
# closed, indicating if the range includes rows with that key.
#
# Keys are represented by lists, where the ith value in the list
# corresponds to the ith component of the table or index primary key.
# Individual values are encoded as described
# here.
#
# For example, consider the following table definition:
#
# CREATE TABLE UserEvents (
# UserName STRING(MAX),
# EventDate STRING(10)
# ) PRIMARY KEY(UserName, EventDate);
#
# The following keys name rows in this table:
#
# &quot;Bob&quot;, &quot;2014-09-23&quot;
#
# Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
# columns, each `UserEvents` key has two elements; the first is the
# `UserName`, and the second is the `EventDate`.
#
# Key ranges with multiple components are interpreted
# lexicographically by component using the table or index key&#x27;s declared
# sort order. For example, the following range returns all events for
# user `&quot;Bob&quot;` that occurred in the year 2015:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
#
# Start and end keys can omit trailing key components. This affects the
# inclusion and exclusion of rows that exactly match the provided key
# components: if the key is closed, then rows that exactly match the
# provided components are included; if the key is open, then rows
# that exactly match are not included.
#
# For example, the following range includes all events for `&quot;Bob&quot;` that
# occurred during and after the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# The next example retrieves all events for `&quot;Bob&quot;`:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# To retrieve events before the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
#
# The following range includes all rows in the table:
#
# &quot;start_closed&quot;: []
# &quot;end_closed&quot;: []
#
# This range returns all users whose `UserName` begins with any
# character from A to C:
#
# &quot;start_closed&quot;: [&quot;A&quot;]
# &quot;end_open&quot;: [&quot;D&quot;]
#
# This range returns all users whose `UserName` begins with B:
#
# &quot;start_closed&quot;: [&quot;B&quot;]
# &quot;end_open&quot;: [&quot;C&quot;]
#
# Key ranges honor column sort order. For example, suppose a table is
# defined as follows:
#
# CREATE TABLE DescendingSortedTable (
# Key INT64,
# ...
# ) PRIMARY KEY(Key DESC);
#
# The following range retrieves all rows with key values between 1
# and 100 inclusive:
#
# &quot;start_closed&quot;: [&quot;100&quot;]
# &quot;end_closed&quot;: [&quot;1&quot;]
#
# Note that 100 is passed as the start, and 1 is passed as the end,
# because `Key` is a descending column in the schema.
&quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
# first `len(end_closed)` key columns exactly match `end_closed`.
&quot;&quot;,
],
&quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
# first `len(start_closed)` key columns exactly match `start_closed`.
&quot;&quot;,
],
&quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
# `len(start_open)` key columns exactly match `start_open`.
&quot;&quot;,
],
&quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
# `len(end_open)` key columns exactly match `end_open`.
&quot;&quot;,
],
},
],
&quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
# many elements as there are columns in the primary or index key
# with which this `KeySet` is used. Individual key values are
# encoded as described here.
[
&quot;&quot;,
],
],
&quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
# `KeySet` matches all keys in the table or index. Note that any keys
# specified in `keys` or `ranges` are only yielded once.
},
&quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
# PartitionReadRequest.
&quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
# PartitionRead requests.
#
# The desired maximum number of partitions to return. For example, this may
# be set to the number of workers available. The default for this option
# is currently 10,000. The maximum value is currently 200,000. This is only
# a hint. The actual number of partitions returned may be smaller or larger
# than this requested maximum count.
&quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
# PartitionRead requests.
#
# The desired data size for each partition generated. The default for this
# option is currently 1 GiB. This is only a hint. The actual size of each
# partition may be smaller or larger than this requested size.
},
&quot;columns&quot;: [ # The columns of table to be returned for each row matching
# this request.
&quot;A String&quot;,
],
&quot;transaction&quot;: { # This message is used to select the transaction in which a # Read-only snapshot transactions are supported; read/write and single-use
# transactions are not.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an
# OLTP workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller-scoped statements, such as those in an
# OLTP workload, should use ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # The response for PartitionQuery
# or PartitionRead
&quot;partitions&quot;: [ # Partitions created by this request.
{ # Information returned for each partition in a
# PartitionResponse.
&quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
# ExecuteStreamingSql requests to restrict the results to those identified by
# this partition token.
},
],
&quot;transaction&quot;: { # A transaction. # Transaction created by this request.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
}</pre>
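<p>A minimal usage sketch (illustrative, not part of the generated reference).
It assumes `service` is an authorized client built with
`googleapiclient.discovery.build(&#x27;spanner&#x27;, &#x27;v1&#x27;)` and `session_name` is the
resource name of an existing session; both names are assumptions. A read-only
transaction is begun explicitly, the query is partitioned, and each partition
token is handed to ExecuteSql.</p>
<pre>sql = &#x27;SELECT UserId, UserName FROM Users&#x27;
sessions = service.projects().instances().databases().sessions()

# Partition tokens are only valid inside read-only (snapshot) transactions.
txn = sessions.beginTransaction(
    session=session_name,
    body={&#x27;options&#x27;: {&#x27;readOnly&#x27;: {&#x27;strong&#x27;: True}}}).execute()

response = sessions.partitionQuery(
    session=session_name,
    body={&#x27;sql&#x27;: sql, &#x27;transaction&#x27;: {&#x27;id&#x27;: txn[&#x27;id&#x27;]}}).execute()

for partition in response.get(&#x27;partitions&#x27;, []):
    # The SQL (and any parameters) must exactly match the PartitionQuery
    # request; each token restricts results to one partition.
    result = sessions.executeSql(
        session=session_name,
        body={
            &#x27;sql&#x27;: sql,
            &#x27;transaction&#x27;: {&#x27;id&#x27;: txn[&#x27;id&#x27;]},
            &#x27;partitionToken&#x27;: partition[&#x27;partitionToken&#x27;],
        }).execute()</pre>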
</div>
<div class="method">
<code class="details" id="read">read(session, body=None, x__xgafv=None)</code>
<pre>Reads rows from the database using key lookups and scans, as a
simple key/value style alternative to
ExecuteSql. This method cannot be used to
return a result set larger than 10 MiB; if the read matches more
data than that, the read fails with a `FAILED_PRECONDITION`
error.
Reads inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.
Larger result sets can be yielded in streaming fashion by calling
StreamingRead instead.
Args:
session: string, Required. The session in which the read should be performed. (required)
body: object, The request body.
The object takes the form of:
{ # The request for Read and
# StreamingRead.
&quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
# `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new read to resume where the last read left off. The
# rest of the request parameters must exactly match the request
# that yielded this token.
&quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
# this request.
&quot;A String&quot;,
],
&quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
# is zero, the default is no limit. A limit cannot be specified if
# `partition_token` is set.
&quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
# used instead of the table primary key when interpreting key_set
# and sorting result rows. See key_set for further information.
&quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
&quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
# temporary read-only transaction with strong concurrency.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
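#
# For example, a keep-alive sketch with this client (assuming, purely for
# illustration, that `service` is a built client, `session_name` an existing
# session, and `txn_id` the active transaction&#x27;s id):
#
#   service.projects().instances().databases().sessions().executeSql(
#       session=session_name,
#       body={&#x27;sql&#x27;: &#x27;SELECT 1&#x27;, &#x27;transaction&#x27;: {&#x27;id&#x27;: txn_id}}).execute()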
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
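#
# For example (illustrative values), a single-use bounded-staleness read
# could be selected with:
#
#   &quot;singleUse&quot;: {&quot;readOnly&quot;: {&quot;maxStaleness&quot;: &quot;10s&quot;,
#                              &quot;returnReadTimestamp&quot;: True}}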
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
&quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
# previously created using PartitionRead(). There must be an exact
# match for the values of fields common to this message and the
# PartitionReadRequest message used to create this partition_token.
&quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
# primary keys of the rows in table to be yielded, unless index
# is present. If index is present, then key_set instead names
# index keys in index.
#
# If the partition_token field is empty, rows are yielded
# in table primary key order (if index is empty) or index key order
# (if index is non-empty). If the partition_token field is not
# empty, rows will be yielded in an unspecified order.
#
# It is not an error for the `key_set` to name rows that do not
# exist in the database. Read yields nothing for nonexistent rows.
# the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
#
# If the same key is specified multiple times in the set (for example
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
# behaves as if the key were only specified once.
&quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
# key range specifications.
{ # KeyRange represents a range of rows in a table or index.
#
# A range has a start key and an end key. These keys can be open or
# closed, indicating if the range includes rows with that key.
#
# Keys are represented by lists, where the ith value in the list
# corresponds to the ith component of the table or index primary key.
# Individual values are encoded as described
# here.
#
# For example, consider the following table definition:
#
# CREATE TABLE UserEvents (
# UserName STRING(MAX),
# EventDate STRING(10)
# ) PRIMARY KEY(UserName, EventDate);
#
# The following keys name rows in this table:
#
# &quot;Bob&quot;, &quot;2014-09-23&quot;
#
# Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
# columns, each `UserEvents` key has two elements; the first is the
# `UserName`, and the second is the `EventDate`.
#
# Key ranges with multiple components are interpreted
# lexicographically by component using the table or index key&#x27;s declared
# sort order. For example, the following range returns all events for
# user `&quot;Bob&quot;` that occurred in the year 2015:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
#
# Start and end keys can omit trailing key components. This affects the
# inclusion and exclusion of rows that exactly match the provided key
# components: if the key is closed, then rows that exactly match the
# provided components are included; if the key is open, then rows
# that exactly match are not included.
#
# For example, the following range includes all events for `&quot;Bob&quot;` that
# occurred during and after the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# The next example retrieves all events for `&quot;Bob&quot;`:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# To retrieve events before the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
#
# The following range includes all rows in the table:
#
# &quot;start_closed&quot;: []
# &quot;end_closed&quot;: []
#
# This range returns all users whose `UserName` begins with any
# character from A to C:
#
# &quot;start_closed&quot;: [&quot;A&quot;]
# &quot;end_open&quot;: [&quot;D&quot;]
#
# This range returns all users whose `UserName` begins with B:
#
# &quot;start_closed&quot;: [&quot;B&quot;]
# &quot;end_open&quot;: [&quot;C&quot;]
#
# Key ranges honor column sort order. For example, suppose a table is
# defined as follows:
#
# CREATE TABLE DescendingSortedTable (
# Key INT64,
# ...
# ) PRIMARY KEY(Key DESC);
#
# The following range retrieves all rows with key values between 1
# and 100 inclusive:
#
# &quot;start_closed&quot;: [&quot;100&quot;]
# &quot;end_closed&quot;: [&quot;1&quot;]
#
# Note that 100 is passed as the start, and 1 is passed as the end,
# because `Key` is a descending column in the schema.
&quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
# first `len(end_closed)` key columns exactly match `end_closed`.
&quot;&quot;,
],
&quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
# first `len(start_closed)` key columns exactly match `start_closed`.
&quot;&quot;,
],
&quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
# `len(start_open)` key columns exactly match `start_open`.
&quot;&quot;,
],
&quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
# `len(end_open)` key columns exactly match `end_open`.
&quot;&quot;,
],
},
],
&quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
# many elements as there are columns in the primary or index key
# with which this `KeySet` is used. Individual key values are
# encoded as described here.
[
&quot;&quot;,
],
],
&quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
# `KeySet` matches all keys in the table or index. Note that any keys
# specified in `keys` or `ranges` are only yielded once.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Results from Read or
# ExecuteSql.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed using
# ExecuteSqlRequest.QueryMode.PLAN as the ExecuteSqlRequest.query_mode.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
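# For example (a sketch; `PROFILE` follows the ExecuteSqlRequest.QueryMode
# naming), setting `&quot;queryMode&quot;: &quot;PROFILE&quot;` in the ExecuteSql request
# populates both `queryPlan` and `queryStats` below.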
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, If the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
&quot;&quot;,
],
],
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
}</pre>
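<p>A usage sketch (illustrative, not part of the generated reference; it
assumes `service` and `session_name` as in the earlier examples, plus the
`UserEvents` table from the KeyRange discussion above). Omitting
`transaction` runs the read in a temporary strong read-only transaction.</p>
<pre>result_set = service.projects().instances().databases().sessions().read(
    session=session_name,
    body={
        &#x27;table&#x27;: &#x27;UserEvents&#x27;,
        &#x27;columns&#x27;: [&#x27;UserName&#x27;, &#x27;EventDate&#x27;],
        # All of Bob&#x27;s events during 2015, per the KeyRange examples.
        &#x27;keySet&#x27;: {
            &#x27;ranges&#x27;: [{
                &#x27;startClosed&#x27;: [&#x27;Bob&#x27;, &#x27;2015-01-01&#x27;],
                &#x27;endClosed&#x27;: [&#x27;Bob&#x27;, &#x27;2015-12-31&#x27;],
            }],
        },
    }).execute()

for row in result_set.get(&#x27;rows&#x27;, []):
    # Each row is a list whose ith element matches the ith field in
    # metadata.rowType.fields.
    print(row)</pre>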
</div>
<div class="method">
<code class="details" id="rollback">rollback(session, body=None, x__xgafv=None)</code>
<pre>Rolls back a transaction, releasing any locks it holds. It is a good
idea to call this for any transaction that includes one or more
Read or ExecuteSql requests and
ultimately decides not to commit.
`Rollback` returns `OK` if it successfully aborts the transaction, the
transaction was already aborted, or the transaction is not
found. `Rollback` never returns `ABORTED`.
Args:
session: string, Required. The session in which the transaction to roll back is running. (required)
body: object, The request body.
The object takes the form of:
{ # The request for Rollback.
&quot;transactionId&quot;: &quot;A String&quot;, # Required. The transaction to roll back.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # A generic empty message that you can re-use to avoid defining duplicated
# empty messages in your APIs. A typical example is to use it as the request
# or the response type of an API method. For instance:
#
# service Foo {
# rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
# }
#
# The JSON representation for `Empty` is empty JSON object `{}`.
}</pre>
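<p>A short sketch (illustrative; `service`, `session_name`, and `txn_id` are
assumed as in the earlier examples): abandon a read-write transaction
instead of committing it.</p>
<pre>service.projects().instances().databases().sessions().rollback(
    session=session_name,
    body={&#x27;transactionId&#x27;: txn_id}).execute()
# Returns an empty message on success; `Rollback` never returns `ABORTED`.</pre>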
</div>
<div class="method">
<code class="details" id="streamingRead">streamingRead(session, body=None, x__xgafv=None)</code>
<pre>Like Read, except returns the result set as a
stream. Unlike Read, there is no limit on the
size of the returned result set. However, no individual row in
the result set can exceed 100 MiB, and no column value can exceed
10 MiB.
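
For example, a resumable-read sketch (assuming, as in the other examples,
that `service` is a built client and `session_name` names an existing
session; how this REST client surfaces the stream of PartialResultSet
messages is an assumption here -- the sketch treats the parsed response as
a list of chunks):

  body = {&#x27;table&#x27;: &#x27;UserEvents&#x27;, &#x27;columns&#x27;: [&#x27;UserName&#x27;],
          &#x27;keySet&#x27;: {&#x27;all&#x27;: True}}
  last_token = &#x27;&#x27;
  for chunk in service.projects().instances().databases().sessions() \
      .streamingRead(session=session_name, body=body).execute():
      last_token = chunk.get(&#x27;resumeToken&#x27;, last_token)
      # If the stream is interrupted, retry the identical request with
      # body[&#x27;resumeToken&#x27;] = last_token to resume where it left off.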
Args:
session: string, Required. The session in which the read should be performed. (required)
body: object, The request body.
The object takes the form of:
{ # The request for Read and
# StreamingRead.
&quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
# `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new read to resume where the last read left off. The
# rest of the request parameters must exactly match the request
# that yielded this token.
&quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
# this request.
&quot;A String&quot;,
],
&quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
# is zero, the default is no limit. A limit cannot be specified if
# `partition_token` is set.
&quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
# used instead of the table primary key when interpreting key_set
# and sorting result rows. See key_set for further information.
&quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
&quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
# temporary read-only transaction with strong concurrency.
# Read or
# ExecuteSql call runs.
#
# See TransactionOptions for more information about transactions.
&quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time (note that
# standalone reads and queries use a transaction internally and do count
# towards the one transaction limit). After the active transaction is
# completed, the session can immediately be re-used for the next transaction.
# It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
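#
# A minimal retry sketch (illustrative; `run_txn` is a hypothetical
# helper that performs the transaction&#x27;s reads, writes, and Commit,
# raising `Aborted` when the commit returns `ABORTED`):
#
#   deadline = time.time() + 60  # bound total wall time, not attempts
#   while True:
#       try:
#           run_txn(session)
#           break
#       except Aborted:
#           if time.time() &gt;= deadline:
#               raise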
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
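#
# For example (sketch), a keep-alive can be issued through the sessions
# resource (`sessions` is the built
# `projects().instances().databases().sessions()` resource; `txn_id` is
# the active transaction&#x27;s id):
#
#   sessions.executeSql(session=session_name,
#       body={&quot;sql&quot;: &quot;SELECT 1&quot;,
#             &quot;transaction&quot;: {&quot;id&quot;: txn_id}}).execute()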
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
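#
# For example (sketch), a single-use bounded-staleness selector:
#
#   &quot;transaction&quot;: {&quot;singleUse&quot;: {&quot;readOnly&quot;: {&quot;maxStaleness&quot;: &quot;10s&quot;}}}
#
# and an exact-staleness selector (Duration values are encoded as
# strings such as `&quot;15s&quot;`):
#
#   &quot;transaction&quot;: {&quot;singleUse&quot;: {&quot;readOnly&quot;: {&quot;exactStaleness&quot;: &quot;15s&quot;}}}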
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. For smaller-scoped statements, such as an OLTP
# workload, prefer ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
# it. The transaction ID of the new transaction is returned in
# ResultSetMetadata.transaction, which is a Transaction.
#
# (The transaction modes and semantics are identical to those documented
# for the `singleUse` field above.)
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
&quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
# previously created using PartitionRead(). There must be an exact
# match for the values of fields common to this message and the
# PartitionReadRequest message used to create this partition_token.
&quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
# primary keys of the rows in table to be yielded, unless index
# is present. If index is present, then key_set instead names
# index keys in index.
#
# If the partition_token field is empty, rows are yielded
# in table primary key order (if index is empty) or index key order
# (if index is non-empty). If the partition_token field is not
# empty, rows will be yielded in an unspecified order.
#
# It is not an error for the `key_set` to name rows that do not
# exist in the database. Read yields nothing for nonexistent rows.
# the keys are expected to be in the same table or index. The keys need
# not be sorted in any particular way.
#
# If the same key is specified multiple times in the set (for example
# if two ranges, two keys, or a key and a range overlap), Cloud Spanner
# behaves as if the key were only specified once.
&quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
# key range specifications.
{ # KeyRange represents a range of rows in a table or index.
#
# A range has a start key and an end key. These keys can be open or
# closed, indicating if the range includes rows with that key.
#
# Keys are represented by lists, where the ith value in the list
# corresponds to the ith component of the table or index primary key.
# Individual values are encoded as described
# here.
#
# For example, consider the following table definition:
#
# CREATE TABLE UserEvents (
# UserName STRING(MAX),
# EventDate STRING(10)
# ) PRIMARY KEY(UserName, EventDate);
#
# The following keys name rows in this table:
#
# &quot;Bob&quot;, &quot;2014-09-23&quot;
#
# Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
# columns, each `UserEvents` key has two elements; the first is the
# `UserName`, and the second is the `EventDate`.
#
# Key ranges with multiple components are interpreted
# lexicographically by component using the table or index key&#x27;s declared
# sort order. For example, the following range returns all events for
# user `&quot;Bob&quot;` that occurred in the year 2015:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
#
# Start and end keys can omit trailing key components. This affects the
# inclusion and exclusion of rows that exactly match the provided key
# components: if the key is closed, then rows that exactly match the
# provided components are included; if the key is open, then rows
# that exactly match are not included.
#
# For example, the following range includes all events for `&quot;Bob&quot;` that
# occurred during and after the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# The next example retrieves all events for `&quot;Bob&quot;`:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_closed&quot;: [&quot;Bob&quot;]
#
# To retrieve events before the year 2000:
#
# &quot;start_closed&quot;: [&quot;Bob&quot;]
# &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
#
# The following range includes all rows in the table:
#
# &quot;start_closed&quot;: []
# &quot;end_closed&quot;: []
#
# This range returns all users whose `UserName` begins with any
# character from A to C:
#
# &quot;start_closed&quot;: [&quot;A&quot;]
# &quot;end_open&quot;: [&quot;D&quot;]
#
# This range returns all users whose `UserName` begins with B:
#
# &quot;start_closed&quot;: [&quot;B&quot;]
# &quot;end_open&quot;: [&quot;C&quot;]
#
# Key ranges honor column sort order. For example, suppose a table is
# defined as follows:
#
# CREATE TABLE DescendingSortedTable (
# Key INT64,
# ...
# ) PRIMARY KEY(Key DESC);
#
# The following range retrieves all rows with key values between 1
# and 100 inclusive:
#
# &quot;start_closed&quot;: [&quot;100&quot;]
# &quot;end_closed&quot;: [&quot;1&quot;]
#
# Note that 100 is passed as the start, and 1 is passed as the end,
# because `Key` is a descending column in the schema.
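#
# In request form (sketch), the year-2015 example above becomes:
#
#   &quot;ranges&quot;: [{&quot;startClosed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;],
#               &quot;endClosed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]}]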
&quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
# first `len(end_closed)` key columns exactly match `end_closed`.
&quot;&quot;,
],
&quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
# first `len(start_closed)` key columns exactly match `start_closed`.
&quot;&quot;,
],
&quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
# `len(start_open)` key columns exactly match `start_open`.
&quot;&quot;,
],
&quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
# `len(end_open)` key columns exactly match `end_open`.
&quot;&quot;,
],
},
],
&quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
# many elements as there are columns in the primary or index key
# with which this `KeySet` is used. Individual key values are
# encoded as described here.
[
&quot;&quot;,
],
],
&quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
# `KeySet` matches all keys in the table or index. Note that any keys
# specified in `keys` or `ranges` are only yielded once.
},
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # Partial results from a streaming read or SQL query. Streaming reads and
# SQL queries better tolerate large result sets, large rows, and large
# values, but are a little trickier to consume.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
# streaming result set. These can be requested by setting
# ExecuteSqlRequest.query_mode and are sent
# only once with the last response in the stream.
# This field will also be present in the last response for DML
# statements.
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
# as TCP connection loss. If this occurs, the stream of results can
# be resumed by re-sending the original request and including
# `resume_token`. Note that executing any other transaction in the
# same session invalidates the token.
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
# Only present in the first response.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
&quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
# be split into many `PartialResultSet` messages to accommodate
# large rows and/or large values. Every N complete values defines a
# row, where N is equal to the number of entries in
# metadata.row_type.fields.
#
# Most values are encoded based on type as described
# here.
#
# It is possible that the last value in values is &quot;chunked&quot;,
# meaning that the rest of the value is sent in subsequent
# `PartialResultSet`(s). This is denoted by the chunked_value
# field. Two or more chunked values can be merged to form a
# complete value as follows:
#
# * `bool/number/null`: cannot be chunked
# * `string`: concatenate the strings
# * `list`: concatenate the lists. If the last element in a list is a
# `string`, `list`, or `object`, merge it with the first element in
# the next list by applying these rules recursively.
# * `object`: concatenate the (field name, field value) pairs. If a
# field name is duplicated, then apply these rules recursively
# to merge the field values.
#
# Some examples of merging:
#
# # Strings are concatenated.
# &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
#
# # Lists of non-strings are concatenated.
# [2, 3], [4] =&gt; [2, 3, 4]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are strings.
# [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are lists. Recursively, the last and first elements
# # of the inner lists are merged because they are strings.
# [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
#
# # Non-overlapping object fields are combined.
# {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
#
# # Overlapping object fields are merged.
# {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
#
# # Examples of merging objects containing lists of strings.
# {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
#
# For a more complete example, suppose a streaming SQL query is
# yielding a result set whose rows contain a single string
# field. The following `PartialResultSet`s might be yielded:
#
# {
# &quot;metadata&quot;: { ... }
# &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
# &quot;chunked_value&quot;: true
# &quot;resume_token&quot;: &quot;Af65...&quot;
# }
# {
# &quot;values&quot;: [&quot;orl&quot;]
# &quot;chunked_value&quot;: true
# &quot;resume_token&quot;: &quot;Bqp2...&quot;
# }
# {
# &quot;values&quot;: [&quot;d&quot;]
# &quot;resume_token&quot;: &quot;Zx1B...&quot;
# }
#
# This sequence of `PartialResultSet`s encodes two rows, one
# containing the field value `&quot;Hello&quot;`, and a second containing the
# field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
&quot;&quot;,
],
&quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
# be combined with more values from subsequent `PartialResultSet`s
# to obtain a complete field value.
}</pre>
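<p>For example, chunked <code>values</code> from successive <code>PartialResultSet</code> messages can be merged into complete rows as follows. This is a minimal sketch: it assumes only the final value of a message may be chunked and that the chunked values are strings (lists and objects merge by the same rules, applied recursively).</p>
<pre>
def merge_partial_result_sets(responses, num_columns):
    # Yield complete rows from an iterable of PartialResultSet dicts.
    values = []
    pending = None  # chunked tail carried over from the previous message
    for resp in responses:
        chunk = list(resp.get(&quot;values&quot;, []))
        if pending is not None and chunk:
            # Merge the carried-over chunk with this message&#x27;s first
            # value; string chunks are merged by concatenation.
            chunk[0] = pending + chunk[0]
            pending = None
        if resp.get(&quot;chunkedValue&quot;):
            pending = chunk.pop()
        values.extend(chunk)
    # Every num_columns complete values form one row.
    for i in range(0, len(values), num_columns):
        yield values[i:i + num_columns]

# The three PartialResultSets from the example above decode to two rows:
responses = [
    {&quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;], &quot;chunkedValue&quot;: True},
    {&quot;values&quot;: [&quot;orl&quot;], &quot;chunkedValue&quot;: True},
    {&quot;values&quot;: [&quot;d&quot;]},
]
print(list(merge_partial_result_sets(responses, 1)))
# [[&#x27;Hello&#x27;], [&#x27;World&#x27;]]
</pre>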
</div>
</body></html>