
Commit

Merge pull request #49 from jgeewax/transaction-docs-fix
Updated docstrings for consistency in gcloud.datastore.
jgeewax committed Feb 24, 2014
2 parents b0fa299 + 39ddc2c commit c20cbf1
Showing 5 changed files with 61 additions and 70 deletions.
8 changes: 0 additions & 8 deletions docs/datastore-api.rst
@@ -17,14 +17,6 @@ Connections
    :undoc-members:
    :show-inheritance:
 
-Credentials
------------
-
-.. automodule:: gcloud.datastore.credentials
-   :members:
-   :undoc-members:
-   :show-inheritance:
-
 Datasets
 --------

Expand Down
16 changes: 8 additions & 8 deletions gcloud/datastore/__init__.py
@@ -2,10 +2,10 @@
 You'll typically use these to get started with the API:
 
->>> import gcloud.datastore
->>> dataset = gcloud.datastore.get_dataset('dataset-id-here',
-                                           '[email protected]',
-                                           '/path/to/private.key')
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id-here',
+...                                 '[email protected]',
+...                                 '/path/to/private.key')
 >>> # Then do other things...
 >>> query = dataset.query().kind('EntityKind')
 >>> entity = dataset.entity('EntityKind')
@@ -46,8 +46,8 @@ def get_connection(client_email, private_key_path):
 Use this if you are going to access several datasets
 with the same set of credentials (unlikely):
 
->>> import gcloud.datastore
->>> connection = gcloud.datastore.get_connection(email, key_path)
+>>> from gcloud import datastore
+>>> connection = datastore.get_connection(email, key_path)
 >>> dataset1 = connection.dataset('dataset1')
 >>> dataset2 = connection.dataset('dataset2')
@@ -74,8 +74,8 @@ def get_dataset(dataset_id, client_email, private_key_path):
 You'll generally use this as the first call to working with the API:
 
->>> import gcloud.datastore
->>> dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id', email, key_path)
 >>> # Now you can do things with the dataset.
 >>> dataset.query().kind('TestKind').fetch()
 [...]
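The change these hunks make is purely stylistic: `from gcloud import datastore` and `import gcloud.datastore` load and bind the very same module object, so every doctest keeps working. A quick stdlib demonstration of that equivalence, using `os.path` as a stand-in for `gcloud.datastore`:

```python
import os.path
from os import path

# Both import forms resolve to the same module object; only the local
# name differs ("path" vs. the dotted attribute "os.path").
assert path is os.path
```

The shorter binding is what makes the rewritten doctests read as `datastore.get_dataset(...)` instead of `gcloud.datastore.get_dataset(...)`.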
8 changes: 4 additions & 4 deletions gcloud/datastore/connection.py
@@ -195,8 +195,8 @@ def run_query(self, dataset_id, query_pb, namespace=None):
 Under the hood, the :class:`gcloud.datastore.query.Query` class
 uses this method to fetch data:
 
->>> import gcloud.datastore
->>> connection = gcloud.datastore.get_connection(email, key_path)
+>>> from gcloud import datastore
+>>> connection = datastore.get_connection(email, key_path)
 >>> dataset = connection.dataset('dataset-id')
 >>> query = dataset.query().kind('MyKind').filter('property =', 'value')
@@ -238,9 +238,9 @@ def lookup(self, dataset_id, key_pbs):
 and is used under the hood for methods like
 :func:`gcloud.datastore.dataset.Dataset.get_entity`:
 
->>> import gcloud.datastore
+>>> from gcloud import datastore
 >>> from gcloud.datastore.key import Key
->>> connection = gcloud.datastore.get_connection(email, key_path)
+>>> connection = datastore.get_connection(email, key_path)
 >>> dataset = connection.dataset('dataset-id')
 >>> key = Key(dataset=dataset).kind('MyKind').id(1234)
8 changes: 4 additions & 4 deletions gcloud/datastore/query.py
@@ -31,8 +31,8 @@ class Query(object):
 which generates a query that can be executed
 without any additional work::
 
->>> import gcloud.datastore
->>> dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id', email, key_path)
 >>> query = dataset.query('MyKind')
 
 :type kind: string
@@ -217,8 +217,8 @@ def fetch(self, limit=None):
 For example::
 
->>> import gcloud.datastore
->>> dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id', email, key_path)
 >>> query = dataset.query('Person').filter('name =', 'Sally')
 >>> query.fetch()
 [<Entity object>, <Entity object>, ...]
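The `fetch` docstring above boils down to a simple contract: apply the query's filters, then return at most `limit` matching entities. A toy in-memory sketch of that contract (hypothetical names, plain dicts standing in for entities — not the real gcloud query machinery):

```python
# Rows standing in for entities of kind 'Person'.
people = [
    {"name": "Sally", "age": 30},
    {"name": "Bob", "age": 25},
    {"name": "Sally", "age": 41},
]

def fetch(rows, prop, value, limit=None):
    """Return rows whose `prop` equals `value`, truncated to `limit`."""
    hits = [row for row in rows if row.get(prop) == value]
    return hits if limit is None else hits[:limit]

# Mirrors query('Person').filter('name =', 'Sally').fetch()
assert len(fetch(people, "name", "Sally")) == 2
assert len(fetch(people, "name", "Sally", limit=1)) == 1
```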
91 changes: 45 additions & 46 deletions gcloud/datastore/transaction.py
@@ -15,22 +15,22 @@ class Transaction(object):
 (either ``insert_auto_id`` or ``upsert``)
 into the same mutation, and execute those within a transaction::
 
-  import gcloud.datastore
-  dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-  with dataset.transaction(bulk_mutation=True)  # The default.
-    entity1.save()
-    entity2.save()
+  >>> from gcloud import datastore
+  >>> dataset = datastore.get_dataset('dataset-id', email, key_path)
+  >>> with dataset.transaction(bulk_mutation=True):  # The default.
+  ...   entity1.save()
+  ...   entity2.save()
 
 To rollback a transaction if there is an error::
 
-  import gcloud.datastore
-  dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-  with dataset.transaction() as t:
-    try:
-      do_some_work()
-      entity1.save()
-    except:
-      t.rollback()
+  >>> from gcloud import datastore
+  >>> dataset = datastore.get_dataset('dataset-id', email, key_path)
+  >>> with dataset.transaction() as t:
+  ...   try:
+  ...     do_some_work()
+  ...     entity1.save()
+  ...   except:
+  ...     t.rollback()
 
 If the transaction isn't rolled back,
 it will commit by default.
@@ -42,8 +42,8 @@
 That means,
 if you try::
 
-with dataset.transaction():
-  entity = dataset.entity('Thing').save()
+>>> with dataset.transaction():
+...   entity = dataset.entity('Thing').save()
 
 ``entity`` won't have a complete Key
 until the transaction is committed.
@@ -52,12 +52,11 @@
 the automatically generated ID will be assigned
 to the entity::
 
-with dataset.transaction():
-  entity = dataset.entity('Thing')
-  entity.save()
-  assert entity.key().is_partial()  # There is no ID on this key.
-
-assert not entity.key().is_partial()  # There *is* an ID on this key.
+>>> with dataset.transaction():
+...   entity = dataset.entity('Thing')
+...   entity.save()
+...   assert entity.key().is_partial()  # There is no ID on this key.
+>>> assert not entity.key().is_partial()  # There *is* an ID on this key.
 
 .. warning::
   If you're using the automatically generated ID functionality,
@@ -73,16 +72,16 @@
 If you don't want to use the context manager
 you can initialize a transaction manually::
 
-transaction = dataset.transaction()
-transaction.begin()
+>>> transaction = dataset.transaction()
+>>> transaction.begin()
 
-entity = dataset.entity('Thing')
-entity.save()
+>>> entity = dataset.entity('Thing')
+>>> entity.save()
 
-if error:
-  transaction.rollback()
-else:
-  transaction.commit()
+>>> if error:
+...   transaction.rollback()
+... else:
+...   transaction.commit()
For now,
this library will enforce a rule of
@@ -95,31 +94,31 @@
 For example, this is perfectly valid::
 
-import gcloud.datastore
-dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-with dataset.transaction():
-  dataset.entity('Thing').save()
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id', email, key_path)
+>>> with dataset.transaction():
+...   dataset.entity('Thing').save()
 
 However, this **wouldn't** be acceptable::
 
-import gcloud.datastore
-dataset = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-with dataset.transaction():
-  dataset.entity('Thing').save()
-  with dataset.transaction():
-    dataset.entity('Thing').save()
+>>> from gcloud import datastore
+>>> dataset = datastore.get_dataset('dataset-id', email, key_path)
+>>> with dataset.transaction():
+...   dataset.entity('Thing').save()
+...   with dataset.transaction():
+...     dataset.entity('Thing').save()
 
 Technically, it looks like the Protobuf API supports this type of pattern;
 however, it makes the code particularly messy.
 If you really need to nest transactions, try::
 
-import gcloud.datastore
-dataset1 = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-dataset2 = gcloud.datastore.get_dataset('dataset-id', email, key_path)
-with dataset1.transaction():
-  dataset1.entity('Thing').save()
-  with dataset2.transaction():
-    dataset2.entity('Thing').save()
+>>> from gcloud import datastore
+>>> dataset1 = datastore.get_dataset('dataset-id', email, key_path)
+>>> dataset2 = datastore.get_dataset('dataset-id', email, key_path)
+>>> with dataset1.transaction():
+...   dataset1.entity('Thing').save()
+...   with dataset2.transaction():
+...     dataset2.entity('Thing').save()
 
 :type dataset: :class:`gcloud.datastore.dataset.Dataset`
 :param dataset: The dataset to which this :class:`Transaction` belongs.
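The rule the docstring enforces — one transaction per dataset, so nesting requires two dataset objects — can be sketched with a small guard flag on a toy dataset class (hypothetical names; not the real gcloud classes):

```python
class Dataset:
    """Toy dataset enforcing the one-transaction-per-dataset rule."""

    def __init__(self):
        self._in_txn = False

    def transaction(self):
        return _Transaction(self)

class _Transaction:
    def __init__(self, dataset):
        self._dataset = dataset

    def __enter__(self):
        if self._dataset._in_txn:
            raise RuntimeError("nested transaction on the same dataset")
        self._dataset._in_txn = True
        return self

    def __exit__(self, exc_type, exc, tb):
        self._dataset._in_txn = False
        return False

dataset1, dataset2 = Dataset(), Dataset()

# Valid: the inner transaction belongs to a *different* dataset object.
with dataset1.transaction():
    with dataset2.transaction():
        pass

# Invalid: nesting on the same dataset object is rejected.
nested_ok = True
try:
    with dataset1.transaction():
        with dataset1.transaction():
            pass
except RuntimeError:
    nested_ok = False
assert nested_ok is False
```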
