
Explore going the route of using Clients? #861

Closed · jgeewax opened this issue May 5, 2015 · 60 comments
Labels: api: core, auth, type: question (Request for information or clarification. Not an issue.)

Comments

@jgeewax commented May 5, 2015

After our last talk, quite a few different ideas have been tossed around to make it clear and obvious which credentials and project IDs are in use during a particular API call. Some of those have been:

  1. Changing the default values globally
  2. Making "Connections" a context manager (with connection: # do something)
  3. Creating the concept of a client

We do (1) and are talking about doing (2), while the other libraries tend to do (3) -- and comparing the code, I think (3) is the nicest.

gcloud-node:

var gcloud = require('gcloud');
var project1_storage = gcloud.storage({projectId: 'project1', keyFilename: '/path/to/key'});
var project2_storage_auto = gcloud.storage();

// Do things with the two "clients"...
var bucket1 = project1_storage.bucket('bucket-1');
var bucket2 = project2_storage_auto.bucket('bucket-2');

gcloud-ruby

require 'gcloud/storage'
project1_storage = Gcloud.storage "project-id-1", "/path/to/key"
project2_storage_auto = Gcloud.storage  # Magically figure out the project ID and credentials

# Do things with the two "clients"...
bucket1 = project1_storage.find_bucket "bucket-1"
bucket2 = project2_storage_auto.find_bucket "bucket-2"

gcloud-python

from gcloud import storage
from gcloud.credentials import get_for_service_account_json

# Create two different credentials.
credentials1 = get_for_service_account_json('key1.json')
credentials2 = get_for_service_account_json('key2.json')

# Create two different connections.
connection1 = storage.Connection(credentials=credentials1)
connection2 = storage.Connection(credentials=credentials2)

# Get two different buckets
bucket1 = storage.get_bucket('bucket-1', project='project1', connection=connection1)
bucket2 = storage.get_bucket('bucket-2', project='project2', connection=connection2)

gcloud-python if we followed the client pattern:

from gcloud import storage

project1_storage = storage.Client('project1', '/path/to/key')
project2_storage_auto = storage.Client()

# Do things with the two "clients"...
bucket1 = project1_storage.get_bucket('bucket-1')
bucket2 = project2_storage_auto.get_bucket('bucket-2')

Another option for gcloud-python using the client pattern:

import gcloud

project1_storage = gcloud.storage('project1', '/path/to/key')
project2_storage_auto = gcloud.storage()

# Do things with the two "clients"
bucket1 = project1_storage.get_bucket('bucket-1')
bucket2 = project2_storage_auto.get_bucket('bucket-2')

/cc @dhermes @tseaver

@jgeewax added the type: question, hygiene, api: core, and auth labels on May 5, 2015
@jgeewax added this to the Core Future milestone on May 5, 2015
@tseaver commented May 6, 2015

I honestly don't see the benefit to bundling project together with connection: it only gets passed to a couple of constructors (storage.bucket.Bucket, pubsub.topic.Topic). "Burying" it in a client object doesn't feel like a win for such limited use. The datastore.dataset.Dataset case is different, because a non-default dataset_id has to be passed in lots of places.

If we don't bundle the project, then we could just focus on adopting a common pattern for connection passing / lookup everywhere (#825 and related PRs).

@tseaver commented May 6, 2015

Note that the work toward #825 makes #2 above (using a connection as a context manager) actually feasible.
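
For concreteness, here is a minimal sketch of what option (2) could look like, assuming the thread-local stack that the #825 work enables; the names here are illustrative, not the actual gcloud code:

import threading

_LOCAL = threading.local()  # per-thread stack of active connections

class Connection(object):
    """Illustrative connection usable as a context manager."""

    def __enter__(self):
        # Push this connection so code inside the block can find it implicitly.
        stack = getattr(_LOCAL, 'stack', None)
        if stack is None:
            stack = _LOCAL.stack = []
        stack.append(self)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Pop on exit, restoring whatever connection was active before.
        _LOCAL.stack.pop()

def current_connection():
    """Return the innermost active connection, or None."""
    stack = getattr(_LOCAL, 'stack', None)
    return stack[-1] if stack else None

With something along these lines, "with connection:" scopes the connection to the block instead of mutating a global default.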

@dhermes commented May 6, 2015

It's probably worth it to create a list of all config that could be a candidate for a global (or membership in a long-lived object like a Dataset).

Right now we have

  • Datastore
    • Dataset ID
    • Connection -> Credentials
  • Storage
    • Project (only used in 2 of 34 methods, methods only required for "startup")
    • Bucket (not used anywhere, but could be)
    • Connection -> Credentials
  • Pubsub
    • Project (used in all API requests in the path, but rarely used directly in code)
    • Connection -> Credentials

@tseaver commented May 6, 2015

While the back-end pubsub API has project in all requests, our wrappers only require passing the project ID when constructing a Topic and when listing topics / subscriptions for a project: the other cases all get their project ID from the relevant Topic instance.
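
In code, that division of labor looks roughly like this (signatures are approximations based on the names used elsewhere in this thread, not verified against the library):

from gcloud import pubsub

# Project is needed only at construction / listing time...
topic = pubsub.Topic('my-topic', project='my-project')
topics = pubsub.list_topics(project='my-project')

# ...while everything downstream pulls the project off the Topic instance.
subscription = topic.subscribe('my-subscription')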

@dhermes commented May 6, 2015

@tseaver Added this to my list

@tseaver commented May 14, 2015

Issues about the "always use a Client" notion we discussed in Monday's call:

  • It interferes with the "stackable" pattern we use to allow batches / transactions to proxy for their underlying connections.
  • In every place we just visited in the Add optional 'connection' parameter for any method that hits the wire #825 PRs, we would probably end up replacing the connection=None argument and the _require_connection() inside the method with client=None and _require_client(), all for the benefit of developers who a) don't want to use the default project ID, and b) don't want to pass it to the Bucket / Topic constructors or the list_buckets / list_topics / list_subscriptions queries.
  • This line of development could logically be pushed to a point where we move all the API methods onto the Client, dropping all the intermediate classes in favor of pure imperative code.

@jgeewax commented May 14, 2015

@tseaver, hate to make more work for you, but is it crazy to toss in short exemplar code snippets illustrating the problem?

Some questions I'm having trouble answering based on our conversations...

  1. Is it bad form to have objects know about their "parent" (Blob knowing about its Bucket, which knows about its Client; or Entity + Key knowing about their Dataset, which knows about its Client)? If so, why?

  2. Is it bad form to make things involving the parent happen on that parent rather than the child? For example...

    # Swap
    topic = Topic('topic_name')
    topic.create()
    # for
    client.create_topic('topic_name')

    and then

    subscription = topic.subscribe('subscription_name')
    # or
    subscription = topic.get_subscription('subscription_name')
  3. Is it terrible if batches only permit a single client at a time? That is...

    clientA = datastore.Client()  # With automatic credentials...
    clientB = datastore.Client(project_id='other-project')  # With other credentials...
    
    entityA = ... # Get an entity from clientA
    entityB = ... # Get an entity from clientB
    
    entityA.age = 5
    entityB.age = 5
    
    with clientA.batch() as batch:
      batch.put(entityA)
      batch.put(entityB)  # Fails?
  4. You mentioned that if we go the client route, the "operate on the module" idea is not a good one. I don't have a huge preference, but I'm curious why it'd be horrible to offer an import-time constructed client if they want to "just use the default everything". For example...

    from gcloud.datastore import default_client as datastore
    key = datastore.Key('Product', 123)
    datastore.get([key])

    (I have no problems with us skipping this, but I wanted to ask....)


I get that this is an enormous number of questions, so if it's going to make GitHub explode and make everything unreadable, let's pick one of them for now and go with that...

@tseaver commented May 14, 2015

Is it bad form to have objects know about their "parent" (Blob knowing about its Bucket, which knows about its Client; or Entity + Key knowing about their Dataset, which knows about its Client)? If so, why?

I think there are different cases:

  • A Bucket instance is tied to a project (server-side), but not necessarily to a connection: e.g., one might have a connection whose credentials / scopes provided only read access to a given bucket, and another connection whose credentials / scopes had owner access. Also note that the project is only needed when instantiating a Bucket which does not yet exist server-side: existing buckets know their project, but the library user won't have to deal with it. (See the sketch after this list.)
  • A Blob instance is necessarily tied to a bucket due to the server-side API. There is no conceivable access pattern which allows using a blob with a foreign bucket. In addition, nearly all the Blob methods are implemented using self.bucket.
  • A Key instance knows its dataset_id, but needs nothing else from a "parent" object. Again, one might be using a key across different connections in order to take advantage of different credentials / scopes. An Entity instance holds its Key, but needs nothing beyond it (no connection, for instance).
  • Like a bucket, a Topic instance is tied to a project at creation time. No other methods require / allow the project, however. Also like a bucket, a topic could be used across multiple connections.
  • As a blob is tied to its bucket, a Subscription instance is intrinsically tied to its topic, and uses it to implement its methods (mostly to compute its path).
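
To make the first and fourth bullets concrete, here is a hypothetical snippet in the style of the #825 APIs (method names are assumed for illustration):

from gcloud import storage

# Credentials objects obtained elsewhere (e.g. from two key files).
read_conn = storage.Connection(credentials=read_only_credentials)
owner_conn = storage.Connection(credentials=owner_credentials)

# One Bucket instance, used across two connections with different scopes.
bucket = storage.Bucket('shared-bucket')
contents = bucket.get_all_blobs(connection=read_conn)  # read access suffices
bucket.delete(connection=owner_conn)                   # requires owner access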

@tseaver commented May 14, 2015

[Can you] toss in short examplar code snippets illustrating the problem?

["Always use a client"] interferes with the "stackable" pattern we use to allow batches / transactions to proxy for their underlying connections.

At the moment, one can use a batch or a transaction as a context manager and not pass it everywhere, or call methods through it. E.g.:

    from gcloud import datastore
    with datastore.Transaction():
        keys = [entity.key for entity in some_list]
        datastore.delete(keys)

This works because the _require_connection code is willing to look up the current batch / transaction from a thread-local stack, falling back to the global default. Calling API functions through the notional Client instance will defeat that stacking. Note that object methods will also lose that stacking.
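
For readers unfamiliar with that lookup, here is a rough sketch of the fallback order (not the actual source; get_default_connection() stands in for the real default lookup):

import threading

class _LocalStack(threading.local):
    """Thread-local stack of active batches / transactions."""

    def __init__(self):
        self._stack = []

    def push(self, item):
        self._stack.append(item)

    def pop(self):
        return self._stack.pop()

    @property
    def top(self):
        return self._stack[-1] if self._stack else None

_BATCHES = _LocalStack()

def _require_connection(connection=None):
    if connection is None:
        connection = _BATCHES.top              # innermost active batch / txn
    if connection is None:
        connection = get_default_connection()  # process-wide default
    if connection is None:
        raise EnvironmentError('Connection could not be inferred.')
    return connection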

In every place we just visited in the #825 PRs, we would probably end up replacing the connection=None argument and the _require_connection() inside the method with client=None and _require_client(), all for the benefit of developers who a) don't want to use the default project ID, and b) don't want to pass it to the Bucket / Topic constructors or the list_buckets / list_topics / list_subscriptions queries.

Per #825, all of the methods / functions that trigger remote API calls now take an optional connection argument (modulo #858, which isn't yet merged). They all fall back to using either the "stack" (for batch/transaction support) or the global default. If we implement "always use a client", then we would have to undo all that work (which cleaned up the code a lot) and go back to using self.connection attributes / properties.

This line of development could logically be pushed to a point where we move all the API methods onto the Client, dropping all the intermediate classes in favor of pure imperative code.

It is a bit of a strawman, but having "always use a client" as the access pattern reduces the desirability of the various client-side objects: they become basically thin wrappers over the client.

@jgeewax commented May 14, 2015

At the moment, one can use a batch or a transaction as a context manager and not pass it everywhere, or call methods through it. E.g.:

from gcloud import datastore
with datastore.Transaction():
    keys = [entity.key for entity in some_list]
    datastore.delete(keys)

This works because the _require_connection code is willing to look up the current batch / transaction from a thread-local stack, falling back to the global default. Calling API functions through the notional Client instance will defeat that stacking. Note that object methods will also lose that stacking.

I'm proposing that we make people be a bit more explicit for batches and transactions:

from gcloud import datastore
client = datastore.Client() #  This guesses the credentials and project ID magically
# or
client = datastore.Client(project='project-id', key_path='/path/to/key.json')

keys = [ .... ]

with client.Transaction() as t:
  t.delete(keys)

# or

with client.Transaction():
  client.delete(keys)

Wouldn't this make the "require connection" stuff just go away altogether? We never guess, or look to the global scope, or anything like that...

@dhermes commented May 14, 2015

Wasn't the original desire for a client so that people would type less? So we sacrifice usability elsewhere for the sake of usability of a feature which no one has asked for (easy context switching).

@jgeewax commented May 14, 2015

Wasn't the original desire for a client so that people would type less?

I don't see a huge difference in the amount of typing. What's the delta here?

So we sacrifice usability elsewhere ...

Where are we sacrificing usability? This is where I'm so confused. I see the client pattern as far more usable than the module-level actions and module-level global-only configuration pattern we (and App Engine) have...

@dhermes commented May 14, 2015

RE: "Where are we sacrificing usability?", I was referring to

I'm proposing that we make people be a bit more explicit for batches and transactions

The client pattern is not "far more usable than the module-level actions"; it is exactly identical (given our proposal). We are just swapping out

foo.method(...)

for

bar.method(...)

As for the actions, they are driven by the objects.

@jgeewax commented May 14, 2015

be a bit more explicit for batches and transactions

Sure, the explicit case here is one extra line; however, I believe we could make that go away if we wanted to (Tres didn't seem to think that was a good idea).

from gcloud import datastore
client = datastore.Client()
with client.Transaction():
  client.delete(keys)

versus

from gcloud import datastore
with datastore.Transaction():
  datastore.delete(keys)

The client pattern is not "far more usable than the module-level actions"; it is exactly identical (given our proposal).

I don't think it's identical when you look at dealing with multiple projects or multiple sets of credentials -- is it?

Assuming it's not in those cases, I'm having tons of trouble with the idea that we should design for usability for the one-and-only-one project and set of credentials, making the multi-project or multi-credential use cases less usable (IMO).

If we did the client pattern, wouldn't we get both ? Multiple clients with multiple projects with multiple sets of credentials are all really nice, and the one-and-only-one-everything is also really nice... No?

@dhermes commented May 14, 2015

But we already support multiple connections; it just requires more typing, which we would just shuffle from passing arguments to instantiating client objects.

When #864 is in, creating multiple Connections will be possible with a single line (rather than the 3 or 4 it used to take).
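
Whatever exact shape #864 takes, inlining the credential helper from the earlier gcloud-python example already gives a sense of the one-liner:

from gcloud import storage
from gcloud.credentials import get_for_service_account_json

# One line per connection, collapsing the two steps from the earlier example.
connection1 = storage.Connection(credentials=get_for_service_account_json('key1.json'))
connection2 = storage.Connection(credentials=get_for_service_account_json('key2.json'))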

@jgeewax commented May 14, 2015

But we already support multiple connections; it just requires more typing, which we would just shuffle from passing arguments to instantiating client objects.

Doesn't it seem weird and repetitive to mix in the configuration details at the time when you're trying to accomplish the action? Shouldn't we split the "configure" stage apart from the "make an API call" stage?

This is my understanding of what we've been aiming for since the beginning, sorry if it's wrong (or long).

The simple case: one-and-only-one everything. Cool.

from gcloud import datastore
key = datastore.Key('Product', '0')
product = datastore.get([key])

The multi-credential, multi-project case. Repetitive.

from gcloud import datastore

# Configure:
connectionA = datastore.Connection(...)
connectionB = datastore.Connection(...)

# Do stuff (and also configure):
keyA = datastore.Key('Product', 'a', dataset_id='dataset-a')
keyB = datastore.Key('Product', 'b', dataset_id='dataset-b')
productA = datastore.get([keyA], connection=connectionA, dataset_id='dataset-a')
productB = datastore.get([keyB], connection=connectionB, dataset_id='dataset-b')
productA.value = 1
datastore.put(productA, connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyA], connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyB], connection=connectionB, dataset_id='dataset-b')

Here's the direction I'm trying to push us...

The simple case: one-and-only-one everything (without any magic). Somewhat cool.

from gcloud import datastore
client = datastore.Client()
key = client.Key('Product', '0') # (or datastore.Key(), I'm not sure)
product = client.get(key)

The simple case: one-and-only-one everything (with the magic). Mostly cool.

from gcloud.datastore import default_client as datastore
key = datastore.Key('Product', '0')
product = datastore.get(key)

The simple case: one-and-only-one everything (with different magic). Cool.

from gcloud import datastore  # Module-level methods just proxy to default_client
key = datastore.Key('Product', '0')
product = datastore.get(key)

The multi-credential, multi-project case. Cool.

from gcloud import datastore

# Configure:
clientA = datastore.Client(project='projectA')
clientB = datastore.Client(project='projectB')
datasetA = clientA.Dataset('dataset-a')
datasetB = clientB.Dataset('dataset-b')

# Do stuff:
keyA = datasetA.Key('Product', 'a')
keyB = datasetB.Key('Product', 'b')
productA = datasetA.get(keyA)
productB = datasetB.get(keyB)
productA.value = 1
datasetA.put(productA)
datasetA.delete(keyA)
datasetB.delete(keyB)

@dhermes commented May 14, 2015

RE:

productA = datastore.get([keyA], connection=connectionA, dataset_id='dataset-a')
productB = datastore.get([keyB], connection=connectionB, dataset_id='dataset-b')
productA.value = 1
datastore.put(productA, connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyA], connection=connectionA, dataset_id='dataset-a')
datastore.delete([keyB], connection=connectionB, dataset_id='dataset-b')

We've been trying to point out that the connection is the only config object that gets passed on a regular basis. The dataset ID is encoded in the key and is not needed in any of those method calls.


So in the above proposal we go from the concept of Noun+Verb (Key+delete) to

client -> dataset -> verb + client -> dataset -> noun

and we consider

datastore.Client(project='projectA').Dataset('dataset-a').delete([keyA])

to be superior to

datastore.delete([keyA], connection=connectionA)

I don't see an improvement.


(Also note that the project is as yet unneeded for datastore, but as you've mentioned, the landscape is changing, though at an unknown date.)

@jgeewax commented May 14, 2015

Wait -- datastore.delete(keyA, connection=connectionA) -- where is the dataset ID there?

Sorry, I missed "The dataset ID is encoded in the key and is not needed in any of those method calls.":

OK, so then datastore.Client(project='project-a').delete(keyA) should work just fine. So the comparison is:

  • datastore.Client(project='project-a').delete(keyA)
  • datastore.delete(keyA, connection=datastore.Connection(project='project-a'))

And the multiple method calls, we're comparing:

client = datastore.Client(project='project-a')
client.delete(keyA)
if client.get(keyA2).whatever == 42:
  client.delete(keyA2)

to

connection = datastore.Connection(project='project-a')
datastore.delete(keyA, connection=connection)
if datastore.get(keyA2, connection=connection).whatever == 42:
  datastore.delete(keyA2, connection=connection)

@dhermes commented May 14, 2015

It's in the key.

@jgeewax commented May 14, 2015

Sorry, updated my note -- misread.

@dhermes commented May 14, 2015

I still maintain that we are adding more concepts / classes to gain functionality which we already have. The only repetitive use is connection. All other config gets encoded in the objects (as eloquently put forth by @tseaver in #861 (comment)).

There is no way to save the repetitive typing of connection; we just exchange it for client (and in the process add a new concept).

@jgeewax commented May 14, 2015

You're saying the first example with client is using it repetitively? We're not swapping client for connection -- we're swapping client for datastore. I agree: I don't think we could get rid of client in the first, or datastore in the second.

I do think we could get rid of connection in the second:

connection = datastore.Connection(project='project-a')
connection.delete(keyA)
if connection.get(keyB).whatever == 42:
  connection.delete(keyB)

... but then s/connection/client/ and you have the first example...

@jgeewax commented May 14, 2015

I still maintain that we are adding more concepts / classes to gain functionality which we already have.

Functionality -- yes. It is certainly possible to do these things with what we have.

Usability (aka, not repeating ourselves) -- I don't think so. It looks like we're typing connection far too often when we need to be specific about it, right?

@dhermes commented May 14, 2015

RE: "It looks like we're typing connection far too often " But we exchange typing connection for typing client. There is no gain.

Two simultaneous contexts require typing more to keep them distinct. That is something we can't escape.

I'd prefer we just do nothing until we have users say "it's important for me to have multiple credentials / config active at once". And then we can hear their requirements and design based on real feedback.

@pcostell commented May 14, 2015
@dhermes I don't like implicitly changing behavior. I.e.:

datastore.put(e)  # to default project
set_client(client)
... lots of non datastore code
datastore.put(e)  # now to some other project

Us being able to do the smart thing for users that don't need or want extra configuration is fine.

Yes, idiomatic is important. But are you saying that passing in the connection everywhere is more idiomatic in Python? ISTM that if there isn't a clearly more idiomatic way to do things, being consistent is a better policy.

@jgeewax fair point, maybe client is the better name (but then you could still have a Database/Dataset which contained a client that you could interact with directly).

@dhermes commented May 14, 2015

@pcostell I wasn't trying to connect the idiomatic comment to the above discussion, just to point out that doing something to match the other runtimes isn't a good reason to do it.

@marcja commented May 14, 2015

Interesting.

My take on this is that simply focusing on the quantity of typing or the relative aesthetic elegance of the code is distracting from more important issues. To me, there are three: 1) consistency between gcloud API projections in different languages; 2) consistency between the gcloud-python API and other APIs that users might use; and 3) user expectations about what should be "remembered".

Consistency between gcloud API projections in different languages

Why does it matter? In a polyglot world, it's not unreasonable that developers have to move between languages, from client-side JavaScript to server-side front-end Python to server-side back-end Java (as just one example). If I've used the Datastore API in one language, it should feel familiar when using it in another language on another part of the stack. Further (and perhaps more importantly), when I search the web for answers to my Datastore usage questions, if I find an answer on StackOverflow that is in Ruby, I should be able to immediately adapt that answer to my code in Python. The examples from Ruby and Node that JJ listed at the top of this thread made sense to me, and as a user I would be surprised if Python worked differently. If the documentation on cloud.google.com had a "how to connect to Datastore" inset that had tabs for each language, I would be annoyed if I were flipping back and forth between the various languages and the Python sample deviated from the norm established by the rest.

Consistency between gcloud-python and other APIs that users might use

Why does it matter? Our users do not have access to Datastore outside of GCP (with the exception of the Datastore fake when doing local App Engine development), and they will not have experience working with Datastore until they become GCP customers. But our users will have experience running MySQL clients, SQLite, CouchDB, MongoDB, and many others. And the bulk of the first-party Python APIs for those products employ the model of a client/connection object with operations hanging off that object. When I read the samples for the Datastore API that did NOT use the client/connection model, I was surprised as a user because I could not relate it to my prior experience with these other APIs. That created a cognitive load, because I now needed to "figure out" how this API worked when all I knew upon first reading was that it didn't work like the others I was already familiar with. I have years of experience working with many different databases using different languages on different platforms. I want my APIs to tap into my knowledge, not set it aside.

User expectations about what should be "remembered"

Why does it matter? As a user, I rely on the fact that computers have a better memory than I do. I put things into computers so that I don't have to remember them. I put things into computers because there is more than I could possibly remember. The point of this is that I hate it when I tell a computer something and it doesn't remember it when I expect it to. It's a bad user experience. When I read an API like:

from gcloud.datastore import Connection, delete
connection = Connection(...)
delete(key0, connection=connection)
delete(key1, connection=connection)

it feels "forgetful" to me. I've told you my connection. Why do you keep forgetting it? Why do I have to keep reminding you of it? On the other hand, though functionally equivalent, this:

from gcloud import datastore
client = datastore.Client(...)
client.delete(key1)
client.delete(key2)

does not create the same reaction for me. I've told the client about my connection, and without having to remind it of anything about my connection, I can simply tell the client what I want it to do. In a sense, I have delegated the responsibility of remembering those details to something, and now I want to give my instructions to that something to "just take care of it". The client idiom meets that expectation, while the parameterized idiom does not.

To be clear, I have read and understood the more technical points about "stackability" and the implementation considerations, and I also totally get that the "forgetfulness" argument is only at play when the user is working with multiple connections/datasets. That said, I am more concerned about the user experience of this API (in the context of the universe of gcloud and other APIs it lives in). There will no doubt be users for whom multiple connections/datasets are a requirement. Their user experience should not deviate from the mainstream user experience. Their user experience should not turn into "forgetful" mode just because their business problem got a little more complex.

@dhermes commented May 14, 2015

@marcja RE: "User expectations about what should be 'remembered'": we have made it very convenient for users who don't need to context switch to have the library automatically detect and remember their configuration details.

For a user running in GCE

from gcloud import datastore
key = datastore.Key('Kind', 1234)  # Dataset ID is inferred from GCE environ
datastore.delete(key)  # Connection credentials inferred from GCE service account

For a user running in a custom environment

$ export GCLOUD_DATASET_ID="foo"
$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials/file.json"

and then

from gcloud import datastore
key = datastore.Key('Kind', 1234)  # Dataset ID is inferred from env. var.
datastore.delete(key)  # Connection credentials inferred from env. var.

The snippets

delete(key0, connection=connection)
delete(key1, connection=connection)

were meant for a case where a user has two separate connections, in which case being explicit is important.

@marcja commented May 14, 2015

@dhermes Yes, exactly as I said in my final paragraph above: I understand that it only applies to the multiple-connections case, but in that case it feels "forgetful".

@jgeewax commented May 15, 2015

Yea - I think we all agree that the module-level case is most common and nice to have. The client pattern doesn't preclude us from doing that, though... We can totally still provide that style while still making the multiple-connection case nice...

@tmatsuo commented May 15, 2015

Just a thought.

Are we really sure that people want to use multiple credentials? That means one single application has multiple identities. I think that is cumbersome maintenance-wise.

I would just attach one single identity to an application, and add necessary permissions to that identity. For example, I would add a service account to the second project, instead of having multiple service accounts. I understand there is a strong need for multi-project access though.

Maybe I'm missing something -- what's the use case for multi-credential access?

@jgeewax commented May 15, 2015

One use case is moving and reconciling data between two apps: get data out of one dataset, move it into another dataset. Configuring the project1/project2 stuff with a client is a common pattern. Passing in arguments, to use @marcja's term, seems "forgetful".

Regardless of that, the fact that configuring is always done in the global scope is an issue for me. I'll let @Alfus chime in on this one as well.

@dhermes commented May 15, 2015

@jgeewax RE: "use case is moving and reconciling data between two apps" -- @tmatsuo's point is that you could just add a service account from project 1 as an admin for project 2 and use the same credential.

As for global scope, that has bothered me in some ways too. However, if you don't do it within the package, then the "it just works" with no set-up (that I discussed above) is never possible.

@jgeewax commented May 15, 2015

As for global scope, that has bothered me in some ways too.

Cool... so let's fix it... Here's a reason (or excuse) to do that... And the "it just works" stuff can still exist too.

@tmatsuo commented May 15, 2015

@dhermes Yes! Re: copying data, I would temporarily add service account A to the permission list of project B. It is an easier and safer option than temporarily pushing another JSON key to the application, no?

The use cases I can come up with are the following two:

  1. Use the service account and 3 legged OAuth in a single app. For example, a storage application, where the management tasks are done with the service account, but when the end users access their files, they use their own OAuth2 credentials.
  2. Thread/process safety

For the first case, I might store the users' credentials and re-use them. Anyway, I think I would have to change credentials within a single application.

For the second case, I would prefer the library handling thread/process safety for me.

Re: the fact that configuring is always done in the global scope: I also see the problem. If a multi-thread/process application changes the global configuration, what happens?

@dhermes commented May 15, 2015

@jgeewax

How can

And the "it just works" stuff can still exist too.

work? We manage this by having the get_default_foo() methods as fallbacks if the user doesn't pass the corresponding config value. They fall back to globals (having get_default_foo() re-compute its output every time is a bad idea, and using a memoizing decorator on the methods is still using globals).
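
A bare-bones sketch of that get_default_foo() pattern, with a module-level dict acting as the memo (the _determine_* helper is a placeholder for the env-var / GCE inference described earlier):

_DEFAULTS = {}  # module-level memo; this is the "global" in question

def get_default_dataset_id():
    """Compute the inferred dataset ID once, then serve it from the memo."""
    if 'dataset_id' not in _DEFAULTS:
        # Placeholder for the env-var / GCE metadata lookup.
        _DEFAULTS['dataset_id'] = _determine_default_dataset_id()
    return _DEFAULTS['dataset_id']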

@marcja commented May 15, 2015

This is reasonable, but I encourage you all to write the help article first to guide folks through this use case. While your approach is logical, I can't say all users would find it obvious. As a user I would immediately try to do dual credentials, only to discover it doesn't work, and then search the web to find out how to make it work. And then I find your article that says, "you can't do it with code; you have to do it at least partly through administration".

@dhermes commented May 15, 2015

@tmatsuo Right now the global is not thread-safe, but we do pay some attention to thread-safety in batching / transactions, so it wouldn't be hard to move over.

The case you lay out above (user creds. + service account) seems like such a rare case and potentially something we should discourage users from building.


@marcja

As a user I would immediately try to do dual credentials

You mean this is something you would immediately experiment with? It's not obvious to me that anyone who wasn't very familiar with Google APIs would think it even possible to have more than one way to authenticate an application.

@marcja commented May 15, 2015

I think I'm failing to make my point.

When I buy an AV receiver, I expect to leverage my prior knowledge and be able to set up my home theater system without cracking the manual. When I get an AV receiver that deviates from my experience, that thwarts my expectations, and I have to resort to cracking the manual, I suddenly get very annoyed. Seriously, did you (the AV receiver product designers) even try to make it obvious for me (an experienced techie) to use? Or did you think being fancy/cute/novel/innovative was helpful/valuable to me? I like technology and have no fear of learning, but sometimes I just want to get my home theater set up without being given an object lesson in how current home theater systems are "backward" or "ripe for innovation".

To make the analogy clear: I know that, as an experienced developer, I have the skills to recover from failure to complete my task on my first attempt. I know what Google searches to make, I know how to make inferences from documentation, I know how to keep hacking on it until it works. But "seriously, Google", did you even try to make it obvious for me (an experienced developer) to use? You couldn't help me get it to work without searching Google or mining StackOverflow?

As a senior person at Google, I have heard the customer feedback that a) our APIs seem like they were written for what's easy to implement versus what's easy to use, and that b) it often seems like our left hand doesn't know what our right hand is doing. I care deeply about how obvious and consistent our GCP APIs are. Having API issues that are "one search away" from an answer means that our customers are also "one search away" from switching to Amazon, Microsoft, or some other competitor.

@jgeewax commented May 15, 2015

How can

And the "it just works" stuff can still exist too.

work? We manage this by having the get_default_foo() methods as fallbacks if the user doesn't pass the corresponding config value. They fall back to globals (having get_default_foo() re-compute its output every time is a bad idea, and using a memoizing decorator on the methods is still using globals).

The way I'd suggested before was...

Code I'd write

from gcloud import datastore
datastore.get(datastore.Key('Product', 123))

or

from gcloud.datastore import default_client as datastore
datastore.get(datastore.Key('Product', 123))

Code under the hood:

# Maybe this is in api.py?
from gcloud import datastore

# Create a global default client that users may or may not use.
default_client = datastore.Client()

# Bring that default client's methods into global scope.
get = default_client.get
delete = default_client.delete
set_project = default_client.set_project
set_credentials = default_client.set_credentials
# ...

Hm, I want to change the settings on the globally shared default connection...

datastore.set_project('...')
datastore.set_credentials('...')
# or
default_client.set_project('...')
default_client.set_credentials('...')

I want to be explicit now about using the default client...

from gcloud import datastore
client = datastore.default_client
other_client = datastore.Client(credentials=other_credentials)

We manage this by having the get_default_foo() methods as fallbacks if the user doesn't pass the corresponding config value.

Not sure I understand what get_default_foo() is here, but in the example I'm saying that the methods themselves never "try to figure out the client". The client is either explicitly created by the user, or the "special one" that we explicitly created for the user. No more DefaultsContainer or _implicit_environ stuff... All code everywhere is always using a client -- we just happen to have created one with no arguments for you as a convenience.

having get_default_foo() re-compute its output every time is a bad idea

Again, not 100% certain on what you mean by that method, but I'm going to take a guess. If you mean that we shouldn't re-compute the magic values every time (and we also shouldn't compute them for the first time until someone needs them), I agree. So a Client shouldn't try to figure out what the magic things are until methods are called on it. That is:

# datastore client

class Client(object):
  def __init__(self, credentials=None, ...):
    self._credentials = credentials
    self._connection = None

  def get(self, key, ...):
    self.connection._rpc('...')

  @property
  def connection(self):
    if self._connection is None:
      self._connection = Connection(credentials=self.credentials, ...)
    return self._connection

  @property
  def credentials(self):
    if self._credentials is None:
      self._credentials = discovery_module.magically_figure_out_my_creds()
    return self._credentials

So instantiating a Client() doesn't try to figure anything out. Once I call a method that needs those properties, we trigger the "figure out everything please, using magic if necessary, throwing errors if necessary". The memo-ized state is at the object level (it's all inside the Client object) and the client is globally available.

@jgeewax commented May 21, 2015

So... looks like this conversation has gone silent... but there's some talk on PR #886. Here's the summary of what I see so far:

Pattern         Voters                                  Total
Client          @jgeewax, @marcja, @Alfus, @pcostell    4
Global config   @tseaver, @dhermes                      2
Not voted yet?  @tmatsuo                                1

Given that scenario, I really want to push that unless someone shows why going the client route is a horrible idea, we should get started on moving in that direction ("it's a lot of work" doesn't count).

@tseaver commented May 21, 2015

Seems to me that voting isn't quite what we need, especially without something more concrete to consider. Rather than talking further past one another, can we work on exploring the idea more fully by writing up the "developer experience" documents which show all the ways clients will be used, similar to the one I wrote for pubsub?

@jgeewax commented May 21, 2015

This whole thread has been about "what code a developer has to write", with many examples (I think, at least), so I'm not sure what else we should explore. @dhermes mentioned that we're not stating any new facts, just debating "which is better".

Given that, we've had several people chime in with feelings that the globally-configured way of doing things "seems forgetful" -- the majority preferring the client pattern. I've not heard anyone say "if we go the client pattern way, we can't still make the exact same code we have today continue to work just fine". So it seems to me that if we wanted to offer both, we could if we went the client route, and we couldn't if we went the globally-configured route.

Overall, I'm having trouble seeing what the argument is against going the route of client pattern. Can anyone chime in and explain why it's a bad idea?

I think the feedback has been "for multi-client, multi-credential, multi-project code, we prefer the client pattern", so I'm looking for other reasons why it's a bad choice.

@dhermes commented May 21, 2015

  1. (Most importantly) I have lost the energy to fight on this, but have not changed my opinion. I'm ready to implement clients.
  2. The reason not to write clients is that it is more code to maintain and introduces "more ways to do the same thing", which confuses users.
  3. Keeping "the exact same code we have today" working just fine doesn't address @marcja's concern about thwarted expectations or @pcostell's concerns about implicit globals.
  4. The "seems forgetful" concerns were about multi-tenant code. We've yet to hear from any users who need multiple credentials at once, and @tmatsuo pointed out that we can just recommend that users add a service account to a different project. (IMO Google as well as library devs should recommend this.)

@tseaver commented May 21, 2015

Like Danny, I've run out of steam to argue.

The reason that I suggested starting a DX document is that I think it will either make the concerns I have obvious, or else make it clear that they can be addressed cleanly.

At the moment, the only way I can see to cleanly implement clients without re-welding them into every class we just cleaned up is to have the clients work at a layer "above" the classes we have now, with proxies for the "real" objects that pass in the connection from the client to every call that takes one. Maybe we can come up with a cleaner implementation than that, if we have a specific outline of how the client objects get used.
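
One way to read "proxies for the 'real' objects": a generic wrapper that forwards attribute access and injects the client's connection into every call that accepts one. A sketch of the idea only, not a proposed implementation:

class _ConnectionProxy(object):
    """Wraps a 'real' object, injecting a connection into each method call."""

    def __init__(self, wrapped, connection):
        self._wrapped = wrapped
        self._connection = connection

    def __getattr__(self, name):
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr

        def _call(*args, **kwargs):
            # Supply the client's connection unless the caller overrides it.
            kwargs.setdefault('connection', self._connection)
            return attr(*args, **kwargs)

        return _call

# A client could then hand out proxies, e.g.:
#   bucket = _ConnectionProxy(storage.Bucket('b'), client_connection)
#   bucket.delete()  # -> Bucket.delete(connection=client_connection)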

@dhermes closed this as completed Aug 27, 2015
parthea pushed a commit that referenced this issue Oct 21, 2023
* Add new "quickstart" samples [(#547)](GoogleCloudPlatform/python-docs-samples#547)

* Quickstart tests [(#569)](GoogleCloudPlatform/python-docs-samples#569)

* Add tests for quickstarts
* Update secrets

* Fix vision failure on Python 3

Change-Id: Ieb53e6cdd8b1a70089b970b7a2aa57dd3d24c3de

* Generate most non-appengine readmes

Change-Id: I3779282126cdd05b047194d356932b9995484115

* Update samples to support latest Google Cloud Python [(#656)](GoogleCloudPlatform/python-docs-samples#656)

* Auto-update dependencies. [(#715)](GoogleCloudPlatform/python-docs-samples#715)

* Vision cloud client snippets [(#751)](GoogleCloudPlatform/python-docs-samples#751)

* fixes typo in detect_properties [(#761)](GoogleCloudPlatform/python-docs-samples#761)

* Vision 1.1 [(#827)](GoogleCloudPlatform/python-docs-samples#827)

* Adds vision 1.1 features

* Update README

* Updates GCS snippet to match local file [(#836)](GoogleCloudPlatform/python-docs-samples#836)

* Improvess consistency in docs and fixes links in restructured text [(#839)](GoogleCloudPlatform/python-docs-samples#839)

* Auto-update dependencies. [(#825)](GoogleCloudPlatform/python-docs-samples#825)

* Crop hints tutorial [(#861)](GoogleCloudPlatform/python-docs-samples#861)

* Adds crop hints tutorial.

* Uses aspect ratio so that we actually crop.

* Addresses review feedback

* nits

* Restructures samples for CI

* Auto-update dependencies. [(#866)](GoogleCloudPlatform/python-docs-samples#866)

* Adds document text detection tutorial. [(#868)](GoogleCloudPlatform/python-docs-samples#868)

* Adds document text detection tutorial.

* Feedback from review

* Less whitespace and fewer hanging indents

* Fixes a few style issues that came up in document text review. [(#871)](GoogleCloudPlatform/python-docs-samples#871)

* Fixes a few style issues that came up in document text review.

* Fixing my breaks

* Auto-update dependencies. [(#872)](GoogleCloudPlatform/python-docs-samples#872)

* An attempt at flattening the detect example [(#873)](GoogleCloudPlatform/python-docs-samples#873)

* Adds web detection tutorial [(#874)](GoogleCloudPlatform/python-docs-samples#874)

* Vision face tutorial [(#880)](GoogleCloudPlatform/python-docs-samples#880)

* Updates sample to use the Cloud client library

* Nits found after commit

* Nudge for travis

* flake8 hates my face

* Auto-update dependencies. [(#876)](GoogleCloudPlatform/python-docs-samples#876)

* Remove cloud config fixture [(#887)](GoogleCloudPlatform/python-docs-samples#887)

* Remove cloud config fixture

* Fix client secrets

* Fix bigtable instance

* Auto-update dependencies. [(#888)](GoogleCloudPlatform/python-docs-samples#888)

* Remove resource [(#890)](GoogleCloudPlatform/python-docs-samples#890)

* Remove resource fixture

* Remove remote resource

* Re-generate all readmes

* Auto-update dependencies. [(#922)](GoogleCloudPlatform/python-docs-samples#922)

* Auto-update dependencies.

* Fix pubsub iam samples

* Adds checks for all features using https. [(#944)](GoogleCloudPlatform/python-docs-samples#944)

* Adds checks for all features using https.

* Fixes overindent for lint

* Fix README rst links [(#962)](GoogleCloudPlatform/python-docs-samples#962)

* Fix README rst links

* Update all READMEs

* Auto-update dependencies. [(#1004)](GoogleCloudPlatform/python-docs-samples#1004)

* Auto-update dependencies.

* Fix natural language samples

* Fix pubsub iam samples

* Fix language samples

* Fix bigquery samples

* Auto-update dependencies. [(#1011)](GoogleCloudPlatform/python-docs-samples#1011)

* Auto-update dependencies. [(#1033)](GoogleCloudPlatform/python-docs-samples#1033)

* Vision GAPIC client library [(#1015)](GoogleCloudPlatform/python-docs-samples#1015)

* Migrate quickstart to gapic

* formatting

* updating detect_faces, failing tests

* Migrate detect_faces to gapic

* Migrate detect_labels to gapic

* Migrate detect_landmarks to gapic

* Migrate detect_logos to gapic

* remove "Likelihood" from test outputs

* Migrate detect_safe_search to gapic

* Migrate detect_text to gapic

* Migrate detect_properties to gapic

* Migrate detect_web to gapic

* Migrate crophints to gapic

* Migrate detect_document to gapic;

* Migrate crop_hints.py to gapic

* hard code the likelihood names

* Make code snippets more self-contained

* Migrate doctext.py to gapic

* Migrate web_detect.py to gapic

* Migrate faces.py to gapic

* flake8

* fix missing string format

* remove url scores from sample output

* region tags update

* region tag correction

* move region tag in get crop hints

* move region tags

* import style

* client creation

* rename bound to vertex

* add region tags

* increment client library version

* update README to include link to the migration guide

* correct version number

* update readme

* update client library version in requirements and readme

* Auto-update dependencies. [(#1055)](GoogleCloudPlatform/python-docs-samples#1055)

* Auto-update dependencies.

* Explicitly use latest bigtable client

Change-Id: Id71e9e768f020730e4ca9514a0d7ebaa794e7d9e

* Revert language update for now

Change-Id: I8867f154e9a5aae00d0047c9caf880e5e8f50c53

* Remove pdb. smh

Change-Id: I5ff905fadc026eebbcd45512d4e76e003e3b2b43

* Auto-update dependencies. [(#1093)](GoogleCloudPlatform/python-docs-samples#1093)

* Auto-update dependencies.

* Fix storage notification poll sample

Change-Id: I6afbc79d15e050531555e4c8e51066996717a0f3

* Fix spanner samples

Change-Id: I40069222c60d57e8f3d3878167591af9130895cb

* Drop coverage because it's not useful

Change-Id: Iae399a7083d7866c3c7b9162d0de244fbff8b522

* Try again to fix flaky logging test

Change-Id: I6225c074701970c17c426677ef1935bb6d7e36b4

* Update all generated readme auth instructions [(#1121)](GoogleCloudPlatform/python-docs-samples#1121)

Change-Id: I03b5eaef8b17ac3dc3c0339fd2c7447bd3e11bd2

* Added Link to Python Setup Guide [(#1158)](GoogleCloudPlatform/python-docs-samples#1158)

* Update Readme.rst to add Python setup guide

As requested in b/64770713.

This sample is linked in documentation https://cloud.google.com/bigtable/docs/scaling, and it would make more sense to update the guide here than in the documentation.

* Update README.rst

* Update README.rst

* Update README.rst

* Update README.rst

* Update README.rst

* Update install_deps.tmpl.rst

* Updated readmegen scripts and re-generated related README files

* Fixed the lint error

* Auto-update dependencies. [(#1138)](GoogleCloudPlatform/python-docs-samples#1138)

* Auto-update dependencies. [(#1186)](GoogleCloudPlatform/python-docs-samples#1186)

* Auto-update dependencies. [(#1245)](GoogleCloudPlatform/python-docs-samples#1245)

* Vision beta [(#1211)](GoogleCloudPlatform/python-docs-samples#1211)

* remove unicode [(#1246)](GoogleCloudPlatform/python-docs-samples#1246)

* Added "Open in Cloud Shell" buttons to README files [(#1254)](GoogleCloudPlatform/python-docs-samples#1254)

* Auto-update dependencies. [(#1282)](GoogleCloudPlatform/python-docs-samples#1282)

* Auto-update dependencies.

* Fix storage acl sample

Change-Id: I413bea899fdde4c4859e4070a9da25845b81f7cf

* Auto-update dependencies. [(#1320)](GoogleCloudPlatform/python-docs-samples#1320)

* Vision API features update [(#1339)](GoogleCloudPlatform/python-docs-samples#1339)

* Revert "Vision API features update [(#1339)](GoogleCloudPlatform/python-docs-samples#1339)" [(#1351)](GoogleCloudPlatform/python-docs-samples#1351)

This reverts commit fba66eec5b72a8313eb3fba0a6601306801b9212.

* Auto-update dependencies. [(#1377)](GoogleCloudPlatform/python-docs-samples#1377)

* Auto-update dependencies.

* Update requirements.txt

* fix landmark sample [(#1424)](GoogleCloudPlatform/python-docs-samples#1424)

* Vision GA [(#1427)](GoogleCloudPlatform/python-docs-samples#1427)

* replace types. with vision.types. in detect.py

* copy beta code snippets

* update tests, flake

* remove beta_snippets

* update command line interface to include web-geo samples

* flake

* simplify detect document text

* [DO NOT MERGE] Vision API OCR PDF/TIFF sample [(#1420)](GoogleCloudPlatform/python-docs-samples#1420)

* add docpdf sample

* import order

* list blobs

* filename change

* add the renamed files

* parse json string to AnnotateFileResponse message

* show more of the response

* simplify response processing to better focus on how to make the request

* fix typo

* linter

* linter

* linter

* Regenerate the README files and fix the Open in Cloud Shell link for some samples [(#1441)](GoogleCloudPlatform/python-docs-samples#1441)

* detect-pdf update [(#1460)](GoogleCloudPlatform/python-docs-samples#1460)

* detect-pdf update

* update test

* Update READMEs to fix numbering and add git clone [(#1464)](GoogleCloudPlatform/python-docs-samples#1464)

* Move ocr pdf/tiff samples to GA [(#1522)](GoogleCloudPlatform/python-docs-samples#1522)

* Move ocr pdf/tiff samples to GA

* Remove blank spaces and fragment

* Fix the vision geo test. [(#1518)](GoogleCloudPlatform/python-docs-samples#1518)

Sometimes, Vision sees Zepra.  Othertimes, it sees Electra Tower.

* [DO_NOT_MERGE] Add samples for object localization and handwritten ocr [(#1572)](GoogleCloudPlatform/python-docs-samples#1572)

* Add samples for object localization and handwritten ocr

* Update to released lib

* Update beta_snippets.py

* [DO NOT MERGE] Product search [(#1580)](GoogleCloudPlatform/python-docs-samples#1580)

Product search

* Update vision web_detect test image [(#1607)](GoogleCloudPlatform/python-docs-samples#1607)

The original image no longer appears on cloud.google.com/vision

* Vision - remove unused region tags [(#1620)](GoogleCloudPlatform/python-docs-samples#1620)

* Vision region tag update [(#1635)](GoogleCloudPlatform/python-docs-samples#1635)

* Udpate Beta Vision samples to use beta tags [(#1640)](GoogleCloudPlatform/python-docs-samples#1640)

* Update samples to GA, cleanup tests, delete old samples [(#1704)](GoogleCloudPlatform/python-docs-samples#1704)

* Add print output to crop hints tutorial [(#1797)](GoogleCloudPlatform/python-docs-samples#1797)

* Remove unused code [(#1745)](GoogleCloudPlatform/python-docs-samples#1745)

* Display the score/confidence value [(#1429)](GoogleCloudPlatform/python-docs-samples#1429)

* Display the score/confidence value

A small code addition to display the score/confidence value of a detected face above the face detection box on the output image. This is very useful to know the confidence!

* Changes applied to meet coding style requirements 

I have edited the already submitted code to meet the coding style requirements!

* Edits because white spaces

* Remove [(#1431)](GoogleCloudPlatform/python-docs-samples#1431)

I'm updating all the openapi files in the getting-started sample in all the sample repos to remove basePath: "/"
Here's the reason from simonz130:

From the OpenAPI 2 spec:
* basePath: "If it is not included, the API is served directly under the host. The value MUST start with a leading slash (/). "
* Paths for methods: "A relative path to an individual endpoint. The field name MUST begin with a slash. The path is appended to the basePath in order to construct the full URL."

This OpenAPI getting-started sample have basePath: "/", which (per strict spec interpretation) means all the paths start with double-slashes. (e.g "//v1/shelves" rather than "/v1/shelves"). Removing basepath="/" fixes that.

* Auto-update dependencies. [(#1846)](GoogleCloudPlatform/python-docs-samples#1846)

ACK, merging.

* update samples for product search GA [(#1861)](GoogleCloudPlatform/python-docs-samples#1861)

* update samples for product search GA

* update to use 0.35.1

* Use default font [(#1865)](GoogleCloudPlatform/python-docs-samples#1865)

Test environment does not support all fonts.

* use shared sample data bucket [(#1874)](GoogleCloudPlatform/python-docs-samples#1874)

* Pass max_results through to API - issue #1173 [(#1917)](GoogleCloudPlatform/python-docs-samples#1917)

* Fix Vision Product Search sample comment typo [(#1897)](GoogleCloudPlatform/python-docs-samples#1897)

* vision: update samples to address changes in model annotations. [(#1991)](GoogleCloudPlatform/python-docs-samples#1991)

changes to the vision model evaluation changed annotations for
some of the sample data used in these tests.  This corrects those
expectations to reflect current evaluation.

Background: internal issue 123358697

* Auto-update dependencies. [(#1980)](GoogleCloudPlatform/python-docs-samples#1980)

* Auto-update dependencies.

* Update requirements.txt

* Update requirements.txt

* Vision API: further fixes. [(#2002)](GoogleCloudPlatform/python-docs-samples#2002)

* Vision API: further fixes.

Redirects testing to the central cloud-samples-data asset bucket.
Relaxes case considerations.
Addresses web subtests, missed in previous PR.

* Added two samples for "OCR with PDF/TIFF as source files" [(#2034)](GoogleCloudPlatform/python-docs-samples#2034)

* Added two samples for "OCR with PDF/TIFF as source files"

* Moved the code to beta_snippets.py

* Fixed the sub-parser names.

* Shortened the line that was too long.

* Added newline at the end of the file

* Using the builtin open function instead

* Renamed a variable

* Fixed the wrong arg parameter

* Added extra comment lines

* Regenerated README.rst

* Added specific strings to be unit-tested
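
For orientation, a condensed sketch of the async PDF/TIFF OCR flow these samples demonstrate, written against the pre-1.0 `vision.types`/`vision.enums` surface; the GCS URIs are placeholders:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
feature = vision.types.Feature(
    type=vision.enums.Feature.Type.DOCUMENT_TEXT_DETECTION)
input_config = vision.types.InputConfig(
    gcs_source=vision.types.GcsSource(uri="gs://your-bucket/path/to/document.pdf"),
    mime_type="application/pdf")  # mime_type is required for PDF/TIFF input
output_config = vision.types.OutputConfig(
    gcs_destination=vision.types.GcsDestination(uri="gs://your-bucket/ocr-results/"),
    batch_size=2)
request = vision.types.AsyncAnnotateFileRequest(
    features=[feature], input_config=input_config, output_config=output_config)
operation = client.async_batch_annotate_files(requests=[request])
operation.result(timeout=180)  # JSON results are written under the GCS prefix
```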

* Added the sample for async image batch annotation [(#2045)](GoogleCloudPlatform/python-docs-samples#2045)

* Added the sample for async image batch annotation

* Fixed the wrong function name

* Changes based on Noah's comments.

* Need newer library version for latest beta [(#2052)](GoogleCloudPlatform/python-docs-samples#2052)

* Fixed string in test [(#2135)](GoogleCloudPlatform/python-docs-samples#2135)

* Fixed string in test

* Updated to latest AutoML

* Update detect.py [(#2174)](GoogleCloudPlatform/python-docs-samples#2174)

1) Passing bucket_name=bucket_name produced an argument parse error.
2) blob_list[0] returned the folder name instead of an output file.

* Revert "Update detect.py" [(#2274)](GoogleCloudPlatform/python-docs-samples#2274)

* Revert "Update detect.py [(#2174)](GoogleCloudPlatform/python-docs-samples#2174)"

This reverts commit 6eaad9a3166ab3262c1211c2f41fb4b5d8234b7d.

* Update beta_snippets_test.py

* Update beta_snippets.py

* Update detect.py

* Move import inside region tags [(#2211)](GoogleCloudPlatform/python-docs-samples#2211)

* Move import inside region tags

* Update detect.py

* Fix comment. [(#2108)](GoogleCloudPlatform/python-docs-samples#2108)

Comment should reflect real filename.

* Fix a typo in output message / remove duplicate parser assignment. [(#1999)](GoogleCloudPlatform/python-docs-samples#1999)

* Fix a typo in output message.

Fixes a minor typo in the `draw_hint` function. Because the tutorial is one of the starting points for new users, it's worth correcting to avoid confusion.

* Remove duplicate `argparse` assignment.

`argparse.ArgumentParser()` was assigned twice in the if statement, so the duplicate was removed (see the sketch below).
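
A minimal sketch of the single-assignment pattern the samples use (the subcommand name is illustrative):

```python
import argparse

# One parser, created once; subcommands hang off a single subparsers object.
parser = argparse.ArgumentParser(
    description=__doc__,
    formatter_class=argparse.RawDescriptionHelpFormatter)
subparsers = parser.add_subparsers(dest="command")
subparsers.add_parser("crop-hints", help="Run the crop hints tutorial")
args = parser.parse_args()
```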

* move import re [(#2303)](GoogleCloudPlatform/python-docs-samples#2303)

* Makes quickstart more REPL friendly [(#2354)](GoogleCloudPlatform/python-docs-samples#2354)

* vision geo test fix [(#2353)](GoogleCloudPlatform/python-docs-samples#2353)

Gus already LGTM'd this.

* Purge products [(#2349)](GoogleCloudPlatform/python-docs-samples#2349)

* add vision_product_search_purge_products_in_product_set

* add vision_product_search_purge_orphan_products

* update comment

* flake8 lint fixes

* update print message

* update python sample to use operation.result

* longer timeout

* remove unused variable
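
A hedged sketch of the `operation.result` change, with argument names taken from the `PurgeProductsRequest` fields (project and location are placeholders):

```python
from google.cloud import vision

client = vision.ProductSearchClient()
parent = client.location_path("your-project-id", "us-west1")
operation = client.purge_products(
    parent=parent, delete_orphan_products=True, force=True)
# Block on the LRO instead of polling manually; purges can be slow,
# hence the longer timeout.
operation.result(timeout=300)
print("Deleted orphan products.")
```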

* Adds updates for samples profiler ... vision [(#2439)](GoogleCloudPlatform/python-docs-samples#2439)

* Update Pillow dependency per security alert CVE-2019-16865 [(#2492)](GoogleCloudPlatform/python-docs-samples#2492)

* Add Set Endpoint Samples [(#2497)](GoogleCloudPlatform/python-docs-samples#2497)

* Add Set Endpoint Samples

* Add additional test result option

* Sample Request update

* Add filter_
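
The pattern the new samples demonstrate is routing requests to a regional endpoint via `client_options`:

```python
from google.cloud import vision

# "eu-vision.googleapis.com" keeps request processing within the EU.
client_options = {"api_endpoint": "eu-vision.googleapis.com"}
client = vision.ImageAnnotatorClient(client_options=client_options)
```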

* Auto-update dependencies. [(#2005)](GoogleCloudPlatform/python-docs-samples#2005)

* Auto-update dependencies.

* Revert update of appengine/flexible/datastore.

* revert update of appengine/flexible/scipy

* revert update of bigquery/bqml

* revert update of bigquery/cloud-client

* revert update of bigquery/datalab-migration

* revert update of bigtable/quickstart

* revert update of compute/api

* revert update of container_registry/container_analysis

* revert update of dataflow/run_template

* revert update of datastore/cloud-ndb

* revert update of dialogflow/cloud-client

* revert update of dlp

* revert update of functions/imagemagick

* revert update of functions/ocr/app

* revert update of healthcare/api-client/fhir

* revert update of iam/api-client

* revert update of iot/api-client/gcs_file_to_device

* revert update of iot/api-client/mqtt_example

* revert update of language/automl

* revert update of run/image-processing

* revert update of vision/automl

* revert update testing/requirements.txt

* revert update of vision/cloud-client/detect

* revert update of vision/cloud-client/product_search

* revert update of jobs/v2/api_client

* revert update of jobs/v3/api_client

* revert update of opencensus

* revert update of translate/cloud-client

* revert update to speech/cloud-client

Co-authored-by: Kurtis Van Gent <[email protected]>
Co-authored-by: Doug Mahugh <[email protected]>

* fix: get bounds for blocks instead of pages [(#2705)](GoogleCloudPlatform/python-docs-samples#2705)

* fix: use `page.bounding_box` when feature is page

Closes #2702

* fix: outline blocks instead of pages

Co-authored-by: Leah E. Cole <[email protected]>
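
In sketch form (assuming `document` is the `TextAnnotation` returned by `document_text_detection`), the fix collects bounds per block rather than per page:

```python
# Outline each block on the page, not the page itself.
bounds = []
for page in document.pages:
    for block in page.blocks:
        bounds.append(block.bounding_box)
```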

* Add vision ocr set endpoint samples [(#2569)](GoogleCloudPlatform/python-docs-samples#2569)

* Add vision ocr set endpoint samples

* Remove port number as it is optional in Python

* Use unique output names

* lint

* Add support for python2 print statements

* use uuid instead of datetime

* remove all tests that use https as they perform duplicate work

Co-authored-by: Leah E. Cole <[email protected]>

* vision: update samples to throw errors if one occurs [(#2725)](GoogleCloudPlatform/python-docs-samples#2725)

* vision: update samples to throw errors if one occurs

* Add link to error page docs

* Add link to error message

Co-authored-by: Leah E. Cole <[email protected]>
Co-authored-by: Gus Class <[email protected]>
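
The pattern added across the samples, assuming `response` is an `AnnotateImageResponse`:

```python
# Surface server-side failures instead of silently printing partial results.
if response.error.message:
    raise Exception(
        "{}\nFor more info on error messages, check: "
        "https://cloud.google.com/apis/design/errors".format(
            response.error.message))
```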

* vision: move published samples into master [(#2743)](GoogleCloudPlatform/python-docs-samples#2743)

Add generated samples for Vision API
Add required attribute mime_type
Resolve encoding error in py2
Remove autogenerated warnings
Remove coding: utf-8 line
Remove argument encoding checks
Remove CLI
Remove unnecessary statics, variables, and imports
Blacken with l=88
Remove unused region tag and comments
Verify that there are no published links pointing to removed region tags
Shorten docstring
Replace concrete file path with "path/to/your/document.pdf"

Co-authored-by: Yu-Han Liu <[email protected]>

* fix: vision product search tests to call setup and teardown and use uuid [(#2830)](GoogleCloudPlatform/python-docs-samples#2830)
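
The test-hygiene idea in sketch form (the resource name is illustrative):

```python
import uuid

# Suffix shared resources with a UUID so concurrent test runs cannot
# collide; setup creates the resource and teardown deletes it.
product_set_id = "test_product_set_{}".format(uuid.uuid4())
```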

* vision: fix flaky test [(#2988)](GoogleCloudPlatform/python-docs-samples#2988)

* vision: fix flaky tests to be more generic in the results [(#2915)](GoogleCloudPlatform/python-docs-samples#2915)

* chore(deps): update dependency google-cloud-storage to v1.26.0 [(#3046)](GoogleCloudPlatform/python-docs-samples#3046)

* chore(deps): update dependency google-cloud-storage to v1.26.0

* chore(deps): specify dependencies by python version

* chore: up other deps to try to remove errors

Co-authored-by: Leah E. Cole <[email protected]>
Co-authored-by: Leah Cole <[email protected]>

* Clarifying comment for batch requests [(#3071)](GoogleCloudPlatform/python-docs-samples#3071)

* Clarifying comment for batch requests

* vision: fixing linter for batch

* vision: remove redundant flaky web test [(#3090)](GoogleCloudPlatform/python-docs-samples#3090)

Fix: GoogleCloudPlatform/python-docs-samples#2880

* vision: fix flaky test [(#3091)](GoogleCloudPlatform/python-docs-samples#3091)

Fix: GoogleCloudPlatform/python-docs-samples#2876

* chore(deps): update dependency google-cloud-vision to v0.42.0 [(#3170)](GoogleCloudPlatform/python-docs-samples#3170)

* chore(deps): update dependency pillow to v6.2.2 [(#3186)](GoogleCloudPlatform/python-docs-samples#3186)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [pillow](https://python-pillow.org) ([source](https://togithub.com/python-pillow/Pillow)) | patch | `==6.2.1` -> `==6.2.2` |

---

### Release Notes

<details>
<summary>python-pillow/Pillow</summary>

### [`v6.2.2`](https://togithub.com/python-pillow/Pillow/blob/master/CHANGES.rst#&#8203;622-2020-01-02)

[Compare Source](https://togithub.com/python-pillow/Pillow/compare/6.2.1...6.2.2)

-   This is the last Pillow release to support Python 2.7 [#&#8203;3642](https://togithub.com/python-pillow/Pillow/issues/3642)

-   Overflow checks for realloc for tiff decoding. CVE-2020-5310
    [wiredfool, radarhere]

-   Catch SGI buffer overrun. CVE-2020-5311
    [radarhere]

-   Catch PCX P mode buffer overrun. CVE-2020-5312
    [radarhere]

-   Catch FLI buffer overrun. CVE-2020-5313
    [radarhere]

-   Raise an error for an invalid number of bands in FPX image. CVE-2019-19911
    [wiredfool, radarhere]

</details>


* chore(deps): update dependency pillow to v7 [(#3218)](GoogleCloudPlatform/python-docs-samples#3218)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [pillow](https://python-pillow.org) ([source](https://togithub.com/python-pillow/Pillow)) | major | `==6.2.2` -> `==7.1.0` |

---

### Release Notes

<details>
<summary>python-pillow/Pillow</summary>

### [`v7.1.0`](https://togithub.com/python-pillow/Pillow/blob/master/CHANGES.rst#&#8203;710-2020-04-01)

[Compare Source](https://togithub.com/python-pillow/Pillow/compare/7.0.0...7.1.0)

-   Fix multiple OOB reads in FLI decoding [#&#8203;4503](https://togithub.com/python-pillow/Pillow/issues/4503)
    [wiredfool]

-   Fix buffer overflow in SGI-RLE decoding [#&#8203;4504](https://togithub.com/python-pillow/Pillow/issues/4504)
    [wiredfool, hugovk]

-   Fix bounds overflow in JPEG 2000 decoding [#&#8203;4505](https://togithub.com/python-pillow/Pillow/issues/4505)
    [wiredfool]

-   Fix bounds overflow in PCX decoding [#&#8203;4506](https://togithub.com/python-pillow/Pillow/issues/4506)
    [wiredfool]

-   Fix 2 buffer overflows in TIFF decoding [#&#8203;4507](https://togithub.com/python-pillow/Pillow/issues/4507)
    [wiredfool]

-   Add APNG support [#&#8203;4243](https://togithub.com/python-pillow/Pillow/issues/4243)
    [pmrowla, radarhere, hugovk]

-   ImageGrab.grab() for Linux with XCB [#&#8203;4260](https://togithub.com/python-pillow/Pillow/issues/4260)
    [nulano, radarhere]

-   Added three new channel operations [#&#8203;4230](https://togithub.com/python-pillow/Pillow/issues/4230)
    [dwastberg, radarhere]

-   Prevent masking of Image reduce method in Jpeg2KImagePlugin [#&#8203;4474](https://togithub.com/python-pillow/Pillow/issues/4474)
    [radarhere, homm]

-   Added reading of earlier ImageMagick PNG EXIF data [#&#8203;4471](https://togithub.com/python-pillow/Pillow/issues/4471)
    [radarhere]

-   Fixed endian handling for I;16 getextrema [#&#8203;4457](https://togithub.com/python-pillow/Pillow/issues/4457)
    [radarhere]

-   Release buffer if function returns prematurely [#&#8203;4381](https://togithub.com/python-pillow/Pillow/issues/4381)
    [radarhere]

-   Add JPEG comment to info dictionary [#&#8203;4455](https://togithub.com/python-pillow/Pillow/issues/4455)
    [radarhere]

-   Fix size calculation of Image.thumbnail() [#&#8203;4404](https://togithub.com/python-pillow/Pillow/issues/4404)
    [orlnub123]

-   Fixed stroke on FreeType &lt; 2.9 [#&#8203;4401](https://togithub.com/python-pillow/Pillow/issues/4401)
    [radarhere]

-   If present, only use alpha channel for bounding box [#&#8203;4454](https://togithub.com/python-pillow/Pillow/issues/4454)
    [radarhere]

-   Warn if an unknown feature is passed to features.check() [#&#8203;4438](https://togithub.com/python-pillow/Pillow/issues/4438)
    [jdufresne]

-   Fix Name field length when saving IM images [#&#8203;4424](https://togithub.com/python-pillow/Pillow/issues/4424)
    [hugovk, radarhere]

-   Allow saving of zero quality JPEG images [#&#8203;4440](https://togithub.com/python-pillow/Pillow/issues/4440)
    [radarhere]

-   Allow explicit zero width to hide outline [#&#8203;4334](https://togithub.com/python-pillow/Pillow/issues/4334)
    [radarhere]

-   Change ContainerIO return type to match file object mode [#&#8203;4297](https://togithub.com/python-pillow/Pillow/issues/4297)
    [jdufresne, radarhere]

-   Only draw each polygon pixel once [#&#8203;4333](https://togithub.com/python-pillow/Pillow/issues/4333)
    [radarhere]

-   Add support for shooting situation Exif IFD tags [#&#8203;4398](https://togithub.com/python-pillow/Pillow/issues/4398)
    [alexagv]

-   Handle multiple and malformed JPEG APP13 markers [#&#8203;4370](https://togithub.com/python-pillow/Pillow/issues/4370)
    [homm]

-   Depends: Update libwebp to 1.1.0 [#&#8203;4342](https://togithub.com/python-pillow/Pillow/issues/4342), libjpeg to 9d [#&#8203;4352](https://togithub.com/python-pillow/Pillow/issues/4352)
    [radarhere]

### [`v7.0.0`](https://togithub.com/python-pillow/Pillow/blob/master/CHANGES.rst#&#8203;700-2020-01-02)

[Compare Source](https://togithub.com/python-pillow/Pillow/compare/6.2.2...7.0.0)

-   Drop support for EOL Python 2.7 [#&#8203;4109](https://togithub.com/python-pillow/Pillow/issues/4109)
    [hugovk, radarhere, jdufresne]

-   Fix rounding error on RGB to L conversion [#&#8203;4320](https://togithub.com/python-pillow/Pillow/issues/4320)
    [homm]

-   Exif writing fixes: Rational boundaries and signed/unsigned types [#&#8203;3980](https://togithub.com/python-pillow/Pillow/issues/3980)
    [kkopachev, radarhere]

-   Allow loading of WMF images at a given DPI [#&#8203;4311](https://togithub.com/python-pillow/Pillow/issues/4311)
    [radarhere]

-   Added reduce operation [#&#8203;4251](https://togithub.com/python-pillow/Pillow/issues/4251)
    [homm]

-   Raise ValueError for io.StringIO in Image.open [#&#8203;4302](https://togithub.com/python-pillow/Pillow/issues/4302)
    [radarhere, hugovk]

-   Fix thumbnail geometry when DCT scaling is used [#&#8203;4231](https://togithub.com/python-pillow/Pillow/issues/4231)
    [homm, radarhere]

-   Use default DPI when exif provides invalid x_resolution [#&#8203;4147](https://togithub.com/python-pillow/Pillow/issues/4147)
    [beipang2, radarhere]

-   Change default resize resampling filter from NEAREST to BICUBIC [#&#8203;4255](https://togithub.com/python-pillow/Pillow/issues/4255)
    [homm]

-   Fixed black lines on upscaled images with the BOX filter [#&#8203;4278](https://togithub.com/python-pillow/Pillow/issues/4278)
    [homm]

-   Better thumbnail aspect ratio preservation [#&#8203;4256](https://togithub.com/python-pillow/Pillow/issues/4256)
    [homm]

-   Add La mode packing and unpacking [#&#8203;4248](https://togithub.com/python-pillow/Pillow/issues/4248)
    [homm]

-   Include tests in coverage reports [#&#8203;4173](https://togithub.com/python-pillow/Pillow/issues/4173)
    [hugovk]

-   Handle broken Photoshop data [#&#8203;4239](https://togithub.com/python-pillow/Pillow/issues/4239)
    [radarhere]

-   Raise a specific exception if no data is found for an MPO frame [#&#8203;4240](https://togithub.com/python-pillow/Pillow/issues/4240)
    [radarhere]

-   Fix Unicode support for PyPy [#&#8203;4145](https://togithub.com/python-pillow/Pillow/issues/4145)
    [nulano]

-   Added UnidentifiedImageError [#&#8203;4182](https://togithub.com/python-pillow/Pillow/issues/4182)
    [radarhere, hugovk]

-   Remove deprecated **version** from plugins [#&#8203;4197](https://togithub.com/python-pillow/Pillow/issues/4197)
    [hugovk, radarhere]

-   Fixed freeing unallocated pointer when resizing with height too large [#&#8203;4116](https://togithub.com/python-pillow/Pillow/issues/4116)
    [radarhere]

-   Copy info in Image.transform [#&#8203;4128](https://togithub.com/python-pillow/Pillow/issues/4128)
    [radarhere]

-   Corrected DdsImagePlugin setting info gamma [#&#8203;4171](https://togithub.com/python-pillow/Pillow/issues/4171)
    [radarhere]

-   Depends: Update libtiff to 4.1.0 [#&#8203;4195](https://togithub.com/python-pillow/Pillow/issues/4195), Tk Tcl to 8.6.10 [#&#8203;4229](https://togithub.com/python-pillow/Pillow/issues/4229), libimagequant to 2.12.6 [#&#8203;4318](https://togithub.com/python-pillow/Pillow/issues/4318)
    [radarhere]

-   Improve handling of file resources [#&#8203;3577](https://togithub.com/python-pillow/Pillow/issues/3577)
    [jdufresne]

-   Removed CI testing of Fedora 29 [#&#8203;4165](https://togithub.com/python-pillow/Pillow/issues/4165)
    [hugovk]

-   Added pypy3 to tox envlist [#&#8203;4137](https://togithub.com/python-pillow/Pillow/issues/4137)
    [jdufresne]

-   Drop support for EOL PyQt4 and PySide [#&#8203;4108](https://togithub.com/python-pillow/Pillow/issues/4108)
    [hugovk, radarhere]

-   Removed deprecated setting of TIFF image sizes [#&#8203;4114](https://togithub.com/python-pillow/Pillow/issues/4114)
    [radarhere]

-   Removed deprecated PILLOW_VERSION [#&#8203;4107](https://togithub.com/python-pillow/Pillow/issues/4107)
    [hugovk]

-   Changed default frombuffer raw decoder args [#&#8203;1730](https://togithub.com/python-pillow/Pillow/issues/1730)
    [radarhere]

</details>


* Simplify noxfile setup. [(#2806)](GoogleCloudPlatform/python-docs-samples#2806)

* chore(deps): update dependency requests to v2.23.0

* Simplify noxfile and add version control.

* Configure appengine/standard to only test Python 2.7.

* Update Kokoro configs to match noxfile.

* Add requirements-test to each folder.

* Remove Py2 versions from everything except appengine/standard.

* Remove conftest.py.

* Remove appengine/standard/conftest.py

* Remove 'no-success-flaky-report' from pytest.ini.

* Add GAE SDK back to appengine/standard tests.

* Fix typo.

* Roll pytest to python 2 version.

* Add a bunch of testing requirements.

* Remove typo.

* Add appengine lib directory back in.

* Add some additional requirements.

* Fix issue with flake8 args.

* Even more requirements.

* Readd appengine conftest.py.

* Add a few more requirements.

* Even more Appengine requirements.

* Add webtest for appengine/standard/mailgun.

* Add some additional requirements.

* Add workaround for issue with mailjet-rest.

* Add responses for appengine/standard/mailjet.

Co-authored-by: Renovate Bot <[email protected]>

* Update dependency google-cloud-vision to v1 [(#3227)](GoogleCloudPlatform/python-docs-samples#3227)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [google-cloud-vision](https://togithub.com/googleapis/python-vision) | major | `==0.42.0` -> `==1.0.0` |

---

### Release Notes

<details>
<summary>googleapis/python-vision</summary>

### [`v1.0.0`](https://togithub.com/googleapis/python-vision/blob/master/CHANGELOG.md#&#8203;100-httpswwwgithubcomgoogleapispython-visioncomparev0420v100-2020-02-28)

[Compare Source](https://togithub.com/googleapis/python-vision/compare/v0.42.0...v1.0.0)

##### Features

-   bump release status to GA ([#&#8203;11](https://www.github.com/googleapis/python-vision/issues/11)) ([2129bde](https://www.github.com/googleapis/python-vision/commit/2129bdedfa0dca85c5adc5350bff10d4a485df77))

</details>


* Update dependency pillow to v7.1.1 [(#3263)](GoogleCloudPlatform/python-docs-samples#3263)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [pillow](https://python-pillow.org) ([source](https://togithub.com/python-pillow/Pillow)) | patch | `==7.1.0` -> `==7.1.1` |

---

### Release Notes

<details>
<summary>python-pillow/Pillow</summary>

### [`v7.1.1`](https://togithub.com/python-pillow/Pillow/blob/master/CHANGES.rst#&#8203;711-2020-04-02)

[Compare Source](https://togithub.com/python-pillow/Pillow/compare/7.1.0...7.1.1)

-   Fix regression seeking and telling PNGs [#&#8203;4512](https://togithub.com/python-pillow/Pillow/issues/4512) [#&#8203;4514](https://togithub.com/python-pillow/Pillow/issues/4514)
    [hugovk, radarhere]

</details>


* vision: increase timeout for tests [(#3383)](GoogleCloudPlatform/python-docs-samples#3383)

Fix: GoogleCloudPlatform/python-docs-samples#2955
Fix: GoogleCloudPlatform/python-docs-samples#2992

* [vision] fix: longer timeout [(#3447)](GoogleCloudPlatform/python-docs-samples#3447)

fixes #2962

* testing: replace @flaky with @pytest.mark.flaky [(#3496)](GoogleCloudPlatform/python-docs-samples#3496)

* testing: replace @flaky with @pytest.mark.flaky

* lint

* mark a few tests that involve LRO polling as flaky

* lint
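
After the switch, flaky tests are marked with the plugin's pytest marker rather than the bare decorator; `max_runs`/`min_passes` are the knobs the `flaky` plugin exposes:

```python
import pytest

@pytest.mark.flaky(max_runs=3, min_passes=1)
def test_lro_polling_sample():
    ...  # any test that polls a long-running operation
```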

* chore(deps): update dependency pillow to v7.1.2 [(#3557)](GoogleCloudPlatform/python-docs-samples#3557)

* chore(deps): update dependency google-cloud-storage to v1.28.0 [(#3260)](GoogleCloudPlatform/python-docs-samples#3260)

Co-authored-by: Takashi Matsuo <[email protected]>

* [vision] fix: add timeout for LRO result and mark it as flaky [(#3684)](GoogleCloudPlatform/python-docs-samples#3684)

fixes #3674

* [vision] fix: mark a test as flaky [(#3709)](GoogleCloudPlatform/python-docs-samples#3709)

fixes #3702

* chore: some lint fixes [(#3751)](GoogleCloudPlatform/python-docs-samples#3751)

* chore: some lint fixes

* longer timeout, more retries

* disable detect_test.py::test_async_detect_document

* [vision] testing: retry upon errors [(#3764)](GoogleCloudPlatform/python-docs-samples#3764)

fixes #3734

I only wrapped some of the tests. Potentially we can do it for
everything.

* [vision] testing: re-enable test_async_detect_document [(#3761)](GoogleCloudPlatform/python-docs-samples#3761)

fixes #3753

Also made the test data PDF smaller.

* chore(deps): update dependency google-cloud-storage to v1.28.1 [(#3785)](GoogleCloudPlatform/python-docs-samples#3785)

* chore(deps): update dependency google-cloud-storage to v1.28.1

* [asset] testing: use uuid instead of time

Co-authored-by: Takashi Matsuo <[email protected]>

* Replace GCLOUD_PROJECT with GOOGLE_CLOUD_PROJECT. [(#4022)](GoogleCloudPlatform/python-docs-samples#4022)
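
After the rename, samples and tests read the standard variable name:

```python
import os

project_id = os.environ["GOOGLE_CLOUD_PROJECT"]
```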

* chore(deps): update dependency google-cloud-storage to v1.29.0 [(#4040)](GoogleCloudPlatform/python-docs-samples#4040)

* chore(deps): update dependency pillow to v7.2.0 [(#4208)](GoogleCloudPlatform/python-docs-samples#4208)

* testing(vision): use different ids for test functions [(#4227)](GoogleCloudPlatform/python-docs-samples#4227)

fixes #4224

* chore(deps): update dependency pytest to v5.4.3 [(#4279)](GoogleCloudPlatform/python-docs-samples#4279)

* chore(deps): update dependency pytest to v5.4.3

* specify pytest for python 2 in appengine

Co-authored-by: Leah Cole <[email protected]>

* Update dependency flaky to v3.7.0 [(#4300)](GoogleCloudPlatform/python-docs-samples#4300)

* Update dependency google-cloud-storage to v1.30.0

* Update dependency pytest to v6 [(#4390)](GoogleCloudPlatform/python-docs-samples#4390)

* feat: fixed doc string comment mismatch in Product Search [(#4432)](GoogleCloudPlatform/python-docs-samples#4432)

Changes documentation string for a GCS example from `file_path` to `image_uri`.

* chore(deps): update dependency google-cloud-storage to v1.31.0 [(#4564)](GoogleCloudPlatform/python-docs-samples#4564)

Co-authored-by: Takashi Matsuo <[email protected]>

* chore: update templates

Co-authored-by: Jason Dobry <[email protected]>
Co-authored-by: Jon Wayne Parrott <[email protected]>
Co-authored-by: DPE bot <[email protected]>
Co-authored-by: Gus Class <[email protected]>
Co-authored-by: Brent Shaffer <[email protected]>
Co-authored-by: Bill Prin <[email protected]>
Co-authored-by: Yu-Han Liu <[email protected]>
Co-authored-by: michaelawyu <[email protected]>
Co-authored-by: Rebecca Taylor <[email protected]>
Co-authored-by: Frank Natividad <[email protected]>
Co-authored-by: Noah Negrey <[email protected]>
Co-authored-by: Jeffrey Rennie <[email protected]>
Co-authored-by: Tim Swast <[email protected]>
Co-authored-by: Alix Hamilton <[email protected]>
Co-authored-by: Rebecca Taylor <[email protected]>
Co-authored-by: Krissda Prakalphakul <[email protected]>
Co-authored-by: Peshmerge <[email protected]>
Co-authored-by: navinger <[email protected]>
Co-authored-by: Charles Engelke <[email protected]>
Co-authored-by: shollyman <[email protected]>
Co-authored-by: Shahin <[email protected]>
Co-authored-by: Charles Engelke <[email protected]>
Co-authored-by: Agnel Vishal <[email protected]>
Co-authored-by: Grega Kespret <[email protected]>
Co-authored-by: Da-Woon Chung <[email protected]>
Co-authored-by: Yu-Han Liu <[email protected]>
Co-authored-by: Torry Yang <[email protected]>
Co-authored-by: Kurtis Van Gent <[email protected]>
Co-authored-by: Doug Mahugh <[email protected]>
Co-authored-by: Bu Sun Kim <[email protected]>
Co-authored-by: Leah E. Cole <[email protected]>
Co-authored-by: Michelle Casbon <[email protected]>
Co-authored-by: WhiteSource Renovate <[email protected]>
Co-authored-by: Leah Cole <[email protected]>
Co-authored-by: Cameron Zahedi <[email protected]>
Co-authored-by: Takashi Matsuo <[email protected]>
Co-authored-by: Eric Schmidt <[email protected]>