
Why does docker-compose up -d now change scale number? #5663

Closed
dgilling opened this issue Feb 8, 2018 · 7 comments

Comments

@dgilling

dgilling commented Feb 8, 2018

Previously docker-compose up -d would never change the number of instances deployed.

For example, if I run the following:

docker-compose up -d
docker-compose scale my_service=5
docker-compose up -d

I would still have 5 instances running.

With the latest version, if I run the same sequence:

docker-compose up -d
docker-compose scale my_service=5
docker-compose up -d

It actually removes 4 of the 5 instances. Why does the scale command have to be tightly coupled to the up command and break legacy behavior?

This is with a legacy Swarm cluster on Docker version 17.05.0-ce, build 89658be,
and with Docker for Mac 17.09.1-ce-mac42 (21090).

Even if I target a different service, docker-compose will still rescale the other services.

docker-compose up -d
docker-compose scale my_service=5
docker-compose up -d my_other_service

my_service will be scaled back down to 1.

This just breaks legacy scripts and behavior on so many levels.

@shin-

shin- commented Feb 8, 2018

https://github.com/docker/compose/blob/master/CHANGELOG.md#1130-2017-05-02

Breaking changes

  • docker-compose up now resets a service's scaling to its default value. You can use the newly introduced --scale option to specify a custom scale value

@dgilling
Author

dgilling commented Feb 8, 2018

Is there any way to go back to legacy behavior? Why would this be done in the first place?
Now two separate things are tightly coupled which weren't done before.

If I have a complex compose file with 10 different services, I now have to read the current scale value of each and then pass ALL of them when I call up -d, such as
docker-compose up -d --scale service_a=10 --scale service_b=5 --scale service_c=100 --scale service_d=50, and so on.

Some of us have very dynamic scaling based on load so we don't hardcode the scale amount in a docker compose file, which means we would have to read in what the current value is.

@shin-

shin- commented Feb 8, 2018

Is there any way to go back to legacy behavior?

Short of downgrading to Compose 1.12.0 or lower, no.

Why would this be done in the first place?

To address #1661

Now two separate things are tightly coupled which weren't done before.

It's arguable whether they are separate things or not; with scale now being part of a service's configuration, our opinion is that up and rescaling are inherently the same operation.

If I have a complex compose file with 10 different services, I now have to read the current scale value of each and then pass ALL of them when I call up -d, such as
docker-compose up -d --scale service_a=10 --scale service_b=5 --scale service_c=100 --scale service_d=50, and so on.

Some of us have very dynamic scaling based on load so we don't hardcode the scale amount in a docker compose file, which means we would have to read in what the current value is.

You can still rescale services individually with docker-compose up --scale svc_a=99 --no-deps -d svc_a. But beyond that, it sounds like you may want a script to handle that for you.
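Such a script can be sketched as a small wrapper around up. This is a hypothetical sketch, not anything Compose ships: it assumes the standard com.docker.compose.service label that Compose puts on every container, reads the service names from the compose file, counts what is currently running, and re-asserts those counts via --scale so up does not reset them.

```shell
# Sketch of a wrapper that preserves each service's current scale across
# `docker-compose up -d`. Hypothetical helper, not part of Compose itself.
compose_up_preserving_scale() {
  flags=""
  # Read the service names from the compose file...
  for svc in $(docker-compose config --services); do
    # ...count the containers currently running for each service...
    n=$(docker ps -q --filter "label=com.docker.compose.service=$svc" | wc -l | tr -d ' ')
    # ...and re-assert that count so `up` does not reset it to the default.
    if [ "$n" -gt 0 ]; then
      flags="$flags --scale $svc=$n"
    fi
  done
  docker-compose up -d $flags
}
```

Calling compose_up_preserving_scale in place of a bare docker-compose up -d keeps whatever scale is currently running, at the cost of one docker ps per service.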

@dgilling
Author

dgilling commented Feb 8, 2018

It seems you could support both behaviors to account for legacy usage.
If a scale is defined in the YAML, use that.
Else, if scale is not defined in the YAML, a service already running with a scale factor larger than 1 should not be rescaled down to 1 on "up".

Previously, the current scale amount was remembered on "up" if scale was not defined in the YAML file.

It really seems like Docker doesn't care about long-term support in the enterprise world and is always breaking APIs. Any user who previously relied on the scale being remembered will suddenly be putting out fires when the service is scaled down to 1, and will have to re-audit all of their scripts.

@shin-

shin- commented Feb 8, 2018

I understand how this could be frustrating. We try to be as conservative as possible with introducing changes that break backwards compatibility, and we call it out clearly in our CHANGELOG and release notes when we do. Is there anything else you would like to see us do to alleviate the pain these changes may cause for you and other users downstream?

@dgilling
Author

dgilling commented Feb 8, 2018

This breaking change could easily have been made non-breaking for users who weren't even using the scale parameter in their YAML. I'm a big believer in API versioning being evolutionary.

For example, this feature could have been spec'ed as:

  • If a scale factor is set in the compose file, reset to that, as specified by the feature.
  • If a scale factor is NOT set in the compose file, revert to legacy behavior, where the previous scale setting is used.
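The two rules above amount to a tiny resolution function. This is a sketch of the behavior being proposed, not what Compose actually does; the function and its arguments are hypothetical:

```shell
# Hypothetical resolution rule: the compose-file value wins when present;
# otherwise the currently running count is kept (defaulting to 1).
resolve_scale() {
  compose_scale=$1   # empty string means "scale not set in the YAML"
  running=$2         # number of instances currently running
  if [ -n "$compose_scale" ]; then
    echo "$compose_scale"
  elif [ "$running" -gt 0 ]; then
    echo "$running"
  else
    echo 1
  fi
}
```

Under this rule, resolve_scale "" 5 would keep 5 instances running, while resolve_scale 3 5 would reset to the declared 3.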

While I understand there is a conflict when "up" is called where both the compose file has a scale set and the runtime state also has a scale set, one of the two values needs to win. However, this problem would occur for only a small set of users.

On the other hand, I would bet many Docker users don't even put a scale factor in their compose files to be version controlled, either because:

  1. it's a new feature only recently available, or
  2. in production, scale changes rapidly, many times per day under load; it's an operational concern and thus not checked into version control.

For these users, there is really no reason calling "up" should reset their services' scale to 1. Legacy behavior could have been maintained. It's this kind of behavior I've seen repeated by Docker over and over: a lack of regard for supporting legacy behavior, even if the affected group is small. Something like semver is very handy, letting a developer assume NO breaking changes when upgrading to the next minor version. If behavior really needs to change, wait until a major version release for it to become the default, where breakage is expected, AND THEN include a guide on upgrading from major-1 to major. Here is a very nice example from Elasticsearch:

https://www.elastic.co/guide/en/elasticsearch/reference/5.6/breaking_50_search_changes.html

Maybe the feature itself (scale in compose) could have been elaborated more. It seems that scale is being used for two very different purposes:

  1. Setting a defined amount for a service, such as scale=0 for a service that should not run on "up", or scale=3 to set a minimum number of instances (such as for availability of a service or db) required for the application to run. This I could see benefiting from being in the compose file, as it defines the constraints needed to run the application even under no load. A DSL for defining scale constraints, just like the normal affinity constraints, could be interesting and could include min, max, etc.

  2. On the other hand, scaling to a large number of containers on large orchestration clusters doesn't make sense to check into version control. (I'm referring to cases like scale my_service_a=120.)

Scale in compose seems more like a set of application-level constraints (i.e. at least 3 instances of service_a running) rather than the exact horizontal scale-out amount in production.
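For the first use case, later Compose file formats do let a baseline live in version control. A sketch in Compose file v3 syntax (honored by swarm mode via docker stack deploy; the service name and image are hypothetical):

```yaml
# Sketch (Compose file v3): a baseline replica count checked into version
# control as an availability floor, not the live production scale.
version: "3"
services:
  service_a:
    image: myorg/service_a:latest
    deploy:
      replicas: 3   # minimum instances for the app to function under no load
```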

@shin-

shin- commented Feb 8, 2018

Whether it could have been done differently is honestly beside the point: we're not about to introduce another breaking change to "unbreak" a change we made almost a year ago.

I agree with you about semantic versioning, and I wish we could use major version bumps when backward-incompatible changes are made. It's definitely something I want to bring up internally for the next time we have to make such a change.
