Syncs upstream stable/train into jg-ironic-rebalance #7

Open
wants to merge 93 commits into base: jg-ironic-rebalance

Conversation


@jovial jovial commented May 12, 2022

No description provided.

SeanMooney and others added 30 commits July 8, 2020 16:22
This change adds a max_queues config option to allow
operators to set the maximum number of virtio queue
pairs that can be allocated to a virtio network
interface.

Change-Id: I9abe783a9a9443c799e7c74a57cc30835f679a01
Closes-Bug: #1847367
(cherry picked from commit 0e6aac3)
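
As a rough illustration (not the actual patch), a limit like this would typically be declared as an oslo.config option; the group, default and help text below are assumptions for the sketch.

```
# Sketch only: how a max_queues-style limit could be declared with
# oslo.config. Group, default and help text are illustrative.
from oslo_config import cfg

libvirt_opts = [
    cfg.IntOpt('max_queues',
               default=None,
               min=1,
               help='Maximum number of virtio queue pairs that can be '
                    'allocated to a virtio network interface.'),
]

CONF = cfg.CONF
CONF.register_opts(libvirt_opts, group='libvirt')
```
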
Attempting to boot an instance with 'hw:cpu_policy=dedicated' will
result in a request from nova-scheduler to placement for allocation
candidates with $flavor.vcpu 'PCPU' inventory. Similarly, booting an
instance with 'hw:cpu_thread_policy=isolate' will result in a request
for allocation candidates with 'HW_CPU_HYPERTHREADING=forbidden', i.e.
hosts without hyperthreading. This has been the case since the
cpu-resources feature was implemented in Train. However, as part of that
work and to enable upgrades from hosts that predated Train, we also make
a second request for candidates with $flavor.vcpu 'VCPU' inventory. The
idea behind this is that old compute nodes would only report 'VCPU' and
should be useable, and any new compute nodes that got caught up in this
second request could never actually be scheduled to since there wouldn't
be enough cores from 'ComputeNode.numa_topology.cells.[*].pcpuset'
available to schedule to, resulting in rejection by the
'NUMATopologyFilter'. However, if a host was rejected in the first
query because it reported the 'HW_CPU_HYPERTHREADING' trait, it could
get picked up by the second query and would happily be scheduled to,
resulting in an instance consuming 'VCPU' inventory from a host that
properly supported 'PCPU' inventory.

The solution is simple, though also a huge hack. If we detect that the
host is using new style configuration and should be able to report
'PCPU', check if the instance asked for no hyperthreading and whether
the host has it. If all are True, reject the request.

Change-Id: Id39aaaac09585ca1a754b669351c86e234b89dd9
Signed-off-by: Stephen Finucane <[email protected]>
Closes-Bug: #1889633
(cherry picked from commit 9c27033)
(cherry picked from commit 7ddab32)
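
A minimal sketch of the guard described above, with hypothetical names rather than the real nova scheduler code: reject the fallback VCPU candidate only when all three conditions hold.

```
# Hypothetical helper illustrating the rejection condition; not the
# actual nova implementation.
def reject_vcpu_fallback(host_reports_pcpu, isolate_requested,
                         host_has_hyperthreading):
    # Host uses new-style PCPU config, the flavor forbids
    # hyperthreading, and the host actually has it: reject.
    return (host_reports_pcpu
            and isolate_requested
            and host_has_hyperthreading)
```
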
Previously disk_bus values were never validated and could easily end up
being ignored by the underlying virt driver and hypervisor.

For example, a common mistake made by users is to request a virtio-scsi
disk_bus when using the libvirt virt driver. This however isn't a valid
bus and is ignored, defaulting back to the virtio (virtio-blk) bus.

This change adds a simple validation in the compute API using the
potential disk_bus values provided by the DiskBus field class as used
when validating the hw_*_bus image properties.

Conflicts:
    nova/tests/unit/compute/test_compute_api.py

NOTE(lyarwood): Conflict as If9c459a9a0aa752c478949e4240286cbdb146494 is
not present in stable/train. test_validate_bdm_disk_bus is also updated
as Ib31ba2cbff0ebb22503172d8801b6e0c3d2aa68a is not present in
stable/train.

Closes-Bug: #1876301
Change-Id: I77b28b9cc8f99b159f628f4655d85ff305a71db8
(cherry picked from commit 5913bd8)
(cherry picked from commit fb31ae4)
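
A simplified sketch of the validation idea, using an illustrative list of buses in place of the real DiskBus field values and a generic exception in place of nova's own:

```
# Sketch: fail early on an invalid disk_bus instead of silently falling
# back to virtio. Valid values and the exception are illustrative.
VALID_DISK_BUSES = ('fdc', 'ide', 'sata', 'scsi', 'usb', 'virtio')

def validate_disk_bus(disk_bus):
    if disk_bus and disk_bus not in VALID_DISK_BUSES:
        raise ValueError('Invalid disk_bus %r; expected one of: %s'
                         % (disk_bus, ', '.join(VALID_DISK_BUSES)))
```
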
Make the spec of virtual persistent memory consistent with the contents
of the admin manual, update the daxio dependency for virtual persistent
memory, and add a NOTE about the tested kernel version.

Closes-Bug: #1894022

Change-Id: I30539bb47c98a588b95c066a394949d60af9c520
(cherry picked from commit a8b0c6b)
(cherry picked from commit eae463c)
Because of the libvirt issue [1], there is a bug [2]: if we set a cache
mode whose write semantics are not O_DIRECT (i.e. unsafe, writeback or
writethrough), there will be a problem with the volume drivers that
designate native io explicitly (i.e.
nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,
nova.virt.libvirt.volume.LibvirtNFSVolumeDriver and so on).

In that case, a libvirt XML will be generated for the instance whose
content contains

```
...
<disk ... >
  <driver ... cache='unsafe/writeback/writethrough' io='native' />
</disk>
...
```
In turn, starting the instance or attaching the disk will fail.

> When qemu is configured with a block device that has aio=native set, but
> the cache mode doesn't use O_DIRECT (i.e. isn't cache=none/directsync or any
> unnamed mode with explicit cache.direct=on), then the raw-posix block driver
> for local files and block devices will silently fall back to aio=threads.
> The blockdev-add interface rejects such combinations, but qemu can't
> change the existing legacy interfaces that libvirt uses today.

[1]: libvirt/libvirt@0583840
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1086704

Closes-Bug: #1841363
Change-Id: If9acc054100a6733f3659a15dd9fc2d462e84d64
(cherry picked from commit af2405e)
(cherry picked from commit 0bd5892)
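
The compatibility rule quoted above boils down to something like the following sketch: only request io='native' when the cache mode actually uses O_DIRECT, otherwise fall back to io='threads'. This is a simplification, not nova's exact helper.

```
# Simplified decision, mirroring the quoted QEMU constraint.
O_DIRECT_CACHE_MODES = ('none', 'directsync')

def pick_driver_io(cache_mode):
    return 'native' if cache_mode in O_DIRECT_CACHE_MODES else 'threads'
```
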
In bug 1879787, the call to network_api.get_instance_nw_info() in
_post_live_migration() on the source compute manager eventually calls
out to the Neutron REST API. If this fails, the exception is
unhandled, and the migrating instance - which is fully running on the
destination at this point - will never be updated in the database.
This update normally happens later in
post_live_migration_at_destination().

The network_info variable obtained from get_instance_nw_info() is used
for two things: notifications - which aren't critical - and unplugging
the instance's vifs on the source - which is very important!

It turns out that at the time of the get_instance_nw_info() call, the
network info in the instance info cache is still valid for unplugging
the source vifs. The port bindings on the destination are only
activated by the network_api.migrate_instance_start() [1] call that
happens shortly *after* the problematic get_instance_nw_info() call.
In other words, get_instance_nw_info() will always return the source
ports. Because of that, we can replace it with a call to
instance.get_network_info().

NOTE(artom) The functional test has been excised, as in stable/train
the NeutronFixture does not properly support live migration with
ports, making the test worthless. The work to support this was done as
part of bp/support-move-ops-with-qos-ports-ussuri, and starts at
commit b2734b5.

NOTE(artom) The
test_post_live_migration_no_shared_storage_working_correctly and
test_post_live_migration_cinder_v3_api unit tests had to be adjusted
as part of the backport to pass with the new code.

[1] https://opendev.org/openstack/nova/src/commit/d9e04c4ff0b1a9c3383f1848dc846e93030d83cb/nova/network/neutronv2/api.py#L2493-L2522

Change-Id: If0fbae33ce2af198188c91638afef939256c2556
Closes-bug: 1879787
(cherry picked from commit 6488a5d)
(cherry picked from commit 2c949cb)
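
In outline, the replacement looks like the sketch below; the surrounding function is hypothetical, the point being that the instance info cache is reused instead of a fresh Neutron round-trip.

```
# Hypothetical sketch of the substitution: use the instance info cache
# for unplugging source VIFs rather than calling back out to Neutron.
def unplug_source_vifs(driver, instance):
    network_info = instance.get_network_info()  # served from the cache
    driver.unplug_vifs(instance, network_info)
```
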
When creating a live snapshot of an instance, nova creates a
copy of the instance disk using a QEMU shallow rebase. This
copy - the delta file - is then extracted and uploaded. The
delta file will eventually be deleted, when the temporary
working directory nova is using for the live snapshot is
discarded, however, until this happens, we will use 3x the
size of the image of host disk space: the original disk,
the delta file, and the extracted file. This can be problematic
when concurrent snapshots of multiple instances are requested
at once.

The solution is simple: delete the delta file after it has
been extracted and is no longer necessary.

Change-Id: I15e9975fa516d81e7d34206e5a4069db5431caa9
Closes-Bug: #1881727
(cherry picked from commit d2af7ca)
(cherry picked from commit e51555b)
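
A minimal sketch of the fix, with illustrative names: extract the delta into its final format, then remove the delta right away instead of waiting for the temporary directory to be discarded.

```
import os

def extract_and_drop_delta(extract_snapshot, delta_path, out_path, out_format):
    # Extract the delta, then delete it immediately so the host no longer
    # holds the original disk, the delta and the extracted file at once.
    extract_snapshot(delta_path, 'qcow2', out_path, out_format)
    os.unlink(delta_path)
```
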
Previously, we were setting the environment variable to disable
greendns in eventlet *after* import eventlet. This has no effect, as
eventlet processes environment variables at import time. This patch
moves the setting of EVENTLET_NO_GREENDNS before importing eventlet in
order to correctly disable greendns.

Closes-bug: 1895322
Change-Id: I4deed815c8984df095019a7f61d089f233f1fc66
(cherry picked from commit 7c1d964)
(cherry picked from commit 79e6b7f)
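
The ordering constraint is easy to show in miniature: the environment variable has to be set before eventlet is imported, since eventlet only reads it at import time.

```
import os

# Must happen before the eventlet import below, or it has no effect.
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'

import eventlet  # noqa: E402  greendns is now disabled
```
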
mnaser reported a weird case where an instance was found
in both cell0 (deleted there) and in cell1 (not deleted
there, but in an error state from a failed build). It's unclear
how this could happen, besides some weird clustered rabbitmq
issue where the schedule-and-build request to conductor happens
twice for the same instance: one picks a host and tries to build,
while the other fails during scheduling and is buried in cell0.

To avoid a split brain situation like this, we add a sanity
check in _bury_in_cell0 to make sure the instance mapping is
not pointing at a cell when we go to update it to cell0.
Similarly a check is added in the schedule_and_build_instances
flow (the code is moved to a private method to make it easier
to test).

Worst case, this is unnecessary but doesn't hurt anything;
best case, it helps avoid split-brain clustered rabbit
issues.

Closes-Bug: #1775934

Change-Id: I335113f0ec59516cb337d34b6fc9078ea202130f
(cherry picked from commit 5b55251)
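
Sketched as a standalone predicate (the real change lives inside conductor), the sanity check is essentially:

```
# Hypothetical helper: only bury an instance in cell0 if its mapping is
# not already pointing at a real cell.
def safe_to_bury_in_cell0(instance_mapping):
    if instance_mapping.cell_mapping is not None:
        # Another request already mapped this instance to a cell;
        # don't overwrite that mapping with cell0.
        return False
    return True
```
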
You must specify the 'policies' field. Currently, not doing so will
result in an HTTP 500 error code. This should be a 4xx error. Add a test
to demonstrate the bug before we provide a fix.

Changes:
  nova/tests/functional/regressions/test_bug_1894966.py

NOTE(stephenfin): Need to update 'super' call to Python 2-compatible
variant.

Change-Id: I72e85855f621d3a51cd58d14247abd302dcd958b
Signed-off-by: Stephen Finucane <[email protected]>
Related-Bug: #1894966
(cherry picked from commit 2c66962)
(cherry picked from commit 94d24e3)
As noted inline, the 'policies' field may be a list, but it expects
exactly one of two valid items.

Change-Id: I34c68df1e6330dab1524aa0abec733610211a407
Signed-off-by: Stephen Finucane <[email protected]>
Closes-Bug: #1894966
(cherry picked from commit 32c43fc)
(cherry picked from commit 781210b)
In vSphere 7.0, VirtualDevice.key values can no longer be identical,
so set a different value for each VirtualDevice.key.

Change-Id: I574ed88729d2f0760ea4065cc0e542eea8d20cc2
Closes-Bug: #1892961
(cherry picked from commit a5d153a)
(cherry picked from commit 0ea5bcc)
The 'vram' property of the 'video' device must be an integer else
libvirt will spit the dummy out, e.g.

  libvirt.libvirtError: XML error: cannot parse video vram '8192.0'

The division operator in Python 3 results in a float, not an integer
like in Python 2. Use the truncation division operator instead.

Change-Id: Iebf678c229da4f455459d068cafeee5f241aea1f
Signed-off-by: Stephen Finucane <[email protected]>
Closes-Bug: #1896496
(cherry picked from commit f2ca089)
(cherry picked from commit fd7c66f)
(cherry picked from commit 121e481)
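
The difference is easy to demonstrate; the numbers below are illustrative:

```
video_ram_mib = 8
print(video_ram_mib * 1024 * 1024 / 1024)    # 8192.0 -- rejected by libvirt
print(video_ram_mib * 1024 * 1024 // 1024)   # 8192   -- integer, accepted
```
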
When vif_type="tap" (such as when using calico),
attempting to create an instance using an image that has
the property hw_vif_multiqueue_enabled=True fails, because
the interface is always being created without multiqueue
flags.

This change checks if the property is defined and passes
the multiqueue parameter to create the tap interface
accordingly.

In case the multiqueue parameter is passed but the
vif_model is not virtio (or unspecified), the old
behavior is maintained.

Change-Id: I0307c43dcd0cace1620d2ac75925651d4ee2e96c
Closes-bug: #1893263
(cherry picked from commit 84cfc8e)
(cherry picked from commit a69845f)
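
The decision can be summarised by a small predicate like this sketch; the image property name comes from the description above, the rest is illustrative.

```
# Only request multiqueue for the tap device when the image asks for it
# and the vif model is virtio or left unspecified.
def wants_multiqueue(image_props, vif_model):
    multiqueue = bool(image_props.get('hw_vif_multiqueue_enabled'))
    return multiqueue and vif_model in (None, 'virtio')
```
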
If a compute node is backed by ceph, and the image is not clone-able
in that same ceph, nova will try to download the image from glance
and upload it to ceph itself. This is nice in that it "just works",
but it also means we store that image in ceph in an extremely
inefficient way. In a glance multi-store case with multiple ceph
clusters, the user is currently required to make sure that the image
they are going to use is stored in a backend local to the compute
node they land on, and if they do not (or can not), then nova will
do this non-COW inefficient copy of the image, which is likely not
what the operator expects.

Per the discussion at the Denver PTG, this adds a workaround flag
which allows the operators to direct nova to *not* do this behavior
and instead refuse to boot the instance entirely.

Conflicts:
    nova/conf/workarounds.py

NOTE(melwitt): The conflict is because this patch originally landed on
ussuri and change If874f018ea996587e178219569c2903c2ee923cf (Reserve
DISK_GB resource for the image cache) landed afterward and was
backported to stable/train.

Related-Bug: #1858877
Change-Id: I069b6b1d28eaf1eee5c7fb8d0fdef9c0c229a1bf
(cherry picked from commit 80191e6)
This is a follow up to change
I8e4e5afc773d53dee9c1c24951bb07a45ddc2f1a which fixed an issue with
validation when the topmost patch after a Zuul rebase is a merge
patch.

We need to also use the $commit_hash variable for the check for
stable-only patches, else it will incorrectly fail because it is
checking the merge patch's commit message.

Change-Id: Ia725346b65dd5e2f16aa049c74b45d99e22b3524
(cherry picked from commit 1e10461)
(cherry picked from commit f1e4f6b)
(cherry picked from commit e676a48)
Currently in the archive_deleted_rows code, we will attempt to clean up
"residue" of deleted instance records by assuming any table with a
'instance_uuid' column represents data tied to an instance's lifecycle
and delete such records.

This behavior poses a problem in the case where an instance has a PCI
device allocated and someone deletes the instance. The 'instance_uuid'
column in the pci_devices table is used to track the allocation
association of a PCI with an instance. There is a small time window
during which the instance record has been deleted but the PCI device
has not yet been freed from a database record perspective as PCI
devices are freed during the _complete_deletion method in the compute
manager as part of the resource tracker update call.

Records in the pci_devices table are, in any case, not tied to the
lifecycle of instances, so they should not be considered residue to
clean up when an instance is deleted. This adds a condition to avoid
archiving pci_devices on the basis of an instance association.

Closes-Bug: #1899541

Change-Id: Ie62d3566230aa3e2786d129adbb2e3570b06e4c6
(cherry picked from commit 1c256cf)
(cherry picked from commit 09784db)
(cherry picked from commit 79df36f)
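
In outline (the real code inspects SQLAlchemy table metadata), the guard amounts to:

```
# Sketch: a table counts as instance "residue" only if it has an
# instance_uuid column and is not in the exclusion list.
EXCLUDE_FROM_INSTANCE_RESIDUE = ('pci_devices',)

def is_instance_residue_table(table_name, column_names):
    return ('instance_uuid' in column_names
            and table_name not in EXCLUDE_FROM_INSTANCE_RESIDUE)
```
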
Zuul and others added 30 commits March 31, 2021 12:45
This change adds a functional regression test that
asserts the broken behavior when trying to live migrate
with a neutron backend that does not support multiple port
bindings.

Conflicts/Changes:
  nova/tests/functional/regressions/test_bug_1888395.py:
    - specify api major version to allow block_migration 'auto'
    - use TempDir fixture for instances path
    - worked around lack of create_server and start_computes in integrated
      helpers in train by inlining the behavior in setUp and test_live_migrate
    - reverted to python2 compatible super() syntax
  nova/tests/unit/virt/libvirt/fake_imagebackend.py:
    - include portion of change Ia3d7351c1805d98bcb799ab0375673c7f1cb8848
      which stubs out the is_file_in_instance_path method. That was
      included in a feature patch set so just pulling the necessary
      bit.

Change-Id: I470a016d35afe69809321bd67359f466c3feb90a
Partial-Bug: #1888395
(cherry picked from commit 71bc6fc)
(cherry picked from commit bea55a7)
In the rocky cycle nova was enhanced to support the multiple
port binding live migration workflow when neutron supports
the binding-extended API extension.
When the migration_data object was extended to support
multiple port bindings, populating the vifs field was used
as a sentinel to indicate that the new workflow should
be used.

In the train release
I734cc01dce13f9e75a16639faf890ddb1661b7eb
(SR-IOV Live migration indirect port support)
broke the semantics of the migrate_data object by
unconditionally populating the vifs field

This change restores the rocky semantics, which are depended
on by several parts of the code base, by only conditionally
populating vifs if neutron supports multiple port bindings.

Changes to patch:
  - unit/virt/libvirt/fakelibvirt.py: Include partial pick from
    change Ia3d7351c1805d98bcb799ab0375673c7f1cb8848 to add the
    jobStats, complete_job and fail_job to fakelibvirt. The full
    change was not cherry-picked as it was part of the numa aware
    live migration feature in Victoria.
  - renamed import of nova.network.neutron to
    nova.network.neutronv2.api
  - mocked nova.virt.libvirt.guest.Guest.get_job_info to return
    fakelibvirt.VIR_DOMAIN_JOB_COMPLETED
  - replaced from urllib import parse as urlparse with
    import six.moves.urllib.parse as urlparse for py2.7

Conflicts:
    nova/tests/functional/regressions/test_bug_1888395.py
    nova/tests/unit/compute/test_compute.py
    nova/tests/unit/compute/test_compute_mgr.py
    nova/tests/unit/virt/test_virt_drivers.py

Co-Authored-By: Sean Mooney <[email protected]>
Change-Id: Ia00277ac8a68a635db85f9e0ce2c6d8df396e0d8
Closes-Bug: #1888395
(cherry picked from commit b8f3be6)
(cherry picked from commit afa843c)
Bug #1894804 outlines how DEVICE_DELETED events were often missing from
QEMU on Focal-based OpenStack CI hosts, as originally seen in bug
#1882521. This was eventually tracked down to undefined QEMU behaviour
when a new device_del QMP command is received while another is still
being processed, causing the original attempt to be aborted.

We hit this race in slower OpenStack CI envs because n-cpu rather
crudely retries attempts to detach devices using the RetryDecorator from
oslo.service. The default incremental sleep time is currently tight
enough that QEMU is still processing the first device_del request on
these slower CI hosts when n-cpu asks libvirt to retry the detach,
sending another device_del to QEMU and hitting the above behaviour.

Additionally we have also seen the following check being hit when
testing with QEMU >= v5.0.0. This check now rejects overlapping
device_del requests in QEMU rather than aborting the original:

qemu/qemu@cce8944

This change aims to avoid this situation entirely by raising the default
incremental sleep time between detach requests from 2 seconds to 10,
leaving enough time for the first attempt to complete. The overall
maximum sleep time is also increased from 30 to 60 seconds.

Future work will aim to entirely remove this retry logic with a libvirt
event-driven approach, polling for the
VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED and
VIR_DOMAIN_EVENT_ID_DEVICE_REMOVAL_FAILED events before retrying.

Finally, the cleanup of unused arguments in detach_device_with_retry is
left for a follow up change in order to keep this initial change small
enough to quickly backport.

Closes-Bug: #1882521
Related-Bug: #1894804
Change-Id: Ib9ed7069cef5b73033351f7a78a3fb566753970d
(cherry picked from commit dd1e6d4)
(cherry picked from commit 4819f69)
(cherry picked from commit f32286c)
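
For reference, oslo.service's RetryDecorator is the mechanism being tuned; the sketch below shows the shape of such a retry loop with a 10-second incremental sleep and a 60-second cap. The wrapped function, retry count, callables and exception are placeholders, not nova's exact values.

```
from oslo_service import loopingcall


class DeviceNotFound(Exception):
    """Placeholder exception raised while the device is still attached."""


@loopingcall.RetryDecorator(max_retry_count=7, inc_sleep_time=10,
                            max_sleep_time=60,
                            exceptions=(DeviceNotFound,))
def detach_with_retry(detach_device, device_still_attached):
    # detach_device / device_still_attached are placeholder callables.
    detach_device()
    if device_still_attached():
        # Raising the listed exception makes RetryDecorator sleep
        # (10s, then 20s, ... capped at 60s) and invoke us again.
        raise DeviceNotFound()
```
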
Move volume_delete related logic away from this method, in order to make
it generic and usable elsewhere.

NOTE(lyarwood): Conflict caused by I52fbbcac9dc386f24ee81b3321dd0d8355e01976
landing in stable/ussuri.

Conflicts:
  nova/tests/unit/virt/libvirt/test_driver.py

Change-Id: I17357d85f845d4160cb7c7784772530a1e92af76
Related-Bug: #1732428
(cherry picked from commit ce22034)
(cherry picked from commit 2e89699)
During unshelve, the instance is spawned from the image created by
shelve, which is deleted just afterwards, while instance.image_ref still
points to the original image the instance was built from.

In a qcow2 environment this is an issue because the instance backing
file no longer matches instance.image_ref, and during live-migration or
resize the target host will fetch the image corresponding to
instance.image_ref, corrupting the instance.

This change fetches the original image and rebases the instance disk on
it. This avoids the image_ref mismatch and brings back the storage
benefit of keeping a common image in the cache.

If the original image is no longer available in glance, the backing file
is merged into the disk (flattened), ensuring instance integrity during
the next live-migration or resize operation.

NOTE(lyarwood): Test conflicts caused by If56842da51688 not being
present in stable/train.

Conflicts:
  nova/tests/unit/virt/libvirt/test_driver.py

Change-Id: I1a33fadf0b7439cf06c06cba2bc06df6cef0945b
Closes-Bug: #1732428
(cherry picked from commit 8953a68)
(cherry picked from commit 7003618)
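
Mechanically, the two outcomes map onto qemu-img rebase: rebase onto the re-fetched original image when it still exists in glance, or rebase onto an empty backing file to flatten the disk when it does not. The sketch below calls qemu-img directly for illustration; nova drives this through its own imagebackend helpers.

```
import subprocess

def restore_backing_file(disk_path, original_image_path=None):
    if original_image_path:
        # Rebase the instance disk back onto the original image.
        subprocess.check_call(['qemu-img', 'rebase', '-b',
                               original_image_path, disk_path])
    else:
        # Original image gone from glance: merge the backing file into
        # the disk (flatten) so the instance stays self-contained.
        subprocess.check_call(['qemu-img', 'rebase', '-b', '', disk_path])
```
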
In various locations we assume system_metadata.image_base_image_ref
exists, because it is set during instance creation in the
_populate_instance_for_create method.

But once an instance is rebuilt, all system_metadata image properties
are dropped and replaced by the new image's properties, without setting
image_base_image_ref back.

This change sets image_base_image_ref during rebuild.

In the specific case of shelve/unshelve with the qcow2 backend,
image_base_image_ref is used to rebase the disk image, so we ensure this
property is set, as the instance may have been rebuilt before this fix
was applied.

Related-Bug: #1732428
Closes-Bug: #1893618
Change-Id: Ia3031ea1f7db8b398f02d2080ca603ded8970200
(cherry picked from commit fe52b6c)
(cherry picked from commit 5604140)
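
The essence of the fix, sketched with a hypothetical helper (the real change happens inside the rebuild path when system metadata is regenerated):

```
def reset_base_image_ref(instance, new_image_ref):
    # Re-seed image_base_image_ref after the rebuild replaced all the
    # image_* system metadata, so later qcow2 rebases can still find
    # the backing image reference.
    instance.system_metadata['image_base_image_ref'] = new_image_ref
    instance.save()
```
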
This currently runs in the 'check' pipeline, as part of the pep8 job,
which causes otherwise perfectly valid backports to report as failing
CI. There's no reason a stable core shouldn't be encouraged to review
these patches: we simply want to prevent them *merging* before their
parent(s). Resolve this conflict by moving the check to separate voting
job in the 'gate' pipeline as well as a non-voting job in the 'check'
pipeline to catch more obvious issues.

Change-Id: Id3e4452883f6a3cf44ff58b39ded82e882e28c23
Signed-off-by: Stephen Finucane <[email protected]>
(cherry picked from commit 98b01c9)
(cherry picked from commit fef0305)
(cherry picked from commit b7677ae)
(cherry picked from commit 91314f7)
During an assisted volume snapshot delete request from Cinder, nova
removes the snapshot from the backing file chain. During that operation,
nova checks the existence of that file. However, in some cases (see the
bug report) the path is relative and therefore os.path.exists fails.

This patch makes sure that nova uses the volume absolute path to make
the backing file path absolute as well.

Closes-Bug: #1885528

Change-Id: I58dca95251b607eaff602783fee2fc38e2421944
(cherry picked from commit b933312)
(cherry picked from commit 831abc9)
(cherry picked from commit c2044d4)
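
A small sketch of the path handling (names are illustrative): resolve a relative backing file against the volume's directory before checking for its existence.

```
import os

def resolve_backing_file(volume_path, backing_file):
    if not os.path.isabs(backing_file):
        backing_file = os.path.join(os.path.dirname(volume_path),
                                    backing_file)
    return backing_file
```
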
Error-out the migrations (cold and live) whenever the
anti-affinity policy is violated. This addresses
violations when multiple concurrent migrations are
requested.

Added detection on:
- prep_resize
- check_can_live_migration_destination
- pre_live_migration

The improved method of detection now locks based on group_id
and considers other migrations in-progress as well.

Closes-bug: #1821755
Change-Id: I32e6214568bb57f7613ddeba2c2c46da0320fabc
(cherry picked from commit 33c8af1)
(cherry picked from commit 8b62a4e)
(cherry picked from commit 6ede6df)
(cherry picked from commit bf90a1e)
During the VM booting process Nova asks Neutron for the security groups
of the project. If no fields are specified, Neutron will prepare the
list of security groups with all fields, including rules. If the project
has many SGs, this may take a long time, as rules need to be loaded
separately for each SG on Neutron's side.

During booting of the VM, Nova really needs only "id" and "name" of the
security groups so this patch limits request to only those 2 fields.

This lazy loading of the SG rules was introduced in Neutron in [1] and
[2].

[1] https://review.opendev.org/#/c/630401/
[2] https://review.opendev.org/#/c/637407/

Related-Bug: #1865223
Change-Id: I15c3119857970c38541f4de270bd561166334548
(cherry picked from commit 388498a)
(cherry picked from commit 4f49545)
(cherry picked from commit f7d84db)
(cherry picked from commit be4a514)
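
Sketched against python-neutronclient (whose list_security_groups call accepts arbitrary filter and field parameters), the narrowed query looks roughly like this; the helper itself is hypothetical.

```
def get_security_group_names(neutron, project_id):
    # Ask Neutron for only the two fields boot actually needs, so it can
    # skip loading every rule of every security group.
    search_opts = {'tenant_id': project_id, 'fields': ['id', 'name']}
    sgs = neutron.list_security_groups(**search_opts)['security_groups']
    return [sg['name'] for sg in sgs]
```
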
This change modifies _get_neutron_events_for_live_migration
to filter the event to just the subset that will be sent
at plug-time.

Currently neutron has a bug whereby the dhcp agent
sends a network-vif-plugged event during live migration after
we update the port profile with "migrating-to:".
This causes a network-vif-plugged event to be sent for
configurations where vif plugging in nova/os-vif is a no-op.

When that is corrected, the current logic in nova causes the migration
to time out as it waits for an event that will never arrive.

This change filters the set of events we wait for to just the plug
time events.

Conflicts:
    nova/compute/manager.py
    nova/tests/unit/compute/test_compute_mgr.py

Related-Bug: #1815989
Closes-Bug: #1901707
Change-Id: Id2d8d72d30075200d2b07b847c4e5568599b0d3b
(cherry picked from commit 8b33ac0)
(cherry picked from commit ef348c4)
(cherry picked from commit d9c833d)
Setuptools 58.0 (bundled in virtualenv 20.8) breaks the installation of
decorator 3.4.0. So this patch pins virtualenv to avoid the break.

As the used 'require' feature was introduced in tox in version 3.2 [1],
the required minversion has to be bumped, too.

[1] https://tox.readthedocs.io/en/latest/config.html#conf-requires

Conflicts:
    tox.ini

NOTE(melwitt): The conflict is because change
Ie1a0cbd82a617dbcc15729647218ac3e9cd0e5a9 (Stop testing Python 2) is
not in Train.

Change-Id: I26b2a14e0b91c0ab77299c3e4fbed5f7916fe8cf
(cherry picked from commit b27f8e9)
NOTE(melwitt): This is the combination of two commits, the bug fix and
a follow-up change to the unit test to enable it to also run on
Python < 3.6.

Our console proxies (novnc, serial, spice) run in a websockify server
whose request handler inherits from the python standard
SimpleHTTPRequestHandler. There is a known issue [1] in the
SimpleHTTPRequestHandler which allows open redirects by way of URLs
in the following format:

  http://vncproxy.my.domain.com//example.com/%2F..

which if visited, will redirect a user to example.com.

We can intercept a request and reject requests that pass a redirection
URL beginning with "//" by implementing the
SimpleHTTPRequestHandler.send_head() method containing the
vulnerability to reject such requests with a 400 Bad Request.

This code is copied from a patch suggested in one of the issue comments
[2].

Closes-Bug: #1927677

[1] https://bugs.python.org/issue32084
[2] https://bugs.python.org/issue32084#msg306545

Conflicts:
    nova/tests/unit/console/test_websocketproxy.py

NOTE(melwitt): The conflict is because change
I23ac1cc79482d0fabb359486a4b934463854cae5 (Allow TLS ciphers/protocols
to be configurable for console proxies) is not in Train.

NOTE(melwitt): The difference from the cherry picked change:
HTTPStatus.BAD_REQUEST => 400 is due to the fact that HTTPStatus does
not exist in Python 2.7.

Reduce mocking in test_reject_open_redirect for compat

This is a followup for change Ie36401c782f023d1d5f2623732619105dc2cfa24
to reduce mocking in the unit test coverage for it.

While backporting the bug fix, it was found to be incompatible with
earlier versions of Python < 3.6 due to a difference in internal
implementation [1].

This reduces the mocking in the unit test to be more agnostic to the
internals of the StreamRequestHandler (ancestor of
SimpleHTTPRequestHandler) and work across Python versions >= 2.7.

Related-Bug: #1927677

[1] python/cpython@34eeed4

Change-Id: I546d376869a992601b443fb95acf1034da2a8f36
(cherry picked from commit 214cabe)
(cherry picked from commit 9c2f297)
(cherry picked from commit 94e265f)
(cherry picked from commit d43b88a)

Change-Id: Ie36401c782f023d1d5f2623732619105dc2cfa24
(cherry picked from commit 781612b)
(cherry picked from commit 4709256)
(cherry picked from commit 6b70350)
(cherry picked from commit 719e651)
Ie36401c782f023d1d5f2623732619105dc2cfa24 was intended
to address OSSA-2021-002 (CVE-2021-3654); however, after its
release it was discovered that the fix only worked
for URLs with 2 leading slashes or more than 4.

This change addresses the missing edge case of 3 leading slashes
and also maintains support for rejecting 2+.

Conflicts:
  nova/console/websocketproxy.py
  nova/tests/unit/console/test_websocketproxy.py

NOTE(melwitt): The conflict and difference in websocketproxy.py from
the cherry picked change: HTTPStatus.BAD_REQUEST => 400 is due to the
fact that HTTPStatus does not exist in Python 2.7. The conflict in
test_websocketproxy.py is because change
I23ac1cc79482d0fabb359486a4b934463854cae5 (Allow TLS ciphers/protocols
to be configurable for console proxies) is not in Train. The difference
in test_websocketproxy.py from the cherry picked change is due to a
difference in internal implementation [1] in Python < 3.6. See change
I546d376869a992601b443fb95acf1034da2a8f36 for reference.

[1] python/cpython@34eeed4

Change-Id: I95f68be76330ff09e5eabb5ef8dd9a18f5547866
co-authored-by: Matteo Pozza
Closes-Bug: #1927677
(cherry picked from commit 6fbd0b7)
(cherry picked from commit 47dad48)
(cherry picked from commit 9588cdb)
(cherry picked from commit 0997043)
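
Conceptually, the combined fix amounts to intercepting send_head() and rejecting any request path with two or more leading slashes before the standard SimpleHTTPRequestHandler logic can turn it into a redirect. The Python 3 sketch below shows the idea on a plain handler; nova applies it to websockify's proxy request handler and uses a literal 400 for Python 2.7 compatibility.

```
from http.server import SimpleHTTPRequestHandler


class NoOpenRedirectHandler(SimpleHTTPRequestHandler):
    def send_head(self):
        # //example.com/%2F.. (or ///, ////, ...) would otherwise be
        # rewritten into a redirect to example.com.
        if self.path.startswith('//'):
            self.send_error(400, "URI must not start with //")
            return None
        return super().send_head()
```
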
Currently neutron can report ports as having upper-case MAC addresses
when they were created like that. Meanwhile, the libvirt configuration
file always stores MACs in lower case, which leads to a KeyError when
trying to retrieve the migrate_vif.

Closes-Bug: #1945646
Change-Id: Ie3129ee395427337e9abcef2f938012608f643e1
(cherry picked from commit 6a15169)
(cherry picked from commit 63a6388)
(cherry picked from commit 6c3d5de)
(cherry picked from commit 28d0059)
(cherry picked from commit 184a3c9)
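
The normalisation is simple to illustrate; the data structure below is a stand-in for the migrate_vif lookup, not nova's exact code.

```
def build_vif_lookup(migrate_vifs):
    # Key the lookup by lower-cased MAC: libvirt stores MACs in lower
    # case even when Neutron reports them in upper case.
    return {vif['address'].lower(): vif for vif in migrate_vifs}

def find_migrate_vif(vifs_by_mac, port_mac):
    return vifs_by_mac[port_mac.lower()]
```
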