Prepare GA Branch #30542

Merged
38 changes: 21 additions & 17 deletions sdk/communication/azure-communication-callautomation/CHANGELOG.md
@@ -1,19 +1,23 @@
 # Release History

-## 1.0.0b1 (Unreleased)
-Call Automation enables developers to build call workflows. Personalise customer interactions by listening to call events and take actions based on your business logic.
-
-### Features Added
-- Create outbound calls to an Azure Communication Service user or a phone number.
-- Answer/Redirect/Reject incoming call from an Azure Communication Service user or a phone number.
-- Transfer the call to another participant.
-- List, add or remove participants from the call.
-- Hangup or terminate the call.
-- Play audio files to one or more participants in the call.
-- Recognize incoming DTMF in the call.
-- Record calls with option to start/resume/stop.
-- Record mixed and unmixed audio recordings.
-- Download recordings.
-- Parse various events happening in the call, such as CallConnected and PlayCompleted event.
-- Start/Stop continuous DTMF recognition by subscribing/unsubscribing to tones.
-- Send DTMF tones to a participant in the call.
+## 1.0.0 (2023-05-29)
+Call Automation enables developers to build call workflows. Personalise customer interactions by listening to call events and take actions based on your business logic. For more information, please see the [README][read_me].
+
+### Features Added
+- Create outbound calls to an Azure Communication Service user or a phone number.
+- Answer/Redirect/Reject incoming call from an Azure Communication Service user or a phone number.
+- Transfer the call to another participant.
+- List, add or remove participant from the call.
+- Hangup or terminate the call.
+- Play audio files to one or more participants in the call.
+- Recognize incoming DTMF in the call.
+- Record calls with option to start/resume/stop.
+- Record mixed and unmixed audio recordings.
+- Download recordings.
+
+<!-- LINKS -->
+[read_me]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/communication/Azure.Communication.CallAutomation/README.md
+[Overview]: https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/call-automation
+[Demo Video]: https://ignite.microsoft.com/sessions/14a36f87-d1a2-4882-92a7-70f2c16a306a
+[Incoming Call Concept]: https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/incoming-call-notification
+[Build a customer interaction workflow using Call Automation]: https://learn.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions
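The GA feature list above maps onto a small client surface. Below is a minimal sketch, assuming a connection-string client and placeholder identities and URLs; the `CallInvite` and `create_call` shapes follow the client changes further down in this PR, so exact signatures should be verified against the released package.

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    CallInvite,
    CommunicationUserIdentifier,
)

# Placeholder values; replace with a real ACS connection string, user id and a
# publicly reachable callback endpoint.
client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# Create an outbound call to an ACS user; call events are posted to the callback URL.
call_props = client.create_call(
    CallInvite(CommunicationUserIdentifier("<acs-user-id>")),
    "https://contoso.example.com/api/callbacks",
)
print(call_props.call_connection_id)
```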
Original file line number Diff line number Diff line change
@@ -17,7 +17,6 @@
     AddParticipantResult,
     RemoveParticipantResult,
     TransferCallResult,
-    MediaStreamingConfiguration,
     ChannelAffinity
 )
 from ._shared.models import (
@@ -35,11 +34,7 @@
     RecordingContent,
     RecordingChannel,
     RecordingFormat,
-    RecordingStorage,
     RecognizeInputType,
-    MediaStreamingAudioChannelType,
-    MediaStreamingContentType,
-    MediaStreamingTransportType,
     DtmfTone,
     CallConnectionState,
     RecordingState
@@ -63,7 +58,6 @@
     "AddParticipantResult",
     "RemoveParticipantResult",
     "TransferCallResult",
-    "MediaStreamingConfiguration",

     # common ACS communication identifier
     "CommunicationIdentifier",
@@ -80,11 +74,7 @@
     "RecordingContent",
     "RecordingChannel",
     "RecordingFormat",
-    "RecordingStorage",
     "RecognizeInputType",
-    "MediaStreamingAudioChannelType",
-    "MediaStreamingContentType",
-    "MediaStreamingTransportType",
     "DtmfTone",
     "CallConnectionState",
     "RecordingState"
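One way to read the export changes above: the media-streaming and external-storage types leave the package's public surface, while the GA enums remain importable. A small sketch of that effect (no service call needed):

```python
# GA exports retained by the diff above.
from azure.communication.callautomation import (
    RecordingContent,
    RecordingChannel,
    RecordingFormat,
    RecognizeInputType,
    DtmfTone,
    CallConnectionState,
    RecordingState,
)

# Types removed above are no longer part of the public API after this change.
try:
    from azure.communication.callautomation import MediaStreamingConfiguration
except ImportError:
    print("MediaStreamingConfiguration is not exported in the GA package")
```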
@@ -7,6 +7,6 @@
 from azure.core import CaseInsensitiveEnumMeta

 class ApiVersion(str, Enum, metaclass=CaseInsensitiveEnumMeta):
-    V2023_01_15_PREVIEW = "2023-01-15-preview"
+    V2023_03_06 = "2023-03-06"

-DEFAULT_VERSION = ApiVersion.V2023_01_15_PREVIEW.value
+DEFAULT_VERSION = ApiVersion.V2023_03_06.value
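With this change the client falls back to service version 2023-03-06 when no `api_version` is supplied. A sketch of pinning it explicitly at construction time, assuming key-based auth is accepted here as in other ACS clients; the endpoint and key are placeholders.

```python
from azure.communication.callautomation import CallAutomationClient
from azure.core.credentials import AzureKeyCredential

client = CallAutomationClient(
    "https://<resource-name>.communication.azure.com",
    AzureKeyCredential("<access-key>"),
    api_version="2023-03-06",  # optional; omitting it falls back to DEFAULT_VERSION
)
```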
@@ -19,8 +19,7 @@
     AnswerCallRequest,
     RedirectCallRequest,
     RejectCallRequest,
-    StartCallRecordingRequest,
-    CustomContext
+    StartCallRecordingRequest
 )
 from ._models import (
     CallConnectionProperties,
@@ -38,8 +37,7 @@
 from ._models import (
     CallInvite,
     ServerCallLocator,
-    GroupCallLocator,
-    MediaStreamingConfiguration
+    GroupCallLocator
 )
 from azure.core.credentials import (
     TokenCredential,
@@ -54,8 +52,7 @@
     CallRejectReason,
     RecordingContent,
     RecordingChannel,
-    RecordingFormat,
-    RecordingStorage
+    RecordingFormat
 )
 from azure.core.exceptions import HttpResponseError

@@ -96,12 +93,13 @@ def __init__(
parsed_url = urlparse(endpoint.rstrip('/'))
if not parsed_url.netloc:
raise ValueError(f"Invalid URL: {format(endpoint)}")

self._client = AzureCommunicationCallAutomationService(
endpoint,
api_version=api_version or DEFAULT_VERSION,
credential=credential,
authentication_policy=get_authentication_policy(
endpoint, credential),
endpoint, credential),
sdk_moniker=SDK_MONIKER,
**kwargs)

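The `netloc` check in the constructor above relies on `urlparse` only producing a network location when a scheme is present, so a bare hostname is rejected. A standard-library sketch of that behaviour:

```python
from urllib.parse import urlparse

print(urlparse("https://contoso.communication.azure.com/".rstrip("/")).netloc)
# 'contoso.communication.azure.com'

print(urlparse("contoso.communication.azure.com").netloc)
# '' -> the constructor raises ValueError("Invalid URL: ...") for this input
```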
@@ -155,8 +153,6 @@ def create_call(
callback_url: str,
*,
operation_context: Optional[str] = None,
media_streaming_configuration: Optional['MediaStreamingConfiguration'] = None,
azure_cognitive_services_endpoint_url: Optional[str] = None,
**kwargs
) -> CallConnectionProperties:
"""Create a call connection request to a target identity.
Expand All @@ -167,38 +163,23 @@ def create_call(
:type callback_url: str
:keyword operation_context: Value that can be used to track the call and its associated events.
:paramtype operation_context: str
:keyword media_streaming_configuration: Media Streaming Configuration.
:paramtype media_streaming_configuration: ~azure.communication.callautomation.MediaStreamingConfiguration
:keyword azure_cognitive_services_endpoint_url:
The identifier of the Cognitive Service resource assigned to this call.
:paramtype azure_cognitive_services_endpoint_url: str
:return: CallConnectionProperties
:rtype: ~azure.communication.callautomation.CallConnectionProperties
:raises ~azure.core.exceptions.HttpResponseError:
"""
user_custom_context = CustomContext(
voip_headers=target_participant.voip_headers,
sip_headers=target_participant.sip_headers
) if target_participant.sip_headers or target_participant.voip_headers else None
create_call_request = CreateCallRequest(
targets=[serialize_identifier(target_participant.target)],
callback_uri=callback_url,
source_caller_id_number=serialize_phone_identifier(
target_participant.source_caller_id_number) if target_participant.source_caller_id_number else None,
source_display_name=target_participant.source_display_name,
source_identity=serialize_communication_user_identifier(
source=serialize_communication_user_identifier(
self.source_identity) if self.source_identity else None,
operation_context=operation_context,
media_streaming_configuration=media_streaming_configuration.to_generated(
) if media_streaming_configuration else None,
azure_cognitive_services_endpoint_url=azure_cognitive_services_endpoint_url,
custom_context=user_custom_context
)

result = self._client.create_call(
create_call_request=create_call_request,
repeatability_first_sent=get_repeatability_timestamp(),
repeatability_request_id=get_repeatability_guid(),
**kwargs)

return CallConnectionProperties._from_generated(# pylint:disable=protected-access
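After this change `create_call` keeps only `operation_context` as an optional keyword; the media-streaming and Cognitive Services keywords are gone. A hedged usage sketch with placeholder numbers; the `CallInvite(..., source_caller_id_number=...)` keyword is assumed from the attributes read in the body above.

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    CallInvite,
    PhoneNumberIdentifier,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# Outbound PSTN call: the caller id must be a number owned by the ACS resource.
call_props = client.create_call(
    CallInvite(
        PhoneNumberIdentifier("+15551234567"),
        source_caller_id_number=PhoneNumberIdentifier("+15557654321"),
    ),
    "https://contoso.example.com/api/callbacks",
    operation_context="outbound-campaign-42",
)
```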
@@ -213,10 +194,6 @@ def create_group_call(
source_caller_id_number: Optional['PhoneNumberIdentifier'] = None,
source_display_name: Optional[str] = None,
operation_context: Optional[str] = None,
media_streaming_configuration: Optional['MediaStreamingConfiguration'] = None,
azure_cognitive_services_endpoint_url: Optional[str] = None,
sip_headers: Optional[Dict[str, str]] = None,
voip_headers: Optional[Dict[str, str]] = None,
**kwargs
) -> CallConnectionProperties:
"""Create a call connection request to a list of multiple target identities.
@@ -234,36 +211,20 @@
:paramtype source_display_name: str
:keyword operation_context: Value that can be used to track the call and its associated events.
:paramtype operation_context: str
:keyword media_streaming_configuration: Media Streaming Configuration.
:paramtype media_streaming_configuration: ~azure.communication.callautomation.MediaStreamingConfiguration
:keyword azure_cognitive_services_endpoint_url:
The identifier of the Cognitive Service resource assigned to this call.
:paramtype azure_cognitive_services_endpoint_url: str
:keyword sip_headers: Sip Headers for PSTN Call
:paramtype sip_headers: Dict[str, str]
:keyword voip_headers: Voip Headers for Voip Call
:paramtype voip_headers: Dict[str, str]
:return: CallConnectionProperties
:rtype: ~azure.communication.callautomation.CallConnectionProperties
:raises ~azure.core.exceptions.HttpResponseError:
"""
user_custom_context = CustomContext(
voip_headers=voip_headers, sip_headers=sip_headers) if sip_headers or voip_headers else None

create_call_request = CreateCallRequest(
targets=[serialize_identifier(identifier)
for identifier in target_participants],
callback_uri=callback_url,
source_caller_id_number=serialize_phone_identifier(
source_caller_id_number) if source_caller_id_number else None,
source_display_name=source_display_name,
source_identity=serialize_identifier(
source=serialize_identifier(
self.source_identity) if self.source_identity else None,
operation_context=operation_context,
media_streaming_configuration=media_streaming_configuration.to_generated(
) if media_streaming_configuration else None,
azure_cognitive_services_endpoint_url=azure_cognitive_services_endpoint_url,
custom_context=user_custom_context,
)

result = self._client.create_call(
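`create_group_call` similarly loses the media-streaming and SIP/VoIP header keywords; the remaining keywords match the docstring above. A sketch with placeholder identities and URLs:

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    CommunicationUserIdentifier,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# Create a call with multiple ACS users; events are posted to the callback URL.
group_call = client.create_group_call(
    [
        CommunicationUserIdentifier("<acs-user-id-1>"),
        CommunicationUserIdentifier("<acs-user-id-2>"),
    ],
    "https://contoso.example.com/api/callbacks",
    source_display_name="Contoso Support",
    operation_context="group-call-demo",
)
```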
@@ -281,8 +242,6 @@ def answer_call(
incoming_call_context: str,
callback_url: str,
*,
media_streaming_configuration: Optional['MediaStreamingConfiguration'] = None,
azure_cognitive_services_endpoint_url: Optional[str] = None,
operation_context: Optional[str] = None,
**kwargs
) -> CallConnectionProperties:
@@ -294,11 +253,6 @@
:type incoming_call_context: str
:param callback_url: The call back url for receiving events.
:type callback_url: str
:keyword media_streaming_configuration: Media Streaming Configuration.
:paramtype media_streaming_configuration: ~azure.communication.callautomation.MediaStreamingConfiguration
:keyword azure_cognitive_services_endpoint_url:
The endpoint url of the Azure Cognitive Services resource attached.
:paramtype azure_cognitive_services_endpoint_url: str
:keyword operation_context: The operation context.
:paramtype operation_context: str
:return: CallConnectionProperties
@@ -308,10 +262,7 @@
answer_call_request = AnswerCallRequest(
incoming_call_context=incoming_call_context,
callback_uri=callback_url,
media_streaming_configuration=media_streaming_configuration.to_generated(
) if media_streaming_configuration else None,
azure_cognitive_services_endpoint_url=azure_cognitive_services_endpoint_url,
answered_by_identifier=serialize_communication_user_identifier(
answered_by=serialize_communication_user_identifier(
self.source_identity) if self.source_identity else None,
operation_context=operation_context
)
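`answer_call` now takes only `operation_context` beyond its required arguments. A sketch, assuming the `incoming_call_context` string is taken from the IncomingCall Event Grid notification; other values are placeholders.

```python
from azure.communication.callautomation import CallAutomationClient

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# incoming_call_context is delivered in the IncomingCall event payload.
call_props = client.answer_call(
    incoming_call_context="<incoming-call-context-from-event>",
    callback_url="https://contoso.example.com/api/callbacks",
)
```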
@@ -343,15 +294,9 @@ def redirect_call(
:rtype: None
:raises ~azure.core.exceptions.HttpResponseError:
"""
user_custom_context = CustomContext(
voip_headers=target_participant.voip_headers,
sip_headers=target_participant.sip_headers
) if target_participant.sip_headers or target_participant.voip_headers else None

redirect_call_request = RedirectCallRequest(
incoming_call_context=incoming_call_context,
target=serialize_identifier(target_participant.target),
custom_context=user_custom_context
)

self._client.redirect_call(
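`redirect_call` keeps its two required arguments once the custom-context forwarding above is removed. A sketch with placeholder values:

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    CallInvite,
    CommunicationUserIdentifier,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# Redirect the unanswered inbound call to another ACS user.
client.redirect_call(
    "<incoming-call-context-from-event>",
    CallInvite(CommunicationUserIdentifier("<target-acs-user-id>")),
)
```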
@@ -400,9 +345,7 @@ def start_recording(
recording_channel_type: Optional[Union[str, 'RecordingChannel']] = None,
recording_format_type: Optional[Union[str, 'RecordingFormat']] = None,
audio_channel_participant_ordering: Optional[List['CommunicationIdentifier']] = None,
recording_storage_type: Optional[Union[str, 'RecordingStorage']] = None,
channel_affinity: Optional[List['ChannelAffinity']] = None,
external_storage_location: Optional[str] = None,
**kwargs
) -> RecordingProperties:
"""Start recording for a ongoing call. Locate the call with call locator.
@@ -425,17 +368,11 @@
which participant first audio was detected.
Channel to participant mapping details can be found in the metadata of the recording.
:paramtype audio_channel_participant_ordering: list[~azure.communication.callautomation.CommunicationIdentifier]
:keyword recording_storage_type: Recording storage mode.
``External`` enables bring your own storage.
:paramtype recording_storage_type: str
:keyword channel_affinity: The channel affinity of call recording
When 'recordingChannelType' is set to 'unmixed', if channelAffinity is not specified,
'channel' will be automatically assigned.
Channel-Participant mapping details can be found in the metadata of the recording.
:paramtype channel_affinity: list[~azure.communication.callautomation.ChannelAffinity]
:keyword external_storage_location: The location where recording is stored,
when RecordingStorageType is set to 'BlobStorage'.
:paramtype external_storage_location: str or ~azure.communication.callautomation.RecordingStorage
:return: RecordingProperties
:rtype: ~azure.communication.callautomation.RecordingProperties
:raises ~azure.core.exceptions.HttpResponseError:
@@ -455,8 +392,6 @@
recording_channel_type = recording_channel_type,
recording_format_type = recording_format_type,
audio_channel_participant_ordering = audio_channel_participant_ordering,
recording_storage_type = recording_storage_type,
external_storage_location = external_storage_location,
channel_affinity = channel_affinity_internal,
repeatability_first_sent=get_repeatability_timestamp(),
repeatability_request_id=get_repeatability_guid()
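With `recording_storage_type` and `external_storage_location` removed, `start_recording` is driven by the call locator plus the channel/format keywords that remain in the signature above. A sketch, assuming `ServerCallLocator` takes the server call id positionally and that the returned `RecordingProperties` exposes `recording_id`; all values are placeholders.

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    RecordingChannel,
    RecordingFormat,
    ServerCallLocator,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# Start an unmixed WAV recording for an ongoing call identified by its server call id.
recording = client.start_recording(
    ServerCallLocator("<server-call-id>"),
    recording_channel_type=RecordingChannel.UNMIXED,
    recording_format_type=RecordingFormat.WAV,
)
print(recording.recording_id)
```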