[Post 4.10.1] Bump to [email protected] #3432

Merged 5 commits on Sep 18, 2020.

2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -25,6 +25,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
### Changed

- Bumped development dependency [`[email protected]`](https://npmjs.com/package/node-fetch) in PR [#3467](https://github.com/microsoft/BotFramework-WebChat/pull/3467) by [@dependabot](https://github.com/dependabot)
- Bumped Cognitive Services Speech SDK to 1.13.1, by [@compulim](https://github.com/compulim) in PR [#3432](https://github.com/microsoft/BotFramework-WebChat/pull/3432)
- [`[email protected]`](https://npmjs.com/package/microsoft-cognitiveservices-speech-sdk)

## [4.10.1] - 2020-09-09

34 changes: 23 additions & 11 deletions __tests__/html/speechRecognition.simple.html
@@ -15,33 +15,45 @@
EventIterator,
expect,
fetchSpeechData,
fetchSpeechServicesCredentials,
float32ArraysToPcmWaveArrayBuffer,
host,
iterateAsyncIterable,
MockAudioContext,
pageObjects,
pcmWaveArrayBufferToRiffWaveArrayBuffer,
recognizeRiffWaveArrayBuffer,
shareObservable,
timeouts,
token
} = window.WebChatTest;

(async function () {
const queryParams = window.WebChatTest.parseURLParams(location.hash);
const {
sa: speechAuthorizationToken,
sr: speechRegion,
ss: speechSubscriptionKey,
t: channelType
} = queryParams;
let { sa: speechAuthorizationToken, sr: speechRegion, ss: speechSubscriptionKey, t: channelType } = queryParams;

if (!channelType) {
throw new Error('Channel type must be passed via hash, #t=dl or #t=dlspeech');
} else if (!speechRegion || !(speechAuthorizationToken || speechSubscriptionKey)) {
throw new Error(
'Speech region and authorization token or subscription key must be passed via hash, #sr=westus2&sa=a1b2c3d or #sr=westus2&ss=a1b2c3d.'
console.warn(
'Channel type is not specified, assuming Direct Line. To change channel type, pass it via hash, #t=dl or #t=dlspeech.'
);

channelType = 'dl';
} else if (channelType !== 'dl' && channelType !== 'dlspeech') {
throw new Error('Invalid channel type specified, must be either "dl" or "dlspeech".');
} else if (speechSubscriptionKey && !speechRegion) {
throw new Error('Speech region must be specified when speech subscription key is specified.');
}

if (!speechAuthorizationToken && !speechSubscriptionKey) {
console.warn('Both speech authorization token and subscription key are not passed, will fetch it from bot.');

const {
authorizationToken: fetchedSpeechAuthorizationToken,
region: fetchedSpeechRegion
} = await fetchSpeechServicesCredentials();

speechAuthorizationToken = fetchedSpeechAuthorizationToken;
speechRegion = fetchedSpeechRegion;
}

const speechCredentials = {
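For context on the new fallback path in this test: when neither `#sa` (authorization token) nor `#ss` (subscription key) is present in the URL hash, the test now obtains credentials through `fetchSpeechServicesCredentials()`. Below is a minimal sketch of what such a helper could look like, assuming the test bot exposes a token-issuing endpoint; the endpoint URL and the `{ authorizationToken, region }` response shape are assumptions for illustration, not the repo's actual helper.

```js
// Hypothetical sketch only: the endpoint URL and response shape are assumptions,
// not the actual WebChatTest helper implementation.
async function fetchSpeechServicesCredentials() {
  // Assumed: the test bot issues short-lived Cognitive Services authorization
  // tokens along with the region they are valid for.
  const res = await fetch('https://webchat-mockbot.azurewebsites.net/speechservices/token', {
    method: 'POST'
  });

  if (!res.ok) {
    throw new Error(`Failed to fetch Speech Services credentials (HTTP ${res.status}).`);
  }

  const { authorizationToken, region } = await res.json();

  return { authorizationToken, region };
}
```

A subscription key and region can still be pinned explicitly by passing `#sr=westus2&ss=...` (or an authorization token via `#sa=...`) in the hash; the validation above only falls back to fetched credentials when both are absent.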
133 changes: 121 additions & 12 deletions packages/bundle/package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion packages/bundle/package.json
@@ -42,7 +42,7 @@
"markdown-it-attrs-es5": "1.2.0",
"markdown-it-for-inline": "0.1.1",
"memoize-one": "5.1.1",
"microsoft-cognitiveservices-speech-sdk": "1.10.1",
"microsoft-cognitiveservices-speech-sdk": "1.13.1",
"prop-types": "15.7.2",
"sanitize-html": "1.27.4",
"url-search-params-polyfill": "8.1.0",
2 changes: 0 additions & 2 deletions packages/directlinespeech/__tests__/sendSpeechActivity.js
@@ -57,8 +57,6 @@ describe.each([['without internal HTTP support'], ['with internal HTTP support',

await expect(activityUtterances).resolves.toEqual(['Bellevue.']);
});


}
);

QueuedArrayBufferAudioSource.js (packages/directlinespeech)
@@ -8,7 +8,6 @@ import {
AudioStreamNodeAttachedEvent,
AudioStreamNodeAttachingEvent,
AudioStreamNodeDetachedEvent
// AudioStreamNodeErrorEvent,
} from 'microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common/AudioSourceEvents';

import { createNoDashGuid } from 'microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common/Guid';
@@ -69,23 +68,23 @@ class QueuedArrayBufferAudioSource {
return this._id;
}

// Returns an IAudioSourceNode asynchronously.
// Reference at node_modules/microsoft-cognitiveservices-speech-sdk/distrib/es2015/src/common/IAudioSource.d.ts
attach(audioNodeId) {
this.onEvent(new AudioStreamNodeAttachingEvent(this._id, audioNodeId));

return this.upload(audioNodeId).onSuccessContinueWith(streamReader => {
return this.upload(audioNodeId).onSuccessContinueWith(stream => {
this.onEvent(new AudioStreamNodeAttachedEvent(this._id, audioNodeId));

return {
detach: () => {
streamReader.close();

delete this._streams[audioNodeId];

this.onEvent(new AudioStreamNodeDetachedEvent(this._id, audioNodeId));
this.turnOff();
},
id: () => audioNodeId,
read: () => streamReader.read()
read: stream.read.bind(stream)
};
});
}
@@ -108,6 +107,7 @@ class QueuedArrayBufferAudioSource {
return PromiseHelper.fromResult(true);
}

// Creates a new Stream with bytes from the first queued ArrayBuffer.
upload(audioNodeId) {
return this.turnOn().onSuccessContinueWith(() => {
const stream = new Stream(audioNodeId);
@@ -126,9 +126,10 @@ });
});
}

// Stream will only close the internal stream writer.
stream.close();

return stream.getReader();
return stream;
});
}

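For readers less familiar with the SDK's internal audio-source contract, the net effect of the changes above is that with 1.13.1 `attach()` hands back a node whose `read()` is bound directly to the internal `Stream`; there is no longer a separate `StreamReader` to close on detach. A minimal consumer sketch follows, assuming an already-constructed `QueuedArrayBufferAudioSource` with at least one `ArrayBuffer` queued; the chunk shape (`isEnd`, `buffer`) and the `onSuccessContinueWith` chaining on `read()` are assumptions inferred from the adapter code above, not from SDK documentation.

```js
// Minimal sketch, not part of the PR. Assumes `audioSource` is a constructed
// QueuedArrayBufferAudioSource with at least one ArrayBuffer queued, and that
// read() resolves to a chunk shaped like { isEnd, buffer }.
import { createNoDashGuid } from 'microsoft-cognitiveservices-speech-sdk/distrib/lib/src/common/Guid';

function logFirstChunk(audioSource) {
  const audioNodeId = createNoDashGuid();

  audioSource.attach(audioNodeId).onSuccessContinueWith(audioNode => {
    audioNode.read().onSuccessContinueWith(chunk => {
      if (!chunk.isEnd) {
        console.log(`Read ${chunk.buffer.byteLength} bytes from audio node ${audioNode.id()}.`);
      }

      // With 1.13.1, detach() only drops the per-node bookkeeping and turns the
      // source off; there is no separate StreamReader to close anymore.
      audioNode.detach();
    });
  });
}
```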